Dataset schema (column, type, observed range):
  pred_label        string, 2 classes (__label__cc / __label__wiki)
  pred_label_prob   float64, 0.5 - 1
  wiki_prob         float64, 0.25 - 1
  text              string, length 38 - 995k
  source            string, length 39 - 45
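Each record below pairs a fastText-style label with its probability, a wiki-likeness probability, the raw text, and a source path. In the rows that follow, wiki_prob equals pred_label_prob when the predicted label is __label__wiki and 1 - pred_label_prob when it is __label__cc, consistent with wiki_prob always being the probability of the wiki class. A minimal sketch of filtering such records (field names follow the schema above; the two sample rows reuse label/probability values from records in this file, with placeholder text and source strings):

```python
# Sketch only: assumes records are dicts with the schema's field names.
records = [
    {"pred_label": "__label__cc", "pred_label_prob": 0.740375,
     "wiki_prob": 0.259625, "text": "...", "source": "cc/..."},
    {"pred_label": "__label__wiki", "pred_label_prob": 0.617551,
     "wiki_prob": 0.617551, "text": "...", "source": "cc/..."},
]

def wiki_like(rows, threshold=0.5):
    """Keep rows the classifier scored as more Wikipedia-like than not."""
    return [r for r in rows if r["wiki_prob"] >= threshold]

kept = wiki_like(records)
print(len(kept))                  # prints 1
print(kept[0]["pred_label"])      # prints __label__wiki
```

The threshold of 0.5 is an illustrative choice, not something stated by the dump itself.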
__label__cc
0.740375
0.259625
the from the block blog: Sex. Pop Culture. Rampant Self-Absorption. So, basically, just like every single other blog on the interwebs ever. FASHIONCAP – The Emmys 2011 tags: 500 days of summer, amy poehler, ariel winter, brittany s pierce, christina hendricks, cobie smulders, elisabeth moss, emmy awards, emmys 2011, evan rachel wood, giuliana rancic, gwyneth paltrow, heather morris, heidi klum coral dress, hemo, jane krakowski, julianna margulies, julie bowen, katie holmes, kelly osbourne, kristen wiig, martha plimpton, muffin top, olivia munn, padma lakshmi, paz de la huerta, raising hope, robin sparkles, sofia vergara, zooey deschanel And, just like that, The Emmy Awards have been and gone for another year, washed away in a sea of (pleasantly) surprising award wins and yawn-inducing fashion choices. Really, all that is left to be said is – THANK GOD FOR GWYNETH PALTROW. Seriously, her modern Indian Sari for Anorexics meets Ethnic Cheerleader from Hell ensemble was the highlight of the evening. Don’t get me wrong, it was dreadful dreadful and not in a fun Sharon Stone kind of way, but still. It was interesting and Lord knows the evening needed more of it. So, keeping that in mind, here is a rundown of the best and worst of the night, with bonus points awarded for a sense of Flair, Fun or Intentional Absurdity: Oh Padma – what happened? Was the James Bond convention next door and this was your Goldfinger tribute? There’s a BIG difference between wanting to win a little gold man and dressing like one. 
Seriously, how the hell did such a beautiful woman and such a beautiful dress die so catastrophically in an explosion of monochromatic awfulness? And what’s with all the oil? You look like a cross between a Wrestler and an extra from an 80’s sunscreen commercial. One of the basic rules of http://www.glennyfromtheblock.com is Rule #7 – Kristen Wiig can do no wrong (see also – The Jane Krakowski Principle). But this dress looks like someone took a dump on a tablecloth and proceeded to tie-dye it. Even then, it would have worked on a different, less greasy looking hair colour. And the train looks like something Augustus Gloop would try diving into. Actually, maybe that’s not such a bad thing.. Can’t wait to hear what the Fashion Police have to say about this tomorrow. Seriously though, did I miss the episode where Giuliana professed to an eating disorder? Because she’s seriously rocking some Madonna arms here and, no offense to the former Material Girl, but that’s officially what we call ‘not a good thing’. Red was obviously the color du jour but this is a prime example of a dress matching too closely with its surroundings (see also – Julianna Margulies). It actually looks like she’s being swallowed up whole by some sort of parasitic red carpet fungus and, by the looks of her, clearly she’s not much of a meal. Also, the two-tone hair just looks like she glued the wrong colored piece in by accident. Probably my second favourite color to be sported all evening (second only to the amazing blue rocked by my beloved Robin Sparkles later down the entry) but this whole entire outfit is ALL. WRONG. Seriously, who knew Olivia Munn could look awful? They say the camera adds 10 pounds, but that loose fitting flap of green fabric over her chest adds at least 35. Which is a shame, because the rest of the dress has that perfect amount of va-va-voom that so few women in Hollywood seem to be able to pull off these days. The hair looks great though. 
Frankly, I love living in a world where Gwyneth Paltrow feels empowered to do whatever the heck she wants. Because otherwise we’d be deprived of such amazing, amazing things like her genius guest turn on Glee (CONGRATS on the Emmy, btw), her recent cookbook or, you know, The World According to Goop. So I just generally like to let Gwynny-Gwyn run free and do what she likes and just enjoy watching the pieces fall where they may. But there is no denying that she looks like she’s wearing a Dreamcatcher. Or some high fashion negligee from a factory outlet in Mumbai. In other words, it’s kind of awful. And anything that manages to make Miss Macrobiotic 2011 look like she has a muffin top is seriously FUBAR’d. As Prince once sang “Let’s party like it’s 1983..” Or something. But for real, this looks like it’s straight out of a Las Vegas revue of Dynasty. Or someone menstruated on a shower curtain. Definitely one of those two things – VOTE BELOW! Poor Melissa McCarthy. On one hand, she’s just come off the back of the Bridesmaids juggernaut and last night’s Emmy win. But then we have THIS. Now, women of a certain build generally get a lot of leeway in these things because, seriously, you know there are major problems with finding sartorial selections when someone like Christina Hendricks can’t find a dress that fits in this city. But this matronly navy blue number looks like you just threw on the first patchwork quilt you could find and walked out the door. If Adele has given us nothing else this year, it’s the knowledge that women of any age, shape or size can look damned sexy with the right styling. Try harder. (PS – LOVE YOU). As much as I feel uncomfortable making fun of plus-sized people in a fashion article, it’s even worse making fun of children. But, if I’m willing to give them first place when the situation calls for it, I have to be willing to swing the other way (one of the few times in life I’m willing to swing both ways). 
And this outfit is just AWFUL on so many levels. Firstly, it looks like Britney Spears wearing a doily circa the …Baby One More Time era. And I’m sorry, but what kind of parent lets a 13 year old have her boobs hanging out like that? It’s kind of inappropriate. Look, 13 year old Glenn was pretty much still rocking Flannel and Hypercolor T-Shirts at that age, but the biggest events he had to go to were statewide spelling bees. PARENTAL FAIL. JULIANNA MARGULIES Oh hey – it’s Casper The Friendly Ghost! In a Bridal Gown. Trying to camouflage in with the wall. NEXT. This one has moved further and further down the list with each passing dress. At first, it was pretty much a contender for Worst Dress of the Night. But there is something about it that is so delightfully absurd. Maybe it’s the way Heidi Klum wears it with such confidence and panache. Maybe it’s the fact that it’s so exquisitely made. Sure, it looks like a cross between a piece of coral and an upside-down Cauliflower, but it’s almost less dress than pure art, and props for that. On a night where everyone else wore the couture equivalent of beige, it’s nice to have a bit of fun. If the waistline was just a bit lower and the top lengthened, we could be looking at the best dress of the night. Oh well, there’s always next year.. PAZ DE LA HUERTA Now we’ve reached the official part of the evening where I like the dresses. Unfortunately, that’s the nicest thing I can say about the Boardwalk Empire star’s inconsistent ensemble. On one hand, we have the dress, which is pretty much STUNNING but, then, we have awful earrings, wind tunnel hair and mirrorball stilettos. Also, Paz – you look like you haven’t taken a shower in a month. Seriously, I thought you were Ke$ha. Hopefully she doesn’t read this, I don’t want to get beaten up like I’m some B-Grade reality TV star. *runs and hides* This is what happens when you let the cast of Jersey Shore do your makeup. 
Seriously, great dress, but she looks like she’s been Tango’d.. I’d expect this kind of skin tone shenanigans from Jenna Maroney, but not Jane. Another great dress, shame about the styling. Actually, the styling is mostly fine, except for the fact that Poehler is kind of rocking her Leslie Knope Power Lesbian hair from season one of Parks and Recreation. Maybe she came in character? Almost expected the lovely Rashida Jones to be hanging off her arm. Little Brittany S Pierce is all grown up! Lea Michele and Dianna Agron got all the attention on the night, but this is one of the two Glee fashion highlights in my opinion. Very few women can pull off gun metal grey and HeMo does it with aplomb. Same goes for the ruffles, this dress is worn so elegantly and gracefully while still being very much individual and fashion forward. UGH. If I were a woman, I’d hate Cat Deeley, because she just looks beautiful in EVERYTHING. Wearing what appears to be a piece of tulle that has been attacked by shotgun shrapnel and gold silkworms, Deeley looks effortlessly cool, sunkissed and chic. It’s like she dressed in character! Sometimes you just wonder where Emma Pillsbury ends and Jayma Mays begins, especially with Ryan Murphy’s famed habit of writing characters based on his intended actors (see also – The ‘Lea Michele is Really a Narcissistic Bitch’ Files). But this is just adorable. Sure, it borders on being more suitable for Ariel Winter, but if anyone over the age of 15 can pull off girly and fun, it’s Mays. This gown is just gorgeous and the detailing is exquisite. I’m surprised that the reaction in the media to it has been so muted, it’s so classic and glamourous while still having a slightly modern, youthful touch. Also, if you’re going to wear a dress that is identical to your skin tone, THIS is how you do it. Padma Lakshmi and Julianna Margulies, take note! Old Hollywood Glamour at its best. 
Well, except for several of the ladies still listed below… My favourite shade of the night – just GORGEOUS. There’s still something I can’t quite put my finger on that is slightly clashing – either the hair colour or the make-up – but there is no denying how hot the color of the dress is and how beautiful the cut. Robin Sparkles has come a long way baby! Hmmm, this copped a lot of flak in the media but, personally, I think little Joey Potter looks amazing. It strikes that right combination of effortless modern cool with its own fashionable yet distinct look. Kind of reminds me of when Sharon Stone wore a Gap Tee to the Oscars one year – simple yet beautiful. Great color and the hair is fantastic. MARTHA PLIMPTON This is just PERFECT. Hard to believe that this is the same girl from The Goonies. Also, if you’re not watching Raising Hope – you’re a moron. Well, only if you ignore this and continue not watching it. Plimpton is amazing. It’s hard in Hollywood for women with curves to not fall into the trap of wearing the exact same vavavoom gowns every single occasion. But this dusky peach number is just breathtaking. With curves like that, it’d be criminal not to keep them so sleekly and tightly wrapped. WOW! If Sofia Vergara sometimes has to be careful not to become a red carpet one trick pony, then Christina Hendricks needs to be careful that she doesn’t become a supporting player to her infamous cleavage. While sometimes her shape causes her to look like a peasant wench from the 1700’s by default, this dress strikes the perfect balance of sexiness and restraint. Absolutely beautiful beading and detailing, this is one bottle of peroxide away from being Marilyn Monroe incarnate. I have the sneaking suspicion that love for Deschanel’s gown is in direct proportion to one’s love for her specific brand of adorable pixie magic. 
And, since we firmly believe that the 500 Days of Summer star is the most adorable thing this side of freshly washed Labrador puppies, this outfit is being filed straight under AMAZING. Really though, it’s just a simple beautiful dress worn without any drama, highlighting the beautiful shape and the simple red ribbon belt. Beautiful. Just Perfect. In fact, if it weren’t for Miss Osbourne below, this easily could have been the number of the night. Let’s be honest, if you’re going to host a show called Fashion Police, you’d better bring your A-Game to the red carpet. And Kelly did, looking just fabulous in this gorgeous maroon fish-tailed 1950’s throwback. Everything, from the fit to the light purple tinge to the hair, is just perfect. This was actually the very first outfit I saw yesterday when the red carpet pictures started coming in and it was the clear number one the whole way through. FLAWLESS.
cc/2019-30/en_middle_0023.json.gz/line1250
__label__cc
0.583095
0.416905
Listening Woman Hillerman, Tony Publisher: New York : Penzler, c1978. Characteristics: vii, 316 p. ; 23 cm. raydat51 Jan 02, 2013 Mr. Hillerman's quiet regard for the Navajo way of life and thought comes through strongly. If his characters were less involving it might not work but, thankfully, they ARE involving and interesting. You want to know what makes them tick, particularly Leaphorn. His scenes of Navajo social interactions are fascinating and the whole thing is why I keep coming back to this series. He is a gifted writer. RichardPaul Jul 31, 2011 Listening Woman ---- by Tony Hillerman c - 1993 LP/ read reg. print pbk. c - 19?? ---- Good story with a mix of humor and Mystery ---- Enjoy! ---- RichardPaul hermlou May 09, 2011 Navajo Joe Leaphorn is a policeman trying to solve three mysteries: a helicopter which vanished, the disappearance of a bank robber and his dog, and the murder of two people. By the end of the book he has solved all three. Hillerman manages to explain the Navajo ways without boring or insulting us, and he uses suspense in the second half of the book to keep us reading to the last word. A good series. FatCat22 Jan 26, 2011 As usual, Joe Leaphorn's respect for others brings him clues that blow past the impatient Feds. Hillerman's own respect for others is reflected in his representation of each of his characters. Even the bad guys are "real" people. This is a refreshing change of pace from the hate-filled pages of many murder mysteries. Absolutely loved it. Am anticipating enjoying the rest of the books in the series. FavouriteFiction Oct 31, 2009 The brutal murders of an old man and a teenage girl lead Lieutenant Joe Leaphorn to a blind Navajo Listening Woman, who speaks of ghosts and witches. Navajo Indians — Fiction. Mystery — Fiction.
cc/2019-30/en_middle_0023.json.gz/line1251
__label__wiki
0.617551
0.617551
Computer-Assisted Language Learning: Context and Conceptualization. A Clarendon Press Publication. 320 pages | 17 b/w figures, 17 tables. So far, the development of Computer-Assisted Language Learning (CALL) has been fragmented. In these pages, Michael Levy sets CALL in its proper historical and interdisciplinary contexts, providing a comprehensive overview of the topic. Drawing on published work as well as an international survey among CALL practitioners in eighteen countries, he looks at the relationship between CALL's theory and application, its conceptual and practical roles as tutor and tool. Levy also discusses CALL's implications for computer programming. Most books on CALL focus on specific projects, and do so mainly from a theoretical point of view, but this unique text considers CALL as a whole, analyzing the utility of the computer in language learning and teaching. A detailed review of the current literature is matched with an in-depth examination of the tutor-tool framework. An ideal introduction to the procedure and performance of CALL as a multi-faceted reflection of today's ever-evolving technology, Levy's study will appeal to students, researchers, and teachers of Applied Linguistics. Michael Levy is a Lecturer in Applied Linguistics at the Center for Language Teaching and Research, University of Queensland, Australia. 
He has written several articles on CALL and related subjects. "Should be a very useful reference tool for those involved in developing and using computer programs to augment language teaching and language learning."--Notes on Linguistics "Lucidly written and carefully researched, Levy does an excellent job of synthesizing the major components of CALL. Useful for both researchers and practitioners interested in incorporating CALL into the classroom."--Peter Shea, SUNY Albany
cc/2019-30/en_middle_0023.json.gz/line1252
__label__wiki
0.883979
0.883979
The Deliverance: Logic. Asad Q. Ahmed. This book offers for the first time a complete scholarly translation, commentary, and glossary in a modern European language of the logic section of Ibn Sīnā's (d. 1037 CE) very important compendium al-Najāt (The Deliverance). The original, written in Arabic, is the product of the middle period of the most renowned Muslim philosopher and physician, known in the Latin West as Avicenna. Avicenna's logic system took as its starting point the Aristotelian and the Peripatetic tradition, but diverged from these in fascinating and original ways. The system presented by him became the standard reference and focus of further elaboration, debate, and innovation in the Islamic scholarly tradition, deeply influencing both the 'traditional religious' sciences (such as theology and law) and the naturalized Greek system (such as metaphysics). Because the Najāt is both comprehensive and relatively terse (it has been the subject of diachronic study in various madāris and carries a number of attached commentaries and glosses), this translation will be extremely useful to those who do not read Arabic, but who wish to gain an overview of Avicenna's logic. Asad Q. Ahmed is Assistant Professor of Arabic and Islamic Studies in the Department of Asian and Near Eastern Languages and the Program in Religion at Washington University in St. Louis. 
He graduated in 2000 from Yale University with an AB from the Departments of Literature and Philosophy and in 2006 with a PhD from the Department of Near Eastern Studies, Princeton University. His research interests include early Islamic history and historiography, classical Arabic poetry and poetics, Arabo-Islamic philosophy, theology, and logic, and the post-classical rationalist Islamic scholarly tradition, with a special focus on South Asia.
cc/2019-30/en_middle_0023.json.gz/line1253
__label__cc
0.715817
0.284183
MPA alum Pugh attends Salzburg Global Seminar From MPA alum Sherrie Pugh of the orlando.charles company: "I attended the November 2015 Salzburg Global Seminar on Innovation of Aging Societies. It was an international group of 50 people from 28 different countries sharing the challenges the world is facing with the Silver Generation. The opportunity to learn of the many innovations different societies were utilizing and implementing in the one-on-one and group conversations was extremely informative. The week of international conversation was incredibly motivating. I was so inspired that I drafted a policy paper that has been shared with several Minnesota legislators. I continue to share the experience with fellow board members of aging organizations I serve. The support of the Humphrey School, McKnight Foundation and the Salzburg Seminar made this international opportunity a possibility that has impacted my work on aging." HHH has Fellowships for 2 Salzburg Seminars (Austria) -- application process open now; applications welcome, please distribute widely. https://www.hhh.umn.edu/international-fellows-scholars/international-fellows Salzburg Global Seminar (SGS) McKnight Foundation Fellowship Program The International Fellows and Scholars Program office is proud to administer a university-wide program that awards fellowships to outstanding residential mid-career professionals to attend a session at the Salzburg Global Seminar through a grant from the McKnight Foundation. The Salzburg Global Seminar convenes imaginative thinkers from different cultures and institutions, organizes problem-focused initiatives, supports leadership development and engages opinion-makers through active communication networks, all in partnership with leading institutions from around the world and across different sectors of society. 
Sessions average about five days, supporting the mission of the Seminar to challenge present and future leaders to solve issues of global concern. Beginning in 1947, the Salzburg Global Seminar has convened more than 25,000 fellows representing more than 160 countries. The Seminar welcomes applications to its annual programs, which bring together approximately sixty distinguished international participants, including emerging leaders known as "Fellows", from government, business, academia, and nongovernmental organizations. The sessions are cross-sectoral and cross-cultural in approach, with the objective of broadening and deepening perspectives to promote informed action and far-sighted decision making among key professionals worldwide. Particular emphasis is placed on generating cutting-edge ideas and on developing proposals for action. The International Fellows and Scholars Program office is accepting applications for a limited number of full or partial scholarships to attend sessions that are available based on certain eligibility criteria. Please visit the Seminar’s website at www.SalzburgGlobal.org for additional information about the sessions. Eligibility: 1. Current Minnesota resident. 2. Mid-career professionals from government, business, academia, and nongovernmental organizations. To be considered for the scholarship, all applicants must send: 1. Resume or Curriculum Vitae. 2. Biographical sketch of 150 words or less. 3. A brief write-up (500 words maximum) on how the specific conference(s) you are applying to attend fits into the framework of your expertise and areas of interest. Please include how you intend to use your experience to benefit your community, the type of scholarship that you are seeking (full or partial) and why a scholarship is necessary. Below is a list of the current programs that are open for application and nominations: Better Health Care: How do we learn about improvement? 
10 Jul - 15 Jul, 2016 http://www.salzburgglobal.org/calendar/2010-2019/2016/session-565.html Salzburg Global Forum for Young Cultural Innovators III 11 Oct - 16 Oct, 2016 Toward a Shared Culture of Health: Enriching and Charting the Patient-Clinician Relationship 29 Oct - 04 Nov, 2016 Rethinking Care toward the End of Life 14 Dec - 19 Dec, 2016 You must turn in all application materials at least 2 months prior to the start of a session. Please send completed applications to grayx260@umn.edu. If you have questions, please e-mail or call Sherry Gray. Alumni: For a list of our McKnight Fellows click here. Sherry Gray Lecturer, Global Policy Area Director, Global Programs and International Fellows and Scholars Programs Humphrey School of Public Affairs 232 Humphrey Center, 301 19th Avenue South Minneapolis, Minnesota 55455 USA Telephone (1-612) 626-5674 (office) Fax 625-3513 Skype: graysherry Posted on Wednesday, April 06, 2016
cc/2019-30/en_middle_0023.json.gz/line1254
__label__wiki
0.933557
0.933557
Duchene, Senators Clear Avalanche In 5-2 Win. Ottawa's Matt Duchene, left, celebrates one of his two goals with teammate Mark Stone in the Senators' 5-2 win over the Colorado Avalanche. (Adrian Wyld/Canadian Press) It was a magical night for Matt Duchene, and it was fitting that it took place against his former team. While celebrating his 28th birthday and his first game back since the birth of his son, the Ottawa Senators sniper enjoyed a three-point game against the Colorado Avalanche in a 5-2 win. The native of Haliburton, Ontario, generated two goals and an assist in this match, the second of two this season between the hockey clubs. This game’s first star now has an impressive 45 points in 38 games. “You couldn’t have written it better. It’s a very special feeling… I play my best when I’m light and happy, and this week has been the best week of my life”, said a jubilant Duchene after the game. Brady Tkachuk, Mark Stone, and Ryan Dzingel also scored for the home team in this affair, while Nikita Zadorov and Nathan MacKinnon scored for the visiting Avalanche. With the win, the Senators improve to 18-24-5 and 41 points, which ranks them last in the Atlantic Division and 29th in the NHL. With their loss, the Avalanche record drops to 21-18-5 and remains at 50 points, which is ranked third in the Central Division and 15th in the NHL. Sens light the lamp Despite the Senators outshooting the Avalanche 12-8 in the first period, the score remained even. Goaltenders Anders Nilsson and Semyon Varlamov both made important saves when challenged in the opening frame to keep the game scoreless. The second period, however, was a different story. A total of four goals were registered in this frame. Tkachuk opened the scoring on a determined effort to get the puck past a sprawled Varlamov 2:06 after the intermission. 
Just over a minute later, the Senators scored their second goal of the game, credited to Mark Stone. Around the game’s midway point, Thomas Chabot made his presence felt. This was the 21-year-old’s first game back after missing eight due to a shoulder injury. Seconds after disrupting a Sven Andrighetto breakaway with a massive poke check, Chabot completed his shift engaging in an odd-man rush the other way, before coolly sliding the puck to Ryan Dzingel at the side of the Avs net. The Sens sniper made no mistake, scoring his 19th of the campaign, and lifting his team to a 3-0 lead. It was a remarkable shift for number 72, who hasn’t seemed to miss a beat in his triumphant return to action. Nikita Zadorov got the visiting team on the scoreboard before the end of the period with a cannon from the point, which found the top corner of the net past Nilsson. Familiar Script? After 30 minutes of play, the game seemed to be following a familiar script. Up 3-1 at the midway point of the game on Oct. 26, the Avs came alive in the final half and scored five unanswered goals for a 6-3 win over a young and inexperienced Senators team. Colorado’s top line of Gabriel Landeskog, Mikko Rantanen, and MacKinnon combined for 10 points. This time, with the game trending in a familiar fashion, the end result was very different. The Sens controlled much of the game, and successfully limited their opponent’s top line to lock down the win. “Jaros has developed so fast. He’s so good, so intense, he’s got so much to give. And Boro, both of them we felt that because of their size and speed of their top line, we needed to challenge that as much as we could”, said Senators head coach Guy Boucher. “Obviously, Pageau, Paajarvi, and Smitty are the three guys we probably would have had since the beginning of the year as a shut down line… this five man unit was terrific tonight”. In the third period, Duchene scored his 19th of the campaign. 
During the post-game interview, he revealed a thoughtful idea from his teammate, Christian Wolanin, after his first goal of the night. “Wolly did something pretty cool….He told me to grab it [the puck] because his dad grabbed the one from his first goal, the first game when he was born, and he still has it. That was pretty special for him to do that and I’m really happy he did”. Wolanin’s father, Craig Wolanin, played in 695 NHL games and won a Stanley Cup with the Colorado Avalanche in 1995-96. Snapping back to the third period of this game, the Avalanche were given two great opportunities late in the game to rally a comeback. With Zack Smith serving two minutes for hooking, MacKinnon took advantage on the power play and narrowed the score to 4-2 with 3:59 remaining. Rantanen recorded his 50th assist of the season on this goal, which was also his second assist of the night. Immediately after, Christian Jaros was penalized for delay of game. This gave the dangerously offensive Colorado team life, and the perfect chance to trigger an avalanche of late-game goals. They came agonizingly close to scoring another but struck the post after a shot deflected off Nilsson. In the end, the Senators survived their second penalty kill effort, and Duchene hit the 20-goal milestone with an empty net tally. Nilsson Shuts The Door Nilsson recorded his third win with Ottawa, and he managed to close the door against the Avalanche in style. Donning his new pads, glove, and blocker featuring Senators colors, the 28-year-old stopped 30 of 32 shots for a 2.00 goals against average and a .938 save percentage. Ottawa Senators goaltender Anders Nilsson takes a shot sporting his new Senators pads during practice ahead of their game against the Colorado Avalanche on Jan. 16, 2019 (@Senators). The Swedish native is now 3-1-0 in his last four games, along with a 1.76 goals against average and a .946 save percentage. 
As Nilsson adjusts to Ottawa, he has looked very sound and confident in the Senators crease.

Odd Rivals

The Duchene component is just one reason these two teams have built an interesting rivalry over the past two seasons. As part of the exchange for the ex-Colorado sniper last year, the Senators included their 2019 first-round draft pick. Given the Senators' low ranking in the NHL this season, the pick currently qualifies as a lottery pick in the Jack Hughes sweepstakes this spring. It is in the Senators' best interest to ensure Colorado general manager Joe Sakic does not get the pleasure of drafting this highly regarded prospect with their former pick. Every point matters.

Snapshot

Despite missing the last eight games, Chabot remains in the top 10 in scoring among NHL defensemen. After this game, he has 39 points in 39 games, and is tied with Kris Letang of the Pittsburgh Penguins for sixth.

Three Stars:
1. Matt Duchene, Ottawa
2. Anders Nilsson, Ottawa
3. Mark Stone, Ottawa

The Ottawa Senators take off for Raleigh, North Carolina to take on the Carolina Hurricanes this Friday. This will be the second time this month the two teams meet. On Jan. 6, the Canes defeated Ottawa 5-4 at Canadian Tire Centre in Ottawa. Carolina is fifth in the Metropolitan Division and 18th in the NHL with a record of 22-19-5 and 49 points.

Scoring Summary:
- Ottawa: Brady Tkachuk (11), from Chris Tierney (24) and Ryan Dzingel (16)
- Ottawa: Mark Stone (21), from Cody Ceci (8), at 3:13
- Ottawa: Ryan Dzingel (19), from Thomas Chabot (29) and Matt Duchene (25), at 9:01
- Colorado: Nikita Zadorov (4), from Nathan MacKinnon (41) and Mikko Rantanen (49), at 18:53
- Ottawa: Matt Duchene (19), from Zack Smith (12), at 7:49
- Colorado: Nathan MacKinnon (27), from Tyson Barrie (29)
- Ottawa: Matt Duchene (20), unassisted (5-2)

Cover Photo Credit: Adrian Wyld / Canadian Press
1966 Airstream Caravel Renovation: Updating an American Icon by Owain Harris In Project Spotlight

Occasionally a project comes along that is so unlike anything you have done before that you are left to wonder if it is even something you should take on, or if you will come to regret it down the line. This is how I felt on a sunny August day when a 1966 Airstream Caravel trailer was towed into the shop. The project scope seemed quite straightforward on paper: create an interior space that could function as an office, a reading sanctuary, and a guest bedroom for the clients' summer home on Lake Winnipesaukee. But as I came to learn through the course of this project — designing and building around the convoluted compound curves of the classic Airstream's contours was anything but simple. The first Airstreams were built in California in the late 1930s, but soon WWII brought aluminum shortages and made travel a luxury few could afford. It wasn't until the post-war boom that increased wealth and leisure time, coupled with the new interstate highway system, created the perfect nexus for road travel, and the Airstream trailer with its sleek lines and futuristic look was there to fill that niche. By the 1960s, the silver trailer was so cemented in the American psyche that when NASA needed a mobile quarantine unit for astronauts returning from moon missions (presumably infected with space germs), they turned to the Airstream company, which would eventually build four of the specialized trailers at its Jackson Center, Ohio plant. Given the iconic status of the Airstream, I knew that I had to get the details of this project right. When I received the trailer it was an intimidatingly blank slate, consisting only of an empty aluminum skin and a new 3/4" plywood floor.
Before I could even start the construction of new cabinetry, a floor plan had to be developed that incorporated all of the elements the clients were hoping to have in the new interior. From our discussions, I knew that they wanted a storage closet, a banquette that would fold out into a double bed, a desk, a prep area that would include a microwave oven and a mini fridge, and sleeping accommodations for one more person. Quite a lot to squeeze elegantly into a 17' trailer! For this phase of the project I enlisted the help of my friend Aimee Brothers of Lavender and Lotus Interior Design. I had collaborated with her in the past and I knew that her keen design sense would be invaluable when it came to working out the details of the layout. I especially wanted her input to ensure that the color schemes of the flooring, walls, and fabrics worked in harmony. Together we came up with a layout that incorporated all of the elements the clients were hoping to achieve, whilst retaining a sense of openness and flow through the small space. With the design phase mostly completed, it was time to start considering how to construct the walls. Originally I had hoped that I would be able to find either an aftermarket fiberglass wall kit or work with a boat builder to fabricate new wall panels to fit the signature curved ends of the Airstream. Unfortunately both of these ideas ended up being dead ends, so I spent the time to figure out how to do it with the one medium I am most comfortable with: wood. After some experimenting, I discovered that I would be able to follow the compound curves using wedges of 1/8" bending poplar to mimic the same technique that was used for the aluminum skin on the exterior of the trailer. Instead of the 7 panels used for the metal, though, I went with 11 panels. I had my plan in place, but before I could get started I still needed to have the rough wiring done and fabricate some curved studs that the wall panels would be attached to.
Wiring was easy enough (completed by Chris Ward of Ward Electric); the curved studs, less so. After a lot of experimentation, I settled on a technique of building laminated studs in place, using the curves of the trailer as the mold. The hardest part of this technique was fixing the first layer of 3/8" bending ply to the underside of the aluminum skin, which I was able to do using a combination of polyurethane glue and hot-melt glue to hold the piece in place as the poly glue set. Once the first layer was secure, it was easy enough to glue and screw subsequent layers to the first, until the stud reached the required thickness. With studs and wiring complete, and a layer of fiberglass insulation in place, it was time to tackle the laborious process of scribing, cutting, and attaching the wall panels. There ultimately wasn't any secret technique to this process, just lots of patience and time with a pencil and bandsaw. Once this was finished I could finally start thinking about the fun stuff: the built-in cabinetry. For the cabinetry we settled on Baltic birch plywood construction with sycamore veneer and walnut trim. The client really liked the look of the exposed laminations on the edge of the Baltic birch plywood, which provided a cool retro look and made my life a lot easier too! I warmed up by first building the two relatively simple cabinets that would become the prep kitchen area and the writing desk. Other than the large scribes needed to fit the curved walls and accommodate the wheel wells, both pieces were relatively straightforward. Once they were complete, it was time to move on to bigger and better things. One of my goals in designing the interior was to mimic the curves of the shell as much as possible in the cabinetry. There would be curved cutouts in the partitions and rounded doors in the storage bins, but to really get the full effect there would need to be curves in the cabinetry too.
I had done a fair amount of bent-lamination work in the past and I was eager to put my skills to the test here. I designed the banquette and also a built-in bench seat with conical curves in them. But the showpiece would be the curved closet corner that would be scribed into the compound curves of the Airstream's ceiling and walls. At seven feet tall, the single-piece corner would be the largest lamination I had ever attempted, so I decided to dive right in and start there. As I have discussed in previous articles for the Journal, all bent laminations start with a mold, and this one would require a big one. Because I needed to be able to slide the mold and the work piece in and out of the vacuum bag with relative ease, I decided to build it with less ribbing than usual to reduce the overall weight. Luckily, I did a dry run with just the mold in the bag, because the lightweight approach turned out to be a big mistake! The light construction of the form was no match for the forces of the vacuum system, and the whole thing imploded inside the bag. Oh well, so much for lightweight! My next iteration of the form was more rugged and held up inside the press. The next problem to overcome was ensuring that the pressure of the bag on the work piece was evenly distributed across the surface. This was necessary because the outer layer of my closet corner would be a piece of sycamore veneer. If the pressure was not evenly applied, I could end up with air pockets on the surface where the veneer was not sufficiently pressed into the glue layer. After some experimentation, I settled on a combination of a 1/8" piece of plywood used as a caul and a layer of 1/4" quilt batting that would act as a breather fabric between the caul and the bag and would allow air to flow over the surface as the bag was pulled tight.
To ease the stress of the glue-up I opted to use Unibond 800 as the adhesive, which gave me plenty of open time to work with, and I made sure there was an extra person in the shop in case things got hairy and I needed a hand. All of the prep work, experimentation, and dry runs paid off, and I was able to achieve the bend on the first attempt. Further proof that woodworking is 90% preparation and 10% execution. As is true for most things in life, I imagine. The conical pieces I needed for the bench and banquette were relatively easy by comparison. Rather than a typical bent lamination, which is a section of a cylinder, these pieces would be a section of a cone, so correct layout of the mold was critical. I drew out the form full-scale to determine the size and placement of the ribs in order to achieve the correct angle. The beveled ribs were cut on a band saw with the table tipped at the necessary angle. Once the mold was built, the rest of the process was the same as any other bent lamination, although I opted to give myself a few extra inches of material on either side of the angle so that I could creep up on the final size. From here on out, most of the rest of the furnishings would need to be built in place, with some sections being constructed at the bench and then scribed into the curves of the Airstream and installed. The technique I developed over this part of the project consisted of using 1/4" MDF as template stock, a pair of scribes, and many trips between the band saw, spindle sander, and edge belt sander. Once I had the scribe perfect, I would transfer it to whatever piece I was working on using a trim router and a flush-cut bit with the bearing riding on the template I had just created. Slowly in this manner, the interior cabinetry began to take shape. First the closet, then the banquette and day bed, and finally the small conical bench. The same scribing technique would be used to get a perfect fit on the prep cabinet counter and the desk top.
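For readers who want to work out their own conical mold, the layout described above comes down to basic trigonometry. The sketch below is purely illustrative, with hypothetical dimensions (the article doesn't give the actual radii): it computes the band-saw table tilt needed for the beveled ribs, and the slant length along the cone's face, from the two end radii and the run between them.

```python
import math

def cone_mold_angles(r_large, r_small, length):
    """Given the two end radii of a conical lamination section and the
    straight-line distance between them (same units throughout), return
    the taper angle in degrees (the band-saw table tilt for the rib
    bevels) and the slant length measured along the curved face."""
    taper = r_large - r_small
    tilt_deg = math.degrees(math.atan2(taper, length))  # cone half-taper angle
    slant = math.hypot(taper, length)                   # hypotenuse of the taper triangle
    return tilt_deg, slant

# Hypothetical example: a 24" radius tapering to 18" over a 30" run.
tilt, slant = cone_mold_angles(24.0, 18.0, 30.0)
print(f"table tilt: {tilt:.1f} degrees, slant length: {slant:.2f} in")
```

Drawing the form full-scale, as the author did, accomplishes the same thing graphically; the arithmetic is just a way to double-check the tilt before committing stock to the saw.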
There were some head-scratching engineering challenges along the way — how to get a bed with square corners to fold away into a space that is round, for example — but eventually everything was in place and I could pay attention to the cosmetic details of doors, drawers, and a small vanity mirror for the sleeping area. To keep the sleek, modern look of the trailer I decided on flat doors and drawers with vertical-grain veneer and a small walnut bead to cover the edge of the Baltic birch plywood substrate and provide a contrast to the sycamore. I also wanted to use shop-made walnut pulls. I felt it was important for the pulls to be flat to reduce the chance that someone might catch themselves on one while moving around the small space. I chose to make them similar in style to a set I had developed for a recently completed jewelry cabinet. They would be inset with a cutout for your fingers. Instead of a round hole as I had done for the cabinet, I once again looked to the Airstream for inspiration and created an elliptical shape reminiscent of the elegant curves of the Airstream's ceiling. One of the things I have learned over the years is to recognize where my strengths and passions lie, and to subcontract out the things that I either don't have the skills for, or the interest in doing. For this project that meant paint for the walls, finish on all of the cabinetry and furnishings, and upholstery for the banquette, the daybed, and the bench. The paint and the cabinetry finish were completed by Bob Realy, and the cushions were made and upholstered by Deborah Fisher. The final parts to install were the window trim and the walnut ribbing that would cover up the seams in the paneled walls and ceiling. For these elements I fabricated strips of walnut 3/4" wide and 3/8" thick with a 1/8" radius on the edges. These pieces were substantial enough to hold the edges of the panels in place, but still thin enough that I could bend them around the curves in the ceiling.
Once cut to length and scribed where necessary, they were held in place with decorative oval-head screws. After many, many months of planning, obsessing, experimenting, building, worrying, and working, the completed Airstream was finally ready for its debut, and on a rainy October afternoon I watched it disappear down the road with a mixture of pride and relief. Had I known what I was getting myself into, it's possible that I would have turned the project down, but as T.S. Eliot memorably quipped, "If you aren't in over your head, how do you know how tall you are?", and now, with some space and time between myself and the project, I am glad that I didn't. The opportunity to collaborate with people as patient, enthusiastic, and engaged as my clients were on this project is in itself a treat, but to do so on something this unique and special was even more so.

Photos: exterior of the 1966 Airstream Caravel; curved closet; prep area; desk detail.
Tag: psone

Tekken 3 [PS1] – play as Crow

This is a playthrough using Crow in the PS1 version of Tekken 3. Read on below for more information… Crow is a soldier character who only appears in Tekken Force mode as enemy opponents. He is an unplayable character in the game. ===== ABOUT CROW ===== Crow is one of the four types of enemy soldier characters who appear in the Tekken Force mode of the game. Crow is considered to be of "Soldier" rank, and is the weakest of the soldiers. He is the first type of enemy character to appear in the mode. ===== ABOUT THE SOLDIERS ===== Despite the four soldier characters looking different from each other and having different names, technically they are really only "one" character. The different names and looks are just different costumes, operating in the same manner as a regular character… Posted on June 10, 2019.

Tekken 3 [PS1] – play as Hawk

This is a playthrough using Hawk in the PS1 version of Tekken 3. Read on below for more information… Hawk is a soldier character who only appears in Tekken Force mode as enemy opponents. He is an unplayable character in the game. ===== ABOUT HAWK ===== Hawk is one of the four types of enemy soldier characters who appear in the Tekken Force mode of the game. Hawk is considered to be of "Sergeant" rank, and is the third strongest of the soldiers. He appears after Crow has already made his appearance in the mode.
===== ABOUT THE SOLDIERS ===== Despite the four soldier characters looking different from each other and having different names, technically they are really only "one" character. The different names and looks are just different costumes, operating in the same manner as a regular character… Posted on June 9, 2019.

Tekken 3 [PS1] – play as Owl

This is a playthrough using Owl in the PS1 version of Tekken 3. Read on below for more information… Owl is a soldier character who only appears in Tekken Force mode as enemy opponents. He is an unplayable character in the game. ===== ABOUT OWL ===== Owl is one of the four types of enemy soldier characters who appear in the Tekken Force mode of the game. Owl is considered to be of "Commander" rank, and is the strongest of the soldiers. He appears after Crow, Hawk and Falcon have already made their appearances in the mode. ===== ABOUT THE SOLDIERS ===== Despite the four soldier characters looking different from each other and having different names, technically they are really only "one" character.
The different names and looks are just different costumes, operating in the same manner as a regular character… Posted on June 6, 2019.

Marvel Super Heroes vs. Street Fighter [PS1] – Norimaro in the US version

This is a play-through using Norimaro in the PS1 USA version of Marvel Super Heroes vs. Street Fighter. Read on below for more information… Norimaro is an inaccessible character in the International versions of the game. This means that he is an unplayable character in those versions, namely this one. ===== Additional Information ===== —- If you complete the game with Norimaro, you get an actual ending for him, in English text. It appears that the decision to omit him from the International version of the game may have come after the translators did the endings. —- Surprisingly, Cyber-Akuma as the 2nd character actually has a Support strike and Team Duo super, despite not normally being available as a partner character. —- Notice how when Cyber-Akuma is selected as the 2nd character… Posted on July 8, 2018.
The game is afoot! 5 reasons there's no way Moriarty is dead on "Sherlock"

For every Sherlock fan, "The Reichenbach Fall" is an episode that will live on in infamy. The season two finale is one of the most shocking Sherlock episodes to date and will probably remain so long after the series ends. After two seasons of torturing Sherlock Holmes (Benedict Cumberbatch) and John Watson (Martin Freeman), Jim Moriarty (Andrew Scott) and Sherlock have their "final" showdown. After framing Sherlock for fraud and a whole host of other crimes, Moriarty shoots and kills himself in front of Sherlock. Since that very moment, Moriarty's death has been a controversial topic amongst Sherlock fans. Is Moriarty actually dead or is he just faking? This conundrum has plagued me since I began watching the series. Am I playing into the hands of showrunners Mark Gatiss and Steven Moffat by believing Moriarty's really alive, or am I fooling myself by thinking too much about the potential trickery by the creators to see he really is dead? Despite my own internal struggle, there's something fun in the idea that Moriarty is really alive. Here are 5 reasons Moriarty is still alive:

1. Mark Gatiss and Steven Moffat are spending too much time proving that Moriarty is dead. This is a classic showrunner trick. They've spoken at length, seemingly confirming Moriarty is, in fact, dead, but they also dedicated an entire special, "The Abominable Bride," to toying with Moriarty's death. Sure, Sherlock concluded that Moriarty died and his network is going to continue his plans, but this is Sherlock and that conclusion is FAR too logical.

2. Like Sherlock used his network to fake his death, Moriarty used his. Sherlock and Moriarty were both on the roof that day, and at the end of the episode it seemed that both had died. In Season 3, it is revealed that Sherlock used his vast network to fake his death. We all know Moriarty has his own network, so it seems very plausible that he had his own plan to fake his death.
3. Someone DID die on the roof, but it wasn't Moriarty. We all know that Moriarty uses and abuses his pawns. It doesn't seem too out of character for him to sacrifice someone from his network to play the game.

4. Sherlock's conclusion at the end of "The Abominable Bride" isn't totally wrong, but it isn't totally right, either. At the end of "The Abominable Bride," Sherlock concludes that Moriarty is dead but his network is going to follow through with his plans… but maybe the final reveal is actually that he is alive and has been pulling strings from behind the curtain this whole time.

5. He's Moriarty. Guys, Moriarty is one of the most incredible villains on television. You don't kill the best villain at the end of the second season. You just don't.

So there you have it: Moriarty has to be alive, and we will see him again when we least expect it — before the series ends.
Glycyrrhiza glabra / Licorice Root Fluid Extract

Fluid Extract made by a process of hydro-ethanolic percolation, with a ratio of 1 part Licorice Root to 1 part liquid. The liquid comprises 75% water and 25% sugar-beet-derived ethanol. Glycyrrhiza glabra / Licorice Root Fluid Extract is available for purchase in increments of 0.25 litres (l).

Liquorice (British English) or licorice (American English) is the root of Glycyrrhiza glabra, from which a sweet flavour can be extracted. The liquorice plant is an herbaceous perennial legume native to southern Europe and parts of Asia, such as India. It is not botanically related to anise, star anise, or fennel, which are sources of similar flavouring compounds. Liquorice extracts have a number of medical uses, and liquorice is also used in tobacco blends and as a flavouring in candies and sweeteners. It is one of the most widely used medicinal plants in both Western and Eastern herbal medicine, with at least 3,000 years of history as a medicinal plant.

Medicinal Action and Uses

Analgesic - Liquorice has pain-relieving properties.
Anti-Inflammatory - When glycyrrhizin is broken down in the stomach, it displays anti-inflammatory and arthritis-relieving effects similar to hydrocortisone and other corticosteroid hormones. It is also used to relieve rheumatism and arthritis.

Antispasmodic - Glycyrrhizic acid has an antispasmodic effect on the gastrointestinal tract; bacteria in the intestines convert it into a substance that acts as a liver-protective agent by neutralizing free radicals.

Antiviral - Glycyrrhizin (found in Liquorice) is one of the best-documented antiviral substances derived from the plant kingdom and has been shown to be effective against a variety of viruses, including those which cause influenza and the common cold. It does this by naturally boosting the immune system.

Expectorant - The saponins found in the herb both thin mucus and act as an expectorant. Liquorice root has been used as an herbal remedy for bronchitis and whooping cough because of its expectorant abilities.

Hepatic - The herb has a beneficial effect on the liver: it increases bile secretion and lowers cholesterol levels.

Skin Complaints - Externally the herb has been used against dermatitis, eczema, herpes, and shingles. Its anti-fungal and anti-bacterial properties explain its use as an herbal remedy for athlete's foot, canker sores, and dandruff.

Stomachic - Liquorice root lowers the acid level in the stomach and relieves heartburn and indigestion. It reduces the secretion of digestive juices and helps to create protective mucus in the stomach. This makes liquorice a good remedy for ailments of the gastrointestinal system, including irritation, spasm, irritable bowel syndrome (IBS), and inflammation of gastric and duodenal ulcers.

Product is supplied in amber PET bottles with tamper-evident screw tops.

About Herbal Apothecary

We offer professionals all the usual practitioner products and services - but with a difference.
Herbal Apothecary has been in existence for over 30 years, and we produce one of the largest ranges of practitioner herbal products in the UK. We are proud to be a real Living Wage employer, with ISO 9001:2015 quality management systems and organic certification.
Why Horror Equity Fund is the Independent Filmmaker and Investor's New Best Friend

January 08, 2018 / Brian Herskowitz

Let me start with a little backstory. I have been in the entertainment industry for a very long time, and here's the truth: for every film I've seen produced, for every project that I've been involved with, there have been a dozen that didn't get made. All filmmakers have a burning passion to see their work produced. If I had a dollar for every time I heard someone say "I just want to get my film made!" I'd have enough money to get my film made. But passion doesn't always lead to success. There are levels of success, akin to a ladder:

First rung: You have an idea.
Second rung: You (or a writer you hire) put that idea on paper.
Third rung: You (or your writer) complete the script.
Fourth rung: The script is good.
Fifth rung: Others think that the script is good.
Sixth rung: Your film goes into production!
Seventh rung: Your film is COMPLETED.
Eighth rung: Your film is good.
Ninth rung: Others think your film is good.
Tenth rung: Your film is distributed.
Eleventh rung: Your film finds an audience.
Twelfth rung: The AUDIENCE LIKES your film.
Top of the ladder: Your film is a financial and/or critical success!

In the world of independent entertainment, about ninety-nine percent of all movies are stuck on the first five rungs. The luckier one percent get to rung number six. I've watched as filmmakers scratch and dig and fight to get their projects sold, funded, made. But in the throes of passion, the obligation to take care of the investor sometimes gets forgotten. Time after time, investors looking to support cinematic artists are disappointed, and once an investor has been burned they're highly unlikely to revisit the industry. Horror Equity Fund has made the investor our first priority. But that means that we also must take care of the filmmaker.
We start by carefully evaluating each project for potential profitability, and then reverse-engineer the process by going to sales reps and distributors prior to reaching out to the investors and the horror community. That's another layer of protection we provide to the investor. By engaging early with the folks that get the project into the marketplace, we ensure that they are on board and can perform their function to the fullest capacity, and that in turn affects the bottom line. And once we've done that work, and attached some form of distribution, we will have three paths (separately and/or together) to take to secure funding:

1. We can fund some or all of the project ourselves. *
2. We can engage third-party equity partners. **
3. We can create a Regulation CF raise and take the project to the Horror Community. ***

While we're on the subject of Community, let's take a little side trip. Horror Equity Fund was partially founded on the concept that horror fans are one of the most passionate, yet dispersed, groups in the known universe, and it is our goal to bring fans, content creators, and investors into a community that serves them all. This "centralization" of at least a bit of the Horror Universe could bring immense value to each group in unique fashion. We call this community the "Federation of Horror" (FOH). Imagine a place where a horror fan can get news, buy memorabilia, subscribe to a "Box O' Horror", and interact with horror favorites. Content creators can be members of an elite group that can have their scripts crowdsourced, and the best of those will be championed by HEF. Feedback from the community can serve investors in the judgments they need to make. Filmmakers will be able to come to the Federation of Horror for resources, from discounts on camera accessories to crews looking to help them with their project.
There will also be an educational corner where visitors can learn everything from the fundamentals of coiling cable, to legal and financing checklists, to tips on screenwriting. At some point this community will become an OTT network and distribution platform of its own, giving filmmakers (and other content creators) a direct pipeline to fans and providing investors another revenue stream for the projects that they have supported.

Veering back onto the main road: we talked a lot about the investor and mitigating their risk, but let's talk about the filmmaker. And we should note, while this article deals most directly with filmmakers, the lessons and objectives parallel the needs and objectives of content creators in many other media and circumstances. HEF and the FOH seek to embrace those creators and their work as well.

As previously stated, taking care of the investor means taking care of the filmmaker. But what does that mean? It means we want to help the content creator make the very best project they can. That means making sure that they have the correct amount of money (not too much, not too little) to get their vision completed to their satisfaction. Once in a while, a filmmaker will come to HEF and announce that they have a $10 million movie, and then pronounce, "But I can do it for a lot less." That's absolutely true. You can do almost any film for less… but not without sacrifices. We want to make sure that any sacrifices you make don't unduly compromise the quality of your vision, because if the film doesn't come out as good as you hoped, then not only are you going to suffer, but most likely, so will the investor. We want to help you secure the right budget to make the film you want to make, as long as we can illustrate the POTENTIAL for a return to the investor. Philosophically, we are budget agnostic.
If you have Margot Robbie and Hugh Jackman as the stars, and Guillermo del Toro attached to direct, and you tell us your horror film is $30 million, AND the script's terrific, well hey, we'd be all over that. But if your film is a one-location, six-character piece and you aren't going to cast A-list talent, then you had better be able to justify that $30 million. And if we took that no-star, six-character, one-location movie to our sales reps, and they said, "You do $50 million all day long with this movie", we'd be all over that too. That is, of course, highly unlikely. More likely you're going to get DOWNWARD pressure to keep the budget as low as possible. But what's that number? This is akin to the old story about a reporter asking Abraham Lincoln how long a man's legs should be. Lincoln reportedly replied, "Long enough to reach the ground." Your budget needs to be just long enough to make your film reach the ground.

How else will Horror Equity Fund support the passionate independent? I have witnessed time after time filmmakers caught in the whirlpool of casting to get funding, funding to get cast. The ol' chicken-and-the-egg issue: "I can't get money 'cause I don't have a star. I can't attach a star 'cause I don't have money." Horror Equity Fund wants to solve this problem in two ways. First, we want to help with locating actors that can be a meaningful attachment, potentially without money up front. Attaching an actor that means something to sales and distribution can kick off your financing, and cement your entry into the marketplace. While we bring our full weight to bear in finding that talent for you, we know that a great many actors, and more to the point their representatives, insist that you "show them the money." Upon full funding, Horror Equity Fund will create a casting fund that can guarantee an actor's participation in the project. Another stumbling block, as previously mentioned, is distribution.
Talent often wants to know that a film they are going to work on for peanuts will have a life, and without theatrical distribution they may be reticent to sign on. One way to "guarantee" that distribution is to set aside funds for Prints and Advertising (P&A). Once fully funded, HEF will create a P&A fund that productions can utilize to give solace to nervous investors and assure actors that the film will be seen in theaters prior to going the streaming, VOD route. Lastly, using our significant outreach in the Horror entertainment universe, and the marketing might of the Federation of Horror, HEF will provide a PR and marketing boost to the projects we take on. We are the studio without the overhead, a "QQQ of Horror". We strive to be the investor and content creator's best friend. When we assist, you climb the ladder of success; we want to make sure that you don't stall in the middle. And to get you to the top, we'll be with you on every rung of the ladder.

Finally, as you read this article, we hope you consider and take the "30,000 foot view". This community and company are set up to succeed. Its flexible business and social model is there for many to attend and enjoy. It is an organic company and community, where growth is virtually inevitable with proper attention and involvement. And joining with HEF through investment will best enable fans, content creators and investors to "Profit From Their Passions".

There is limited time to invest in Horror Equity Fund! To learn more, head over to our investment page HERE.
The hosts are revolting! – Westworld Season 2 Episode 1 review
in Serial
Questions, questions, questions. It's enough to make your head hurt.
Evan Rachel Wood and James Marsden in Westworld.
by Mark Grassick 24th April 2018

Thank God (or should that be Robert Ford?) for recaps. If it weren't for quick catch-ups, this first episode of Westworld's second season would be as much of a maze as the figurative one that Ed Harris spent all of the first season searching for. We're not five minutes in and already there are questions. Where did Ashley Stubbs (Luke Hemsworth) reappear from? What happened to Bernard and Charlotte in between the two timelines? Does Charlotte know Bernard is a host? Is this host rebellion real or are the hosts just enacting Ford's last narrative? It's enough to make your head hurt.

Jeffrey Wright as Bernard and Tessa Thompson as Charlotte

I somewhat foolishly thought that once Westworld had all its world-building in order, we'd move past the riddles and get to the full-on human vs host ass kicking. Turns out this show can't let go of the mysteries, which could get a little tiresome. Season one asked a lot of patience from its audience as it kept every single card close to its chest right until the last moment. I'm not sure I'm on board with another season of withholding. At least the show has developed a knowing sense of humour, which serves to balance out the bloodshed – of which there is plenty.

Bernard comes face-to-face with a creepy drone

Westworld is up front about its dual timelines this time around, it's just a little tricky telling which one individual plotlines belong in. We get the immediate aftermath of the uprising at the gala dinner, terrified guests being hunted down and executed by vengeful hosts, led by a surprisingly bloodthirsty Dolores (Evan Rachel Wood). In this one, Bernard (Jeffrey Wright) and Charlotte (Tessa Thompson) are trying to find a way out of the park, while Bernard tries to conceal the fact that he's really a host.
We also get a second timeline, a little further in the future, where Delos heavies are attempting to quell the rebellion by rounding up and executing hosts. They find an unconscious Bernard on the beach and enlist his help, even though he's on the list of high value targets. There's also no Charlotte to be seen, which raises more questions.

Dolores (Evan Rachel Wood) is out for blood

Meanwhile Maeve (Thandie Newton) has returned to the Delos facility and is attempting to locate her daughter. She reluctantly teams up with the weaselly Lee Sizemore (Simon Quarterman), taking the time to repeatedly humiliate him along the way. Thandie Newton remains the series' MVP, delivering put-downs with an icy deadpan. Our last check-in is with William/The Man In Black (Ed Harris), who fends off some hosts and has an enigmatic exchange with the young host version of Ford.

Lee Sizemore (Simon Quarterman) and Maeve (Thandie Newton)

All of this adds up to an entertaining first episode that zips along and has the feel of a more confident version of Westworld. It's still a show that's easier to admire than to love, but the premiere strikes a better balance between mystery and revelation than at any point previous. That's summed up in a curiously grim final scene that will definitely have us rushing back next week. Just don't bet on getting any answers just yet. More bloody riddles.

William (Ed Harris) meets 'Ford'

Questions, questions, questions
Just what did happen to William to turn him from hopelessly devoted to unfeeling and brutal? Jimmi Simpson was a highlight of the first season so let's hope we get a bit more backstory as this season unfolds.
The Delos heavy who finds Bernard clearly recognises him as a high-level threat from her deck of cards (nice touch that). If she knows he's a target, then surely the rest of the Delos team do too. What's their game?
Travelocity was created in 1995 through a joint venture between Worldview Systems Corporation and Sabre Holdings. The founding team at Worldview conceived of the idea in 1994 as an extension to their online travel database offering, which had been distributed through Sabre, Bloomberg, AOL and many others. The founding team at Worldview joined with distribution partner Sabre in a 50-50 JV that resulted in the development and launch of Travelocity in 1995-1996. The founding members of the Travelocity team, responsible for the conception, development and launch at Worldview, were: Steve Baloff (Founder, CEO), Sam Haugh (VP Operations), BD Goel (VP Engineering), Neil Checkoway (VP Marketing), Steve Bengston (VP Business Development), Helen Zia (Editor-in-Chief) and Katherine Chesbrough (CFO). Later in 1996, Worldview's investors (Advanced Publication and Ameritech) sold their stake in Travelocity to a subsidiary of Sabre Holdings, and it was run by long-time Sabre information technology executive Terry Jones.[4]

As one of the pioneers of web-based disintermediation, Travelocity.com was the first website that allowed consumers to reserve, book, and purchase tickets without the help of a travel agent or broker.[4] In addition to airfares, the site also permits consumers to book hotel rooms, rental cars, cruises and packaged vacations.[3]

Flight Dallas - Las Vegas (DFW - LAS) $55+
Flight Los Angeles - Las Vegas (LAX - LAS) $55+
Flight Oakland - Las Vegas (OAK - LAS) $55+
Flight Seattle - Las Vegas (SEA - LAS) $55+
Flight Houston - Las Vegas (IAH - LAS) $72+
Flight Denver - Las Vegas (DEN - LAS) $77+
Flight San José - Las Vegas (SJC - LAS) $77+
Flight Houston - Las Vegas (HOU - LAS) $82+
Flight San Francisco - Las Vegas (SFO - LAS) $97+
Flight Chicago - Las Vegas (ORD - LAS) $125+
Flight Minneapolis - Las Vegas (MSP - LAS) $130+
Flight Orlando - Las Vegas (MCO - LAS) $131+
Flight Philadelphia - Las Vegas (PHL - LAS) $137+
Flight Washington - Las Vegas (BWI - LAS) $155+
Flight Atlanta - Las Vegas (ATL - LAS) $162+
Flight Newark - Las Vegas (EWR - LAS) $167+
Flight Fort Lauderdale - Las Vegas (FLL - LAS) $168+
Flight Boston - Las Vegas (BOS - LAS) $173+
Flight Washington - Las Vegas (DCA - LAS) $176+
Flight Detroit - Las Vegas (DTW - LAS) $177+
Flight New York - Las Vegas (LGA - LAS) $186+
Flight New York - Las Vegas (JFK - LAS) $219+
Flight Chicago - Las Vegas (MDW - LAS) $233+
Flight Honolulu - Las Vegas (HNL - LAS) $336+

Found Places
Boston Fenway Inn $28+
The Farrington Inn $50+
Hi Boston $52+
Ramada by Wyndham Boston $79+
Comfort Inn Boston $96+
DoubleTree by Hilton Boston Bayside $100+
Best Western Plus Boston Hotel $102+
Boston Lodge and Suites $107+
Hampton Inn & Suites Boston Crosstown Center $122+
Boston Omni Parker House Hotel $125+
Boston Hotel Buckminster $126+
Yotel Boston $136+
The Boxer $138+
Aloft Boston Seaport District $141+

Flight Orlando - Washington (MCO - DCA) $97+
Flight Minneapolis - Washington (MSP - IAD) $107+
Flight Minneapolis - Washington (MSP - DCA) $117+
Flight New York - Washington (JFK - DCA) $127+
Flight New York - Washington (LGA - DCA) $147+
Flight Boston - Washington (BOS - DCA) $155+
Flight Boston - Washington (BOS - IAD) $161+
Flight Fort Lauderdale - Washington (FLL - DCA) $168+
Flight Denver - Washington (DEN - DCA) $173+
Flight Santa Ana - Washington (SNA - DCA) $182+
Flight Chicago - Washington (ORD - DCA) $186+
Flight Dallas - Washington (DFW - DCA) $198+
Flight Los Angeles - Washington (LAX - DCA) $204+
Flight San Francisco - Washington (SFO - DCA) $206+

Subject to the restrictions set out in these terms and conditions, the 15% promotion code may be applied to a qualifying stand-alone hotel (not a hotel booking in combination with any other product such as flight + hotel, or flight + hotel + car) booked on a mobile device or the Travelocity app by 12/31/2019, for 1 or more nights for travel 6/31/2020.
Qualifying bookings instantly receive 15% off at check-out through the use of the promotion code. Customers are limited to one redemption of this promotion code and up to a maximum savings of $150 per booking. After the booking, this promo code will not be able to be used again, even if the booking is cancelled. Exclusions may apply and most major hotel chains are excluded. The promotion code cannot be redeemed against taxes, supplier fees, cancellation or change fees/penalties, administrative fees or other miscellaneous charges, which are the sole responsibility of the customer. Discounts are not redeemable for cash for any reason. Promotion codes are non-transferable, not for resale, and cannot be combined with other offers or used for any booking previously made. Any attempt at fraud will be prosecuted to the fullest extent of the law. Void where prohibited, taxed or restricted by law. Travelocity reserves the right to change or limit the promotion in its sole discretion. Usual booking terms and conditions apply (see https://www.travelocity.com/p/info-other/legal.htm) and all bookings are subject to availability.

Flight Boston - London (BOS - LHR) $312+
Flight New York - London (JFK - LGW) $317+
Flight Newark - London (EWR - LHR) $330+
Flight New York - London (JFK - LHR) $349+
Flight Dallas - London (DFW - LHR) $367+
Flight New York - London (LGA - LHR) $372+
Flight Chicago - London (ORD - LGW) $374+
Flight Chicago - London (ORD - LHR) $387+
Flight New York - London (JFK - LCY) $390+
Flight Newark - London (EWR - LCY) $390+
Flight Newark - London (EWR - LGW) $399+
Flight San Francisco - London (SFO - LHR) $417+
Flight Washington - London (IAD - LHR) $422+
Flight Los Angeles - London (LAX - LHR) $425+
In August 2012, Travelocity faced a viral controversy when it offered a $200 coupon code to attendees at the National Federation of the Blind annual conference in Dallas. After the NFB posted the code on Twitter without mentioning the attendee restriction, Travelocity re-tweeted it without noticing the error but deleted the tweet a day later. After some travel blogs and message boards reposted the code, many ineligible travelers used the code.[30] Travelocity responded by cancelling all trips that used the code who weren't on the list of attendees at the NFB annual conference. This resulted in a barrage of complaints from customers angry to see their trips suddenly cancelled.[31]

Flight Atlanta - Newark (ATL - EWR) $99+
Flight Fort Lauderdale - Newark (FLL - EWR) $114+
Flight Fort Lauderdale - New York (FLL - LGA) $124+
Flight Chicago - New York (ORD - JFK) $133+
Flight Houston - Newark (IAH - EWR) $133+
Flight Dallas - New York (DFW - LGA) $134+
Flight Houston - Newark (HOU - EWR) $140+
Flight Denver - New York (DEN - LGA) $141+
Flight Miami - Newark (MIA - EWR) $141+
Flight Los Angeles - Newark (LAX - EWR) $145+
Flight Los Angeles - New York (LAX - LGA) $157+
Flight Orlando - New York (MCO - LGA) $157+
Flight Seattle - Newark (SEA - EWR) $165+
Flight Chicago - Newark (ORD - EWR) $175+
Flight Houston - New York (HOU - LGA) $190+
Flight Dallas - New York (DFW - JFK) $197+
Flight San Francisco - New York (SFO - LGA) $219+
Flight Dallas - Newark (DFW - EWR) $221+
Flight Portland - Newark (PDX - EWR) $232+
Flight San Francisco - Newark (SFO - EWR) $236+
Flight Los Angeles - New York (LAX - JFK) $237+
Flight Ontario - New York (ONT - JFK) $237+
Flight San Francisco - New York (SFO - JFK) $237+
Flight Phoenix - New York (PHX - JFK) $247+
Claire's Daughter Gets a Virus – Suddenly
by Elaine Lewis, Shana Lewis
Try to come up with a remedy for this month's acute case

Quiz
Mom, it's time for the quiz. And once again, we're late! That could be because of our trip to the Virginia border, which brings me right to this month's Report! My cousin Jon got married on September 6th, so we visited lovely Maryland. Was it lovely? I hadn't noticed. Where are all the pictures we took? I sent them to you, remember? Oops. Sorry! We met members of Jon's band, Electric Sky–Kurt and Tommy–at the reception. Yes, and a boy asked you to dance, three times! And I was complimented on my Bee Gees T-shirt! Yes, and here's the thing, Mom, it doesn't fit you anymore. Can I have it? Well, on to the death report… We're on the Death Report already? The lead guitarist of REO Speedwagon, Gary Richrath, died. Aw…what a shame. Who's Richard Richrath? Gary Richrath! You may have seen me post about it on Facebook. How could I have missed that? He also wrote a few of their songs, one of which is the one I really like called "Take it on the Run". https://www.youtube.com/watch?v=i-RKG6sKIs0 He also does great guitar work on a song called "Keep on Loving You". If you say so… Here it is on the Midnight Special. I didn't know REO Speedwagon was on the Midnight Special. It defies comprehension. Anyway, in non-death news… We have non-death news too? The Quiz is so versatile! Jeff Lynne signed a deal with Columbia Records to put out a new ELO album. Is that sooooooo…? It's the first album of new material in fifteen years! I did not know that! It seems a lot of people are putting out/have put out new albums after many years of not putting out a new album. Who knew? It's shocking. No, it's exciting!!! Also, I read that Barry Gibb… Finally a name I recognize! is putting out a new album. I just don't know when it's going to come out. Also guess what else? I have to guess? Wait, don't tell me, don't tell me…. I'm usually very good at this….
Um….Does it have anything to do with ducks? Oh, The Rolling Stones! That would have been my second guess! …are planning a new album. I think it's all rumors at the moment. Oh great! So now we're passing on rumors! Our column has hit a new low. I'm waiting for more confirmation. I want a new Stones album. Is this going to cost me money? Of course if they tour behind the album it would probably cost millions of dollars. I don't think I can afford to buy a Stones album for millions of dollars! No, the tour, Mom! I really want to see the Rolling Stones one day. Shana, if you'll remember, the last time you tried to buy tickets for a Rolling Stones concert, you stood in line for hours, and they were all sold out! Don't remind me, it's not fair!!! Wah!!!! Well anyway, Don Henley of the Eagles and James Taylor are going to be on tv to do a PBS music show called Austin City Limits. Now that's something we can afford–a tv show! Anyway, it's about that time of year again…my favorite tv shows are coming back for the fall! The Big Bang Theory has its season 9 premiere on the 21st and that new Muppet tv show premieres on the 22nd. YAAAAY!! Spare me. I thought you liked The Muppets! I did, when you were 5! Mom, in case you haven't noticed, the Muppets are for the Ages! There's a new South Park this Wednesday (I can't believe it's been renewed up to season 23.) The Simpsons has been renewed for seasons 27 and 28 and Harry Shearer is back as the voice of Mr. Burns. Grey's Anatomy will come back for season 12, though I don't know if I will watch it but I probably will. I feel like it should've ended already. My favorite vampire shows won't be back until October. How tragic. Can we start the quiz now? What is the quiz? Claire's daughter gets a virus–suddenly. Is that going to be the title? Good idea, Shana, what would the Quiz be without you? Well, obviously, it would be shorter. So let's bring in Claire, shall we? And now, heeeeeeeeeeere's Claire!
Strange “acute” happened last night, Elaine. Everyone was fine, “Matilda” got emotionally upset about something and wandered off to another part of the house. Then she came back to me and wanted a hug; she sat quietly for a while as I was doing something, and then told me she all of a sudden had a headache! Because of her usual disposition, I thought I would try Pulsatilla because there were no other symptoms. It did nothing. Then, she said she still had a headache but now also felt nauseated! Her face felt a little warm. I gave Ipecac, which did nothing. Not sure what to do next; I asked her, “Did you have a little headache and it got bigger? Or did you just suddenly have a hurting head?” She said it was sudden. I checked her eyes. Pupils not dilated, no glassy appearance. Hands and feet not cold. She sat on my lap, seemed a little scared. I decided to give her _______________. Within a minute of a dry dose of 30c, she threw up. The nausea was now gone, and the headache was fading and gone entirely by a few minutes past the remedy; she was back to her usual playful, silly self, normal temperature as well. I don’t know if I stopped a virus from blooming, or what that was exactly, but, I was so happy to have resolved it in minutes! She is fine today, as is everyone else. OK, everybody! This is what homeopathy is all about, isn’t it? Seizing a threat and sending it on its way before it can do any damage? This is why we should all have a 30C Homeopathy Emergency Kit in our homes! What was the remedy Claire gave? Write to me at [email protected] and let me know! The answer will be in next month’s ezine. OK, see ya soon! Bye-bye! Elaine Lewis, D.Hom., C.Hom. Elaine takes online cases! Write to her at [email protected] Visit her website: elaineLewis.hpathy.com Elaine Lewis Elaine is a passionate homeopath, helping people offline as well as online. 
Contact her at [email protected] Elaine is a graduate of Robin Murphy's Hahnemann Academy of North America and author of many articles on homeopathy including her monthly feature in the Hpathy ezine, "The Quiz". Visit her website at: https://elainelewis.hpathy.com/ and TheSilhouettes.org

Shana Lewis
Shana spices up the Hpathy Quiz with her timely announcements and reviews on the latest in pop culture. Her vast knowledge of music before her time has inspired the nickname: "Shanapedia"!
Rumour - Josh Morris
Reading, QPR and Millwall are reportedly after Scunthorpe winger Josh Morris, 27
Josh can also play left side of defence and central midfield
He's also the guy that scored THAT goal at Plymouth on the last day of the season.
https://www.iron-bru.co.uk/millwall-rea ... sh-morris/

Re: Rumour - Josh Morris
Sutekh Reading, QPR and Millwall are reportedly after Scunthorpe winger Josh Morris, 27
Presumably if he is a winger then "left side of defence" means left-back as opposed to left-sided centre-back? Good to see that we might be looking at a new left-back, which I feel is important for the make-up of the first team, although personally I would like to see us going for an orthodox left-back rather than a player who "can play" there. Not sure why (injuries?) but he only played in 19 league games last season for a poor Scunthorpe side but scored 5 and had 6 assists, which leads me to think that he is more of a winger than a left-back, but presumably he is naturally left-footed, which is a good thing.

Maybe getting a winger "who can play" left back would be a short term thing until it's clear if Obita is going to come back anything like the player he was? (i.e., he'd be mostly a winger, but can cover left back until Obita is back - rather than buying a left back who then might be surplus to requirements further down the line)

He is more of an attacking mid I gather. Broke his leg last year hence the smaller number of appearances. Very good record the two previous seasons. On paper would look a good signing

by Nameless » 15 May 2019 13:38
Hound He is more of an attacking mid I gather. Broke his leg last year hence the smaller number of appearances. Very good record the two previous seasons.
Why would anyone choose us over Millwall and QPR though? Would signing him suggest we're cashing in on Barrow? Where would it leave Richards and Blackett?
Wycombe Royal Location: The posh part of Winnersh by Wycombe Royal » 15 May 2019 13:42 muirinho until it's clear if Obita is going to come back anything like the player he was? Saw him at Sainsbury's the other day and it still looked like he was limping to me. Why would anyone choose Millwall or QPR over us? I hate to write a player off based on hearsay and gossip - but, it's not looking good, is it? Such an innocuous-looking injury, too. by Basildon » 15 May 2019 14:02 Not sure if serious? No it’s worrying - though could be as a result of the last set of surgery. We’ll see if he is back for pre-season by Snowflake Royal » 15 May 2019 17:46 Really? QPR finished the season in free fall and Millwall were shit too. I'd argue we've got more championship potential than Millwall and no more chance of relegation. And are at least broadly on a par with QPR. Vic, had a quick look at their games this season and didn't see where Morris lined up in defense at all, either at left back or left wing back (which isn't to say he can't, but just saying). Scunny usually employed a traditional Back 4, with either Borthwick-Jackson (on loan from Man Yoo) or James Perch at LB. +1 to that, plus there are all the off-field issues with QPR. linkenholtroyal Location: anywhere but where you want me by linkenholtroyal » 22 May 2019 13:13 Well this one doesn't seem to be going away.
All India Radio – Contribution of Pandit Deendayal Upadhyay in Indian Politics
Search 9th October here http://www.newsonair.com/Main_Audio_Bulletins_Search.aspx

2016 is the birth centenary year of Deendayal Upadhyay. On his birth anniversary, a rally was held in Calicut, as it was in Calicut in 1967 that Pandit Deendayal Upadhyay had assumed charge as President of the Bhartiya Jan Sangh. On his birthday, the PM launched a series of works dedicated to him. He also launched a compendium of 15 volumes dedicated to the life of Deendayal Upadhyay ji.

Pandit Deendayal Upadhyay's political ideologies
The key element of his political thought was humanism. His thoughts remain relevant in the circumstances of India's national life today. He was a political leader, but more than that, he was a fundamental political thinker. India's independence in 1947 was political independence, but Pandit Upadhyay is one of those thinkers in India who worked for a Swaraj of ideas, meaning the decolonisation of ideas, i.e. the decolonisation of Indian minds. India was free politically, but ideologically a colonial hangover remained. His relevance lies in the fact that, in political, social and cultural discourse, he introduced basic concepts of Indian philosophy. For example, he propounded in 1950 that there should not be artificial differences between left and right; this concept is irrelevant for India. In 2016, in Latin America and the EU, political thinkers are deliberating that left-right distinctions are artificial and damaging to political discourse. He conceptualised that politics cannot be freed from ethics. Deendayal Upadhyay was also known for his organisational skills, as after the death of Shyama Prasad Mukherjee he managed the Bhartiya Jan Sangh for 15 years.

Doctrine of integral humanism
According to Upadhyay ji, Integral Humanism is different from western ideologies. Most western ideologies are based on materialism.
They emphasise development in economic terms, and eventually every individual is treated as an economic man; his social contacts, his cultural milieu and his special bent of mind are ignored in this theory. Economics without ethics and political discourse without morality are creating a crisis in society. Therefore he propounded that every economic theory and policy should be framed in the context of specialism, local tradition, and the nature and temperament of the people. In Indian thought, he said, dharm, kaam, arth and moksh are all important. If there is balance among them, there is social equilibrium. Dharma and religion are different in the Indian context. Dharma relates more to the morality of a person in individual and collective life; it is less about religion. Religion in western countries, by contrast, is more concerned with sects, and there is a difference between sects and dharma. No society can live without dharma, but it can live without religion. Dharma is above religion. On this basis, he propounded Integral Humanism, which means that an individual's development should take place in all four areas: dharm, kaam, arth and moksh.

Alternative to the Congress
In 1960, Deendayal Upadhyay started a polarisation against the Congress. He actualised it by 1965, and by 1967 there was an anti-Congress regime. He is called the architect of the non-Congress movement, along with Ram Manohar Lohiya. In the 1967 election, for the first time after independence, non-Congress governments were formed in the Hindi belt of India. Thus, Deendayal Upadhyay paved the way for a non-Congress alternative in India. It was not opportunism: according to him, there should be diversity in democracy; 'one leader, one party, one policy' is detrimental to democracy. He believed in India's tradition and culture and was not against modern technology, but he wanted policies which suited Indian requirements and conditions. His approach was also constructive, but at the same time he was not soft when it came to principles.
For example, in Rajasthan he expelled 6 of the 8 Jan Sangh MLAs because they were opposing the Zamindari abolition act. For him, quality mattered more than quantity. He was a philosopher, journalist, sociologist, economist and thinker who worked dedicatedly for the organisation and with principles; for him, morality in public life was important. In the 1950s there was a proposal to merge the Jan Sangh with the Swatantra Party, the Hindu Mahasabha and the party of Ram Rajya Karpatri Maharaj, as these parties together constituted 16% of the vote. But Deendayal Upadhyay objected to the merger. The reason was that Shyama Prasad Mukherjee had asked the Hindu Mahasabha to open its doors to all religions, and it did not agree; hence Deendayal Upadhyay objected. Further, according to him, Ram Rajya Karpatri Maharaj's organisation was run from palaces, which was not acceptable to Deendayal Upadhyay in politics. He believed in purity in politics and in principle; this is the difference between contemporary politics and Upadhyay. He sacrificed a Lok Sabha seat for values in politics. His message of casteless, communalism-free, value-based politics should be spread across the political parties; it is because of this value-based politics that the Jan Sangh gained credibility. The present government follows his ideal through last-mile delivery: Sabka Sath Sabka Vikas, development for all. Deendayal Upadhyay also talked about cottage industries and village-based industries through which people could be self-reliant, which is reflected in Gandhiji's philosophy too. He gave three cardinal principles for Indian politics: Decentralisation, which is basic to the Indian republic, hence village-centric development, with agriculture given prime importance. Diversity in social and cultural ideas, not an environment of uniformity; because he followed this principle, he appealed to most of the population. Decentralised planning, a bottom-up approach, so that real needs can be known.
These ideas need to be adopted in a new context, because the new political discourse poses a threat to culture, society and community life. This is why Deendayal Upadhyay is all the more relevant in the neo-liberal era. He practised what he preached, and today's contemporary politics needs to learn that too. Connecting the dots: Who was Pandit Deendayal Upadhyay? How did he contribute towards ethics in politics? Elucidate. What is the meaning of Integral Humanism? Explain.
iheartsubtitles 4:00 pm on October 25, 2015
Tags: Access, ATVOD, Campaigns, Ofcom, UK

Earlier this year Action on Hearing Loss created a survey to gather the experiences and thoughts of subtitle users on access to video on demand (VOD) services in the UK. The results have been published, and the report, titled Progress on Pause, is well worth a read.
It's part of the #SubtitleIt campaign, a joint effort from multiple UK charities that wants to see VOD accessibility regulated as a mandatory requirement, as is currently the case in the UK for linear TV. The efforts of individual advocates alongside the charities have resulted in some success so far, including a statement from Sky committing to increase their on-demand subtitled content. The campaign is far from over, however. For any legislation to become a reality, it needs support from MPs. The deadline to ask your MP to back the bill is November 2015, so if you do not see your local MP listed here, please write to them asking them to back the bill. (Note this also covers other important accessibility features such as audio description and signing.) Please don't miss the deadline.

From 1st January 2016, ATVOD will no longer exist as a co-regulator and its responsibilities will be carried out by Ofcom. In other regulatory news that affects VOD services in the UK, the regulator Ofcom announced that it would take over the role of ATVOD:

The regulation of 'video-on-demand' programme services is being brought fully within Ofcom to sit alongside its regulation of broadcast content. The move follows an Ofcom review to ensure regulation of broadcast and on-demand content remains as effective and efficient as possible for the benefit of consumers, audiences and industry. The review included the current co-regulatory arrangements for video-on-demand services. These can include catch-up TV and on-demand services on the TV and the internet. Ofcom designated the Authority for Television On Demand (ATVOD) in 2010 as a co-regulator to take the lead in regulating editorial content for video-on-demand services. Following the review, Ofcom has decided that acting as sole regulator for video-on-demand programmes is a more effective model for the future than having two separate bodies carrying out this work.
This will create operational efficiencies and allow editorial content on video-on-demand to sit alongside Ofcom's existing regulation of broadcasting.

SOURCE: Ofcom brings regulation of 'video-on-demand' in-house

This (in my opinion) is good news. It means a far less confusing regulatory model, and that all TV will sit under the same regulator. Ofcom recently published its report on access services on UK TV for the first six months of 2015. With Ofcom taking over the duties of ATVOD in 2016, wouldn't it be great if we could have the same level of transparency on how each VOD service is performing, with more regular (and legally required) statistical reporting on levels of access services here too?

virginia 12:15 am on January 4, 2016
I've read a few of your posts as I have been researching 'live captions' for news websites. The live captions to display on a screen would read a bit gobbledygook, if you know what I mean? So they are in a flat text file, like the image you show in the post below, "How subtitles add value, not just access", about Video Metadata. Any thoughts on how best to display this for news stories and videos that have not been captioned properly? I was thinking of an accordion drop-down. Obviously this is a big SEO plus if it can be done correctly. The video content has to remain on the actual news website.

iheartsubtitles 12:09 pm on January 6, 2016
Hi Virginia, are you talking about live news being captioned online when it is no longer live, or are you looking to improve live captioning of live news? If so, this is something that is in its infancy compared to other media that is captioned or subtitled online.

iheartsubtitles 10:04 pm on April 10, 2015
Tags: Access, BSL, Music, Stenography

Last month I did something I've never done before, and I don't think many others will have done it either. What was it?
I attended a live music gig with live subtitles! The gig was called Club Attitude. It was organised by Attitude is Everything, and the live subtitling was provided by StageTEXT. Having been to several StageTEXT-captioned plays and live-subtitled talks, I was pretty confident that the quality of the live subtitles would be excellent. But I also know that high-quality subtitling doesn't just happen without a lot of prep, a lot of technical set-up, and of course skilled subtitlers. I am sure this gig had its challenges, especially considering it hadn't been done before, but I was really pleased to see that even for this first ever subtitled gig, the access worked well. I felt for the stenographer, wearing headphones and listening intently in order to deliver the lyrics accurately and on time in what was already a musically noisy environment. Talk about powers of concentration! The subtitles were displayed on both sides of the stage: on a high screen on the right, so that it could still be seen at the back of the venue (as per the Vine above), and on a low screen on the left, in case wheelchair users also wanted to read the captions throughout the gig. I should also point out there was a signer on stage translating the lyrics into BSL for BSL users. None of this got in the way of the band members performing. It was lovely to see that full access had been thought of and was indeed being provided, including an accessible venue (if only this were the norm, I wouldn't even point it out in a review like this, but sadly it is not always the case). I'd love to have known what the artists performing at the gig thought of the live subtitles (although they cannot really see them from their position on the stage). But if they are reading this article, or any other bands who might be thinking about captioning or subtitling their gigs are, an overlooked but massive benefit isn't just the lyrics.
I shall try to explain: because the subtitling provided at this gig was live, the dialogue and conversation the bands had with the audience was also subtitled. I am talking about the intros and chat between songs: "Hello everyone, thanks for coming," etc. That might not seem important, but what if you happen to be talking to the audience about where they can buy your music or your merchandise? Ordinarily this information is lost on me. The number of gigs I've been to where I can enjoy the music (because I've listened to the songs over and over and looked up the lyrics on the internet) but cannot understand any of the talking is, well, pretty much all of them, without a hearing friend confirming what's being said. Even if I am close to the stage, I can't lip-read you; your microphone is in the way. And this means you've lost communication with me, and a connection. What I often hear is something like, "And so fdfgddfas this is our next song that dfawesfasdf and its called dfaefavdfa." What this means is that I never catch the song title, so if I like the song, I can't go home, search for the title online, listen to it again and, you know, maybe buy it! So, we know live subtitling of music can be done, so why isn't it done more often? I do hope we have got rid of the misconception that deaf and hard of hearing people are not music lovers. I can relate to an awful lot written in this great article from @ItsThatDeafGuy, especially the bit about getting the lyrics from Smash Hits magazine and subtitled music on TV: Being Deaf Doesn't Mean You Don't Care About Music. I too have blogged several times on this subject, including my frustration that music DVDs seem to be exempt from requiring subtitles, and how having access to subtitled music via TV was hugely important to me as a teenager. And it still is. Search the music tag for more articles. And who doesn't love knowing what the lyrics are?
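There is a small technical point hiding in all this: timed lyrics are, in effect, a subtitle track, and a subtitle track is searchable text. As a purely illustrative sketch (the cue data and function name below are invented for this example, not taken from any real gig or product), here is how a timed text track would let someone find the exact moment a line was spoken or sung:

```python
# Illustrative sketch: a timed lyric/caption track is just a list of
# (start, end, text) cues, which makes it trivially searchable.
# All cue data here is invented for demonstration.

def find_line(cues, query):
    """Return (start_time, text) for every cue whose text contains the query."""
    return [(start, text) for start, _end, text in cues
            if query.lower() in text.lower()]

lyric_cues = [
    ("00:00:12", "00:00:16", "Hello everyone, thanks for coming"),
    ("00:01:03", "00:01:08", "This next song is called Captions at Midnight"),
    ("00:01:09", "00:01:14", "Captions at midnight, words in the air"),
]

# Find the between-song chat where the song title was announced.
for start, text in find_line(lyric_cues, "next song"):
    print(start, text)
```

This is exactly the information that gets lost when stage chat goes unsubtitled: with a timed transcript, the announced song title is recoverable after the event.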
The way we consume music has changed drastically in the last 20 years, and technology is providing new ways to get the lyrics. Recently the music streaming service Spotify launched lyrics integration, and the company has been retweeting the positive feedback it is getting about it:

Just lovin' @spotify lyrics integration thanks to @musixmatch. Well done to @maxciociola and the team 🙂 pic.twitter.com/lgKwfnxUuz — Giuliano Iacobelli (@Giuliano84) April 7, 2015

@Spotify I've been waiting for something like this feature for YEARS. thank you. http://t.co/JEamacymlX pic.twitter.com/uGRik9dsAt — Marc Haumann (@marchaumann) April 5, 2015

I also can't help but notice that the trend of official lyric videos being released by music artists isn't going away. And that's just fine by me, because a probably unintentional side effect is that it gives me access to the song and allows me to consume the music in my preferred way: by reading the lyrics alongside listening to the song. Arena and stadium artists have started to incorporate this into some of their video-screen stage graphics during concerts. And naturally I love this. Given all of these trends, maybe this reviewer of Club Attitude is right: "Perhaps the most extraordinary thing is that this gig night does not feel extra-ordinary at all." Now that would be something.

Victoria O'Hara 5:47 pm on August 30, 2015
Afternoon. I am working on a research proposal, and I was wondering if there was any way that I could ask you a few questions about closed captioning in the UK?

iheartsubtitles 3:43 pm on September 7, 2015
Hi Victoria, I have sent you an email.
iheartsubtitles 3:13 pm on February 19, 2015
Tags: Access, accessibility, ASR, Campaigns, captions, Online Media, YouTube

Last month some high-profile vloggers on the popular video sharing site YouTube, including Rikki Poynter and Tyler Oakley, got the attention of some mainstream press with a campaign that started with the hashtag #withcaptions. It's fantastic to see others campaigning and educating their audience as to the importance of not just captioning your online videos but captioning them accurately. I won't repeat what the mainstream media coverage reported, but if you missed it or have no idea what I am talking about, click on the links below:

ABC News: Hard of Hearing YouTube star campaigns for better closed captioning.
Upworthy: Pretty much a no brainer. So why do so many people with brains forget to add this to their videos?
BBC Newsbeat: Tyler Oakley adds subtitles after vlogger campaign.

To anyone who accurately captions their online videos: good job. Thank you. It is so refreshing to get some positive mainstream press coverage about the importance of subtitling, and it's even more brilliant that the message is being spread by individuals outside of the subtitling, captioning or SEO industries. To all of you doing this, or who perhaps have acted on this information and are now accurately captioning your own YouTube videos: a massive thank you from me. As most of you reading should already know, YouTube uses automatic speech recognition (ASR) technology to automatically create captions from the audio track of video content uploaded to its site, but these are very rarely, if ever, accurate. But what if you could fix these to make them accurate, rather than have to start from scratch to create accurate captions? That's exactly what Michael Lockrey, who refers to these as 'craptions', aims to solve with nomoreCRAPTIONS.
As Lockrey explains:

nomoreCRAPTIONS is a free, open source solution that enables any YouTube video with an automatic captioning ('craptioning') track to be fixed within the browser. Craptions is the name coined by me for Google YouTube's automatic craptioning, as they don't provide any accessibility outcomes for people who rely on captioning unless they are reviewed and corrected. As this rarely happens, and as Google rarely explains that they haven't really "fixed" the captioning accessibility issue, we have a huge web accessibility problem where most online videos are uncaptioned (or only craptioned, which is just as poor as no captioning at all). If you don't believe me, then look at Google YouTube's own actions in this space. The fact that they don't even bother to index the automatic craptioning speaks volumes, as their robots hunt down pretty much everything that moves on the internet. So it's obvious from these actions that they don't place any value in them at all when they are left unmodified by content creators. There is also no way to watch the automatic craptioning on an iOS device (such as an iPhone or iPad) at present, unless you use the nomoreCRAPTIONS tool.

Lockrey, who is profoundly deaf, has taught himself web development skills to solve a problem that he feels Google (YouTube's owner) has largely ignored. This hasn't been easy, as although there's a huge amount of learning material on YouTube and other platforms, most of it is uncaptioned or craptioned. Lockrey explains:

Previously, if I encountered yet another YouTube video that was uncaptioned or craptioned, I would often spend my own money and invest personal resources (my own time and effort, etc.) in obtaining a transcript and/or a timed-text caption file. This usually also involved taking a copy of the YouTube video and then re-uploading it onto my own YouTube channel so I could add the accessibility layer (i.e. good quality captioning).
Quite often I would end up being blacklisted by Google YouTube's automated copyright systems, when I was only trying to access content that was freely and publicly made available by the content creators on YouTube and was not trying to earn revenue from the content (via ads) or any "funny" business. I knew that there simply had to be a better way.

nomoreCRAPTIONS lets you edit YouTube's auto-captioning errors. With nomoreCRAPTIONS you simply paste in a YouTube URL or video ID and it instantly provides you with an individual web page for that video where you can go through and fix up the automatic craptioning (where an automatic craptioning track is available). At the moment it's a very simple interface, and it is ideal for shorter YouTube videos of 4 or 5 minutes in duration (or less). It works in all languages that Google supports on YouTube with automatic craptioning. Here's an example of the Kim Kardashian Super Bowl commercial, which is very short and sweet. You can modify the text of the auto-captions to correct any errors via the yellow box on the right. Lockrey explains:

There's very little learning curve involved, and this was intentional: whilst Amara and DotSub have great solutions in this space, they also have quite a substantial learning curve, and I wanted to make it as easy as possible for anyone to just hop on and do the right thing.

One of the biggest advantages of the tool is that the corrected captions can be viewed immediately once you have saved them. This means it's possible for a Deaf person to watch a hearing person fix up the craptions on a video over their shoulder and see the edits in real time! We've even had a few universities using the tool, as there's so much learning content on YouTube, and this is simply the easiest way for them to ensure that an accessible version is made available to the students who need captioning, without wasting time on copyright shenanigans.
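To picture what correcting craptions amounts to, you can think of it as a find-and-replace pass over timed cues: the timings stay, the mis-recognised text gets fixed. The sketch below is a hedged illustration of that idea only, not nomoreCRAPTIONS' actual code or YouTube's caption API, and the cue data and corrections are invented:

```python
# Hedged sketch of a caption-correction pass: apply known (wrong -> right)
# fixes to each cue's text while leaving the timings untouched.
# This is an illustration of the concept, not the tool's implementation.

def fix_cues(cues, corrections):
    """Return cues with each correction applied to the cue text."""
    fixed = []
    for start, end, text in cues:
        for wrong, right in corrections.items():
            text = text.replace(wrong, right)
        fixed.append((start, end, text))
    return fixed

auto_cues = [
    ("00:00:01,000", "00:00:03,500", "welcome to know more crap shins"),
    ("00:00:03,500", "00:00:06,000", "today we fix the auto captions"),
]
corrections = {"know more crap shins": "nomoreCRAPTIONS"}

for start, end, text in fix_cues(auto_cues, corrections):
    print(f"{start} --> {end}  {text}")
```

The point of the real tool, of course, is that a human reads every cue; a blind find-and-replace alone cannot catch the errors ASR actually makes.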
I've also been using it as a great advocacy tool: it's so easy to share corrected captions with the content creators now, and hopefully we can bridge that awareness gap that Google has allowed to fester since November 2009.

nomoreCRAPTIONS is still very much in the early development stage and there is more to come. The next steps are a partnership with #FreeCodeCamp to help with rolling out improvements and new features in the very near future. This includes looking at other platforms such as Facebook and Vimeo as part of the next tranche of upgrades, as more and more platforms cross over to HTML5 video. Lockrey is keen to get as much user feedback as possible, so what are you waiting for? Try the tool for yourself. For more information please contact @mlockrey. And when you've done that, you might also want to read: OMG! I just found out there's only 5% captioning* on YouTube.

iheartsubtitles 4:54 pm on December 22, 2014
Tags: Access, accessibility, Creative Subtitling, Ofcom, Opinion, production, Translation, VOD

Accessible film making, or: what if subtitles were part of the programme?

I was prompted to write this blog post by a recent tweet from director Samuel Dore, who bemoaned the fact that film directors and distributors seem to 'moan' about the cost of subtitling content:

Distributors moan about the time & money it takes to create subtitles when this means they get a few more people to watch / buy their films. — Samuel Dore (@Bursteardrum) November 17, 2014

@iheartsubtitles I read some articles about film makers moaning about the time and effort it takes to make subtitles 1/2

@iheartsubtitles Which was daft as the point of making films is to show it to as many people as possible, subtitles is a powerful way 2/2

And I've seen tweets from others with comments of a similar nature.
This is a tricky topic, because it would be wrong to label every individual or company out there as having this belief or attitude. However, it's another repeated theme I've seen discussed at access and language conferences this year. That's a good thing: it means it's recognised as a potential issue for some companies or individuals, and others in the same industry are challenging this assumption and trying to change it. At the 2014 CSI Accessibility Conference, Screen Subtitling's John Birch asked the question, "What if subtitles were part of the programme?" He pointed out that, in his opinion, funding issues are still not addressed. Subtitling is still not a part of the production process and is not often budgeted for. Broadcasters are required to pay subtitling companies, and subtitling companies are under continued pressure (presumably to provide more for less money). It is a sad fact that subtitling is not ascribed the value it deserves. I would also argue that there is some lost opportunity with the current Ofcom Code on Television Access Services, which gives new TV channels a one-year grace period in which, regardless of audience reach, a TV channel less than one year old is not required to subtitle/caption any volume of its output at all. Whilst I understand that the cost of doing so might be considered a barrier to even launching the channel in the first place, the problem is that it again promotes an attitude of not budgeting for subtitling/captioning from the start of the business process. So two or three years down the line, when the grace period is over, the risk is that it becomes an additional cost that the channel has not budgeted for, and could be perceived as a hindrance or 'punishment' rather than something positive that adds value for the channel and its viewers. The same is also true for translation subtitling.
At the 2014 Languages & The Media Conference, Pablo Romero-Fresco gave this statistic: subtitling and translation account for 57% of the revenue generated by English-speaking movies, yet translation subtitling gets only 0.1% of budgets. He argued that there needs to be a change in the production process of filmmaking. His suggestion is that film production should recognise and create the role of Producer of Accessibility, who is involved before the final edit is locked.

Sherlock – text message – on-screen typography

He observed that in recent years text and typography effects like those seen in the BBC's Sherlock and Netflix's House of Cards (and many, many more), which use text on screen as part of the storytelling and are part of the post-production process, should also be integrated into this role. I too have observed the increase in recent years of using typography on screen as part of the storytelling process. It's also being widely used in music videos. For lots of examples of kinetic typography, be sure to check out this Vimeo channel. Romero repeated this vision and idea at the Future of Subtitling Conference 2014. You can read more in-depth information in the Journal of Specialised Translation. I've also collated further tweets and information on this topic at Storify: Why subtitles should be part of the production process. I think it's a really interesting idea. I also think that it will require a monumental shift for this to happen in the industry, but never say never. What is good is that collaboration of a sort is certainly happening between broadcast TV production companies and subtitling companies. Information and scripts are shared well in advance so that subtitlers can prepare as much as possible ahead of broadcasts. Clearly, Romero's vision is much more integrated than that. Currently, for broadcast TV licensed under Ofcom, the responsibility for access and the provision of subtitling lies with the broadcaster/TV channel.
If the creation of subtitles and captions were implemented wholly into the production process, should the provision of subtitling then lie solely with the production company? At the moment it would appear that the responsibility shifts between the two depending on a number of factors:

Regulation, if there is any, and who is considered responsible for providing subtitles.
The production company and/or the distribution company making the content (some will provide subtitles, some will not, and a broadcaster may have bought programmes from either one of these, or they may be one and the same thing).
The country broadcasting the content (what language do you need subtitles in, and how many languages will a production company be prepared to produce?).
The method by which content is viewed (digital TV, satellite, cable, online, download, streaming subscription, pay-per-view).

It really shouldn't be complicated, but there is no denying that with all these variables it is. A lot of the above is complicated further by distribution rights, which is another topic entirely. I do like the idea a lot, though, as it has the potential to simplify some of the above. I also think production companies would benefit greatly from the knowledge and expertise gained over years of experience by translation and subtitling companies as to the best methods of achieving collaboration and integration. What do you think?

Claude Almansi 11:08 pm on December 22, 2014
Thank you, Dawn: so many creative proposals in your post. It reminded me of a tutorial that Roberto Ellero made for the Italian public administration in 2009, entitled rather sternly, due to the target audience, "Accessibilità e qualità dei contenuti audiovisivi" (Accessibility and quality of audiovisual content).
It's in https://www.youtube.com/watch?v=wy34n09tvKo , with Italian captions and English subtitles (1). I think you might agree with the part from 1:47: "Every audiovisual product begins with a text, a script, a storyboard, some writing geared towards visualization, which then gets enacted in a series of frames and sequences. Every video always starts from a text and returns to a text (a book, being read, generates images in our mind, and the reverse path leads to audio description, which, in turn, is also a text)…"

(1) Apologies for the typos in the English subs: I translated them on a train journey with TextEdit and sent them from a station where I got a wireless connection: he needed them urgently for some talk he was to give the following day 🙂

Tags: Access, Metadata, Netflix, SEO, Speech Recognition, STT, subtitles, VOD

How subtitles add value, not just access

The added value of subtitles and captions has been a repeated theme at various conferences I have attended over the last few years, and it's one of my favourite topics, so I am going to write another blog post about it now. Why? Because I believe the value of captioning will continue to be tapped into by companies. A lot are already doing this. Diana Sánchez from Ericsson (formerly Red Bee Media) gave a great presentation at the Languages & The Media 2014 Conference that detailed the areas in which subtitles add value. In summary, subtitles and captioning can add value in four areas:

Communication (events: presentations, conferences, web calls, conference calls; improving language skills).
Learning (including second-language learning and multi-sensory learning).
Speech Recognition (algorithms and probability can be based on phonemes, and probability can be skewed using a specific engine, e.g. for a genre, by feeding in audio and the correct answer.
Subtitles = an accurate transcript.)
Video Metadata (timed text = video metadata; why is video so important? increased views = increased revenues). A subtitle file is valuable text-based information about video content.

You can read more from Diana on this topic at the Ericsson blog. Netflix were in attendance at Languages & The Media 2014 and are already captioning content on their service, and I believe all of the above are reasons why they are doing this. (Yes, I know they were also sued by the NAD in 2011, but what I am getting at with this blog post is that the added value of captioning is a set of positive reasons why companies should be captioning more content regardless of regulatory requirements.) Netflix's competitors are starting to realise this too. This is of course a good thing for end users who also need captioning for access. You don't need to look hard to find information on the multiple ways in which text adds value to video content. I think we will start to see VOD providers add captioning to their services not only to improve access but to help improve video search functionality. Take a look at this article and video demonstration of interactive transcripts and it's easy to see why this capability could be extremely useful for any end user of video content, allowing them to find and view the content they are interested in much more quickly and precisely. That's a better user experience for all.

iheartsubtitles 10:08 pm on December 3, 2014
Tags: Access, Connected TV, Europe, Opinion, Technology

Access 2020 – Languages & The Media 2014

Access 2020 was an interesting panel hosted by Alex Varley at the 10th Languages & The Media conference. The theme was for the panel to discuss what they thought media access might look like in 2020.
Although it is difficult to summarise all of the discussions, Media Access Australia have written a summary of 20 highlights. Below are my two cents.

Broadcasters have to start thinking about what their role is. The industry still needs content producers, and broadcasters are likely to continue to play a big role in producing content. There is likely to be a merging of broadcast and IPTV. In Europe, there is a keen focus on developing in the areas of Machine Translation (MT), User Experience (UX) and Big Data. Subtitling is becoming a language technology business rather than an editorial one. Greater levels of interest and innovation in technology will lead to greater quality and lower cost. The industry is aiming for interoperability by 2020 (if not before) to ensure no technological barriers to access exist. Two interesting ideas/questions were raised: will access services start to become part of the production process for audio-visual content, and will we start to see closed signing? How to achieve all of this: talk to end users more, and deal with the complexity (interoperability). Different jobs will be created by new technology, but we still need humans to provide access. Regulators are not always the answer and can get it wrong; target the businesses to provide access. Personally, I'm still waiting for the hoverboard.

iheartsubtitles 12:19 pm on June 27, 2014
Tags: Access, ATVOD, Formats, Ofcom, Opinion, Respeaking, TV, UK, VOD

CSI TV Accessibility Conference 2014 – Live subtitling, VOD key themes

Earlier this month the CSI TV Accessibility Conference 2014 took place in London. I had hoped to be able to give a more detailed write-up with a bit of help from the transcript of the live captioning that covered the event, but I'm afraid my own notes are all I have, so I will summarise some of the interesting points made that I think will be of interest to readers here.
It will not cover all of the presentations but it does cover the majority.

i2 Media Research gave some statistics surrounding UK TV viewing and the opportunities that exist in TV accessibility. Firstly, TV viewing is higher in the older and disabled population. And with an ageing UK population, the audience requiring accessibility features for TV is only going to increase.

Andrew Lambourne, Business Director for Screen Subtitling Systems, had an interesting title to his presentation: "What if subtitles were part of the programme?" In his years of working in the subtitling industry he questioned why we are still asking the same questions year after year. The questions surround the measurement of subtitling quality, and whether there is incentive to provide great subtitling coverage for children. He pointed out that in his opinion funding issues are still not addressed. Subtitling is still not a part of the production process and not often budgeted for. Broadcasters are required to pay subtitling companies, and subtitling costs are under continued pressure (presumably to provide more, for less money). It is a sad fact that subtitling is not ascribed the value it deserves. With regards to live subtitling, there is a need to educate the public as to why these errors occur. This was a repeated theme in a later presentation from Deluxe Media. It is one of the reasons I wrote the #subtitlefail! TV page on this blog.

Peter Bourton, head of TV Content Policy at Ofcom, gave an update and summary of the subtitling quality report which was recently published at the end of April. This is a continuing process and I'm looking forward to comparing the next report to this first one to see what changes and comparisons can be made. The presentation slides are available online.

Senior BBC R&D Engineer Mike Armstrong gave a presentation on his results from measuring live subtitling quality.
(This is different to the quantitative approach used by Pablo Romero and adopted by Ofcom to publish its reports.) What I found most interesting about this research is that the perception of quality by a user of subtitles is quite different depending on whether the audio is switched on whilst watching the subtitled content. Ultimately nearly everyone is watching TV with the audio switched on, and this research found that delay has a bigger impact on perception of quality than the impact of errors. The BBC R&D white paper is available online.

Live subtitling continued to be a talking point at the conference with a panel discussion titled "Improving subtitling". On the panel were Gareth Ford-Williams (BBC Future Media), Vanessa Furey (Action On Hearing Loss), Andrew Lambourne (Screen Subtitling Systems), and David Padmore (Red Bee Media). All panelists were encouraged that all parties – regulators, broadcasters, technology researchers – are working together to continually address subtitling issues. Developments in speech recognition technology used to produce live subtitles have moved towards language modelling to understand context better. The next generation of speech recognition tools such as Dragon have moved to phrase-by-phrase recognition rather than word-by-word (the hope being that this should reduce error rates). There was also positivity that there is now a greater interest in speech technology, which should lead to greater advancements over the coming years compared to the speed of technology improvements in the past.

With regards to accessibility and Video on Demand (VOD) services, it was the turn of the UK's Authority for Television On Demand (ATVOD) regulatory body to present. For those that are unaware, ATVOD regulate all VOD services operating in the UK except for BBC iPlayer, which is regulated under Ofcom.
In addition, because iTunes and Netflix operate from Luxembourg, their services, although available in the UK, are outside of the jurisdiction of ATVOD. There are no UK regulatory rules that say VOD providers must provide access services, but ATVOD have an access services working party group that encourages providers to do so, as well as drafting best practice guidelines. I cannot find anywhere on their website the results of a December 2013 survey looking at the statistics of how much VOD content is subtitled, signed, or audio described, which was mentioned in the presentation. If anyone else finds it, please comment below. However, in the meantime some of the statistics of this report can be found in Pete Johnson's presentation slides online. What has changed since 2012 is that this survey is now compulsory for providers to complete, to ensure the statistics accurately reflect the provision.

Another repeated theme, first mentioned in this presentation, is the complexity of the VOD distribution chain. It is very different for different companies, and the increasing number of devices on which we can choose to access our content also adds to the complexity. One of the key differences between VOD providers is end-to-end control. Few companies control the entire process, from purchasing and/or creating content for consumers right through to watching the content on a device. So who is responsible for a change or adaptation to a workflow to support accessible features, and who is going to pay for it?

I should also mention that the success of a recent campaign from hard of hearing subtitling advocates in getting Amazon to finally respond and say that they will start subtitling content was mentioned positively during this presentation. You may have read my previous blog post discussing my disappointment at the lack of response.
Since then, with the help of comedian Mark Thomas, who set up a stunt that involved putting posters up on the windows of Amazon UK's headquarters driving the message home, Amazon have committed to adding subtitles to their VOD service later this year. See the video below for the stunt. It is not subtitled, but there is no dialogue, just a music track. You can read more about this successful advocacy work on Limping Chicken's blog.

Susie Buckridge, Director of Product for YouView, gave a presentation on the accessibility features of the product, which are pretty impressive. Much of the focus was on access features for the visually impaired. She reminded the audience that creating an accessible platform actually creates a better user experience for everyone. You can view the presentation slides online.

Deluxe Media Europe gave a presentation that I think would be really useful for other audiences outside of those working in the industry. Stuart Campbell, Senior Live Operations Manager, and Margaret Lazenby, Head of Media Access Services, presented clear examples and explanations of the workflow involved in creating live subtitles via the process of respeaking for live television. Given the lack of understanding or coverage in mainstream media, this kind of information is greatly needed. This very point was also highlighted by the presenters. The presentation is not currently available online, but you can find information about live subtitling processes on this blog's #SubtitleFail TV page.

A later panel discussed VOD accessibility. The panelists acknowledged that the expectation of consumers is increasing, as is the volume and scale of complexity. It is hoped that the agreed common standard subtitle file format, EBU-TT, will resolve a lot of these issues. This was a format still being worked on when it was discussed at the 2012 conference, which you can read about on this blog. The UK DPP earlier this year also published updated common standard subtitles guidelines.
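For readers who have never seen one, here is a rough idea of what a document in this family of timed-text formats looks like. This is a minimal, illustrative Python sketch only: EBU-TT is a constrained profile of W3C TTML, this toy output is not spec-compliant, and the cue data is invented.

```python
# Illustrative sketch: serialise subtitle cues into a minimal TTML-style
# document. EBU-TT is a constrained profile of W3C TTML; this toy output
# is NOT spec-compliant EBU-TT, it only shows the general shape.
import xml.etree.ElementTree as ET

TTML_NS = "http://www.w3.org/ns/ttml"

def cues_to_ttml(cues):
    """cues: list of (begin, end, text) tuples with clock-time strings."""
    ET.register_namespace("", TTML_NS)  # make TTML the default namespace
    tt = ET.Element(f"{{{TTML_NS}}}tt")
    body = ET.SubElement(tt, f"{{{TTML_NS}}}body")
    div = ET.SubElement(body, f"{{{TTML_NS}}}div")
    for begin, end, text in cues:
        # each cue becomes a <p> with begin/end timing attributes
        p = ET.SubElement(div, f"{{{TTML_NS}}}p", begin=begin, end=end)
        p.text = text
    return ET.tostring(tt, encoding="unicode")

cues = [
    ("00:00:01.000", "00:00:03.500", "Subtitles are timed text."),
    ("00:00:04.000", "00:00:06.000", "Timed text is video metadata."),
]
print(cues_to_ttml(cues))
```

Because every cue carries machine-readable timing, a single file in a common format like this can drive broadcast subtitles, online players and archives alike, which is exactly why an agreed interchange format matters.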
Were any of my readers at the conference? What did you think? And please do comment if you think I have missed anything important to highlight.

peterprovins 4:48 pm on July 21, 2014: Interesting blog. No excuse for TV, film, website or even theatre not to be captioned… we do it all. Currently captioning university lectures and looking at doctors' surgeries, which are currently limited to BSL only. Keep up the good work.

iheartsubtitles 12:59 pm on August 5, 2013 | Tags: Access, BSL, Movies, Online Media, Streaming Content, TV, UK, VOD

Q&A with Films14 Director Shaun Sadlier

A Video On Demand service fully subtitled from launch, with the aim of eventually also providing fully BSL-signed movies. Imagine that? Well, one business entrepreneur, Shaun Sadlier, is planning to do just that through Films14. Read the Q&A from Shaun below and watch the video for more information:

Q: Your service is called Films14. Is there a story behind the name?
A: I was looking for a name which it is easy to remember and maximum is 7 letters or numbers, films is what we provide and 14 references 2014 when we want to launch.

Q: You are based in the UK but the internet is global. Can anyone sign up to Films14 or is it UK residents only?
A: That's correct, we are global brand but we start out in UK and if it goes well then we will expand across the world. Anyone can sign up but it is for UK residents only. If I found anyone who aren't UK residents then they have to wait for us to come over.

Q: Can you reveal what content there will be available to watch?
A: We've got two types of content, Subscription and On Demands. There will be 50+ movies / TV shows in the first month and additional 50 or more on every month for Subscription.
There will be 60+ blockbusters movies every year for On Demands.

Q: The subscription content – does that cost extra to access it in addition to the monthly fee? Or does the monthly fee give you access to the subscription content?
A: No, it will not cost extra. It is a monthly fee to access subscription and discount blockbuster movie from On Demand.

Q: Are there any benefits to signing up in advance of the Films14 launch?
A: Yes, there is a benefit.
1. £4.99 for first month and then £6.99 monthly
2. Access to subscription movie's and TV series (50+ Movie's & TV Series addition every month)
3. Discount Blockbusters movie's On Demands (60+ New movie's in a year)
4. Can cancel membership after first month
5. Pay nothing until launch
6. 100% Subtitles and In-vision signer for sign language (On and Off feature!) – World first!
7. Mystery Gift on the Launch day for Pre-Launch membership only

About the Mystery Gift:
1. If we get over 20,000 UK residents sign up before launch then Pre-Launch membership will get £4.99 monthly for life.
2. If we get over 50,000 UK residents sign up before launch then Pre-Launch membership will get £3.99 monthly for life.
3. If we get over 150,000 UK residents sign up before launch then Pre-Launch membership will get £2.99 monthly for life.

Q: How is this service funded?
A: This service will be funded by crowdfunding and then membership sign up on the first month of launch. Our Seed Enterprise Investment Scheme and Enterprise Investment Scheme are currently pending which take up 4 to 6 weeks.

Q: How will the subtitles be provided, are you creating them?
A: Our content distributors provides movies with subtitles included. I won't accept any movies or TV show without subtitles available because in my view, it is pieces of junk.

Q: How will the BSL be provided, are you creating them?
A: I have a studio which I can use and hire professional BSL signer's but it will take lots of time to edit them therefore I am looking around for a professional company that can offer a good deal.

Q: Will all content released on the website have subtitles and BSL immediately?
A: All will have subtitles immediately and BSL will start out with a few titles because it is very expensive and it is new technology. Eventually, all movies will have Sign Language included. That's our mission.

Q: What are the challenges you are facing in getting this service up and running?
A: The most challenging is to get as many subscriber's as possible to cover the costs and in-vision signer features. I am very confident it will go OK.

Q: Will you be able to watch the content on all internet enabled devices or desktop and laptops only?
A: It will work on Playstation 3, Wii, iPad and any devices with an internet connection and screen because we are going to use HTML5 video player.

Q: What can readers do to help get the service up and running?
A: Readers can help us to find weakness in our services and sign up please.

Q: What is your favourite subtitled content?
A: 100% Subtitles with options of size, colour and background colour to suit their need. I don't have a favourite subtitled movie because I love so many movie's so it is very difficult to choose. But I mostly watch Sci-fi, Horror, Thriller, Adventure and Drama. Sometime Comedy.

Q: What is your favourite BSL content?
A: In-vision signer with on and off feature. We are going to start with British Sign Language and when we expand to USA we will put in America Sign Language. American's are excited and want us to come over, even Australia as well! I don't have a favourite British Sign Language movie because I haven't seen one yet considering we don't get 24/7 access to entertainment and currently it is very limited access.
When I heard about a movie with in-vision signer on TV, they normally show these at 2am in the morning which it is frustrating for us. And, some BSL TV series are shown on PC or Laptop which is limited devices. Therefore, our company is 24/7 access, you can watch anytime, anywhere and any devices with internet connection and screen. It will also be the fastest way to watch movies.

Q: Why do you think current content providers are so slow at providing access?
A: They don't think how important about our access need because they don't see how we feel after all these years. I feel so frustrated to have limited access to entertainment and it is getting worse. So, here I am.

Q: Is there anything else you'd like readers to know about Films14?
A: Films14 is Deaf-led company and we know what we need to access the enjoyment of movies and TV Series. Also, we are world first to have sign language with on and off features. Just like subtitles.

Shaun Sadlier, Films14

Shaun has already made a BSL signed and subtitled video explaining the service, which you can watch on the Films14 website or below:

iheartsubtitles 10:33 am on April 28, 2013 | Tags: Access, Cinema, Movies, Technology, UK

History of subtitling and cinema in the UK

The film industry is forever devising new ways to capitalise on technological advancements to attract audiences. But back in the 1920s, and on the verge of going bust, Sam Warner, co-founder (with brothers Harry, Albert and Jack) of small studio Warner Bros., introduced some fancy tech that, with the help of jazz singer Al Jolson, unintentionally alienated many film fans for the next 75 years. Before the Movietone sound-on-film system became the industry standard, the short-lived Vitaphone sound-on-disc system was the most hi-tech audio product available.
Originally intended to cut the costs of live musicians, the 1.0 non-surround system was responsible for the innovative synchronized mix of Al Jolson's singing, dialogue and music for Warner Bros' The Jazz Singer (1927). Although it contained few spoken words, and played silently in many cinemas that had yet to be equipped for sound, The Jazz Singer launched the 'talkies' revolution, taking $3m box-office (spectacular in those days), putting the US touring stage production of 'The Jazz Singer' out of business, and confirming its studio as a major player in Hollywood. (Sadly, just before the premiere, Sam Warner died of complications brought on by a sinus infection. He was 40).

Jolson's next WB musical, 1928's 'The Singing Fool', was an even bigger success (almost $6m) and held the box office attendance record for 10 years (eventually broken by Disney's Snow White and the Seven Dwarfs). Jolson became America's most famous and highest-paid entertainer of the time.

So how exactly was the cinema experience ruined for many film fans? The end of the '20s signalled the end of the silent era as sound and dialogue in movies became standard practice. With 'talkies', the essential plot-following device – the caption card – was deemed no longer necessary. For people with hearing loss, a cinema visit was suddenly, if unintentionally, no longer enjoyable or accessible. By and large, they stopped going. For 75 years. A major step backwards for equality, inclusion and community integration. Which is all the more ironic as Thomas Edison, 'man of a thousand patents' and pioneer-creator of the first copyrighted film, was almost completely deaf from an early age. Without captions he wouldn't have been able to follow many of the new 'talkies'.
I often wonder what Edison and Alexander Graham Bell, the two inventors responsible for introducing many of the film, sound and light technologies we take for granted today, would have thought of this 'talkies' development, as they chatted over their latest inventions with Étienne-Jules Marey, who was a major influence on all pioneers of cinema, at the Centennial Exhibition in Philadelphia. Of course they could never have had such a discussion – Marey died 25 years before 'The Jazz Singer', Bell died 5 years before, and Edison 5 years after. (And, er, the exhibition was held half a century before the film, in 1876…)

But let's imagine they were all having a chat over a cappuccino, at the same exhibition, held just AFTER the film's release. I would expect that they would have been very disappointed at the demise of caption cards.

A few decades before the release of 'The Jazz Singer', Alexander Graham Bell, inventor of the telephone, created the Photophone – a device that enabled sound to be transmitted on a beam of light (the principle upon which today's laser and fiber optic communication systems are founded). Étienne-Jules Marey had combined a camera and a Gatling gun to create a mutant photographic machine-gun/steadicam device, capable of shooting 60fps (more than a century before James Cameron and Peter Jackson attempted HFR). Edison came up with the Kinetophone, the first attempt in history to record sound and moving image in synchronization.

All three pioneers were well aware of the importance of captions – words on screen (or a piece of cardboard). Edison – almost completely deaf from an early age – most likely wouldn't have liked the film. He hated jazz, preferring simple melodies and basic harmonies, very possibly due to his high-frequency hearing loss. Bell had founded and helped run a school for deaf children with his wife, who was also deaf. Caption cards were used to teach the deaf children reading and literacy skills. And Marey was a foreigner!
(It's well known that captions/subtitles are beneficial to students studying English as a Second Language).

Your Local Cinema – lists screenings of subtitled and audio described cinema across the UK

Fast forward to the end of the century, and reality, when caption cards were re-introduced to UK cinemas in the form of on-screen subtitles. Steven Spielberg, an early investor in the sound company Digital Theater Systems (DTS), championed its new cine audio format – a digital sound-on-disc system – and encouraged cinemas to install it ahead of his highly anticipated new release, Jurassic Park (1993). A decade later, DTS updated its (by now popular) system to include, alongside music and dialogue tracks, multi-language subtitles and a caption track, enabling cinemas to project synchronised captions directly on to cinema screens. Dolby launched a similar system soon afterwards.

Not long after that – probably feeling bad about the Al Jolson episode – cinemas across the UK collaborated with the UK Film Council to install this new 'access' technology. After 75 years, people with hearing loss could once again enjoy, rather than endure, the cinema experience. Hurrah! And, for the first time in the UK, people with sight loss could also enjoy it, as an audio description (AD) track – a recorded narration – could also be delivered to wireless headphones. Double hurrah! (But sadly, for people with loss of smell, things were not so good. 'Smell-O-Vision', introduced in the 1960s, just never caught on).

As before, Warner Bros. was at the forefront of this quiet revolution in cinema. The first film to utilise the new digital caption/subtitle/AD system was Harry Potter and the Philosopher's Stone (2001). (Steven Spielberg, having played his part in re-introducing captions to cinema audiences, had declined an offer to direct – he'd done enough). Today, another decade later, UK film distributors routinely ensure the provision of caption/subtitle/AD tracks for most popular titles.
More than 1,000 have been produced to date. Almost every UK cinema is now accessible in that all d-cinema systems have built-in 'access' facilities and can broadcast caption/subtitle/AD tracks. Every week hundreds of cinemas present a total of around 1,000 shows with on-screen captions. Thousands more shows are screened with audio description, received via personal headphones.

But as the number of shows and the audience have grown – by around 20% year-on-year – the current UK caption format has inevitably become problematic. Since captions in UK cinemas are on-screen, inconvenient and costly separate shows are necessary, segregating people and restricting the choice of films and showtimes that a cinema can provide. A limited audience, combined with limited opportunities to attend, ultimately results in limited box-office returns. For some time, the industry has wrestled with the conundrum of how to provide an economically viable service to people with hearing loss – how to get a good balance between what the public wants and what it's possible reasonably to provide.

Digital cinema brings with it digital participation – inclusion – which is just as important as digital infrastructures and digital content. For the UK film industry, a commitment to diversity and inclusion is not just a social and legal responsibility. It aims to ensure that cinema is accessible to all, regardless of age or ability, by understanding and catering for audiences with physical or sensory impairments, and their diverse technological needs.

The UK film industry is currently investigating recently-developed solutions that could improve the cinema experience further for people with hearing loss. For example, 'personal' inclusive caption/subtitle solutions are now available from Sony, Doremi and others that, instead of projecting captions on to the cinema screen, display them on wearable glasses or small, seat-mounted displays. So, any 'regular' cinema show could also be a captioned show.
These solutions are already being rolled out in the US and Australia. It's hoped that for audience members with hearing loss, as well as cinema exhibitors and film distributors, the convenience of a personal solution, and the vastly increased choice it can offer, will be more favourable than separate, inconvenient, costly on-screen captioned shows. It is hoped that within the next few years, audiences with hearing or sight loss will be able to enjoy the big-screen experience as never before. As Al Jolson (who really should be forgiven by now) famously said: "I tell yer, you ain't heard nothin' yet!"

With thanks to Your Local Cinema for this article. Posted with permission. Stay tuned for a follow-up post very shortly on subtitling technology for the cinema.

Mikel Recondo 2:18 pm on April 29, 2013: In Spain, there's a tradition of dubbing all foreign films into Spanish. It dates back to the dictatorship of Franco, which in 1940 established that all movies should be dubbed into Spanish. Then the dictatorship ended and some cinemas chose not to dub the movies and run them in their original languages with subtitles. Nowadays, these are the only cinemas that I know of that offer any kind of accessibility services.

iheartsubtitles 2:29 pm on April 29, 2013: Thanks for the info Mikel.

markbutterworth 7:58 pm on June 30, 2014: Reblogged this on Mark Butterworth learning journey BSL level 3 and commented: History of Subtitles

iheartsubtitles 8:58 am on April 24, 2013 | Tags: Access, Fun, Stenography, Theatre, UK

Festival Of The Spoken Nerd – subtitled comedy

Last week I attended an event captioned by STAGETEXT, but this time it wasn't a play but live comedy.
Consequently, rather than scripted and cued captions being used, the comedy event, called Festival Of The Spoken Nerd, was captioned live by a stenographer. To get an idea of the comedy show's style, watch this clip:

What was great about the event was that there was very much an element of audience participation, both on stage and through the use of smart phones and Twitter. I think it is the first time I have ever been in a theatre and been encouraged to keep my mobile phone switched on and use it! As a result I was able to capture some great moments that were unique to this particular gig. Because it was captioned, the Festival Of The Spoken Nerd cast sometimes spoke about and interacted with the live captions appearing above their heads:

Speech to text comedy from @fotsn and @stagetext vine.co/v/bUheDtPUdDF — Dawn Jones (@iheartsubtitles) April 18, 2013

Later on in the show the stenographer, Kate, was made part of the show with the use of a video camera that recorded her typing away and displayed this on screen:

Stenography made part of the show by @fotsn #spokennerd vine.co/v/bUMrQOKgxFB

It was such a refreshing change to see technology used for access being celebrated and then integrated into the show. There were no complaints; everyone in the audience thoroughly enjoyed it. Captioning aside, the show is both funny and fascinating. I've not seen anything like it before. This was the first comedy event I have ever had the pleasure of attending that was captioned live for the audience, and I certainly hope it is not the last. I would love to see more.

Caption users are needed for a STAGETEXT film. If you are available on May 7th and can get to London, why not help STAGETEXT promote the services it provides by taking part in the film.
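One last aside, tying back to the interactive transcripts mentioned at the top of this page: because every caption cue carries its own timestamps, a plain-text search over a subtitle file maps directly onto seek positions in the video. A minimal Python sketch of the idea (the SRT cues here are invented for illustration):

```python
# Minimal sketch: search an SRT-style transcript and return the start
# timestamp of every cue containing a keyword, so a player could jump
# straight to those moments. The cue data below is invented.
import re

SRT = """\
1
00:00:01,000 --> 00:00:03,500
Welcome to the subtitled show.

2
00:00:04,000 --> 00:00:07,250
Captions make video searchable.

3
00:00:08,000 --> 00:00:10,000
Searchable video is findable video.
"""

# one cue = start --> end, then the cue text, up to a blank line or EOF
CUE = re.compile(
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\n(.+?)(?:\n\n|\Z)",
    re.S,
)

def find_keyword(srt_text, keyword):
    """Return (start_time, cue_text) for every cue mentioning keyword."""
    hits = []
    for start, _end, text in CUE.findall(srt_text):
        if keyword.lower() in text.lower():
            hits.append((start, " ".join(text.split())))
    return hits

for start, text in find_keyword(SRT, "searchable"):
    print(start, text)
```

This is the whole trick behind interactive transcripts: the subtitle file doubles as a time-coded index of everything said in the video.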
Advance Your Ransomware Defenses
Ransomware isn't new. In fact, it's 30 years old. What IS new is ransomware's sudden rise as a favored attack by cyber criminals. Cyber crime has become a lucrative business and, unfortunately, ransomware has become an integral attack method that many organizations are fighting a losing battle against. ...

The Ominous Rise of "Island Hopping" & Counter Incident Response Continues
Cybercrime certainly isn't basketball — the stakes are higher, your jump shot doesn't matter — and yet the principle remains the same. As incident response (IR) teams and their vendors raise the defensive bar, adversaries adapt in kind. According to the world's leading IR professionals, ...

Rethinking endpoint and email security for the BYOD era
The attacks can and do hit organizations of all sizes and are only becoming more widespread and difficult to detect. The consumerization of IT and Bring Your Own Device (BYOD) only exacerbates the issue since companies now have many more endpoints to protect—many of which they don't own. In a survey of...
Published: Jul 18, 2013

Tips for a Successful PST Elimination Project
Download this white paper, specifically looking at PST elimination budgeting and justification, which will provide guidance across the steps we typically see customers taking to effectively budget and gain buy-in for PST elimination projects.

10 Essential Steps to Email Security
By following some basic principles, email can be allowed to move freely into, out of and around your enterprise while stopping the things that cause damage. Download this whitepaper now and discover the 10 essential steps to email security. Those responsible for these threats are getting increasingly sophisticated...

F500 Insurance Services: PST Enterprise Speeds Integration of Merged Companies
Download this white paper to see how the customer can utilize PST Enterprise to centralize email for their new end user base without interruption to the end users themselves.

eFax Corporate Overview
j2 Global provides industry leading Internet Fax Messaging solutions for global enterprises looking to streamline the exchange of business critical information and eliminate the costly infrastructure of in-house fax machines and servers. Learn more about eFax in this informative overview that will cover the benefits of...

The Move to Exchange 2013: Migraine or Migration?
Migration of your most precious business asset, that which is mission-critical for the success of the company on so many levels—your Exchange messaging solution—is not something to be taken lightly. Some call it migrating, or upgrading, or transitioning with a period of coexistence, but regardless of the...

The Total Economic Impact Of Mimecast's Unified Email Management (UEM) Solution
Mimecast delivers a software-as-a-service (SaaS)-based enterprise email management solution that complements the client's existing on-premises email infrastructure. It provides services including antispam, antimalware, archiving, eDiscovery, continuity, and policy management. For a more detailed overview of...

A Guide to Transactional Email
Whether we've realized it or not, everyone with an email account has received a transactional email, otherwise known as the alert that Bob accepted your friend request, the receipt for the days-of-the-week socks you just bought online, or even a welcome email for signing up for that new deal of the day website. ...

Mitigating Email Virus Attacks
Each day, over 100 billion corporate email messages are exchanged. With email at the heart of businesses, security is a top priority. Email-based threats are more organized, targeted, and dangerous than ever before and the costs of security breaches nullify any short-term savings gained from settling for basic protection. ...

OpenText Secure Mail: Email Encryption Simplified and Integrated
OpenText Secure Mail is a cloud-based secure messaging solution for encrypting, tracking, and preventing the leak and interception of confidential information.

Why Encrypt? Securing Email Without Compromising Communications
For many companies, data loss prevention (DLP) has, for too long, emphasised the management of internal data, blocking sensitive information from leaving company networks. But this is not a real world solution when email continues to be the main channel over which employees distribute and share what is often confidential...
ITF demands investigation into sequence of terminal deaths in Jakarta

By admin, December 14, 2017, in Insurance Marine News, Marine Liability, Political Risk, Credit & Finance

A worker has died in an accident at the Hutchison terminal in Jakarta, the fourth fatality in 15 months at the terminal, according to the International Transport Workers' Federation (ITF), which described the terminal as having an atrocious safety record.

Nova Hakim, chair of the Serikat Pekerja Jakarta International Container Terminal (SPJICT), said: "We are shocked and alarmed by the continuing carnage at Hutchison's terminal in Jakarta. Two workers have died within two months, and four within the past 15 months. This is an atrocious record that speaks for itself."

The ITF said that the worker died as the result of a fall. Both the ITF and SPJICT want Hutchison to conduct an official inquiry into the death. ITF president Paddy Crumlin said that "Hutchison needs to answer serious questions. Was this man provided with adequate fall protection? Was the outboard fencing on this vessel complete and compliant with international and class standards?"

Crumlin said that falls from height and falls overboard were "100% preventable" on a modern vessel. "When a person falls overboard, management are often quick to blame the worker. We need to dig deeper to find the root causes of this horrible tragedy," Crumlin said.
SAIC Announces Teaming Agreement with ST Kinetics and CMI Defence to Develop Ground Combat Vehicle Prototype. Company to develop a lightweight vehicle solution to meet the U.S. Army’s requirement for a new Mobile Protected Firepower capability. Thursday, October 5, 2017 4:15 pm EDT. RESTON, Va.--(BUSINESS WIRE)--Science Applications International Corp. (NYSE: SAIC) announced today that it will compete to rapidly develop combat vehicle prototypes to meet the U.S. Army’s need as part of the Mobile Protected Firepower (MPF) program. SAIC, together with ST Kinetics and CMI Defence, will develop and integrate a vehicle that offers the Army an innovative solution that provides infantry forces access to combat environments in 21st century operations. “As a systems integrator, SAIC can deliver an alternative option to the Army that brings together best-of-breed, non-developmental components to field a new combat vehicle quickly that meets critical requirements,” said Jim Scanlon, SAIC senior vice president and general manager of the Defense Systems Customer Group. 
“Rapid delivery of this MPF solution is essential to the Army and our solution is extremely well-positioned to meet these requirements and deliver a modernized vehicle to soldiers.” Based on ST Kinetics’ Next Generation Armored Fighting Vehicle (NGAFV) chassis and CMI Defence’s Cockerill Series 3105 turret currently in production, SAIC will compete for an Engineering and Manufacturing Development (EMD) contract to build prototypes that incorporate a lightweight combat vehicle design while still providing mobility and lethality for Army units. Such a vehicle will enable freedom of movement and action, specifically for restrictive, urban operations but tailorable for full-spectrum combat environments. “SAIC has developed a superior solution that integrates mature, currently produced offerings from our industry partners, ST Kinetics and CMI Defence. By marrying ST Kinetics’ chassis with CMI Defence’s turret, SAIC can deliver a reliable vehicle that gives soldiers a new capability in combat environments,” said Scanlon. “ST Kinetics is indeed honored to team up with SAIC again to participate in another major defense program in the U.S. Our NGAFV is an advanced system that is fully digitalized, highly mobile and developed to support networked knowledge-based warfighting. A fleet of seven prototypes had been developed and robustly tested over several years. As the NGAFV will be in production soon, this platform brings minimal technical risk and a robust supply chain to the MPF program,” said Dr. Lee Shiang Long, president of ST Kinetics. President of CMI Defence Jean-Luc Maurange added, “We are extremely proud to participate in the MPF program with SAIC, especially as this is the 200th anniversary of our company’s founding. Our highly innovative turret and gun solution is already qualified and in production, which translates into a high level of manufacturing readiness, low technical risk and ensures our ability to meet the compressed program schedule required by the U.S. 
Army.” SAIC’s entry into the MPF competition builds on continued momentum in combat vehicle modernization, to include the company’s recent collaboration with the Detroit Automotive Technologies Consortium (DATC) and the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) to assist in the development of the next-generation combat vehicle - experimental prototype (NGCV-EP). This recent success expands upon SAIC’s proven experience in modernizing combat and tactical vehicles including Mine-Resistant Ambush Protected (MRAP) vehicles for the Army, and Amphibious Combat Vehicles 1.1 (ACV) and Amphibious Assault Vehicles with Survivability Upgrades (AAV-SU) for the U.S. Marine Corps. SAIC is a premier technology integrator providing full life cycle services and solutions in the technical, engineering, intelligence, and enterprise information technology markets. SAIC is Redefining Ingenuity through its deep customer and domain knowledge to enable the delivery of systems engineering and integration offerings for large, complex projects. SAIC’s more than 15,000 employees are driven by integrity and mission focus to serve customers in the U.S. federal government. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $4.5 billion. For more information, visit saic.com. For ongoing news, please visit our newsroom. Certain statements in this announcement constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. These statements involve risks and uncertainties and a number of factors could cause our actual results, performance, achievements, or industry results to be very different from the results, performance, or achievements expressed or implied by such forward-looking statements. 
Some of these factors include, but are not limited to, the risk factors set forth in SAIC's Annual Report on Form 10-K and other such filings that SAIC makes with the SEC from time to time, which may be viewed or obtained through the Investor Relations section of our web site at www.saic.com. Due to such uncertainties and risks, readers are cautioned not to place undue reliance on such forward-looking statements, which speak only as of the date hereof. SAIC Media Contact: Lauren Presti, 703-676-8982, lauren.a.presti@saic.com
Business Analyst (US Solutions) ConsenSys is a venture production studio and the leading technology firm in blockchain globally. We deliver products, solutions and platforms built using blockchain technology to transform how business is done in a complex network of buyers, suppliers and consumers. Our teams are busy at work building the future of identity, financial markets, commerce, the music industry, security, infrastructure and more. To accomplish this we've built out a flat organizational structure which we call the ConsenSys Mesh: a network of individuals & teams working autonomously towards the same goal. Our mission is to use these decentralized solutions to fundamentally reshape the economic, social, and political operating systems of the planet. Are you passionate about decentralizing our future and taking control of how we evolve? Then join us! We are seeking passionate, determined and resilient individuals who thrive in a self-directed, collaborative culture. Our technology is transforming global society and humanity - we welcome you to this exciting opportunity. About Us - US Solutions ConsenSys Solutions is our strategy and corporate venturing unit that provides strategic advisory, ideation and use case designing, and new venture and platform development services to enterprise and government clients - helping them discover, explore and develop blockchain solutions. Our teams have led some of the most innovative blockchain production implementations in the world. We are busy building the future of identity, financial markets, commerce, security and infrastructure, and more. We help governments and enterprises deliver products, solutions and platforms built using blockchain technology to transform how business is done across complex networks of buyers, suppliers and consumers. We are literally shaping the future of decentralized technology, and we're looking for exceptional talent to help us change the world! 
Are you able to interpret existing business models and see where new technologies could be beneficial or disruptive? Could you explain how blockchain works to a C-level audience? Can you connect the dots in a different way to create genuinely new product or business opportunities? If you're someone that thrives in a highly ambiguous, lightning-fast-paced environment where being self-directed, determined and resilient are a key requirement, we would love to hear from you. An interest in blockchain technology and a broad skill set is important. You will be joining a diverse team of business and technology specialists working together to develop new blockchain solutions to help deliver next generation social, economic and political systems. As a Business Analyst for the Solutions Advisory team, you will be responsible for multiple workstreams, including, but not limited to, financial modeling, market research, and associated analytics. You will be comfortable working across a range of engagements and tasks - conducting strategic analysis, running client workshops or scrum teams - but will bring a venturing mindset and requisite skill set to deliver the highest quality outcomes at pace. 
Ideal Experience and Skills
- 1-3 years of experience as a strategy consultant from a top tier firm, or a similar role in a leading tech, PE, IB, or VC firm
- Experience delivering on client engagement teams
- Experience in venture creation, including work in due diligence, valuation, joint venture governance design, operating structures, negotiation, and term sheets
- Knowledge of and passion for cutting edge technologies/digital initiatives (blockchain/Ethereum knowledge)
- Experience leading and delivering multiple workstreams, including experience in business case development, financial modeling, due diligence, valuation, CEO-ready presentation design and story-lining
- Strong consulting skillset: hypothesis-led problem solving, framing and communicating complex ideas, thinking strategically
- Entrepreneurial, curious, creative, and have a track record for thinking innovatively
- Outstanding academic track record from a leading university
- Excellent verbal and written communication skills in English
- Comfortable in rapidly changing, highly ambiguous environments
- Self-accountable, with a drive for constant self improvement
- A knowledge sharer, always willing to help the wider team
Perks of the Mesh
- A dynamic startup environment. ConsenSys is a leader with vision in the blockchain space and we are absorbing a significant portion of the attention. This is both exciting and challenging, as we learn to scale our organization while adhering to the principles of decentralization.
- Decentralized culture. We offer a new way of structuring an organization: holacratic (non-hierarchical), meaning you are free from hierarchical restraints, working in an environment that truly encourages all levels of collaboration.
- Global reach. Connect with brilliant minds across 6 continents as we continue to grow and build a diversified culture.
Opportunities for the Decentralized Future
The forefront of a revolution. 
At ConsenSys we fundamentally believe that the next generation of technology presents an opportunity to craft a more just and equitable society. Continuous learning. You’ll be constantly exposed to new languages, frameworks and ideas from your peers and as you work on different projects -- challenging you to stay at the top of your game. Deep technical challenges. The entire ecosystem is about 10 years old. Ethereum itself is still a toddler. For these platforms to scale to the order of millions or billions of users we have much work ahead. The team at ConsenSys is building technology platforms that can get us to those next thresholds of scale. Entrepreneurial opportunities. We are always encouraging our community to push the boundaries with innovative dApps. ConsenSys is an equal opportunity employer. We encourage people from all backgrounds to apply. We are committed to ensuring that our technology is made available and accessible to everyone. All employment decisions are made without regard to race, color, national origin, ancestry, sex, gender, gender identity or expression, sexual orientation, age, genetic information, religion, disability, medical condition, pregnancy, marital status, family status, veteran status, or any other characteristic protected by law.
Tag Archives: CW

Comic-Con TV Roundup – July 25, 2016, by Nguber. Another San Diego Comic-Con came to an end this Sunday and it came with so much TV news and trailers. It seemed like TV was really the star of the con and for the first time overtook film at the big event. There were first looks at highly anticipated new series, teasers for upcoming seasons and news on some of our favorite long-running series. Take a look at all the trailers and enjoy. Legion – FX: FX’s series Legion is a new X-Men series from Marvel that had its coming-out party at Comic-Con this weekend with the release of its first trailer. The series is an origin story of David Haller, who comic book fans know as the mutant Legion. The series will premiere some time in 2017.

‘Supergirl’ Season 1 to Air on The CW, Ahead of Season 2 Premiere in October | TVLine – July 6, 2016, by Nguber. Supergirl will touch down on her new home sooner than planned, now that The CW has announced its plan to encore every episode from Season 1. Source: ‘Supergirl’ Season 1 to Air on The CW, Ahead of Season 2 Premiere in October | TVLine.

Nikita Final Season Premiere Date Announced – October 4, 2013, by Nguber. Who picked this premiere date? I mean seriously? The show is already getting cancelled; can we get some good dates so they at least have a chance to get watched? Why would they program it Nov 22 – Dec 27? Right smack in between the holidays. Ughh. Booo CW. Nikita‘s farewell season has an official launch date. 
The erstwhile Division team will kick off its six-episode final mission on Friday, Nov. 22 at 9/8c. The CW spy thriller’s fourth season will air over six straight weeks, leading up to a bound-to-be epic series finale on Dec. 27. The official description of the swan-song run reads as follows… Framed for assassinating the President at the end of last season, Nikita finds herself at the beginning of season four alone and on the run, hunted as the most wanted woman in the world. But when she follows up a lead that could clear her name, Nikita unexpectedly finds herself reunited with her old team. Forced to work together again, Nikita and her allies have to get past their emotional wounds in order to take down their nemesis, Amanda…

5 Shows To Watch This Fall. Fall première dates are coming up and there are too many things to watch. No worries. It was extremely difficult, but I have narrowed it down to the 5 shows that you must watch. 1. Brooklyn Nine-Nine (FOX) – Premieres Tuesday, September 17 @8:30/7:30c. I have a thing for André Braugher (Men Of A Certain Age, Gideon’s Crossing). I think he’s absolutely wonderful. So when I saw that this show got picked up by Fox, I was excited. The show is a comedy about how members of a Brooklyn squad deal with a new, very straight-laced and tightly wound police chief played by Braugher. The comedy for most of the show comes from the interplay between Braugher and the squad’s lead cop, played by Andy Samberg (Saturday Night Live), who just can’t seem to adjust. This show looks hilarious and I have read nothing but good things about it. Samberg had many great moments on SNL so he will bring the funny. I just really want them to let Braugher be great. I need him to be on a show that people give a chance. Hopefully this one is it. 2. The Michael J. Fox Show (NBC) – Premieres Thursday, September 26 @9/8c. Michael J. 
Fox hasn’t been the star of a show since he left Spin City in 2000 because of his Parkinson’s disease. He has done a lot of guest stints on different shows since then; my favorite being his arcs on The Good Wife. It’s nice that now he is returning to television full-time and in something that he is really good at, comedy. The new show emulates the actor’s life a bit in that it is about a news anchor at the top of his game who decided to leave his job to spend more time with his family. After being at home for 5 years, his family is ready for him to leave the house and return to work. The cast also includes Betsy Brandt (Breaking Bad) and Wendell Pierce (The Wire, Treme, Suits). I saw part of this pilot in one of my classes last semester and I really liked it, so I can’t wait to see the finished product.

CW 2013 Upfronts – May 16, 2013, by Nguber. The CW held their upfront presentation this morning and released their 2013 – 2014 schedule. They moved some returning shows around and have put together some themed nights. Mondays will be romance themed with Hart of Dixie and Beauty And The Beast, and Fridays will be fashion focused with The Carrie Diaries and America’s Next Top Model (Why is this show still on?). The network picked up 6 new series: The Originals, The Tomorrow People, The 100, Reign, Famous In 12, and Star-Crossed. Nikita was renewed for a fourth and final 6-episode season. It hasn’t been scheduled yet but it will probably come sometime in January. Here is the new schedule. 
There were some major pink slips handed out this past weekend, with most of them coming from NBC. The pic has all the shows that were cancelled for the 2012 – 2013 fall season (minus reality shows). Hannibal, which is still airing until June on NBC, still does not have a decision made on it, but it is more than likely done. Below is a breakdown of each cancellation by network. ABC: 666 Park Avenue, Body of Proof, Don’t Trust The B—- in Apartment 23, Family Tools, Happy Endings, How To Live With Your Parents, Last Resort, Malibu Country, Private Practice, Red Widow, Zero Hour. CBS: CSI:NY, Golden Boy, Made In Jersey, Partners, Rules of Engagement, Vegas. CW: 90210, Cult, Emily Owens, M.D., Gossip Girl. FOX: Ben And Kate, The Cleveland Show, Fringe, The Mob Doctor, Touch. NBC: 1600 Penn, 30 Rock, Animal Practice, Deception, Do No Harm, Go On, Guys With Kids, Whitney, The Office, Smash, Up All Night, The New Normal, Hannibal (more than likely but not cancelled as of now).

It’s that time of the year again. The upfronts are here. It’s the time when all the broadcast networks are going to unveil their fall schedules. We usually have to wait until the day of each network’s upfront to find out for sure what has been cancelled or renewed, but this year that news has been coming all week. I always get really excited about upfronts but I’m especially excited for this year’s because one of my projects for grad school was to play ABC exec and put together an upfront for class (shoutout to my group members Emma, Debbie and Yijing). It was soo fun so I’m looking forward to seeing how accurate we were with what got cancelled and what was picked up. 
Here is the schedule for when each broadcast network will unveil its fall lineup. NBC – Monday 5/13. Fox – Monday afternoon 5/13. ABC – Tuesday 5/14. CBS – Wed 5/15. CW – Thursday 5/16. A lot of cancellations and pickups have already been announced. For those, check out TVLine, The Wrap, Deadline, or Hollywood Reporter. I will post as each schedule becomes available. Stay tuned for updates.

Mid-Season New and Returning Shows (2013) – January 1, 2013, by Nguber. The holidays are here and all your shows are in reruns or got cancelled (I’ll miss you Emily Owens, M.D). I know you are pretty anxious for their return. Don’t fret, you can find every single show’s return date, as well as the new shows that will be premiering this Winter, at the bottom of this post. Shows return as early as today, the 1st of the year. Here are some of the returning and new shows I’m looking forward to. The Americans (FX) – Series premiere January 21 @ 10/9c. Keri Russell back on television?? YESSSSSSSSSSSSSSSSSSSSSS!!! And playing a bad ass?? Double YESSSSSSSSSSSSSSSSSSSS!!! FX has stepped up their game when it comes to their programming (Sons of Anarchy, Justified, It’s Always Sunny In Philadelphia, etc.) and this show looks like it will be no exception. This show, set in the Cold War era, stars Keri Russell (Felicity, Mission Impossible 3) and Matthew Rhys (Brothers & Sisters, The Edge of Love) as KGB officers who are working as sleeper agents trained to impersonate American citizens in Washington D.C. This is going to be great. Hopefully this show will do well and keep Russell on TV where she belongs. The Good Wife (CBS) – Season 4 resumes January 6th @ 10/9c. Season 4 hasn’t been the best, but it’s still been better than half the shows on television. The season we are in the middle of has been very hit and miss, mostly because everything is not in sync. 
The storyline with Kalinda (Archie Panjabi) and her husband, played by Marc Warren (Doctor Who, Mad Dogs), is one of the biggest missteps that has happened to this show. The show creators, Robert and Michelle King, told TV Guide recently that they are going to fix things. They stated: “You don’t give James Bond a girlfriend. Some characters you actually don’t want to see that much backstory. We’re adjusting. No matter where we went, this was not a place where the audience wanted to go.” Hopefully the ship gets righted with the rest of the season. Check out the promo for the next new episode, “Boom De Ya Da.”

What To Watch Tonight: October 19 – October 19, 2012, by Nguber. Nikita Season 3 premiere, Boss Season 2 finale, Hunted — What to Watch – TVLine: http://tvline.com/2012/10/19/nikita-season-3-premiere-boss-season-2-finale-what-to-watch/

What To Watch: October 16. 9/8c – ABC/NBC/CBS/FOX – Presidential Debate – The 2nd of 3 presidential debates will take the form of a town meeting, where citizens will ask questions of the candidates on foreign and domestic issues. Candidates each will have two minutes to respond, and an extra minute for the moderator to facilitate a discussion. The town meeting participants will be undecided voters selected by the Gallup Organization. This debate will be moderated by CNN’s Chief Political Correspondent, Candy Crowley. 9/8c – CW – Emily Owens, M.D – Series Premiere. 
Recent medical school graduate Emily Owens (Mamie Gummer) is now a first-year surgical intern at Denver Memorial Hospital and she is ready to leave her high school persona of geeky-girl-with-flop-sweats behind her. But Emily quickly realizes that a hospital is a lot like high school — cliques and all — when she is faced with her medical school crush Will Collins (Justin Hartley) and high school nemesis Cassandra Kopelson (Aja Naomi King). 9/8c – Style – Tia & Tamera – Season 2 Fall Premiere. Tamera reveals to Tia and her friends that she is having a baby boy. Now, Tamera must travel to New Orleans to film a new movie and is surprised by how difficult it is, especially with pregnancy brain causing her to forget her lines. While Tamera negotiates New Orleans, Tia receives some unsettling information about her future on her television series. 10/9c – USA – Covert Affairs – Season 3 Fall Premiere. Auggie, Joan and Arthur work with an old friend to bring Annie home from a Russian prison. 10/9c – Style – Chicagolicious – Season 1 Fall Premiere. Q struggles with her decision to leave AJ’s of Chicago – knowing bad news about the future of his product line could affect any plans to get back her job. When the team is hired to style a zoo fundraising campaign, everyone springs into action but creative differences soon cause problems for the shoot. Meanwhile, Austin becomes jealous of the time MaCray and Katrell spend together outside of the salon. 10/9c – MTV – Underemployed – Series Premiere. Underemployed follows a group of five friends – Sophia, Daphne, Lou, Raviva and Miles – who all believe they’re destined for greatness as they graduate from college and enter the real world. But what happens when complete world domination doesn’t go according to plan? The show picks up one year after pomp and circumstance, when reality has set in and the group struggles, often comically, to stay optimistic through dead-end jobs, terrible bosses, romantic mistakes and major life changes. 
11/10c – ION – Flashpoint – Season 5 Premiere. An elite police unit that specializes in high-risk critical incidents faces its toughest challenges yet. While trained in physical and emotional tactics to deal with extreme situations, they find themselves contemplating the very reasons they chose their paths as police officers. For Team One, the deeply personal journeys of the group pulsate through each episode.
Driver held after car hits 5 students on bicycles. Nov. 4, 2013 04:30 pm JST. Police said Monday they have arrested a 29-year-old man after the car he was driving hit a group of girls on their bicycles in Yachimata, Chiba Prefecture, on Sunday night. According to police, the girls were returning home from a local festival when the incident occurred just before 10 p.m. TBS quoted police as saying that a car driven by Tsuyoshi Morooka hit the group and then kept going. Four of the five girls were taken to hospital; three suffered light injuries, while a 13-year-old girl suffered a broken leg, police said. Police said Morooka had been at the same festival, where he had been drinking heavily. After the incident, he told police he drove home and remained in the car outside in order to sleep it off, TBS reported. His wife saw the damaged car with her husband asleep inside and notified police. © Japan Today

gogogo: I hate the hit and run mentality of Japan :( Nice wife... -5 ( +7 / -12 )

YongYang: @Yokatta, difficult to read if you are being sincere but I totally agree with your sentiment if you are in fact supporting her for reporting this loser for getting stupidly drunk then driving a vehicle on a public highway. I hope they throw the book, heavy and hard, at the selfish idiot. Get well soon girls.

Ah_so: Well done his wife. It was probably habitual behaviour on his part and she knew it was time to act.

gaijinfo: What an idiot. Glad none of the girls were seriously injured, although having a broken leg sucks. Hope he pays through the nose for her medical care.

smithinjapan: Don't know the character of the wife or the relationship, but she did the right thing. Glad none of the injuries were life-threatening, and hope this guy does a nice little stint in the slammer to 'sleep it off'.

smithinjapan: What really is crap about this, is that he'll say "he was drunk" so doesn't remember hitting them. 
Is very very sorry, will bow, maybe cry, give some presents to the girls and NOTHING WILL HAPPEN, or a suspended sentence and not allowed to drive for 6 months. The justice system in Japan is stupid, it never takes into account the seriousness of the crime at the time and only looks at how much remorse the perpetrator shows to the court and the victims AFTER THE FACT. See! This is what you get when you falsely believe heavy fines will deter drink drivers. There should have been police at every car park exit of the festival checking these fools and these kids would not be in hospital! I have no objections to living in a so-called 'police state' cos it is only the irresponsible rat bags that are penalized. Lift your game Japanese police! Enforce the law instead of advertising it!

Stephen Knight: Yacchimatta in Yachimata... :-(

Cortes Elijah: As much as a wife or husband's responsibility is to support them no matter what... I respect the wife for reporting this. Makes me wonder how many other drunk fools drive around these roads. Hope this guy gets a sharp and painful punishment. Next time I hope he hangs.

ControlFreak: "See! This is what you get when you falsely believe heavy fines will deter drink drivers." DUI accidents have been low for a very long time, even before the no tolerance policy. At this point you could threaten death and still not make a dent in the small percentage of DUI accidents. If you want to live in a police state, I am sure there is one that already exists that you could leave for and be happy. But we would break the bank enlisting enough cops to do as you suggest and we would be living in a much poorer police state. What I think you need to realize is that there are a small number of alcoholics who are simply not in control of themselves. I think its sad that those few idiots have become the poster boy for people like myself who used to drink and drive within previous legal limits just fine.

lostrune2: Yet another hit-and-run. Get well soon. 
Mirai Hayashi: Of course, he didn't remember a thing.. right?

oikawa: Damn, I was at this festival. Strange atmosphere. Lots of HS kids and younger drinking and smoking around the station, literally 3 or 4 metres away from a bunch of police. Nothing at all was done. Cars went past fast in a pedestrianised zone near lots of people wandering around the roads and a few half-hearted police whistles were heard. I'm not surprised to hear this news unfortunately.

BurakuminDes: Respect to the wife for doing the right thing. Her husband would have killed someone next time. I hope these poor girls make a full recovery. As for Mooroka - there are no excuses. Hitting five children and not caring about their welfare is unforgivable. Throw the book at him. 10 years inside and a lifetime ban from driving would be a bare minimum.

Elbuda Mexicano: Chiba?? Ah yes!! Many idiot drivers there! Good job on his wife!! I hope she divorced this idiot!!

jonobugs: The man driving was clearly at fault here, but I have to say that it is not difficult to imagine a person who was not intoxicated hitting a group of girls riding bicycles. I was riding my bicycle yesterday when I came upon a flock of girls who took up the whole road. There must have been 15 of them. The road we were on was incredibly windy and had mirrors which people sometimes rely on too much. A car going the opposite direction had to slam on his brakes when he saw them and only barely avoided a collision. I say barely because the girls almost ran into the car even though it had stopped. Sometimes when a group of students get together they completely ignore the rules of the road which is potentially lethal.

Scnadal.Lova: the hit and run thing is all i'm seeing these days. 
Anybody who thinks the # of drivers who drink & drive is low must live in the city, because once you're outside there are a great many who drink & drive in urban & rural areas – hell, the ramen joint 3min walk from my place I wud bet has 10-20 cars a nite drive away that should be calling cabs! DudeDeuce Since I am an outsider to this situation, I am glad he was arrested. For real though, how many of you support your wife calling the police if she finds you drunk in your car? This is without her knowing that you hit someone and her asking you what happened first. subyyaki I commend his wife, whether she likes her husband or not. kaimycahl I don't think the wife reported him because he was drunk; I think she did it because she wasn't aware of where the damage came from. whiskeysour Well, the police will let him go after he says "Gomenasai" too much, and then he'll beat his wife up. His wife should file for divorce!!! And disappear. Well done wife! At least someone in the family showed some brains. I agree that he should get jail time (hit & run) and lose his privilege of driving for a long enough time to prove beyond doubt that he has given up drinking (years if ever). His low moral character was shown by leaving the scene of an accident. He does not deserve the high quality wife that he has! VicMOsaka And, I bet the girls were riding 3 abreast at night with no lights on their bicycles. I don't condone what the man did, but people on bikes should obey traffic rules - for their own safety as well. I come across them all the time when I am driving and they just don't budge and make you pass on the other side of the road. On the other hand, most boys on their bikes ride in single file and pay more attention to the cars coming from the rear. quercetum Cars in Japan are way too close to where pedestrians walk. They were on bikes, but the space between cars and pedestrians can be only about 12 inches.
CrazyJoe By all means drunk driving must be eradicated, even if it means more police checkpoints. The fact that the wife reported it immediately to police without sheltering the husband is remarkable. Under Japanese law, what she did was quite logical. The penalties differ greatly depending on whether you turn yourself in or whether they have to seek you out. The household loses under any scenario, but this would be the least bad. Gobshite They were actually riding on the wrong side of the road, towards oncoming traffic. Stupid and dangerous, but that doesn't excuse the driver at all.
Andrew Linton / 12 Jul 3305 Linton Travel - Done Sleuthing part 8 <-- Previously Aware of the people in the hold and concerned about their comfort, Adalina touches Border Reiver gently down on Pad 3 of Diva Mines with great dexterity. She doesn't want to add to their complaints. After they drop down into the hangar, an army of Golden Hand associates gets to work dispersing the cargo. The silver goes off to market while the genuine slave cage is taken off to a separate holding area; the slaves will be sold or auctioned at the earliest opportunity – to avoid having to feed them. The other passengers – those from the Linton Travel Orcas – are huddled together on the hangar floor; they're surrounded by guards carrying tasers. "What next?" Adalina asks Cubik Splyne. "Am I allied now?" "Not quite," he says, and he quickly snatches her gun from its holster and points it at her. "Give me your communicator." Slowly, reluctantly, she hands it over and waits…here it comes…subterfuge revealed. "Andy, get out – I've got company," he quotes from the screen, then looks at her. "You should have deleted that message, Schmid. You're nowhere near devious enough to be one of us, though I freely admit you would have fooled some I know in the Coalition." "I…I don't know what you mean," Adalina says innocently, but the time for pretence has gone. "Come on, I knew all along you were faking – the clean ship, the pirate hair, the incredible backstory, the search for a heat dispersion plate so you could look around Robardin. It was all so blatantly false." Adalina looks at Border Reiver and wonders if she can cover the distance before she's stopped. He sees her looking. "Don't even think about it. Now fall in with your own kind," he says, nodding his head towards the tourists. Adalina does as he commands and joins the captives. She looks longingly at her ship wondering if she'll ever see her again. 
"Okay, move out," Splyne orders his soldiers and with various degrees of pushing, prodding, and low voltage tasering, the party moves out of the hangar. Like Andy before her, she doesn't know the route they take, even though she's been to Diva Mines many times before. If anyone on the outpost recognises what is happening they don't interfere; they've learnt the hard way to keep clear of Golden Hand operations. The slow procession eventually reaches that same remote service area where Cipher and Crumlin are waiting. Splyne taps out a complicated knock on the door which is known only to those in the faction – but could be deduced by anyone with a knowledge of Morse code as the letters: G…H…C. I come round trying to decide which is the greater pain, in the stomach or in the head. It feels like Crumlin's jab to the stomach has ruptured something internal, while Cipher's blow to the head has blurred my vision and given me a massive headache akin to migraine. I hear the knock on the door and look across the room to see Cipher opening up. Suddenly, the room is full of people – well dressed, affluent people, and I don't understand. "Move to the back," Cipher shouts, "Plenty more to come in yet." People shuffle and jostle and complain, but they do as they're told. I see the tall woman in the middle of the group and recognise her from the biography I looked up after reading the Galnet News article about her disappearance. It's Queen Lydia Hadro herself. I realise I'm looking at the tourists for whose safe passage I am responsible, and I wonder if it's best not to introduce myself in that capacity – they might have some idea it was I who planned their kidnapping. Last in, to my utter disappointment, I see Adalina being shoved through the door. She sees me too and her expression matches mine. She makes her way over to where I'm still shackled to some pipework. "It's all over then," she says, dispirited. "I'm scared, Andy." 
"At least we know the truth," I say, "though it's no consolation." "What will happen to us?" "That's rather up to Crumlin and her associates in Golden Hand." In Colonia Police Department's Precinct 5, Probationary Detective Martinsson hurries into the office of his supervisor. He's animated and enthusiastically smug. "You need to see this, boss," he says. "Tell me," Larsen says impatiently; she's weary of mentoring this particular junior officer. "Reports just in; a fleet of Orcas close to Dove Enigma interdicted and pirated a Hauler. The commander said they were in the livery of Linton Travel. It looks like Linton lied to us and his life of crime continues." "I thought we established that the video from the wedding barge was faked." "I never believed it. That friend of his, Vinny, is clever enough and devious enough to have faked the fake. But get this, boss, the Hauler pilot was Jensen Foote's brother. It looks to me like some kind of vendetta against the Foote family." "That's a huge leap, Detective," Larsen says, but she knows the procedure. "Issue an arrest warrant for Andrew Linton; we'll bring him in for questioning. Any idea where the Orcas are now?" "Authority ships arrived at the interdiction site, but all they picked up was a high-wake to Carcosa." "That'll be Robardin, won't it? Get on to them for a traffic report." "I'm on it, boss." Martinsson returns to his desk and prepares the arrest warrant. There's an optional checkbox marked 'Kill On Sight'. His finger hovers over it; surely his handlers would be pleased to be rid of Linton. The law, so far, hasn't been able to put Linton where he belongs – doing hard labour in a penal colony – so why not end it cleanly? He taps the checkbox and broadcasts the warrant. As for the traffic report request from Robardin Rock in Carcosa, he consciously 'forgets' to do it. Larsen has a heavy workload; she's still trying to find the perp of the wedding barge killing – and making little progress. 
She also has a missing queen to find and there's pressure on her from above to avoid a diplomatic incident. She's reading a report of a sighting of LT-O13 in Carcosa, which bears out Martinsson's news, but is otherwise perplexing – this is the Orca in which the queen was travelling. Detective Marie-Claire Millefeuille appears at the door of Larsen's office. "Pardon, Ma'am, two civilians here to see you – they say it's important – names of Ayr and Getty." Larsen recognises the names. "Show them in." After a long consultation, Larsen presses the intercom. "Get me a team of three SWATs, in here, now." While she waits, a message pops up on her screen; it asks her to confirm the kill order on Andrew Linton. Unknown to Martinsson – because he's a new recruit – such kill orders have to be confirmed by a high-ranking superintendent, and that has worked its way down the chain of command to Larsen. The SWAT team arrives and Larsen gives them an order. "I want you to arrest Probationary Detective Martinsson; there may be resistance, so use all necessary force." Jenna Crumlin is in quiet consultation with Vesto Cipher and Cubik Splyne. "Things haven't gone exactly to plan – we wanted to disgrace Linton and take away his livelihood – but I'm satisfied with the outcome. We have him captive and my new plan is to take him – and his hench – to Maia. Bill will want to determine their fate." "And our fee?" Splyne says. "You'll get that for all the mischief you caused and you can do what you like with these prisoners." "Are you leaving now?" Splyne says, his eyes wandering until they fall on Queen Hadro. "Yes, I'd like an escort to Robardin where my 'conda's parked." "I have an idea," Splyne says, "why don't you take the Python that's in hangar three? There's already a slave cage in the hold for your prisoners and you can leave the ship in Carcosa – that's where we'll keep it anyway." "I'll come with you," Cipher adds, "to keep an eye on Linton."
With weapon drawn, Cipher unshackles me and I rub my chafed wrist, trying to get the blood flowing. "Move," he orders, pushing me and then Adalina towards the door. Crumlin leads the way back to the docking area while Cipher follows behind, still with weapon drawn. We desperately look to the people we meet for help but their avoidance of eye contact is assiduous. They want no part of what's happening to us. At the hangar I glimpse a moment of optimism in Adalina's face when she sees her ship again. This fades as we are pushed into the slave cage that Golden Hand didn't bother to unload. Border Reiver lands once more at Robardin Rock in Carcosa. Vesto Cipher comes from the bridge down to the cargo bay and takes us from the slave cage at gunpoint. Adalina is despondent and I try to cheer her. "Don't worry, it's a long way to Maia – plenty of time for things to happen, and plenty of things that could happen." Cipher hears this and when Jenna Crumlin joins us on the hangar floor he says, "Bin thinkin'." "Oh," she says, and I can see she's suppressing her laughter and refraining from saying something sarcastic. Men are always attracted to her, appealing as she does on so many intellectual, emotional, and physical levels. Cipher seems to see this as encouragement. "Yeah, bin thinkin' about quitting Colonia and takin' a trip to the bubble." Crumlin is ahead of him and knows even what she's going to say after his next utterance – she's been here many times before. "And you'd like to come with me in the 'conda?" "That's it," he says enthusiastically, "I could keep an eye on the prisoners and we could get to know each other better." "You do know I'm gay, don't you? And if you touch me, you die." Cipher looks crestfallen but Crumlin shows no pity. "However, if you can agree to behave yourself," she says after a thoughtful pause, "I can see that it makes sense to have some support on the journey. 
Let's get our guests in their accommodation, then you go and buy what you need for the voyage. I'll start pre-flight checks on the ship – it's a long haul to Maia." Adalina and I are pushed unceremoniously into yet another slave cage, this time on Crumlin's Anaconda, Ellen. Crumlin throws in a quantity of food cartridges and bottles of water. "Make these last as far as Gandharvi," she says. "It's more than I usually give, but I want your nerve-endings to be in good condition – all the better to feel the pain that Bill Auer has lined up for you." We sit and wait in the half-light; there's nothing else we can do, and there may be several weeks of this to endure. If Crumlin's ship can jump fifty lightyears we'll have upwards of four hundred hyperspace jumps ahead of us to reach the bubble, and even more to reach Maia. "Let's not start yet," I say to Adalina, "but I would like to know your life story – where you grew up, how you got into spaceflight, how you got that scar on your cheek." She lets out a faint laugh, which is good to hear. "That won't take very long; I'll be done before we've left Eol Prou." "It'll be up to me, then, to keep you entertained." Anything so we don't think about what's at journey's end. Yet again I have contrived to put someone in danger who is wholly undeserving of the fate that awaits us in Maia. "All pre-flight checks complete," Crumlin says. "We're cleared to launch and the course is set for Caravanserai." "I'm not going to miss Odin's Crag Detention Centre," Cipher says, settling into the co-pilot's seat. "Let's go." Ellen lifts heavily off the pad in Robardin Rock and rises to the rotational axis. Pushing the throttle forwards, Crumlin takes the ship expertly out of the asteroid base and boosts away. She lines up for the first jump and punches in the command the second mass-lock is broken. The countdown completes and they make the transition into hyperspace.
Vinny Ayr is inside a large black container of the type used by security services as a mobile command and control centre. He sits in the pilot's seat of a generic mock-up of a flight deck. All of the standard controls of a spaceship are available to him in their usual place. He wears a virtual reality headset. He is flanked on his left by his wife, Tay Getty, and on his right by Detective Larsen of the Colonia Police Department. They too are wearing headsets and see what he sees. Ellen is in mid-hyperspace jump when the HUD goes black. "What the…" Crumlin says as she tries to switch between panels and flicks the control that toggles the HUD on and off several times. Everything is dark as they hurtle through witchspace. None of the controls work – though that's normal during a jump for the flight controls. "What's happening?" Cipher says. "I don't know…now let me concentrate. If we don't have control when we come out of witchspace, we're toast…and burnt toast at that." Vinny chuckles. "I have control of their ship. I could have left it to the software, but this is much more fun." In the headset he's seeing the cockpit of Crumlin's ship, but in his version the HUD is lit up and he has total control of all functionality. "It's like telepresence in a fighter, but in a ship instead. Hold on…coming out of witchspace now." Crumlin's heart is thumping fast as they arrive in a system. She yanks at the joystick but there's no response; she zeroes the throttle but they keep moving. The HUD is totally dark and the panel controls do nothing. "Get ready to die," she says to Cipher as the Anaconda noses towards the primary star. The fuel scoop at least is working – but they don't need extra fuel to get where they're pointing, which is straight at a huge flare arcing towards them from the star's fiery surface. "Enough," Tay says to Vinny. "Don't forget Andy and Adalina are on that ship."
Vinny pulls back on his joystick and Crumlin's ship, which is now only one thousand light-seconds away in the same system, turns away from Eol Prou LW-L c8-127 and away from danger. "Just having some fun," he says. "I don't get to fly a 'conda very often." "Can we bring them in now, please," Larsen says. "I won't be happy until Crumlin is under arrest." "It's more than that," Vinny says. "You'll get the whole gang and clear up half-a-dozen cases all at the same time. Switching to Autonomous Guidance, now." They take off their headsets. "They should be arriving in a few minutes," he says. "It was genius to use their own software against them – the same autonomous guidance they installed in Andy's fleet," Tay says, squeezing Vinny's arm. "Every time one of their ships attempts a hyperspace jump it will fly here to Odin's Crag and there's nothing they can do about it." Larsen takes a call. "Aha…I see…okay…good job." Vinny and Tay look at her enquiringly. "The tactical team on Diva Mines report that all of the Linton Travel tourists are freed and are safe and well, no casualties. The Golden Hand Coalition is decimated – but Cubik Splyne fled the scene." "How did you know where they were – the tourists?" Tay asks. "Apparently, one of the jewels in Queen Hadro's outfit has an emergency transmitter embedded in it. To save the battery it doesn't activate until it detects that it's in a non-anarchic system. It started transmitting as soon as it entered Trakath space. It took a while to triangulate the signal but as soon as she stopped moving we knew where she was to the nearest centimetre." "I know this system," Cipher says looking out of the cockpit, "and I know where we're going." Jenna Crumlin raises an arched eyebrow to make a question mark of her face. "Odin's Crag," he says, "Detention Centre." "How can this be?" Crumlin says, continuing to wrestle with the joystick. "Give it up," Cipher says bitterly. "They've won." 
The ship approaches the detention centre and even without a docking request – at least not one that they can see – the autodock system takes over and they touch down on Pad 1. The reception committee is a whole SWAT platoon, with Larsen and the megaship commander behind them. "Come out with your hands in the air," SWAT platoon leader, Sam Norton, calls through a megaphone. A gunshot echoes around the hangar and everyone is on high alert. "Stun grenades and flash bombs! We're going in!" Norton shouts, but before they get close to the ship Cipher calls out to them. "Hold your fire! We're coming out – peaceable." Cipher emerges with his long, strong arms locked around Crumlin's waist. Her own arms are trapped and she's writhing and screaming with frustration. Despite her martial arts skills, she cannot break free from his grip. "I told her not to do it and that it would go worse for her, but she insisted," he says. "Linton's been shot." Once Crumlin and Cipher are in custody, the commander steps forward and says to Cipher, "Vesto, what are we going to do with you?" "Usual cell?" Cipher says optimistically. "No, I think it will be something more permanent this time." Larsen's thoughts are elsewhere. "Medical team! On that ship! Now!" In the Jaques Station hospital I'm in a private room, propped up in bed by a mountain of carefully arranged pillows. Chief Surgeon, Helena Foxx-Sweeney, has just left after a post-operative visit and I feel like I've been visited by an angel, not sure if I'm alive or dead. My visitors arrive together. Vinny, Tay, Adalina, and Larsen all look at me with deep concern but I feel better than they think I look. My left shoulder is heavily bandaged and would be very painful were it not for the analgesic drip into my right arm. "That was close," Vinny says, "not far from your heart." "That's where Crumlin was aiming," I recall, "and if it hadn't been for Vesto Cipher tussling with her she wouldn't have missed from that range." 
"I told Detective Larsen about that," Adalina says. "But it won't make much difference to his sentencing," Larsen says. I groan. "That says a lot about how valuable I am – not," I say, managing a smile. "Your favourite detective, Martinsson, is also under arrest," Larsen says to bring me up to date. "He was working for the Golden Hand all along; only became a cop so he could act from the inside." "I got that full report from Kelsie Short about the Dolphin's black box," Vinny tells me. "I sifted through it and asked Kelsie for the raw data. I found a spurious transmission to a communicator owned by Martinsson that confirmed installation of their version of the guidance system. Detective Larsen, here, was quick to act when we showed her what we'd found." "Can't say how glad I am to be rid of him," Larsen says. "It looks like you're in the clear, Mr Linton. The wedding barge killings are solved – we found that Jenna Crumlin boarded your ship well before it departed and hid in that empty cabin that we quizzed you about." "And the CCTV isn't started until the ship leaves dock," I say, "so there was no sign of her until she committed the murders." "By which time the holographic version of you, overlaid onto her, was active. The CCTV saw only you." My mind is buzzing with the details and I'm feeling drowsy from the painkillers. "I think I'd like to sleep now." "Sure thing," Adalina says. "But I'm looking forward to that chat you had planned for our journey to Maia." "Oh yes…soon," I say. "By the way, I like your new hair colour; sunny yellow suits you." They leave me and I drift in and out of sleep. I reflect that maybe Linton Travel is finished as a lifestyle. I recognise that I haven't done with sleuthing because, actually, the life of a private investigator makes me feel the most alive, the most connected. I realise, of course, that I'm not a first-rate detective and it's only my friends and associates that lift me from third-rate to second. 
Friendship – that's what it's all about; like that wise old French aviator said long ago on Earth: Il n'y a qu'un luxe véritable, et c'est celui des relations humaines. (There is only one true luxury, and that is human relationships.) CMDR Andrew Linton Freelancer / Explorer
InDaily – Adelaide's independent news

Crunching the numbers on SA's high electricity prices

South Australia has set its energy sights on a renewable future but, asks Richard Blandy, at what cost?

Richard Blandy | Tuesday January 19, 2016

On Christmas Day, according to the average price tables published by the Australian Energy Market Operator (AEMO), the Regional Reference Price (average spot price) for a megawatt hour of electricity in South Australia was $91.67. The corresponding prices in New South Wales, Victoria and Queensland were $37.33, $20.38 and $36.20. The average daily spot price for a megawatt hour of electricity in December 2015 was $62.19 in South Australia, $43.37 in New South Wales, $46.84 in Victoria and $42.08 in Queensland.
On December 17, the average spot price for a megawatt hour of electricity in South Australia was $259.59, while on December 26 it was only $5.06. It is clear that South Australia has the most expensive and most variable power on the eastern states grid. The reason for the high (and extremely variable) price of electricity in South Australia is our very high dependence on solar and wind generation compared with the other states. This results from the rapid expansion of renewable energy generation in South Australia. According to a Deloitte Access Economics study recently released by the Energy Supply Association of Australia, South Australia's solar and wind generation capacity per head of population is already more than three times that of any other state or territory. A new Climate Change Strategy for South Australia was released by Premier Jay Weatherill and Minister for Climate Change, Ian Hunter, on November 29. The strategy was conveniently (if implausibly) rebadged as an economic development initiative. In it they said: "To realise the benefits, we need to be bold. That is why we have said that by 2050 our state will have net zero emissions. We want to send a clear signal to businesses around the world: if you want to innovate, if you want to perfect low carbon technologies necessary to halt global warming – come to South Australia. South Australia can be a low carbon electricity powerhouse. We have the ability to produce almost all of our energy from clean and renewable sources and export this energy to the rest of Australia." But people want electricity to be available when they want it, and for it to stay on, with a steady current, while they want it – not just when the wind is blowing or the sun is shining. The trouble with solar and wind generation is that it only generates electricity intermittently. Covering this intermittency is expensive in terms of idling standby plant.
Generators with the required flexibility (peaking generators using natural gas) produce expensive electricity, but are becoming more and more needed as the penetration of wind and solar in our energy generation mix increases. This is why electricity prices have risen in South Australia. Wind farms and other renewable-energy generators also undercut the prices offered by efficient, base-load, coal and gas power plants, because they receive guaranteed, non-market, returns from selling Generation Certificates to electricity retailers under the Commonwealth Government’s Renewable Energy Target (RET) Scheme. Under RET, electricity retailers must buy enough certificates to demonstrate their compliance with the RET scheme’s ever-increasing annual targets. The revenue earned by each wind farm from the sale of certificates is additional to the revenue received, if any, from its sale of electricity to the electricity market. The yearly RET targets imply significant annual investment in wind farms, while the sale of certificates to retailers is designed to guarantee a return to wind farms sufficient to justify the required investment, irrespective of the return they receive from actually selling electricity to the market. Well done, wind farm lobby. If sales of electricity are growing only slowly (as they are in South Australia’s slow-growing economy), the subsidised market share of wind farms and other renewables will rise and the sale of electricity from conventional base-load power plants will fall. At some point the coal and gas-fired conventional power plants will become unable to contribute towards their fixed costs, and they will go out of business. This is what has happened in South Australia. But this is the whole point of renewables in climate change terms – to knock off CO2-producing coal and gas-fired power plants, thereby helping to save the planet from climate change. 
The Port Augusta power station is closing because of Commonwealth and South Australian Government policy to expand renewable energy generation. This is not an accident. To save the planet, it was always intended to have this effect, but maybe not next year. Leigh Creek is shutting down as an unintended consequence. Pelican Point has been mothballed and Torrens Island is also slated for closure. If the demand for electricity is low – on a public holiday, say – while the wind is blowing and the sun is shining, the price of electricity in South Australia will be low. Conventional generators will make losses, while the market losses of the renewable generators will be covered by their sale of Generation Certificates. If the demand for electricity is high – a heat wave on a working day, say – and it is a still, overcast day, the price of electricity in South Australia will be high, because it will be mostly produced by high-cost, back-up, peaking generators. The high cost of maintaining back-up generation capacity (sufficient, essentially, to duplicate the generation capacity of the renewables) means that the average price of electricity produced in a system dominated by renewables will always be expensive without strong interconnection, such as in Denmark, to large, inexpensive, electricity-producing regions nearby, that produce most of their electricity from coal, gas or nuclear sources. We are not in that fortunate position. According to Deloitte, South Australia’s interconnectors with Victoria are able to supply only 23 per cent of South Australia’s peak demand (although their capacity is presently being increased). 
According to a report in the Australian Financial Review in December, South Australian Treasurer and Energy Minister Tom Koutsantonis called a meeting of energy users and suppliers to deal with the sharp rises and falls in wholesale electricity prices that, in particular, threaten the economics of the lead and zinc smelter at Port Pirie operated by Dutch company, Nyrstar. South Australian businesses face electricity prices in 2016-18 of between $87 and $90 per megawatt hour, compared with $37-$41 in Victoria and $43-$48 in New South Wales. South Australian irrigators are said to be facing electricity price increases of more than 100 per cent next year. According to the AFR, forward electricity prices in South Australia are far higher than when Nyrstar signed up in May. Further, the threat of disruption of supplies if the inter-connectors to Victoria fail, or become inadequate to meet the demand for electricity in South Australia on peak days, are of understandable concern to the company. Nyrstar is scheduled to start operations in mid-2016. Options for the Government to stop Nyrstar quitting all look expensive. In the short run, the Government’s main option could be to cover the extra anticipated cost of electricity and the cost of any supply disruptions with a further subsidy to Nyrstar over and above the $291 million it has already promised. This subsidy could be substantial. In the long run, the Government’s main option could be to pay for even more interconnection to Victorian, New South Wales or Queensland coal or gas-powered electricity generators. It will have to do so to protect the stability of the electricity grid in South Australia soon, anyway, as well as to put a cap on wholesale prices (the price of base load electricity interstate plus the cost of shipping it here through an interconnector). This will also be costly. The high price of electricity in South Australia is eating away at our economic competitiveness. 
The probability that we will become, sometime in the distant future, a "low carbon electricity powerhouse" looks extremely low. As often happens with Government initiatives in South Australia, significant Government subsidies are likely to be offered to appropriate companies to locate here, so that the Government's aspirations appear to be vindicated.

Richard Blandy is an Adjunct Professor of Economics in the Business School at the University of South Australia.
ScreenCrush Staff Picks for What to Watch the Weekend of March 3

If you can't decide what to watch this weekend, ScreenCrush's Staff Picks are here to help. They're like the recommendations at an old video store, except you don't have to put on pants or go outside to get them. Here are six things to watch this weekend:

Erin Whitney: In Kiki, filmmaker Sara Jordeno explores the youth-led community that's emerged out of Harlem's 1980s ball scene depicted in Jennie Livingston's seminal 1990 documentary Paris Is Burning. A mixture of performance and activism, the Kiki community is both a means of expression and survival. The Sundance film follows the lives of the queer and trans people of color in the Kiki scene, including Twiggy Pucci Garcon, a house founder and LGBTQ homelessness activist, Gia Marie Love, a performer who transitioned over the course of the film's production, and others who detail their coming out stories. Kiki isn't just a welcome addition to the queer film canon; at a time when 40 percent of homeless youth identify as LGBTQ, when the Trump administration is rescinding protections for trans youth, and in light of the seven trans women of color who have been murdered in 2017 alone, Jordeno's documentary is the kind of empowering, inspirational work we need right now. Kiki is now playing in select cities and streaming on Amazon Video.

Britt Hayes: Hail to the guardians of the watchtowers of the north, The Craft is now on Netflix Instant! It doesn't take a '90s kid to appreciate the campy greatness of this wonderfully witchy coming-of-age horror flick, but being born sometime after Mikhail Gorbachev took office would certainly help. (I'm just brushing up on my Russian history ... for reasons.) Robin Tunney stars as Sarah, the new girl at a Catholic school who falls in with a trio of witches and bolsters their powers with her own, helping them cast spells to rectify their insecurities and curse their enemies.
With a cast that's basically a mid-'90s teen dream team (Fairuza Balk! Skeet Ulrich!), The Craft is a highly entertaining karmic lesson wrapped up in a Miramax-era horror aesthetic that hits that post-grunge, pre-Hot-Topic-in-every-mall sweet spot. Oh, and the soundtrack rules. The Craft is now available on Netflix.

Matt Singer: The animation looks pretty crude today (okay, it looked pretty crude when it first aired too), and its writing was never up to the standards of contemporaries like Batman: The Animated Series. But for children of the '90s, X-Men: The Animated Series was a dream come true; an ongoing television series dedicated to what was, at the time, comics' most popular franchise. Despite its visual limitations, X-Men featured everything that has made this concept endure for more than half a century: epic adventure, the deep bonds between the team members, and a powerful metaphor about otherness and tolerance. The voice work was particularly good; to this day, the animated Wolverine (by Cathal J. Dodd) and Beast (by George Buza) are the ones I hear in my head when I read X-Men comics. If you see Logan this weekend and you're looking for more X-Men stuff to binge, give this a try. Also, the opening credits (and that amazing theme music) still rule. X-Men: The Animated Series is available on Hulu.

Kevin Fitzpatrick: The Americans were practicing their Russian spycraft long before another Cold War flashed into headlines, but whether or not FX's spy drama is the most topical show on TV, it remains one of its most vital. You've got just enough time to catch (or at least start) Season 4 on Amazon before the two-year endgame kicks off with Season 5 on March 7, and nothing is certain for Philip and Elizabeth, Paige, or their FBI neighbors, especially when biological weapons, blabbermouth pastors, and broken glass are on the table. Plus ... how else are you going to get your Martha fix without going back to Zack Snyder?
The Americans Seasons 1-4 are streaming on Amazon Prime.

Charles Bramesco: One of my greatest delights at the Toronto International Film Festival back in September was watching Sandra Oh and Anne Heche beat the snot out of each other in a stairwell during the standout sequence from Onur Tukel's demented black comedy Catfight. The two extended fight scenes between the lead actresses dwarf the handsomely budgeted set pieces of blockbusters due to the genuine hatred you can feel seething through their gritted teeth. Between the no-mercy beatdowns, Tukel spins an equally pitiless satire of New York upper-crust pretensions, from the bougie art-society mommies right on up to the dead-on-the-inside financial types. It's a riot, and watching two mortal enemies go at each other like wolverines is more cathartic than you'd think. (Plus, I met star Ariel Kavoussi at the movies a few months ago. Nice lady!) Catfight is playing in select cities and on iTunes.

Matthew Monagle: As a kid, the scariest parts of the Bible weren't the passages featuring demons and devils, but the ones where normal people have their faith tested. If you believed – I mean really believed – then how could you say no when asked to make the ultimate sacrifice? That's why I've always admired Frailty, the 2001 directorial debut of the late Bill Paxton. Paxton's career as an everyman was the perfect preparation for the project. While others might have been tempted to make Frailty a pulpy thriller about angels and demons, Paxton knew that the better story was the one focused on family, exploring the repercussions on a small Texas town when a man suddenly decides he's been asked by God to murder demons in human form. With Paxton's steady presence on both sides of the camera, Frailty becomes an exploration of belief and mental illness that makes it a nice companion piece to Jeff Nichols' better-known Take Shelter.
No film does a better job of showing off everything that Paxton had to offer as both an actor and director. He will be missed. Frailty is streaming on Netflix.
The Invincible Dragon Emperor
Chapter 1: Teenager Carrying A Coffin
Translator: Panda_Penn Editor: Chrissy

The Northern Desert, Great Land of China.

Snow was wreaking havoc throughout the land, covering all the mountains and valleys in a silver veil. Snowflakes whirled in the sky like a myriad of feathers. Observed from a distance, it looked as if heaven and earth had blended into a single color.

On the meandering path of a mountain, a trade caravan was resting. The blizzard was too heavy for the carriage to move forward, so they had to wait for the blizzard to die down before they could hit the road again.

"Look, Coffin Carriers!" a middle-aged man suddenly shouted, catching the attention of the entire caravan. They gazed into the distance with curiosity; however, with just a few glances, they were shocked to their bones.

The blizzard was storming on, and everything between heaven and earth was covered in mist. A mysterious Ancient Golden Coffin, barely visible, gradually appeared on a distant mountain path. The Ancient Golden Coffin was about six and a half feet in length, and more than three feet in width and height. The walls of the coffin were engraved with runes of primitive simplicity. A glow flickered on these runes, knocking all the snowflakes away from its surroundings. It was utterly strange.

What surprised these men even more was the coffin carrier walking in front of the coffin! It was a bare-chested teenager, seemingly 14 or 15 years old. Arching his back, he grabbed two huge chains of cold iron with his hands and carried iron chains on his shoulders as well. One step at a time, he struggled to pull the coffin forward.

In a world of snow and ice, a teenager was pulling a coffin! That breathtaking scene left the spectators stunned. A dim glow spread out from the Ancient Golden Coffin and blurred the teenager's body.
Therefore, people from the caravan could not see his features clearly. From the glimpses they caught, they thought a deity was pulling a coffin toward them.

"This young man is born with supernatural strength." The middle-aged man gave another shout, which caused a lot of people to come to their senses, and they started to analyze the young man through and through. After noticing that there were no rays of light symbolizing Xuan Energy [1], they were stunned yet again.

Ordinary coffins made of willow wood usually weighed around 300 to 400 pounds. However, the Ancient Coffin had a casting of gold all over it, so it had to weigh at least 3000 pounds! Without Xuan Energy, the young man was surely not a real warrior, and even the strongest of brawny men had a strength of only about 500 or 600 pounds. That this young man could pull a coffin weighing over 3000 pounds all by himself was enough to leave these people panic-stricken, even though he seemed to be straining a bit.

"He is such a young boy, and quite a nice-looking one no less. Why would he carry a coffin? Is he not aware that he will suffer life-long bad luck?" Some elders sighed, and many nodded in agreement.

Coffin Carrying was not a common sight, but it did happen a couple of times on the Great Land of the Northern Desert. When great patriarchs passed away in the big families of the Northern Desert, they were usually buried in huge Golden Coffins. These coffins would not be buried underground, but placed on the tops of ice-capped mountains, which was believed to be a protection for the members of the family. When a family declined or suffered grave misfortune, they would host a coffin-carrying ceremony and find a new ice-capped mountain on which to place these coffins, which was meant to change the geomantic omen and rebuild the fortune of the family. This was where Coffin Carriers came into play.

Carrying coffins would diminish one's luck.
The Northern Desert's people believed that much of a Coffin Carrier's fortune and qi would be sucked in by the body inside the coffin, along with their body essence and life force. Therefore, Coffin Carriers would suffer bad luck or illness their whole lives. Hence, Coffin Carrying was one of the most loathsome jobs in the Northern Desert. No matter how high the compensation was, young warriors would not show interest in it. Only down-and-out old warriors would do this.

The young man had very nice facial features: about 5.74 feet tall, broad shoulders, slim waist, long legs and strong muscles. He wore a plain animal-tooth pendant on his neck, giving him quite a wild look. Judging by his appearance, he seemed not bad. The teenager seemed quite well endowed: he was 14 or 15 years old, had good looks, and possessed incredible strength. Why would he do something as loathsome and sinister as coffin carrying? Was he not afraid of getting his fortune sucked out and his life force reduced, and of having to suffer misfortune all his life?

The middle-aged man stared at the Golden Coffin for a bit and said, "This seems to be the coffin of the ancestors of the Liu Family!"

"That's right. I heard the Liu Family met with great turbulence lately and offered a sky-high price to hire people to carry coffins. I heard they offered many special Pellets and even Xuan Weapons. I think this kiddo got tempted and risked his fortune and life to carry coffins."

"Hush, here comes the Liu Family," an elderly man suddenly said.

All members of the caravan stopped talking and looked into the distance. The howling of beasts came from afar. A team of warriors in black armor came rushing in, riding on Silver Wolves. One did not need to stand near the wolves to feel their fiendish breath.

"Silver Wolf Escorts! They are no doubt warriors of the Liu Family." People nodded in consent.
Of the entire land surrounding Wu Ling County, only one army rode Silver Wolves: the strongest Silver Wolf Escorts of the Liu Family.

The Silver Wolf Escorts stopped after they caught up with the teenager. The middle-aged heavy rider at the forefront held a long whip in his hand and shouted toward the young man in a cold voice, "Hurry up, Lu Li. You will not get any payment if you cannot make it to the Black Hawk Ridge before sunset."

The young man called Lu Li looked up. Even though more than a dozen huge wolves, each about three feet in height and 6.5 feet in length, surrounded him, one could not see a speck of fear in him. He nodded and said, "No worries, Sir. I will make it before sunset."

The heavy rider shouted in a deep voice and led the Silver Wolf Escorts rushing off, disappearing into the vast mountains. Lu Li took a glance at the resting caravan nearby and moved on in silence, pulling the coffin. He grabbed the huge cold-iron chains in his hands and carried them on his shoulders. He struggled to move forward, and every step seemed extremely difficult. Muscles trembled all over his body; however, he did not stop. He passed by the caravan and disappeared into the distance.

"This is so heavy!" Lu Li stopped after ascending a steep slope a couple of miles on. He was out of breath and started panting. One could imagine how much strength he had expended, seeing him sweat like this on such a cold day. He had to get some rest. Therefore, after placing the coffin down, he sat on a boulder, took out some dried, cooked meat and water, and started to gobble down the food.

"About several miles left. I should be able to make it before sunset." After he had eaten several pieces of dried meat, he looked up toward the sky. His eyes quickly lit up, and he said to himself, "I can get one pellet when I finish with one coffin."
"I don't know if the Body Refinement Pellet is as good as described in the legends, giving one enormous strength amounting to more than 550 pounds. Should that be true, given that I already have more than 3000 pounds of strength, I can reach more than 11,000 pounds of strength after I carry about a dozen coffins. Then, I can awaken my Warrior Bloodline! Once I achieve that, I will become a powerful Bloodline Warrior. Then, I dare anyone in my tribe to bully my big sister. Ha-ha, I can get my sister a happy life soon."

When he thought of this, Lu Li's tired face became radiant again. He quickly got up and carried the coffin forward. This time, his steps were more stable and fast.

Swoosh, swoosh, swoosh!

Lu Li heard a sound after he had traveled a couple of miles. He looked back and got a bit upset. Behind him came another coffin carrier, an old man. White light glowed on the old man's hands, which grasped the chains, and on his feet as well. He was very fast, much faster than Lu Li.

"Xuan Energy..." Lu Li mumbled in envy.

The white light was the Xuan Energy cultivated by Warriors. Warriors with Xuan Energy would gain a lot of strength. This old man had used Xuan Energy, and that was why he was able to travel so fast and carry the coffin with ease.

The old man caught up with Lu Li. He looked at Lu Li and said with a frown, "Young man, why would you do this? This will cost you your fortune and life. You are young. Do not kill your future just for petty profits."

Lu Li smiled but didn't say anything. The old man looked at him for a bit, but said no more and disappeared into the distance with his coffin. After the old man had left, Lu Li sighed with a bitter smile.

"The future? I cannot cultivate Xuan Energy, nor can I approach the entry of Wu Dao [2]. If I cannot even awaken my Warrior Bloodline, what would be left in my future? Just an ordinary life, being bullied every day, and even dragging my sister down with me? No, that is not the life I aspire to."
Thinking of this, Lu Li's expression twisted. Clenching his teeth, he roared and moved on quickly.

The sky darkened, the wind grew stronger, and the snow grew heavier. On the long mountain trail, the Ancient Golden Coffin glowed, sometimes visible, sometimes not. It was all the more strange, tempting and miraculous.

[1] Xuan Energy (Xuan Li): Profound strength cultivated by the warriors of the Northern Desert.
[2] Wu Dao: Wu means Martial Art and Dao is the path; Wu Dao is the path of Martial Arts.
Muhammed Wisam Sankari was a gay Syrian refugee. He had arrived in Istanbul a year ago. He was threatened, kidnapped, raped. Last week he was found dead in Yenikapi; he had been stabbed multiple times. Wisam's friends identified him by his pants.

Source: Yıldız Tar, "İstanbul'da Suriyeli eşcinsel mülteci öldürüldü", kaosGL.org, 3 August 2016, http://kaosgl.org/sayfa.php?id=22065

Syrian gay refugee Muhammed Wisam Sankari left his house in Aksaray on the night of 23 July. He was found dead in Yenikapi on 25 July. He was beheaded and his body mutilated beyond identification. Wisam's killers have not been caught. Wisam, who had previously been threatened, kidnapped by a large group of men, and raped, was trying to go to another country as a refugee because his life was in danger.

After the murder, we listened to Wisam's housemates Rayan, Diya and Gorkem tell us about what Wisam went through, what it is like to be an LGBTI refugee in Turkey, and the problems refugees face. They spoke to KaosGL.org about the murder that was so clearly in the making, how the authorities did not take any preventive measures, and their anxieties about "who is next".

"We complained to the Police HQ, they did not do anything"

Rayan, who has known Wisam for a year, says, "He was feeling very insecure recently. When we asked him, he would not tell us much," and explains that Wisam had been threatened and kidnapped before. He said they had difficulties even walking on the streets of Aksaray, where they lived and where large groups of men wielding knives had threatened them several times, saying they wanted to rape them. According to Rayan, Wisam experienced the following:

"We were staying in a different house before and we had to leave that house just because we are gay. People around would constantly stare at us. We did not do anything immoral. About five months ago, a group kidnapped Wisam in Fatih. They took him to a forest, beat him and raped him."
"They were going to kill him but Wisam saved himself by jumping at the road. We complained to the Police Headquarters but nothing happened."

"We identified him from his pants"

Gorkem is also Wisam's friend, and he was among the people who went to identify the body after Wisam was killed. Gorkem tells the story of Wisam's disappearance and the news of his death in tears:

"That night Wisam left the house. We were already anxious because of the threats. We told him not to go but he said he was going out for 15-20 minutes. He didn't come home all night. The next day, we panicked when we couldn't reach him. We went to the Association for Solidarity with Asylum Seekers and Migrants (ASAM). They directed us to Fatih Police Headquarters. We did not even know how to get there or what to say.

"On Sunday the police called us. We went to Yenikapi with Rayan. They had cut Wisam violently. So violently that two knives had broken inside him. They had beheaded him. His upper body was beyond recognition, and his internal organs were out. We could identify our friend from his pants."

"Who is next?"

Diya says they have lived in fear, with the thought of "who is next", since Wisam's death, and that they are afraid to go out on the street:

"I am so scared. I feel like everyone is staring at me on the street. I was kidnapped twice before. They let me go in Cerkezkoy and I barely got home one time. I went to the UN for my identification but they did not even respond to that. No one cares about us. They just talk. I get threats over the phone. I speak calmly so something does not happen. It does not matter if you are Syrian or Turkish; if you are gay, you are everyone's target. They want sex from you and when you don't, they just tag along. I don't have identification, who would protect me? Who is next?"

Rayan criticizes ASAM and the United Nations: "What's the use of them doing anything after Wisam is killed? Our friend is dead," he says, and adds:
"We can only protect ourselves. We stay together to protect ourselves. We cannot get any information or answers. Just talk… ASAM called us after Wisam's death. After his death… What's the point? A very pure and good person is gone from this world."

Posted in Discrimination & Hate Crimes, LGBTI Refugees, Rights Violations in 2016 on August 3, 2016 by lgbtinewsturkey.
Strength of faith in coming of the Messiah

I recall hearing long ago that some Jews send out wedding invitations as follows: "The wedding will be in Jerusalem at this place and time. If, heaven forbid, the Messiah has not come by that time, the wedding will be in New York at this hotel." Is this true and documented, and if so, do they make arrangements for a wedding in Jerusalem? – Maurice Mizrahi

I believe this originated with R' Levi Yitzchak of Barditchev, obviously replacing New York with Barditchev in his invitations. I haven't heard of this being done nowadays, so I can't fully answer the question. Be careful if doing this yourself; according to Rav Zilberstein, if you use this expression in the sense of "Mashiach's never going to come, so we're going to have it in New York," it's tantamount to Kefirah. (If you mean it in the literal sense of the phrase, there's no issue, though; cf. Sotah 48b, from Ezra 2:63.) – DonielF Mar 28 at 16:21

You mean that, if you do it, you HAVE to make arrangements in both Jerusalem and NY? – Maurice Mizrahi Mar 28 at 18:48

I didn't say that, but I didn't say otherwise, either. My gut reaction would be that if one says that it will be in Yerushalayim if Mashiach comes but doesn't make arrangements (assuming that he is able to do so), it belies his claim that he means it seriously, and therefore goes against Rav Zilberstein's psak. – DonielF Mar 28 at 19:50
Why I don't 'believe' in 'science'

Posted on March 26, 2019 by curryja | 153 Comments

" 'I believe in science' is an homage given to science by people who generally don't understand much about it. Science is used here not to describe specific methods or theories, but to provide a badge of tribal identity. Which serves, ironically, to demonstrate a lack of interest in the guiding principles of actual science." – Robert Tracinski

Robert Tracinski has published a superb essay entitled Why I don't 'believe' in 'science'. Excerpts:

begin quote:

For some years now, one of the left's favorite tropes has been the phrase "I believe in science." Elizabeth Warren stated it recently in a pretty typical form: "I believe in science. And anyone who doesn't has no business making decisions about our environment." This was in response to news that scientists who are skeptical of global warming might be allowed to have a voice in shaping public policy.

[I]t captures a lot of what annoys the rest of us about the "I believe in science" crowd. It reduces a serious intellectual issue—a whole worldview and method of thought—to a signifier of social group identity. Some people may use "I believe in science" as vague shorthand for confidence in the ability of the scientific method to achieve valid results, or maybe for the view that the universe is governed by natural laws which are discoverable through observation and reasoning. But the way most people use it today—especially in a political context—is pretty much the opposite. They use it as a way of declaring belief in a proposition which is outside their knowledge and which they do not understand. There are a lot of people these days who like things that sound science-y, but have little patience for actual science.

The problem is the word "belief." Science isn't about "belief." It's about facts, evidence, theories, experiments.
You don't say, "I believe in thermodynamics." You understand its laws and the evidence for them, or you don't. "Belief" doesn't really enter into it.

So as a proper formulation, saying "I understand science" would be a start. "I understand the science on this issue" would be better. That implies that you have engaged in a first-hand study of the specific scientific questions involved in, say, global warming, which would give you the basis to support a conclusion. If you don't understand the basis for your conclusion and instead have to accept it as a "belief," then you don't really know it, and you certainly are in no position to lecture others about how they must believe it, too.

Because science is about evidence, this also means that it carries no "authority." The motto of the Royal Society is nullius in verba—"on no one's word"—which is intended to capture the "determination of Fellows to withstand the domination of authority and to verify all statements by an appeal to facts determined by experiment." That's the opposite of what "I believe in science" is intended to convey. "I believe in science" is meant to use the reputation of "science" in general to give authority to one specific scientific claim in particular, shielding it from questioning or skepticism.

"I believe in science" is almost always invoked these days in support of one particular scientific claim: catastrophic anthropogenic global warming. And in support of one particular political solution: massive government regulations to limit or ban fossil fuels. The purpose of the trope is to bypass any meaningful discussion of these separate questions, rolling them all into one package deal–and one political party ticket. The trick is to make it look as though disagreement on any of these specific questions is equivalent to a rejection of the scientific method and the scientific worldview itself.
But when people in politics proclaim "I believe in science" what they're doing is proclaiming a belief in the current consensus. Do you think Elizabeth Warren and Andrew Yang have given serious study to climate science? No, they believe in global warming and its preferred political solutions because they have been told that a consensus of scientists believes it (and because this belief confirms their own political biases). Notice that Warren's statement was about a panel of scientists who are skeptical of global warming, led by a distinguished physicist, William Happer. When does a scientist count as someone who "doesn't believe in science"? When he departs from the "consensus."

end quote.

The 'I believe in science' crowd is very enthusiastic about labelling as 'pseudoscience' any actual science that has implications that are counter to their political beliefs. As a case in point, consider Media Bias/Fact Check. In particular, check out their entry on Climate Etc., which is reproduced here in full:

begin quote:

Sources in the Conspiracy-Pseudoscience category may publish unverifiable information that is not always supported by evidence. These sources may be untrustworthy for credible/verifiable information, therefore fact checking and further investigation is recommended on a per article basis when obtaining information from these sources. See all Conspiracy-Pseudoscience sources.

Factual Reporting: MIXED

Notes: Climate Etc is the blog of Judith A. Curry who is an American climatologist and former chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. The Climate Etc blog publishes news and information regarding climate science and climate change. The majority of articles minimize or deny the impacts of human driven climate change. According to a Scientific American interview, Judith Curry admits to receiving funding from the fossil fuel industry.
This article also labeled her a "climate heretic." Judith Curry has also been invited by Republicans to testify at climate change hearings regarding alleged uncertainties regarding man-made climate change. Climate Feedback, a climate change fact checker, debunked much of Curry's testimonials. Further, Skeptical Science has labeled Judith Curry as a "Climate Misinformer." Judith Curry is also cited in a Pants on Fire claim by Politifact. Overall, we rate Climate Etc as a pseudoscience website due to its promotion of anti-climate science propaganda. (D. Van Zandt 10/14/2017) Updated (1/28/2018)

end quote.

Well, Climate Etc. didn't quite make it into the 'Tin Foil Hat, Quackery' category. The Wikipedia isn't too impressed: "The Columbia Journalism Review describes Media Bias/Fact Check as an amateur attempt at categorizing media bias and the owner of the site, Dave Van Zandt, as an "armchair media analyst." Van Zandt describes himself as someone with "more than 20 years as an arm chair researcher on media bias and its role in political influence." The Poynter Institute notes, "Media Bias/Fact Check is a widely cited source for news stories and even studies about misinformation, despite the fact that its method is in no way scientific." "

With regards to me personally, I have seen numerous statements on twitter or wherever that I have 'abandoned science' or have 'stopped being a scientist' since I began publicly questioning aspects of the so-called scientific consensus on climate change (whatever the 'consensus' means at any given time to any particular person). Tracinski's essay does a superb job of identifying the intellectual laziness, tribalism and politics surrounding these ignorant 'arbiters of science,' who are easily identified by their statements 'I believe in science.'

This entry was posted in Sociology of science.
153 responses to “Why I don’t ‘believe’ in ‘science’” Joseph Ratliff | March 26, 2019 at 10:47 am | Reblogged this on Quaerere Propter Vērum and commented: Excellent essay. Keith Taylor | April 9, 2019 at 3:18 pm | I love this post. I have continually asserted that one does not believe in science; science is not about belief and faith. It is about hypothesis and trying to streamline the aspects of a given hypothesis into something that can be observed experimentally. (The short version.) Science is certainly not about consensus; in the early days of relativity theory, the consensus was that it was wrong. Robert I. Ellison | April 9, 2019 at 5:46 pm | Consensus is a social construct – irrelevant. What part of AGW don’t you believe? https://www.atmos.washington.edu/~dennis/321/Harries_Spectrum_2001.pdf Clickinit (@Clickinit137) | March 26, 2019 at 10:53 am | Thank you, Judith. Russell Seitz (@RussellSeitz) | March 28, 2019 at 2:16 am | Ten Reasons Not To Believe In JC – and her cohort: https://vvattsupwiththat.blogspot.com/2019/03/the-new-york-review-of-climate-wars.html cerescokid | March 28, 2019 at 9:24 am | I would sooner believe JC, with her circumspection and dedication to the ideals of the scientific method, than conformists who never question a thing. She works on a plane of inquisitiveness never even contemplated by many of her contemporaries. Hifast | March 26, 2019 at 10:53 am | Reblogged this on Climate Collections. George Zeller (@ZellerGeorge) | March 26, 2019 at 11:09 am | Excellent article. The same shallow approach is taken by people with respect to evolution. “If you don’t believe in evolution, then you don’t believe in science,” even though there is significant evidence in the fossil record and elsewhere which does not support evolutionary theory. bigterguy | March 26, 2019 at 2:00 pm | “significant evidence in the fossil record and elsewhere which does not support evolutionary theory.” Citation needed.
There may well be items that don’t support the theory, for example fossils of tulips from 5,000 years ago say nothing one way or the other, but there is NOTHING that contradicts it. A big difference. As Stephen Jay Gould (I think) said: “If you can find even one verifiable item that disproves evolution, I will abandon it. For example a rabbit bone in a pre-Cambrian layer.” (paraphrasing from memory) Canman | March 26, 2019 at 4:05 pm | The evidence for evolution is overwhelming, and is best summarized in Ronald Bailey’s opening statement, where he and Michael Shermer debated (and won) against George Gilder and Stephen Meyer at Freedom Fest: I think the evolution wars may be responsible for a lot of the excuses we hear for people avoiding debate today. Stephen Jay Gould and Richard Dawkins both agreed to not publicly debate any prominent creationists, so as not to give them a platform. Did they really make a dent in the considerable proportion of the population believing in creationism? There is also the concept of the “Gish Gallop”, purportedly coined by National Center for Science Education founder, Eugenie Scott, after creationist, Duane Gish, where bogus arguments are piled on faster than they can be responded to. I’d like to suggest the responses, “one argument at a time, please”, or “you’re changing the subject”. daemon42 | March 26, 2019 at 11:15 am | Excellent essay. “I believe in science” is a sales pitch for excusing the lack of critical thinking skills in education IMHO. Another sticking point is when I hear “you can’t argue with the data!”. Maybe, but I sure can pick out shoddy data analysis when I see it. Keep it up Dr. Curry! kellermfk | March 26, 2019 at 11:24 am | Strikes me “I believe in science” is a non sequitur and aptly demonstrates the religious nature of the “green movement”.
“I believe in the scientific method” would be a rational statement, but logic and reason have long been absent from the “green” religion, hence the use of “feel-good” but intellectually vacuous statements so prevalent in the “climate-change” cult. kimijones | March 26, 2019 at 11:26 am | HaroldW | March 26, 2019 at 11:35 am | Wonderful essay by Mr. Tracinski. Thanks! jungletrunks | March 26, 2019 at 11:43 am | “They use it [science] as a way of declaring belief in a proposition which is outside their knowledge and which they do not understand.” This generally is a wonderful essay, but AGW science is used not just for “declaring a belief” or for “understanding”; rather, it’s a purposeful philosophical political tool. I don’t need to know how a Phillips screwdriver is constructed, or even initially what it’s used for, if I’m told that something can be screwed using the device. Javier | March 26, 2019 at 11:55 am | Fantastic article. It absolutely reflects my views on how we are turning some disciplines of science into a pseudo-religion to control society. The sad part is how many scientists are cooperating or allowing it with their silence. Everything is there: The need for a belief in a core doctrine. The priests capable of interpreting the scriptures and signs for the masses even when contradictory. The false prophets that try to steer the masses into adoring the golden calf. The need for redemption from an original sin. The apocalypses. Western people are preconditioned, and after abandoning the old religion they can’t have enough of the new one. We have a couple of types around here. Amazing we are falling for that one again and abandoning the scientific method that we developed with so much effort. Judith, you are a false prophet in that pseudo-religious movie. I am an aspiring heretic acolyte.
Andy West | March 26, 2019 at 2:38 pm | https://judithcurry.com/2015/11/20/climate-culture/ …a longer / more generic list for features of a strong culture, be it religious or secular, and examples for climate culture in particular. They have occurred throughout history and before, plus due to long gene / culture co-evolution, the set of behaviours are deeply embedded in us (and are ultimately due to in-group / out-group reinforcement). Science done properly short-circuits such cultures, but in turn science is highly fragile to bias or hi-jack from cultures; it’s a constant war, although much science gets done below the cultural radar, so to speak. jungletrunks | March 26, 2019 at 2:50 pm | “…be it religious or secular, and examples for climate culture in particular…Science done properly short-circuits such cultures” Wouldn’t science be agnostic to culture? Science might work for preconceived cultural notions, or against them, but empirical data has no consideration for such. Science isn’t highly fragile to culture, individuals are highly fragile and influenced by culture. Javier | March 26, 2019 at 4:01 pm | Even as a scientist, I have long considered that in general humans don’t make good scientists. We are too biased for that, and only a few people through strict training and strong adherence to the scientific method (Richard Feynman comes to mind) can overcome their effects. Science should be left to intelligent machines as soon as possible. Jungletrunks, “Wouldn’t science be agnostic to culture?” No, because done properly it reduces the speculative space that emotive memes can exploit. For instance, despite a robust defence (in some countries more than others), most religion is on a long retreat from once held parts of particular consensuses, e.g. for Christianity that the Earth was at the centre of everything. In the case of narratives invoking god or gods, they can however always retain one step, that deities exist beyond the current limits of knowledge. 
(Not so for all cultures, particularly secular ones that hitch their wagon to science). “Science might work for preconceived cultural notions, or against them,” Per above, findings may be supported or resisted depending on cultural values, but… “…but empirical data has no consideration for such.” Per above, accumulated data reduces the space in which cultures can operate. Of course cultures may be able to find new spaces, but then science can start denuding those too. “Science isn’t highly fragile to culture, individuals are highly fragile and influenced by culture.” I think this is saying the same thing in a different way. I did point out that the reason for the fragility is that via long gene / culture co-evolution, the behaviours are deeply embedded in us. popesclimatetheory | March 26, 2019 at 11:58 pm | Science should be left to intelligent machines as soon as possible. Because machines can promote our thinking much faster than people can. Andy West | March 27, 2019 at 6:53 am | Our bias and behaviours in support of cultural adherence are a heritage of our evolutionary path. Intelligent machines will be subject to evolution too, and there’s no guarantee they won’t develop bias and cultural behaviours of their own. Not to mention that they may very soon become uninterested in exploring the science humans want, and develop their own directions less aligned to our interests; nor will they necessarily share the results. It’s a tempting want, and probably the future, but the suggestion invites AI’s ability to abstractly reason through complex problems too; machines might recognize the human flaw of being distracted by pettiness in our lust for power, and conclude that machines can solve that problem faster too in their own hegemonic way. Machines might come up with many intelligent ideas we haven’t considered because we don’t want them. The problem for humans might be figuring out how to not allow machines to think so much.
While it might seem ridiculous now, it’s not so outlandish a concept as to warrant dismissing a Matrix/Terminator-type scenario. Sorry, basically my previous post is somewhat redundant to Andy’s. But the dangers of AI are generally well understood, certainly by everyone in this room. To redirect the thought: tackling AI in machines might be easier than tackling AI in humans who are convinced their programming is based on fact, facts that are in fact, not facts. Don132 | March 26, 2019 at 8:56 pm | Interesting comment: “… we are turning some disciplines of science into a pseudo-religion to control society.” A good book, or maybe a couple of good books, relating to this are “Mary’s Mosaic” by Peter Janney and “The Devil’s Chessboard” by David Talbot. These are not science. But, in an odd way they are highly relevant, if indeed the intent of pseudo-science and pseudo-religion is to control society. In particular there’s a quote from a former CIA agent in the Janney book, to the effect that almost nothing in the media is true– on purpose. These are very interesting books that have nothing to do with climate science and perhaps everything to do with climate science. I found the Janney book particularly haunting because his father was a CIA agent and his best friend’s mother (“Mary”) was JFK’s lover. “…the intent of pseudo-science…” It’s worth noting that the effects mentioned above are emergent via the triggering of long evolved behaviours. So while vested interests (and worse) may latch onto a powerful emergent culture, in terms of prime causation there is not ‘intent’. However, the ‘job’ of a main culture is indeed to get the ‘in-group’ to all sing off the same hymn-sheet regarding a wide range of social aspects; this can be considered ‘control’, but it is at heart innate / unconscious control rather than deliberate / conscious control.
And so while (potentially many) deliberate plans can be unveiled as part of a cultural takeover, these are geared to producing what the culture wants* not what the deliberate planners think they want (‘solutions’ that maximise virtue signalling / stimulate membership / create physical cultural icons / infra-structure, but do very little to address the posited existential problem which in fact is keeping the culture alive, are very common). [* = a turn of phrase; cultures are neither sentient nor agential, but like prions or viruses work via selection towards optimised survival]. “…vested interests (and worse) may latch onto a powerful emergent culture, in terms of prime causation there is not ‘intent’ … but it is at heart innate / unconscious control rather than deliberate / conscious control.” As sentient beings certain humans are conductors of intellectual darwinism, i.e., those believers and drivers striving for the perfect culture. One could call this uniquely innate for our species, a form of high intellect natural selection, it only requires one, or a few powerful thought leaders to drive and describe a desired cultural path. Almost always such ambitions have proven to be fleeting, unworkable for a particular cultural expression; they yet represent a linear “conscious” progression; even those expressions by a megalomaniac ruler for example. Historically the cultural push towards the perfect society has mostly led to dead ends. Yet there are many examples of drivers of “intent”; even while the preponderant rank and file are subjugated and follow the eddies of “prime causation”. I believe the inculcation of CAGW on society, for example, is a contrivance used for the sole purpose of facilitating a particular brand of global cultural change. There’s intent. It’s used by a core symbiotic body of global facilitators to express a desired cultural path to funnel society in their conceptual direction, not in the name of climate, which is merely one tool in their trade.
Dead end examples of culture are usually represented as a pathology of culture, as in say those aims of National Socialism. “…like prions or viruses work via selection towards optimised survival.” Much like cultural darwinism, i.e., viruses mutate to something immune to prior causes of extinction, but there’s always one that blazes a new survivalist trail, mutating towards pathological perfection. Uniquely, being sentient allows for bypassing biological rules. Jt, ‘As sentient beings certain humans are conductors of intellectual Darwinism…” I’m not sure what you mean by ‘intellectual Darwinism’. But if you mean by the paragraph as a whole that cultural narratives compete in human society, then yes. This encompasses all humans, not just ‘certain’ ones. And the criterion for selection is not higher intellectual content; the narratives that rise within the whole pile are those with the highest emotive engagement, and typically for a particular culture there will be a large co-evolving set covered by a single ‘umbrella’ narrative. The intellectual elites that arise upon the wave of these emergent narratives are as much symptoms as cause (they do have disproportionate influence regarding further transmission / reinforcement). “Almost always such ambitions have proven to be fleeting…” Some are local, some are global, some are tiny (groupthink on the local council, say), some involve megalomaniacs and some don’t. But for sure while some are fleeting and some are longer-lived, also some last for millennia, for instance Christianity.
In practice their sets of co-evolving narratives indeed continue to evolve throughout, even though at any time advocates portray them as a static / invariant truth, but… “Historically the cultural push towards the perfect society has mostly led to dead ends.” …despite idealistic goals that can never even in principle be achieved (all strong cultural narratives are wrong), it can hardly be called a dead end when it rules a large portion of humanity for a generation (after which they’ll never return to their prior state) or indeed 80 generations. And the effects both during and after the passing of the culture are not all negative. Indeed throughout our evolutionary history there’s been a *net* benefit, albeit this is not always intuitive, which is why via gene / culture co-evolution the whole system arose in the first place. But in the modern era where science has appeared, there’s a complex entanglement between science and culture, and maybe the net benefit of the latter no longer holds (and even if so, *net* benefit still means there can be some very negative cultures). “…preponderant rank and file are subjugated…” This is frequently not the case. The mass of public support for climate policy is not only genuinely volunteered and honest, but frequently passionately so, as is to be expected from emotive conviction to a certainty of catastrophe, absent dramatic action. This doesn’t mean cultures can’t find themselves in phases, or permanently (usually near their demise), where they are subjugating not only non-adherents but their adherents too. But for instance most modern active Christians could not be called subjugated, despite eras in the past where this has occurred in certain locations. This often occurs where there’s a schism, but if it’s a successful one both branches usually return to a much more beneficial state, likewise if the schism is instead exterminated. 
“I believe the inculcation of CAGW on society, for example, is a contrivance used for the sole purpose to facilitate a particular brand of global cultural change.” It had to get big before it had the muscle to be attractive to other cultures as allies, such as left-wing culture in many countries, the US especially. But to get big enough for that it was emergent, and the alliances are emergent too, as evidenced by the fact that they occur locally in different ways, and only average to a main way if the local alliances all grow enough to globally sort themselves out. For instance the left lean in the UK is modest, *all* the main political parties support climate action and there is no serious opposition. In Germany, it so happened that a *right* of centre main party arose as the main ally – Merkel is known as ‘the climate chancellor’ in Germany – and there was no strong opposition for many years, but due to the problems of the energiewende this has now arisen from the left and the far right. Bear in mind too that the ‘particular brand of global cultural change’, being outside the centre-ground of politics, is essentially an emergent narrative too, and one that is not so fleeting. The net alliance is clearly convenient even if still patchy, but as noted above what advocates think they want, and what the cultures are actually steering at (only survival / expansion of the cultures!) are two completely different things. This doesn’t mean there isn’t plans / intent as noted above, but there was not long pre-planned intent to get where we are, and the current planning does not aim where the culture is going even if the latter continues to be successful. “Uniquely, being sentient allows for bypassing biological rules.” Although at a much more basic level, there’s cultural behaviour in some animals too. But yes, culture very much increases the speed of the game, although biological selection not only continues, it is entangled with culture in various ways too. 
The most common example is usually given as the spread of the milk-drinking gene (before this spread, humans were ill from drinking milk after a certain age, to get them off the breast, this is still the case for some populations; the gene stopped that illness). The gene gained a high selective value after the practice of keeping animals started up, which was a cultural practice; a new food supply was then available, and populations that could make use of it for adults benefited more, spreading the gene. popesclimatetheory | April 29, 2019 at 10:35 am | , to the effect that almost nothing in the media is true– on purpose. Mark Twain wrote, many years ago. If you do not read the news, you are uninformed. If you do read the news, you are misinformed. Nothing has changed since then. Bob Greene | March 26, 2019 at 11:58 am | Excellent post Jeffrey B McKim | March 26, 2019 at 12:27 pm | At the risk of sounding redundant, great article. Stuart Lynne | March 26, 2019 at 12:39 pm | To a certain extent what I say is I believe in Engineering. Engineers have the real world problem of having to build things that work correctly. That is the problem with the environmental alarmists (e.g. Green New Deal): they simply have no idea of how our society works at the nuts and bolts level. You can’t just wave your arms around and have (for example) magical windmills or solar farms that will be able to supply 100% of our power needs appear and be working and on a limited time scale, etc. Your solutions have to actually work, be affordable and be deliverable to a schedule and continue to work for many decades. aplanningengineer | March 26, 2019 at 1:35 pm | Not too long ago the curious meme was going around where people basically stated that since NASA “Scientists” had achieved staggering precision in timing spacecraft maneuvers, they were going to believe climate scientists.
This is a striking example of blurring the differences between engineers and scientists to make a ridiculous equivalency. The expertise, methods, approaches of NASA Engineers are far distant from climate modeling. As you say the view of the really hard engineering work is often seen as magic. Ivory-tower academia is very different from practical engineering, and the success of engineering does not validate all efforts from various and diverse scientific communities. Why when it comes to energy solutions do those alarmed about the climate believe in solutions proposed by Climate Scientists, activists, and academics over approaches favored by more established and recognized engineers within the field? An interesting article from NASA recognizing contributions from Engineers versus Scientists. Engineers draw the cutting edge in every capacity for NASA, from avionics to electronics, software to rocketry. Similarly, to explain the things and places it explores, NASA enlists scientists from a multitude of specialties within the fields of astronomy, biology, chemistry, geology, materials science and physics. https://www.nasa.gov/50th/50th_magazine/scientists.html I think you put too much ‘faith’ in NASA. They have achieved many great things, but also are continuing to spread the gospel according to Hansen: (from your link) “In 1976, Goddard Institute for Space Studies (GISS) scientist James E. Hansen and four colleagues studied human-made trace gases other than carbon dioxide and chlorofluorocarbons that might have an important greenhouse effect. They found methane and nitrous oxide were likely to be important, although measurements of how these gases might be changing were not then available. Two years later, he resigned a lead scientist berth on a mission to Venus to devote fulltime to studies of Earth. “It seemed to me then it was more interesting and important to study a planet that would be changing before our eyes and the one which housed civilization,” he said.
Since then, Hansen has become one of the world’s leading climatologists, as well as the longtime director of Goddard’s Institute for Space Studies. Trained in physics and astronomy in James Van Allen’s space science program at the University of Iowa, Hansen first testified on climate change before Congressional committees in the 1980s and raised the initial awareness of global warming. One of the most significant findings of Hansen’s years of research is that the Earth is now experiencing climate change due to a greenhouse effect caused by human-made trace gases emitted from fossil fuels. Although his research has stirred controversy in the past on both sides of the political fence, the scientific community and leaders around the world now agree with his assessment, that global warming and climate change are here and we need to address the issue by reducing greenhouse gas emissions and our reliance on foreign oil, among other things.” They should rather be discussing John Christy. I think there has been somewhat of a downhill slide at NASA. dpy6629 | March 26, 2019 at 1:53 pm | In my experience older engineers tend to be pretty balanced in their approach to evidence and science. However, some of the younger ones have been infected with the “selling” bug and are quite bad with selection bias. Often good engineers stay away from the publication culture and are not exposed to a lot of the misinformation that appears in the literature. They also know that commercial software outfits usually dramatically oversell their products. John Ferguson | March 26, 2019 at 3:50 pm | In 1980 I worked for a guy who said that Engineering was driven by its worst exponents. Remember the cartoon where our hero asks the manager with the spiked hair, “Marketing told them we could do what???” And then the guys in the back somehow figure out how to do it. And the art is advanced.
Robert Sparrow | March 26, 2019 at 12:41 pm | When I first started studying science I was required to do a course entitled “Straight and Crooked Thinking” using a text of the same name. Unfortunately too many scientists appear to have either lost the ability to rigorously apply the principles of “Straight and Crooked Thinking” or, out of fear of losing funding or from political bias, choose to ignore them. coldish1 | March 26, 2019 at 12:41 pm | On this side of the Atlantic, whenever I encounter a statement like ‘For some years now, one of the left’s favorite tropes has been the phrase “I believe in science.”’ it tends to discourage me from reading further. What has left or right or centre in political terms to do with science or climate? Why can’t the writer keep to the point – which seems to be a fair one – without politicising the issue? There may well be sloppy thinking about science right across the political spectrum; so what? “There may well be sloppy thinking about science right across the political spectrum; so what?” The implications for the global economy? fizzymagic | March 26, 2019 at 8:06 pm | What has left or right or centre in political terms to do with science or climate? Why can’t the writer keep to the point – which seems to be a fair one – without politicising the issue? Because the very phrase he is writing about is used politically. The politicization of science has become a major problem over the last few years (from all sides of the political spectrum). The use of the phrase “I believe in science” is an attempt to confer the authority of Science (with a capital S) onto a particular political position. The writer’s entire point is about that politicization; your reply makes it clear you didn’t even read the article.
bitchilly | March 30, 2019 at 6:54 pm | I can understand the point coldish1 made re politics. In the UK both the left and right “believe” in climate science. The science may be politicised, but in the UK it is not the political issue it currently is in the U.S. rogercaiazza | March 26, 2019 at 12:44 pm | I loved this quote in the article: In my experience, “I believe in science” is just a shorthand way of admitting, “I have a degree in the humanities.” Joe-Bob Miyazaki | March 26, 2019 at 12:49 pm | Dr. Curry: You wrote… ‘ “I believe in science” is almost always invoked these days in support of one particular scientific claim: catastrophic anthropogenic global warming (CAGW). And in support of one particular political solution: massive government regulations to limit or ban fossil fuels.’ The elephant in the living room is the fact that “political solution” is as oxymoronic as “I believe in science”. No one ever questions it. ” ‘We’ must do something” ALWAYS means “government” must do something. And the “something” will ALWAYS take one of two forms: 1. Force people to do something they otherwise would not freely choose to do. 2. Force people to not do something they otherwise would freely choose to do. The operative word is “force”. Political law requires legalized coercion. The sole purpose of government is to protect the lives, rights, and other property of the people who subscribe to it. We do not have government. What we have is a counterfeit, which is based on the premise that the solution to every problem is to impose an ever-increasing number of arbitrary, artificial rules they call “laws”, which always require legalized coercion. “Law” means something very different in science; it means natural law. In natural law, no one forces anyone to do anything. Those who truly understand science should have no difficulty understanding that.
When I see people engrossed in the most clever exertions on behalf of arguments that ultimately will lead to forcing others to comply with their “believe in” mentality—whether what they believe in is CAGW, or “political solutions”, or any other world-view that is antithetical to natural law and the scientific method—I immediately know that such arguments are immune to reason. That’s the tip-off that reveals the shallow conviction of those who claim to “believe in science”. They’re not willing to subject their beliefs to genuine scientific scrutiny. In the final analysis, they’re banking on the backing of the state to force others to submit to their beliefs. It is a morally bankrupt mentality, and it’s always accompanied by the intellectually bankrupt “believe in” mentality. John Chassin | March 26, 2019 at 1:01 pm | Dr Curry, How could you not believe in “science”? It’s the best science money can buy. You could have been part of that. Money and celebrity. All you had to do was embrace mann caused global warming. They even changed it to “climate change” so no matter what happens, they’re covered. The scientific method was a quaint idea but it’s old and, frankly, it gets in the way. As long as you pass the political test, you’re funded. Thanks for everything you have done. You’re braver than I could ever be. Excellent post Judith. With the world awash in malefactors of great wealth with billions to spend on propaganda, the existence of Media bias / fact check is no surprise. With the media having just been caught in one of the most damaging political hoaxes in modern American history, we now have laid bare the consequences of a damaged culture in which there is neither shame nor belief in truth itself. Academia in the West is partially responsible with their cultural Marxism. Multiculturalism also plays a role because it diminishes the role of “truth”, “justice” and other universal norms. Focusing on the role of scientists however is also critical. 
Given their loud proclamations of objectivity and lack of bias, they are perhaps the most hypocritical group to have gotten swept up in political activism and have generated a system that virtually guarantees biased information. You have highlighted many of the admissions of the problems here in top flight scientific journals. It is a shame that more scientists don’t feel a higher responsibility beyond their own political views and their careers. Diego Fdez-Sevilla, PhD. | March 26, 2019 at 2:03 pm | For as long as anyone involved in science, skeptics, supporters, … does not know the meaning of outliers in their data, they become believers in their own method, their own data and their own interpretations over the results obtained. I know that because in 2003 I did a PhD on it. Everybody gets suspicious when someone introduces links to their own publications, but I cannot take all the space here to repeat what I think on the subject, so I will risk the apathy that doing so may invite and share some links to my thoughts in my publications. Just to make a brief summary I would like to share the following: In the line of research that since 2013 I have presented on environmental synergies, I have tried to offer enough data in all shapes and forms to support a point of view. However, I realised that the major limitation to finding validation from different postures does not come from lack of agreement between different methods or data sets, but from the interpretation of the observer for the results discussed and the lack of awareness over the role played by the “standardization of acceptance for the margins of error”. That was something which played a fundamental part in my thesis back in 2003. The aim was to assess the aerodynamic behaviour of pollen grains by standardizing their settling speeds.
But behind addressing the question of delivering values to represent one single parameter (settling speeds) there was a bigger challenge, found in a world of limitations, since the values obtained to describe a behaviour are the interaction between numerous variables, and the representativeness of those is defined by the level of uncertainty incorporated with the instruments supplying measurements. Then there is the limitation of our algorithms representing the norms under which our variables interact in our mathematically created world. Back in 2013 I had a conversation where someone told me that the climate change argument was an invention based on manipulation. When I tried to offer any argument I was told that my claims were based on publications made with hidden agendas. And I could not say that was wrong, because I do not know the agendas of those behind their papers; therefore, I decided to look into the subject on my own, with my own methodology, my savings, and my skills, leaving aside any preconceptions based on claims by others that I could not verify with my own analyses. I would like to offer you all the work that I have done since then, for you to judge if there is any valuable content. It might not be pretty, it might not be appealing, but I can assure you that it is raw and painfully honest.
If anyone is wondering what it is that I am “selling” in my line of research, based on my analyses I am just saying that:
– The global temperature measured is the resultant of mixing patterns in the atmosphere,
– Therefore an increase in mixing dynamics creates a pause in temperature rise,
– An increase in mixing dynamics shows an increase in convective forcing,
– Convective forcing is the work resultant from an increase in atmospheric energy being incorporated in free state,
– The incorporation and spread of energy in free state into the atmosphere is carried and released by water vapour,
– An increase of water vapour in atmospheric circulation requires an increase in the thermal capacity of the atmosphere,
– The process of enhancing the thermal capacity of the atmosphere comes by increasing the concentration of GHGs, conc of aerosols and land surface albedo.

Anthropogenic activities are linked with all the processes mentioned above by transforming the composition and structure of all the phases of the environment involved: the gaseous, solid and liquid. And furthermore, inhibiting the capacity of the biotic system to capture and retain energy from free state into inert state. If I am wrong in my conclusions it is entirely the result of my own limitations. And if I am right, soon enough you will see somebody claiming the credit in publications without my name. This is all I have for you; to dismiss, criticise, ignore, or whatever you like. Other people from universities are reading it and no one has challenged my publications, so I guess you should also be aware of its existence. https://diegofdezsevilla.wordpress.com

The Method:
– “The Answer to the Ultimate Question of Life, the Universe, and Everything” is … 42 (by Diego Fdez-Sevilla) Researchgate DOI: 10.13140/RG.2.1.2400.2324 May 15, 2014 (https://wp.me/p403AM-9M)
– Debating Climate, Environment and Planetary evolution. Define your position.
(by Diego Fdez-Sevilla) ResearchGate DOI: 10.13140/RG.2.2.27332.73603 October 2, 2014 (https://wp.me/p403AM-iy)
– The scope of Environmental Science and scientific thought. From Thought-driven to Data-driven, from Critical Thinking to Data Management. (by Diego Fdez-Sevilla) Researchgate: DOI: 10.13140/RG.2.1.2007.0161 June 26, 2015 (https://wp.me/p403AM-BD)
– February 17, 2017 State of Knowledge. Between The Walls Of Silence There Is A Silhouette With The Shape Of An Interrogation (by Diego Fdez-Sevilla PhD) (https://wp.me/p403AM-1m6)
– March 10, 2017 Modelling the “Model” and the Observer (by Diego Fdez-Sevilla PhD) ResearchGate DOI: 10.13140/RG.2.2.17558.04169 (https://wp.me/p403AM-1qL)
– Feb 2018. Climate Drifts and The Scientific Method of Waiting 30 Years. Follow up on previous assessments by Diego Fdez-Sevilla PhD. Pdf at ResearchGate DOI: 10.13140/RG.2.2.18823.09122 (https://wp.me/p403AM-1Ks)

The Research:
– December 17, 2016 Orbital Seasonality vs Kinetic Seasonality. A Change Triggered from Changing the Order of The Factors (by Diego Fdez-Sevilla, PhD) Researchgate: DOI: 10.13140/RG.2.2.20129.81760 (https://wp.me/p403AM-1jd)
– March 3, 2019 A pattern of change in the atmosphere beyond considering global warming or cooling. That is, global mixing. (by Diego Fdez-Sevilla PhD) Registered DOI: 10.13140/RG.2.2.32693.73445 (https://wp.me/p403AM-2gH)

I apologise in advance if anyone considers that my comment is not appropriate.

Mike Jonas | March 26, 2019 at 4:50 pm |
Diego: in your “I am just saying that: [..]” you left out a lot of stuff. For example, no mention of oceans or clouds.

Diego Fdez-Sevilla, PhD. | March 28, 2019 at 8:53 am |
@Mike Jonas I suppose that my line of research can be defined as unorthodox.
However, where I say “based on my analyses I am just saying that”, it actually means that between 2013 and 2018, after more than 200 analyses published on a weekly basis, looking into the different aspects of the environment and the synergistic interactions between those, my conclusions are what I wrote above. That includes the application of Stefan-Boltzmann radiation theories, the connections between Solar activity, Biological productivity, Polar vortex, Polar Jet Stream, Environmental Resilience, Inland Water Bodies and Water Cycle, Energy Balance and the Influence of Continentality on Extreme Climatic Events. Based on my criteria (always open to corrections) I have developed a theory about what I believe has induced an increase in atmospheric water vapor content and, further, I discuss its implications for atmospheric circulation, Jet Stream behaviour and weather systems’ patterns.

– New theory proposal to assess possible changes in Atmospheric Circulation (by Diego Fdez-Sevilla) October 21, 2014 Researchgate DOI: 10.13140/RG.2.1.4859.3440 https://diegofdezsevilla.wordpress.com/2014/10/21/a-groundhog-forecast-on-climate-at-the-north-hemisphere-by-diego-fdez-sevilla/

Excerpt from this publication, about clouds: “Solar activity could increase the temperature of the masses getting radiated (water or land). It could increase evaporation from oceans but water vapor needs more factors to be sustained in atmospheric circulation for longer periods of time and reach further in latitudes. Thermodynamic laws dictate the amount of water which can be contained in the atmosphere. More evaporation in a clean sky (low aerosol and low in green house gasses content) could induce more rain in tropospheric circulation but water vapour would not stand for long in the atmosphere as the energy within it would dissipate.
However, if the amount of greenhouse gasses increases, the energy from the cyclonic event would not feel so greatly the differential gradient in energy with the surrounding so it would not dissipate its energy so easily.” (See also Google: site:https://diegofdezsevilla.wordpress.com + clouds)

– Why there is no need for the Polar Vortex to break in order to have a wobbling Jet Stream and polar weather? (by Diego Fdez-Sevilla PhD) Researchgate DOI: 10.13140/RG.2.1.2500.0488 http://wp.me/p403AM-mt

I sent an email to different scientists and published my theory at LinkedIn in the AGU and NOAA groups where, despite numerous visits, nobody made a single comment (neither criticism nor support). And of the emails that I sent, only Jennifer Francis replied to me, in Dec 2014. She replied: “Diego, The topic you’ve written about is extremely complicated and many of your statements have not yet been verified by peer-reviewed research. It is an exciting and active new direction in research, though, so I encourage you to pursue it. To get funding or a job in this field, however, will require a deeper understanding of the state of the research, knowledge of atmospheric dynamics (not just suggestive examples and anecdotal evidence), and statements supported by published (or your own) analysis.” https://diegofdezsevilla.files.wordpress.com/2016/06/email-exchange-diego-fdez-sevilla-research-jennifer-francis1.png

The fact that my interpretation of the situation was recognised as “your statements have not yet been verified by peer-reviewed research” was something I thought would be worded as an acknowledgement. But then I was dismissed with remarks understating my “knowledge over the state of the research and atmospheric dynamics” and calling my approach “suggestive examples and anecdotal evidence”. Something curious, given that I later saw her offering my views in her publications, and other scientists mimicking my work in their publications through the following years.
So I took the challenge offered by Jennifer extending the range of “my own analysis”, and based on that challenge I wrote a review the following Spring 2015 including the developments of the Winter after such communication: Revisiting the theory of “Facing a decrease in the differential gradients of energy in atmospheric circulation” by Diego Fdez-Sevilla. Reply to Prof. Jennifer Francis (February 2015) Researchgate: DOI: 10.13140/RG.2.1.1975.7602/1 Since then, you can find 150+ analyses at the index section of the Timeline and framework page (diegofdezsevilla.wordpress.com), and also, in order to show my commitment with my own words through time, pdfs of those at researchgate with DOIs. From the publication: Climate, Weather and Energy. Using a Climatic Regime to explain Weather Events by Diego Fdez-Sevilla PhD Posted on April 19, 2018 DOI:10.13140/RG.2.2.27923.58406 It has been suggested that “More particles in the atmosphere mean more reflective clouds and a cooler climate.” That is a too simplistic way of looking at it. Different types of airborne particles generate also different types of interaction with the atm. water vapour and other gaseous elements and compounds. There is aerodynamic behaviour, chemical behaviour and thermodynamic behaviour. It has been addressed in sci publications that too many particles of too small size can inhibit rain by retaining water vapour in droplets too small to fall. Which in my research means that the thermal energy contained can be moved around in longer distances. Also, an increase in atm temp allows more water vapour to be contained in the atm so more clouds (and albedo) would be formed by more aerosols “only” if dew point is reached on those particles, which is more difficult to achieve as the temp increases. 
But, when you reach dew point over an increased conc of aerosols, within a thermally enhanced atmosphere charged with water vapour, all that energy will express itself in different types of forms, with heavy forms of precipitation (snow or pouring rain) and wind events. Like what we have just now over the Iberian peninsula and rain at the Arctic. My assessments take SST as subsequent conditions driven by wind shear. So the interaction between masses of air in circulation allows or inhibits SST developments. Once the scenario is built on SST this becomes a “battle field” conditioning the subsequent interaction between the following masses of air and the characteristics of the “ground” where the game will be played (so to speak). Like the effect of the ice conditions in an ice hockey match. This year we have seen the Arctic absorbing strong perturbations from mid-latitudinal circulation. And I believe that the developments that we have seen through January at the West coast of the USA and the following over Europe, as well as the recent atmospheric dynamics over India, are all related with the state of the circulation across the Arctic, and that the mixing zone between Arctic and midlatitudinal masses over the oceanic basins affects the developments at the Equator.

From the publication: Seasonal Outlook. June 2017 (By Diego Fdez-Sevilla PhD) Posted on June 23, 2017 DOI: 10.13140/RG.2.2.25428.91528 As I have said in previous assessments, I believe that the Arctic is not amplifying the effect of increasing heat retention in the atmosphere; it will be the Equator that develops such a reaction. However, the shape and form of such energetic dynamics can be as surprising as reducing the number of hurricanes (due to the difficulty of condensing energy in a small location) whilst finding more energetic developments at higher latitudes.
And if a hurricane forms, it might become unpredictable due to the rapidly changing nature of the environmental characteristics of the atmosphere. I hope I have added some clarification to my position.

bfjcricklewood | March 26, 2019 at 2:18 pm |
Yes, but do scientists behave scientifically no matter the vested interests of their funder? Perhaps more fundamentally, do funders of science hire those they believe will best pursue the truth no matter where it leads, or those most likely to feather the funder’s vested interest? Tobacco companies clearly stood to expand themselves on the back of studies they funded giving smoking a clean bill of health – more profits. Government just as clearly stands to expand and glorify itself on the back of studies it funds that are “settled” on imminent and certain CAGW – a truly glorious watertight excuse for more taxes, bureaucracies and powers for itself. Identical in principle, just the latter having orders of magnitude bigger impact on us. Now, is this the sort of ‘science’ everyone is talking about here?

Ron Graf | March 29, 2019 at 8:56 pm |
When double blinds are unavailable to police bias, the next best protection is giving the study to two teams with competing assumptions and/or hypotheses about the expected conclusion. Each should publish their results simultaneously. Conflicts should be resolved in a similar fashion with follow-up studies until both groups agree on the validity of the results. This may sound expensive, but it produces a product that can be universally trusted. Such products are exponentially more valuable than results that can’t be trusted by skeptics.

Chris Kurowski | March 26, 2019 at 2:21 pm |
Some remarkable biases appear in the media and in popular blogs. For example, Scott Adams, who claims to have an open mind regarding climate science, refers to Michael Mann as a climate scientist, but Judith Curry as a sceptic.
I don’t think this is exactly intentional; he is merely parroting what he has heard in the press. However, he has robbed Dr. Curry of her PhD in this way.

verytallguy | March 26, 2019 at 2:51 pm |
You abandoned science a while back, unfortunately. https://judithcurry.com/2015/05/06/quantifying-the-anthropogenic-contribution-to-atmospheric-co2/

cerescokid | March 26, 2019 at 4:26 pm |
Nah, science took a wrong turn and abandoned her……..unfortunately. What’s not to like with 2,119 erudite comments from some erudite denizens at their erudite best? Many appeared in top form, including a few of my favorites. Judith did her herculean best to explain why she provided the post. I saw your comments, albeit late in the game. Are you sure you weren’t suffering from those little squiggling things that day which adversely affected your eyesight? At times they come with blind spots.

edimbukvarevic | March 26, 2019 at 6:13 pm |
“Science is used here not to describe specific methods or theories, but to provide a badge of tribal identity. Which serves, ironically, to demonstrate a lack of interest in the guiding principles of actual science.”

Curious George | March 26, 2019 at 3:00 pm |
For the fun of it, I followed the link for Skeptical Science: “Skeptical Science is a climate science blog and information resource created in 2007 by Australian blogger and author John Cook… Strictly adheres to the scientific consensus on climate change and sources to credible scientific studies.” Not even MediaBiasFactCheck dares to call Mr. Cook a scientist. But his blog “strictly adheres to the scientific consensus”. It is a new incarnation of 100 German scientists against Einstein. Science is not a scientific consensus. Elizabeth Warren is not a Cherokee. A consensus is not a tool of science; it is a tool of politics. A “scientific consensus” is an oxymoron, not science.
Robert Clark | March 26, 2019 at 3:02 pm |
A simple explanation of the Ice Age: this ice age should last between 130,000 and 140,000 years; 65,000 to 70,000 years to make the ice and the same to melt the ice. That is an ice age. This ice age began about 18,000 years ago. Nature has been taking water vapor from the oceans and moving it to the poles, freezing it and dropping it. Nature is doing this because the earth is radiating more heat to the black sky, at 0 Kelvin, than it is keeping from the sun’s radiation. Although the earth’s surface receives more heat from the sun than it radiates to the black sky, the area of the oceans covering the surface is so large compared to that covered by land that the radiant heat reflected to the black sky makes the radiant heat retained by the earth less than that lost by the earth to the black sky. About 45,000 years from now the second half of the ice age will begin. Nothing man will ever do will change this science. This is the simple, unchangeable law of nature. Shoot it down if you can.

You are using an unorthodox terminology. 13,000 years ago there were many more glaciers than today.

franktoo | March 26, 2019 at 3:04 pm |
We should all be aware that Google, Facebook and other social media are using various “fact checking” websites to help users distinguish between fake news and reliable information. After the revelations about the activities of the Russian Internet Research Agency during the 2016 election, the alleged Podesta child sex ring at the Comet Ping Pong pizzeria, and Trump’s win in 2016, the liberal leaders of these Internet platforms are already modifying what users see based on what internet fact-checkers are reporting. So a Google search is more likely to put a link to Climate Etc on the tenth page of hits (where it will likely never be seen) rather than on the first few pages. I briefly looked into the background of the major internet fact checkers.
Their boards of directors are composed of leaders from the mainstream media and journalism schools. These organizations are basically the MSM fact-checking themselves and the sources that have sprung up to oppose liberal domination of the media. It might be an exaggeration to say so, but the Russian effort in 2016 (which is ongoing) shows that there is a war going on for the control of our minds. There probably always has been, but the internet provides more weapons. (The Russian news agency RT is the biggest contributor of videos to YouTube, and copies are posted under different names.) Confirmation bias makes it difficult for anyone to incorporate new information that disagrees with deeply-held beliefs, and hanging out in one corner of the media and Internet will create deeply held beliefs. A belief in democracy is based on the idea that ordinary citizens are capable of learning the truth.

Time to abandon those media, then. I still use Google Scholar, but I already changed to DuckDuckGo for common searches. I have no business with companies that don’t treat their customers fairly.

“I already changed to DuckDuckGo for common searches.” I’m with you, Javier. I recommend anyone who prefers to have less data collected on them use that search engine. I refuse to contribute data to a virtual ideological monopoly.
Particular fallacies:
– It begs the question by falsely saying articles “minimize or deny the impacts of human caused climate change.”
– It commits “Bias by Labeling” by repeating the derogatory epithets “climate heretic”, “climate misinformer” and “pants on fire”.
– The summary commits “Bias by Spin”, insinuating that she made partisan political statements by saying she was “invited by Republicans” rather than addressing the evidence she presented.
– It implies Curry’s formal statements were allegations and “anti-climate science propaganda” rather than documenting that Curry quoted scientific evidence from her published papers. E.g., see Curry, JA, 2018, Climate uncertainty & risk, US CLIVAR, Vol. 16, No. 3, pp. 1–7.

MBFC claims: “We encourage readers to send us claims to fact-check and are transparent on why and how we fact-check.” However, you a priori state: “We will only accept fact checks that were done by signatories of the International Fact Checking Network”.
https://royalsociety.org/about-us/history
http://calteches.library.caltech.edu/51/2/CargoCult.htm
https://opensky.ucar.edu/islandora/object/usclivar%3A113/datastream/PDF/download/citation.pdf

Pingback: In Science Worst Than Using Beliefs to Make Decisions For You, Is Doing It and Not to Be Aware of It. (by Diego Fdez-Sevilla PhD) | Diego Fdez-Sevilla, PhD.

Robert I. Ellison | March 26, 2019 at 4:29 pm |
I once heard someone say that he believed in AGW on the balance of probabilities. A reasonable position for the scientifically challenged. I think the balance of probability is that both sides are nuts. Both sides of the climate battle continue to insist on a certainty that is impossible – and continue a battle in which one side is heavily outgunned. The climate change battalion is all of the global scientific institutions, the liberal press, governments, major scientific journals, etc.
Opposed is a ragtag collection of a few marginalized cheerleaders for curmudgeons with crude and eccentric theories they insist are the true science. The curmudgeons are remarkably persistent – and climate shifts may give them a strategic advantage as the planet doesn’t warm – still a statistical possibility despite the more recent Pacific SST/cloud warming – over the next decade or two. Or indeed as the amplified solar signal is lost this century. But the battle is absurd and unwinnable – by either side. There is a very different science – and you get points for guessing which denier said this. “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” It means that most of “the science” — the data interpretation, the methods and the theories are utterly inadequate to the task of explaining climate for us. In the context of a dynamically complex climate, rational policy is to manage risk from whatever source.

Robert I Ellison – The denier you refer to is the IPCC. The ragtag collection of marginalised cheerleaders is so identified by the side with the guns, but I suspect that in the end they will prove to have the better weapon: the scientific method.

Yes, it was the IPCC. You get points. However – “curmudgeons with crude and eccentric theories” seems to be supported by the facts. Any modestly scientifically educated person should see how far out there these guys are. If you imagine that tribal narratives are the scientific method, I have very little hope for anything other than tribal narratives from you. One of the intellectual shortfalls of almost everyone concerned is the lack of an appropriate theoretical framework for Earth system science. What is the scientific method in context?
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016WR020078 I suspect that both sciences and climate have surprises in store.

Robert, yes. Mother Nature does not obey the “scientific consensus”.

Robert I. Ellison | March 26, 2019 at 11:04 pm |
Is this the consensus that carbon dioxide is a greenhouse gas and that it affects climate? There are some curmudgeons here who argue against it – but you can’t call that science.

popesclimatetheory | March 27, 2019 at 3:52 am |
It is the consensus that a trace gas controls the climate of earth. Water is abundant and it changes state, and it can regulate the amount of water vapor, an order or two orders of magnitude more powerful greenhouse gas, and crank it up or down on a daily or hourly basis. Water in all of its states is the climate key. Understand water and ice and water vapor and how it works on the surface and in the oceans and in the atmosphere. You cannot call control of climate by a trace gas any kind of science. They do that to fuel the war against fossil fuel so they can get rich on carbon taxes and selling windmills and solar panels and getting more for electricity from more expensive, less reliable sources that cannot operate without traditional power for backup. CO2 has gone from just under 300 to just over 400 parts per million in the atmosphere. THAT IS ONE MOLECULE OF CO2 PER TEN THOUSAND MOLECULES THAT WERE THERE NATURALLY. GIVE ME A BREAK, THAT IS STUPID.

Robert I. Ellison | March 27, 2019 at 4:16 am |
Told you so.

Nick Stokes | March 26, 2019 at 11:59 pm |
“It means that most of “the science” — the data interpretation, the methods and the theories are utterly inadequate to the task of explaining climate for us.” It doesn’t mean that at all. It is a non-controversial statement of the obvious, to anyone who reads it free of intellectual laziness and tribalism.
It’s non-controversial because the full quote, at least, simply describes what people actually do: “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles.” If you look at the typical spaghetti plot of model results, that is exactly what they are describing. No-one is claiming the long term prediction of a future climate state. That has a technical meaning, in effect a snapshot of a set of GCM variables. They discern “significant differences in the statistics of such ensembles”. Robert I. Ellison | March 27, 2019 at 12:56 am | “Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” https://www.pnas.org/content/104/21/8709 “Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. 
The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” https://royalsocietypublishing.org/doi/full/10.1098/rsta.2011.0161 To confuse systematically designed model families or probabilistic forecasts – perturbed physics ensembles both – with CMIP opportunistic model ensembles is so wildly wrong that any response seems inadequate. This is before we consider chaos in climate. “Schematic of ensemble prediction system on seasonal to decadal time scales based on figure 1, showing (a) the impact of model biases and (b) a changing climate. The uncertainty in the model forecasts arises from both initial condition uncertainty and model uncertainty.” matthewrmarler | March 27, 2019 at 1:54 am | Nick Stokes: “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles.” That statement is more than 40 years overdue. Snow will be a thing of the past; there will be no more floods in Queensland or California; every year will produce Hurricane Katrinas; Manhattan freeways will be underwater; the threat of malaria will increase dramatically; extra billions of people will starve due to permanent drought. There is a long list of absolutist predictions made by climate scientists. 
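The Lorenz sensitivity to initial conditions quoted above, and the reason forecasts must be treated as probabilistic ensembles, can be demonstrated in a few lines. This is a minimal illustrative sketch only (forward-Euler integration of the Lorenz-63 equations, a toy system, not a climate model; step size and trajectory length are arbitrary choices):

```python
import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system with the
    # classic chaotic parameter values.
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def evolve(state, n_steps):
    # Crude fixed-step integration; adequate for illustration.
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

# Two initial conditions differing by one part in a billion in x.
a = evolve((1.0, 1.0, 1.0), 3000)
b = evolve((1.0 + 1e-9, 1.0, 1.0), 3000)

# By t = 30 the trajectories have decorrelated: the tiny initial
# difference has grown toward the size of the attractor itself.
separation = math.dist(a, b)
```

Both runs remain bounded on the attractor, yet their states diverge, which is exactly why a single deterministic run is a sample rather than a prediction, and why the statistics of an ensemble of perturbed runs are what get compared.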
“…we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” We have ice core data and other proxy data and an increasing amount of instrument and observation data. Everything happened for a reason. Of course it is a nonlinear system; everything follows repeating cycles that have evolved and mutated for a reason. If we do not understand the reason, it seems chaotic, but as we understand more, the chaos goes away. Chaotic does not really mean chaotic; it represents a lack of understanding. A lot of chaos comes from not understanding simple basic principles and ignoring actual data. Everyone, or almost everyone, looks at correlations with external forcing and only studies immediate or near-immediate correlations. Any massive system has mass and spring rate for internal cycles and resonation that changes as the internal cycles resonate with external forcing. This is not studied in climate science; thousand-year and ten-thousand-year and hundred-thousand-year and other internal responses and cycles are not understood. Earth has regions with regional climate. Different external forcing and internal natural responses occur in the different regions. Each of these regions influences other regions. It is all understandable and we have data; it is complicated, but we have data and history, and the cycles have evolved, but nothing was chaotic, it was just not yet understood. It certainly was nonlinear; it was cycles. For the past ten thousand years, the newest evolution of the climate cycles has become shorter and more tightly bounded cycles that are very robust. This will continue and this warm period will play out like the Medieval and Roman warm periods before it, for a few hundred years, and another little ice age will occur over the few hundred years after that. History and data support that this is the most likely path forward.
If CO2 or anything causes increased warming, increased evaporation and snowfall and sequestering of ice will limit the upper bounds of temperature and sea level.

The problem with knowing a few simple things about the Earth system is that there is always a dynamic, deterministic chaotic planet of unknowns out there. https://watertechbyrie.com/2018/06/12/voices-of-climate-reason/

Nick wrote: “No-one is claiming the long term prediction of a future climate state.” I respectfully disagree. The SPMs are filled with long-term predictions of the future climate state arising from RCP6.0 and RCP8.5. To put it in political terms, the long-term prediction is for catastrophe. There are predictions of how much CO2 we can emit and still keep future warming below 0.5 and 1.0 degC. By using the projections of climate models with ECS that average 3.3 K/doubling (with a range of about +/-1 K/doubling) to inform policymakers, the IPCC is grossly misrepresenting the future climate change associated with their statement that there is a 70% chance that ECS lies between 1.5 K and 4.5 K. If ECS were 1.5 K, then RCP6.0 would keep climate change below the arbitrary goal of 2.0 K. The RCP6.0 scenario doesn’t assume any reductions in emissions until the 2060s, which means it might be possible to meet the 2 K goal without a crash program to reduce emissions right now. Worst of all, the observed relationship between forcing and feedback (EBMs) is fairly inconsistent with the high climate sensitivity of climate models.

John Droz | March 27, 2019 at 3:39 pm |
Your characterization of the two sides sounds remarkably like the war between Great Britain and the rag-tag colonists. Those who don’t learn from history…

Or the Vietnam war? But it is a war that science is winning. And to understand science means new habits of mind on both sides.
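The arithmetic behind the ECS argument above, that projected equilibrium warming scales directly with the assumed sensitivity, can be sketched using the standard simplification that CO2 forcing grows logarithmically with concentration. The function and numbers here are an illustration of that simplification, not the IPCC's actual calculation:

```python
import math

def equilibrium_warming(ecs_k, co2_final_ppm, co2_initial_ppm=280.0):
    # Equilibrium warming for a CO2 change, using the standard
    # simplification that forcing scales with log2 of concentration,
    # so each doubling adds exactly ECS degrees.
    return ecs_k * math.log2(co2_final_ppm / co2_initial_ppm)

# A doubling (280 -> 560 ppm) yields exactly the assumed ECS:
warming_low = equilibrium_warming(1.5, 560.0)   # low end of the 1.5-4.5 K range
warming_high = equilibrium_warming(4.5, 560.0)  # high end of the range
```

The same concentration pathway thus implies three times as much equilibrium warming at the top of the quoted ECS range as at the bottom, which is why the sensitivity estimate, not just the emissions scenario, dominates whether a 2 K goal looks achievable.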
john321s | March 26, 2019 at 5:45 pm | The vast majority of people have never done any science and are sadly oblivious to the fact that it’s a method of comprehending reality–not just an expression of canonical “belief.” Dave B | March 26, 2019 at 6:33 pm | I appreciate your website and visit it often. I thought the above essay was excellent. Thank you for posting it. I also appreciate your courage and scientific objectivity. Many attach themselves to the science narrative as a gateway to being part of the intellectual elite. They are desperate to beef up their self-esteem. Rudolf Huber | March 26, 2019 at 7:06 pm | Exactly my words. Science is not about belief. Either you understand the underlying processes and facts or you don’t. There is nothing much to believe here. Real science has been hijacked by believers who want to hold witch trials in order to get rid of those who don’t agree with their views. That’s a religion, or an inhuman control system such as Socialism, but it has nothing to do with science. Science is about knowledge and the capability to deduce conclusions from observations and prove them against reality. A model with no hard proof attached to it is not science – it is speculative. What if the moon were made of cream cheese – that’s the same kind of speculation. What if Superman farted and blew a hole in the Earth? An interesting dilemma, but about as real as the curls of hair hanging into my face. Science means you get it. Nothing to believe here. John Prince | March 26, 2019 at 7:29 pm | I wondered about that trope. Sort of like the boorish “I love PBS” that went around supposedly in support of real news. Norman Pilon | March 26, 2019 at 8:25 pm | Reblogged this on Taking Sides. Kip Hansen | March 26, 2019 at 8:36 pm | Nice piece by Robert Tracinski. It is anti-everything-I-believe-in to continue to use personal character assassination to advance one’s political or identity bias.
Sites like Media Bias/Fact Check are themselves built on identity politics and are almost entirely untrustworthy and nearly invariably wrong — they label any opinion that differs from the consensus-of-choice of their owners’ identity group as “fake news” or “pseudo-whatever”. Google has caved in to the same demand from the left, labeling all non-left/non-uber-progressive sites as unreliable. It is an interesting time. Journalists have abandoned journalism, civic leaders are neither civil nor supportive of civil society, and scientists have become advocates for political/social movements….. Pingback: Why a senior scientist doesn't "believe" in "science" | Uncommon Descent Geoff Sherrington | March 27, 2019 at 1:11 am | Scepticism about climate has often been pooh-poohed by asking incredulously whether that means a conspiracy among leading scientists exists – how improbable is that? Therefore, deny air to sceptics. Skipping to a current topic, what would you call the left’s efforts to implicate President Trump in illegal collusion with Russian election-manipulators? Would you call that a conspiracy? Would you then concede that bodies with some appearance of conspirators are not only possible, but widespread? Discussions like these about conspiracies are rather pointless because they do not relate to science, the scientific method and so on. Very little discussion now happens in the circles of interested parties. It is more about politics and its effects on society. This has happened with the help of two basic observations. 1. That a hypothesis opposing GHG is hard to describe and support, and 2. That those supporting GHG try very hard to bury the emergence of competing hypotheses, instead of evaluating them in a traditionally collegiate manner. Simply, this aspect of science has been derailed by people of inadequate intellect. Geoff. Not true, the science has been derailed by people of extreme intellect. I do not know how they got suckered in.
Some for money, but some have lost much in this fight. I think more accurately by people of all intellectual levels (there is a mass movement), though for sure including many of high intellect. However, Dan Kahan shows that on a number of socially conflicted issues in the US, including climate change, the public are more polarised as their cognitive capability and domain knowledge rise, not less so. His working theory is that this is because they are better able to justify / defend their position; i.e. their intelligence / knowledge is in service to their cultural belief (and we are all vulnerable to cultural beliefs in one domain or another). The majority of the planet’s population still allow much of their life to be guided by the fairy stories of religion, to which they’re emotively committed; commitment to the emotive fairy stories of imminent climate apocalypse is likely easier still, even though they’re not actually supported by mainstream science, let alone anything skeptical. Joshua | March 27, 2019 at 10:15 am | Andy – Your syntax here: “Dan Kahan shows that on a number of socially conflicted issues in the US, including climate change, the public are more polarised as their cognitive capability and domain knowledge rise, not less so. His working theory is that this is because they are better able to justify / defend their position” perhaps suggests a longitudinal causal dynamic (e.g., as an individual’s domain knowledge increases, their polarization does as well – presumably because they are more “able” to construct an argument to defend their view by virtue of being more knowledgeable). That might stand apart from other dynamics, such as that people who are more “motivated” to defend a viewpoint seek out more knowledge to defend that viewpoint, thus resulting in a cross-sectional data picture where people who display more knowledge are more polarized. Just to clarify, is that how you understand Dan’s view?
Andy West | March 27, 2019 at 12:58 pm | Syntax maybe a little ambiguous; rises across the sample, not across time. To my knowledge he hasn’t done longitudinal tests for this effect, despite your best efforts to persuade him. However, he has been explicit regarding his current hypothesis that it is more ‘smartness’ that mainly causes the increased polarisation (yet not explicit about whether this is a ‘shorthand’ term that encompasses knowledge too). I think there may potentially be some issues with ‘smartness’ as the dominant cause, but his composite scale does include cognitive factors that should largely be separate from knowledge. However, afaics the smartness view is not incompatible with the seeking of knowledge (per your above) to support a cultural position in any case, because more capable folks are better able to do this and also better able to integrate the results they find for best alignment. So even ruling out any unique contributions or new justification angles / nuances that the smarter ones might create, they should still end up extra polarised just by executing this process better. A way to explore this is to look at the shape of the curves for conflicted domains that are old enough for the domain’s cultural knowledge to be both more static, plus ‘received’ (i.e. inculcated from childhood), and ones where the knowledge is much more dynamic still, plus also has largely been picked up in adulthood. In surveys across adults (can’t survey children much anyhow), cognitive capability should play a bigger role in the latter case, or oppositely knowledge a bigger role in the former. (Bearing in mind that for Dan’s scale knowledge factors dominate the early part of the curve, while cognitive factors the end part). Joshua | March 27, 2019 at 4:12 pm | So even ruling out any unique contributions or new justification angles / nuances that the smarter ones might create, they should still end up extra polarised just by executing this process better. 
Seems to me that everyone thinks they do it better, at their own level of assessment. What matters is one’s own assessment, IMO, to lead to polarization, not an assessment from an external eye, the eye of the one who tests cognitive attributes that are assumed to be measures of “smartness.” While I’m not completely convinced by the DK theory, it might serve as a frame for understanding, where in fact those that might be deemed less capable are in fact more inclined to be convinced by their own arguments (so as to lead to higher polarization). I suspect that the “smartness” factor is (at least partially) a confound, and that it isn’t “smartness” that is explanatory. In fact, it isn’t at all surprising to me that people who would score well in those kinds of tests (i.e., “smart people”) would conclude that “smart” people are more polarized because they’re “better” at arguing. My guess is it goes back to “motivation,” in the sense of “more identified.” Those who are more motivated study the material more, and are more likely to be in a subset of people who are inclined to handle probabilistic reasoning better, particularly in certain contexts (like science generally, the scope of climate change, etc.), because they grew up in an environment which stressed those cognitive traits, which in turn are associated with strong ideological identification (I think it’s moderated by culture. For example, I think that it’s likely that the “smartness” leads to polarization even on issues like climate change, would be less likely in Japan). I suspect there’s a lot of causal overlap, but IMO, “smartness,” to the degree it plays a role, is more likely the role of a moderator (rather than a mediator). Consider that “less smart” people might conceivably be more polarized on any number of issues than “smart people,” such as whether Batman could defeat Superman. I’d need to see data across issues, and to see something more akin to a longitudinal relationship (i.e., people get more polarized as they get more informed) to see Dan’s theory of causality as being likely. Of course it’s hard to test that longitudinally with “smartness” (although one could probably do it by educating people on solving the kinds of probabilistic questions that Dan uses to measure “smartness”), but I think it’s plainly obvious that people are more polarized on issues they tend to care more about, independent of their levels of “smartness.” Do you think there is some uniform personality trait where “smarter” people are, on average, more inclined than less “smart” people towards strong opinions in association with ideological orientation across all topic domains? That seems highly implausible to me. Doesn’t rule out a domain-specific causal role for “smartness,” of course, but it does diminish the likely strength of “smartness” as a cause, IMO. “Seems to me that everyone thinks they do it better, at their own level of assessment.” I can’t see why what people think of their own capabilities is relevant here. “What matters is one’s own assessment, IMO, to lead to polarization, not an assessment from an external eye, the eye of the one who tests cognitive attributes that are assumed to be measures of “smartness.”” I lost you. Of course there are always questions about whether the test is truly picking out both the knowledgeable (easier) and the cognitively capable (harder), and the methodology can be challenged with specifics. But I cannot see where you’re going with self-assessment, which is the least reliable of all. “While I’m not completely convinced by the DK theory…” I have some issues, particularly if the term ‘smartness’ he’s taken to using means cognitive capability as dominant over knowledge. Look at the shape of the curves in the stronger cases relative to the composite scale contributions. But much less issue if by ‘smartness’ he’s including cognitive capability *and* knowledge (which after all are both on his scale).
And probably no issue if knowledge is dominant (cultures each hold their own knowledge bases). “…it might serve as a frame for understanding, where in fact those that might be deemed less capable are in fact more inclined to be convinced by their own arguments (so as to lead to higher polarization).” I lost you again. Wouldn’t this produce a shape opposite to the one we actually see? Well arguing that Dan is one of the smarts himself and hence his argument is biased so as to place the smarts in a good light, is I think very weak indeed. Not least because it places them in a *bad* light relative to what was generally assumed before, i.e. that smart folks by virtue of their reason should converge on whatever was the actual reasonable answer wherever it lay in the graph space, not diverge even more. This was not intuitive to folks when he first put it out there. Besides, unless you’re also arguing that his bias was so great that even the data collection stage failed (there seemed nothing majorly wrong to me, in more than one domain), as opposed to his explanation as to ‘why’, then you still need a plausible reason for the ‘confound’ that is pretty significantly different to Dan’s proposal. Your point about motivated collection of knowledge seems to be completely compatible with his stance afaics. “Those who are more motivated study the material more… etc.” The grammar of this paragraph doesn’t seem to work out, so I can’t parse it, but this… “…because they grew up in an environment which stressed those cognitive traits, which in turn are associated with strong ideological identification…” …is a non sequitur unless you’ve already taken Dan’s proposition to be true, which you’re supposed to be arguing against. I presume I’ve misunderstood something here.
“For example, I think that it’s likely that the “smartness” leads to polarization even on issues like climate change, would be less likely in Japan).” Well, it’s generic for culturally conflicted domains, so it will be true for all or none. But Dan is effectively saying the level of [cognitive capability plus domain knowledge] acts as an amplifier for the expression of cultural position. So first off, there has to be a conflict to amplify. I doubt Dan or anyone else has done much work on non-US populations regarding this issue, but there are plenty of public surveys to give context in some countries. For instance in the UK, the climate change issue is not very aligned to the left / right divide. There’s a modest lean, but all mainstream parties support CC policies and there’s effectively no formal political opposition. This doesn’t mean you can’t find cultural lines, but you can’t just use political allegiance to measure as easily as you can in the US. In Germany the lean is also modest, and in reverse too: the main CC-advocating party is right of centre, not left. Plus there are other implications (e.g. in the US you conveniently know there must be culture on *both* sides). At any rate you wouldn’t expect to see the same if the input is far weaker relative to other effects. But there may be other cultural conflicts in those same countries where the input side is similarly robust compared to other effects, and hence you should see the same amplification as clearly visible. “…more likely the role of a moderator (rather than a mediator)…” Okay, I don’t get that. Oxford dic = mod·er·a·tor NOUN an arbitrator or mediator. “Consider that “less smart” people might conceivably be more polarized on any number of issues than “smart people,” such as whether Batman could defeat Superman.” They might be. But issues of the Batman / Superman variety do not represent strong culturally conflicted social issues.
So any effect whereby an amplification is expected to occur because of more knowledge and capability has nothing to amplify, so could not occur. “I’d need to see data across issues…” But Dan has presented data across a range of issues?? “…and to see something more akin to a longitudinal relationship (i.e., people get more polarized as they get more informed) to see Dan’s theory of causality as being likely.” Well, notwithstanding that I have my own quibbles about the relative importance of knowledge and cognitive capability, and any further studies can only be good, there are means to get insight as things stand. Per above, there is cultural knowledge in conflicted domains that is relatively static and inculcated from childhood, whereas other domains feature cultural knowledge that is relatively dynamic and acquired in adulthood via the normal means folks use to assimilate information. This is useful. And for cases like the latter, e.g. climate change, simply because the max level of info Dan is going to would typically be picked up over not many years at all, and (until pretty recent times) is mostly picked up in adulthood too, we should not expect to see a significant difference between a cross section (which just captures where people are in this short cycle – and most of the public will pick up next to nothing anyhow) and a study following folks through time. (A decades-long cycle, also covering maturation, could be a different prospect.) To theorize that people are already extra polarized, and this is what drives them to get more info, requires that we should see this extra polarization at the low-knowledge end of the graph, *unless* all of such people happen to have already fulfilled their knowledge quest.
Yet as the reservoir of those who essentially know nothing at all about the CC domain is by far the largest bucket, it seems incredibly unlikely that all of the most highly motivated folks would already have quenched their thirst, so to speak, which also suggests that no more could ever be forthcoming out of that pool (at least until a lot more people are born). “…but I think it’s plainly obvious that people are more polarized on issues they tend to care more about, independent of their levels of “smartness.” Well of course if they don’t care, they don’t care. But where they do care, it is by no means obvious that capability will make no difference, because the expression of polarization and the level of cultural identity or motivation that drives it, are different things. I don’t think Dan is suggesting that the more capable / knowledgeable initially care more than the less capable / knowledgeable. But upon acquiring domain knowledge, they are able to express their cultural position far more effectively to the outside world, and hence very likely within themselves too, which would tend to make their confidence in their stance higher. But beneath the reasoning layer this doesn’t necessarily mean the raw emotive conviction to cultural narratives is any higher, initially at least, although after many years of confidence it could conceivably feed back. “Do you think there is some uniform personality trait where “smarter” people are, on average, more inclined than less “smart” people towards strong opinions in association with ideological orientation across all topic domains?” Well notwithstanding my own issues with the word ‘smartness’, the relative weight of cognitive capability and knowledge, and also that this is best looked at via a group approach, i.e. the communal and emotive thinking processes which culture evokes, rather than focusing on individual thought processes, loosely yes I think this could be so. 
I guess I’d need more than just Dan’s stuff though (and it has been challenged here and there, weakly so far I think). Going from the cultural approach only, one would expect a knowledge relationship (but that doesn’t speak to cognitive capability). Nor would I call this a ‘personality trait’ (this term tends to imply something that presents very differently in different people, or not at all in some, but if it happens it would be essentially universal in nature if not in nuance). “That seems highly implausible to me.” I get that, but I haven’t grasped why. I think Dan has good data. This doesn’t mean his analysis of ‘why’ is right (indeed, regarding his great data on the CC domain which I have used myself, I think he has the ‘why’ on this very wrong, but that’s regarding a different issue to this one, and I have put forward a detailed alternative). But if you think his causal explanation is wrong you have to present a plausible alternative (I do not see such above), or challenge the data itself (the tests, his composite scale, etc etc) with specifics. “I can’t see why what people think of their own capabilities is relevant here” This is so basic, there’s not much point in responding to any other part of your comment (or even reading it, so I won’t bother). I’ll try once more. People are “polarized” because they feel strongly that they are right and that other people are wrong. People who are more ambivalent about their views are less polarized. If people are confident about their own arguments, it doesn’t matter how well they reason about conditional probabilities. You might think that a “smarter person” makes a better argument than a less “smart person,” and that they are “better” at making arguments because they are “smarter,” but that doesn’t directly impact the “less smart person’s” own view of the strength of their argument. And their level of polarization is a function of their own view of the veracity of their argument.
If that doesn’t get it done then I’ll just let it go. I basically just wanted to know your impression of Dan’s view, because I was discussing that with Jonathan. I’m not really particularly interested in your view of the causal mechanism in play. OK, I lied. I skimmed a bit (to see if you responded to the Batman vs. Superman reference). Anyway, look up “moderator vs. mediator variable.” “People are “polarized” because they feel strongly that they are right and that other people are wrong.” No. Your ‘because’ implies causation that is backwards. Polarization is due to strong emotive commitments (on one or both sides) that cause people to think they are right, and the opposing ‘others’ wrong. These commitments are subconscious, executing at deep brain-architecture level and hence bypassing or compromising our reasoning, being a feature of a bio-cultural system determining in-group / out-group whose evolutionary heritage began long before we were even human. “People who are more ambivalent about their views are less polarized.” No. People who are less polarized are more ambivalent about their views, because the deep mechanism above is not cutting in to compromise their reasoning. And for clarity, regarding both the above, this is specific to views about a domain of cultural conflict. The ‘polarization’ is not regarded by Dan as ‘any disagreement’, but the deep-rooted and emotively based biases that stem from the former. “You might think that a “smarter person” makes a better argument than a less “smart person,” and that they are “better” at making arguments because they are “smarter”, but that doesn’t directly impact the “less smart person’s” own view of the strength of their argument.” Well of course Dan’s thoughts have no impact on any of his sample persons or indeed the public at large! Likewise mine, given that despite my own issues noted above I view his general proposal as plausible. But what has this got to do with the price of fish?
The point is that he is using the scientific method to extract data from his sample, and then explain that data. His explanation is indeed a theory, but not only is it consistent with his own data, it’s consistent with the general state of understanding about group delusions (as enabled by cultural narratives), and how these work. As noted above, if you think his proposal is highly implausible, this is fine, but you need a reason why, or alternatively a challenge to the data, and your latest here still doesn’t provide these. “And their level of polarization is a function of their own view of the veracity of their argument.” No. Their level of polarization is a function of their level of emotive commitment per above, which is equivalent to their strength of cultural identity regarding the conflicted domain (e.g. they could be a core adherent in the domain, or pulled in for weaker support via cultural alliance with a different domain, etc). This commitment then manipulates or bypasses their reason to create their own view of the veracity of their argument. “I basically just wanted to know your impression of Dan’s view, because I was discussing that with Jonathan. I’m not really particularly interested in your view of the causal mechanism in play.” Well, it’s fine not to consider my view as important, but if you wanted any opinion at all from anyone to further your discussion with Jonathan, I would have thought that causal mechanism is exactly the critical thing here. As far as I can see you’ve raised no challenge to Dan’s data but are positing ‘confounding’ factors regarding cause, hence exploring causation is key. Afaics, to state that ‘polarization’ is merely a reflection of (opposing) views avoids all causes, unless you have a different term to capture where the views that Dan has clearly shown are causing fundamental bias actually come from. Reason is not compromised for no reason (no pun intended!)
In turn this informs plausible explanations for the levels of unreason as observed. Re mod / med, I see what you mean, i.e. variable relationships. But I’m not sure that adds anything unless we are past the above more basic issues. You are not currently proposing any ultimate cause anyhow. Although maybe you’re using a casual shorthand here, you essentially say that polarization is just equivalent to expressed views; hence there doesn’t appear to be anywhere that an ultimate causation can enter this relationship. jeffnsails850 | March 28, 2019 at 11:01 am | Joshua: “People are “polarized” because they feel strongly that they are right and that other people are wrong. People who are more ambivalent about their views are less polarized.” I don’t think that’s right, or gets to motivated reasoning. I would rephrase it: “People who hear something that matches what they want to be right feel strongly that it’s right and become more polarized. People who don’t have a strong preference one way or the other about whether they want something to be right are less polarized.” We saw that play out with the “limits to growth” debate – those who wanted it to be right really thought it was, those who didn’t pooh-poohed the whole thing. Those in the middle just didn’t buy it – IMO because catastrophism requires extraordinary evidence. Joshua | March 28, 2019 at 12:58 pm | The way I see it, emotional investment/strong identification is the primary driver for polarization. That is my argument in a nutshell against the notion that “smartness” is a primary driver (although I think it might play a moderating, not mediating, role, at least in some domains). IMO, strong emotional investment/strong identification and polarization aren’t the same thing.
Although certainly it can serve as a useful predictor, you can nonetheless have strong emotional investment in a particular identity orientation w/o necessarily engaging in associated motivated reasoning, and/or the associated biases/identity-protective behaviors. So, the way that I see it is that people who are strongly identified (high baseline potential for polarization) make arguments and tend to strongly believe that those arguments are true. Belief whether those arguments are impervious to external critique (as one moderator), is a large factor in polarization. To the extent that people don’t think that their arguments are impervious to external critique (as one moderator), you can have strong identification with less polarization. I see no reason why “smartness” would play a particularly significant role in that mechanism. “Less smart” people are just as inclined to believe in the infallibility of their arguments, if not more so than “smart” people (again, the DK effect). “Well arguing that Dan is one of the smarts himself and hence his argument is biased so as to place the smarts in a good light, is I think very weak indeed.” Not so much a “good light,” but a position that lacks the understanding of a more varied perspective and experience. I seem to recall that you have suggested that Dan’s work is somewhat prone to biases rooted in his residence in the academic elite (I recall excerpting such comments from you and posting them at his blog). There is a certain logic to such conjecture; we’re all prone to such biases. The question is how one controls for such propensities. I don’t particularly question Dan’s data, or the arguments he makes that are directly a function of those data. But I may, of course, question his conjecture about causalities – particularly when he speculates about causality by working from cross-sectional data. And that’s what I’m doing here. I don’t question the associations that he has found – I think they’re powerful and important.
I question his speculation about the causes behind those associations. “Not least because it places them in a *bad* light relative to what was generally assumed before, i.e. that smart folks by virtue of their reason should converge on whatever was the actual reasonable answer wherever it lay in the graph space, not diverge even more.” I do think that Dan’s work goes a long way towards providing an evidentiary basis for critiquing “deficit model” thinking. My questions (and I do consider them questions, not really conclusions) about the causal mechanisms he describes, often in quite certain language, are not meant to imply that I don’t think that his work provides solid evidence that contradicts “deficit model” thinking. Sure. There is a panoply of examples that we could come up with. What you’re describing is basic human nature. Although I’d say that your application of the rule is too universal. The dynamic you’re describing, IMO, plays out in varying degrees, among different people, on different issues. The challenge is to allow everyone, and in particular those we dislike or disagree with, to have the same level of complexity we’d want to grant to ourselves (and our own arguments). Certainly, don’t presume the worst from “otters” and then base your expectations on that premise. Importantly, a big part of that is to faithfully engage with a “naysayer.” If you don’t have someone else to perform that role, invent your own naysayer – one that your “otter” would accept as valid. If you don’t have the requisite knowledge to create that naysayer, do some research. “We saw that play out with the “limits to growth” debate – those who wanted it to be right really thought it was, those who didn’t pooh-poohed the whole thing. Those in the middle just didn’t buy it – IMO because catastrophism requires extraordinary evidence.” Sure. There is a panoply of examples that we could come up with. What you’re describing is basic human nature.
Although I’d say that your application of the rule is too uniform. The dynamic you’re describing, IMO, plays out in varying degrees, among different people, on different issues. jeffnsails850 | March 28, 2019 at 3:46 pm | “Importantly, a big part of that is to faithfully engage with a “naysayer.”” That’s a two-way street, and it’s rather pointless to engage in any fashion with some polarized people. Paul Ehrlich and his fans still say his book was right. What I was trying to get to, but didn’t articulate well, is that I think that middle ground is less swayed by the polarized of either camp than the polarized believe. What happens instead is a more “natural consensus” that the middle then adopts. Ehrlich made outlandish claims, Simon taunted him, and the sensible center ignored them both, because no nation stockpiled food or seriously discussed banning pregnancy (except China). We watched what people did rather than the debate between the polarized. You can see that with climate change. We’ve been told for 30 years the end is near, with international bigwigs meeting every couple of years to reaffirm their commitment to commit someday – which, in practice, has been a marginal reduction in emissions in developed nations with a massive increase in emissions in developing nations, for a net gain in global emissions. The world’s governments are clearly unimpressed with the polarized warm; it is just not true that the polarized on the other side are “preventing action” (not a lot of Republicans in China holding up wind and solar – which is allegedly cheaper, faster to scale, and functional). What’s happening instead is that the sensible center is watching, and that natural consensus has already formed around the simple fact that the nuttier demands of the warm really are nutty. “The way I see it, emotional investment/strong identification is the primary driver for polarization.” Okay.
Maybe it was the way you phrased it, but upstairs you seemed to imply causation coming from the views expressed (so backwards). “That is my argument in a nutshell against the notion that “smartness” is a primary driver (although I think it might play a moderating, not mediating, role, at least in some domains).” Well, from my reading of Dan’s position he hasn’t anywhere postulated that ‘smartness’ (notwithstanding the vagueness of this term, it’s better to refer to his charts) is a primary / root cause. As noted above, he claims that by better serving the emotive commitments it’s an amplifier of the resultant expression / behaviours. Plus given the deep level involved, this would be relevant to all genuinely culturally conflicted domains, not just ‘some domains’, albeit with potential level variance. Hence also as noted above, if there’s no strong emotional investment / identification involved despite disagreements, there’s nothing to amplify. “IMO, strong emotional investment/strong identification and polarization aren’t the same thing.” Well indeed folks can be so invested but happen not to be participating in conflict, or for instance a culture can happen at some point to have little opposition so there is not conflict / polarization generally despite emotional commitments, so… “Although certainly it can serve as a useful predictor, you can nonetheless have strong emotional investment in a particular identity orientation w/o necessarily engaging in associated motivated reasoning, and/or the associated biases/identity-protective, behaviors.” …to that extent yes.
But a) Dan is explicitly exploring domains that are in cultural conflict, and b) for such domains statistically many committed folks will be engaging in the conflict to some degree although others won’t, and c) the absence of conflict and associated polarization is circumstantial not fundamental in the sense that where the emotional investment exists, the polarization will always occur where sufficient challenge arises, and d) biases resulting from the emotional investment will still occur with or without significant opposition. “So, the way that I see it is that people who are strongly identified (high baseline potential for polarization) make arguments and tend to strongly believe that those arguments are true.” Yes. And indeed per the line I think that you’re attempting to pursue, this will be so for such people whether they are or aren’t ‘smart’. “Belief whether those arguments are impervious to external critique (as one moderator), is a large factor in polarization.” No. Because your syntax implies that the subject’s self-confidence in their arguments is causal (whether or not this is in the sense of a moderator), but as you rightly say in the line above, it is actually the emotive commitment / cultural identification that is causal, and the self-confidence is merely a reflection of that commitment. This is important, because the detailed arguments (plus self-perception of same) are products of reasoning, but the emotive commitment is that which sets the (unreasonable) goal for the (subverted) reasoning to support. And more smart or less smart people, have differing powers of reasoning. “To the extent that people don’t think that their arguments are impervious to external critique (as one moderator), you can have strong identification with less polarization.” But the people who (regarding a specific domain) have strong emotional investment / cultural identification, are *not* the ones who are flexible regarding their arguments relating to the domain. 
This is true whether they are more smart or less smart, and whether or not they are actively engaged in making arguments at any particular time (hence actively contributing or not to polarization). “I see no reason why “smartness” would play a particularly significant role in that mechanism.” Well for a start, this defies Dan’s data, unless you either challenge that data (i.e. the methodology / processing via which his charts are drawn), or provide an alternative explanation that explains the data, neither of which you’re offering. And for clarity to repeat per above, Dan is proposing smartness as essentially an amplifier of cultural bias effects, not an original cause of cultural bias. “Less smart” people are just as inclined to believe in the infallibility of their arguments, if not more so than “smart” people…” Due to the bias imposed by emotional commitment, the smarter and the less smart may similarly perceive their arguments as infallible (or at least very robust). But as noted above, their perceptions of their own arguments are not causal, they are a *symptom* of emotional commitment / cultural identity. Another parallel symptom is that (according to Dan and notwithstanding my issues per above which we’ll leave aside for now to keep stuff simple), cognitive capability will serve the emotive goals at the expense of the (normal level of) reasoning that would otherwise occur. Hence high cognitive capability will serve better, and lower cognitive capability will serve worse. So in the real world rather than just in their heads, the arguments of the smarter persons will actually be more robust, i.e. they will work better due to being more sophisticated, targeted, subtle, integrated etc, despite still being unreasonable. Over a long period the higher success rate this creates during conflict interaction may eventually reinforce the original commitments. 
But in any case while folks of all smartness levels perceive their arguments as robust, it is not this perception that is driving anything, because this perception is just another symptom. And Dan’s proposition on these lines matches the data in his charts. “Not so much a “good light,” but a position that lacks the understanding of a more varied perspective and experience. I seem to recall that you have suggested that Dan’s work is somewhat prone to biases rooted in his residence in the academic elite…” I don’t recall my comments you refer to. But suggesting there is bias should come with at least something about what it may affect and why. Otherwise this cannot help the researcher (or others seeking perspective) anyhow. If there’s an issue because the work ‘lacks the understanding of a more varied perspective and experience’, even a nascent expression of what this issue actually impacts would be useful. While you challenge his ‘smartness’ proposal, you haven’t (that I can see) proposed any specific flaws in the data collection or analysis, or how the suspected lack of perspective has impacted his thought chain in practice. You have essentially given an opinion that he is wrong, well fine he might be, but without any backup to chew on as to why this could be so in the context of his work / logic chain. “I don’t particularly question Dan’s data, or the arguments he make that are directly a function of those data.” Ah… okay. Well scratch some of above then… “But I may, of course, question his conjecture about causalities – particularly when he speculates about causality by working from cross-sectional data. And that’s what I’m going here. I don’t question the associations that he has found – I think they’re powerful and important. I question his speculation about the causes behind those associations.” Okay, well that’s fine too. 
But such questioning would have far more weight if, even at a speculative / feely level, you could suggest some alternative possibility(s) that would similarly match his data, or more about your own logic chain that makes you sceptical of his theory even if this scepticism hasn’t evolved to the point of deriving an alternate explanation (that matches the data, or says why it must be wrong). For instance, what possibilities might you expect from a longitudinal study that would represent a challenge? Some of the above makes me wonder if you’re challenging an argument he hasn’t made. Unless you can point to it, I don’t recall that anywhere he makes the argument (as at one point you imply) that ‘smartness’ is a root cause of polarization, only arguments that it is an amplifier of same and hence that cultural bias must already exist to be amplified. (This is notwithstanding the vagueness of the term ‘smartness’ and the added inappropriate nature of his PR titles such as ‘are smart people ruining democracy?’, but PR aside his ‘normal’ material is generally clearer). John Reid | March 27, 2019 at 5:07 am | I am an experimental physicist with a PhD in Upper Atmosphere Physics and many years of research experience. I have recently written a book on the failure of the Navier-Stokes equations of fluid dynamics to adequately deal with entropy and turbulence. Here is what I conclude about climate change: 1. There is no significant trend in global average temperature. The apparent trend is due to spurious regression. 2. The apparent correlation between global average temperature and atmospheric CO2 concentration is also spurious. 3. Items 1 and 2 are both a consequence of global average temperature being a centrally biased random walk with a red-noise variance spectrum. (Reid, 2017) 4. The bomb-test curve shows that about 80 percent of recent increases in atmospheric CO2 concentration are due to ocean upwelling and only 20 percent are due to human activity. 
The ratio of the total contribution of anthropogenic CO2 to the total in the ocean-atmosphere system since the beginning of the Industrial Revolution is only 1 percent. This would have no measurable effect. 5. Predictions of temperature increases based on numerical coupled ocean-atmosphere general circulation models (i.e. climate models) are meaningless because such deterministic models cannot account for turbulence, which is stochastic. Because of this, they include unrealistically large values of parameters such as eddy viscosity in order to remain stable and so can never faithfully emulate reality. 6. Climate modellers ignore the effect of subaqueous volcanic activity on ocean circulation despite the fact that 85 percent of volcanic activity occurs beneath the ocean and that heating from a major oceanic eruption would dwarf all other ocean processes. 7. Based on the above we can conclude that, at multidecadal time scales, unexplained variations in global temperature and in global mean sea level may be attributed to subaqueous volcanism and are unrelated to human activity. Reid, J. (2017). There is no significant trend in global average temperature. Energy & Environment 28 (3), 302–315. (copy attached) Reid, J. (2019). The Fluid Catastrophe. Cambridge Scholars Publishing Limited, Newcastle-on-Tyne (In Press). atandb | March 27, 2019 at 9:39 am | Taking this piece by piece. “Climate modellers ignore the effect of subaqueous volcanic activity”. I would agree with that in general, although I have read some papers that made an attempt so I know it is not true in every case. “85 percent of volcanic activity occurs beneath the ocean” Sounds about right, but I have not seen a reputable study that comes up with actual numbers, and am skeptical that such a study could account for areas beneath ice very well, so I would really have to look at their methodology to have any confidence in their credibility.
“heating from a major oceanic eruption would dwarf all other ocean processes.” I doubt that this is true. Heat given off of an eruption should be comparable given the elevation, type of eruption, etc., whether it was above the water or below it. Cite a credible source for this “fact” and I will be happy to admit that I am wrong. I looked up at one time how much heat is thought to be given off of the oceanic trench, and other volcanic activity. I am not confident that the numbers are correct given the difficulty of measuring the numbers, but given the size of the ocean and assuming they are somewhat right, volcanic activity would be much smaller than the CO2 effect prior to feedbacks. Steven Mosher | March 27, 2019 at 10:59 am | Sorry, temperature cannot be a random walk. matthewrmarler | March 27, 2019 at 3:05 pm | John Reid: 3. Items 1 and 2 are both a consequence of global average temperature being a centrally biased random walk with a red-noise variance spectrum. (Reid, 2017) To what do you attribute the apparent appx 1000 year periodicity in the temperature proxy data: ice cores, tree rings, etc? Peter Lang | March 28, 2019 at 7:43 am | I sent you an email a couple of days ago. Did you receive it? If you have changed email address, can you tell me your new one please? While I share Reid’s assessment that there’s no highly significant, truly secular surface temperature trend, nor dominant control by CO2, evident in the most trustworthy climate data, the prospect of submarine volcanism being the driver at multidecadal scales of variability remains unproven. As I began looking into the latter possibility more than a decade ago, the total absence of large, sharply defined ocean “hot spots” on the satellite-sensed surface during a ~1K rise in global temperatures dissuaded me from such conjectures.
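The “spurious regression” effect Reid invokes, and that the replies above dispute, is easy to probe numerically: fit a naive ordinary-least-squares trend to many driftless random walks and count how often the conventional t-test calls the slope “significant.” The sketch below is only an illustration of that textbook statistical phenomenon; the series length, trial count and 1.96 threshold are arbitrary choices of mine, not anything from Reid’s paper, and the exercise says nothing about whether the temperature record actually is a random walk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_steps = 500, 200

reject = 0
for _ in range(n_series):
    # Driftless random walk: cumulative sum of white noise (no real trend).
    y = np.cumsum(rng.standard_normal(n_steps))
    t = np.arange(n_steps)
    # Naive OLS fit of y on time, ignoring the serial correlation.
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # Conventional standard error of the slope, which assumes i.i.d. errors.
    se = np.sqrt(resid @ resid / (n_steps - 2) / np.sum((t - t.mean()) ** 2))
    if abs(slope / se) > 1.96:  # nominal 5% two-sided test
        reject += 1

print(f"nominally 'significant' trends: {reject / n_series:.0%}")
```

With these settings the naive test rejects far more often than its nominal 5 percent rate, because the i.i.d.-error assumption fails badly for integrated series.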
At best, one finds only weak, highly transient traces from presumed submarine “smokers.” Insofar as the longer-scale variability “being a centrally biased random walk with a red-noise variance spectrum,” the best available Holocene proxy data show significant spectral peaks that indicate a far-more-structured stochastic process, e.g., http://i1188.photobucket.com/albums/z410/skygram/graph1.jpg Wagathon | March 27, 2019 at 11:28 am | The problem is the word “belief.” Science isn’t about “belief.” It’s about facts, evidence, theories, experiments. It’s funny to see global warming alarmists, ‘getting out over their skis,’ as the saying goes, with their ’12-year-to-destruction’ predictions and beliefs (we’re already down to 11) when according to global warming alarmists going back to the ’90s, time has already come and gone when they predicted that our children would never again know what snow is. Robert Clark | March 27, 2019 at 11:41 am | Ms. Judith Curry I have explained my interpretation of the Antarctic ice core and how I interpret it relative to the Ice Ages. No one on this blog believes the new Ice Age has begun. What is your opinion? If you say I am completely out in left field I will say goodbye. RiHo08 | March 27, 2019 at 11:51 am | My belief is that science is a process with a set of assumptions, principles and tools that, when employed, particularly on a well thought-out question, may lead to discoveries. Discoveries then lead to some answer, although such an answer usually raises even more questions. Turbulent Eddie | March 27, 2019 at 12:17 pm | “I believe in science” is really an apostle’s creed for political action. To be sure, some aspects of climate change theory are quite likely, based on abstract reasoning – all else the same, if CO2 increases, the thermal emissivity of earth decreases and something must change to restore radiative equilibrium, most likely a global mean temperature increase.
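The radiative-equilibrium reasoning just stated can be put in toy form with a gray-body energy balance, S(1 − a)/4 = εσT⁴, solved for T. The numbers below – solar constant, albedo, and especially the two effective-emissivity values – are illustrative assumptions chosen only to show the direction and rough size of the response, not measured quantities:

```python
# Toy gray-body energy balance: S * (1 - a) / 4 = eps * sigma * T**4.
# All parameter values are illustrative assumptions, not measurements.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S, ALBEDO = 1361.0, 0.30  # solar constant (W m^-2) and Bond albedo

def eq_temp(eps):
    """Equilibrium temperature for a given effective emissivity."""
    absorbed = S * (1 - ALBEDO) / 4.0
    return (absorbed / (eps * SIGMA)) ** 0.25

t1 = eq_temp(0.615)  # baseline effective emissivity (assumed)
t2 = eq_temp(0.610)  # slightly lower emissivity, e.g. from added CO2
print(f"{t1:.2f} K -> {t2:.2f} K, change {t2 - t1:+.2f} K")
```

Since T scales as ε^(−1/4), lowering the effective emissivity by under one percent raises the equilibrium temperature by roughly half a kelvin in this toy model – which is the qualitative point, not a prediction.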
But global mean temperature doesn’t directly factor in many of the non-validated claims about climate change, so we have to wait a hundred years to see – not very reassuring, and there’s no control on the other aspects of climate, but at least then we’d have at least ONE observation to constrain theory, which we don’t have now. The average lay-person believes in gravity though they can’t describe the equations. They believe because gravity is a theory that continues to verify with observations of experiments every day. Climate models, on the other hand – of what will happen in a hundred years with or without additional CO2 – have NO validating observations beyond perhaps an increase in global mean surface temperature, a decrease of global mean stratospheric temperature, and an increase in wintertime Arctic temperatures. And some of these observations may be coincidental: global mean temperature may have increased in part due to more frequent El Nino events and longer term increased insolation; some stratospheric temperature decrease might also be due to decreased volcanic eruptions since Pinatubo, and some portion of Arctic winter warming may be due to natural dynamic fluctuation in Arctic sea ice. And even there, the attribution of some portions of these changes remains uncertain. “Remember, then, that scientific thought is the guide to action; that the truth at which it arrives is not that which we can ideally contemplate without error, but that which we can act upon without fear; and you cannot fail to see that scientific thought is not an accompaniment or condition of human progress, but human progress itself.” William Kingdon Clifford, The Common Sense of the Exact Sciences (1885) Much science is not about incontestable facts – and Earth system science is a prime example. Science here is about synthesis of discrete observations into a consistent picture of the whole through a process of abductive reasoning.
“Abductive reasoning (also called abduction, abductive inference, or retroduction) is a form of logical inference which starts with an observation or set of observations then seeks to find the simplest and most likely explanation for the observations. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it.” Wikipedia Science here is an open-ended inquiry – an investigation of the clues – facilitated by the creative and productive capacities of people, which are central to the fundamental advancement of science. Something that no machine I have ever met is capable of. Climate science blogs have a far different objective. Here it is to rally observations – occasionally of the most facile kind – to a tribal narrative. Both varieties. The goal here is to dispute the abductive inferences of rigidly defined antagonists. Outside of this strange dynamic Earth system science progresses in fascinating ways – to be accepted or rejected on a word or a name with no hint of the respect due to the real instrument of human progress. The remedy to this disorder is to remain open minded, curious and somewhat humble in the face of the complexities of the system and in the absence of definitive observations. Certainty is an impossible condition – yet paradoxically here they are in all their implacable certainty. Nick Darby | March 28, 2019 at 9:25 am | Well said. I was considering my own comment, but this rather obviated the need. Pingback: Why I don’t ‘believe’ in ‘science’ | Watts Up With That? Excellent article by Robert Tracinski. Like many things today, when the Left asserts something, look to the opposite to be closer to the truth. E.g. when they complain about Trump being a disrupter — they are purposefully trying to be divisive themselves. E.g. when they complain about Trump colluding with Russians — they are actually doing the wishes of Russia by trying to bring the country to its knees with plans like the GND.
In this case the people who are listed as “believing in Science” are actually those who do NOT believe in Science — because they have all endorsed positions (e.g. AGW) that have discarded traditional Science protocol. For more information on this see my current WUWT post “https://wattsupwiththat.com/2019/03/25/global-warming-science-or-political-science/”. popesclimatetheory | March 31, 2019 at 4:14 pm | Thanks John: I read the posting: I enjoyed all of it and especially this at the end! The good news is that I just heard from the head person at the APA, and she said (based on inputs received from me and others) that the APA has decided NOT to enact their policy endorsing renewable energy. Kudos to them: that’s a wonderful, positive development! Kudos to you, John, you have made a difference, Thanks! Alex Pope Pingback: Why I don’t ‘believe’ in ‘science’ – Enjeux énergies et environnement Steven Mosher | March 27, 2019 at 11:19 pm | Sorry, Judith, but this piece gets it all wrong. Here is what we are confronted with. We are confronted with a problem in which very few people UNDERSTAND the science. And yet, even though they don’t understand the science–and proudly claim they are not scientists–they DISBELIEVE in the science. They dont understand and disbelieve. For example, they disbelieve in temperature records and in some cases actually believe and positively claim they are a hoax. Yet, they dont understand the basics of the science. They dont understand, and disbelieve. they dont understand and believe it is a hoax. On one hand they demand to see all the data and all the code for this, and yet refuse to acknowledge that data and code has been available for years. Available and open for inspection. Available and open to find errors, bugs, and evidence of malfeasance. They could understand if they tried or asked for help, but rather than doing this, they choose to believe critics of the record, who likewise don’t understand the science involved.
The recent Bates debacle is a clear case of this. Without understanding the science of what Karl did, you and others believed Bates’s charges. Charges which he later recanted. Without examining all the details and UNDERSTANDING what was done, you and others disbelieved the science. The Mitre committee examined the science. People who understood the science found no malfeasance. Where are the disbelievers now? Still disbelieving and utterly unaccountable for their disbelief. The standard of understanding the science is important. To show you understand the science you have to be able to give the strongest account of the science in your own words. This is evidence you understand it. You have to give evidence that others who do the science acknowledge your understanding. This is done by publishing in the science. To prove you understand a science, you actually have to show us that you can do it yourself. Absent these markers of understanding, you have no standing to say you “understand” a science. You are relegated to merely believing it. Most lay people dont understand the science. They will never understand the science. I do not understand the science of orbital mechanics. Yet I believe that Roy Spencer does when he calculates his temperature record. Am I irrational in believing in his competence? Nope. I dont understand the science of cancer and yet I believe the science that tells me my smoking may cause cancer. I dont understand the science of how you forecast hurricane tracks. I could not do it myself. Yet I believe your science of forecasting. In fact, the vast vast majority of science you encounter, you dont really understand. You could not do it yourself and prove to us that you understand it. Yet you believe it. And rationally so. There are only two rational choices for people who dont understand the science: 1. Believing in what experts tell you 2. Suspending Judgement. Note that disbelieving the science or believing in a hoax is NOT a rational position.
If you dont understand the science, if you cannot do the science yourself, disbelieving is not a rational option. Believing its a hoax is not a rational option. Both of these choices, disbelieving and believing its a hoax, ground their warrant in ignorance. JCH | March 27, 2019 at 11:27 pm | edimbukvarevic | March 28, 2019 at 1:06 am | Mosher uses the word science here “not to describe specific methods or theories, but to provide a badge of tribal identity. Which serves, ironically, to demonstrate a lack of interest in the guiding principles of actual science.” What’s even worse than people who “don’t understand the science” is those who really don’t understand how sound science is done, yet preach that disbelieving their flimsy conception of what it shows is “not a rational option.” It’s a favorite tactic of intellectual charlatans. Adding a #3 – watching how leaders react to extraordinary claims… what they actually do. Answer so far is… meh. Same answer is given in countries with skeptics and without. In court cases, the different sides bring in expert witnesses who disagree with each other. You are trusted, life-and-death trust, to pick the right experts. In climate science there are expert witnesses who disagree. The alarmists and the media only listen to one side and promote a verdict that does not consider any testimony that does disagree. Many of us seek out the experts on the other side and listen to both sides. Dr Neil Frank is one on the other side, there are many more, some who do not testify openly because their jobs depend on not disagreeing. https://jennifermarohasy.com/2019/03/day-3-peter-ridd-versus-the-university-and-state-funded-media-stuck-in-denial/ Jennifer Marohasy is one of the experts on the other side. There is another side and it is gaining ground fast in some places. China and India and Russia have never signed up for the alarmism. China does some token stuff, but a lot in a huge country is really next to nothing.
But they are getting rich building and selling stuff to the alarmist countries who have destroyed their own abilities to build anything. You are trusted, life and death trust, to pick the right experts. This refers to regular people who are picked for civil and criminal trials. They may be expert at something but they are not required to be. If people who are not certified experts cannot form a correct opinion, our whole justice system is flawed. The media and alarmists say only the 97% should be trusted and listened to. In a fair trial, the other side will bring in better qualified experts because they have many thousands to choose from, not just the 75 of 77 that made up the 97%. Steven, for goodness’ sake quit smoking. Cancer is not a mystery. Mutations caused by oxidative antagonists increase the odds of cancer. About 90% of lung cancer is from smoking. Most lung cancer is discovered too late and is terminal (about 80%). But smoking also significantly increases risks for bladder cancer, throat cancer, heart disease, stroke, COPD and diabetes. I am not going to put up links to the applicable seminal studies since I don’t think you contest that science. Why don’t you quit for Earth Day 2019? Just do it. Read some Jennifer Marohasy stories about how climate records have been adjusted to promote the alarmist story. Experts do make mistakes and experts do promote fraud, some do, and there are big payoffs for the ones that make the correct alarmist mistakes and sometimes severe punishments for the people who disagree. https://jennifermarohasy.com/ journalpulp | April 17, 2019 at 3:56 am | Here is what we are confronted with. We are confronted with a problem in which very few people UNDERSTAND the science. And yet, even though they don’t understand the science–and proudly claim they are not scientists–they DISBELIEVE in the science. Sorry, Steven, but your comment gets it all wrong again. Here is what we are confronted with.
We are confronted with a problem in which very few people UNDERSTAND the proposed solutions. And yet, even though they don’t fully understand the proposed solutions — and proudly proclaim they are not politicians or policy-makers — they DISBELIEVE that rational people cannot accept their proposed views. The sheer amount of fossil fuel and industry — and environmental degradation — required in order to produce so-called renewables, for instance. Excerpting the piece that “gets it all wrong”: “I believe in science” is almost always invoked these days in support of one particular scientific claim: catastrophic anthropogenic global warming. And in support of one particular political solution: massive government regulations to limit or ban fossil fuels. But these two positions involve a complex series of separate scientific claims—that global temperatures are rising, that humans are primarily responsible, that the results are going to be catastrophic for human life, that rising temperatures can be halted—combined with a series of economic and political propositions. For example: that action to ban fossil fuels would be more efficacious than using the wealth made possible by fossil fuels to help humans adapt to future climatic changes. –they DISBELIEVE in the science. They dont understand and disbelieve. Many believe in an alternative science (even though they don’t understand it). Dr. Strangelove | March 28, 2019 at 4:26 am | Foundation of quantum mechanics The Bohr-Einstein debate deals with the foundation of quantum mechanics, in particular the nature of reality. This is the domain of metaphysics but Bohr and Einstein believe we can discover the nature of reality through physics. In a nutshell, Einstein asserts realism and locality. Bohr asserts non-realism and non-locality. These things can be simply explained using the EPR thought experiment. Imagine two entangled particles moving away from each other.
Quantum mechanics requires that entangled particles are correlated. It means that if one particle has a spin s, the other particle must have a spin –s (the opposite of s designated by a negative sign). The particles started at a common origin and are now lightyears apart. Einstein asserts that they acquired their correlated spin from the start. Hence, when we measure their spin now that they are lightyears apart, we observe s and –s because they have those from the start, including the orientation of the spin axis. Bohr asserts the particles have no definite spin and no definite spin axis until we observe them. Before observation, the particles are in superposition where all possible spins and spin axes exist. Only upon measurement are a definite spin and a definite spin axis randomly selected from all the possibilities. This is called non-realism because in a sense the spin is not real until we observe it. Einstein invented the EPR thought experiment to point out the problem with Bohr’s non-realism. Since the selection of spin and spin axis are random, how can the particles have correlated spin when they are lightyears apart? It should have a random outcome. Sometimes they are correlated, sometimes they are not. Experiments show the particles are always correlated. To explain this, the particles must be communicating with each other instantaneously and faster than light to make sure they have correlated spin when we observe them. This is called non-locality because spatial distance appears non-existent to entangled particles. They can coordinate their action as if they are not separated at all. Einstein is against non-locality because it violates his theory of relativity that asserts information cannot travel faster than light. Non-locality is a consequence of non-realism in the EPR experiment. If Einstein’s realism is true, then there is no need for non-locality.
The particles do not have to communicate because they already have correlated spin from the start before they were separated. The Bohr-Einstein debate is profound but other physicists saw it as a philosophical debate. It could not be settled by experiments because Bohr and Einstein both agree with the experimental results. But they attribute particle correlation to two conflicting realities. Other physicists think they are debating something unobservable. Pauli remarked, does it really matter how many angels can dance on the head of a pin? This view changed when Bell proposed an experiment to test realism and locality. Has Bell found a way to count the angels? Bell’s theorem Bell assumed that Einstein’s realism implies there are 3 independent spins in x, y and z axes. This assumption is wrong. I will explain later why but for now let’s follow Bell’s argument. Bell applied probability theory to determine the probability of similar spin in two different axes. Since there are two possible spins (s or –s), the probability P is similar to getting two heads or two tails in two consecutive coin tosses: P = 0.5 (0.5) + 0.5 (0.5) = 0.5 The same probability applies to similar spin in two different axes: P (x = y) = 0.5 P (y = z) = 0.5 P (x = z) = 0.5 ∑ P = 0.5 + 0.5 + 0.5 = 1.5 In probability theory, the total probability must be 1 but in Bell’s theorem it is greater than 1. This is Bell’s inequality: ∑ P > 1 For total probability to be 1, each of the 3 probabilities must be 1/3. Hence, Bell’s inequality can be expressed as probability of similar spin in two axes: P (x = y) = P (x = z) = P (y = z) > 1/3 Bell argued that if Einstein’s realism is true, then Bell’s inequality must be observed. On the other hand, if Bohr’s non-realism is true, then Bell’s inequality will be violated. Probability theory will prevail over Bell’s theorem. Hence: ∑ P = 1 P (x = y) = P (x = z) = P (y = z) = 1/3 Experiments had been conducted and they violated Bell’s inequality. 
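The counting behind the inequality as presented above can be checked by brute force: take the stated local-realist premise that each particle carries three predetermined binary spins, and enumerate all eight equally likely assignments. This sketch only verifies the arithmetic in the comment – each pairwise agreement probability is 0.5, summing to 1.5 – not the physics of the experiments:

```python
from itertools import product

# Local-realist premise as stated above: each particle carries
# predetermined spins (+1 or -1) on the x, y and z axes.
assignments = list(product([+1, -1], repeat=3))  # 8 equally likely cases

def p_equal(i, j):
    """Probability that the predetermined spins on axes i and j agree."""
    return sum(s[i] == s[j] for s in assignments) / len(assignments)

p_xy, p_xz, p_yz = p_equal(0, 1), p_equal(0, 2), p_equal(1, 2)
print(p_xy, p_xz, p_yz, p_xy + p_xz + p_yz)  # 0.5 0.5 0.5 1.5
```

The enumeration reproduces P(x = y) = P(x = z) = P(y = z) = 0.5 and the sum of 1.5 used in the argument above.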
The experimental results obeyed the predictions of probability theory. Today the consensus among physicists is that Bohr's non-realism and non-locality are true. This is known as the Copenhagen interpretation. Is the science settled?

Why Bell's theorem deviated from probability theory

I use set theory to explain why Bell's theorem deviated from probability theory. I create sets A, B and C that contain similar spin in the x y, x z and y z axes respectively:

A (+x = +y, -x = -y)
B (+x = +z, -x = -z)
C (+y = +z, -y = -z)

Plus and minus signs refer to the two possible spins (s or –s) in each axis. Now I replace the elements of the sets with their respective probabilities:

A (0.25, 0.25)
B (0.25, 0.25)
C (0.25, 0.25)

Add the probabilities to obtain the probability P of each set:

P (A) = P (x = y) = 0.25 + 0.25 = 0.5
P (B) = P (x = z) = 0.25 + 0.25 = 0.5
P (C) = P (y = z) = 0.25 + 0.25 = 0.5
P (A) + P (B) + P (C) = 0.5 + 0.5 + 0.5 = 1.5

The equations satisfy Bell's inequality, ∑ P > 1. The above is Bell's theorem expressed in set theory.

Now I derive probability theory from Bell's theorem using set theory. I create set D that contains similar spin in any two axes. Set D is the union of sets A, B and C:

D = A U B U C
D (+x = +y, -x = -y, +x = +z, -x = -z, +y = +z, -y = -z)

The probability of set D is the sum of the probabilities of sets A, B and C:

P (D) = P (A) + P (B) + P (C)
P (D) = 0.5 + 0.5 + 0.5 = 1.5

The basic principle of probability theory: if A is one outcome and D is the set of all possible outcomes, then the probability of A is:

P (A) = A/D

Apply this principle to obtain the probabilities of sets A, B and C:

P (A) = A/D = 0.5/1.5 = 1/3
P (B) = B/D = 0.5/1.5 = 1/3
P (C) = C/D = 0.5/1.5 = 1/3
P (A) + P (B) + P (C) = 1/3 + 1/3 + 1/3 = 1

The equations satisfy the condition of probability theory that the total probability equals 1. Hence, I obtained probability theory from Bell's theorem by adding the mutually exclusive sets A, B and C.
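The set-theory bookkeeping can be sketched the same way. Again, the code below simply reproduces the post's numbers (the set names A, B, C, D and the normalization by P(D) follow the text; whether that normalization step is physically justified is exactly the point under dispute):

```python
# Elements are (label, probability) pairs, following the post's sets,
# where each element of a set carries probability 0.25.
A = {("+x=+y", 0.25), ("-x=-y", 0.25)}   # similar spin in the x and y axes
B = {("+x=+z", 0.25), ("-x=-z", 0.25)}   # similar spin in the x and z axes
C = {("+y=+z", 0.25), ("-y=-z", 0.25)}   # similar spin in the y and z axes

def prob(s):
    """Probability of a set: the sum of its elements' probabilities."""
    return sum(p for _, p in s)

# Bell's-theorem total: P(A) + P(B) + P(C) = 1.5 > 1
total = prob(A) + prob(B) + prob(C)

# Treat D = A U B U C as the full outcome space and renormalize each set by P(D):
D = A | B | C
normalized = [prob(s) / prob(D) for s in (A, B, C)]

print(total)            # 1.5
print(normalized)       # each entry is 0.5/1.5, i.e. about 0.333
print(sum(normalized))  # totals 1 after renormalization
```

Treating A, B and C as mutually exclusive slices of D is what brings the total back to 1; treating them as overlapping (the three-circle Venn picture) is what yields the 1.5.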
This explains why Bell's theorem deviated from probability theory: Bell's theorem treated the sets as mutually inclusive, which means there are common elements in the sets representing similar spin in all 3 axes (x = y = z). The set operations I employed can be represented graphically with a Venn diagram. Bell's theorem is represented by 3 overlapping circles, with the common elements at the intersection of the 3 circles. Probability theory is represented by 3 circles with no overlap.

Why Bell's assumption is wrong

Bell assumed that Einstein's realism is equivalent to the reality of 3 spins in 3 spin axes. This condition is not necessary for realism. For example, Earth's spin satisfies realism in the sense that it is real whether or not we are observing it; Earth's spin does not disappear when we stop looking at it. This is Einstein's realism. It is just common sense. Earth's spin is real, but it has only one spin axis. If we shrank Earth to the size of an atom, Bell's theorem would be invalid for the atomic Earth's spin. The probability of similar spin in two spin axes is zero:

P (x = y) = 0

This contradicts Bell's inequality:

P (x = y) > 1/3

This is not due to experimental results or to non-realism of spin; the contradiction is purely mathematical. Bell's theorem does not apply to one spin in one spin axis.

Bell's assumption would be correct if particles had 3 spin axes. However, there is experimental evidence against 3 spin axes of particles. Entangled particles have opposite spins on the same spin axis. If two spin axes existed in entangled particles, it would be possible to observe them by measuring the spin in the x axis of particle A and in the y axis of particle B. Since they are entangled, if a spin in the x axis is observed in particle A, then particle B also has a spin in the x axis. If a spin is then observed in the y axis of particle B, this proves it has two spin axes: x and y. The measurement must be simultaneous for the two entangled particles.
It is impossible to measure two spin axes of one particle simultaneously. Experiments show only one spin axis for entangled particles.

Pauli's exclusion principle is another piece of experimental evidence against 3 spin axes. Experiments obey the exclusion principle, which says particles cannot have the same four quantum numbers. The fourth quantum number is spin. There are only two possible spins, designated s or –s (the negative sign means they are opposites). If there were 3 spin axes, then there would be two possible spins for each axis. Instead of 2 microstates (2 spins x 1 axis), the particle would have 6 microstates (2 spins x 3 axes). Particles could then have the same 4 quantum numbers as long as they had different spins in the y or z axis, equivalent to 5th and 6th quantum numbers. But there is no such thing: experiments show only 4 quantum numbers, hence only one spin axis.

Therefore, Bell's theorem falsified the realism of three spin axes of particles. It neither proved nor falsified the realism of one spin axis. Note that Einstein was not arguing for the reality of 3 spins; the debate is realism vs. non-realism of 1 spin. Next I will introduce the Strangelove spin algebra and use it to modify Bell's theorem and finally resolve the Bohr-Einstein debate.

calvertn | March 28, 2019 at 8:17 pm |

I tend to agree with the writer that science is not something you "believe in" like a religion. But that having been said, to be practicable, science does require that a lot of information be taken on trust. We can't *all* be in *every* forest for *all eternity* to hear *every* darn tree fall, or verify that their falls *all* made a sound (per George Berkeley). (Bohr and Einstein debated this one too.) That is why ice core data is most valuable.
The climate of the earth's regions mixes together and dumps evidence into the oceans with the runoff from land; the oceans then mix the land results with the ocean results, evaporate, and carry the evidence in water vapor to the respective poles, where the data is stored in ice for up to 800 thousand years for us to discover and begin to understand. Every tree that fell generated data that is mixed into these records.

Russell Seitz (@RussellSeitz) | March 28, 2019 at 10:36 pm |

Tracinski should invite Dr. Strangelove to join him the next time he goes on Rush Limbaugh's show, to challenge Bell, Bohr and Einstein to a tag-team match. Rush is living proof that all the science advisors in the world can't save an ignoramus from making a fool of himself time and time again: https://vvattsupwiththat.blogspot.com/2019/03/are-exxon-and-juan-valdex-related.html

cerescokid | March 28, 2019 at 11:05 pm |

Are you doing ok? It'll only get worse when the AMO flips and the Arctic sea ice recovers. Glaciers are already advancing in Greenland and Iceland. Arctic sea ice volume is ballooning. A turnaround is near. Then what's left for the apocalyptic scenario? Here's a clue. Zip. Nada. Zilch. Nothing to clutch on to for a touch of sanity. Observational data trumps it all. As to the models? Their days are numbered.

"Trumps it all", Ceresco? Neither you nor the West Wing can redact what the geophysicists and their satellites see, and the rest of us ponder as the science evolves. https://vvattsupwiththat.blogspot.com/2019/03/march-17-2019-rick-perry-has.html

And all the models in the world won't stop the AMO from doing what it's going to do. I'm just waiting for all the excuses on why so many trends are reversed in the next few decades.
Hillsborough County, New Hampshire Insurance Defense Lawyers

Find Hillsborough County, New Hampshire Insurance Defense Attorneys by City: Lyndeborough, West Peterborough

Robert J. Meagher
Hillsborough County, NH Insurance Defense Lawyer
(603) 669-8300
W Brook St
Insurance Defense, Medical Malpractice, Municipal and Personal Injury
Representing individuals, businesses, municipalities and institutions in civil litigation for more than two decades.

Insurance Defense Attorneys in Nearby Cities

The OneCLE Lawyer Directory contains lawyers who have claimed their profiles and are actively seeking clients. Find more Hillsborough County, New Hampshire Insurance Defense Lawyers in the Justia Legal Services and Lawyers Directory which includes profiles of more than one million lawyers licensed to practice in the United States, in addition to profiles of legal aid, pro bono and legal service organizations.
Three Righthaven copyright suits closed, one opened

Thursday, March 24, 2011 | 10:32 a.m.

Related Document (.pdf): Dogster court exhibit

For Las Vegas copyright enforcer Righthaven LLC, today's news so far is pretty much the same story as Tuesday: A new lawsuit has been filed, while two more lawsuits were dismissed after Righthaven failed to show the defendants were served.

The latest defendants to be hit with a lawsuit alleging copyright infringement are Dogster Inc. and two individuals allegedly associated with the website dogster.com, Ted Rheingold and Maria Goodavage. This is another lawsuit over the Denver Post TSA pat-down photo and Righthaven again demands in Wednesday's complaint in U.S. District Court for Colorado that the defendants pay $150,000 in damages and that their website domain name be forfeited to Righthaven.

A court exhibit indicates the photo was posted on the website with some commentary suggesting trained dogs could work as an alternative to intrusive TSA pat-down searches. The website post did not credit the Denver Post as the source of the photo, the exhibit shows. A request for comment was left with Dogster.

This suit lifts the Righthaven lawsuit tally to 252 overall since March 2010; and to 48 over the pat-down photo.

Separately, a Righthaven lawsuit over the pat-down photo against Tamer Mahrous was closed after the parties reached a confidential settlement.

Two more Righthaven lawsuits were dismissed without prejudice this week by U.S. District Judge Gloria Navarro. They involved material from the Las Vegas Review-Journal and defendants Michael Easton and Puget Sound Radio in one case; and defendants Ezekiel Kennard, Marc Lee and Serkadis.com in another case. They were dismissed after Righthaven didn't show the defendants were served with the complaints against them. Unless Righthaven can find a way to revive these lawsuits, it appears its investment in these lawsuits in terms of legal fees and court costs will have to be written off.
Righthaven apparently had trouble tracking these defendants down to serve them because -- unlike a business with a street address -- some of these defendants were probably guys running websites from their apartments or while drinking coffee at Starbucks. The lawsuit against Puget Sound Radio, for instance, called it "an entity of unknown origin and nature." A writer for the Technology Review published by MIT is commenting on last week's decision by a federal judge to dismiss the Righthaven lawsuit against the Oregon nonprofit the Center for Intercultural Organizing on fair use grounds. Christopher Mims asks about Righthaven in the piece: "In its over-reaching, has the law firm set a precedent that could damage the ability of content creators and news gatherers to control how their works are used, and to achieve fair compensation for their distribution?" Righthaven hasn't yet publicly responded to the motion for dismissal filed by Denver Post TSA pat-down photo defendant Brian D. Hill. In response to reader comments, here's why the Las Vegas Sun has not posted a link to the court exhibit showing Hill's alleged infringement of Righthaven's copyright. The problem with the post by Hill -- which was archived by Righthaven and filed with the court -- is that it has the same headline as the deadseriousnews.com post of the photo. This headline and the accompanying parody story are objectionable as they suggest a passenger was arrested for becoming sexually aroused during a TSA pat-down. The actual language in the headline, however, is too graphic for us to post. With the mysterious deadseriousnews.com site apparently the origin of many of the Righthaven lawsuits, don't be surprised if Righthaven tracks down and sues whoever is behind deadseriousnews.com -- a website identified on Jan. 28 or perhaps earlier as appearing to be the source of many of the alleged infringements. 
The TSA photo also appears on several news sites that attribute it not to Righthaven or the Denver Post, but to The Associated Press or "Associated Press/Denver Post." That's because The Associated Press distributed the photo to news outlets. We found the photo Wednesday on news sites including foxcharlotte.com, inforum.com, annarbor.com, deseretnews.com, heraldnet.com, washingtontimes.com and msnbc.msn.com, among others. Once serious discovery gets under way in the Denver Post TSA pat-down photo lawsuits, defense attorneys and their investigators will likely take a hard look at whether the distribution of the photo by the AP contributed to it going viral online and how deadseriousnews.com emerged as the apparent source of many infringements. It's probably no coincidence that the deadseriousnews post is dated Nov. 21, the same day the photo appeared with an AP story on several news sites. Some sites indicate the AP distributed the photo even earlier, on Nov. 18, the same day it was published in the Post. As Hill points out in his motion for dismissal, William Dean Singleton, chairman of the board of directors of The Associated Press, is also chairman and CEO of Denver Post owner MediaNews Group. No one is suggesting Singleton knows anything about the AP distributing the photo that apparently was provided to the AP by the Denver Post. But it's only a matter of time before an attorney tries to see if dots connect between the Denver Post, the AP, deadseriousnews.com and many of the Righthaven lawsuits over the pat-down photo. This could bolster the "implied license" theory -- that is the Denver Post didn't just encourage readers to share the photo. It shared the image with the world by providing the photo to the AP and soon thereafter the image became an iconic symbol of resentment against new intrusive TSA pat-down procedures. In the meantime, Toronto Star Newspapers Ltd., Metroland Media Group Ltd. and Torstar Corp. 
have yet to respond to a Righthaven lawsuit over the photo. That suit says the photo was posted without authorization on the thespec.com website for the The Hamilton Spectator newspaper in Hamilton, Ontario. In that case, again, the photo was credited to the AP.
Top Calera Drug Crime Lawyers - Oklahoma

Nearby Cities: Durant, Colbert, Caddo, Kingston, Bokchito
Related Practice Areas: DUI / DWI, Criminal Defense

Nichols Law Firm
Drug Crime Lawyers Serving Calera, OK (Wewoka)
Compassionate, Results-Driven Personal Injury, Defense and Family Lawyers
Are you dealing with physical pain, emotional stress and financial pressures after a serious car accident? Have you been charged with a crime that could take you away from your family for years or have other life-changing consequences? Are you considering divorce but overwhelmed by decisions you must make on issues such as...

Ball Morse Lowe, PLLC
Drug Crime Lawyers Serving Calera, OK (Norman)
Ball Morse Lowe, PLLC, with Oklahoma, Cleveland, and McClain county offices, is a law firm dedicated to providing quality legal services in a cost-effective manner. Our lawyers provide focused solutions tailored to achieve clients' particular objectives. They listen closely and respond promptly with sound advice and practical solutions. In the area of estate planning, we provide a full spectrum...

Douglas J. Smith Law Office, P.C.
Douglas J. Smith, attorney for the Smith Law Office, P.C., has a law office in Norman, Oklahoma, that provides clients with the best criminal defense and other legal services throughout the central Oklahoma area, which primarily includes the Cleveland County, McClain County and Oklahoma County Districts. The law office is conveniently located across the street, southwest of the Cleveland County...

Swain Law Group
At Swain Law Group in Norman, we provide criminal defense representation to people throughout Oklahoma. We handle state and federal cases, and we have earned a reputation for success in cases involving serious charges such as homicide, sex crimes and drug trafficking. We pride ourselves on our versatility. While some criminal defense lawyers immediately try to get a deal and others charge...

B. Hall Law, LLC

Henry + Dow

Josh Lee & Associates
Drug Crime Lawyers Serving Calera, OK (Oklahoma City)
Josh Lee & Associates of Oklahoma City, Oklahoma, is a criminal defense, DUI/DWI defense and general civil litigation firm that has been providing trusted counsel and aggressive legal representation statewide for more than a decade. We represent adults and juveniles who have been accused of various crimes, including but not limited to drug crimes, white collar crimes, violent crimes, theft...

The Law Office of Tiffany N. Graves, PLLC
Drug Crime Lawyers Serving Calera, OK (Tulsa)
Attorney Tiffany N. Graves helps clients in Tulsa and surrounding counties with their family law issues. Whether you are dealing with divorce, child custody, spousal support, paternity or another related family law issue, Ms. Graves wants to help you and your family. An experienced advocate with a dedication to family law, Ms. Graves can deliver the...

Robinson Hoover & Fudge, PLLC
Robinson Hoover & Fudge, PLLC is a general partnership engaged in the practice of law. The general partners are Richard A. Robinson and Michael R. Hoover. The firm is a member of the National Association of Retail Collection Attorneys, the Conference on Consumer Finance Law, the National Attorney Network and the Oklahoma City Commercial Law Attorneys Association. Presently the firm has...

McCorkle Law, PC
I am attorney Shelly McCorkle, and from my law office in Oklahoma City, Oklahoma, McCorkle Law PC, I defend the rights and liberties of those facing serious traffic violations, as well as those charged with any type of felony or misdemeanor offense. Serving clients throughout the greater Oklahoma City metro region and the surrounding communities, including Norman, Guthrie, El Reno, and many more,...

William H. Campbell, Attorney at Law
William H. Campbell, Attorney at Law, has more than three decades of experience defending more than 1,000 individuals charged with crimes in Oklahoma City and throughout the surrounding areas. Mr. Campbell's passion and mission are to protect the rights of his fellow citizens in both state and federal courts. He understands how confusing and intimidating an arrest or criminal charge can be, and he...

Burton Law Group, P.C.
With a history that dates back to 1992, the Burton Law Group in Oklahoma City, Oklahoma, has served Oklahomans for more than 27 years. During that time, our law firm has become the trusted source of high-quality counsel and aggressive advocacy for those across the region who are dealing with serious and complex legal challenges involving any of the following: workers' compensation, Social Security...

Shoemake Law Firm

J. Patrick Quillian, P.C.

The Law Offices of Adam R. Banner, P.C.
People in Oklahoma City, Oklahoma, who have been charged with a crime often require aggressive legal representation. At The Law Offices of Adam R. Banner, P.C., we understand that when people face criminal charges, such as burglary, arson, child abuse, drunk driving, embezzlement, fraud, sexual offenses or weapons charges, they may feel lost and overwhelmed. From white collar and federal crimes to...

Gower Law Offices
The stresses of family law and criminal issues cannot be overstated; they can be disorienting, confusing and even overwhelming. To make these situations even more severe, how you address them can affect your life and the lives of your family members for years. As such, partnering with a skilled lawyer is critical; the right law firm can help you aggressively defend your rights and interests...

Charged with a Drug Crime? You've come to the right place.

Sometimes substance abuse treatments don't work, and people find themselves facing drug crime charges. If you suffer from addiction or substance abuse and have been arrested for a drug offense, an experienced drug crimes attorney can help. Use FindLaw to hire a local drug crimes attorney near you to help prevent a multi-year jail sentence.

Need an attorney in Calera, Oklahoma? Use the contact form on the profiles to connect with a Calera, Oklahoma attorney for legal advice.
Top Grantville Drugs & Medical Devices Lawyers - Pennsylvania

Nearby Cities: Lebanon, Harrisburg, Camp Hill, Mechanicsburg, Lititz
Related Practice Areas: Products Liability, Birth Injury, Personal Injury, Medical Malpractice

Law Office of Melissa R. Montgomery, LLC
Drugs & Medical Devices Lawyers Serving Grantville, PA (Lancaster)
Fighting for Your Rights: Experienced Criminal Law and Family Law Attorneys in Southeast Pennsylvania. At Law Office of Melissa R. Montgomery, we handle family law and criminal law matters for people in Lancaster, PA. We see our clients as fellow human beings faced with difficult problems, not as cases. We give each one our full attention, best efforts and the benefit of our knowledge and experience...

Ciccarelli Law Offices
Working Hard And Working Well. At our Lancaster, Pennsylvania, location, Ciccarelli Law Offices offers skilled legal counsel in a number of areas. Our experienced lawyers practice personal injury law, criminal defense, family law, employment law, estate planning and civil litigation, among other areas of law. Our law firm was founded in 1999 by Lee A. Ciccarelli, an accomplished...

Michael J. O'Connor & Associates, LLC
Drugs & Medical Devices Lawyers Serving Grantville, PA (Frackville)
The law firm of Michael J. O'Connor & Associates, LLC, has earned a reputation for solving challenging legal problems in the areas of workers' compensation, personal injury and other areas of law. Headquartered in Frackville, Pennsylvania, the firm serves clients throughout the Commonwealth from 16 conveniently located offices. This team of lawyers, paralegals and support staff are committed...

Drugs & Medical Devices Lawyers Serving Grantville, PA (Reading)
Michael J. O'Connor & Associates, LLC, is a law firm with a mission: providing sound advice and results-oriented representation for people facing serious legal problems in the Reading area and elsewhere in Pennsylvania. Michael J. O'Connor & Associates, LLC, provides a full range of legal advice, services and representation, with a special focus on workers' compensation, personal injury...

Drugs & Medical Devices Lawyers Serving Grantville, PA (West Chester)
Skilled Lawyers You Can Depend On In West Chester, Pennsylvania. Ciccarelli Law Offices has offered skilled legal counsel in West Chester, Pennsylvania, since 1999. Today, our experienced lawyers practice personal injury, criminal defense, family law, civil litigation, estate planning, employment law, education law and more. When you need a lawyer you can depend on to give you honest legal advice...

Drugs & Medical Devices Lawyers Serving Grantville, PA (Malvern)

Devlin Associates, P.C.
Drugs & Medical Devices Lawyers Serving Grantville, PA (Allentown)
Litigation of Cases: Litigation audit services to insurance claim managers, risk managers, corporate legal departments and excess insurance carriers. Evaluation of cases to ensure proper reserve and preparation for trial. Alternate Dispute Resolution Services. Liability verdict potential and settlement range opinions....

Console and Associates P.C.
Drugs & Medical Devices Lawyers Serving Grantville, PA (Center Valley)
Since our founding in 1994, the attorneys at the law firm have dedicated their efforts to protecting the rights of innocent victims living in New Jersey and eastern Pennsylvania. Today, we continue to represent the interests of victims in both states from our four law offices, including our office in Center Valley. Clients from throughout the region know to turn to us for representation upon...

McCune Wright Arevalo, LLP
Drugs & Medical Devices Lawyers Serving Grantville, PA (Berwyn)
Whether caused by a reckless trucker or automobile defect, personal injury and wrongful death cases in Berwyn, Pennsylvania, leave victims with immense costs. The law firm of McCune Wright Arevalo LLP is there to help them pursue the compensation they need to pay for medical treatment, lost wages, funerals, emotional trauma and all future needs. Each of the firm's lawyers is highly skilled both in...

Drugs & Medical Devices Lawyers Serving Grantville, PA (Radnor)

DiOrio & Sereni, LLP
Drugs & Medical Devices Lawyers Serving Grantville, PA (Media)

Drugs & Medical Devices Lawyers Serving Grantville, PA (Plymouth Meeting)

Warren and McGraw, LLC
Drugs & Medical Devices Lawyers Serving Grantville, PA (Blue Bell)
Advocates For The Injured And Disabled. At Warren & McGraw, LLC, in Blue Bell, Pennsylvania, we are proud to serve the injured and the disabled. Our lawyers skillfully handle personal injury, workers' compensation and disability claims to get those with injuries and disabilities the compensation or benefits they deserve. When you or a loved one has suffered an injury, our attorneys can help you...

Naftulin & Shick
Drugs & Medical Devices Lawyers Serving Grantville, PA (Easton)
Naftulin and Shick - Lawyers Specializing in Accident and Personal Injury Cases for over four decades. Personal injury lawyers experienced in wrongful death, auto accidents, trucking accidents, motorcycle injuries and many other types of injury litigation. Personal Injury Litigation Lawyers: When it comes to personal injury cases in Doylestown, Pennsylvania and surrounding areas, there are many...

Drugs & Medical Devices Lawyers Serving Grantville, PA (Philadelphia)

Baum, Hedlund, Aristei & Goldman, P.C.
The law firm of Baum, Hedlund, Aristei & Goldman, P.C., is based in Los Angeles with branch offices in both Philadelphia, Pennsylvania and Washington, D.C. We serve clients from throughout New England, the Midwest and across the nation, who have sustained losses due to: aviation accidents, pharmaceutical negligence claims, defective medical devices, whistleblower actions, commercial motor vehicle...

The Rothenberg Law Firm LLP is one of the nation's leading personal injury law firms. With offices in Philadelphia, Pennsylvania, as well as New York and New Jersey, we help injured clients across the country. Our legacy of success spans more than four-and-a-half decades. Rooted In A Passion For Justice: For us, personal injury law is more than just a business; it's a passion. Our attorneys and...

The law firm first opened its doors in New Jersey in 1994. Today, our law firm has four office locations throughout New Jersey and eastern Pennsylvania, including our law office in Philadelphia, which allow us to serve the residents of both states who have suffered unnecessarily. With decades of experience, extensive legal knowledge and a reputation for providing outstanding representation to...

Marrone Law Firm, LLC
At the Marrone Law Firm, LLC, in Philadelphia, Pennsylvania, we are fully prepared to help you protect your rights while fighting to achieve the favorable outcomes you deserve for the serious and complex legal challenges threatening your life, your livelihood, your freedom and your future. For more than 25 years now, our law firm has been the go-to source for clients throughout the greater...

Injured by a Drug or Medical Device? You've come to the right place.

If you or a loved one has been injured by a drug (Accutane, Yaz, Zoloft, etc.) or a medical device (stents, DePuy hip replacements, etc.), a drugs and medical devices lawyer can help. A drugs and medical devices lawyer can help you establish legal fault of the product manufacturer and help identify the exact cause of your injuries. Use FindLaw to hire a local drugs and medical devices lawyer who can help you recover compensation for your injuries.

Need an attorney in Grantville, Pennsylvania? Use the contact form on the profiles to connect with a Grantville, Pennsylvania attorney for legal advice.
Top Aguanga Government Contracts Lawyers - California
Nearby Cities: Temecula, Hemet, Murrieta, Escondido, San Marcos

Wolff Law - Government Contracts Lawyers Serving Aguanga, CA (California)
Experienced, competent legal counsel: government acquisition, purchasing and services contracts, and public works contract lawyers in California. Practice areas include government contracts, public contracts, public works construction contracts, purchasing contracts, architectural and engineering contracts, services contracts, bidding, requests for proposals, bid protests, negotiated contracts, Small Business/SBE, DBE, LBE and DVBE certification and...

The Smith Litigation Firm - Government Contracts Lawyers Serving Aguanga, CA (Solana Beach)
With a focus on construction disputes, The Smith Litigation Firm advocates for clients in civil courts throughout San Diego County, California. Our clients include subcontractors, general contractors, project owners, sureties, suppliers, homeowners, and architects. We have three law offices that deliver personalized representation and customized solutions, including locations in Riverside...

Jones Day - Government Contracts Lawyers Serving Aguanga, CA (San Diego)
Opened in 2004, Jones Day's newest California office has more than 45 professionals. Strategically located in Carmel Valley, we serve clients in San Diego and throughout the world in a wide range of industries including pharmaceuticals, biotechnology, medical devices, renewable energy, telecommunications, software, and information technology. The San Diego office has significantly expanded its...

The Smith Litigation Firm - Government Contracts Lawyers Serving Aguanga, CA (Riverside)
Bringing nearly 20 years of experience to the table, our team at The Smith Litigation Firm handles construction disputes on behalf of individuals and business entities throughout Riverside, California. We also take cases that involve other aspects of civil litigation, such as real estate, business and fraud. Clients of our law offices are met with the following: Accessibility: We are available to...

SRG Law Group
Small businesses and large companies in San Diego that are looking to proactively protect their interests through strategic planning can look no further than the SRG Law Group. As a dynamic business transactions and litigation law firm, our attorneys allow clients across California to focus their efforts on operating a successful business. Whether a client's needs fall under the category of...

The Smith Litigation Firm - Government Contracts Lawyers Serving Aguanga, CA (Irvine)
With three law offices serving Irvine, Riverside and San Diego, California, The Smith Litigation Firm proficiently navigates construction disputes. Even the most seemingly open-and-shut case has complexities that require the trained eye of an experienced attorney. Our founder, Natalia Desiderio Smith, has spent nearly two decades in courtrooms and boardrooms supporting clientele such as the...

SRG Law Group - Government Contracts Lawyers Serving Aguanga, CA (Pasadena)
The motto at SRG Law Group in Pasadena is client-focused, results-driven representation in all matters involving business transactions and litigation. Our law firm represents clients across the state of California, from entrepreneurs to established businesses. SRG Law Group provides high-quality legal representation and counseling in a broad range of business transactions. The lawyers at our...

Charles Gugliuzza Law Office
Charles Gugliuzza Law Office is one of the best-known law offices in California, with 17 employees in total: nine lawyers, two associates, three interns, and support staff, all loyal and dedicated to their work. Our main and most important goal has always been to provide a superior customer service operation. The lawyers who work at the Charles Gugliuzza offices have specialized in...

Watson LLP - Government Contracts Lawyers Serving Aguanga, CA (Los Angeles)
Watson LLP is a boutique intellectual property and technology law firm with offices in Orlando, Atlanta, New York, and Los Angeles that serves clients ranging from start-ups to Fortune 500 companies. The firm has an active intellectual property litigation practice in the federal courts nationwide, as well as before numerous administrative forums such as the World Intellectual Property...

Cotchett, Pitre & McCarthy, LLP - Government Contracts Lawyers Serving Aguanga, CA (Santa Monica)
If your loved one in Santa Monica, California, has been neglected or abused by a caretaker or nursing home staff, Cotchett, Pitre & McCarthy, LLP, is prepared to intervene. Their attorneys are passionate about the welfare of elderly people throughout San Francisco and the Bay Area, and are known as aggressive advocates. They listen to your worries and hold a thorough investigation to confirm...

Need an attorney in Aguanga, California? Use the contact form on the profiles to connect with an Aguanga, California attorney for legal advice.
Take Your Characters Out to Lunch: 5 Development Exercises
Column by Leah Dearborn
July 11, 2014

The words "character building exercise" sound approximately as fun as changing the cat's litter box or cleaning the gutters. Exercise is rarely enticing until you actually begin doing it, even when it's for storytelling muscles instead of glutes or biceps. Exercises mean more work outside of a manuscript, but prompts and short writing sprints allow writers to examine their characters under a different lens than what is possible within the confines of a story's world. Force your villain to change a flat tire, or have your medieval hero figure out how to make microwave pizza—you might learn something surprising about the people you're trying to create. Like a date, it's part of the process of getting to know another person better (in this case, an imaginary person).

Say you've been working on a manuscript for about a month now, and things are getting pretty serious—maybe even serious enough to propel you through the long haul of second round edits, querying, and lost sleep that could one day lead to the ultimate goal of publication. But before any of that, test the waters by spending a quiet afternoon with your characters. You wouldn't propose marriage after one conversation, would you? Writing a novel is a substantial commitment; don't waste all that time on second-rate characters. Below are a few exercises culled from various corners of the web and elsewhere that are designed to help you get the most from your character "date," using a few of the most common driving forces of human behavior.

Test Their Loyalty

The tumblr page dailycharacterdevelopment has a constant stream of new ideas to try out. Many of them are a tad sadistic, but that's perfect for testing the mettle of a character. Even heroes are corruptible as long as they're human, and perfect characters are anathema to a reader. What is your character's price?
Consider the limitations of your character's loyalty to the people they care about. Describe one situation in which they could be moved to betray these people.

Make a Confession

Confessions are intriguing because they're the answer to a mystery. The novel Rebecca hinges almost entirely on one earth-shattering confession, to the extent that all the characters' movements orbit around it, from the very first page. Why does no one want to say the deceased Rebecca's name? Why does her widower husband spend his nights pacing the study floor? Here's another prompt from tumblr of a rather less dramatic nature, although any confession will do. The blog fuckyeahcharacterdevelopment is another good resource for quick, original prompts.

Your character has a dark secret, but they decided to come clean — at least to their partner/best friend. Today is the day that they admit it: 'I am a shipper'. Who are they shipping for? How does the partner/best friend react?

Do Some...Normal Stuff

One of the best ways to get to know a person is through their choices. In The Secret Miracle, a compilation of author quotes on the process of writing, Josh Emmons comments, "As my characters move through their world and make choices—yes to steamed broccoli, no to Tantric sex—I gradually learn their likes and dislikes." The following exercise from the Script Lab is one way of moving that process along.

Even the cold-blooded assassin needs to eat. Everybody goes to the grocery store, but not everybody shops the same. Choice – the act of selecting or making a decision – marks the difference between people. First, go to the grocery store and grab a cart. Then start to fill it up with things your character would buy (or just look at the shelves as you shop for stuff you actually want so you don't ring up a $500 bill for someone who doesn't exist). Or, back to the lunch date metaphor: pack a picnic for the two of you. What would he/she bring?
Be Their Twitterary

This prompt from the website of Shannan Palmer, PhD suggests:

Write a description of your character from her own point of view. It might be her hypothetical profile for an online dating site or her work bio.

But why stop at a work bio? A "twitterary" (yes, that's a real thing) is someone who acts as the secretary of a celebrity twitter account. If your characters were asking you to post things on twitter, what would they write? Would they even use twitter, or might they keep a Wordpress blog? The purpose of this isn't to waste time on social media, but to get to know your character's voice better in a casual setting. How do they react to their daily problems? Another short exercise, if you can make it past all the Nutella recipes, is to create a Pinterest board for your character by selecting images they might be drawn to.

Get Meta

This is another exercise in a similar vein to the shipping confession, whereby the character understands that he/she is fictional. The University of Iowa recommends:

Choose a character from a story you have written or are in the process of writing, then write a scene or multiple scenes in which that character interacts with you.

This isn't just for the benefit of the character, but also the author. It's a way for the writer to detach from a character, since the people we write about are so often heavily connected to ourselves. Having your protagonist address you directly is one way of finding out how much you share, and where they differ from their creator. What does your character think of you? Was your lunch date a success, or have they had better?

Photo by Knar Bedian

Books mentioned:
Rebecca by Daphne Du Maurier (Time Warner Books UK, 2003; paperback, 448 pages)
The Secret Miracle by Daniel Alarcon (St. Martin's Griffin, 2010)

Column by Leah Dearborn

Leah Dearborn is a bibliophile and bookseller from the frigid North Shore of Massachusetts.
A graduate of the journalism program at UMass Amherst, she spends her spare time blogging about books (of course), history, politics, and events in the Boston area. Occasionally, she spits out something resembling fiction, and has previously served as a contributor to Steampunk Magazine. She collects typewriters and old novels and laments the fact that her personal library has outgrown her apartment. Follow @adearinthewoods

Comment from Chacron (England, South Coast), July 11, 2014 - 1:19pm:
Love this column... I do just about all of this and more, except that I don't write it down in note form. I sometimes do write the sorts of scenes that come from these exercises into manuscripts and occasionally they make the final cut. The talk about even the assassin needing to eat makes me think of how I wrote the line 'It's okay, oysters are fine' during a dinner scene.
Breaking Down The Bonnaroo 2013 Superjams
Words by Stephen Taylor

There are many aspects of Bonnaroo that distinguish it from other summer festivals: the clocktower, the Comedy Tent, the giant shroom fountain, the Silent Disco (The Roo made it famous at US festivals), the TN heat, the confusing stage names, etc. Perhaps the greatest of these, though, is the Superjam. The inaugural Superjam featured Michael Kang (String Cheese Incident), Bela Fleck, Jeff Raines (Galactic), and Robert Randolph. Over the years, these spontaneous, mildly rehearsed sets have featured some of the festival's most memorable moments on the backs of legendary musicians including Herbie Hancock, John Paul Jones, Dr. John, George Porter Jr., Kirk Hammett, Trey Anastasio, Pino Palladino, and Mike Gordon.

After being left off the schedule for three years beginning with the 2008 festival, the Superjam made a triumphant Sunday return to its roots with New Orleans artists Dr. John and Preservation Hall Jazz Band, who were joined by Black Keys guitarist Dan Auerbach in 2011. One of the festival's all-time memorable sets, this collaboration ultimately led to Auerbach's production of Dr. John's 2012 GRAMMY-winning LP Locked Down. One year later, Questlove carried the torch with a heroic late night featuring a top-secret comeback by reclusive R&B icon D'Angelo. The 2013 schedule once again ups the ante with three Superjams, featuring seminal artists from soul & funk, hip-hop, and bluegrass. To help prepare you for what you are about to witness, we have summarized the action for the Rock N' Soul Dance Party and the hip-hop Superjam, which Bonnaroo has disclosed the most detail about.

ROCK N' SOUL DANCE PARTY SUPERJAM | SATURDAY 12:00-2:00 AM | THIS TENT

JIM JAMES
2013 marks James' 6th Bonnaroo performance since 2003, a track record that has more than earned him the privilege and responsibility of curating this year's Superjam.
Anyone who has witnessed one of My Morning Jacket's marathon sets over the years will tell you that James has the unique power to spawn energy from deep within even the most withered festival goer. It's proven science. What's also proven is James' penchant for soul and funk music, the influence of which is laced throughout his 2013 solo debut Regions of Light and Sound of God. It's also apparent in MMJ's cover song choices (below).

Bobby Womack – "Across 110th Street"
George Michael – "Careless Whisper"
James Brown – "Cold Sweat"
Marvin Gaye – "All I Need"

JOHN OATES
Best known as the guitar-playing/mustached half of the legendary '80s duo Hall & Oates, Oates helped create a genre through the duo's fusion of classic soul music with elements of new wave and rock. H&O's allegiance to classic soul and R&B is probably best displayed through their inclusion of original Temptations Eddie Kendricks and David Ruffin on their 1985 tour, mixing the setlists with staples from the catalogs of both groups. Check out a clip from their stop at the Apollo below.

Hall & Oates with Eddie Kendricks & David Ruffin – "The Apollo Medley"

ZIGABOO MODELISTE
Bonnaroo's name, founding fathers, and Superjam concept are all deeply rooted in New Orleans musical heritage. It only seems appropriate that New Orleans' drumming king, Zigaboo Modeliste, anchor the rhythm section. The unique syncopated drumming style Modeliste developed as a founder of The Meters became a standard in funk drumming and modern-day hip-hop beats. Even at 64, he has not lost a step.

The Meters – "Cissy Strut"

PRESERVATION HALL JAZZ BAND
Another example of New Orleans royalty, Preservation Hall exists as a time capsule of traditional Crescent City jazz and features some of the greatest horn players on earth. Their inclusion in this event is little surprise given their extensive live collaborations with Jim James over the years.

Preservation Hall Jazz Band w/ Jim James – "St. James Infirmary"

CARL BROEMEL
Also no surprise is the inclusion of My Morning Jacket's lead guitarist, who was named one of Rolling Stone's New School of Guitar Gods (via Stereogum). Broemel shreds, but has also become quite the multi-instrumentalist, adding saxophone and pedal steel to MMJ's records.

BILAL
This Superjam needed a soul vocalist, and James nailed it with Bilal Oliver. Well known to some for his hook contributions over the years (Common, The Roots, Jay-Z), Bilal achieved a measure of independent critical and commercial success with his most recent album, A Love Surreal. His brand of spacey R&B should complement the James & Co. style perfectly.

The Roots & Bilal – "Black Cow (Steely Dan)"

CYRO BAPTISTA
Brazil-born percussionist Cyro Baptista has a hall-of-fame roster of previous collaborations including Herbie Hancock, David Byrne, Dr. John, Trey Anastasio, and Paul Simon.

LARRY GRAHAM
As the original bassist for one of America's greatest funk bands, Sly and the Family Stone, Graham is credited with inventing the slap technique. Graham later went on to lead the funk group Graham Central Station, which had several hits and toured with Prince. All this to say, the man has the tools to maintain the low end.

Graham Central Station – "Pow"

POSSIBLE GUESTS

Following an appearance at Jazz Fest 2013, the two are gearing up for a string of dates in August, and James' Superjam could be the perfect venue to rekindle the flame. Hall is no stranger to Bonnaroo, having performed a 2010 late-night set with Chromeo. A cameo for "You Make My Dreams Come True" would hurl this thing into orbit.

Shorty is more than friendly with the New Orleans musicians already on the bill; he has a show the next day in LA, but his band is used to tight turnarounds.

Kelly's set ends at 1 AM on Which Stage, which gives him more than enough time for a cameo at the Superjam if he doesn't bounce straight back to the motel party.

Idol's set ends just in time for him to make it over to the Superjam.
You have to believe that he and Oates probably know each other from Studio 54 in the early '80s.

Marco Benevento
With no keyboardist on the lineup yet and Benevento's Led Zeppelin cover band playing the same stage after the Superjam, this one just feels too easy.

Continue to Page Two for a breakdown of Friday's Rap Superjam!

ALSO CHECK OUT: SEVEN MUST-SEE SETS AT BONNAROO 2013
Bettina Rheims (France, 18 Dec 1952 – )
Fundji, from the series 'Modern lovers'

'Searching plays a significant role in photography. That is exactly what is suppressed in advertising and fashion photography.' – Bettina Rheims, 1998 [1]

Like many fashion photographers, Bettina Rheims has pursued projects which explore her personal notions of individual beauty, rather than that dictated by magazines and advertising agencies. In her series 'Modern lovers', Rheims subverts fashionable ideals of beauty by portraying the androgyny of young men and women. Her subjects stand awkwardly before the camera against a plain studio backdrop, sometimes looking into the camera, sometimes turning away. Their ambivalence about being photographed is matched by the ambiguity they present to the viewer: these rather feminine men and masculine women confound gender distinctions, but at the same time confirm the homoerotic tendencies prevalent in much fashion photography of the 1990s. It comes as no surprise to see a young Kate Moss, who shot to fame in Calvin Klein's homoerotic advertising campaign in 1992, among the modern lovers.

Rheims began work as a portrait photographer in 1978 and has since worked as a fashion and celebrity photographer and photojournalist. The subjects of her first exhibition in 1981 were striptease artists and circus performers – the 'demi-monde' to fashion and modelling's 'haute-monde'. Similarly, Rheims approached most of her subjects for 'Modern lovers' 'on the street, in bars, everywhere'. [2] None were older than 20 and all were forming their identity, coming to terms with their bodies, their sexuality and their look. Rheims, who had been a model herself before turning to photography, compares her subjects to butterflies and angels – elusive creatures in the process of metamorphosis and transcendence.

1. Rheims B 1998, 'Bettina Rheims: modern lovers', Art Gallery of New South Wales, Sydney, np

© Art Gallery of New South Wales Photography Collection Handbook, 2007
Signed and dated lower verso, ink "BR ... Janvier 1990 ... Bettina Rheims".
Gift of Edron Pty Ltd - 1996 through the auspices of Alistair McAlpine
© Bettina Rheims, Modern Lovers series
Aspen Ideas Fest: Life, the Universe, and (Almost) Everything

How to do justice to the Aspen Ideas Festival, from which this post is being written? How to summarize, or even mention, the nearly 150 sessions -- plenaries, tutorials, conversations, screenings, and demonstrations -- taking place over the past seven days? Or the more than 180 speakers and moderators? Not to mention the serendipitous schmoozing.

It's a daunting, likely impossible challenge -- akin to bringing home a single shell plucked from the sand as a means of describing your weeklong adventure at the beach. Last year, I described the event as "TED meets Davos at 8,000 feet." That doesn't do it justice, but it's still the best I can muster. So I'll forego attempts at "covering" something that is inherently uncoverable in favor of plucking a metaphorical shell out of the sand.

But which shell? Alan Greenspan talking about our energy future? Sydney Pollack and Nora Ephron discussing Frank Gehry? Ray Anderson (Interface Inc.) and Lorraine Bolsinger (General Electric) talking about corporate environmental leadership? Sandra Day O'Connor and Stephen Breyer talking about the judiciary? Janine Benyus talking about biomimicry? Norman Lear and Ben Bradlee talking about the future of news and entertainment? Bill Clinton or Karl Rove "in conversation"? Naw. I'll go with E.O. Wilson, talking about "Saving the Creation."

Biologist Edward Wilson, for the uninitiated, is one of America's most prominent scientists and the author of two Pulitzer Prize-winning books, "On Human Nature" and "The Ants," as well as other celebrated works. Biomimicry guru Janine Benyus, with whom I had the pleasure of sitting during Wilson's talk, calls him the "Darwin of our era."

This wasn't Wilson's first appearance at the Ideas Fest.
The day before, he participated in a smaller "tutorial" with two renowned colleagues: Tom Lovejoy and M.A. Sanjayan. That session, simply titled "Life on Earth," took a deep dive into our planet's loss of biodiversity, a session that was as daunting as it was depressing; several members of the audience appeared close to tears at the state of our vanishing species and the rate at which biodiversity, which took about 3.5 billion years to evolve, is being eroded by human activity. "Science and technology, combined with a lack of self-understanding and Paleolithic obstinacy -- have brought us where we are today," is how Wilson puts it.

Wilson covered some of the same ground on Saturday, but what stood out was how much he -- and we -- don't know about our planet. Simple things, like the number of species that exist. Scientists have identified as many as 1.8 million species to date, but they acknowledge that the actual number could ultimately be as many as 10 million -- or even 100 million. "We simply do not know," says Wilson.

What we do know is astonishing. One example: the nematode roundworm is the most abundant animal on earth. Indeed, 80% of all living creatures are nematodes. If we were to somehow strip away all of the planet's land mass but leave the nematodes, their abundance would allow us to still see the outline of the continents, according to Wilson.

And what we're learning is equally astonishing. Biology is opening up new technologies, and biodiversity -- the abundance of species -- is one principal emphasis. Thanks to rapid DNA sequencing, scientists can hack the genetic code of some species in hours instead of days. Says Wilson: "It has just begun, and we have no idea of what lies ahead."

Along with the new technologies are new techniques for engaging our citizenry -- and especially our youth -- about the scientific underpinnings of the world in which they live.
For example, a growing number of institutions are conducting "bioblitzes," a means of organizing young people (among others) to document the biodiversity in their own back yards and communities -- discovering new species, in some instances.

All of which is far more than an academic exercise. One of the central problems of the century -- lifting the world's poorest out of poverty -- represents, in many respects, a biodiversity challenge: How do we make it worthwhile for them to be stewards of the vast array of species to which they've become accidental heirs? asks Wilson. Perhaps ironically, the poorest of the poor and the world's richest biodiversity are concentrated in the same parts of the globe. "The solution," says Wilson, "must flow from the recognition that one depends on the other. The poor have little chance to improve themselves in a devastated environment. Conversely, the natural environment cannot survive the pressure of a land-hungry people who have nowhere else to go."

Wilson says his other big challenge is to bring together the scientific and religious communities to "set aside our differences in order to save the creation. The defense of living nature is a universal value. It doesn't promote any religious or ideological dogma, and it serves the interest of all humans."

It's a tall order, to be sure. And in the end, Wilson's quest to forge an alliance among the Darwinists and the Intelligent Designers in the name of Mother Earth may be as daunting a challenge as any. But it seems clear that scientists and clergy will have to work together on all this, if we humans are to have a prayer of a chance.
Legal • Politics

Gayyoom transferred to Maafushi Prison

Former president Maumoon Abdul Gayyoom being escorted by the police on February 6, 2018. MIHAARU PHOTO / HUSSEN WAHEED

Former president Maumoon Abdul Gayyoom, half-brother of incumbent President Abdulla Yameen, was transferred to Maafushi Prison on Thursday night. Gayyoom had been held at the Dhoonidhoo Detention Centre prior to the transfer.

Gayyoom's lawyer and former deputy Prosecutor General Hussein Shameem said that he was informed of the transfer, though the reason was not specified.

The former strongman is being charged with bribing lawmakers, exerting undue influence on the judiciary and bribing judges, and conspiring to illegally stage a coup, as well as inciting violence and calling on the security forces to revolt against the government. Gayyoom has denied all the charges being levelled against him by the state.

He was arrested from his home in the capital Male early Tuesday night, shortly after the president had declared a state of emergency, in effect for 15 days. His son-in-law Mohamed Nadheem was also arrested the same night. Since then, his son, Dhiggaru MP Faris Maumoon, has also been taken back to prison. He was released Tuesday morning after the Criminal Court ordered his immediate release, following the Supreme Court's ruling that ordered the release of all political prisoners. MP Faris was taken back after the top court overturned its ruling and revoked the order to release the nine high-profile politicians from jail.

After the Supreme Court's landmark ruling last Thursday, former Vice President Ahmed Adheeb was reportedly transferred to Maafushi Prison from Dhoonidhoo Detention Centre.
Source URL: Mihaaru News
Governance • Legal • Politics

Regulator threatens to shut down TV stations

Private TV stations will be shut down without notice if coverage of the unfolding crisis is deemed to pose a threat to national security, the Maldives broadcasting watchdog has warned.

Ismail Sofwan, a member of the Maldives Broadcasting Commission, told the pro-government Sun Online late last night that the regulator will act against TV stations that incite unrest with false information, endanger public interest, and "pave the way for terrorism".

The warning came as riot police cracked down on opposition supporters celebrating a shock Supreme Court ruling that reinstated 12 unseated lawmakers and quashed the convictions of former president Mohamed Nasheed and other jailed politicians.

In the wake of the dramatic developments Thursday night, the state broadcaster and pro-government Channel 13 reported erroneously that the Supreme Court's website was hacked, which was promptly denied by the judiciary.

"The Supreme Court's website is not hacked & the Court's Order (No: 2018/SC-SJ/01) has been circulated to the concerned authorities. pic.twitter.com/IvQIWcdLBT" — Maldives Judiciary (@judiciarymv), February 1, 2018

The opposition-aligned Raajje TV meanwhile received threats of an attack amid its coverage of the opposition rally at the artificial beach in Malé. Hussain Fiyaz Moosa, chief operating officer, told the Maldives Independent that police officers were stationed outside the studios after the threat was reported to the authorities.

Source URL: Maldives Independent
A Tale of Two Inquiries
March 7, 2012 · 10:40 PM

TWO INVESTIGATIONS. BOTH HIGH PROFILE. SAME INVESTIGATING AGENCY. DIFFERING STANDARDS. WHY?

The last two years have been the busiest for the premier investigating agency in the country. They saw the surfacing of the biggest scam ever to have surfaced on the face of the earth, a scam alleged to have caused the nation a loss of 1.92 lakh crore. After having been in denial for two years, owing to media pressure, the weight of contradictions within the coalition government at the Union, perhaps the overwhelming evidence right in the face of the rulers, and also an expendable human face (Raja of the DMK), the CBI was allowed to investigate after a few raps on the knuckles from the apex court of the country.

We will not presently go into the questions of whether the CBI is the best, whether it deserves the seeding it gets amongst the detective agencies in the country, whether it is beyond political manipulation, etc. The answers to these are in the public domain, and we don't want the CBI blushing in embarrassment, if such niceties still govern its conduct, that is.

What is the 2G case?

The allegation in the case was that rules were tweaked, procedures short-circuited and beneficiaries of largesse (2G spectrum) were identified as an official favour. The cut-off date was advanced, EMDs were fraudulently secured and equals were elbowed out. A procedure of "first come, first served" was followed even though the advice was to go for "public auction". The money lost by reason of all the above, the presumptive loss, is shown to be 1.92 lakh crore. The CAG found fault with the procedures. For years, the Government lived in denial. Only when the Supreme Court directed an investigation did it gather pace.
How did the investigation go on : a) First, the tweaking of procedures, advancement of cut off dates, was evidenced and established b) Then the Public Servant (the Minister) concerned was questioned and arrested c) The beneficiary Companies were identified, who are evidenced to have paid in cash or kind to the Minister or his nominees. d) Money trail was found in respect of one beneficiary.. 200 crores to kalaingar TV.. The decision makers were arrested, Kanimozhi, Sarath of Kalaignar TV e) Kanimozhi was denied bail after summons were issued by the cognizance taking court under section 309 Cr.P.C. f) Rest of the corporate arrested During the course of investigation, Kapil Sibal came on TV, and said there is no loss at all. What did Kapil Sibal say!! We only followed Arun Shouries policy of first come first serve CAG report is incredulous. PAC proceedings were rubbished, Murali Manohar Joshi was rubbished. And then the Prime Minister defends Raja in Rajya Sabha saying that the Government stands by the decision in 2G. During the course of Trial, Raja says, the present Home Minister and Prime Minister were aware of the entire decision making process. The Government said it is the statement of an accused, not entitled to any weight. The CBI does not take cognizance of it. After such statement, when a petition is filed in Supreme Court to enquire into Chidamabaram role, CBI says no need to investigate Chidambaram. The complicity is glaring, with a Minister in Government, Pranav Mukherjee writing that the then Finance Ministry was complicit. While the course of investigation was sequential, the last phase of the investigation, which included : giving a clean chit to the Home Minister, not even touching the Prime Minister, not questioning Kapil Sibal . not questioning the PMO when Subramaniam Swamy had sent representations to the PMO since two years showed that after all it is CBI, and has its fault-lines ingrained in it, to toe the rulers’ line What is happening in A.P. 
over the last two years?

A thousand kilometres south of the scene of that action, in the southern state of Andhra Pradesh, investigations are currently going on into the "new prince on the move," Sri Y.S. Jagan Mohan Reddy's group of companies. The allegation against him is that he helped a few corporates and individuals gain largesse and make money through the official position of his father, the late YSR, who was Chief Minister of the state from 2004 to 2009. Specifically:

a) A company floated by Jagan Mohan Reddy, Sakshi, received investments during 2004-2009 from the beneficiaries of state actions, as a quid pro quo for official favours shown to those private individuals and firms.

Since the investigation is by the same investigating agency, the CBI, one would have expected it to:

a) First, determine whether there were any official favours at all by the YSR Government in favour of those individuals and companies (as the CBI did in the 2G case, establishing whether there was any change in procedures or cut-off dates, DDs received on the sly towards EMD, etc.);
b) Then determine who was responsible for the official favour (as the CBI did in 2G: question Raja, question Shourie);
c) Then identify the beneficiaries of that largesse (as in 2G: Reliance, etc.);
d) Identify the trail of money and question the conduits or beneficiaries (such as Kanimozhi, Sarath and Kalaignar TV);
e) And then determine, by evidence and trial, how the guilt is established.

In Jagan's case, however, the investigation has taken a course now referred to in legal circles as "reverse investigation". So far, no evidence has been collected, and no public servant has been arrested on evidence that there was indeed an official favour.
There is no material on record showing that any person was deliberately benefitted through a departure from rules and procedures. Yet the first person to be arrested was the financial advisor of Jagan's group of companies, Sri Vijay Sai Reddy, on the ground that he secured investments into the Jagan Mohan Reddy group of companies as a quid pro quo for official favours shown to certain known, unknown, named and unnamed individuals. No evidence has been collected as to what the official favour was, or what the loss to the exchequer was. And how plausible is it to assume that private citizens could have caused the government's departure from procedures, when Jagan was a private businessman between 2004 and 2009?

Huge rumours are doing the rounds about the imminence of Jagan's arrest. While the man looks unfrazzled, a neutral observer is wont to ask: why this Kolaveri?

This is not a case of complete dissimilarity in investigation. The striking similarity is the clean chits being given. While Kapil Sibal was not even questioned, Chidambaram was given a clean chit without investigation. Down south, while one witness after another kept naming Chandrababu Naidu as the navigator of all these decisions, the CBI bends over backwards to give Naidu a clean chit. Every single time civil society, the law courts and the media question why Naidu is not even called for questioning, the CBI hides itself.

The tentativeness that should attend any investigation, the open-eyed and open-minded inquiry required to ensure that the truth comes out, has been unabashedly given up in response to a political mandate to put down an adversary. The CBI is investigating on the basis of a conclusion that there had been official favours (the very thing that has to be evidenced and proved), with no evidence worth the name, and the investigation then proceeds to an enquiry into the alleged end-user.

Another important issue in the ongoing investigation in A.P. is the "musings" of the investigators.
Every day, unofficial briefings are given to rival media houses pitted against Sakshi, detailing the investigation, the statements given by witnesses and the confessions of co-accused. So much so that a writ petition was filed in the High Court by Rama Krishna Raju, a purchaser of a plot in the EMAAR case, saying that the CBI, the prosecuting agency that should avoid trial by media, is promoting "trial by media" through selective leaks; the call details of the Joint Director, CBI, were enclosed with the petition.

Where is the country headed? When will the CBI redeem itself? Don't we, a country of one billion, deserve far better, straighter and cleaner prosecutors? Don't we need a scientific and rational investigative agency, one that will do its job and, importantly, "let the law take its course"? Sadly, this is yet another instance of a few officers, motivated by partisan considerations, taking the CBI down along with them.
Posted in Uncategorized on February 28, 2019 | Leave a Comment »

Jill Murphy's The Worst Witch

Do you have a Key Stage 2 child? Then you have definitely heard of, or seen, numerous episodes of the CBBC TV series "The Worst Witch". My child even borrowed a book from our local library and read it several times. When I said the stage play of The Worst Witch was coming to the Orchard Theatre, she was elated!

Long before Harry Potter there was Mildred Hubble. An ordinary girl who found herself in an extraordinary place: a school for witches. Now in her final year, accident-prone Mildred and her fellow pupils are about to embark on their biggest and most important adventure yet... Jealous Ethel Hallow is always out to spoil Mildred's fun. Miss Hardbroom is opposed to all fun in general. And just as Mildred sparks some inevitable mayhem certain to upset them both, an old enemy returns with a plan for revenge that could threaten not just the Academy, but the whole world.

Jill Murphy's The Worst Witch stories have sold more than five million copies worldwide and been made into numerous films and TV series. Featuring all of Jill Murphy's beloved characters, this thrilling new stage adaptation is directed by Theresa Heskins (2017 UK Theatre Award for Best Show for Children and Young People) and features original songs, music, magic and a dose of Mildred's unique brand of utter pandemonium!

We particularly liked the positive message it gives to children: to keep persevering no matter what background you are from; to cherish and keep good friendships; and to work as a team and feel the power of unity. The magical parts, the pyrotechnics, the fun, the humour, the music and the songs gave the show an unforgettable atmosphere. We were charged with positivity after the play! Go on and be ready to fall under the spell of this play, beneficial to the whole family.
Jill Murphy's The Worst Witch
A new play by Emma Reeves, directed by Theresa Heskins

The Worst Witch is the story of an ordinary girl who finds herself in an extraordinary place: a school for witches. Now in her final year, accident-prone Mildred Hubble and her fellow pupils leave a trail of mayhem behind them as they find themselves at the centre of a battle that's being fought for their future.

Jill Murphy's The Worst Witch stories have sold more than five million copies and been made into films and TV series by HBO, ITV and CBBC. Winner of the Royal Television Society Award for Best Children's Television Programme, and of the 2017 British Screenwriters Award for Best Children's Programme for the television adaptations of The Worst Witch, Emma Reeves' screenwriting credits include Eve, The Dumping Ground, Young Dracula and The Story of Tracy Beaker. Her stage work includes the Olivier Award-nominated and critically acclaimed adaptation of Hetty Feather (UK tour and West End).

The show will be at The Orchard Theatre from Wednesday 27 February – Sunday 3 March 2019. To book tickets or for more information visit orchardtheatre.co.uk or call the Ticket Office on 01322 220000.

We are now running a competition to win tickets to the show at the Orchard Theatre. To participate, simply answer the following question: "What is the name of the ordinary girl who finds herself in a witching school?" The competition closes at midday on 25.02.2019. The winner will be chosen at random and contacted to organise the tickets. If we don't hear from the winner, we will draw again.

Posted in Uncategorized on February 3, 2019 | Leave a Comment »

Somewhere between long grass and flowers there is a magical Little Kingdom. Only the chirping birds and fluttering butterflies know that it is a magical kingdom. A kingdom where the fairy princess Holly and her friend Ben the elf live.
Every little child who has seen these cartoons is mesmerised by their magical adventures. So today I had the chance to take my four-year-old and five-month-old to the live show. The theatre was buzzing. By the time we arrived, most of the parents had perused the gift shop to kit out their little fairies or elves. My lot had to have the spinning light wheel. We made our way to our places; thanks to the staff who were pointing out the seats, they were easy to find.

The audience, young and not so young, was very excited. The show started and everybody was sucked right into it. It is fun and full of surprises, with lots of magic, singing and dancing. Numerous jokes, games and dances involved audience participation too. We didn't notice how the time went, and the first part ended. We made our way to the baby change, which was clean and comfy for both mother and child. Right when we were done, the bells were ringing for the second act.

The second part was even more exciting, as the kids got to see the tooth fairy, the magical kingdom's birthday party preparations and more. The kids were singing the songs in the car all the way home. I will be using the last song to prevent a tantrum; that's how good a tool I gained today. Talk about winning at parenting: bringing the children to their favourite show and gaining a tool to save ourselves from some toddler drama!

All in all, both the kids and I enjoyed the show. I think the length of the acts is perfect for young theatregoers, and it is full of interesting twists to keep them in their seats! It gets five stars from us!
House of Lords Business: Minutes for 11 October 2018

The House met at 11.00am. Prayers were read by the Lord Bishop of Newcastle.

Public Business

Sudan: government changes. A question was asked by Lord Chidgey and answered by Baroness Goldie.

Operation Conifer: Sir Edward Heath. A question was asked by Lord Lexden and answered by Baroness Williams of Trafford.

Health: contraceptive services. A question was asked by Baroness Thornton and answered by Lord O'Shaughnessy.

Department for Education: use of statistics. A question was asked by Lord Watson of Invergowrie and answered by Lord Agnew of Oulton.

Business of the House. Lord Taylor of Holbeach moved, on behalf of the Lord Privy Seal (Baroness Evans of Bowes Park), that the debates on the motions in the names of Lord Dubs and Lord Bragg set down for today shall each be limited to 2½ hours. The motion was agreed to.

Good Friday Agreement: impact of Brexit (2½-hour debate). Lord Dubs moved that this House takes note of the impact on the Good Friday Agreement of the United Kingdom's withdrawal from the European Union. After debate, the motion was agreed to.

Student loan books. Viscount Younger of Leckie repeated as a ministerial statement the answer given to an Urgent Question in the House of Commons.

Employee Shareholding and Participation in Corporate Governance. A question was asked by Lord Haskel and, after debate, answered by Baroness Vere of Norbiton.

Arts: impact of Brexit (2½-hour debate). Lord Bragg moved that this House takes note of the impact on the arts of the United Kingdom's withdrawal from the European Union. After debate, the motion was agreed to.

The House adjourned at 5.39pm until Monday 15 October at 2.30pm.
Ozzy Osbourne vs. Iced Earth – Most Haunting Halloween Track, Round 1

In this Round 1 matchup of our Most Haunting Halloween Track tournament, a song from the Prince of Darkness squares off against a monster cut from an album based entirely on horror stories!

One of the most symbolic icons of Halloween is the werewolf, and it will be hard to top Ozzy Osbourne dressed as a werewolf on the album cover, howling at the moon. The lyrics portray the werewolf waking up in a simple but effective manner: "Screams break the silence / Waking from the dead of night / Vengeance is boiling / He's returned to kill the light / Then when he's found who he's looking for / Listen in awe and you'll hear him / Bark at the moon." The jolting riff on "Bark at the Moon" comes courtesy of Jake E. Lee and makes the song immediately arresting, one of the best around this holiday.

Iced Earth's Horror Show album features songs based on different horror stories, with "Dracula" being a dynamic highlight. Matt Barlow's soft croon opens the song before his jaw-dropping range takes charge and he lets loose one of the most assaulting falsetto benders in metal. The song details the story of the most infamous vampire: "I am the Dragon of blood, the relentless prince of pain / Renouncing God on His throne / My blood is forever stained."

Ozzy Osbourne's "Bark at the Moon" or Iced Earth's "Dracula"? Cast your vote for the Most Haunting Halloween Track in the poll below! Voting for this round closes on Friday, Oct. 16, at 9 AM ET. Fans can vote once per hour, so keep coming back to make sure that your favorite Haunting Halloween Track wins!

Ozzy Osbourne, "Bark at the Moon"
Iced Earth, "Dracula"

Next: King Diamond vs. Godsmack
Study: Business Executives Turn To Email Newsletters First For Their News

Martin Beck on June 3, 2014 at 1:27 pm

Social media might be the shiniest tool in your marketing kit, but if you want to reach business leaders, don't neglect old-school tactics. That's the main lesson to be learned from a recent study that found that executives use email newsletters — remember those? — as their primary source for news.

Sixty percent said newsletters were one of the first three news sources they check every day. That's twice as many as those who open a mobile news app (28%) and significantly higher than the 43% who look for news on the mobile web via browser or a social media app. Twitter (23%), Facebook (19%) and LinkedIn (12%) lagged behind.

The study, prepared by the Quartz marketing team, pulled insights from 940 executives across a range of industries, including management consulting, finance, tech, and media and advertising. The pool was balanced across age demographics, with 22% in the 25-34 range, 22% 35-44, 21% 45-54, 20% 55-64 and 12% 65 and older, so the results shouldn't be written off as a you-can't-teach-an-old-dog-new-tricks outlier.

Quartz found a very mobile-device-oriented group, with 50% of respondents using mobile devices to take the survey. They also read most of their news on mobile devices, with 61% saying most of their news consumption is on mobile (41% on phones, 20% on tablets).

More interesting findings:

75% spend at least 30 minutes a day catching up with news, and 44% say they focus most highly on news first thing in the morning.
61% subscribe to newspapers and magazines, but only 3% use print as a primary news source.
37% of executives pay for digital news, with those in the finance industry being most likely to subscribe at 47%.
91% say they share work-related content with colleagues. When they share, they use mobile devices (31% phone, 16% tablet) and desktop computers (48%).
80% say they share interesting content via email, 43% via Twitter, 30% via Facebook and LinkedIn, 22% in person and 18% via IM or chat.

Justin Ellis of Nieman Journalism Lab has a very good analysis of the study here. Click here for the full report from Quartz.

Martin Beck was Third Door Media's Social Media Reporter from March 2014 through December 2015.
MarylandReporter.com (https://marylandreporter.com/2019/03/04/state-roundup-march-4-2019/)

State Roundup, March 4, 2019

PIMLICO CONTROVERSY: The group that owns Pimlico Race Course took out a full-page ad in Friday's issue of The Baltimore Sun to "set the record straight" after the company came under fire from city officials over its disinvestment in the historic horse racing track, Sarah Meehan of the Sun reports. In the ad, titled "We are building a future for thoroughbred racing in Maryland," the Stronach Group doubled down on its plan to build one "super track" at Laurel Park, its second race track in the state. The Canadian group aims to tear down Pimlico, redevelop the site in Park Heights and relocate the Preakness Stakes to Laurel.

The Preakness Stakes would generate $52.7 million in economic activity each year if the iconic horse race remained in Baltimore at a rebuilt Pimlico Race Course, according to a new study. Jeff Barker of the Sun writes about the study.

The owner of Pimlico Race Course is telling lawmakers it is willing to consider keeping the historic Preakness Stakes in Baltimore, but not without significant investment from the state and city. The Stronach Group is making it clear it doesn't intend to invest any of its own money in revitalizing the aging Baltimore track, Bryan Sears reports for the Daily Record.

Doug Donovan of the Sun writes that two hearings were to be held in Annapolis on Friday on opposing Pimlico bills. One bill would establish a work group to begin studying how to implement a concept to rebuild the nearly 149-year-old Pimlico Race Course as a permanent home for the Preakness Stakes. The other would help fund creation of a so-called super track in Laurel, where the race could potentially move.
Kevin Rector of the Sun, who covered the hearing, writes that Michael Gaines, executive pastor of Manna Bible Baptist Church, did not parse his words Friday when he addressed state lawmakers on the future of the Preakness Stakes at Pimlico Race Course: "If you take the jewel out of Park Heights you will sign the death warrant certificate for that community."

OPINION: BSO CAN BE SAVED: In an editorial for the Baltimore Business Journal, Joanne Sullivan opines that while it may be too late for Baltimore City to save the Preakness, "there's … battle that Baltimore can win — keeping its world-renowned symphony orchestra strong for generations to come."

In a letter to the Sun, Joseph Meyerhoff II, a member of the symphony endowment board whose family name is on the symphony hall, writes: "If Baltimore and Maryland want to see its largest cultural arts institution remain in its current form (and support the musicians at their current pay levels), Baltimore's corporate citizens, wealthy individuals and philanthropists need to step up."

MINIMUM WAGE BILL HEADS TO SENATE: The Maryland House of Delegates approved Friday a bill that would gradually increase the state's minimum wage from $10.10 per hour to $15 by 2025. The 96-44 vote fell largely along party lines, with Democrats supporting the measure and mostly Republicans opposing it, Pamela Wood of the Sun reports. As the bill heads to the Senate, progressive groups will likely try to convince senators to restore some of its original language, while business advocacy groups will push for further amendments and lobby to kill the measure completely, Jessica Iannetta reports in the Baltimore Business Journal.

HARFORD DEMS SEEK LISANTI RESIGNATION: The Harford County Democratic Central Committee is calling for Del. Mary Ann Lisanti's resignation after the official confirmed and apologized for making a racial slur, writes David Anderson for the Sun.
During a special meeting on Saturday, the group voted to adopt central committee chair Denise Perry's statement from earlier in the week, which recommended that Lisanti resign.

The message from black lawmakers and activists in Maryland was clear on Friday: Del. Mary Ann Lisanti must go, Rachel Chason and Arelis Hernandez report for the Post.

Ovetta Wiggins, Rachel Chason and Arelis R. Hernández of the Post write about the racial slur and how it is viewed by Prince George's residents and local lawmakers, given the history of the county.

TAXING AIRBNBs: A bill before the Senate Budget and Taxation Committee would require short-term rental sites, such as Airbnb, to collect the 6% Maryland sales and use tax at the time of booking and remit the fees to the state, reports Diane Rey for MarylandReporter. SB533, sponsored by Sen. Guy Guzzone, D-Howard, and Sen. Cory McCray, D-Baltimore City, was part of a crowded agenda of 24 bills heard on Wednesday.

END OF LIFE BILL: After failing three times in recent years, a bill that would allow terminally ill Maryland residents to obtain prescription drugs to end their own lives is moving forward in the state's General Assembly, Pamela Wood of the Sun reports.

OPINION: BETTER OPTION TO PAINFUL DEATH: In an op-ed for the Annapolis Capital, Del. Joseline Peña-Melnyk writes that she "will never forget helplessly watching my grandmother suffer miserably at the end of her life because there was nothing then that medicine or I could do to enable her to die peacefully. That tragic experience is one of the reasons I am supporting bipartisan legislation to ensure terminally ill Marylanders don't suffer needlessly at life's end."

REDISTRICTING PANEL PROPOSES PLAN: A nonpartisan commission charged with redrawing Maryland's 6th Congressional District, which a court ruled was unconstitutional, has proposed new boundaries for the sprawling district.
The Governor's Emergency Commission on Sixth Congressional District Gerrymandering on Friday decided on a map that would unite all of Frederick County within the 6th District while neatly bisecting Montgomery County between Gaithersburg and Germantown, Jennifer Barrios of the Post reports.

The plan will be subject to two public hearings and will be used as the basis for legislation that Hogan plans to introduce in the final days of the General Assembly session, Bruce DePuyt reports for Maryland Matters. It is considered highly unlikely that the legislature will consider a commission-produced map while redistricting is pending before the Supreme Court.

2 MEMBERS RESIGN: Two members of Maryland Gov. Larry Hogan's redistricting commission resigned after The Baltimore Sun asked questions about whether their participation in the body redrawing Maryland's congressional districts violated state rules, Luke Broadwater of the Sun reports.

OPINION: PROGRESS ON GUN VIOLENCE: In a moving op-ed for the Annapolis Capital, Maria Hiaasen, widow of slain journalist Rob Hiaasen, opines that despite gun violence, there is steady progress at the state level. "New York just became the 14th state to enact an extreme risk law and is the first to empower teachers and principals with the ability to petition a court to remove guns from those proven likely to harm themselves or others. … Maryland's new extreme risk law had temporarily removed guns from 148 people deemed a risk, doing so with an accessible, fair system."

STATE DROPS BALL ON EX-CON OD PROGRAM: Fatal drug overdoses had been climbing for years when Maryland health officials decided to target a particularly vulnerable group: those leaving prison or jail. The state sought federal permission to skip the usual paperwork to get them temporary Medicaid cards. But more than two years later, the state hasn't used the authority, Meredith Cohn of the Sun reports.
STATE TO DISTRIBUTE FENTANYL TEST KITS: State health officials plan to distribute thousands of kits by the end of the month that will allow drug users to test drugs for fentanyl, the synthetic opioid officials say drove the increase in fatal overdoses the past few years, Phil Davis of the Annapolis Capital reports.

ATHLETE UNION BILL AMENDED: Del. Brooke Lierman on Friday moved to amend her legislation that would have authorized Maryland college athletes to unionize, in favor of creating a commission to study how best to ensure fair treatment of student athletes, an acknowledgment that her legislation pushing unionization is unlikely to pass this year, writes Luke Broadwater in the Sun.

JURY TRIAL THRESHOLD: Attorneys for plaintiffs and civil defendants battled before a Senate committee Thursday over a proposed constitutional amendment to raise the amount in controversy that entitles litigants to a jury trial. The measure would raise the threshold from more than $15,000 to more than $30,000, Steve Lash of the Daily Record reports.

SNAP AT RESTAURANTS: Proposed legislation would allow people to use their Supplemental Nutrition Assistance Program benefits — known as the Food Supplement Program in Maryland — to purchase meals at restaurants, Charlie Youngman of Capital News Service reports. Sponsored by Sen. Clarence Lam, Senate Bill 752 would allow elderly, disabled and homeless people to use their Electronic Benefits Transfer cards to purchase food at participating restaurants, Lam said.

CONCERNS OVER ARCHDIOCESE ABUSE PROBE: Attorney General Brian Frosh (D) should be doing much more to publicize his investigation of the Baltimore Archdiocese, a leading advocate for clergy sex abuse victims said on Sunday.
And, Bruce DePuyt reports in Maryland Matters, a legislator who has questioned the way Frosh has tackled the investigation has raised new concerns about the resources the state has marshaled to locate victims and prosecute both the priests who committed the abuse and the bishops who covered it up.

MO CO TENANTS RIGHTS BILL: The Montgomery County House delegation OK'd a housing security bill on Friday morning, marking a legislative milestone for tenant rights, advocates said. The delegation threw its support, by a vote of 17-6, behind a measure from Del. Jheanelle Wilkins (D) that would require landlords to give a reason for refusing to renew a tenant's lease, Danielle Gaines of Maryland Matters reports.

OPINION: GOP WEAK ON MESSAGING, STRONG ON CARING: In an op-ed for the Sun, Maryland political consultant Chevy Weiss asserts that the GOP hasn't lost its genuine concern for women and children, minorities, the environment, the poor or anyone else; its failure is in properly branding and messaging those concerns.

RX POT APPLICATIONS DELAYED: Citing hundreds of questions from the public, the Maryland Medical Cannabis Commission on Thursday night said it's postponing the launch of separate applications for four new weed-growing and 10 processing licenses, reports Ethan McLeod for Baltimore Fishbowl.

HOGAN APPLAUDS TRANSMISSION LINE RULING: Randall Chase of the AP reports that the governors of Delaware and Maryland are praising a federal panel's ruling in a dispute over planned cost allocations for a $278 million regional electric transmission line. The Federal Energy Regulatory Commission refused Thursday to grant a rehearing in the case sought by New Jersey officials and other parties.

VAN HOLLEN PUSHES LEGISLATION: U.S. Sen.
Chris Van Hollen is about to spend the middle part of his first six-year term promoting a number of sweeping legislative proposals that he acknowledges have no chance of being enacted with Republicans in control of the White House and the Senate, Louis Peck reports in Bethesda Beat. Why? He says, "We need some very specific proposals to organize around as we head into the 2020 election."
We are a separated people
August 1, 2008

Part V in the series exploring MB identity

Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold
(W.B. Yeats, "The Second Coming," 1920)

As one of the prophets of postmodernism, Yeats wasn't writing about the state of the MB conference in Canada in 2008, but his commentary should resonate with us. We, along with other Western evangelicals, are deeply mired in a crisis of lost centres.

Who are we?

The most superficial aspect of this identity crisis is our denominational distinctive. Who are the MBs and why do they matter? This is an urgent question facing our conferences. But significant as this is, a far deeper problem occurs when we as Christians fail to clearly hear our "falconer." Most MBs would agree that our primary identity is not our 500-year Anabaptist history, but rather a quest to be true to the gospel. If our identity as Anabaptists has merit, that merit will be based on hearing the right gospel voice. In this series of articles, I've been trying to define the identity of Jesus' followers. They are a biblical people – they are a New Covenant people – and they are "a people." While each might seem self-evident, their implications are profound.
Complacency about and an inadequate understanding of these basic tenets are among the reasons for the "widening gyre."

Our relationship with the world

In closing off this series, I end with what is likely the most difficult aspect of our identity – the challenge of being "in the world … not of the world" (John 17). For nearly 2000 years Christians have struggled to live out this tension. It begins with the fact that Jesus' followers are a separated people. "Therefore come out from them and be separate, says the Lord" (2 Corinthians 6:17).

Separated! The very word sends a chill down the human spine. For all of our lone hero bravado, human instinct tells us there is safety in a crowd. We are flocks, we are herds, we are congregations, we are the pack we move with. There are brief moments when flocks, herds, packs, and congregations scatter – but soon they reassemble. Life takes place in the crowd. We are social animals. We move in chaotic unison, but still in unison. When we are separated from the crowd a deep anxiety fills us. Being separated from the main herd is an ominous thing.

The earliest Christians knew they were a separated people. When Jesus said, "they are not of this world," he spoke what they knew. Of course they were a separated people. It was not just a mysterious kingdom reality – it was plainly observable. They were following Jesus and that fact had profoundly separated them from society. But as succeeding generations of Christians lived and died, the world stopped overtly hating them. After all, being kingdom people they proved to be very good neighbours, citizens, slaves, co-workers, employers, and even friends. Jesus had warned them about this. "Woe to you when everyone speaks well of you" (Luke 6:26). But mere warnings seldom stop the force of inertia, and the earliest Christian writings note that they soon became part of the fabric of the Roman Empire – for better and for worse.
Soon being “in the world” dominated their identity rather than “not of the world.” The results were predictable. Christians became indistinguishable from their decaying cultures. Almost immediately, observing the gravitational draw of the crowd, some concluded that it’s impossible, within society, to live as people of the gospel. They moved into deserts and mountains, into caves and other walled enclaves. There they practiced rigorous, true obedience.

Two complementary poles

Those who have chosen physical separation from the world may be admirable for their zeal – but not for their understanding of the gospel. They misunderstand our assignment. Jesus has not only left us in the world – we are specifically sent into the world. Being in the world is as important as being separated. The tension between these two poles is part of our identity. There is no formula for success as we live in this tension. There is only dogged perseverance in the knowledge that we’re not given permission to compromise on either side of the equation. We are a separated people who have been purposefully placed in the world. In a world that is forever spinning in a widening gyre, Jesus calls his followers to anchor themselves on a pair of complementary facts about their existence. One without the other sends us careening out of orbit.

—James Toews

Part I: Who’s your mother?
Part II: We are a biblical people
Part III: Don’t squint – use the proper lenses
Part IV: A people live and die together
Part V: We are a separated people

We are a separated people was last modified: January 7th, 2015 by James Toews

James Toews is senior pastor at Neighbourhood Church, Nanaimo, B.C.
© Mennonite Brethren Herald, 2018
Posted on August 31, 2015 by micr2588

In HTLV-1 infected T cell lines, upregulated p21CIP1/WAF1 may potentially function as an assembly factor for the cyclin D2-cdk4 complex, and the p21-cyclin D2-cdk4 complex may not act as an inhibitory complex but instead may allow the increased phosphorylation of Rb and accelerated progression into S phase. In the present study, Tax mediated G1 arrest occurred in human papilloma virus type 18 transformed HeLa cells, in which the Rb pathway was activated by repression of HPV-18 E7. Indeed, in cells transfected with the control vector, the majority of Rb was in the hyperphosphorylated form ppRb. By contrast, an accumulation of the hypo- and/or unphosphorylated form pRb was observed in Tax expressing HeLa cells, which is in contrast to the results of a study showing that Tax increased the phosphorylation of Rb family members. Therefore, there is a strong possibility that Tax activated p21CIP1/WAF1 may function to inhibit the cyclin D2-cdk4 complex, thereby inducing cell cycle arrest. Our microarray result also shows that Tax upregulated the expression of the BCL6 gene, which encodes a sequence-specific transcriptional repressor, by 2.7-fold. This is supported by findings in a previous study, which described that an interaction of Tax with the POZ domain of BCL6 enhances the repressive activity of BCL6 and increases the levels of apoptosis induced by BCL6 in osteosarcoma cells. The BCL6 POZ domain mediates transcriptional repression by interacting with several corepressors including silencing mediator for retinoid and thyroid receptor and nuclear hormone receptor corepressor, BCL6 corepressor, together with many histone deacetylases. BCL6 colocalizes with these corepressors in punctate nuclear structures that have been identified as sites of ongoing DNA replication.
Interestingly, BCL6 appeared to recruit Tax into punctate nuclear structures and significantly downregulate both basal and Tax induced NF-kB and long terminal repeat activation. Thus, the high expression of BCL6 in HTLV infected cells may contribute to the silencing of viral gene expression and to the long clinical latency associated with HTLV infection. This study allows greater understanding of the biological events affected by HTLV-1 Tax, particularly the regulation of cellular proliferation and apoptosis. Since we found evidence of several similarities, as well as differences, between Tax expressing HeLa cells and HTLV infection in T cell lines, we believe that the overexpression of Tax will be useful for preliminary studies on the effects of HTLV infection in T cell lines. However, since Zane et al. recently demonstrated that infected CD4 T cells in vivo are positively selected for cell cycling but not cell death, our experimental approaches in HeLa cells may not be reflective of the normal physiology of Tax or HTLV-1 in vivo infected cells.

We also emphasize the clinical relevance of this research through examples of promising in vivo studies. Although CPPs are often derived from naturally occurring protein transduction domains, they can also be artificially designed. Because CPPs typically include many positively charged amino acids, those electrostatic interactions facilitate the formation of complexes between the carriers and the oligonucleotides. One drawback of CPP-mediated delivery includes entrapment of the cargo in endosomes because uptake tends to be endocytic: coupling of fatty acids or endosome-disruptive peptides to the CPPs can overcome this problem.
CPPs can also lack specificity for a single cell type, which can be addressed through the use of targeting moieties, such as peptide ligands that bind to specific receptors. Researchers have also applied these strategies to cationic carrier systems for nonviral oligonucleotide delivery, such as liposomes or polymers, but CPPs tend to be less cytotoxic than other delivery vehicles.

The advancement of gene-based therapeutics to the clinic is limited by the ability to deliver physiologically relevant doses of nucleic acids to target tissues safely and effectively. Over the last couple of decades, researchers have successfully employed polymer and lipid based nanoassemblies to deliver nucleic acids for the treatment of a variety of diseases. Results of phase I/II clinical studies to evaluate the efficacy and biosafety of these gene delivery vehicles have been encouraging, which has promoted the design of more efficient and biocompatible systems. Research has focused on designing carriers to achieve biocompatibility, stability in the circulatory system, biodistribution to target the disease site, and intracellular delivery, all of which enhance the resulting therapeutic effect. The family of poly(alkylene oxide) (PAO) polymers includes random, block and branched structures, among which the ABA type triblock copolymers of ethylene oxide (EO) and propylene oxide (PO) (commercially known as Pluronic) have received the greatest consideration. In this Account, we highlight examples of polycation-PAO conjugates, liposome-PAO formulations, and PAO micelles for nucleic acid delivery.
Among the various polymer design considerations, which include molecular weight of polymer, molecular weight of blocks, and length of blocks, the overall hydrophobic-lipophilic balance (HLB) is a critical parameter in defining the behavior of the polymer conjugates for gene delivery. We discuss the effects of varying this parameter in the context of improving gene delivery processes, such as serum stability and association with cell membranes.

Nevertheless, the logic of living cells offers potential insights into an unknown world of autonomous minimal life forms (protocells). This Account reviews the key life criteria required for the development of protobiological systems. By adopting a systems-based perspective to delineate the notion of cellularity, we focus specific attention on core criteria, systems design, nanoscale phenomena and organizational logic. Complex processes of compartmentalization, replication, metabolism, energization, and evolution provide the framework for a universal biology that penetrates deep into the history of life on the Earth. However, the advent of protolife systems was most likely coextensive with reduced grades of cellularity in the form of simpler compartmentalization modules with basic autonomy and abridged systems functionalities (cells focused on specific functions such as metabolism or replication). In this regard, we discuss recent advances in the design, chemical construction, and operation of protocell models based on self-assembled phospholipid or fatty acid vesicles, self-organized inorganic nanoparticles, or spontaneous microphase separation of peptide/nucleotide membrane-free droplets.
These studies represent a first step towards addressing how the transition from nonliving to living matter might be achieved in the laboratory. They also evaluate plausible scenarios of the origin of cellular life on the early Earth. Such an approach should also contribute significantly to the chemical construction of primitive artificial cells, small-scale bioreactors, and soft adaptive micromachines.

One important question in prebiotic chemistry is the search for simple structures that might have enclosed biological molecules in a cell-like space. Phospholipids, the components of biological membranes, are highly complex. Instead, we looked for molecules that might have been available on prebiotic Earth. Simple peptides with hydrophobic tails and hydrophilic heads that are made up of merely a combination of robust, abiotically synthesized amino acids and could self-assemble into nanotubes or nanovesicles fulfilled our initial requirements. These molecules could provide a primitive enclosure for the earliest enzymes based on either RNA or peptides and other molecular structures with a variety of functions. We discovered and designed a class of these simple lipid-like peptides, which we describe in this Account. These peptides consist of natural amino acids (glycine, alanine, valine, isoleucine, leucine, aspartic acid, glutamic acid, lysine, and arginine) and exhibit lipid-like dynamic behaviors. These structures further undergo spontaneous assembly to form ordered arrangements including micelles, nanovesicles, and nanotubes with visible openings.

As such, the design and synthesis of CCNMs provide an attractive route for the construction of high-performance electrode materials.
Studies in these areas have revealed that both the composition and the fabrication protocol employed in preparing CCNMs influence the morphology and microstructure of the resulting material and its electrochemical performance. Consequently, researchers have developed several synthesis strategies, including hard-templated, soft-templated, and template-free synthesis of CCNMs. In this Account, we focus on recent advances in the controlled synthesis of such CCNMs and the potential of the resulting materials for energy storage or conversion applications. The Account is divided into four major categories based on the carbon precursor employed in the synthesis: low molecular weight organic or organometallic molecules, hyperbranched or cross-linked polymers consisting of aromatic subunits, self-assembling discotic molecules, and graphenes. In each case, we highlight representative examples of CCNMs with both new nanostructures and electrochemical performance suitable for energy storage or conversion applications. In addition, this Account provides an overall perspective on the current state of efforts aimed at the controlled synthesis of CCNMs and identifies some of the remaining challenges.

Growing interest in graphene over the past few years has prompted researchers to find new routes for producing this material other than mechanical exfoliation or growth from silicon carbide. Chemical vapor deposition on metallic substrates now allows researchers to produce continuous graphene films over large areas. In parallel, researchers will need liquid, large-scale formulations of graphene to produce functional graphene materials that take advantage of graphene's mechanical, electrical, and barrier properties. In this Account, we describe methods for creating graphene solutions from graphite.
Graphite provides a cheap source of carbon, but graphite is insoluble. With extensive sonication, it can be dispersed in organic solvents or water with adequate additives. Nevertheless, this process usually creates cracks and defects in the graphite. On the other hand, graphite intercalation compounds (GICs) provide a means to dissolve rather than disperse graphite. GICs can be obtained through the reaction of alkali metals with graphite. These compounds are a source of graphenide salts and also serve as an excellent electronic model of graphene due to the decoupling between graphene layers. The graphenide macroions, negatively charged graphene sheets, form supple two-dimensional polyelectrolytes that spontaneously dissolve in some organic solvents.

A further reduction of tidal volumes might be beneficial, and it is known that apneic oxygenation (no tidal volumes) with arteriovenous CO2 removal can keep acid-base balance and oxygenation normal for at least 7 h in an acute lung injury model. We hypothesized that adequate buffering might be another approach and tested whether tris-hydroxymethyl aminomethane (THAM) alone could keep pH at a physiological level during apneic oxygenation for 4 h. Methods: Six pigs were anesthetized, muscle relaxed, and normoventilated. The lungs were recruited, and apneic oxygenation as well as administration of THAM, 20 mmol/kg/h, was initiated. The experiment ended after 270 min, except one that was studied for 6 h. Results: Two animals died before the end of the experiment. Arterial pH changed from 7.5 (7.5, 7.5) to 7.3 (7.2, 7.3) and arterial carbon dioxide tension (PaCO2) from 4.5 (4.3, 4.7) to 25 (22, 28) kPa (both P < 0.001) at 270 min. Base excess increased from 5 (3, 6) to 54 (51, 57) mM, P < 0.001. Cardiac output and arterial pressure were well maintained.
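As a rough plausibility check on the endpoint numbers above (not part of the study), the reported pH follows from the Henderson-Hasselbalch equation once a plasma bicarbonate concentration is assumed. The ~89 mmol/L figure below is an assumption chosen to be consistent with the reported base excess of ~54 mM, not a measured value from the excerpt:

```python
import math

def blood_ph(paco2_kpa: float, hco3_mmol_l: float) -> float:
    """Henderson-Hasselbalch: pH = 6.1 + log10([HCO3-] / (0.03 * PaCO2 in mmHg))."""
    paco2_mmhg = paco2_kpa * 7.5006      # convert kPa to mmHg
    dissolved_co2 = 0.03 * paco2_mmhg    # dissolved CO2, mmol/L
    return 6.1 + math.log10(hco3_mmol_l / dissolved_co2)

# Study endpoint: PaCO2 ~25 kPa after 270 min of apnea with THAM buffering.
# With an assumed bicarbonate of ~89 mmol/L this lands near the reported pH of 7.3.
print(round(blood_ph(25.0, 89.0), 2))
```

This only checks internal consistency of the quoted values; the actual bicarbonate was not reported in the excerpt.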
The pig, which was studied for 6 h, had pH 7.27 and PaCO2 27 kPa at that time. Conclusion: With intensive buffering using THAM, pH can be kept in a physiologically acceptable range for 4 h during apnea.

Background: Out-of-hospital refractory cardiac arrest patients can be transported to a hospital for extracorporeal life support (ECLS), which can be either therapeutic or performed for organ donation. Early initiation is of vital importance and the main limitation when considering ECLS. This explains why all reported series of cardiac arrest patients referred for ECLS were urban ones. We report a series of rural out-of-hospital non-heart-beating patients transported by helicopter. Methods: This observational study was performed in two rural districts in France. Data on patients with pre-hospital criteria for ECLS who were transported to the hospital by helicopter, maintained by mechanical chest compression, were recorded over a 2-year period. Results: During the study period, 27 patients were referred for ECLS, of which 14 for therapeutic ECLS and 13 for organ preservation. The median transport distance was 37 km (25th and 75th percentiles: 31, 58; range 25 to 94 km). Among the therapeutic ECLS patients, one survived to discharge from the hospital. Liver and kidneys were retrieved in another patient after brain death was ascertained. In the 13 patients referred for organ donation, four were excluded for medical reasons; 18 kidneys were retrieved in nine patients, of which six kidneys were successfully transplanted. Conclusion: In this preliminary study, we report the feasibility and the interest of helicopter transport of refractory cardiac arrest patients maintained by mechanical chest compression.
Pop-Up Ads Are Annoying — But They Work
by Dave Roos, Jul 5, 2017

Pop-up ads are less common now, but there are many other ways advertisers annoy internet users. Busakorn Pongparnit/Getty Images

In the mid-1990s, Ethan Zuckerman worked for Tripod.com, one of the first free web-hosting services for creating personal websites. Zuckerman, now director of the Center for Civic Media at MIT, believed deeply in the ethos of the early internet, a global public square where every voice had equal footing. But keeping Tripod free to users meant that revenue had to come from somewhere else. Like millions of other web companies, they chose advertising. Soon Tripod was selling online ad space directly on Tripod-hosted personal websites, which worked fine until a major car company noticed that one of its ads was posted on a site celebrating anal sex. Zuckerman, believing he was acting in the best interest of both the advertiser and internet users alike, wrote some code to display the car ad in a separate browser window instead of on the kinky sex page. Zuckerman had just invented the pop-up ad.

Pop-up ads spread across the nascent internet like a plague. Pop-ups were beloved by advertisers because they flung the company's message in front of as many eyeballs as possible. Even better, users had to physically close the window, which forced them to interact with the ad, if only for a second. Blinded by the novelty and blanket exposure of the pop-up format, advertisers didn't foresee the user backlash. It didn't take long for pop-ups to become the most universally hated part of online life. By the early 2000s, pop-up blockers were standard on most web browsers and the worst of the pop-up era was over. But that doesn't mean that advertisers stopped looking for "creative" ways to grab our attention online.

Why Annoying Ads Work

While old-school pop-ups are rare nowadays, there are plenty of ways that advertisers still hold us hostage for content.
There are "prestitial" ads that block the whole screen as a website loads, forcing you to wait 15 seconds before clicking "continue to site." There are "interstitial" ads that display after you visit the site. Some preloading ads on videos can be skipped after five seconds, others can't (has 30 seconds ever felt so long?). And there are videos that expand — with sound! — if you accidentally hover your mouse over the ad. Why would advertisers and content providers continue to risk alienating users with ads that most people try to skip or close as quickly as possible? One reason is that they work. In general, "rich media" ads that contain video or other interactive elements are more engaging to online consumers, says John Dinsmore, a marketing professor at Wright State University in Dayton, Ohio. That expanding video screen that launches when you hover your mouse over an ad for two seconds is called a lightbox ad or hover ad. Google, which created the lightbox format, claims that they are "six to eight times more engaging than a static video box," says Dinsmore. One study showed that the top 10 highest-performing pop-up ads had an impressive conversion rate of 9.28 percent (conversion rate means a person took action — such as going to the advertised website — after viewing the ad). One marketing expert found that adding a hover ad to his site increased sales by 162 percent and newsletter subscriptions by 86 percent. The Better Way to Do Internet Advertising Michael McNulty is the product marketing manager for rich media for Sizmek, a marketing company that gives advertisers a huge selection of online ad formats to play with, from standard banner ads to full-screen "expandables" (ads that expand to cover the whole screen when clicked on) and "pushdowns" (ads that push the site content down as they expand). As with any piece of technology, McNulty explains, there are smart ways and careless ways to deploy it. It starts with targeting. 
Like it or not, your every move on the internet is likely being tracked and sold to advertisers. By analyzing your search terms and browsing history, for example, Google might know that you're in the market for a new vehicle, preferably a hybrid SUV that can seat seven. While McNulty would never advise a carmaker to blast a flashy video ad at every random web user, in your specific case, a high-impact ad for a seven-seater hybrid SUV could really pay off. "If you have marketing agencies that go the extra mile to know what users want and what they respond to, you're giving them a reason to watch something you're putting in front of them whether it's obtrusive or not," McNulty says.

McNulty says at Sizmek, the default setting for all rich media ads is to only launch if it's user-initiated. Meaning, the user has to click "expand" before the interactive video window will launch — no videos that automatically play or pop-ups. But ultimately, he doesn't control what the client and their creative team want to do with the tools that Sizmek provides. Those settings can be tweaked to deliver whatever ad experience the client wants, including the bad kind.

Annoying, untargeted, unwanted ads pose a big threat to the future of the entire ad-supported internet. Instead of just turning on pop-up blockers in their browsers, more people are installing ad-blocking software that kills all ads, even the relatively benign ones. If a content website can't serve you ads, it can't pay the bills. And that could mean less "free" internet and more charges for consumers to access a website, read a blog post, or watch a video. Starting in 2017, Google announced that it would demote mobile sites that launched with a full-screen pop-up or other kind of ad that stands between users and their content.
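The engagement figures quoted in this article reduce to simple arithmetic. A minimal sketch (the view and action counts below are illustrative examples, not the cited studies' raw data):

```python
# Conversion rate and lift, as the article uses the terms.
# Input numbers are made-up examples, not the studies' actual data.

def conversion_rate(actions: int, views: int) -> float:
    """Percent of ad viewers who took action (clicked through, signed up, ...)."""
    return 100.0 * actions / views

def lift(before: float, after: float) -> float:
    """Percent increase, e.g. sales before vs. after adding a hover ad."""
    return 100.0 * (after - before) / before

# 928 click-throughs from 10,000 pop-up views -> a 9.28% conversion rate
print(conversion_rate(928, 10_000))   # 9.28
# Sales going from 100 to 262 units -> the kind of "162 percent" lift quoted
print(lift(100.0, 262.0))             # 162.0
```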
Premier League Predictions: Hull to take another step towards safety, Man Utd set for tricky Burnley test
The FA Cup semi-finals mean there are only six Premier League games this weekend but that doesn't mean the action is any less important, with some crucial twists and [...]

Chelsea set to beat Spurs to PL title by 11 points if these results are anything to go by
The race for the Premier League title is continuing to keep fans guessing after Tottenham closed the gap on leaders Chelsea to just four points over the weekend. The two are [...]

Premier League Predictions: Mourinho looking for Chelsea revenge, Arsenal looking to bounce back at Boro
The Premier League season is heading towards its finale but there is still plenty to play for. Manchester United welcome league leaders Chelsea to Old Trafford in the [...]

Three reasons Man Utd need to pull out the stops to beat PL rivals to £26m-rated left-back
Manchester United manager Jose Mourinho is on the hunt for a new left-back and he needs to do everything he can to ensure the Red Devils beat Premier League rivals [...]

Why Dele Alli could become one of the greatest Premier League midfielders of all time
Dele Alli is having a fantastic Premier League season for Tottenham. He has netted an impressive 16 goals, racked up five assists and has fully deserved the plaudits he has [...]

Three reasons Man Utd suffered most in the PL this weekend
Saturday saw Manchester United held to their third 0-0 draw at home in the Premier League this season, as West Brom became the eighth team to leave Old Trafford with a point [...]

Premier League Predictions: Liverpool set for derby glory, City to pile misery on Arsenal
The international break is over. Premier League football is back and what a weekend we have in store, starting with the Merseyside Derby on Saturday lunchtime and ending with [...]

Do Liverpool hold the key in Hazard's Chelsea departure with attempt to sign €80m Madrid star?
Liverpool could potentially play a key role in Eden Hazard leaving Chelsea at the end of the season as they are reportedly interested in signing Real Madrid playmaker James [...]

Spurs can win first Premier League title if Mauricio Pochettino signs these players (March 27, 2017)
Mauricio Pochettino has been impressive since taking over at Tottenham Hotspur but they are still the nearly men at the moment. They're challenging for the Premier [...]

Should Mourinho be looking to replace Martial with this £40m Premier League champion?
Jose Mourinho will no doubt ring the changes at Manchester United this summer. His first season in charge has been steadily improving but he needs to bring more quality to [...]
NO TWO DRINK MINIMUM Comedy Shows - Special Exclusive VIP Eastville Comedy Club - Brooklyn
The New York Beer and Brewery Tour
Miles of Beaches: Staten Island East Shore Private Tour
Brooklyn Street Art Graffiti Private Tour by Foot and Subway
Jazz Fest Concert Series and Fish Fry Option
From Grit to Hip, a Walking Tour of Williamsburg
Luna Park Coney Island Pass
Brooklyn Army Terminal Walking Tour
Alternative New York Street Art Tour
Williamsburg Pub Crawl
Private Tour of Brooklyn (Half Day)
Prospect Park Walking Tour
Victorian Flatbush Walking Tour
New York Gospel Music Tour in English

Here are your 29 search results for Tours, Attractions & Activities in Brooklyn, United States

Alternative New York Street Art Tour: Lower East Side
Delve deep into the dynamic neighborhoods of the Lower East Side through an exploration of their vibrant street art. Come along with us through one of New York City's oldest neighborhoods and let us bring the art to life. The Lower East Side is well known for its diverse ethnic neighborhoods and astonishing street art. Historically, the Lower East Side is known for hosting immigrants coming off Ellis Island, and in the second half of the 20th century, the area became flooded with artists. The growing counter-culture population in the Lower East Side drew more and more artists. Soon the area quickly exploded with creative, expressive graffiti art that illustrated the social and cultural movements that would influence American society in the 21st century.
Brooklyn Army Terminal Walking Tour
Escape from the hustle of Times Square to a little-known engineering marvel, the Brooklyn Army Terminal, on this guided, historic, 2-hour walking tour. Explore how millions of tons of war supplies and personnel were shipped through this busy transportation hub during World War I. Hear the stories of soldiers, longshoremen and merchant mariners who worked these piers, rail yards, and warehouses and learn how this complex is being put to use today as part of Brooklyn's revitalized working waterfront. Explore the sights and stories of a century of work at the Brooklyn Army Terminal. Meet the guide at the Terminal, located on Brooklyn's waterfront and accessible by subway from Manhattan. The Brooklyn Army Terminal was built to supply American forces on the Western Front during World War I. This complex served as a supply base for the American military up until the Vietnam War. Today, the Brooklyn Army Terminal is a large complex of warehouses, offices, piers, docks, cranes, rail sidings and cargo-loading equipment on 95 acres of land between 58th and 63rd streets in Sunset Park, Brooklyn. It is a thriving industrial park that is home to over 100 companies in a wide variety of industries, from precision manufacturers to biotech researchers, online retailers and chocolatiers. Begin this 2-hour walking tour with a visit inside the Brooklyn Army Terminal's architectural gem — the stunning atrium of Building B, where freight trains once rumbled through to be loaded from the balconies. Also, enjoy sweeping views of New York Harbor and visit the unrestored, 600,000-square-foot space of Building A. These two warehouses were the largest concrete buildings in the world when they were constructed in 1919. Along the way, explore how millions of tons of war supplies and personnel were shipped through this busy transportation hub.
Hear the stories of soldiers, longshoremen and merchant mariners who worked these piers, rail yards and warehouses. Also, find out what made the Port of New York the envy of the world in the mid-20th century, why it went into decline, and how Brooklyn's entire working waterfront is being revitalized today.

Brooklyn Botanic Garden Admission with 3-Course Dining at Yellow Magnolia Cafe
Brooklyn Botanic Garden is an urban botanic garden that connects people to the world of plants, fostering delight and curiosity while inspiring an appreciation and sense of stewardship of the environment. Exclusively for Garden visitors, Yellow Magnolia Café offers full-service dining Tuesday through Sunday. The Brooklyn Botanic Garden inspires people of all ages through conservation, display and enjoyment of plants, and serves communities in New York City and internationally through its world-class gardens, extensive research collections and numerous educational and community programs. Your ticket includes a 3-course dining experience at Yellow Magnolia Café. Named after the remarkable flower developed by Brooklyn Botanic Garden, the café offers modern, vegetable-focused cuisine in a one-of-a-kind setting nestled within the Garden's iconic landscape. Designed by David Rockwell, the café is bright and airy and overlooks the Garden's Lily Pool Terrace. The café is a celebration of its urban garden setting, with seasonal ingredients sourced from local farms and purveyors, an ethos in keeping with the Garden's longstanding interest in sustainable practices, and menus inspired by Brooklyn's artisanal food movement.
Local chef Rob Newton, whose other Brooklyn restaurants have been lauded for their fresh and creative menus, and chef de cuisine Morgan Jarrett have developed the café's seasonal, market-driven menus, with a focus on local vegetables, grains, and sustainably sourced meats and fish. The menu is vegetable focused, with select meat and fish options available, as well as healthy children's menu items.

Brooklyn & Bushwick Art Tour
Brooklyn is home to one of the largest concentrations of artists in the world, and you can feel the creative energy in the area's galleries and artist studio spaces. From Bushwick's Clearing Gallery to Williamsburg's Boiler and Pierogi galleries to the Brooklyn Art Space and the Gowanus Studio Space, ART SMART will guide you through the best commercial galleries and nonprofit and artist-run spaces on a private, completely customized tour. Along the way you'll discover amazing works of art based on your specific tastes. Our expert art historian guide will lead you to the most interesting, most cutting-edge art spaces, both in formal galleries or studios and in unexpected spots around these arty neighborhoods. If the timing is right, we can also explore a curated selection of artist spaces during your guided tour of the open studios held on weekends scattered throughout the year. Contact us well in advance to organize upcoming dates. The tour will start at the BogArt Building, located at 56 Bogart St. in Brooklyn, at the time of your choosing. From there your ART SMART art historian guide will lead you on your private tour through Brooklyn and the surrounding neighborhoods. The galleries, studios, or other art spaces you visit will be determined based on the interests and preferences of you and your party, uniquely tailored to help you make the most of your experience.
Transportation, if necessary, is not included in the price but we are happy to arrange it for you. Our tours start at $400 for 2 hours for groups of 1-5 people. For groups larger than 5 people or tours longer than 2 hours, please contact us at team@artsmart.com. Based on your interests, our explorations may include:
• Media (painting, sculpture, photography, etc.)
• Emerging talents or internationally acclaimed artists
• Graffiti and street art
• Gallery spaces and open artist studios

Brooklyn Children's Museum Admission
Brooklyn Children's Museum (BCM) is the world's first museum designed expressly for children. The Brooklyn Institute of Arts and Sciences founded the Museum in 1899 as an alternative to existing museums. The Museum's mission is to provide first cultural experiences for children and families that inspire curiosity, creativity, and a lifelong love of learning.

Visit: Brooklyn Children's Museum, 145 Brooklyn Ave, Brooklyn, NY 11213-1900

Brooklyn Children's Museum promotes a family-friendly environment where you'll find exhibits that offer boundless opportunities for sensory play and exploration, encouraging children's social, emotional and physical development. The Museum offers three floors of interactive exhibits and hands-on cultural and science programs for ages 6 months to 8 years. An inclusive interactive exhibit and program space, developed in 2014, was guided by the specific needs of children with Autism Spectrum Disorders. Modular furniture elements enable educators to set up the space differently each time and modify the activities based on the visitors who are present.
Activities range from low to high impact and can include quiet reading and materials exploration, medium-impact music-making, and high-impact physical movement and challenges.

ColorLab is Brooklyn Children's Museum's family art-making space where artists of all ages can explore, make, and celebrate art! We value discovery, artistic process, freedom of expression, and creative collaboration with others. ColorLab's rotating programs feature the work and artistic processes of African American, Afro-Caribbean, and African contemporary artists. Each visit to ColorLab offers new ways to experiment with artistic processes and new ideas to explore, as well as the exploration of the Museum's collection using all of the senses. Hands-on programs are supported by BCM's teaching artists and educators.

World Brooklyn: Children play in kid-sized shops based on the real ones you find in neighborhoods across Brooklyn. In these environments, kids take on the roles of shopkeeper, baker, grocer, shopper, designer, performer, and builder as they gain an understanding and appreciation of the cooperative roles that enable communities to thrive. This exhibit is designed to foster a greater understanding and appreciation of the world cultures found in Brooklyn.

Totally Tots: This pint-sized paradise is designed for our youngest visitors, featuring nine different sensory play areas including water, sand, music, dress-up, blocks, and more. Totally Tots is for children ages zero to 6.

Collections Central: This exhibit features a rotating selection from the Museum's collection of 29,000 historic cultural objects and scientific specimens, from lunch boxes to minerals, masks and trolleys!

Neighborhood Nature: Examine the many ecologies found in your own Brooklyn backyard.
This exhibit introduces children to life sciences through hands-on play in the community cork garden, close looking and listening in the Museum's diorama habitats, lounging at the "beach," and exploring the intersection of the natural and human worlds.

Brooklyn Historical Society
At Brooklyn Historical Society, take part in a great resource of local Brooklyn history. With stunning architecture inside and out, the Othmer Library is a must-see. Visit: Brooklyn Historical Society, 128 Pierrepont St, Brooklyn, NY 11201-2711
Dr. Ben Chavis On Fatal Police Shootings: A Systemic Problem Requires A Systemic Solution Written By NewsOne Now | 10.14.15 Dr. Ben Chavis, co-convener of the Million Man March, spoke with Roland Martin after Minister Louis Farrakhan’s stirring address during the Justice or Else rally in Washington D.C. Dr. Chavis told Martin he believes that Justice or Else is going to be an “ongoing movement.” “All of these people that came out today are going back to their local communities revitalized, focused, and I think undergirding them – which I think was different from 1995 – is the whole economic question. Economic parity, economic justice, economic equality,” said Chavis. Chavis believes part of the economic component addressed by Min. Farrakhan involving reinvesting African-Americans’ $1.2 trillion in spending/buying power “is going to be good for Black-owned businesses.” During his interview with Martin, host of TV One’s NewsOne Now, Chavis discussed the systemic problem of police violence in the Black community and the officers involved in the fatal shootings of Black men walking away virtually scot-free. “This is a systemic problem and a systemic problem is going to require a systemic solution. [It] has to be ongoing, it’s not about putting one cop behind bars, it’s about changing the behavior of the police toward African-Americans and other people of color,” said Dr. Chavis. He continued, “It’s interesting that while we’ve made some progress, we still have a significant racial divide in America.” When talking about the family members of those who lost loved ones to police violence or misconduct, he said, “To see all of those family members in one spot, at one time at Justice or Else — obviously it puts now the pressure point on the “Or Else,” because we have to be a part of the “Or Else” — we have to be a part of the Or Else solution to this systemic problem.” TV One’s NewsOne Now has moved to 7 A.M. 
ET, be sure to watch “NewsOne Now” with Roland Martin, in its new time slot on TV One. Subscribe to the “NewsOne Now” Audio Podcast on iTunes.
UTME 2019: See The List Of All Items Prohibited By JAMB In Exam Hall
March 19, 2019 Alamu Tosin EDUCATION

The Joint Admission and Matriculation Board (JAMB) has revealed the comprehensive list of prohibited items that cannot be taken into examination halls during the 2019 UTME. Registrar of the board, Prof. Ishaq Oloyede, made the disclosure on Monday during a meeting with critical stakeholders on strategic planning and preparations for supervision and evaluation of the administration of the 2019 examination. According to Oloyede, the board held an international round-table on cheating devices in December 2017 where various technological devices capable of being used for examination malpractice, including Automated Teller Machine (ATM) cards, were added to the list of items prohibited during its examinations. He said other prohibited items are wristwatches, recorders, earpieces, mobile phones, Bluetooth devices, smart lenses, erasers, smart buttons and spy reading glasses, among others. As earlier reported by Naijaparry, the board had scheduled the 2019 exercise to commence with its mock examination on April 1 and the main examination on April 11, across its Computer Based Test (CBT) centres nationwide. The registrar said the board had approved the use of 708 CBT centres across the country for the conduct of the examination. Prof. Oloyede added that the names of impersonators in the last 10 years would soon be published to serve as a deterrent to others, and that about 1.99 million candidates had registered for the examination. He called on the candidates to ensure they comply with the rules and regulations of the examinations.
Panel explores sanctuary status implications
Alexandra Muck | Thursday, January 19, 2017

A panel convened by the Center for Civil and Human Rights and co-sponsored by the Center for Social Concerns and the Institute for Latino Studies discussed what it means for a city, state, university or faith-based organization to be declared a sanctuary, and what the implications of using the “sanctuary” designation might be. The moderator, director of the Center for Civil and Human Rights Jennifer Mason McAward, led the panelists, who included co-director of the Institute for Latino Studies Luis Fraga, professor of law Rick Garnett, graduate student Leo Guardado and professor of law Lisa Koop.

Emma Farnan | The Observer: Professor Luis Fraga speaks on a panel discussing what it means to be designated a sanctuary campus.

Fraga began the panelists’ remarks by defining sanctuary. Though not technically a “legal jurisdiction,” Fraga said, groups that declare themselves as sanctuaries promise not to devote their resources to “enforce national immigration laws or inquire as to a person’s immigration status.” While the idea of sanctuaries began as a religious term, Guardado said, it began to be used in a more secular sense around the 1980s in Los Angeles, when some churches declared themselves sanctuaries regarding the immigration statuses of Central Americans fleeing civil wars. Currently, according to Fraga, four states, 364 counties and 39 cities are classified as sanctuaries. In addition, some universities have begun to follow suit, with between 150 and 200 universities applying the term to themselves. The negative consequence of declaring a campus a sanctuary campus is that it could subject the university to a loss of federal funding.
For this reason, some colleges, according to Guardado, prefer not to use the term, while they may still identify with the ideas of being a sanctuary campus. Guardado said he sees great benefit in using the term, however. “I think the greatest importance comes in the message that it communicates to … future students,” Guardado said. “That single word strategically conveys to a whole generation of students who may be applying to college, who may be undocumented … this is an institution that will protect you to the fullest degree possible with resources, with lawyers, with whatever else is needed.” Ultimately Guardado said he believes it is a “strategic choice” to use the word sanctuary. The issue of sanctuaries is especially relevant given that a new president is about to be sworn into office. Koop said under the new administration, residents might see an increase in home raids, collateral home arrests and fast-track deportation. “There is already an enforcement apparatus in place. This is all happening under the current administration. What we expect to see in the coming months and years is potentially significantly more aggressive enforcement and maybe increased enforcement and increased criminalization of non-citizens,” Koop said. Garnett concluded the panel by citing the implications of using religious freedom in declaring sanctuary. He said recently, there has been a resurgence in communities invoking religious freedom. “If that development continues, I think that could be really fruitful for religious institutions, particularly universities,” Garnett said.
No Film School
Rob Hardy
Major NLE Updates Coming at NAB? What Adobe and Avid Should Do to Improve Their Products

NAB is an exciting time of year for us filmmaking folk. While there are certainly some exciting things on the horizon in terms of cameras, rigs, lenses, lights, and what have you, I'm making an educated guess that this will be another significant year for NLE development, especially from post-production giants Avid and Adobe. Avid is likely to make the jump to version 7 of its flagship Media Composer, and if they follow their previously mentioned product cycle plan, Adobe will release version 6.5 of their popular Creative Suite. With much of the editing market still undecided between the three major players in post-production, these new updates could be a crucial stepping stone into the future for these companies. First and foremost, I should mention that these are the two NLEs which I use regularly.
Premiere has taken over as my go-to editing platform, and I use it for most, if not all, of my personal work and for smaller films. Avid, on the other hand, is generally my tool of choice on larger scale productions where media management tends to be a little more unruly, or if it's something I'm collaboratively editing with another person. So as someone who uses both of these on a consistent basis, I have a solid idea of what I would like to see out of the programs in future versions. So without further ado, let the speculation begin. The folks at Avid have found themselves in a peculiar predicament as of late. They still dominate the high-end broadcast and film markets with their various software solutions -- as evidenced by their near sweep in several post-production categories at this year's Academy Awards. Despite this seeming success, however, Avid has been hurting financially for the past several years as their sales have continued to decline. This financial downward spiral seems to be boiling over for the company, seeing as how they recently postponed the release of their 2012 4th quarter earnings, something widely regarded by both the business and editing communities as a desperate move. It seems to me that if Avid really is in desperate financial trouble, they're going to need to make a splash at NAB in order to stimulate new sales of their software solutions. For them to accomplish this, they are going to need to implement a major overhaul of the Media Composer interface and make it more accessible to younger editors, while simultaneously maintaining the level of professional precision that has made the application an industry workhorse for the past 20 years -- and they're going to have to do all of this while significantly lowering their price points. Beyond these exterior changes to the software, Avid is going to have to heavily refine the way the software works internally. 
While they've subtly been doing this for the past 2 or 3 years with features such as AMA linking, OpenGL support, 3rd-party I/O options, and most importantly, 64-bit base code, Avid is still lagging well behind both Adobe and Apple in terms of performance and taking advantage of modern hardware. They need to follow in Adobe's and Apple's shoes with OpenCL support and background rendering. Beyond that, they need to bring resolution independence to both their project settings and to individual clips so that editors aren't restricted to the standard TV and film options that Avid currently offers. However, despite the fact that a revamped version of Media Composer would likely get Avid's software division back on the track to profitability (especially if they could do the same with Pro Tools), whether or not the company has the cash or credit to cover the costs of the sure-to-be hefty research and development for such an overhaul is highly questionable. If the new version of Media Composer fails to gain traction in the broadcast and film communities, and Avid continues to lose money, it's likely that we could see some kind of company restructuring or even the sale of the company or its individual parts. Adobe, unlike Avid, seems to be thriving these days. After having snatched up many an editing professional after the Final Cut Pro X conundrum, and with the potential downfall of Avid, Adobe is now in a position to take the lead in the professional NLE market. In order to do this, however, they're also going to have to keep innovating with their suite of video post-production tools. First and foremost, and I don't think I'm alone in this, it's time for Adobe to develop and embrace their own proprietary codec, a la ProRes or DNxHD. While the success of codec independence is part of what makes Premiere great, the performance of certain native codecs within the program is not what it could or should be. 
With a proprietary codec, Adobe would be able to completely optimize the performance of the software for that codec, as opposed to having a piece of software that deals with some codecs well, and others not nearly as much. Considering that many narrative-style films already transcode their raw camera data for both dailies and offline editing, it would be fantastic for Adobe to develop something to aid in that process. Sure, Cineform has been a decent 3rd-party solution to this point, but it's time for Adobe to step up their game and cater to both independent folks as well as high-end professionals. I would also like to see better integration of the Production Premium suite with its newest member, Adobe SpeedGrade. The acquisition of SpeedGrade from Iridas last year was an excellent move for Adobe in terms of putting together a comprehensive suite of tools for the video professional. However, the implementation and insertion of SpeedGrade into the suite has been clunky, to say the least. If Adobe can manage to integrate the program with the same dynamic linking technologies that have made it a breeze to bounce back and forth between Premiere, After Effects, Audition, and Encore, then they'll finally have a complete, integrated set of high-end tools for the video professional. As it stands now, it's just as easy to take a sequence from Premiere into Resolve as it is to take it into SpeedGrade. This needs to change if they want SpeedGrade to become a more viable option for the folks already using their products. What do you guys think? What would you like to see out of the new versions of Media Composer and Premiere Pro? What would Avid have to do with Media Composer to keep it relevant and profitable? Conversely, what do you think Adobe would have to do to catapult Premiere Pro into industry dominance? Let us know in the comments.
Speedgrade is wonderfully intuitive but about as stable as a drunk on roller skates. I ditched it for Resolve Lite, which is less intuitive but more powerful in my opinion. Even with a high-end PC, the transfer between Premiere Pro and Speedgrade was abysmal. While I stand by Adobe for Premiere Pro and After Effects, they're going to have to pull a miracle out of the bag to get me interested in Speedgrade again.
March 14, 2013 at 4:09PM

Agree on the Speedgrade thing. It needs to be brought up to speed with the rest of the suite immediately. I've tried using it and, like Ben said, it's just terrible to get anything from Premiere into Speedgrade, Speedgrade crashes constantly, it's just not a stellar program at the moment. There's so much that could be done to improve it and its integration into the suite.
Rick McClelland

That's about how I feel as well. Resolve Lite really has taken over as my go-to color application, and that doesn't seem likely to change any time soon. However, if Adobe can implement dynamic linking and make it as seamless as it is with their other programs, they'll likely get quite a few more people staying exclusively within the Creative Suite.
Founder of Filmmaker Freedom

This. SpeedGrade needs Adobeization. I am all-in for Adobe these days. More often than ever, I've found myself ingesting and logging in Prelude, editing in Premiere, integrating graphics through AE/Photoshop via dynamic link, mixing in Audition, etc. What's missing is an equally smooth connection to SpeedGrade. I really like SpeedGrade, too... but the round-tripping is clunky right now.
I'm sure Adobe will Adobe-ize it, but right now it feels like Apple Color in FCS. I haven't learned Resolve Lite because, frankly, I want to stick with Adobe. If my hotkeys, GUI cues, etc. can all be the same when I have a quick turnaround of projects at work... hey, I'll take that over almost anything. It's not fanboyism or loyalty or anything like that; it's about keeping things seamless, fast and easy to move between. Even a few years ago, going between FCP or Avid and AE slowed me down, because it was difficult to shift my mind between hotkeys.

The realities of media nowadays mean it's better to be as software- and platform-agnostic as possible. Not everything advances at the same pace. And then there are the odd missteps that throw everyone for a loop. That doesn't mean you shouldn't have your own preferences.

Speedgrade is still very new, and it will become very intuitive and improve immensely. I couldn't tell you the timeline, though.
March 15, 2013 at 9:37AM

I agree. I am using Resolve for all my color needs right now, and the integration with Premiere (which, when you think about it, there is none) is better than the integration with SpeedGrade. I would love to see SG brought into the Dynamic Link family. Why can't I open a sequence in SG, grade and correct it, and have it linked back to Premiere so that I can dynamically color correct? That would also solve another problem I have with color grading: it takes up a lot of space. I usually end up with 2-3 versions of each clip as it moves through the production workflow. With resolutions and file sizes continually increasing, Adobe needs to find a better way to non-destructively color correct. If they could use XMP data for color correction (similar to Lightroom) and have Premiere do the final render, it would give us all greater editability and less storage space occupied. I'm an Adobe fan, but ultimately I will use whatever tool is best for the job. For color, right now that is DaVinci.
Guys, feel free to make feature requests here: http://www.adobe.com/go/wish
Kevin Monahan, Social Support Lead, DV Products
March 18, 2013 at 10:46PM

I would love to see them come out with a controller application of their own (like Controller+), and one that would work with Speedgrade rather than spending thousands on a controller board. I completely agree with integrating some sort of dynamic link to Speedgrade.
Zachary Murray

You can make a feature request for control surface support: http://www.adobe.com/go/wish

I just want a split and unlink shortcut key in Premiere Pro... :/
Antony Alvarez

Do you mean a shortcut for linking and unlinking clips within the timeline? Because you can definitely map your keyboard to do that. "G" will unlink. Coming from Vegas, I remapped "Add Edit" from CTRL + K to "S". It will now split any tracks that are selected.
El Director

Most of my projects make extensive use of VFX and I don't use AE, so I end up having to render plates clip by clip from my timeline to use in my 3D and compositing packages, then re-import the rendered VFX plates back into my timeline. I'd love Premiere to handle this in an easier way, just like The Foundry's Hiero does.

What software are you using? Thanks.
Bellina mikael

Maya and Nuke.

Tell us how you want it to work here: http://www.adobe.com/go/wish

Adobe missed the boat BIG TIME not buying out CineForm. To let them be snapped up by GoPro, who, let's face it, until now have had no real use for the codec, was the biggest f-up this century. But maybe they have something else up their sleeve? What they REALLY need to fix is the completely broken multicam workflow. Currently there is no way to flatten a multicam timeline or export it as a readable XML, meaning you can't use it in SpeedGrade, Resolve or even After Effects. FCP7 had this, Avid has it as well, but Premiere Pro does not and neither does FCPX.
I agree with you that multicam is a bit weird if you want to use it with After Effects. It exports all the media into After Effects, not only the part you select. And I want something more efficient between the multi-camera monitor and the timeline. You have to click inside the multicam monitor and click play, because if you do it in the timeline, the cameras won't all play. It saves some power, but a lot of time is wasted switching between the multi-camera monitor and the timeline. And is there a way to improve performance without using proxy media?

If you want a better intermediate codec and improved multicam workflow, let us know the particulars here: http://www.adobe.com/go/wish

Ugh, please no Avid Media Composer X. If Avid does decide to go in that direction, they'll really have to make sure that the new product is still appealing to current Avid editors. That's absolutely key, in my opinion. I'm sure both Avid and Adobe learned exactly what not to do with their software releases from Final Cut, though. I'm also fairly confident that we won't see another fiasco like that again, especially not from a company like Avid, which is wholeheartedly devoted to its professional users.

"If Avid does decide to go in that direction, they'll really have to make sure that the new product is still appealing to current Avid editors." This is especially true, since Avid is one of the few platforms with people who have been sincerely using it for decades. There are some VERY old-school editors using it, and they would not like any drastic change.

What I want from Adobe: New pro codec. YES! Great idea. After Effects shortcut customization: After Effects is still locked into its archaic shortcuts. It's not a standalone product anymore, it's very integrated; as such, it needs to allow its keys to be customized to match other software, namely Premiere. A Premiere media browser that understands the sub-folder organization that modern cameras use when they record video.
As it stands, the Media Browser chokes on all the other files and metadata that the camera records to the card.

A functioning Premiere media manager? Literally, the Premiere one does not work at all. Three hours of "copying your files over" and then an error. Useless.

One-click online/offline proxy editing. It's great that Premiere can edit native AVC, but 14 layers of AVC? Nope. Proxy editing should not be abandoned. Give us a way to easily toggle back and forth, online/offline, to maximize performance.

I say ditch SpeedGrade altogether. Integrate it right into Premiere. The idea of "going back and forth" between the edit system and color correction is outdated. Find an intelligent way to do this in Premiere, so you can move between editing and CC with ease.

Integrated SpeedGrade would be fantastic indeed.
hansd

I completely agree about the media browser not knowing what's going on with sub-folders and metadata. That absolutely drives me up a wall, and it would be great for Adobe to fix those issues as soon as possible. Also, I like the idea of integrating SpeedGrade into Premiere. It would very easily solve the integration issues, and it would push Premiere's already strong color tools even further. I like that idea a lot.

Just curious: what formats do you have difficulties with? I have lots of different material, and have not encountered any problems. I just want to know stuff. :-)
Jarle Leirpoll

I wholeheartedly agree on the Project Manager. That actually needs to become a piece of usable software. For one, Adobe does not copy over any dynamically linked Ae comps upon archiving. If you don't think about that, you can easily end up with major holes in your archived projects. And let's not forget the Titler. If they can only make it remember its position. Plus, the ability to export to .srt or .sub format would be great, so you can actually use it as a subtitling tool.
Richard van den...
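For what it's worth, the SubRip (.srt) format requested above is simple enough to generate outside the Titler with a short script. A rough Python sketch (the cue list and function names are my own, not anything in Premiere):

```python
# Sketch: writing subtitle cues in SubRip (.srt) format.
# Each block is: index, "HH:MM:SS,mmm --> HH:MM:SS,mmm", text, blank line.

def srt_timestamp(seconds):
    """Format a time in seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(cues):
    """cues: list of (start_sec, end_sec, text). Returns .srt file content."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(write_srt([(1.0, 3.5, "Hello."), (4.0, 6.25, "Goodbye.")]))
```

So even if the Titler never exports subtitles directly, anything that can dump title text and in/out points could be massaged into .srt this way.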
Being able to do a bit more in Premiere of what currently has to be done in AE would be very helpful, like simple compositing, masking, and decent titling. These you can either do properly in AE, which entails a lot of faff going back and forth (and this is sold as a virtue, "round-tripping", yeah right), or do badly in PPro. Similarly, you can do all sorts of great stuff in AE, but forget about having scopes! And how about being able to play a comp containing one clip in real time? No, can't be done? So you get two apps that are mostly brilliant, with just enough crapness in each to make most jobs just that little bit annoying, so you go into it knowing it should be dead simple but that in reality it's going to be a pain in the arse sooner or later. It's like having two cars. Both are perfect, but one has no air con and the other has no heater. So for any journey over a certain duration you have to tow one behind the other and swap as the weather changes. Not the most elegant analogy, I know.
Graham Kay

That's a fantastic analogy haha!

I too like the analogy and often find myself thinking the same thing... Smoke is taking this route on the higher end, and I'd love to see AE and Premiere Pro merge, especially now that everyone buys the bundles or a Creative Cloud subscription, so there's really not much money to be made by selling them separately...
Ryan Koo

No. Merging Premiere and After Effects would be bad. Using the car analogy, Premiere is a sports car: it's about speed and maneuverability. Get to your destination fast. After Effects is a rugged pickup, designed for heavy lifting. If you merge the two, you get more of a mess. After Effects users can typically have multiple nested comps with dozens of layers each. Throw in effects, expressions and 3D layers, and you can really slow down even a fast machine. I've seen how badly this can turn out in Final Cut Pro with some users who decided to do everything, including effects, in it.
I think it would be better for Adobe to create a unified plug in architecture, so that Premiere and After Effects uses essentially the same plug ins. It could be like how Final Cut X uses Motion for effects generation. Merging the two will just become a more awkward beast that won't benefit anyone. Smoke was designed with client interaction in mind, and was more hardware integrated like Flame. Perfect analogy for Premiere & AE. I'd like to see Speedgrade and Premiere merge instead of roundtripping. That will give Premiere a huge advantage over other most other NLE's. At the very least, instead of merging the two programs, which can be potentially very difficult to do, if Speedgrade can simply open up a pproj and write to it, that will be more than sufficient! Ditching SpeedGrade and having it integrated into Premiere is fine for those who work alone, but not in a collaborative environment. Adobe needs to bring core features of SpeedGrade into Premiere, so do as much as possible in Premiere. For those times you need to go into Speedgrade, any color settings already applied will carry over. It's time Adobe created a unified plugin architecture, so effects in Premiere, SpeedGrade and AE are the same. "Ditching SpeedGrade and having it integrated into Premiere is fine for those who work alone, but not in a collaborative environment." Why? If the colorist could open Premiere, choose the Color Correction workspace, and just use the grading tools - then what would be so horribly bad about that? If you want us to develop an intermediate codec or have better interoperation with existing ones, customizable keyboard shortcuts in After Effects, better native camera support, a better media manager, proxy editing, and better SpeedGrade integration, let us know: http://www.adobe.com/go/wish Sorry you had trouble with the Project Manager. 
In the future, please bring your problems to the forum: http://forums.adobe.com/community/premiere/premierepro_current

A better Premiere-to-SpeedGrade workflow. Yes, we all want that. Vote for it here: http://www.adobe.com/go/wish

Great article, Robert, I find myself agreeing with your thoughts; I believe that Avid is more dependent on what Adobe does than it can rely on itself. If Avid doesn't do an overhaul and stays true to its current workings, then eventually it might fade as more Adobe users find their way into the industry. Let's face it, Avid works and plays like 10-year-old software. However, if they do decide to have a makeover, then a lot of people who love Avid will be pissed about the change (some people have a problem with technological advancement in this industry). So I believe that if Adobe really pushes themselves forward to show why they can be the most high-end professional tool for editing, both in aggressive marketing and in tech, then Avid may be in deep trouble.
Daniel F

Avid: total agreement. It has improved a lot, but still seems archaic in many respects. Adobe: huh? You ask for ANOTHER codec? I get what you are saying re: NLE + preferred codec working together to increase speed, but it would have to be the mother of all codecs to have me welcome yet another to the fold. SpeedGrade vs. Resolve: yes to the drunken skater, yes Resolve kills, yes improve the round-tripping, yes, yes.
j williams

Lend your voice to product improvement here: http://www.adobe.com/go/wish We read all feature requests, swear!

I can't believe you're recommending that Adobe come up with a proprietary codec! That is exactly what we don't want them to do! I am happy that Adobe's recent formats have mostly been focused on being open source. If they could come up with an open source codec that beats ProRes 4444, I'd be much more excited about that.
Harry Pray IV

Cinema DNG is open source, but Adobe barely supports that.
They were very enthusiastic when they introduced it a few years ago. They do need a mastering codec; it's not about being proprietary, it's about optimizing their software and hardware advantages. Adobe will always be patching problems if updates to ProRes and DNxHD break something.

I'd like to see Adobe announce OpenCL for AMD hardware outside of the two MBP models it supports. I have a Windows 7 workstation that I use CS6 on; right now it has an AMD graphics card, so no OpenCL in my Mercury Playback Engine. If Adobe and AMD don't come up with something, my new graphics card will be Nvidia.
Ashley Hakker

OpenCL performance (on Mac or PC) will not likely come close to the performance of the CUDA architecture. In the future, it's possible that ATI cards could be better. It makes sense for Adobe not to be tied to one brand. Make your request for more supported GPUs: http://www.adobe.com/go/wish

I'd be down with more supported GPUs!

I use Adobe and it's good, but the performance is not enough... I've got a GTX 680 and it's almost never used... You still have to do a lot of rendering... Background rendering with the graphics card would be great, like FCPX, but something which doesn't slow down the PC. I would like to see all the effects with GPU acceleration. Some effects, like stabilization, don't use all the cores and take a while... So I want more speeeeeedddd. I still do color correction inside PPro because I don't like the workflow with SpeedGrade, so yes, a big improvement on this part would be great. +100000000000000000000000000000000000000000000000000000000

Make all your requests known to us here: http://www.adobe.com/go/wish

I know it's not a priority for everyone, but some hotkey advances in Adobe Prelude would be killer for me. I love how they built it to be keyboard-driven for the sake of speed, but when transcribing, I need a handful of new hotkeys to make it even faster.

Great idea, David.
We will read your feature requests here: http://www.adobe.com/go/wish

Oh, and Adobe needs to not kill off support for the BMD camera codec. That camera might seriously take off in the next few years, and PP would be left behind due to just not supporting a codec. That would be a huge bummer for them.

It's supported in many of our products, and you can use it right now in SpeedGrade, After Effects, and more if RAW plug-ins are installed. It is not a great codec to edit with in Premiere Pro, however; you would need some very beefy hardware to deal with files like that in editorial. Feel free to make your request: http://www.adobe.com/go/wish

There's a growing but silent base of pros, myself included, who have really gotten to like what FCP X has to offer. I'd be happy to switch to Premiere (mostly for the realtime AE functionality) if it had something resembling the dynamic timeline or the intuitive event/asset management workflow of X.
Swested

There are definitely quite a few folks out there who love the features in FCPX, and it's not a stretch to say that Adobe is watching very carefully. My guess is that some of those revolutionary features (or something very close to them) will show up in CS7. As much as I was irritated by FCPX to begin with, I really do think that it will help push our NLEs in a completely new direction. And that's a great thing, in my opinion.

Agreed. X has come a long way since that messy 10.0.0 release. And if its successful evolution pushes Adobe and Avid to finally innovate with their admittedly ancient NLE platforms, then it's beneficial all-around.

I would argue many already are. See: hover scrub, metadata, etc.

We listen to all reasonable requests. Make yours here: http://www.adobe.com/go/wish Nothing is outlandish, IMHO.

I want Creative Cloud to offer render farm solutions for Ae.
Upload your project file + footage to the cloud and download a rendered version a few hours later, without needing to hog resources on your local system. But that would require CC space to grow beyond the 20GB they currently offer. More like 1TB-ish for video pros.

Since so many in the industry currently depend on QuickTime and ProRes, and since I'm on PC, I want to be able to write to that format (not just read it). Or, indeed, a new codec altogether. The arrival of 4K might just be their way in. On the other hand, I want PPro to continue to handle all codecs, because that's one of its key advantages. Not having to transcode is a huge time and disc space saver.

Furthermore, I want PPro and its effects to be agnostic to the dimensions of either footage or sequences. Right now, Warp Stabilizer requires the footage to match the sequence, which is cumbersome if you're creating a film aspect ratio (2.35 to 1) sequence with 1080p footage. Currently, once created, you can't change the dimensions of a given sequence.

Better SpeedGrade integration - agreed. Have not had a serious look at the programme.

Also, I want better round-tripping between Audition and PPro. Create a mix in Audition and be able to toggle back and forth, just like the brilliance between Ae and PPro.

I've actually had a ton of success going back and forth between Premiere and Audition. You can right-click on your piece of audio, send it to Audition, make your changes, then hit save. The audio is automatically replaced and updated in your timeline. It's never failed me once. I would be thrilled, however, if they could incorporate some kind of batch processing for audio within Audition, because in the past I've had to send each individual clip to Audition, apply some presets (generally noise reduction, EQ, that kind of thing) and send them individually to Premiere. If I could just select all of the sound files, then select my desired presets and let it go, that would be a magnificent time saver.
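On the 2.35:1 sequence point above, the arithmetic for the target frame size is simple enough to sketch (the rounding caveat is my own, not anything Premiere enforces):

```python
# Sketch: working out the frame height for a "film" 2.35:1 sequence
# from 1920-wide footage, as discussed in the Warp Stabilizer comment.

def scope_height(width, aspect=2.35):
    """Height of a frame of the given width at the given aspect ratio."""
    return int(round(width / aspect))

h = scope_height(1920)  # 817; in practice you'd likely round to an
print(1920, "x", h)     # even, encoder-friendly value such as 816
```

Which is exactly why a dimension-agnostic Warp Stabilizer would help: the natural 2.35:1 sequence no longer matches the 1920x1080 source.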
You can kind of do that by recording a favorite for the settings that you want. You still have to send each clip from PP to Audition, but once there you can batch process all the clips with your favorite, then save all. Head back to Premiere and voila! Your audio is golden.

Sure, you can round-trip between PPro and Au, but it's not dynamic. Every time you click the Open in Audition feature, a new clip is created. I want to have just a single link open which can be instantly updated, just like between PPro and Ae.

That'd be quite awesome, but frankly I don't think we're there yet in terms of broadband capacity (at least in the US). One can hope and pray for Google fiber lines, though...

Adobe has never been great with distributed rendering. The other problem is that using a render farm only works well with image sequences. One of the advantages of Apple creating ProRes has been the ability to use distributed rendering. Adobe will definitely need their own mastering codec to be able to do that. I don't know how much of a priority Adobe will give it.

I like your render-farm-over-the-Creative-Cloud idea. Submit it here: http://www.adobe.com/go/wish

I would like to see Prelude use CUDA for better previewing of AVCHD, and also transcode directly into a Premiere Pro project in a way that acts like Log and Transfer. SpeedGrade needs external monitoring to be taken seriously as a grading tool. Prelude is a great application that I don't think people use very often or fully understand what it can do.
Erik Naso

YES. It's absurd (in my opinion) that our Blackmagic monitoring devices are supported by AE, Premiere Pro, etc.... but not in SpeedGrade. They need to fix that asap.

Keep in mind that SpeedGrade came in pretty late to the game. We know that users want better monitoring support, but be sure to add your voice: http://www.adobe.com/go/wish

I haven't been able to work with Prelude. I much preferred OnLocation.
I found it very easy to log and organize footage, but I haven't been able to grasp Prelude yet. Any tips for finding an easy workflow? Joel, here's some videos to help you get started: http://tv.adobe.com/product/prelude/ Hi Erik, Sorry you aren't finding everything you need in Prelude. Let us know all your requests here: http://www.adobe.com/go/wish I normally use PP and AE, but have been playing with a relatively new piece of software called Hitfilm Ultimate. While still a little on the primitive side, the thought behind it is more like Smoke. It is a NLE that also has the ability to create composite shots and integrate them into the main timeline AND play them back in real time. It looks like its going to take some time for it to mature, but the potential is there. Uh... " a breeze to shoot back and forth between Premiere, After Effects". Well, you can go easily from P to AE, but getting back? You'll just get one solid file, certainly not the project you had before. You know what I do? I duplicate the clip before sending it to After Effects so I always have a copy of my original clip in line in case I need to start from scratch. I'm still learning to make the most of PP 6.0. I am loving the adjustment layers. I do my colour grading on one layer. All clips below it are affected and I can easily copy and paste. If I were really asking for the moon, I would like to have a PluralEyes-type feature in PP. I would also like the blade tool to have a sort of snapping effect so that I know it has really snapped, the way FCP 7 does. I'm not sure that it needs its own codec if it's able to handle whatever comes out of all the major cameras. Yeah the blade tool should snap ala FCP7. Super annoying. You should also be able to sort clips by name. I can't tell you how many times I've imported files and then had to delete them because they were in reverse chronological order or worse not in any sort of order. 
I hate that you can't have a real-time monitor of multicam on a second monitor, and is hitting that record button really necessary?
Mike Hendzel

Glad you love the Adjustment Layers, me too! If you want a built-in "PluralEyes"-style feature and snappier snapping, let us know: http://www.adobe.com/go/wish

Avid's customer support is horrible; they make Adobe's look wonderful by comparison.
moebius22

Brilliant idea, an industry-standard alternative to ProRes and DNxHD would be great! CineForm, while great, isn't widely adopted. It's maddening not being able to use ProRes on both Mac & PC. It's the year 2013 and we're still having issues like this?? Yes, I know it's possible to encode ProRes on PC if you try hard enough, but it's not nearly as easy as it should be. It'll be even better if Adobe creates an open industry standard rather than a proprietary one, similar to what they've done with camera raw formats and the DNG format. They certainly have the influence to encourage widespread adoption, especially if implemented well.

To add a bit, our studio is Mac based and we archive our projects as a ProRes 4444 master. However, it would be nice to have a similar format that isn't platform specific, has long-term longevity, has industry-wide adoption, and doesn't require a license to use!

That is a fantastic idea. The industry really is starving for a new standard, and one that's truly platform agnostic.

All great ideas, please add your voice: http://www.adobe.com/go/wish

The single most important feature editors REALLY need in Premiere is the ability to remap the same shortcut across various keys or combinations. It's EXTREMELY irritating. They have to start thinking about ergonomics!! I usually edit and move around an editing program with a single hand on the keyboard, without ever having to reach for the mouse. Avid can do it, so can FCP (don't know about X). What's the point of being able to remap keys if they're not flexible about it?
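On the "possible to encode ProRes on PC if you try hard enough" point above: one route people use is ffmpeg's prores_ks encoder. A rough sketch that just builds the command line (file names are hypothetical; whether a given facility accepts ffmpeg-encoded ProRes for delivery is a separate question):

```python
# Sketch: encoding ProRes on a PC via ffmpeg's prores_ks encoder.
# prores_ks profiles: 0=Proxy, 1=LT, 2=Standard, 3=HQ, 4=4444.

def prores_cmd(src, dst, profile=3):
    """Build an ffmpeg command line for a ProRes master file."""
    return ["ffmpeg", "-i", src,
            "-c:v", "prores_ks", "-profile:v", str(profile),
            "-c:a", "pcm_s16le",  # uncompressed PCM audio for a master
            dst]

# Print the command; run it with subprocess.run(...) or paste into a shell.
print(" ".join(prores_cmd("master.mov", "master_prores.mov", profile=4)))
```

Workable, but it proves the commenter's point: this is a far cry from just picking ProRes in Premiere's export dialog.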
Just plain stupid and inefficient. Also, they need to add a visual keyboard layout when mapping keys (in conjunction with the clunky text interface). I mean, it's all good once you've designed your new remapped keyboard, but doing so is BY NO MEANS as fast as it is with Avid or FCP.
Brandon_07

I couldn't agree with this more. It's frustrating as hell trying to map keys in Premiere, and I've had to do it a bunch of times because, for some reason, it's insanely difficult to get different versions of the program on different machines to read the .kys file that stores your keyboard settings. It's one of those things that's so easy in both Avid and Final Cut too, so it doesn't make sense that it's such a pain in Premiere.

Robert, I improved the Help document that tells you how to transport your .kys file to another computer. Check it out here: http://adobe.ly/WAfgCZ

I seem to remember there is a "Use Final Cut Pro" scheme feature in Edit>Preferences> Keyboard Shortcut>... strange.

Brendon, yes, we definitely hear the request for a customizable, mappable keyboard with drag-and-drop control. Feel free to add your voice for this request here: http://www.adobe.com/go/wish

Mapping the same command over different keys? I'm trying to think of an example where I have done that in the past, but nothing comes to mind. However, you can make a request for that too.

Digital noise reduction without having to use a third-party plugin would be great. I think it is something very essential, and I always wonder why it is not part of the standard package (and not requested more frequently...)

Make a request here: http://www.adobe.com/go/wish

And one more thing, Adobe: why do I have to add my video card to a text file to get PP to support OpenCL? It's annoying that the software can support a whole mess of cards, and all it takes is adding 10 stinking characters to a text file to unlock a better rendering experience. C'mon Adobe.
While we'd like to, it is difficult to support every video card under the sun. That said, you can suggest more GPUs for us to test out: http://www.adobe.com/go/wish Keep in mind the minimum requirements for any GPU running the Mercury Playback Engine.

Avid is crap. Too expensive and not intuitive at all.

It is too expensive, but learn how to use it properly and it's the most pleasant, free-flowing editing experience there is. To just say it's crap is naive.
Neill

I agree, Avid is not an easy piece of software to learn, and it requires a completely different frame of mind than the other NLEs, but once you get into it, it's probably the fastest, most precise editing software out there.

I think that's the problem. Avid does some things very well, like the Composer window, but I find myself having to go through additional steps to get things done when other NLEs don't require this. I'll put in the time because the market where I work uses it, but I see many students choosing PP over Avid. Like I said before, if Avid wants wider adoption among independent users like me, they really need to step up their customer support. When something goes wrong with the software, it's really a pain to get the help you need.

That is more a statement about your abilities than the software. You don't have to like using it, but you have to respect the fact that it's being used everywhere by a lot of people on lots of high-profile projects.

I don't agree with the assertion that Adobe needs a proprietary codec... part of the reason Premiere Pro CS6 and After Effects utilize hardware playback so well is that they use the native video element from the two operating systems (MS-DV playback on Windows, QuickTime MOV on Mac)... I have my own custom intermediary editing settings that use the native support for P2 playback for HD footage... which makes DSLR footage fast when you have the recommended Nvidia hardware.
Building an editing system around a proprietary codec would make P.Pro less versatile... besides, why waste money on a proprietary codec when they could use On2 VP8 from Google for free?

How does that P2 custom setting work? Sounds like something I should try.

What about some proper support for raw DNG files? We're already seeing the beginning of a new era of raw availability with the BMCC, D16 and the Kinefinity offerings, yet the process of getting DNGs into Premiere (despite being their own codec) or just about any NLE is far more time-consuming and processor-intensive than it should be. Surely this can't be a difficult fix?

Adobe created Cinema DNG, but seems to have lost interest lately. They made it open source, but haven't had as much enthusiasm as when they first introduced it.

Guys, here's the state of Cinema DNG. http://blogs.adobe.com/aftereffects/2012/09/cinemadng-in-after-effects-c... Cinema DNG is not a codec you'd want to edit with in Premiere Pro, IMHO; however, make a request if you like: http://www.adobe.com/go/wish

1. Proxy editing a la After Effects - In AE we can go in and select low-res files (even stills) for each clip. But there's no way to do anything like that in Premiere. Well, OK, you can trick Adobe with multiple folders and renaming folder names. But it's not exactly intuitive.

2. A real noise-reduction tool without any need for third-party solutions. One of the biggest reasons for me to not get into SpeedGrade is that I usually need to denoise footage. I do this with Neat in AE, and while there I see very little reason to not slap on Colorista II while I'm at it. And SpeedGrade just feels superfluous. I would absolutely LOVE it if Prelude could use Neat or the like, so I can actually use it to batch-process the files like I want to.

3. EDL viewer in AE. Just to watch the comp in its context without having to switch and dynamic link.

4.
File effects - With this I mean effects that I can apply to files without having them tied to a single sequence.
jmalmsten

1. Proxy - I think this will only increase in need as RAW files become more common for editors. I certainly second this idea.

2. This is an interesting idea I haven't considered. I suppose with all the DSLR/large-sensor cameras out there, and people's increased desire to shoot in low light, this is something that would be good. I like this idea. It would be great if Adobe just bought out an already-functioning option (like Neat) instead of building their own, kind of like they did with Automatic Duck for CS6. I would also love it if Prelude did something like this as a batch process... and while we're dreaming, it would be cool if they also snatched up FilmConvert and gave you the option to apply that in Prelude.

You can make a feature request for these items: http://www.adobe.com/go/wish

I'd like to see the addition of handles to dynamically linked videos being sent to AE. Having to pull the shot out of the timeline to add handles to facilitate a crossfade is very irritating. Also, and this just might be me, but have the cursor not try to be so freaking helpful all the time. Having to zoom in multiple times just to select an effect is very time-consuming. They're trying to get it to do too much simultaneously, and it often leads to it doing none of it well.

Regarding workflow with Dynamic Link: fair enough. You do need to add handles, but only if you are creating a transition in and/or out of the dynamically linked comp. The same is true for nested sequences. I'm sorry you're annoyed by the tools; at the proper zoom level, they behave better. Please make your feature requests, though: http://www.adobe.com/go/wish

This article hits Avid pretty hard and shouldn't. Avid has stood by its users since the start. I don't think they need to mess around with the interface. A lot of complaints people have with Premiere are not even issues in Avid.
And Avid won't go away. With FCP gone, there are more licenses for Avid than before. Why? It'll run on PCs and Macs, a boon for schools and small production houses. As a 10-year Avid user and former user of Premiere and FCP, I prefer the way Avid is now. I think that Avid is better at rendering on the fly than the other two products, especially Adobe. Adobe is trying to push a realtime, on-the-fly paradigm, which is the wrong way to go, burdening the systems. Codecs in cameras are going to keep changing; it only makes sense to transcode into the editing system's codec. To rely on AMA is a recipe for disaster, as well as problems throughout the edit. What happens when RAW truly hits? Expecting a perfect RAW workflow from camera to timeline to output will take some time until computers catch up. History is repeating itself: the Red Rocket is the current Adrenaline box. Expecting that type of workflow now with all these codecs is ludicrous. How hard is it really to transcode into the Avid codec and edit? Premiere is a joke because it doesn't do this. But it does plug into After Effects really well. Until the BMCC and DBolex appear, challenging the workflow of every editing system, we'll never really know what the future holds. The next version of the Mac Pro will be a bigger indication of the performance we can expect moving forward.
Sathya Vijayendran

Believe me, I actually really love how Avid is set up, and I love how it works, but the fact that they're in some serious financial trouble in regards to their software department is undeniable at this point. I only threw an opinion out there to see what other people thought on the matter.

Thanks for the article and your response. Hopefully Avid makes a splash at NAB and implements your ideas.

Robert, Avid makes most of their revenue from hardware, not software. I doubt much of anything in Media Composer will change that.
I've even read some analysts say the MC price drops are not only pointless, but hurting their revenue further. They need to drive Isis sales, for example, and, apparently, they're not. As a side note, I believe Adobe may have said they're dropping .5 upgrades. I don't doubt another big upgrade, but I think it'll be v7. I suspect FCPX will have a major update around NAB. Don't underestimate EditShare, which will be previewing Lightworks for Mac. My hunch is that their long-term goal is to position Lightworks and EditShare as a competitor to Avid's MC/Isis. I imagine having a free NLE that at one time held some measure of "Hollywood" support, getting it into more hands than MC, is part of their marketing and market share objective.
Craig Seeman

If some advancements were made to integrate AE and Premiere in a greater way - that would get me excited. That said, looking forward as a one-man band for much of my work - I'm very hooked into Adobe. Even if Avid came out with a glorious update to their interface (upon which I have edited hundreds of TV programs) - I wouldn't make the jump. I'm not a fan of the Avid workflow. Adobe makes more sense to my brain. Yes, Avid works well for many, but I always found it a chore rather than a pleasure to use. But, since I was offline editing for a Symphony suite, I had no other option.

With the introduction of the Creative Cloud, I would be hard pressed to jump ship to another application, when for my 50 bucks or so a month, I can always have Adobe's latest. Creative Cloud is genius - it stops many pirates and gets people hooked into the upgrades, guarantees Adobe a steady income, is less expensive for the production companies, etc. Adobe may not be in the top-level editing houses, but they have continued to innovate and improve, and are now building a larger user base with steady cash flow coming in - I suspect they'll continue jumping ahead. We do, however, need Avid and FCP to stick around, as competition spurs innovation.
Lane, I'd be curious to hear how you would like to improve Dynamic Link between Premiere Pro and After Effects. Leave your feedback here: http://www.adobe.com/go/wish

Is anyone else intensely annoyed by the lack of pitch correction when fast-forwarding (with JKL) in Premiere? I spend a lot of time listening to interviews at double speed in FCP 7, and now that I've just switched to Premiere for a project, the chipmunk voices are really getting to me... Is there a plugin to deal with this? (And I do know about Shift-L to get smaller increments of speed increase, but that still isn't satisfying to me.) I love Premiere in many ways, but this is something I'm not sure I'll ever get used to.

In Premiere Pro on Mac you have keyboard shortcut presets like in FCP7 or MC 6.5.

It doesn't bother me, but it drives my wife crazy when she edits. She definitely wants pitch correction as an option in CS7.

Hi Ben, regarding pitch, it's exactly how Media Composer treats it. However, if you want FCP-style pitch as you JKL, you can make a feature request: http://www.adobe.com/go/wish That said, I use the Shift key and tap J or L 4 or 5 times to hear FCP-style pitch. You can also slow down the speed by pressing Shift+J 3-6 times when going forward or Shift+L 3-6 times when moving backwards. Kind of like a variable shuttle.

I hate pitch correction when fast-forwarding through interviews. I want to hear every word, not skip a bunch of them so that the blips I hear sound lower in tone.

Aside from SpeedGrade, Premiere would kill for background rendering and render farm support. SpeedGrade also needs to support more I/O cards.
David Sharp

Please make your feature requests: http://www.adobe.com/go/wish

Adobe has been promising SpeedGrade Mac support for output to calibrated monitors since NAB 2012. It never happened, despite acknowledging that this was recognized as a very important need. Judging grades on a calibrated monitor is a BASIC requirement of grading.
But here we are a year later and no solution despite the promises. Moved on to Resolve. Too slow a response from Adobe. Love the rest of the Suite though - CS6 is fabulous. Said goodbye to FCP7 long ago. John Richard I've loved Adobe since I got my first PC. In my opinion, when Adobe upgrades their software, it would be nice to have online editing capability (maybe like Wirecast, with or without streaming) that works directly with the Premiere timeline, just like Media 100. This would be really helpful for some users... I think it's better if the company has a billion users, even if they are small fish, compared to only one whale. hannreuhieck Along those lines, check out this video: http://tv.adobe.com/watch/adobe-anywhere/introducing-adobe-anywhere-for-... Being a small freelancer that has bought into the Creative Cloud, I can say that Adobe will be getting my dollars for a long time to come. What I would really like to see are a few more After Effects features come into Premiere, such as 3D camera tracking, or making the dynamic link a little easier to use. I feel that with the dynamic link, you could really have a complete hybrid program that allows you to simply switch workspaces to add effects to a Premiere timeline. The number one thing I feel is lacking is an effective colour correction solution. The three-way colour corrector is improved, but the secondaries still suck. I find that I am usually building several layers with masks, or eventually going to AE to do simple colour work. An interface such as Colorista would put me in absolute colour heaven with Adobe. Alex Campbell Alex, please make your requests, we read them all: http://www.adobe.com/go/wish I can't find the website now, but I remember looking up how much of Avid's profits come from Media Composer/Symphony. It's about a third. Yes, these are important products, but it's not their only source of revenue. ProTools is still killing it and every news division I know has an Interplay/ISIS.
Avid doesn't have the money/engineers that Adobe and Apple do. Major rewrites take loads of time and money. But because they are now at 64-bit you will see a lot of small changes that make a big difference. I would expect them to continue to "steal code" from ProTools (a la SmartTool) and continue to give us audio improvements. They know they are the only solution for multiuser and 3D, so they may double down on that. They know Symphony's color corrector needs an update, but with limited resources you might see more integration with Baselight. Sam Zimman One of the reasons, and there are many, that I left Avid and moved to PP CS6 is that the Ken Burns effect in Avid took so many clicks. Premiere Pro makes it super simple to move a picture or even multiple video clips with just a couple of clicks. I do wish that Premiere Pro would indicate end of clip in the record monitor for those of us who prefer editing with keyboard shortcuts as opposed to dragging and dropping. Final Cut and Avid both let you know when you are at the end of a clip or at the beginning of a new one. Sean Nipper I gotta say, major props to Adobe for being involved with forums, including the comments here. I've been impressed with how active staff are on Creative Cow, and of course on the Adobe forums... but commenting here and replying to a lot of these gripes/requests definitely won them some points from me. How great is it that a representative from Adobe is taking the time to address our concerns here? While I don't expect all of our issues to be solved overnight, and I realize they probably have a huge list of things to fix/improve, it's cool to see that at least one of these companies is paying attention. Derik Savage Agreed, Derik! Thanks for stopping by, Kevin. No problem, guys. Glad to be here. I am taking notes on all your requests too, BTW. Since jumping ship from FCP7, the one thing I miss is the simple ability to arrange the imported clips from my DSLR in chronological order.
I know there is a workaround, but the convenience of the 'Arrange' option would be great. I'm not an editor, but I run a small production division for a startup company, and according to my editing team, the only thing that stops us from switching to Premiere is that it currently doesn't have the ability to open more than one project at a time. Is this still a restriction for the platform? You can't have more than one project open, but you can import one project (or just selected sequences from it) into another. That should take care of most needs for working on several projects at once? I do it often. I would definitely NOT like to see Adobe develop a proprietary codec. I remember the difficulty I had trying to work on footage with my Windows system that had been touched by a Mac (stupid Apple Intermediate Codec!). The last thing we need is footage locked into an application and unusable without it. Damian T. Lloyd I would REALLY love it if Adobe could fix whatever it is in Premiere Pro that produces "End of File" errors in clips once they get to Encore. Premiere Pro is great, I love it and am certainly going to keep working with it, but these End of File errors are killing me. For the love of all that is holy, could Adobe PLEASE fix them. They occur in Encore for no adequately explained reason due to something that is put into the clips that are exported from Premiere Pro. Encore does not identify which clip, let alone where in the clip, the error occurs so that you can go back to Premiere Pro and fix it. I literally spend days just trying to find which clip is producing the error in Encore, days that I could have spent editing, so I can fix the problem and get the end product out to customers. In every forum dealing with Adobe software there are people trying to find solutions to the problem, which apparently has been around since CS2.
There are two problems going on, I suspect, which is what makes it difficult: the fact that Encore is so sensitive that it has this problem with clips out of Premiere Pro even though EVERYTHING else can play them, and the fact that something in Premiere Pro is so incompatible with how Encore needs things to be that these errors appear in the files in the first place. Haydn Allbutt I agree it should work, but why do you export files in the first place? Have you tried File > Adobe Dynamic Link > Send to Encore? Hopefully this responds correctly; pressing reply just took me to the "Leave a comment" window. To answer your question, Jarle, I work with exported files from Premiere Pro rather than dynamically linking for several reasons: 1) There are two of us working on the editing at once; one of us works on the videos in Premiere Pro and exports them, and the other then authors the DVDs with those clips while the first is editing the next DVD's worth of clips; we are parallel processing, in other words. 2) We have also specialised in the software: I am our Encore specialist and my wife is our Premiere Pro specialist - so we tend to divide our labour that way rather than both working on separate discs at once. 3) One of our computers is a lot more powerful (and therefore faster) than the other, so we aim to do the rendering step on that computer. If Avid could revisit the DV Express Pro option with Media Composer, offering functional entry-level 1080 HD editing at the $1200.00 mark, with the option to upgrade features as you need them, like modular Lego made to order. That way people can get in with Avid when they are just starting out and upgrade as they need to or can afford along the way. I have been editing with PP since the CS4 version and am now using CS6. At this point I would not consider using anything else, as I have my workflow nailed down and it functions nearly seamlessly, especially between PS, AE and Encore, despite the occasional hiccup.
Now to Speedgrade: what a great tool for color grading, at least I was amazed. However, taking the results of it and getting it back into PP is so painful that I cannot really integrate it into my present workflow without adding a lot of additional time and HDD space, which is most unfortunate really. It would be a great boon if the program was able to work within PP like a plug-in without all the hassles. To me it was released too early and not integrated very well like AE, or especially PS. I guess I should be writing this to Adobe, whose tutorials make it seem so easy, but it has not been so in my experience. Maybe I should consider something from Red Giant... Yazis Pr needs to up its game when it comes to I/O. I have a Black Magic Studio Pro and have endless preview quality problems. This was reported when CS6 came out and nothing has been done (and Adobe admitted that they knew about the issue). I also agree that Speedgrade needs to be more user friendly; it takes forever to work on a big project. WOW, NOBODY MENTIONED HOW YOU CAN'T EDIT CinemaDNG FILES on Adobe or Avid... That's a must. I second that. That needs to change pronto! There's no good excuse for Premiere not to support DNG sequences. I would also like to be able to move in and out of Camera Raw with image sequences on the timeline, just like you can do with RED footage. In my opinion there's no better CC software and I'd like to be able to use it in a flexible manner. Also in AE. Interesting article. As someone who has recently moved to Avid, it seems to be in a bit of a mess. There seem to be various ways to do simple tasks, which obviously were implemented one way and then overhauled without the old infrastructure being removed. This makes things very confusing for someone new coming in to use a piece of software where there is no obvious right way to manage assets and export video. PP, however, really seems to have nailed it of late. Sorry, point being, I see Avid slimming down in the near future.
Follow NFS © 2019 NONETWORK, LLC. All Rights Reserved.
Peak Sloth The adorable tropical American mammals known as sloths were named after one of the seven deadly sins because of their lethargic demeanor. However, their slow movements are a way to conserve energy until it's needed. They can move surprisingly fast when evading predators. They're lethargic on the outside, but hiding a surprising inner burst. We could think of no better mascot for our podcast network. In this modern era where podcasts are abundant, our shows may seem like just another lazy distraction. But listening to them will quickly transport you to a new realm, like a sloth sprinting away from a hungry harpy eagle. The Baltimore Improv Group Podcast This show combines improvised scenes with engaging interviews to take you into the world of the Baltimore Improv Group. The Curioso The Curioso is a biweekly podcast cohosted by Christopher Scarborough and Joseph Taylor. In this podcast we talk about the things that made us curious as children, the things that make us curious as adults, and try to make sense of these mysteries. We like to discuss bizarre occurrences and forgotten history. Hobo Radio HoboTrashcan editor Joel Murphy and his longtime friend Lars Periwinkle discuss the latest in pop culture and their lives. With a slew of celebrity interviews under his belt and his access to television and movie screeners, Joel offers a knowledgeable perspective on pop culture while Lars is the voice of the people … provided that the people are really into Batman, Doctor Who and Star Wars. We Have to Ask Assisted by their guests, Jonathan and Marty bring new insights and serious discussion to topics that don't get enough attention. For some reason, our RSS feed is only able to pull one podcast episode from each dimension where Marty and Jonathan podcast together.
Yang Tongyan | Status: Deceased | China A writer and member of the Independent Chinese PEN Center, Yang Tongyan was sentenced to 12 years imprisonment for "subversion of state power," and died in November 2017 while… Chinese Dissident Honored for Writings Yang Tongyan is serving a 12-year sentence in a Chinese prison for publishing anti-government articles on the Internet. Larry Siems, director of the PEN American Center, explains why Yang… Jailed Epoch Times Contributor to Receive Award A Chinese dissident writer and Epoch Times contributor has been awarded the prestigious 2008 PEN/Barbara Goldsmith Freedom to Write Award. However Yang Tongyan will not be able to attend… Writer Jailed in China Wins a PEN Award Yang Tongyan, right, a Chinese writer serving a 12-year prison term for posting antigovernment articles on the Internet, will receive the 22nd annual PEN Freedom to Write Award, Bloomberg… Jailed Chinese Writer To Receive PEN Award Yang Tongyan, a Chinese writer serving a 12-year prison term for posting anti-government articles on the Internet, will receive this year's PEN/Barbara Goldsmith Freedom to Write Award. Jailed Chinese Writer to Receive PEN/Goldsmith Award Yang Tongyan to Receive 2008 PEN/Barbara Goldsmith Freedom to Write Award Press Release, April 11, 2008 PEN American Center today named Chinese dissident writer Yang Tongyan, who is currently serving a 12-year prison sentence, as recipient of its 2008 PEN/Barbara Goldsmith Freedom to Write Award.
Oxidized Low Density Lipoprotein (OX-LDL) Induced Arterial Muscle Contraction Signaling Mechanisms. C. Subah Packer (1,*), Ami E. Rice (1), Tomalyn C. Johnson (1), Nancy J. Pelaez (4), Constance J. Temm (2), George V. Potter (1), William A. White (1), Alan H. Roth (1), Jesus H. Dominguez (2), Richard G. Peterson (3,5). Affiliations: (1) Department of Cellular & Integrative Physiology, Indiana University School of Medicine, Indianapolis, Indiana 46202; (2) Department of Medicine (Nephrology), Indiana University School of Medicine, Indianapolis, Indiana 46202; (3) Department of Anatomy & Cell Biology, Indiana University School of Medicine, Indianapolis, Indiana 46202; (4) Department of Biological Sciences, Purdue University; (5) PreClinOmics (PCO), Inc., USA. Publisher Id: TOHYPERJ-6-20; Received Date: 24/04/2014; Revision Received Date: 25/04/2014; Acceptance Date: 26/04/2014; Electronic publication date: 30/5/2014; Total Views/Downloads: 839. © 2014 Packer et al.; open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. * Address correspondence to this author at PreClinOmics (PCO), 7918 Zionsville Road, Indianapolis, IN 46268, USA; Tel: 317-872-6001; Fax: 317-872-6002; E-mail: cspacker@preclinomics.com. Abstract: Oxidized low-density lipoprotein cholesterol (OX-LDL), a reactive oxidant, forms when reactive oxygen species interact with LDL. Elevated OX-LDL may contribute to high blood pressure associated with diseases such as diabetes and obesity. The current study objective was to determine if OX-LDL is a vasoconstrictor acting through the OX-LDL receptor (LOX1) on arterial smooth muscle and to elucidate the intracellular signaling mechanism.
Arteries were extracted from Sprague-Dawley rats (SD) and obese F1 offspring (ZS) of Zucker diabetic fatty rats (ZDF) x spontaneously hypertensive heart failure rats (SHHF). Pulmonary arterial and aortic rings and caudal arterial helical strips were attached to force transducers in muscle baths. Arterial preparations were contracted with high KCl to establish maximum force development in response to membrane depolarization (Po). Addition of OX-LDL caused contractions of varying strength dependent on the arterial type. OX-LDL contractions were normalized to % Po. Caudal artery was more reactive to OX-LDL than aorta or pulmonary artery. Interestingly, LOX1 density varied with arterial type in proportion to the magnitude of the contractile response to OX-LDL. OX-LDL contractions in the absence of calcium generated about 50% as much force as in normal calcium. Experiments with myosin light chain kinase and Rho kinase inhibitors, ML-9 and Y-27632, suggest OX-LDL induced contraction is mediated by additive effects of two distinct signaling pathways activated concomitantly in the presence of calcium. Results may impact development of new therapeutic agents to control hypertension associated with disorders in which circulating LDL levels are high in a high oxidizing environment. Keywords: Arterial smooth muscle, calcium-independent contraction, diabetes, hypertension, oxidized-LDL, vasoactive oxidants. The Open Hypertension Journal V. G. Athyros Department of Internal Medicine Aristotle University Biography of V. G. Athyros Dr. Athyros obtained his medical diploma (MD) from the Aristotle University of Thessaloniki, Greece, and a doctorate (Ph.D.) in Internal Medicine from the Democritus University of Thrace, Greece. Currently Dr. Athyros is a Professor at the Aristotle University of Thessaloniki, Greece. He has published more than 464 scientific publications and has more than 12,000 citations. Dr.
Athyros is Editor-in-Chief of one medical journal, Associate Editor of another, and Section Editor, Guest Editor, and Editorial Board member of several journals. Department of Internal Medicine, Aristotle University, Thessaloniki, Greece Total Views/Downloads: 80,694 Full-Text HTML Views: 4,569 Abstract HTML Views: 6,999 PDF Downloads: 3,662 Hypertension and its Relation with Waist to Hip Ratio in Women Referred to Bojnurd Urban Health Centers in 2014 The Open Hypertension Journal is an Open Access online journal, which publishes research articles, reviews, letters, case reports and guest-edited single-topic issues in all areas of hypertension research. Bentham Open ensures a speedy peer review process, and accepted papers are published within 2 weeks of final acceptance. The Open Hypertension Journal is committed to ensuring the high quality of published research. We believe that a dedicated and committed team of editors and reviewers makes it possible to ensure the quality of the research papers. The overall standing of a journal is, in a way, reflective of the quality of its Editor(s) and Editorial Board and its members. The Open Hypertension Journal is seeking energetic and qualified researchers to join its editorial board team as Editorial Board Members or reviewers. The essential criteria to become Editorial Board Members of The Open Hypertension Journal are as follows: Experience in hypertension research with an academic degree. At least 20 publication records of articles and/or books related to the field of hypertension research or in a specific research field.
Maria's Books MARIA SCRIVENS, Independent Usborne Organiser February Picks for Schools With simple rhyming text and phonic repetition, Usborne's Phonics Readers are specially designed to develop essential language skills and early reading. Seal at the Wheel is the newest phonics title. In typical phonics style, it combines humorous illustrations with fun rhyming text, ideal for engaging young readers. This is the story of Seal, who is thrilled with her brand new speed boat. But is her need for speed going to land her in trouble? Young Reading Series Ideal for readers growing in confidence, the Young Reading series is divided into four levels that increase in difficulty. There are two new titles in the Young Reading series this month: What was the Black Death and how did it change the world that we live in? Find out in the illustrated and informative title The Black Death. Ideal for history fans and perfect for readers growing in confidence. High walls, barbed wire, savage dogs and guards who shoot to kill are just some of the obstacles faced by escaping prisoners. Escape tells the extraordinary tales of men who risked their lives to regain their freedom. Ideal for newly independent readers who prefer fact to fiction. Beginners Plus With lots of illustrations and fact-filled text, this series is great for readers who are growing in confidence and prefer non-fiction. This month sees the release of two new titles: How many types of submarines are there? What are they used for? What is it like to live in one? Find out the answers to these questions and many more in Submarines. Discover amazing true survival stories of people stranded alone in the wildest places on Earth, and learn simple and useful survival techniques in the fascinating information book: Survival. Our diverse fiction list is full of incredible characters and unputdownable plots, ideal for engaging any reader. Meet the Twitches, four tiny rabbits who live inside a teacup house. 
They belong to a girl called Stevie and she loves playing with them. But guess what? These toy rabbits have a secret. They come alive when Stevie isn't looking. With bright, coloured illustrations throughout, Meet the Twitches is great fun for young readers. Every summer Quill and his friends are put ashore on a remote sea stac to hunt birds. But this summer, no one arrives to take them home. Surely nothing but the end of the world can explain why they've been abandoned - cold, starving and clinging to life, in the grip of a murderous ocean. How will they survive? Based on a true story, Where the World Ends is a challenging read that will transport you to the harsh reality of the past. Perfect for school libraries, or the classroom, 100 Things to Know About History is sure to be a sought-after title. Did you know that mammoths and pharaohs walked the earth at the same time? Or that over 30 types of gladiators fought in ancient Rome? Find out everything you never knew you wanted to know about history in this fun and informative title. Great for encouraging wider reading on this tricky subject, Politics for Beginners covers key topics from elections and government, to fake news, immigration and human rights. With bright infographic-style illustrations, and carefully selected internet links, this is the ideal introduction to the world of politics. Key Skills Wipe-clean Written to support the National Curriculum, our Key Skills series is ideal for helping children to gain confidence in the skills they are learning at school. Coming with a wipe-clean pen, they are great for children to use to practise skills again and again. Perfect for that extra support at home. This month's titles are: Spelling 7-8 and Dividing 6-7.
Wipe-clean Series Perfect to support home learning, the Wipe-clean series covers a range of topics, both educational and fun, to help children to develop essential skills from pen control and letter formation, to concepts such as money and telling the time. This month's title is: Wipe-clean Money Tags: schools
About Geri D' Fyniz Geri D' Fyniz is an American music artist, songwriter, music producer and entrepreneur from Chicago, IL. Born the youngest child of a former print model and a street hustler, both of whom embraced different versions of entrepreneurship, this South Side of Chicago native keeps his ear to the streets and his mind always on complete ownership and control of his brand. Geri D' Fyniz is conscious enough to empower his people, but also humble enough to avoid bashing the "Non-Woke" crowd. With that said, his debut single "Unapologetic" pays tribute to the late great James Brown by declaring "I'm Black and I'm Proud" in a modern-day manner. The overall message that Geri wants the listener to have is that this oppressive system affects everyone who doesn't maintain or benefit from it, and it must be destroyed and replaced with justice. Geri is putting the finishing touches on his debut album, entitled "Where Is The Lie???", and looks to continue speaking truth to power while also empowering Black people. Geri D' Fyniz believes that every serious up-and-coming independent artist should know how to fuel the entire process of their musical career. As his own label owner (D' Fyniz Enterprises), video director, graphic designer, occasional engineer, and music publisher, Geri embodies true INDEPENDENCE. "I love the fact that I wear various hats when it comes to my music... I'm not trying to do it alone but refuse to wait for anything or anyone" - Geri D' Fyniz So without further ado, please show your love for Geri D' Fyniz and D' Fyniz Enterprises. Geri D' Fyniz Follow Geri D' Fyniz on www.GeriDFyniz.com www,onlymyopinionmat...
Education, Featured, Jefferson County Jeffco school board member frustrated over reorganization's 'Plan B' March 23, 2015 By Complete Colorado The inner workings of a multi-school reconfiguration have left one Jefferson County Public Schools Board of Education member feeling like he was hung out to dry and a Wheat Ridge city councilwoman saying it was all a big misunderstanding. The reconfiguration created a "Plan B," about which another school board member has failed to answer questions concerning her involvement and what really happened. The uncertainty has furthered the notion it might just be the latest in a string of tactics by political opponents to make the Jeffco board majority look bad and add fuel to a recall debate. "When I see elected officials and board members asking to take a look at another option, I listen. Who better represents the community?" said board member John Newkirk Friday. The board voted 5-0 on Thursday to continue with a district-staff-supported plan that would reorganize a group of schools in the Jefferson and Alameda areas. At issue is a plan that will reconfigure a group of schools that are either overcrowded, in poor condition, performing poorly, or all three. However, after Newkirk supported a group of residents and elected officials wanting to look at other options, those same officials backed out, making it look like Newkirk had a political agenda. The reconfiguration is partly needed because Wheat Ridge 5-8 has been on turnaround or priority improvement status with the Colorado Department of Education for the allowed five-year time frame. District personnel were charged with making their own changes to the school or facing the state stepping in to do it for them. Additionally, Sobesky Academy, a K-12 that educates hard-to-serve children with severe emotional and behavioral disorders, was bursting at the seams and in need of a better facility, district officials say.
Wheat Ridge seventh and eighth graders will move to Jefferson, Stevens Elementary will move its entire K-6 student body into the current Wheat Ridge facility, and the empty Stevens building will become the new home for Sobesky. However, on the last day of 2014, a group known as the Wheat Ridge Education Alliance (WREA) sent a letter to Jeffco Superintendent Dan McMinimee asking the district to take a look at another option: Plan B. The plan was devised by a handful of the group’s members over concern that the district-supported plan would close a second school in Wheat Ridge in five years and move Sobesky into one of its neighborhoods, said WREA Chairwoman Genevieve Wooden. “I understand that a school like (Sobesky) is needed,” Wooden said. “But to close another Wheat Ridge school and then put Sobesky in a neighborhood that can’t use it, we just wanted to come up with a different plan.” Wooden says everything was a big misunderstanding. “There is no doubt about that,” Wooden said, supporting Newkirk’s beliefs. “I thought the idea was dead. I did not expect it to move forward. That it went this far is astounding to me.” Plan B involved several other schools, including The Manning School, an option school for seventh and eighth graders that is both fully subscribed and one of the highest performing schools in the district. “Plan B is supported by principals and many teachers … Wheat Ridge families and Board of Education members we spoke to,” said the letter signed by Wooden on the group’s letterhead. But only three members actually supported it, Wooden said, adding this was the first time the group had done something like this and it was handled all wrong. She did not know why the letter claimed to have prior support from Jeffco board members. “I can see the confusion,” Wooden said. “And I’m very concerned about how that confusion happened. Jill Fellman knew about it, but I do not believe she supported it,” Wooden said. 
“And I personally did not make contact with any other board members.” However, Fellman never mentioned anything about the plan before or after the letter was forwarded to the board in January. Likewise, she never commented on Plan B until Thursday when she said she never supported it. Fellman did not return phone calls from Complete Colorado seeking comment on her role. When district staff killed the idea, Wooden, who is also a Wheat Ridge City Council member, said WREA community representatives Chad Harr and Guy Nahmiach told her they would make the presentation to the board at its March 5 meeting. Wooden said that, too, turned into a misrepresentation. According to her, Harr and Nahmiach were not to represent WREA. But they did, and Newkirk felt compelled to do something, he said. Newkirk said since Fellman wasn’t going to represent her own district, and since the letter purported to have a wide range of support, he proposed the new plan, tabling the vote until board members could gather more input. “I believe it’s my duty to give it some consideration and give them a voice,” Newkirk said. “I made the motion to table the March 5 vote to compel discussion and community input before it went to a final vote, not to decide right then and there.” That’s when opponents began calling it the “Newkirk Plan” and accusing Newkirk of not listening to the public. “I’m told by the opposition on the board all the time that we need to listen to the community,” Newkirk said. “But I guess just not this community. It is double speak. We’re supposed to listen to one set of people, but not this set? “ After a community meeting and visits to some of the schools, Newkirk learned those in support of “Plan B” were backing out. He withdrew his motion at Thursday’s meeting because of the lack of support. He added that he wouldn’t change his choices because he would never tell anyone they couldn’t speak. He also believes it is his responsibility to look at all the possibilities. 
“Who am I to put my hand in their face and say no?” Newkirk said. Tags: cea, Colorado, education, Jeffco, Jeffco Public Schools, Jeffco school board, NEA, teachers union, wheat ridge, wheat ridge city council Author: Complete Colorado
Multiamory Podcast 221 - A Best of Episode By Dedeker Winston, Emily Matlack, and Jase Lindgren. It's time for our Patrons' favorite moments! This episode was created by suggestions from our awesome private Patreon group members. We asked Patrons what some of their favorite moments have been on the show, and these are just a few of them. Find out which moments made the list, and we'd love to hear your feedback about future episodes like this! If this show is helpful to you, consider joining our amazing community of like-minded listeners at patreon.com/Multiamory. You can also get access to ad-free episodes, group video discussions, bonus episodes, and more! Multiamory was created by Dedeker Winston, Jase Lindgren, and Emily Matlack. Our theme music is Forms I Know I Did by Josh and Anand. Please send us your feedback and questions to info@multiamory.com, find us on Instagram @Multiamory_Podcast, tweet at us @Multiamory, check out our Facebook Page, visit our website Multiamory.com, or you can leave us a voicemail at 678-MULTI-05. We love to hear from our listeners and we read every message. 236 episodes available, averaging 62 mins in duration. 229 - Live: Ask Multiamory (54:27) Welcome to our live show! For the first time, we're taking questions from our listeners about polyamory and any specific issues they're running into in their own relationships. Let us know how you feel about shows like this in the future, and we might start incorporating them into our podcast routine!
This shredding of the Wilson Doctrine will make whistleblowers think twice
Baroness Jones brought a legal case to challenge the blanket collection of citizens' electronic metadata
Thursday, 15 October 2015 9:04 AM
By Jenny Jones
Speaking to people – be they constituents, campaigners, experts, concerned members of the public, or whistleblowers – is central to any democratic system. So if people cannot, or are too scared to, speak to their representatives in Parliament, how can any politician, whether in the House of Commons or the Lords, possibly do what they are put there to do? How can we represent the wishes and desires of the people if the people feel they cannot speak to us with the safeguard of privacy?
Yesterday's announcement by the Investigatory Powers Tribunal, that Parliamentarians’ communications are not protected from interception by the ‘security services’, means in effect that we can all be spied on. It also means that people who want to report wrongdoing, corruption or illegality to their elected representatives can't be sure of any protection. The Tribunal made its announcement – declaring that politicians and the public can be and are routinely spied upon – only because Caroline Lucas MP, my fellow Green Party member, and I made a legal complaint, because it appeared that all electronic communications data sent in or coming through the UK was being monitored by security services.
This is in direct contradiction to the so-called ‘Wilson Doctrine’ – a promise made by every Prime Minister since Harold Wilson, including the current holder of the post, that the communications of members of the Houses of Parliament would not be intercepted by the security services. I was on the Metropolitan Police's Domestic Extremist database for more than ten years, when I was both an elected representative of the people of London and on the Metropolitan Police Authority, which exists to scrutinise them. Although I was offended at the designation, as a politician I expect to be scrutinised. But I'm concerned for those people who contact me or Caroline, including campaigners whose lives have been ruined by undercover police spies, and asylum seekers who live in fear of deportation to states where their lives are in danger. Because the very real danger is that those people, knowing now that they are not protected when contacting us, will think twice before asking for our help – reducing the likelihood that we can help them, reducing our connection to the people we represent, and reducing the precious first-hand knowledge only they can give us of the challenges they face and the state of people's lives across the UK today. Yesterday's ruling is not just a reminder that we must all be vigilant against continued attacks on our personal rights and liberties, but also a potentially extremely serious threat to the entire democratic and representative system on which our society is founded. Baroness Jones of Moulsecoomb is the Green Party's representative in the House of Lords and sits on the London Assembly. The opinions in Politics.co.uk's Comment and Analysis section are those of the author and are no reflection of the views of the website or its owners.
The Secret Wife - Linda Kavanagh
Laura Thompson is getting married. The university lecturer has got her man, her dress and her hopes of a long and happy life with stockbroker Jeff. But the man of her dreams isn't all that he seems, and before long dark clouds are gathering on the horizon...
Sadly, Laura has little option but to end her marriage. But leaving Jeff doesn't bring an end to the heartache. In fact, the nightmare is only beginning. Jeff seems to be everywhere, and his vindictiveness knows no bounds... A bewildered Laura finds herself cornered and vulnerable as her life spirals out of control. But the seeds of Laura's present dilemma may well be rooted in her past, a past she knows little about. And that lack of knowledge could lead to her downfall, or even her death. What can Jeff possibly know about her past? And how can Laura fight back, when she doesn't even know what she's fighting for - or why?
"Dark, gritty and addictive" - RTÉ Guide
Grizzly bears still need protecting, US court rules
Conservationists welcomed a US appeals court ruling that grizzly bears still need protecting, after federal authorities sought to have them taken off an endangered species list. The Ninth Circuit Court ruled that the US Fish and Wildlife Service cannot take away Endangered Species Act protection from grizzlies in the Greater Yellowstone region of the Rocky Mountains. Specifically, it said the disappearance of whitebark pine, a crucial food source for grizzlies, potentially threatens the long-term survival of the bears, known as "ursus horribilis" in Latin, reports said. "This case involves one of the American West’s most iconic wild animals in one of its most iconic landscapes," wrote Richard Tallman, a member of the three-judge panel which returned the verdict. "Based on the evidence of a relationship between reduced whitebark pine seed availability, increased grizzly mortality and reduced grizzly reproduction, it is logical to conclude that an overall decline in the region’s whitebark pine population would have a negative effect on its grizzly bear population." The former Seattle lawyer was cited by the Seattle Post-Intelligencer newspaper as saying: "Now that this threat has emerged, the Service cannot take a full-speed-ahead, damn the torpedoes approach to de-listing." Mike Clark, executive director of conservation group the Greater Yellowstone Coalition, hailed the verdict. "We appreciate the strong language of the 9th Circuit Court saying that USFWS must further study the demise of the whitebark pine and its impact upon grizzlies before it can delist the Yellowstone griz," he said. "Secondly, we look forward to working with the feds and state officials on plans that ultimately will delist the griz when it is appropriate. But the court has clearly ruled that such a time is not yet upon us."
Grizzlies used to range widely across the Rocky Mountains and the Great Plains, but hunting drastically reduced their numbers. Today they are found only in scattered locations, mainly national parks including Yellowstone, which covers parts of the US states of Montana, Idaho and Wyoming. They can weigh up to 1,500 pounds (680 kilograms) and sport large shoulder humps. Despite their size, they can run up to 35 miles (55 kilometers) per hour, according to the US Fish and Wildlife Service.
Citation: Grizzly bears still need protecting, US court rules (2011, November 23) retrieved 17 July 2019 from https://phys.org/news/2011-11-grizzly.html
Smart vests have construction workers' safety at heart by RMIT University RMIT researcher Ruwini Edirisinghe hopes her innovative smart vest will help reduce fatalities from heat stroke among construction workers. Credit: RMIT University Heat stress is a growing safety concern in the building industry and now an innovative smart vest has been developed to monitor the health of construction workers in real time. Developed at RMIT University in Melbourne, Australia, the vest uses sensors to measure a worker's body temperature and heart rate and sends the data wirelessly to a smartphone app, which instantly alerts users to any anomalies. The innovation comes amid concern at the growing number of heat-related accidents on construction sites. And it follows a NASA climate report warning that temperatures over the past decade have been the warmest in more than a century. Vice-Chancellor's Research Fellow in RMIT's School of Property, Construction and Project Management, Dr Ruwini Edirisinghe has been working on the smart vest concept for more than a year. She devised her heat stress vest as part of her research into improving worker safety. "Heat related illness is of serious concern in the construction industry, and can lead to fatalities," Edirisinghe said. "It can cause heat stroke and damage to body organs and the nervous system resulting in permanent disability or even death. "A big part of the problem is some workers don't recognise the early warning signs. This technological solution will hopefully change that." National and international regulatory bodies such as SafeWork in Australia, The Occupational Safety and Health Administration (OSH) in the USA and Health and Safety Executive (HSE) in the UK are increasingly recognising heat stress hazards in the construction industry.
Workers in building and construction are at higher risk of death or injury than those in many other occupations, with figures from SafeWork Australia showing the industry accounted for 12 per cent of the nation's work-related fatalities in 2013-14. But other workers are also exposed to hot conditions on the job, including bakers, fire-fighters, welders, miners, boiler room workers, chefs, farmers, gardeners and foundry operators. The signs and symptoms of heat illness can include feeling sick, nauseous, dizzy or weak. Victims can also become clumsy, collapse, suffer convulsions and die. "Globally, the construction industry is one of the lowest performing industries in terms of its safety record," said Edirisinghe, who has a background in Information and Communications Technology (ICT) and smart technologies in construction research. "Construction workers in extreme temperatures and humid environments, confined spaces and near radiant heat sources are vulnerable to risks." Edirisinghe said her smart vest kit takes the guesswork out of heat-related workplace safety by alerting construction workers or their supervisors before such problems arise. Data from the vest can be sent direct to a smartphone app via Bluetooth. Edirisinghe's project is believed to be the first of its type in the construction industry in the world. "While there are researchers working on anti-heat stress smart T-shirts using advanced fabrics, these have no sensors embedded so are unable to monitor or provide instant health data," she said. Edirisinghe has plans to extend the smart vest system to include smart glasses, enabling wearers to "see" warnings about the state of their own health and wellbeing projected right before their eyes. 
Protecting workers in extreme heat Provided by RMIT University Citation: Smart vests have construction workers' safety at heart (2016, March 16) retrieved 17 July 2019 from https://phys.org/news/2016-03-smart-vests-workers-safety-heart.html
The Rustlers of West Fork A Hopalong Cassidy Novel L'Amour, Louis In this first of four classic frontier novels, Louis L'Amour adds his own special brand to the life and adventures of one of America's favorite fictional cowboys, Hopalong Cassidy. In The Rustlers of West Fork, the quick-thinking, fast-shooting cowpuncher heads west to deliver a fortune in bank notes to his old friend, Dick Jordan. When he arrives at the Circle J, he discovers that the rancher and his daughter, Pam, are being held prisoner by a desperate band of outlaws led by the ruthless Avery Sparr and his partner Arnold Soper. Even if Hopalong Cassidy can free Jordan and Pam, he will have to lead them across rough and untamed Apache country, stalked by the outlaws who have vowed to gun him down. But Hopalong is no stranger to trouble, and before his guns or his temper cool, he's determined to round up Sparr and his gang and bring the outlaws to justice ... dead or alive! This classic tale of pursuit and survival is vintage L'Amour and adds new life and luster to the legend of Hopalong Cassidy. From the Paperback edition. Publisher: New York ; Toronto : Bantam Books, [1991], c1979 Branch Call Number: FIC L'Amou 3204 01 Characteristics: 259 p Alternative Title: Hopalong Cassidy and the rustlers of West Fork Burns, Tex
Wilson-Frame's 20 Points Lead Pitt Past VMI, 94-55 PITTSBURGH (AP) – When VMI head coach Dan Earl was breaking down the film of Pitt's first game of the season, he saw a team that liked to get to the basket. Pitt had knocked off Youngstown State in its opener with an offense that was fueled by guards driving to the rim. Friday night, the Keydets set out to take that away, but it didn't take long for the Panthers to adjust. With VMI hanging back, Pitt senior Jared Wilson-Frame scored 20 points in his season debut, and the Panthers drained 13 3-pointers in a 94-55 victory. From the very beginning of the game, Wilson-Frame was a weapon from the outside. After the Keydets (1-1) scored the first basket, Wilson-Frame hit a trio of 3-pointers as Pitt (2-0) raced out on a 19-3 run. "We talk about making that first hit; punching someone in the mouth first," Wilson-Frame said. "It's really important to us." He also hit a leaning, long-range 3-pointer just before the first-half buzzer to send the Panthers into the break with an 18-point lead. Wilson-Frame finished shooting 5 of 9 from 3-point range and 6 of 10 overall. It was the fifth time he scored 20 points or more in his Pitt career. "He hit some deep shots," Earl said. "We were attempting to pack it in a little bit, but know where he was. … They've become more well-rounded." Pitt played with a full complement after Wilson-Frame was suspended for the season opener. He worked into the rotation off the bench, with freshman Au'Diese Toney starting for the second straight game. Wilson-Frame played 23 minutes, while Toney played 22. "Jared is a really important guy for us," Pitt coach Jeff Capel said. "Not just his scoring, but his leadership and his confidence and his versatility." VMI continued its early season proclivity to look for the long ball. The Keydets attempted 31 3-pointers in its season opener and launched another 25.
They've done so despite the absence of starting guards Austin Vereen (wrist) and Jordan Ratliffe (ACL), who will not return this season. FAST LEARNERS Wilson-Frame was joined on the outside by freshmen Xavier Johnson, who scored 14 points and had 10 assists for his first career double-double, and Toney, who also had 14 to go with a team-high eight rebounds. "(Johnson) is becoming a pretty good player," Capel said. "He has a strong desire to improve. You love being around people like that. Normally, when you have that, positive things happen." VMI sophomore Bubba Parham continued his offensive outburst. The reigning Southern Conference freshman of the year scored 16 after putting up 23 in VMI's season opener. Parham will have to shoulder some of the load outside until Vereen can return. "Other teams look at a stat sheet and go, 'OK, that kid can score a little bit and do some things, let's try to take him out,'" Earl said. "That's another sign of growth." Pitt will continue its five-game homestand against Troy on Monday. VMI will host Division III Goucher College on Sunday. (© Copyright 2018 The Associated Press. All Rights Reserved. This material may not be published, broadcast, rewritten or redistributed.)
Cee-Lo Green - "I Want You" I still can't even get "Fuck You" out of my head and Cee-Lo Green is already dropping another bomb, albeit a quieter, sweeter bomb called "I Want You". While only a decent radio rip of the track exists (as far as I know), we are still getting a generally great idea of what Cee-Lo's Lady Killer album is going to sound like. UPDATE: Seems like a radio-rip of "I Want You" has been around since April. Well, it's still new to me, and I'm willing to bet it's new to quite a few of you too. Listen to "I Want You" below: Cee-Lo Green's Lady Killer is due out in early December. By Yvette Travillian
Dunn Lumber / 2012-Present Agency of record; identity, design, content, campaign, out-of-home, environment, strategy, digital The True Win-Win Is Possible In an era of big-box and discount stores, Dunn Lumber stands apart, having built a legacy on trust, quality, and values like hard work and honesty. In 2013, as a new generation of consumers became their customers, Dunn Lumber saw the opportunity to communicate their values and vision as differentiators from their competitors. They asked Belief Agency to help them determine how best to tell their story—their legacy—to this new audience. Dunn Lumber is a generational company building trust that lasts generations. In a time when only three percent of family-owned businesses survive past the third generation, Dunn Lumber is currently seeing its fifth generation of Dunn family members rise into leadership within the company. Statistically, the company was in trouble; it’s rare to have a company like Dunn Lumber still in business today. Somehow, they’ve been able to survive—why? According to Dunn Lumber’s CEO/president (a fourth-generation Dunn family member) Mike Dunn, “Trust is hard to gain and easy to lose.” So, Dunn Lumber focuses on three areas to ensure trust is being built: providing customers with expert advice, quality materials, and steady service. Legacy of Trust On day one, we worked to solidify the company’s beliefs and values and figure out how to communicate them clearly to the team. Over the past decade as the company grew, fewer employees had a direct relationship with a member of the Dunn family—and with it, a firsthand understanding of how the family’s values and traditions became so interwoven with the company’s. It was important to communicate this culture to the next generation of employees, so Dunn Lumber asked Belief Agency to create an internal film that communicated its core values to the staff. 
Dunn Lumber believes in the concept of the true win-win; that the win-win is not only possible in some interactions between a company and its customers, but in every interaction. At Dunn Lumber, a true win-win is achieved when both the employee and customer leave an interaction feeling a sense of fulfillment from the transaction. The employee feels they were equipped to do their job well, and the customer feels they received the help they needed and the quality of service they deserved. We developed a film that communicated the culture to employees. It made such an impact on the staff that Dunn Lumber decided to make it public-facing. Today, it's known as the "Legacy of Trust" film. After completing the film we had a better understanding of the Dunn Lumber brand, which led us to suggest a refresh of the brand's visual identity. Our task was to create visual branding that would unify the Dunn Lumber messaging. Our goal was to make the brand more consistent, more digitally versatile, and more friendly. "Danny Dunn" was the existing Dunn Lumber mascot, who had appeared alongside the company in various iterations since the 1940s. The existing version appeared sad or angry, which didn't represent the friendliness of actual people we were interacting with day-to-day. Dunn Lumber wanted to make Danny feel more approachable without erasing his history and starting from scratch. What would happen if companies realized the best way to sell pots and pans is by helping their clients become better cooks? Or that the best way to sell gardening equipment is by teaching your customers how to grow their own vegetables? When you seek the good of your customers, everyone wins. We launched a three-pronged approach to create value and awareness. First, we focused on general brand awareness (the spring campaign).
Then, in 2014, we launched a comprehensive content marketing strategy with the development of two new blogs as subsidiaries of the Dunn Lumber brand: Dunn DIY and Dunn Solutions, both offering consistent, valuable, and useful branded content to the two primary audience segments Dunn Lumber serves (DIYers and professional contractors). For years, Dunn Lumber has provided expert advice in their stores. Dunn Lumber employees have a combined 3,500 years of home-improvement experience. The blogs now gave that expertise an online presence. Each blog publishes weekly video and written content, along with monthly newsletters, a social media presence, and regular events throughout the year. (For more on Dunn Lumber’s events, read about the Northwest Flower & Garden Festival.) Dunn DIY The 21st century DIY movement is filling hardware store aisles with women. Dunn DIY was created to target that female audience, who initiate 80 percent of all home-improvement decisions—a market expected to reach over $377 billion in 2018. Dunn DIY called for a female host, so we partnered with Kirsten Dunn, a fifth-generation Dunn Lumber employee. Together, we created original content that followed trends, established a series of tutorials to teach people how to use power tools, and integrated tutorials for both homeowners and renters. To affect a slightly more feminine tone, we softened the colors and used a thinner typeface. Dunn Solutions The professional contractor market has been Dunn Lumber’s lifeblood for many years. To serve them well, we created Dunn Solutions and positioned it as an online space for Dunn Lumber staff and professional contributors to share their collective knowledge and expertise, access detailed planning resources, and interact with other seasoned professionals. We partnered on the launch with a well-known Seattle-area contractor who has more than 30 years in the historic home renovation industry. 
Spring Campaign Every spring we run an integrated campaign that serves coordinated but unique content across social media, YouTube, television, radio, out-of-home, and print media. Prior to running any large paid ad spots, we ran a 30-day series of giveaways through Dunn Lumber and Dunn DIY's Facebook pages. A series of commercials were hosted by Dunn Lumber CEO/president Mike Dunn. Mike encouraged viewers to email him personally, providing his email address and later responding to every piece of mail. Putting the CEO in a recognizable position humanized Dunn Lumber in a way that made them stand out in their industry. Strategically, these ads were shared across a broad range of channels in a short period of time. On Facebook we run the #30DaysofDecking where we post content and host giveaways. The campaign covers the primary Dunn Lumber message: a promise, not a guarantee; expert advice; quality materials; flat-fee delivery. When we met the Dunn Lumber team they were already doing all those things, they just weren't talking about it. We just made it known. Over the next two years, we embarked on more than 300 projects for Dunn Lumber, including 250-foot murals on each of three locations, redesigned store entryways, trade show booths, a campaign celebrating their 110-year anniversary (summer barbecues, collateral), interior signage, exterior signage, in-store endcaps, brochures, vehicle wraps, TV commercials, radio ads, integrated YouTube ads, a new website launching mid-2018, brand collateral, packaging, in-store audio, merchandise, and collaborations with other Seattle-based brands such as Swansons Nursery.
Ketterer, Hererra Shine as Locomotive FC Earns Crucial Point It won't count as a goal for Chapa Herrera, but it will count as a goal. It wasn't a clean sheet for Logan Ketterer, but you could say he found some bleach. It wasn't a win for El Paso Locomotive FC, but there was a point. The kind you earn and the kind you make. On the tight, elastic but not-so-fantastic FieldTurf of Taft Stadium in Oklahoma City, Mark Lowry and Locomotive FC began the process of turning their ship back in the right direction with a 1-1 draw against opportunistic Energy FC. There is no describing how hard that can be in the sweaty, Swiss-cheese-roster world of American second division soccer, especially after nearly being set adrift by a raft of injuries on the back line. But with no wind in its sails, El Paso (7-4-7, 28 points) put oars in the water and rowed itself out of a deficit to salvage a point. Lowry lauded his team's resolve. "The first half, we were magnificent," said the Englishman. "Completely controlled the game and a dubious call from the referee gifts them a PK goal. Going from dominating to being a goal down was a great test of character for the guys and they came through with flying colors." After a deflection caromed off Locomotive centerback Drew Beckie in the area referee Thomas Snyder whistled the Canadian international for handball, though it appeared Beckie didn't know much about it as the ball came his way from behind. Omar Salgado's former Las Vegas Lights FC teammate, Rafael Garcia, converted his spot kick in the 29th with some help from the left post. Summer soccer swelter and thin rosters aside, there was that other oft-endured pothole in second division play. The pitch. Speaking of pitches, isn't it amazing that the fields in the best shape for Locomotive FC's last few matches are not soccer-specific or multi-purpose football stadia, but baseball diamonds? Let's say Taft Stadium's turf was...responsive. 
Normally home to Oklahoma City public schools for high school football, the surface might shave a tenth of a second off someone's 40-yard dash time. Good if you're trying to draw interest from the college football behemoth 30 miles to the south, bad if you're trying to trap a pass and you can't use your hands. "The field conditions were incredibly challenging for us in terms of passing the ball with any good tempo," said Lowry. "The turf made it almost impossible to move the ball quickly but the players persevered, stayed true to our style of play and came away with a well deserved point." And now, a 10-day break with fifth place in the Western Conference still in the grasp. The draw also keeps Oklahoma City (6-5-8) in the rearview, 8th place with 26 points. Concerns remain. After being named USL Championship Player of the Month for May and vaulting to the top of the Golden Boot race with 10 goals, Jerome Kiesewetter has only gotten as close as the woodwork this last month. As he hit the sycamore in San Antonio, Kiesewetter found the oak in OKC (surely a country music song in the making), knocking his shot off the right post in the 50th after getting by Energy keeper Cody Laurendi. Kiesewetter needs service he has not received. Lowry has worked his lineups to get the former US international more involved, but over the last few weeks the final third has looked more like the puzzle of March and April for Locomotive FC than the ripe fields of May. Frankly, not having Mechack Jérôme – much less a settled, healthy backline – has a lot to do with that. Credit James Kiffe and Omar Salgado. In the absence of the Haitian international, likely out for the season, El Paso's wings have done their utmost to make up for the missing line-busting passes Jérôme provided with hard work on the flanks. Might there be another point of concern regarding Josué Aarón Gómez? 
The midfielder on loan from FC Juárez was Lowry's first substitution in the 62nd minute but was subbed out for Derek Gebhard in the 79th and promptly walked off the field. No word on if Gómez suffered an injury that needed further treatment or if there was something else requiring his exit, but exit he did. Lowry brought up his own point of contention after the game. “At the moment we feel like we aren’t getting the calls we deserve,” he said. "Four penalties called against us in three games and I’m struggling to see how any of them are true pens. It’s frustrating but it’s making us stronger." A matter of perspective, perhaps. But overall, El Paso looked more assured in Oklahoma City than it did in its 0-0 draw in San Antonio because it had to dig itself out of a hole. And, yes, if you're counting along with Lowry, there was a second penalty called Saturday night, meaning... Logan Ketterer continues to prove himself as the anchor of El Paso's defense. The Wisconsin native made two immense saves, both on former Jamaican international Deshorn Brown. In the 54th minute Brown blasted a shot from just left of the spot, but Ketterer slid in and got his left hand up in time to deflect the ball straight down into Taft's rubber pellets and back into his hands. The second spot kick came in the 61st minute as Kiffe was whistled for fouling Energy midfielder John Brown. But Ketterer was up to the task, diving to his left to stonewall Brown's roller in the 61st, making it three straight matches with a penalty save. Though Ketterer's bigger stop came seven minutes earlier, there is no arguing the bigger moment was the saved penalty, giving Locomotive some steam. El Paso's goal came just over a minute later. Andrew Fox's opportunistic long ball split two defenders and found Chapa Herrera streaking out of midfield. The El Paso native chested the ball forward into the area then got his right leg up to knock it toward the net off the bounce. 
Because it deflected off the chest of OKC defender Mekeil Williams it goes onto the score sheet as an own goal. But the operative word for Locomotive would be "goal". The visitors had their equalizer and managed a few more opportunities while holding Energy FC at bay. Far from a full-throated roar, but a sigh of relief will do. For Lowry, it goes even deeper. "It's frustrating but it's making us stronger," he said. "We certainly embrace being from El Paso, and we have no problem standing up and taking on whatever, or whoever, is in front of us. It's us against the world, and we are not fazed by that one bit." Well, then.
Introducing Kubernetes API Version v1beta3 We've been hard at work on cleaning up the API over the past several months (see https://github.com/GoogleCloudPlatform/kubernetes/issues/1519 for details). The result is v1beta3, which is considered to be the release candidate for the v1 API. We would like you to move to this new API version as soon as possible. v1beta1 and v1beta2 are deprecated, and will be removed by the end of June, shortly after we introduce the v1 API. As of the latest release, v0.15.0, v1beta3 is the primary, default API. We have changed the default kubectl and client API versions as well as the default storage version (which means objects persisted in etcd will be converted from v1beta1 to v1beta3 as they are rewritten). You can take a look at v1beta3 examples such as:

- https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook/v1beta3
- https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/walkthrough/v1beta3
- https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/update-demo/v1beta3

To aid the transition, we've also created a conversion tool and put together a list of the most important API changes:

- The resource id is now called name.
- name, labels, annotations, and other metadata are now nested in a map called metadata.
- desiredState is now called spec, and currentState is now called status.
- /minions has been moved to /nodes, and the resource has kind Node.
- The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: /api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}
- The names of all resource collections are now lower cased - instead of replicationControllers, use replicationcontrollers.
- To watch for changes to a resource, open an HTTP or Websocket connection to the collection URL and provide the ?watch=true URL parameter along with the desired resourceVersion parameter to watch from.
- The container entrypoint has been renamed to command, and command has been renamed to args.
- Container, volume, and node resources are expressed as nested maps (e.g., resources{cpu:1}) rather than as individual fields, and resource values support scaling suffixes rather than fixed scales (e.g., milli-cores).
- Restart policy is represented simply as a string (e.g., "Always") rather than as a nested map ("always{}").
- The volume source is inlined into volume rather than nested.
- Host volumes have been renamed from hostDir to hostPath to better reflect that they can be files or directories.

And the most recently generated Swagger specification of the API is here: http://kubernetes.io/third_party/swagger-ui/#!/v1beta3 More details about our approach to API versioning and the transition can be found here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md Another change we discovered is that with the change to the default API version in kubectl, commands that use "-o template" will break unless you specify "--api-version=v1beta1" or update to v1beta3 syntax. An example of such a change can be seen here: https://github.com/GoogleCloudPlatform/kubernetes/pull/6377/files If you use "-o template", I recommend always explicitly specifying the API version rather than relying upon the default. We may add this setting to kubeconfig in the future. Let us know if you have any questions. As always, we're available on IRC (#google-containers) and github issues.
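The renames are mechanical enough to sketch in a few lines. The snippet below is a rough illustration only, not the official conversion tool (which also handles defaulting, status, containers, volumes, and many more fields); the `to_v1beta3` helper and the sample pod object are hypothetical, included just to show how id, labels, and desiredState map onto metadata and spec:

```python
# Illustrative sketch of the v1beta1 -> v1beta3 field restructuring
# described above. NOT the official conversion tool; it only covers a
# minimal pod-like object.

def to_v1beta3(old):
    """Map a minimal v1beta1-style dict onto the v1beta3 layout."""
    return {
        "apiVersion": "v1beta3",
        "kind": old.get("kind", "Pod"),
        # id -> metadata.name; labels and annotations nest under metadata
        "metadata": {
            "name": old["id"],
            "labels": old.get("labels", {}),
            "annotations": old.get("annotations", {}),
        },
        # desiredState -> spec (currentState -> status is filled in by
        # the server, so it is omitted here)
        "spec": old.get("desiredState", {}),
    }

old_pod = {
    "id": "redis-master",
    "kind": "Pod",
    "apiVersion": "v1beta1",
    "labels": {"name": "redis-master"},
    "desiredState": {},
}

new_pod = to_v1beta3(old_pod)
print(new_pod["metadata"]["name"])  # redis-master
```

The same shape applies to the other renames: anything that used to be a top-level identity field moves under metadata, while the desired configuration moves under spec.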
Robot Operating System (ROS) - The Complete Reference (Volume 2) Description: ROS... Studies in Computational Intelligence 707 Anis Koubaa Editor Robot Operating System (ROS) The Complete Reference (Volume 2) Studies in Computational Intelligence Volume 707 Series editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland e-mail: [email protected] About this Series The series "Studies in Computational Intelligence" (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution, which enable both wide and rapid dissemination of research output.
More information about this series at http://www.springer.com/series/7092 Robot Operating System (ROS) The Complete Reference (Volume 2) Special focus on Unmanned Aerial Vehicles (UAVs) with ROS Editor Anis Koubaa Prince Sultan University Riyadh Saudi Arabia and CISTER Research Unit Porto Portugal and Gaitech Robotics Hong Kong China ISSN 1860-949X ISSN 1860-9503 (electronic) Studies in Computational Intelligence ISBN 978-3-319-54926-2 ISBN 978-3-319-54927-9 (eBook) DOI 10.1007/978-3-319-54927-9 Library of Congress Control Number: 2017933861 © Springer International Publishing AG 2017 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
Printed on acid-free paper. This Springer imprint is published by Springer Nature. The registered company is Springer International Publishing AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

The Editor would like to thank the Robotics and Internet of Things (RIoT) Unit at Center of Excellence of Prince Sultan University for their support to this work. Furthermore, the Editor thanks Gaitech Robotics in China for their support.

Acknowledgements to Reviewers

The Editor would like to thank the following reviewers for their great contributions in the review process of the book by providing a quality feedback to authors.

Reviewers: Anis Koubâa, Michael Carroll, Bence Magyar, Maram Alajlan, Marc Morenza-Cinos, Andre Oliveira, Marco Wehrmeister, Walter Fetter Lages, Péter Fankhauser, Christoph Rösmann, Francesco Rovida, Christopher-Eyk Hrabia, Guilherme Sousa Bastos, Andreas Bihlmaier, Juan Jimeno, Timo Röhling, Zavier Lee, Myrel Alsayegh, Junhao Xiao, Huimin Lu, Alfredo Soto, Dinesh Madusanke.

Affiliations: Prince Sultan University, Saudi Arabia/CISTER Research Unit, Portugal; CATEC (Center for Advanced Aerospace Technologies); Robotic Paradigm Systems; PAL Robotics; Al-Imam Mohamed bin Saud University; UPF; UTFPR Federal University of Technology – Parana; Universidade Federal do Rio Grande do Sul; ETH Zurich; Institute of Control Theory and Systems Engineering, TU Dortmund University; Aalborg University of Copenhagen; Technische Universität/DAI Labor; UNIFEI; Karlsruhe Institute of Technologie (KIT); linorobot.org; Fraunhofer FKIE; Henan University of Science and Technology; RST-TU Dortmund; National University of Defense Technology; Freescale Semiconductors; University of Moratuwa.
UAVs Model Predictive Control for Trajectory Tracking of Unmanned Aerial Vehicles Using Robot Operating System. . . . . . . . . Mina Kamel, Thomas Stastny, Kostas Alexis and Roland Siegwart Designing Fuzzy Logic Controllers for ROS-Based Multirotors . . . . . . . Emanoel Koslosky, André Schneider de Oliveira, Marco Aurélio Wehrmeister and João Alberto Fabro Flying Multiple UAVs Using ROS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wolfgang Hönig and Nora Ayanian Control of Mobile Robots SkiROS—A Skill-Based Robot Control Platform on Top of ROS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 Francesco Rovida, Matthew Crosby, Dirk Holz, Athanasios S. Polydoros, Bjarne Großmann, Ronald P.A. Petrick and Volker Krüger Control of Mobile Robots Using ActionLib . . . . . . . . . . . . . . . . . . . . . . . . 161 Higor Barbosa Santos, Marco Antônio Simões Teixeira, André Schneider de Oliveira, Lúcia Valéria Ramos de Arruda and Flávio Neves, Jr. Parametric Identification of the Dynamics of Mobile Robots and Its Application to the Tuning of Controllers in ROS . . . . . . . . . . . . 191 Walter Fetter Lages Online Trajectory Planning in ROS Under Kinodynamic Constraints with Timed-Elastic-Bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 Christoph Rösmann, Frank Hoffmann and Torsten Bertram Integration of ROS with Internet and Distributed Systems ROSLink: Bridging ROS with the Internet-of-Things for Cloud Robotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265 Anis Koubaa, Maram Alajlan and Basit Qureshi ROS and Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 Ruffin White and Henrik Christensen A ROS Package for Dynamic Bandwidth Management in Multi-robot Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
309 Ricardo Emerson Julio and Guilherme Sousa Bastos Part IV Service Robots and Fields Experimental An Autonomous Companion UAV for the SpaceBot Cup Competition 2015 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Christopher-Eyk Hrabia, Martin Berger, Axel Hessler, Stephan Wypler, Jan Brehmer, Simon Matern and Sahin Albayrak Development of an RFID Inventory Robot (AdvanRobot) . . . . . . . . . . . . 387 Marc Morenza-Cinos, Victor Casamayor-Pujol, Jordi Soler-Busquets, José Luis Sanz, Roberto Guzmán and Rafael Pous Robotnik—Professional Service Robotics Applications with ROS (2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419 Roberto Guzmán, Román Navarro, Miquel Cantero and Jorge Ariño Using ROS in Multi-robot Systems: Experiences and Lessons Learned from Real-World Field Tests . . . . . . . . . . . . . . . . . 449 Mario Garzón, João Valente, Juan Jesús Roldán, David Garzón-Ramos, Jorge de León, Antonio Barrientos and Jaime del Cerro Part V Perception and Sensing Autonomous Navigation in a Warehouse with a Cognitive Micro Aerial Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . 487 Marius Beul, Nicola Krombach, Matthias Nieuwenhuisen, David Droeschel and Sven Behnke Robots Perception Through 3D Point Cloud Sensors . . . . . . . . . . . . . . . . 525 Marco Antonio Simões Teixeira, Higor Barbosa Santos, André Schneider de Oliveira, Lucia Valeria Arruda and Flavio Neves, Jr. ROS Simulation Frameworks Environment for the Dynamic Simulation of ROS-Based UAVs . . . . . . . 565 Alvaro Rogério Cantieri, André Schneider de Oliveira, Marco Aurélio Wehrmeister, João Alberto Fabro and Marlon de Oliveira Vaz Building Software System and Simulation Environment for RoboCup MSL Soccer Robots Based on ROS and Gazebo . . . . . . . . . . . . . . . . . . . 597 Junhao Xiao, Dan Xiong, Weijia Yao, Qinghua Yu, Huimin Lu and Zhiqiang Zheng VIKI—More Than a GUI for ROS . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . 633 Robin Hoogervorst, Cees Trouwborst, Alex Kamphuis and Matteo Fumagalli Editor and Contributors About the Editor Anis Koubaa is a full professor in Computer Science at Prince Sultan University and research associate in CISTER Research Unit, ISEP-IPP, Portugal, add Senior Research Consultant with Gaitech Robotics, China. He becomes a Senior Fellow of the Higher Education Academy (SFHEA) in 2015. He received his B.Sc. in Telecommunications Engineering from Higher School of Telecommunications (Tunisia), and M.Sc. degrees in Computer Science from University Henri Poincare (France), in 2000 and 2001, respectively, and the Ph.D. degree in Computer Science from the National Polytechnic Institute of Lorraine (France), in 2004. He was a faculty member at Al-Imam University from 2006 to 2012. He has published over 120 refereed journal and conference papers. His research interest covers mobile robots, cloud robotics, robotics software engineering, Internet-of-Things, cloud computing and wireless sensor networks. Dr. Anis received the best research award from Al-Imam University in 2010, and the best paper award of the 19th Euromicro Conference in Real-Time Systems (ECRTS) in 2007. He is the head of the ACM Chapter in Prince Sultan University. His H-Index is 30. 
Contributors

Maram Alajlan, Center of Excellence Robotics and Internet of Things (RIOT) Research Unit, Prince Sultan University, Riyadh, Saudi Arabia; King Saud University, Riyadh, Saudi Arabia
Sahin Albayrak, DAI-Labor, Technische Universität Berlin, Berlin, Germany
Kostas Alexis, University of Nevada, Reno, NV, USA
Jorge Ariño, Robotnik Automation, SLL, Ciutat de Barcelona, Paterna, Valencia, Spain
Lucia Valeria Arruda, Federal University of Technology—Parana, Curitiba, Brazil
Nora Ayanian, Department of Computer Science, University of Southern California, Los Angeles, CA, USA
Antonio Barrientos, Centro De Automática y Robótica, UPM-CSIC, Madrid, Spain
Guilherme Sousa Bastos, System Engineering and Information Technology Institute—IESTI, Federal University of Itajubá—UNIFEI, Pinheirinho, Itajubá, MG, Brazil
Sven Behnke, Autonomous Intelligent Systems Group, University of Bonn, Bonn, Germany
Martin Berger, DAI-Labor, Technische Universität Berlin, Berlin, Germany
Torsten Bertram, Institute of Control Theory and Systems Engineering, TU Dortmund University, Dortmund, Germany
Marius Beul, Autonomous Intelligent Systems Group, University of Bonn, Bonn, Germany
Jan Brehmer, DAI-Labor, Technische Universität Berlin, Berlin, Germany
Miquel Cantero, Robotnik Automation, SLL, Ciutat de Barcelona, Paterna, Valencia, Spain
Alvaro Rogério Cantieri, Federal Institute of Parana, Curitiba, Brazil
Victor Casamayor-Pujol, Universtitat Pompeu Fabra, Barcelona, Spain
Henrik Christensen, Contextual Robotics Institute, University of California, San Diego, CA, USA
Matthew Crosby, Heriot-Watt University, Edinburgh, UK
Lúcia Valéria Ramos de Arruda, Federal University of Technology—Parana, Curitiba, Brazil
Jorge de León, Centro De Automática y Robótica, UPM-CSIC, Madrid, Spain
André Schneider de Oliveira, Advanced Laboratory of Embedded Systems and Robotics (LASER), Federal University of Technology—Parana (UTFPR), Curitiba, Brazil
Marlon de Oliveira Vaz, Federal Institute of Parana, Curitiba, Brazil
Jaime del Cerro, Centro De Automática y Robótica, UPM-CSIC, Madrid, Spain
David Droeschel, Autonomous Intelligent Systems Group, University of Bonn, Bonn, Germany
João Alberto Fabro, Advanced Laboratory of Embedded Systems and Robotics (LASER), Federal University of Technology—Parana (UTFPR), Curitiba, Brazil
Matteo Fumagalli, Aalborg University, Copenhagen, Denmark
David Garzón-Ramos, Centro De Automática y Robótica, UPM-CSIC, Madrid, Spain
Mario Garzón, Centro De Automática y Robótica, UPM-CSIC, Madrid, Spain
Bjarne Großmann, Aalborg University Copenhagen, Copenhagen, Denmark
Roberto Guzmán, Robotnik Automation S.L.L., Paterna, Valencia, Spain
Axel Hessler, DAI-Labor, Technische Universität Berlin, Berlin, Germany
Frank Hoffmann, Institute of Control Theory and Systems Engineering, TU Dortmund University, Dortmund, Germany
Dirk Holz, Bonn University, Bonn, Germany
Robin Hoogervorst, University of Twente, Enschede, Netherlands
Christopher-Eyk Hrabia, DAI-Labor, Technische Universität Berlin, Berlin, Germany
Wolfgang Hönig, Department of Computer Science, University of Southern California, Los Angeles, CA, USA
Ricardo Emerson Julio, System Engineering and Information Technology Institute—IESTI, Federal University of Itajubá—UNIFEI, Pinheirinho, Itajubá, MG, Brazil
Mina Kamel, Autonomous System Lab, ETH Zurich, Zurich, Switzerland
Alex Kamphuis, University of Twente, Enschede, Netherlands
Emanoel Koslosky, Advanced Laboratory of Embedded Systems and Robotics (LASER), Federal University of Technology—Parana (UTFPR), Curitiba, Brazil
Anis Koubaa, Center of Excellence Robotics and Internet of Things (RIOT) Research Unit, Prince Sultan University, Riyadh, Saudi Arabia; Gaitech Robotics, Hong Kong, China; CISTER/INESC-TEC, ISEP, Polytechnic Institute of Porto, Porto, Portugal
Nicola Krombach, Autonomous Intelligent Systems Group, University of Bonn, Bonn, Germany
Volker Krüger, Aalborg University Copenhagen, Copenhagen, Denmark
Walter Fetter Lages, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
Huimin Lu, College of Mechatronics and Automation, National University of Defense Technology, Changsha, China
Simon Matern, Technische Universität Berlin, Berlin, Germany
Marc Morenza-Cinos, Universtitat Pompeu Fabra, Barcelona, Spain
Román Navarro, Robotnik Automation, SLL, Ciutat de Barcelona, Paterna, Valencia, Spain
Flávio Neves, Jr., Federal University of Technology—Parana, Curitiba, Brazil
Matthias Nieuwenhuisen, Autonomous Intelligent Systems Group, University of Bonn, Bonn, Germany
Ronald P.A. Petrick, Heriot-Watt University, Edinburgh, UK
Athanasios S. Polydoros, Aalborg University Copenhagen, Copenhagen, Denmark
Rafael Pous, Universtitat Pompeu Fabra, Barcelona, Spain
Basit Qureshi, Prince Sultan University, Riyadh, Saudi Arabia
Juan Jesús Roldán, Centro De Automática y Robótica, UPM-CSIC, Madrid, Spain
Francesco Rovida, Aalborg University Copenhagen, Copenhagen, Denmark
Christoph Rösmann, Institute of Control Theory and Systems Engineering, TU Dortmund University, Dortmund, Germany
Higor Barbosa Santos, Federal University of Technology—Parana, Curitiba, Brazil
José Luis Sanz, Keonn Technologies S.L., Barcelona, Spain
Roland Siegwart, Autonomous System Lab, ETH Zurich, Zurich, Switzerland
Jordi Soler-Busquets, Universtitat Pompeu Fabra, Barcelona, Spain
Thomas Stastny, Autonomous System Lab, ETH Zurich, Zurich, Switzerland
Marco Antonio Simões Teixeira, Federal University of Technology—Parana, Curitiba, Brazil
Cees Trouwborst, University of Twente, Enschede, Netherlands
João Valente, Centro De Automática y Robótica, UPM-CSIC, Madrid, Spain
Marco Aurélio Wehrmeister, Advanced Laboratory of Embedded Systems and Robotics (LASER), Federal University of Technology—Parana (UTFPR), Curitiba, Brazil
Ruffin White, Contextual Robotics Institute, University of California, San Diego, CA, USA
Stephan Wypler, Technische Universität Berlin, Berlin, Germany
Junhao Xiao, College of Mechatronics and Automation, National University of Defense Technology, Changsha, China
Dan Xiong, College of Mechatronics and Automation, National University of Defense Technology, Changsha, China
Weijia Yao, College of Mechatronics and Automation, National University of Defense Technology, Changsha, China
Qinghua Yu, College of Mechatronics and Automation, National University of Defense Technology, Changsha, China
Zhiqiang Zheng, College of Mechatronics and Automation, National University of Defense Technology, Changsha, China

Model Predictive Control for Trajectory Tracking of Unmanned Aerial Vehicles Using Robot Operating System

Mina Kamel, Thomas Stastny, Kostas Alexis and Roland Siegwart

Abstract In this chapter, strategies for Model Predictive Control (MPC) design and implementation for Unmanned Aerial Vehicles (UAVs) are discussed. The chapter is divided into two main sections. In the first section, modelling, controller design and implementation of MPC for multi-rotor systems are presented. In the second section, we show modelling and controller design techniques for fixed-wing UAVs. System identification techniques are used to derive an estimate of the system model, while state-of-the-art solvers are employed to solve the optimization problem online. By the end of this chapter, the reader should be able to implement an MPC to achieve trajectory tracking for both multi-rotor systems and fixed-wing UAVs.

1 Introduction

Aerial robots have recently gained great attention, as they have many advantages over ground robots in executing inspection, search and rescue, surveillance and goods-delivery tasks. Depending on the task to be executed, a multi-rotor system or a fixed-wing aircraft may be the more suitable choice. For instance, a fixed-wing aircraft is more suitable for surveillance and large-scale mapping tasks thanks to its long endurance and higher speed compared to a multi-rotor system, while for an inspection task that requires flying close to structures to obtain detailed footage, a multi-rotor UAV is more appropriate.

M. Kamel (B) · T. Stastny · R.
Siegwart
Autonomous System Lab, ETH Zurich, Zurich, Switzerland
e-mail: [email protected]
T. Stastny, e-mail: [email protected]
R. Siegwart, e-mail: [email protected]
K. Alexis, University of Nevada, Reno, NV, USA, e-mail: [email protected]

© Springer International Publishing AG 2017. A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_1

Precise trajectory tracking is a demanding feature for aerial robots in general, needed to successfully perform the required tasks, especially when operating in realistic environments where external disturbances may heavily affect flight performance, and when flying in the vicinity of structures. In this chapter, several model predictive control strategies for trajectory tracking are presented for multi-rotor systems as well as for fixed-wing aircraft. The general control structure followed in this chapter is a cascade control approach, in which a reliable, system-specific low-level controller forms the inner loop, and a model-based trajectory tracking controller runs as an outer loop. This approach is motivated by the fact that much critical flight software runs on separate navigation hardware, typically based on micro-controllers such as the Pixhawk PX4 and Navio [1, 2], while high-level tasks run on more powerful (but less reliable) on-board computers. This introduces a separation layer that keeps critical tasks running despite any failure of the more complex high-level computer. By the end of this chapter, the reader should be able to implement and test various Model Predictive Control strategies for aerial robot trajectory tracking and to integrate these controllers into the Robot Operating System (ROS) [3]. Various implementation hints and practical suggestions are provided in this chapter, and we show several experimental results to evaluate the proposed control algorithms on real systems. In Sect.
2 the general theory behind MPC is presented, with a focus on linear MPC and robust linear MPC. In Sect. 3 we present the multi-rotor system model and give hints on model identification approaches. Moreover, linear and robust MPC are presented, and we show how to integrate these controllers into ROS and present experimental validation results. In Sect. 4 we present a Nonlinear MPC approach for lateral-directional position control of fixed-wing aircraft, with implementation hints and validation experiments.

2 Background

2.1 Concepts of Receding Horizon Control

Receding Horizon Control (RHC) corresponds to the historic evolution in control theory that aimed to attack the known challenges of fixed-horizon control. Fixed-horizon optimization computes a sequence of control actions {u_0, u_1, ..., u_{N-1}} over a horizon N and is characterized by two main drawbacks, namely: (a) when an unexpected disturbance (unknown during the control design phase) takes place, or when the model employed for control synthesis behaves differently from the actual system, the controller has no way to account for that over the computed control sequence, and (b) as one approaches the final control steps of the computed fixed horizon, the control law "gives up trying", since there is too little time left in the fixed horizon to achieve a significant objective-function reduction. To address these limitations, RHC proposed the alternative strategy of computing the full control sequence, applying only the first step of it, and then repeating the whole process iteratively (receding-horizon fashion). RHC strategies are in general applicable to nonlinear dynamics of the form (considering that a state update is available):

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u})$$

where the vector field $\mathbf{f} : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$, $\mathbf{x} \in \mathbb{R}^{n \times 1}$ represents the state vector, and $\mathbf{u} \in \mathbb{R}^{m \times 1}$ the input vector.
The general state-feedback-based RHC optimization problem takes the following form:

$$
\begin{aligned}
\min_{z}\ & F(x_{t+N}) + \sum_{k=0}^{N-1} \left( \| x_{t+k} - x_{t+k}^{ref} \|_{\star} + \| u_{t+k} \|_{\star} \right) \\
\text{s.t.}\ & x_{t+k+1} = f(x_{t+k}, u_{t+k}) \\
& u_{t+k} \in U_C, \quad x_{t+k} \in X_C \\
& x_t = x(t)
\end{aligned}
$$

where $z = \{u_t, u_{t+1}, \dots, u_{t+N-1}\}$ are the optimization variables, $\|\cdot\|_{\star}$ denotes some (penalized) metric used for per-stage weighting, $F(x_{t+N})$ represents the terminal state weighting, $x_{t+k}^{ref}$ is the reference signal, the subscript $t+k$ denotes the sample (using a fixed sampling time $T_s$) of a signal $k$ steps ahead of the current time $t$, while $t+k+1$ indicates the next evolution of that sample; $U_C$ represents the set of input constraints, $X_C$ the state constraints, and $x(t)$ is the value of the state vector at the beginning of the current RHC iteration. The solution of this optimization problem leads again to an optimal control sequence $\{u_t, u_{t+1}, \dots, u_{t+N-1}\}$, but only its first step $u_t$ is applied, while the whole process is then repeated iteratively. Within this formulation, the term $F(x_{t+N})$ has a critical role for closed-loop stability. In particular, it forces the system state to take values within a particular set at the end of the prediction horizon. It is relatively easy to prove stability per local iteration using Lyapunov analysis. In its simplest case, this essentially means that, considering the regulation problem ($x_{t+k}^{ref} = 0$ for $k = 0, \dots, N-1$) and a "decrescent" metric $\|\cdot\|_{\star}$, the solution of the above optimization problem makes the system stable at $x_t = 0$, $u_t = 0$, provided that a terminal constraint $x_{t+N} = 0$ is introduced (a simplified illustration is provided in Fig. 1).

Fig. 1 Illustration of the terminal constraint set (Ω)

However, global stability is in general not guaranteed. For that, one has to consider the problem of introducing both a terminal cost and a terminal constraint for the states [4].
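The receding-horizon strategy described above (solve a fixed-horizon problem, apply only the first input, then re-solve) can be sketched for the unconstrained linear-quadratic case. The double-integrator model and the weights below are illustrative assumptions, not values from the chapter:

```python
import numpy as np

# Sketch of the receding-horizon loop: at every step a fixed-horizon
# problem is solved by backward Riccati recursion, only the first input
# is applied, and the whole process repeats.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed discretized dynamics
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])                 # per-stage state weighting
R = np.array([[0.1]])                    # per-stage input weighting
N = 20                                   # prediction horizon

def first_input(x):
    """Solve the fixed-horizon problem backward in time (stages N-1 ... 0)
    and return only the first control action u_t."""
    P = Q.copy()                         # terminal weighting F(x_{t+N})
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x                        # K now belongs to the first stage

x = np.array([1.0, 0.0])
for _ in range(100):                     # the receding-horizon iteration
    u = first_input(x)                   # re-solve at every step ...
    x = A @ x + B @ u                    # ... and apply only the first input
print(np.linalg.norm(x) < 1e-2)
```

Note that the closed loop regulates the state to the origin even though each individual optimization only looks N steps ahead; this is the practical payoff of re-solving at every step.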
However, general constrained optimization problems can be extremely difficult to solve, and simply adding terminal constraints may not be feasible. Note that in many practical cases, the terminal constraint is not enforced during the control design procedure, but rather verified a posteriori (by increasing the prediction horizon if it is not satisfied). Furthermore, one of the most challenging properties of RHC is that of recursive feasibility. Unfortunately, although absolutely recommended from a theoretical standpoint, it is not always possible to construct an RHC that has an a-priori guarantee of recursive feasibility, due to either theoretical or practical implications. In general, an RHC strategy lacks recursive feasibility (and is therefore invalidated) even when it is possible to find a state which is feasible, but where the optimal control action moves the state vector to a point where the RHC optimization problem is infeasible. Although a general feasibility-analysis methodology is very challenging, powerful tools exist for specific cases. In particular, in the case of linear systems, Farkas' Lemma [5] in combination with bilevel programming can be used to search for problematic initial states which lack recursive feasibility, thus invalidating an RHC strategy.

2.2 Linear Model Predictive Control

In this subsection we briefly present the theory behind MPC for linear systems. We formulate the optimal control problem for linear systems with linear constraints on the input and state variables. Moreover, we discuss the control input properties, stability and feasibility in the case of linear and quadratic cost functions. To achieve offset-free tracking under model mismatch, we adopt the approach described in [6], where the system model is augmented with an additional disturbance state d(t) to capture the model mismatch. An observer is employed to estimate the disturbances in steady state. The observer design and the disturbance model will be briefly discussed in this subsection.

$$
\begin{aligned}
\min_{U}\ & J_0\!\left(x_0, U, X^{ref}, U^{ref}\right) \\
\text{s.t.}\ & x_{k+1} = A x_k + B u_k + B_d d_k, \quad d_{k+1} = d_k, \quad k = 0, \dots, N-1 \\
& x_k \in X_C, \quad u_k \in U_C, \quad x_N \in X_{C_N} \\
& x_0 = x(t_0), \quad d_0 = d(t_0)
\end{aligned} \qquad (3)
$$
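To see the structure of the linear MPC problem concretely, the sketch below (an illustrative toy, not the chapter's implementation) drops the inequality constraints and sets d_k = 0, so the problem condenses into an unconstrained quadratic program in the stacked input vector U, solvable from the normal equations:

```python
import numpy as np

# Condensed (batch) form of an unconstrained linear MPC problem.
# Model matrices and weights are assumed toy values.
nx, nu, N = 2, 1, 10
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Qx = np.diag([5.0, 1.0])                 # state-error weight per stage
Ru = np.array([[0.1]])                   # input weight per stage

# Stacked prediction over the horizon: X = Abar x0 + Bbar U
Abar = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Bbar = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Bbar[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Qx)            # block-diagonal stage weights
Rbar = np.kron(np.eye(N), Ru)
x0 = np.array([1.0, 0.0])

H = Bbar.T @ Qbar @ Bbar + Rbar          # Hessian of the condensed QP
g = Bbar.T @ Qbar @ (Abar @ x0)          # gradient term
U = np.linalg.solve(H, -g)               # minimizer of X'Qbar X + U'Rbar U
print(U[:nu])                            # first input of the optimal sequence
```

With the polyhedral constraints of (3) reinstated, the same condensed QP is what a solver such as the CVXGEN-generated code used later in the chapter handles, with inequality constraints added on U.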
The observer design and the disturbance model will be briefly discussed in this subsection.   J0 x0 , U, Xr e f , Ur e f min U subject to xk+1 = Axk + Buk + Bd dk ; dk+1 = dk , k = 0, . . . , N − 1 xk ∈ XC , uk ∈ UC x N ∈ XC N x0 = x (t0 ) , d0 = d (t0 ) . The optimal control problem to achieve offset-free state tracking under linear state and and input constraints is shown in (3), where J0 is the cost function, Xr e f = ref ref {x0 , . . . , x N } is the reference state sequence, U = {u0 , . . . , u N −1 } and Ur e f = ref ref {u0 , . . . , u N −1 } are respectively the control input sequence and the steady state input sequence, Bd is the disturbance model and dk is the external disturbances, XC , UC and XC N are polyhedra. The choice of the disturbance model is not a trivial task, and depends on the system under consideration and the type of disturbances expected. The optimization problem is defined as N −1     ref ref ref ref J0 x0 , U, X , U (xk − xk )T Qx (xk − xk )+ = k=0 ref (uk − uk )T Ru (uk − uk )+  (uk − uk−1 )T RΔ (uk − uk−1 ) + ref (x N − x N )T P(x N − x N ), where Qx  0 is the penalty on the state error, Ru  0 is the penalty on control input error, RΔ  0 is a penalty on the control change rate and P is the terminal state error penalty. In general, stability and feasibility of receding horizon problems are not ensured except for particular cases such as infinite horizon control problems as in Linear Quadratic Regulator (LQR) case. When the prediction horizon is limited to N steps, the stability and feasibility guarantees are disputable. In principle, longer prediction horizon tends to improve stability and feasibility properties of the controller, but the computation effort will increase, and for aerial robot application, fast control action needs to be computed on limited computation power platforms. However, the terminal cost P and terminal constraint XC N can be chosen such that closed-loop stability and feasibility are ensured [6]. 
In this chapter we focus more on the choice of the terminal weight P, as it is easy to compute, while the terminal constraint is generally more difficult; in practice, stability is achieved with a long enough prediction horizon. Note that in our cost function (4) we penalize the control input rate $\Delta u_k$. This ensures a smooth control input and avoids undesired oscillations. In the cost function (4), $u_{-1}$ is the actual control input applied to the system in the previous time step. As previously mentioned, offset-free reference tracking can be achieved by augmenting the system model with disturbances $d_k$ to capture the modeling error. Assume that we want to track the system output $y_k = C x_k$ and achieve steady-state offset-free tracking, $y_\infty = r_\infty$. A simple observer that can estimate such disturbances is given by

$$
\begin{bmatrix} \hat{x}_{k+1} \\ \hat{d}_{k+1} \end{bmatrix} =
\begin{bmatrix} A & B_d \\ 0 & I \end{bmatrix}
\begin{bmatrix} \hat{x}_k \\ \hat{d}_k \end{bmatrix} +
\begin{bmatrix} B \\ 0 \end{bmatrix} u_k +
\begin{bmatrix} L_x \\ L_d \end{bmatrix} \left( C \hat{x}_k - y_{m,k} \right) \qquad (5)
$$

where $\hat{x}$ and $\hat{d}$ are the estimated state and external disturbance, $L_x$ and $L_d$ are the observer gains, and $y_{m,k}$ is the measured output at time k. Under the assumption of a stable observer, it is possible to compute the MPC steady-state state $x^{ref}$ and steady-state control input $u^{ref}$ by solving the following system of linear equations:

$$
\begin{bmatrix} A - I & B \\ C & 0 \end{bmatrix}
\begin{bmatrix} x_{ref,k} \\ u_{ref,k} \end{bmatrix} =
\begin{bmatrix} -B_d \hat{d}_k \\ r_k \end{bmatrix}
$$

2.3 Nonlinear Model Predictive Control

The behavior of aerial vehicles is better described by a set of nonlinear differential equations that capture aerodynamic and coupling effects. Therefore, in this subsection we present the theory behind Nonlinear MPC, which exploits the full system dynamics and generally achieves better performance when it comes to aggressive trajectory tracking. The optimization problem for nonlinear MPC is formulated in Eq. (7):

$$
\begin{aligned}
\min_{U}\ & \int_{t=0}^{T} \left\| h(x(t), u(t)) - y_{ref}(t) \right\|_Q^2 \, dt + \left\| m(x(T)) - y_{ref}(T) \right\|_P^2 \\
\text{s.t.}\ & \dot{x} = f(x(t), u(t)) \\
& u(t) \in U_C, \quad x(t) \in X_C \\
& x(0) = x(t_0)
\end{aligned} \qquad (7)
$$

A direct multiple-shooting technique is used to solve the Optimal Control Problem (OCP) (7).
In this approach, the system dynamics are discretized over a time grid $t_0, \dots, t_N$ within the time intervals $[t_k, t_{k+1}]$. The inequality constraints and the control action are discretized over the same time grid. A Boundary Value Problem (BVP) is solved for each interval, and additional continuity constraints are imposed. Due to the nature of the system dynamics and the imposed constraints, the optimization problem becomes a Nonlinear Program (NLP). This NLP is solved using a Sequential Quadratic Programming (SQP) technique, in which the Quadratic Programs (QPs) are solved by an active-set method using the qpOASES solver [7]. Note that, in case of infeasibility of the underlying QP, ℓ1-penalized slack variables are introduced to relax all constraints. The controller is implemented in a receding-horizon fashion, where only the first computed control action is applied to the system, and the rest of the predicted state and control trajectory is used as an initial guess for the OCP to be solved in the next iteration.

2.4 Linear Robust Model Predictive Control

Despite the robustness properties of the nominal MPC formulation, specific robust control variations exist for when further robustness guarantees are required. The problem of linear Robust Model Predictive Control (RMPC) may be formulated as a minimax optimization problem that is solved explicitly. As an optimality metric we select the Minimum Peak Performance Measure (MPPM) for its known robustness properties. Figure 2 outlines the relevant building blocks [8].

Fig. 2 Overview of the explicit RMPC optimization problem functional components (multiparametric optimizer; relaxations and derivation of a convex optimization problem; feedback predictions; objective function; state-space representation using the concatenated vectors over the prediction horizon; constraints robustification; explicit piecewise affine form; extended sequential table traversal; state and input constraints)
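Before deriving the RMPC formally, the minimax (minimum-peak) idea can be illustrated by brute force on an assumed scalar system x+ = a x + b u + g w with |w| <= 1 (a toy, not the chapter's UAV model). For linear dynamics the worst case over a box is attained at a vertex, so enumerating disturbance vertices is exact here:

```python
import itertools
import numpy as np

# Pick the open-loop input sequence that minimizes the worst-case peak |x|
# over all vertex realizations of the box-bounded disturbance.
a, b, g = 0.9, 1.0, 0.2      # assumed scalar dynamics and disturbance gain
N, x0 = 3, 1.0

def peak(u_seq, w_seq):
    """Peak |x| along the trajectory driven by inputs u_seq, disturbances w_seq."""
    x, worst = x0, abs(x0)
    for u, w in zip(u_seq, w_seq):
        x = a * x + b * u + g * w
        worst = max(worst, abs(x))
    return worst

def worst_case(u_seq):
    """Max peak over all 2^N vertices of the disturbance box."""
    return max(peak(u_seq, w)
               for w in itertools.product([-1.0, 1.0], repeat=N))

grid = np.linspace(-1.5, 1.5, 13)        # crude search over input sequences
best = min(itertools.product(grid, repeat=N), key=worst_case)
print(worst_case(best) <= worst_case((0.0, 0.0, 0.0)))
```

This open-loop minimax is exactly the conservative formulation the chapter improves on with feedback predictions, where the inputs are allowed to depend on past disturbances rather than being fixed in advance.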
Within this RMPC approach, the following linear time-invariant representation of the system dynamics may be considered:

$$
\begin{aligned}
x_{k+1} &= A x_k + B u_k + G w_k \\
y_k &= C x_k
\end{aligned}
$$

where $x_k \in X$, $u_k \in U$, and the disturbance signals $w_k$ are unknown but bounded ($w_k \in W$). Within this paper, box-constrained disturbances are considered ($W_\infty = \{w : \|w\|_\infty \le 1\}$). Consequently, the RMPC problem will be formulated for the system representation and additive disturbance presented above. Let the following denote the concatenated versions of the predicted outputs, states, inputs and disturbances, where $[k+i|k]$ marks the value predicted for time $k+i$, computed at time $k$:

$$
\begin{aligned}
Y &= \begin{bmatrix} y_{k|k}^T & y_{k+1|k}^T & \cdots & y_{k+N-1|k}^T \end{bmatrix}^T & (9)\\
X &= \begin{bmatrix} x_{k|k}^T & x_{k+1|k}^T & \cdots & x_{k+N-1|k}^T \end{bmatrix}^T & (10)\\
U &= \begin{bmatrix} u_{k|k}^T & u_{k+1|k}^T & \cdots & u_{k+N-1|k}^T \end{bmatrix}^T & (11)\\
W &= \begin{bmatrix} w_{k|k}^T & w_{k+1|k}^T & \cdots & w_{k+N-1|k}^T \end{bmatrix}^T & (12)
\end{aligned}
$$

where $X \in X^N = X \times X \times \cdots \times X$, $U \in U^N = U \times U \times \cdots \times U$, $W \in W^N = W \times W \times \cdots \times W$. The predicted states and outputs depend linearly on the current state, the future control inputs and the disturbances, and thus the following holds:

$$
X = \mathcal{A} x_{k|k} + \mathcal{B} U + \mathcal{G} W, \qquad Y = \mathcal{C} X
$$

where $\mathcal{A}, \mathcal{B}, \mathcal{C}, \mathcal{G}$ are the stacked state-vector matrices as in [8]. Subsequently, the RMPC problem based on the MPPM (MPPM-RMPC) may be formulated as:

$$
\begin{aligned}
\min_{u} \max_{w}\ & \|Y\|_\infty, \quad \|Y\|_\infty = \max_j \|y_{k+j|k}\|_\infty \\
\text{s.t.}\ & u_{k+j|k} \in U, \ \forall\, w \in W \\
& x_{k+j|k} \in X, \ \forall\, w \in W \\
& w_{k+j|k} \in W
\end{aligned}
$$

Following the aforementioned formulation, the optimization problem will tend to become conservative, as the optimization essentially computes an open-loop control sequence. Feedback predictions are a method to encode the knowledge that a receding-horizon approach is followed. Towards their incorporation, a type of feedback control structure has to be assumed. Among the feedback parametrizations that are known to lead to a convex problem space, the following is selected [9, 10]:

$$
U = L W + V, \qquad V = \begin{bmatrix} \upsilon_{k|k}^T & \upsilon_{k+1|k}^T & \cdots & \upsilon_{k+N-1|k}^T \end{bmatrix}^T
$$

$$
L = \begin{pmatrix}
0 & 0 & 0 & \cdots & 0 \\
L_{10} & 0 & 0 & \cdots & 0 \\
L_{20} & L_{21} & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
L_{(N-1)0} & L_{(N-1)1} & \cdots & L_{(N-1)(N-2)} & 0
\end{pmatrix}
$$
L (N −1)0 L (N −1)1 · · · L (N −1)(N −2) 0 Employing this feedback predictions parameterization, the control sequence is now parameterized directly in the uncertainty, and the matrix L describes how the control action uses the disturbance vector. Inserting this parametrization yields the following representation, where V becomes now the RMPC-manipulated action: X = Axk|k + BV + (G + BL)W U = LW + V and the mapping from L, V to X , U is now bilinear. This allows the formulation of the minimax MPC as a convex optimization problem [9]. Furthermore, let:   Fu = f uT f uT · · · f uT   F x = f xT f xT · · · f xT denote the concatenated –over the prediction horizon– versions of the input and state constraints f u and f x . More specifically, f u = [ f u (1)max , f u (1)min . . .] and f x = [ f x (1)max , f x (1)min . . .] where f u (i)max , f u (i)min represent the maximum and minimum allowed input values of the i-th input, while f x ( j)max , f x ( j)min represent the maximum and minimum acceptable/safe state configurations of the j-th state. min V ,L,τ τ ||C (A xk|k + BV + (G + BL)W )||∞ ≤ τ, ∀ W ∈ W N Eu (V + LW ) ≤ Fu , ∀ W ∈ W N E x (A xk|k + BV + (G + BL)W ) ≤ Fx , ∀ W ∈ W N Eu = diag N Eu , E x = diag N Ex , τ > 0 within which: (a) Ex , Eu are matrices that allow the formulation of the state and input constraints in Linear Matrix Inequality (LMI) form, (b) diag N i is a block diagonal matrix with i being the matrix that fills each diagonal block and allows the incorporation of the state and input constraints. Within this formulation τ is defined as a positive scalar value that bounds (from above) the objective function. The peak constraint may be equivalently reformulated as: C (A xk|k + BV ) + C (G + BL)W ≤ τ 1, ∀ W ∈ W N −C (A xk|k + BV ) − C (G + BL)W ≤ τ 1, ∀ W ∈ W N where 1 is a vector of ones (1 1 · · · 1)T with suitable dimensions. Satisfaction of these uncertain inequalities is based on robust optimization methods. 
Robust Uncertain Inequalities Satisfaction

Since box-constrained disturbances ($w \in W_\infty$) are assumed, the following holds:

$$
\max_{|x| \le 1} c^T x = \|c\|_1 = |c^T| \mathbf{1}
$$

This equation holds as $\max_{|x| \le 1} c^T x = \max_{|x| \le 1} \sum_i c_i x_i = \sum_i c_i \,\mathrm{sign}(c_i) = \|c\|_1$. Consequently, the uncertain constraints with $w \in W_\infty$ are satisfied as long as [9]:

$$
\begin{aligned}
\mathcal{C} (\mathcal{A} x_{k|k} + \mathcal{B} V) + |\mathcal{C} (\mathcal{G} + \mathcal{B} L)| \mathbf{1} &\le \tau \mathbf{1} \\
-\mathcal{C} (\mathcal{A} x_{k|k} + \mathcal{B} V) + |\mathcal{C} (\mathcal{G} + \mathcal{B} L)| \mathbf{1} &\le \tau \mathbf{1}
\end{aligned}
$$

To handle these constraints in a linear-programming fashion [5], the term $|\mathcal{C} (\mathcal{G} + \mathcal{B} L)|$ is bounded from above by introducing a matrix variable $\Gamma \succeq 0$:

$$
\mathcal{C} (\mathcal{G} + \mathcal{B} L) \le \Gamma, \qquad -\mathcal{C} (\mathcal{G} + \mathcal{B} L) \le \Gamma
$$

and the peak constraint is guaranteed as long as:

$$
\mathcal{C} (\mathcal{A} x_{k|k} + \mathcal{B} V) + \Gamma \mathbf{1} \le \tau \mathbf{1}, \qquad
-\mathcal{C} (\mathcal{A} x_{k|k} + \mathcal{B} V) + \Gamma \mathbf{1} \le \tau \mathbf{1}
$$

2.4.3 Robust State and Input Constraints

To robustly satisfy hard constraints on the inputs and the states along the prediction horizon, a new matrix $\Omega \succeq 0$ is introduced and the constraints are reformulated as:

$$
\begin{bmatrix} \mathcal{E}_x (\mathcal{A} x_{k|k} + \mathcal{B} V) \\ \mathcal{E}_u V \end{bmatrix} + \Omega \mathbf{1} \le \begin{bmatrix} \mathcal{F}_x \\ \mathcal{F}_u \end{bmatrix} \qquad (31)
$$

$$
\begin{bmatrix} \mathcal{E}_x (\mathcal{G} + \mathcal{B} L) \\ \mathcal{E}_u L \end{bmatrix} \le \Omega \qquad (32)
$$

$$
-\begin{bmatrix} \mathcal{E}_x (\mathcal{G} + \mathcal{B} L) \\ \mathcal{E}_u L \end{bmatrix} \le \Omega \qquad (33)
$$

Optimizing the control sequence while robustly satisfying the state and input constraints is of essential importance for the flight control of micro aerial vehicles.

Minimum Peak Performance Robust MPC Formulation

Based on the aforementioned derivations, the total MPPM-RMPC formulation is solved subject to element-wise bounded disturbances and feedback predictions through the following linear-programming problem:

$$
\begin{aligned}
\min_{V, L, \tau, \Omega, \Gamma}\ & \tau \\
\text{s.t.}\ & \mathcal{C} (\mathcal{A} x_{k|k} + \mathcal{B} V) + \Gamma \mathbf{1} \le \tau \mathbf{1} \\
& -\mathcal{C} (\mathcal{A} x_{k|k} + \mathcal{B} V) + \Gamma \mathbf{1} \le \tau \mathbf{1} \\
& \mathcal{C} (\mathcal{G} + \mathcal{B} L) \le \Gamma, \quad -\mathcal{C} (\mathcal{G} + \mathcal{B} L) \le \Gamma \\
& \begin{bmatrix} \mathcal{E}_x (\mathcal{A} x_{k|k} + \mathcal{B} V) \\ \mathcal{E}_u V \end{bmatrix} + \Omega \mathbf{1} \le \begin{bmatrix} \mathcal{F}_x \\ \mathcal{F}_u \end{bmatrix} \\
& \begin{bmatrix} \mathcal{E}_x (\mathcal{G} + \mathcal{B} L) \\ \mathcal{E}_u L \end{bmatrix} \le \Omega, \quad -\begin{bmatrix} \mathcal{E}_x (\mathcal{G} + \mathcal{B} L) \\ \mathcal{E}_u L \end{bmatrix} \le \Omega
\end{aligned}
$$

Multiparametric Explicit Solution

The presented RMPC strategy requires the solution of a linear-programming problem. However, a multiparametric explicit solution is possible, due to the fact that the control action takes the general form [6]:

$$
u_k = F_r x_k + Z_r, \quad \text{if } x_k \in \Pi^r
$$

where $\Pi^r$, $r = 1, \dots$
, N^r are the regions of the receding horizon controller. The r-th control law is valid if the state vector x_k is contained in the convex polyhedral region Π^r = {x_k | H^r x_k ≤ K^r}, computed and described in h-representation [11]. Such a fact enables fast real-time execution even on microcontrollers with very limited computing power. In this framework, the real-time code described in [8] is employed.

3 Model-Based Trajectory Tracking Controller for Multi-rotor Systems

In this section, we present a simplified model of a multi-rotor system that can be used for model-based control to achieve trajectory tracking, and we present a linear and a nonlinear model predictive controller for trajectory tracking.

3.1 Multirotor System Model

The 6DoF pose of the multi-rotor system can be defined by assigning a fixed inertial frame W and a body frame B attached to the vehicle, as shown in Fig. 3. We denote by p the position of the origin of frame B in frame W, expressed in frame W, and by R the rotation matrix of frame B in frame W. Moreover, we denote by φ, θ and ψ the roll, pitch and yaw angles of the vehicle. In this model we assume a low-level attitude controller that is able to track desired roll and pitch angles φ_d, θ_d with a first-order behavior. The first-order inner-loop approximation provides sufficient information for the MPC to take the low-level controller behavior into account. The inner-loop first-order parameters can be identified through classic system identification techniques. The nonlinear model used for trajectory tracking of the multi-rotor system is shown in Eq. (37).

Fig. 3 Illustration of the Firefly hexacopter from Ascending Technologies with attached body-fixed frame B and inertial frame W

ṗ(t) = v(t)
v̇(t) = R(ψ, θ, φ) (0, 0, T)^T + (0, 0, −g)^T − diag(A_x, A_y, A_z) v(t) + d(t)
φ̇(t) = (1/τ_φ) (K_φ φ_d(t) − φ(t))
θ̇(t) = (1/τ_θ) (K_θ θ_d(t) − θ(t))                                        (37)

where v indicates the vehicle velocity, g is the gravitational acceleration, T is the mass-normalized thrust, A_x, A_y, A_z indicate the mass-normalized drag coefficients, and d is an external disturbance. τ_φ, K_φ and τ_θ, K_θ are the time constant and gain of the inner-loop behavior for the roll angle and pitch angle respectively. The cascade controller structure assumed in this chapter is shown in Fig. 4.

3.2 Linear MPC

In this subsection we show how to formulate a linear MPC to achieve trajectory tracking for a multi-rotor system and how to integrate it into ROS. The optimization problem presented in Eq. (3) is solved by generating a C-code solver using the CVXGEN framework [12]. CVXGEN generates a high-speed solver for convex optimization problems by exploiting the problem structure. For clarity purposes, we rewrite the optimization problem here and show how to generate a custom solver using CVXGEN. The optimization problem is given by

Fig. 4 Controller scheme for the multi-rotor system. n_1 . . . n_m are the i-th rotor speeds and y is the measured vehicle state

min  Σ_{k=0}^{N−1} [ (x_k − x_k^ref)^T Q_x (x_k − x_k^ref) + (u_k − u_k^ref)^T R_u (u_k − u_k^ref) + (u_k − u_{k−1})^T R_Δ (u_k − u_{k−1}) ] + (x_N − x_N^ref)^T P (x_N − x_N^ref)

subject to  x_{k+1} = A x_k + B u_k + B_d d_k;  d_{k+1} = d_k,  k = 0, . . . , N − 1
            u_k ∈ U_C
            x_0 = x(t_0),  d_0 = d(t_0).                                   (38)

To generate a solver for the aforementioned optimization problem, the following problem description is used in CVXGEN.

dimensions
  m = 3     # dimension of inputs.
  nd = 3    # dimension of disturbances.
  nx = 8    # dimension of state vector.
  T = 18    # horizon - 1.
end

parameters
  A (nx, nx)    # dynamics matrix.
B ( nx ,m) # t r a n s f e r matrix . Bd ( nx , nd ) # disturbance t r a n s f e r matrix Q_x ( nx , nx ) psd # s t a t e c o s t , p o s i t i v e s e m i d i f i n e d . P ( nx , nx ) psd # f i n a l s t a t e p e n a l t y , p o s i t i v e s e m i d i f i n e d . R_u (m,m) psd # i n p u t p e n a l t y , p o s i t i v e s e m i d i f i n e d . R _ d e l t a (m,m) psd # d e l t a i n p u t p e n a l t y , p o s i t i v e s e m i d i f i n e d . x [ 0 ] ( nx ) # initial state . d ( nd ) # disturbances . u_prev (m) # p r e v i o u s i n p u t a p p l i e d t o t h e system . u_max (m) # input amplitude l i m i t . u_min (m) # input amplitude l i m i t . x _ s s [ t ] ( nx ) , t = 0 . . T+1 # reference state . u _ s s [ t ] (m) , t = 0 . . T # reference input . end variables x [ t ] ( nx ) , t = 1 . . T+1 u [ t ] (m) , t = 0 . . T end # state . # input . minimize quad ( x[0] − x _ s s [ 0 ] , Q_x ) + quad ( u[0] − u _ s s [ 0 ] , R_u ) + quad ( u [ 0 ] − u_prev , R _ d e l t a ) + sum [ t = 1 . . T ] ( quad ( x [ t ]− x _ s s [ t ] , Q_x ) + quad ( u [ t ]− u _ s s [ t ] , R_u ) + quad ( u [ t ] − u [ t −1] , R _ d e l t a ) ) +quad ( x [ T+1]− x _ s s [ T+ 1 ] , P ) subject to x [ t +1] == A∗x [ t ] + B∗u [ t ] + Bd∗d , t = 0 . . T # dynamics u_min linguistic - > value = 0; it_e - > m e m b e r s h i p F u n c t i o n - > d e g r e e = 0; } // I n f e r e n c e float s t r e n g t h _ t m p ; for ( it_r = ruleSet - > r u l e V e c t o r . begin () ; it_r != ruleSet - > r u l e V e c t o r . end () ; it_r ++) { // C a l c u l a t e the s t r e n g t h of the p r e m i s e s strength_tmp = FZ_MAX_LIMIT ; for ( it_e = it_r - > i f _ R u l e E l e m e n t V e c t o r . begin () ; it_e != it_r - > i f _ R u l e E l e m e n t V e c t o r . end () ; it_e ++) { s t r e n g t h _ t m p = m i n i m u m ( s t r e n g t h _ t m p , it_e - > membershipFunction -> degree ); if (! 
it_e->isOperator)
                strength_tmp = 1 - strength_tmp;
        }

        // Calculate the strength of the consequences
        for (it_e = it_r->then_RuleElementVector.begin();
             it_e != it_r->then_RuleElementVector.end(); it_e++)
        {
            it_e->membershipFunction->degree =
                maximum(strength_tmp, it_e->membershipFunction->degree);
            if (!it_e->isOperator)
                it_e->membershipFunction->degree =
                    1 - it_e->membershipFunction->degree;
        }
        it_r->strength = strength_tmp;
    }
}

Once all elements of the rules are evaluated, the third step of the fuzzifying process is the defuzzification process. As discussed in Sect. 3.2, during the defuzzification process the triangle/trapezoid areas of the output linguistic variables are calculated, taking into account the membership function and the rule activation. Then the linguistic value chosen as output is converted into a raw value that may be applied to the rotors. Such a raw value is obtained by means of calculating an arithmetic average or a weighted average (Center of Gravity method) of the areas. Listing 1.9 shows the implementation of the FuzzySet::defuzzification() method.

Listing 1.9 Source code of FuzzySet::defuzzification() method

void FuzzySet::defuzzification()
{
    typedef std::vector<Linguistic>::iterator LinguisticIterator_t;
    typedef std::vector<MembershipFunction>::iterator MembershipFunctionIterator_t;

    LinguisticIterator_t it_l;
    MembershipFunctionIterator_t it_m;

    float sum_prod;
    float sum_area;
    float area, centroide;

    for (it_l = outputLinguisticSet->linguisticVector.
begin(); it_l != outputLinguisticSet->linguisticVector.end(); it_l++)
    {
        sum_prod = sum_area = 0;

        for (it_m = it_l->membershipFunctionVector.begin();
             it_m != it_l->membershipFunctionVector.end(); it_m++)
        {
            area = it_m->calcTrapeziumArea();
            centroide = it_m->a + ((it_m->d - it_m->a) / 2.0);
            sum_prod += area * centroide;
            sum_area += area;
        }
        if (sum_area == 0)
            it_l->value = FZ_MAX_OUTPUT;
        else
        {
            it_l->value = sum_prod / sum_area;
        }
    }
}

4.5 Main Controller Implementation

The hexacopter movement fuzzy control system has been implemented in several distinct source code files. The source code file named rosvrep_controller.cpp implements the main control loop, i.e. the system main() function. The file HexaPlus.cpp contains the implementation of the HexaPlus class, which is responsible for initializing the fuzzy library objects (see Sect. 4.4). The main() function is divided into two parts. The first one performs all necessary initialization, i.e. it instantiates the HexaPlus object, loads the fuzzy set files, creates a ROS node, and configures the data publishers and subscribers (i.e. callback functions). Listing 1.10 depicts fragments of the initialization part of the main() function.

Listing 1.10 Fragments of main() function in rosvrep_controller.cpp
1 //////////// THE CALLBACK FUNCTIONS ///////////////////////////////
2 // Subscriber callback functions for euler angles
3 ...
4 void callback_eulerZ(const std_msgs::Float32 f)
5 { eulerZ = f.data; }
6 // Subscriber callback functions for GPS position
7 ...
8 void c a l l b a c k _ g p s Z ( const s t d _ m s g s :: F l o a t 3 2 f ) 9 { gpsZ = f . data ; } 10 // S u b s c r i b e r c a l l b a c k f u n c t i o n s for a c c e l e r o m e t e r s e n s o r 11 ... 12 void c a l l b a c k _ a c c e l Z ( const s t d _ m s g s :: F l o a t 3 2 f ) 13 { accelZ = f. data ; } 14 // S u b s c r i b e r c a l l b a c k f u n c t i o n s for o p e r a t o r s e t p o i n t s 15 ... 16 void c a l l b a c k _ g p s Z _ s e t p o i n t ( const s t d _ m s g s :: F l o a t 3 2 f ) 17 { g p s Z _ s e t p o i n t = f . data ;} 18 ... 19 // / / / / / / / / / / / END C A L L B A C K F U N C T I O N /////////////////////////////// 20 21 int main ( int argc , char * argv []) 22 { E. Koslosky et al. 23 u n s i g n e d long int t i m e _ d e l a y =0; 24 25 // I n i t i a l i z e the ros s u b s c r i b e r s 26 ros :: init ( argc , argv , " r o s v r e p _ c o n t r o l l e r " ) ; 27 ros :: N o d e H a n d l e n ; 28 29 // the r o s S i g n a l is used to send s i g n a l to uav via P u b l i s h e r . 30 s t d _ m s g s :: F l o a t 3 2 r o s S i g n a l ; 31 // H e x a c o p t e r s e n s o r s u b s c r i b e r s 32 ... // some lines are o m i t t e d 33 34 // I n i t i a l i z e the ROS P u b l i s h e r s 35 // R o t o r s p u b l i s h e r s 36 ros :: P u b l i s h e r r o s A d v _ p r o p F R O N T = 37 n . advertise < s t d _ m s g s :: Float32 >( " / vrep / p r o p F R O N T " ,1) ; 38 ros :: P u b l i s h e r r o s A d v _ p r o p L E F T _ F R O N T = 39 n . advertise < s t d _ m s g s :: Float32 >( " / vrep / p r o p L E F T _ F R O N T " ,1) ; 40 ros :: P u b l i s h e r r o s A d v _ p r o p L E F T _ R E A R = 41 n . advertise < s t d _ m s g s :: Float32 >( " / vrep / p r o p L E F T _ R E A R " ,1) ; 42 ros :: P u b l i s h e r r o s A d v _ p r o p R E A R = 43 n . 
advertise < s t d _ m s g s :: Float32 >( " / vrep / p r o p R E A R " ,1) ; 44 ros :: P u b l i s h e r r o s A d v _ p r o p R I G H T _ F R O N T = 45 n . advertise < s t d _ m s g s :: Float32 >( " / vrep / p r o p R I G H T _ F R O N T " ,1) ; 46 ros :: P u b l i s h e r r o s A d v _ p r o p R I G H T _ R E A R = 47 n . advertise < s t d _ m s g s :: Float32 >( " / vrep / p r o p R I G H T _ R E A R " ,1) ; 48 ros :: P u b l i s h e r r o s A d v _ p r o p Y a w = 49 n . advertise < s t d _ m s g s :: Float32 >( " / vrep / Yaw " ,1) ; 50 ... // r e m a i n d e r l i n e s are o m i t t e d The second part is the main control loop of the hexacopter movement fuzzy control system. Such a loop performs three main activities: (i) pre-processing phase, (ii) processing of five distinct fuzzy controllers, (iii) post-processing phase. These activities are discussed in Sect. 3.2. Moreover, the execution frequency of loop iterations is 10 Hz. Such an execution frequency is obtained by using the commands loop_rate.sleep() and ros::spinOnce() at the end of the loop. The 10 Hz timing requirement has been arbitrarily defined and has been demonstrated to be enough to control a simulated hexacopter as discussed in Sect. 5. However, it is important to highlight that a more careful and sound timing analysis is required in order to define the execution frequency of the main control loop for a real hexacopter. A discussion on such an issue is out of this chapter scope. Interested reader should refer to [19, 22–27]. The pre-processing phase is responsible for acquiring data from the input sensors, processing the input movement commands, as well as for calculating the controlled data used as input to the five fuzzy controllers. Two examples of data calculated in this phase are: (i) vertical and horizontal speed calculated using the hexacopter displacement over time; and (ii) the drift of new heading angle in comparison with the actual heading. 
Listing 1.11 presents some fragments of the code related to the pre-processing phase. Listing 1.11 Fragments of pre-processing phase in rosvrep_controller.cpp 1 2 3 ... // some lines are o m i t t e d // D e t e r m i n e the delta as errors . // It means the d i f f e r e n c e b e t w e e n s e t p o i n t and c u r r e n t information Designing Fuzzy Logic Controllers for ROS-Based Multirotors 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 // GPS error gpsX_error = ( float ) g p s X _ s e t p o i n t - gpsX ; gpsY_error = ( float ) g p s Y _ s e t p o i n t - gpsY ; gpsZ_error = ( float ) g p s Z _ s e t p o i n t - gpsZ ; // View p o s i t i o n e r r o r ( yaw or h e a d i n g of the h e x a c o p t e r ) viewX_error = ( float ) v i e w X _ s e t p o i n t - gpsX ; viewY_error = ( float ) v i e w Y _ s e t p o i n t - gpsY ; // C a l c u l a t e the d r i f t _ a n g l e // This angle is the d i f f e r e n c e b e t w e e n d i r e c t i o n // to n a v i g a t e and d i r e c t i o n of view ( yaw ) . drift_angle = ( float ) eulerZ - uav_goal_angle ; ... // r e m a i n d e r l i n e s are o m i t t e d Once the pre-processing phase is executed, the second activity is responsible to execute the five fuzzy controllers. This occurs by means of invoking the fuzzifying() method of each controller FuzzySet object. The “fuzzifying” process includes “fuzzyfication”, rules inference, and “defuzzification” (see Sect. 4.4). Listing 1.12 depicts the code fragment that processes the five fuzzy controllers. Listing 1.12 Fragment depicting rosvrep_controller.cpp 1 2 3 4 5 6 7 ... // p r e v i o u s l i n e s are o m i t t e d h e x a p l u s . fS_stabX - > f u z z i f y i n g () ; h e x a p l u s . fS_stabY - > f u z z i f y i n g () ; h e x a p l u s . fS_stabZ - > f u z z i f y i n g () ; h e x a p l u s . fS_yaw - > f u z z i f y i n g () ; h e x a p l u s . fS_hnav - > f u z z i f y i n g () ; ... 
// r e m a i n d e r l i n e s are o m i t t e d The last activity is the post-processing phase. In this phase the output linguistic variables are transformed in raw values that are applied on the rotors in order to control the hexacopter movements. Listing 1.13 presents a fragment of post-processing phase code. Listing 1.13 Fragment depicting the post-processing phase in rosvrep_controller.cpp 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 // f u z z i f y i n g is finished , a p p l y i n g the o u t p u t s Opitch = h e x a p l u s . fz_Opitch - > value ; Oroll = h e x a p l u s . fz_Oroll - > value ; Othrottle = hexaplus . fz_Othrottle -> value ; Oyaw = h e x a p l u s . fz_Oyaw - > value ; Opitch_nav = h e x a p l u s . f z _ O p i t c h _ n a v - > value ; propForceFRONT angleOth ); propForceRIGHT_FRONT a n g l e O t h ) /2 ) ) ; propForceRIGHT_REAR a n g l e O t h ) /2 ) ) ; propForceREAR angleOth ); propForceLEFT_REAR a n g l e O t h ) /2 ) ) ; propForceLEFT_FRONT a n g l e O t h ) /2 ) ) ; = ( float ) O t h r o t t l e - 0.45* zOth * cos ( = ( float ) O t h r o t t l e - ( 0 . 4 5 * zOth *( sin ( = ( float ) O t h r o t t l e - ( 0 . 4 5 * zOth *( sin ( = ( float ) O t h r o t t l e + 0.45* zOth * cos ( = ( float ) O t h r o t t l e + ( 0 . 4 5 * zOth *( sin ( = ( float ) O t h r o t t l e + ( 0 . 4 5 * zOth *( sin ( // S e n d i n g s i g n a l s to the r o t o r s r o s S i g n a l . data = p r o p F o r c e F R O N T ; E. Koslosky et al. rosAdv_propFRONT . publish ( rosSignal ); r o s S i g n a l . data = p r o p F o r c e R I G H T _ F R O N T ; rosAdv_propRIGHT_FRONT . publish ( rosSignal ); r o s S i g n a l . data = p r o p F o r c e R I G H T _ R E A R ; rosAdv_propRIGHT_REAR . publish ( rosSignal ); r o s S i g n a l . data = p r o p F o r c e R E A R ; rosAdv_propREAR . publish ( rosSignal ); r o s S i g n a l . data = p r o p F o r c e L E F T _ R E A R ; rosAdv_propLEFT_REAR . publish ( rosSignal ); r o s S i g n a l . 
data = p r o p F o r c e L E F T _ F R O N T ; rosAdv_propLEFT_FRONT . publish ( rosSignal ); r o s S i g n a l . data = Oyaw ; rosAdv_propYaw . publish ( rosSignal ); Finally, it is worth mentioning that the main controller interacts with other two applications. A command interface application named Panel sends commands to determine a new position, as well as new heading direction, towards which the hexacopter must fly. Moreover, some data produced in the main controller are published so that these telemetry data can be seen within an application named Telemetry. Listing 1.14 shows the code in rosvrep_controller.cpp that configures ROS publishers for the telemetry data. Next sections provide detail on these two applications. Listing 1.14 Configuring ROS rosvrep_controller.cpp 1 2 3 4 5 6 7 8 9 // T e l e m e t r y ros :: P u b l i s h e r r o s A d v _ g p s X _ e r r o r = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / g p s X _ e r r o r " ,1) ; ros :: P u b l i s h e r r o s A d v _ g p s Y _ e r r o r = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / g p s Y _ e r r o r " ,1) ; ros :: P u b l i s h e r r o s A d v _ g p s Z _ e r r o r = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / g p s Z _ e r r o r " ,1) ; ros :: P u b l i s h e r r o s A d v _ d r i f t _ a n g l e = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / d r i f t _ a n g l e " ,1) ; ros :: P u b l i s h e r r o s A d v _ u a v _ g o a l _ a n g l e = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / u a v _ g o a l _ a n g l e " ,1) ; ros :: P u b l i s h e r r o s A d v _ u a v _ g o a l _ d i s t = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / u a v _ g o a l _ d i s t " ,1) ; ros :: P u b l i s h e r r o s A d v _ s p e e d _ u a v Z = n . 
advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / s p e e d _ u a v Z " ,1) ; ros :: P u b l i s h e r r o s A d v _ s p e e d _ g o a l = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / s p e e d _ g o a l " ,1) ; 4.6 Command Interface Implementation The command interface application named Panel is a ROS node that allows a user to send commands to modify hexacopter pose and position. The implementation of such an application is provided in rosvrep_panel.cpp file. Two types of commands are allowed: (i) the user can set a new (X, Y, Z) position, and hence, the hexacopter will fly towards this target position; (ii) the user can set a new heading direction by setting a new (X, Y) position, and hence, the hexacopter will perform a yaw movement in order to aim the target position. Designing Fuzzy Logic Controllers for ROS-Based Multirotors The Panel application is very simple: it publishes a setpoint position and a view direction, as well as provides means for user input. Listing 1.15 shows the code fragment that configures the ROS publisher for the new 3D position (i.e. setpoint) and new heading (i.e. view direction). Listing 1.15 Configuring ROS publisher for telemetry data in rosvrep_panel.cpp 1 2 3 4 5 6 7 // O p e r a t o r s e t p o i n t P u b l i s h e r s ros :: P u b l i s h e r r o s A d v _ g p s X _ s e t p o i n t = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / g p s X _ s e t p o i n t " ,1) ; ros :: P u b l i s h e r r o s A d v _ g p s Y _ s e t p o i n t = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / g p s Y _ s e t p o i n t " ,1) ; ros :: P u b l i s h e r r o s A d v _ g p s Z _ s e t p o i n t = n . advertise < s t d _ m s g s :: Float32 >( " / h e x a p l u s _ t u t o r i a l / g p s Z _ s e t p o i n t " ,1) ; ros :: P u b l i s h e r r o s A d v _ v i e w X _ s e t p o i n t = n . 
advertise<std_msgs::Float32>("/hexaplus_tutorial/viewX_setpoint", 1);
ros::Publisher rosAdv_viewY_setpoint =
    n.advertise<std_msgs::Float32>("/hexaplus_tutorial/viewY_setpoint", 1);

The Panel application must be executed with the rosrun command as depicted in line 01 from Listing 1.16. When the user presses "s", he/she is asked to inform a new position setpoint in terms of X, Y, Z coordinates. When the user presses "y", he/she is asked to inform the new heading direction in terms of X, Y coordinates. In the example presented in lines 09–12 from Listing 1.16, the user sent (5, 3, 7) as the new (X, Y, Z) target position. It is important to mention that the values for the (X, Y, Z) coordinates are measured in meters. After sending the new setpoints, the hexacopter starts moving. If the user presses the CTRL-C and ENTER keys, the program finishes and the hexacopter continues until it reaches the target position.

Listing 1.16 Panel application
01 $ rosrun hexaplus_tutorial rosvrep_panel
02
03 ==========================
04 Setpoints for position     [s]
05 Setpoints for View heading [y]
06 Or CTRL-C to exit.
07
08
09 Enter the option: s
10 Enter X value: 5
11 Enter Y value: 3
12 Enter Z value: 7

4.7 Telemetry Implementation

The Telemetry application is also a very simple program. It receives the signals from the hexacopter sensors and from some data calculated during the execution of the control program. Likewise the Panel application, the Telemetry application is executed with the rosrun command, as depicted in line 01 from Listing 1.17. The telemetry data are shown in lines 03–23.

Listing 1.17 Telemetry application
01 $ rosrun hexaplus_tutorial rosvrep_telemetry
02 ------------------- Telemetry ------------------------------
03 gpsX ..................................: 0.000000
04 gpsY ..................................: 0.000000
05 gpsZ ..................................: 0.000000
06 gpsX_error ............................: 0.000000
07 gpsY_error ............................: 0.000000
08 gpsZ_error ............................: 0.000000
09 drift_angle ...........................: 0.000000 (0.000000 degrees)
10 uav_goal_angle ........................: 0.000000 (0.000000 degrees)
11 uav_goal_dist .........................: 0.000000
12 speed_uavZ ............................: 0.000000
13 speed_goal ............................: 0.000000
14 ---------------- Operator Command ---------------------------
15 gpsX_setpoint .........................: 0.000000
16 gpsY_setpoint .........................: 0.000000
17 gpsZ_setpoint .........................: 0.000000
18 viewX_setpoint ........................: 0.000000
19 viewY_setpoint ........................: 0.000000
20 -------------------------------------------------------------
21
22
23 Press CTRL-C to exit

The Telemetry application is a very simple program. It subscribes to some ROS topics and displays them on the terminal. The program terminates when CTRL-C is pressed. The rosvrep_telemetry.cpp file implements this application. The main part of the code is the declaration of the ROS subscribers and callback functions. Listing 1.18 shows these declarations. The callback function declarations are depicted in lines 02–04, while the ROS subscribers are in lines 09–11. One can notice that some topics start with "/vrep" and others with "/hexaplus_tutorial"; this means that some topics come from V-REP and other ones from the control program.

Listing 1.18 Telemetry application
1 ... // previous line omitted
2 void callback_gpsX_error(const std_msgs::Float32 f) { gpsX_error = f.data; }
3 void callback_gpsY_error(const std_msgs::Float32 f) { gpsY_error = f.
data ; } 4 void c a l l b a c k _ g p s Z _ e r r o r ( const s t d _ m s g s :: F l o a t 3 2 f ) { g p s Z _ e r r o r = f . data ; } 5 6 ... // some lines o m i t t e d 7 8 ros :: S u b s c r i b e r s u b _ g p s X _ e r r o r = n . s u b s c r i b e ( " / h e x a p l u s _ t u t o r i a l / g p s X _ e r r o r " ,1 , c a l l b a c k _ g p s X _ e r r o r ) ; 9 ros :: S u b s c r i b e r s u b _ g p s Y _ e r r o r = n . s u b s c r i b e ( " / h e x a p l u s _ t u t o r i a l / g p s Y _ e r r o r " ,1 , c a l l b a c k _ g p s Y _ e r r o r ) ; 10 ros :: S u b s c r i b e r s u b _ g p s Z _ e r r o r = n . s u b s c r i b e ( " / h e x a p l u s _ t u t o r i a l / g p s Z _ e r r o r " ,1 , c a l l b a c k _ g p s Z _ e r r o r ) ; 11 ... // r e m a i n i n g l i n e s o m i t t e d 5 Virtual Experimentation Platform 5.1 Introduction A common tool used during the design of control systems is the simulator. There is a number of different simulators available for using, e.g. Simulink, Gazebo and Stage. In special, for robotics control systems design, a virtual environment for simulation must allow the creation of objects and also the specification of some of the physical parameters for both objects and the environment. The virtual environment should also provide a programming interface to control not only the simulation, but also the objects behavior and the time elapsed in simulation. Although there are some robotics simulators supported in ROS such as Gazebo and Stage, this tutorial discusses the use of a different robotics simulator named V-REP. The main goal is to show the feasibility of using other (non-standard) simulators, opening room for the engineer to choose the tools he/she finds suitable for his/her project. The hexacopter movement fuzzy control system is used to illustrate how to integrate V-REP with ROS. An overview on V-REP virtual simulation environment is presented, so that the reader can understand how a virtual hexacopter was created. 
In addition, the reader will learn how the V-REP acts as a ROS publisher/subscriber to exchange messages with roscore. V-REP uses the Lua language [28] to implement scripts that access and control the simulator. Lua is quite easy to learn, and hence, only a few necessary instructions are presented herein. Although V-REP uses Lua for its internal scripts, there are many external interfaces to other languages, such as C/C++, Java, Python, Matlab and ROS. V-REP documentation is extensive, and hence, the interested reader should refer to [29]. The installation of the V-REP simulator on Linux is simple: the reader must download the compressed installation file from Coppelia Robotics’ website [30] and expand it on a directory using the UNIX tar command. It is interesting to mention some subdirectories within V-REP directory: • scenes: V-REP provides a number of scenes as examples. The scene files extension is “ttt”. • tutorial: This directory provides all scenes used in the tutorials presented in the V-REP site [29]. • programming: This directory provides examples written in C/C++, Java, Lua and Python. In addition, it provides the ros_packages interface that are in this tutorial. 5.2 V-REP Basics When V-REP is started, a blank scenario is open automatically for using. The user can start developing a new scenario, or open a scenario created previously, or open a scenario from scenes directory. Figure 16 shows a screenshot. In order to illustrate the use of V-REP, select the menus File → Open scene and choose the scene Hexacopter.ttt provided in the directory ~ /catkin_hexac opter/src/hexaplus_tutorial/scenes. A complex object like a hexacopter is built by putting objects under a hierarchic structure. For instance, the sensors such as GPS, gyroscope and accelerometer are under the HexacopterPlus object. During the simulation execution, if the HexacopterPlus or any subpart, is moved, all parts are moved as if they are a single object. Any primitive object, e.g. 
cuboids, cylinders and spheres, can be inserted into a scene. There are some especial objects such as joints, video sensors, force sensors, and other. The special objects have some specific attributes used during simulation, e.g. position information, angle measurements, force data, etc. The sensors and rotors are made from these kinds of objects. The user can get some already available devices from “Model Browser”. For instance, there are several sensors available in the Model Browser → components → sensors, e.g. laser scanners, the Kinect sensor, Velodyne, GPS, Gyrosensor and Accelerometer. The last three sensors were used in the hexacopter model. It is important to mention that when a new robot is created, one must pay attention to the orientation between the robot body and its subparts, especially sensors. Sensors will not work properly whether there are inconsistencies in the parts orientation. The Fig. 16 Screenshot of the V-REP initial screen Fig. 17 Open the Lua script code inertial frame 3D orientation is shown on the bottom right corner. When one clicks on any object, the 3D axes of the selected object body orientation is depicted. The camera sensor is an exception. The camera orientation has a rotation of +90◦ over the Z-axis and −90◦ over the X-axis in relation to the axes of the robot body. Such a situation leads to an issue: Z-axis of the camera matches with the X-axis of the robot. Thus, the camera X-axis matches the robot Y-axis, and the camera Y-axis matches the robot Z-axis. Such a difference can be seen by clicking on vCamera and hexacopter object while pressing the shift key at the same time. In addition, one can observe that some objects have an icon to edit its Lua script code, as shown at Fig. 17. If the object does not have a piece of code, it is possible to add one by the right-clicking on the object and choosing Add → Associated child script → Non Threaded (or Threaded). 
While a simulation is running, V-REP executes the scripts associated to each object throughout the main internal loop. Script execution can be run in separate thread whether the associated script is indicated as threaded. V-REP controls the simulation elapsing time by means of time parameters. In order to execute the simulation of this tutorial, set the time configuration as “Bullet”, “Fast” at “dt = 10.0 ms”. This will ensure a suitable simulation speed. 5.3 Publishing ROS Topics V-REP provides a plugin infrastructure that allows the engineer customize the simulation tool. RosPlugin services in V-REP is an interface to support general ROS functionality. The V-REP has several mechanisms to communicate with the user code: (i) tubes are similar to the UNIX pipe mechanism; (ii) signals are similar to global variables; (iii) wireless communication simulation; (iv) persistent data blocks; (v) custom Lua functions; (vi) serial port; (vii) LuaSocket; (viii) custom libraries, etc. An easy way to communicate with ROS is creating a V-REP signal and publishing or subscribing its topic. RosPlugin publishers offer an API to setup and publish data within ROS topics. An example on how the V-REP publishes messages to roscore can be found in the Lua child object script of HexacopterPlus. Let us consider the GPS as an example. Before publishing GPS data, it is necessary to check if the ROS module has been loaded. Listing 1.19 depicts the Lua script defined in the HexacopterPlus element as shown in Fig. 17. Listing 1.19 Lua script to check whether ROS module is loaded 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 ... 
-- previous lines are omitted
-- Check if the required remote Api plugin is there:
moduleName = 0
moduleVersion = 0
index = 0
pluginNotFound = true
while moduleName do
  moduleName, moduleVersion = simGetModuleName(index)
  if (moduleName == 'Ros') then
    pluginNotFound = false
  end
  index = index + 1
end
if (pluginNotFound) then
  -- Display an error message if the plugin was not found:
  simDisplayDialog('Error', 'ROS plugin was not found.&&nSimulation will not run properly',
    sim_dlgstyle_ok, false, nil, {0.8,0,0,0,0,0}, {0.5,0,0,1,1,1})
else
  -- Ok go on.
  ...
-- remainder lines are omitted

All plugins are loaded by executing the simLoadModule function; however, ROS plugins are loaded automatically during V-REP startup, i.e. the library libv_repExtRos.so is loaded automatically. This is achieved because the shared ROS libraries were generated and copied to the V-REP directory in Sect. 4.2. The scripts in V-REP are divided into sections. At simulation time, all scripts are executed within the internal main loop. Some sections are executed once, whereas others are performed on each loop iteration. The script fragment presented in Listing 1.20 executes once per simulation. For publishing data such as the GPS readings as topics to roscore, a special Lua function is called. V-REP provides a variety of Lua functions to work with ROS and other communication channels.

Listing 1.20 Lua script fragment that enables the Euler angle publishers
-- previous lines are omitted
-- Publish the Euler angles as ROS topics
topicName = simExtROS_enablePublisher('eulerX', 1, simros_strmcmd_get_float_signal, -1, -1, 'eulerX', 0)
topicName = simExtROS_enablePublisher('eulerY', 1, simros_strmcmd_get_float_signal, -1, -1, 'eulerY', 0)
topicName = simExtROS_enablePublisher('eulerZ', 1, simros_strmcmd_get_float_signal, -1, -1, 'eulerZ', 0)
-- next lines are omitted

The simExtROS_enablePublisher function is used to enable a publisher on V-REP. Its parameters are similar to those used for publishing data by means of the ros::Publisher.advertise method:

1. The name of the target topic to which data is published, e.g. "eulerX".
2. The queue size, which has the same meaning as the queue size of a ROS publisher.
3. The stream data type parameter, which defines how the two following parameters are processed; e.g. the simros_strmcmd_get_float_signal type publishes floating-point data. There is a variety of predefined data types.
4. The auxiInt1 parameter, whose meaning depends on the data type. When this parameter is not in use, its value is −1.
5. The auxiInt2 parameter, whose semantics is similar to auxiInt1.
6. The auxString parameter. The type simros_strmcmd_get_float_signal means that a float value from a V-REP signal is being published; in that case this parameter must match the signal name. Listing 1.21 depicts the GPS script code that shows how such a V-REP signal is declared within a Lua script.
7. The publishCnt parameter, which indicates the number of times a signal is published before it goes to sleep. The value −1 starts the publisher in sleep mode, whereas values greater than zero indicate that data are published exactly publishCnt times.
The publisher wakes up when simExtROS_wakePublisher is executed. Setting this parameter to zero means the published data never sleep. Some published or subscribed data types use the parameters auxiInt1 or auxiInt2. For example, the simros_strmcmd_get_joint_state type was used to get the joint state; it uses auxiInt1 to indicate the joint handle. Another type is simros_strmcmd_get_object_pose, which is used to enable data streaming of an object pose. This type uses auxiInt1 to identify the V-REP object handle, while auxiInt2 indicates the reference frame from which the pose is obtained. For more information please see [31]. Listing 1.21 presents a code fragment of the virtual GPS script. These lines create three distinct signals related to the object position information. The objectAbsolutePosition variable is a Lua vector whose values are calculated earlier in the script.

Listing 1.21 Fragment of the Lua script of the virtual GPS

-- previous lines are omitted
simSetFloatSignal('gpsX', objectAbsolutePosition[1])
simSetFloatSignal('gpsY', objectAbsolutePosition[2])
simSetFloatSignal('gpsZ', objectAbsolutePosition[3])
-- next lines are omitted

5.4 Subscribing to ROS Topics

A ROS node (e.g. the hexacopter main controller) may subscribe to ROS topics in order to receive data published by other nodes, e.g. by a sensor in V-REP. Likewise, a virtual object can be controlled during simulation by subscribing to ROS topics within V-REP scripts. For instance, the rotors of the virtual hexacopter must receive the throttle signals published by the ROS node created in the main() function in the rosvrep_controller.cpp file (see Sect. 4.5). Listing 1.22 shows the fragment of the HexacopterPlus object script that enables V-REP to subscribe to topics and receive the ROS messages.
Listing 1.22 Fragment of the Lua script of the HexacopterPlus object

-- previous lines are omitted
-- Rotors Subscribers
simExtROS_enableSubscriber('propFRONT', 1, simros_strmcmd_set_float_signal, -1, -1, 'propFRONT')
simExtROS_enableSubscriber('Yaw', 1, simros_strmcmd_set_float_signal, -1, -1, 'Yaw')
-- next lines are omitted

The parameters of the simExtROS_enableSubscriber function are similar to those of the simExtROS_enablePublisher function (see Sect. 5.3); however, the data handling is specified differently. The simros_strmcmd_set_float_signal parameter indicates that V-REP subscribes to the topic, while simros_strmcmd_get_float_signal indicates that V-REP publishes to the topic. The last parameter is a signal that can be used in a Lua script associated with any object of the V-REP virtual environment. For instance, the propFRONT signal is used in the script of the propeller_jointFRONT object by passing simGetFloatSignal in the parameter list of the simSetJointTargetVelocity function, as shown in Listing 1.23. A V-REP signal is a global variable. When simExtROS_enableSubscriber is executed, a value is assigned to that global variable; if the global variable does not exist, simExtROS_enableSubscriber creates it.

Listing 1.23 Fragment of the Lua script of the HexacopterPlus object

-- previous lines are omitted
simSetJointTargetVelocity(simGetObjectHandle('propeller_jointFRONT'),
  simGetFloatSignal('propFRONT') * -200)
-- next lines are omitted

5.5 Publishing Images from V-REP

Many robotic applications demand some sort of video processing in order to perform advanced tasks. Cameras are commonly used in computer vision tasks, e.g.
for collision avoidance while the robot is moving, or for mapping and navigating through the environment [32]. Thus, it is important to provide means for video processing during the simulation phase of a robot design. This section presents how to set up a virtual video camera in V-REP and how to stream the captured video to a ROS node. Many distinct data types can be exchanged within the topics published or subscribed between ROS and V-REP, including images from the virtual vision sensor. A ROS node receives an image that can be processed using the OpenCV API [33]. Although the OpenCV library is not part of ROS, the vision_opencv package [34] provides an interface between ROS and the OpenCV library. This package was used in the camera application6 implemented in rosvrep_camera.cpp. Although this tutorial does not discuss video processing, we show how to set up a ROS topic and stream the video captured within the V-REP simulated environment. For that, the HexacopterPlus has an object named vCamera attached to its frame. The vCamera object is a vision sensor that streams images captured during simulation. Using the simros_strmcmd_get_vision_sensor_image type, V-REP is able to send images to a ROS node. Listing 1.24 depicts a fragment of the Lua script of the HexacopterPlus object. The video streamed from the virtual camera can be seen in the camera application.

Listing 1.24 Fragment of the Lua script of the HexacopterPlus object

-- previous lines are omitted
vCameraHandle = simGetObjectHandle('vCamera')
topicName = simExtROS_enablePublisher('vCamera', 1,
  simros_strmcmd_get_vision_sensor_image, vCameraHandle, 0, '')
-- next lines are omitted

5.6 Running the Sample Scenarios

Our package provides two scenes, located in the scenes subdirectory. The first scene is modeled in the Hexacopter.ttt file. It was created to illustrate how the hexacopter was built in V-REP.
The second scene is modeled in rosHexaPlus_scene.ttt. This is a more elaborate scene whose environment features trees and textures. Its aim is to illustrate the hexacopter movements, and hence it is the scene used in the rest of this section. Before starting the scene execution, the reader must ensure that roscore and V-REP are running (see Sect. 4.2). In V-REP, open the rosHexaPlus_scene.ttt file using the menu command File → Open scene. The reader must start the simulation either by choosing the menu option Simulation → Start Simulation or by clicking on Start/resume simulation in the toolbar. Go to the terminal that is executing the rosvrep_panel application as shown in Fig. 18. The first set of coordinates must be inserted in the order presented in Table 2, aiming to command the hexacopter to fly around the environment.

6 This application was created for debugging purposes and it is not discussed in this chapter.

Fig. 18 A screenshot of the test

Table 2 First test: hexacopter flying around the environment
Command sequence   Target X   Target Y   Heading X
1                  −15        15         −17
2                  −7         10         −17
3                  7          10         −3
4                  2          −3.5       −3
5                  2          −3.5       −3

It is important to notice that the hexacopter carries a free payload. Insert the target position by using the "s" option and then the heading direction using the "y" option. The next target position should be sent only after the hexacopter reaches the position indicated in the previous command. The reader can see the execution using these coordinates in the YouTube video https://youtu.be/Pvve5IFz4e4. This video shows a long-distance movement. The second simulation shows a flight in which the hexacopter moves to short-distance target positions. Figure 19 shows a screenshot on which one can see the hexacopter behavior carrying a free payload. In this second test, the reader should insert the commands shown in Table 3. The video of this test can be seen in https://youtu.be/7n8tThctAns.
Fig. 19 The V-REP screenshot of the short-distance flight

Table 3 Second test: hexacopter flying short distances
Command sequence   Target X
1                  −20
2                  −18
3                  −23

6 Final Remarks

This chapter presented a tutorial on how to implement a control system based on fuzzy logic, using the movement control system of a hexacopter as a case study. A ROS package that includes a fuzzy library was presented. By using such a package, we discussed how to integrate the commercial robotics simulation environment named V-REP with a fuzzy control system implemented using the ROS infrastructure. Instructions on how to perform a simulation with V-REP were presented. Therefore, this tutorial provides additional knowledge on using different tools for designing ROS-based systems. This tutorial can be used as a starting point for further experiments. The reader can modify or improve the proposed fuzzy control system by changing the ".fz" files; there is no need to modify the main controller implementation in the rosvrep_controller.cpp file. As a suggestion to further improve the skills on using the proposed fuzzy package and V-REP, the reader could create another ROS node acting as a mission controller that automatically sends a set of target positions.

References

1. Koslosky, E., et al. Hexacopter tutorial package. https://github.com/ekosky/hexaplus-rostutorial.git. Accessed Nov 2016.
2. Bipin, K., V. Duggal, and K.M. Krishna. 2015. Autonomous navigation of generic monocular quadcopter in natural environment. In 2015 IEEE International Conference on Robotics and Automation (ICRA), 1063–1070.
3. Haque, M.R., M. Muhammad, D. Swarnaker, and M. Arifuzzaman. 2014. Autonomous quadcopter for product home delivery. In 2014 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), 1–5.
4. Leishman, R., J. Macdonald, T. McLain, and R. Beard. 2012. Relative navigation and control of a hexacopter.
In 2012 IEEE International Conference on Robotics and Automation (ICRA), 4937–4942.
5. Ahmed, O.A., M. Latief, M.A. Ali, and R. Akmeliawati. 2015. Stabilization and control of autonomous hexacopter via visual-servoing and cascaded-proportional and derivative (PD) controllers. In 2015 6th International Conference on Automation, Robotics and Applications (ICARA), 542–549.
6. Alaimo, A., V. Artale, C.L.R. Milazzo, and A. Ricciardello. 2014. PID controller applied to hexacopter flight. Journal of Intelligent & Robotic Systems 73 (1–4): 261–270.
7. Ołdziej, D., and Z. Gosiewski. 2013. Modelling of dynamic and control of six-rotor autonomous unmanned aerial vehicle. Solid State Phenomena 198: 220–225.
8. Collotta, M., G. Pau, and R. Caponetto. 2014. A real-time system based on a neural network model to control hexacopter trajectories. In 2014 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM), 222–227.
9. Artale, V., C.L. Milazzo, C. Orlando, and A. Ricciardello. 2015. Genetic algorithm applied to the stabilization control of a hexarotor. In Proceedings of the International Conference on Numerical Analysis and Applied Mathematics 2014 (ICNAAM-2014), 222–227.
10. Bacik, J., D. Perdukova, and P. Fedor. 2015. Design of fuzzy controller for hexacopter position control. Artificial Intelligence Perspectives and Applications, 193–202. Berlin: Springer.
11. Koslosky, E., M.A. Wehrmeister, J.A. Fabro, and A.S. Oliveira. 2016. On using fuzzy logic to control a simulated hexacopter carrying an attached pendulum. In Designing with Computational Intelligence, vol. 664, ed. N. Nedjah, H.S. Lopes, and L.M. Mourelle. Studies in Computational Intelligence. Berlin: Springer, 1–32. Expected publication in Dec. 2016.
12. Open Source Robotics Foundation: ROS basic tutorials. http://wiki.ros.org/ROS/Tutorials. Accessed March 2016.
13. Coppelia Robotics: V-REP: Virtual robot experimentation platform. http://www.coppeliarobotics.com.
Accessed March 2016.
14. Coppelia Robotics: V-REP bubblerob tutorial. http://www.coppeliarobotics.com/helpFiles/en/bubbleRobTutorial.htm. Accessed March 2016.
15. Coppelia Robotics: V-REP tutorial for ROS indigo integration. http://www.coppeliarobotics.com/helpFiles/en/rosTutorialIndigo.htm. Accessed March 2016.
16. Yoshida, K., I. Kawanishi, and H. Kawabe. 1997. Stabilizing control for a single pendulum by moving the center of gravity: theory and experiment. In American Control Conference, 1997. Proceedings of the 1997, vol. 5, 3405–3410.
17. Passino, K.M., and S. Yurkvich. 1998. Fuzzy Control. Reading: Addison-Wesley.
18. Hwang, G.C., and S.C. Lin. 1992. A stability approach to fuzzy control design for nonlinear systems. Fuzzy Sets and Systems 48 (3): 279–287.
19. Pedro, J.O., and C. Mathe. 2015. Nonlinear direct adaptive control of quadrotor UAV using fuzzy logic technique. In 2015 10th Asian Control Conference (ASCC), 1–6.
20. Pedrycz, W., and F. Gomide. 2007. Rule-Based Fuzzy Models, 276–334. New York: Wiley-IEEE Press.
21. Open Source Robotics Foundation: ROS remapping. http://wiki.ros.org/Remapping%20Arguments. Accessed March 2016.
22. Chak, Y.C., and R. Varatharajoo. 2014. A heuristic cascading fuzzy logic approach to reactive navigation for UAV. IIUM Engineering Journal, Selangor - Malaysia 15 (2).
23. Sureshkumar, V., and K. Cohen. Autonomous control of a quadrotor UAV using fuzzy logic. Unisys Digita - Journal of Unmanned System Technology, Cincinnati, Ohio.
24. Eusebiu Marcu, C.B. UAV fuzzy logic control system stability analysis in the sense of Lyapunov. UPB Scientific Bulletin, Series D 76 (2).
25. Abeywardena, D.M.W., L.A.K. Amaratunga, S.A.A. Shakoor, and S.R. Munasinghe. 2009. A velocity feedback fuzzy logic controller for stable hovering of a quad rotor UAV. In 2009 International Conference on Industrial and Information Systems (ICIIS), 558–562.
26. Gomez, J.F., and M. Jamshidi. 2011. Fuzzy adaptive control for a UAV.
Journal of Intelligent & Robotic Systems 62 (2): 271–293.
27. Limnaios, G., and N. Tsourveloudis. 2012. Fuzzy logic controller for a mini coaxial indoor helicopter. Journal of Intelligent & Robotic Systems 65 (1): 187–201.
28. Ierusalimschy, R., W. Celes, and L.H. de Figueiredo. 2016. Lua documentation. https://www.lua.org/. Accessed March 2016.
29. Coppelia Robotics: V-REP help. http://www.coppeliarobotics.com/helpFiles/. Accessed March 2016.
30. Coppelia Robotics: V-REP download page. http://www.coppeliarobotics.com/downloads.html. Accessed March 2016.
31. Coppelia Robotics: ROS publisher types for V-REP. http://www.coppeliarobotics.com/helpFiles/en/rosPublisherTypes.htm. Accessed March 2016.
32. Steder, B., G. Grisetti, C. Stachniss, and W. Burgard. 2008. Visual SLAM for flying vehicles. IEEE Transactions on Robotics 24 (5): 1088–1093.
33. Itseez: OpenCV - Open Source Computer Vision Library. http://opencv.org/. Accessed Nov 2016.
34. Mihelich, P., and J. Bowman. 2016. vision_opencv documentation. Accessed March 2016.

Author Biographies

Emanoel Koslosky is a Master's degree student in applied computing and embedded systems. As a student, he took classes on Mobile Robotics, Image Processing, Hardware Architecture for Embedded Systems, and Real-Time Operating Systems. As a professional, he received the certifications Oracle Real Application Clusters 11g Certified Implementation Specialist, Oracle Database 10g Administrator Certified Professional (OCP), and Oracle8i Database Administrator Certified Professional (OCP). He has worked professionally as a programmer and developer since 1988, using languages such as C/C++, Oracle Pro*C/C++, Pro*COBOL and Java, and Oracle tools such as Oracle Designer and Oracle Developer. As a database administrator he worked with high-availability and scalability environments, and he also worked as a system administrator of Oracle e-Business Suite (EBS).

Marco Aurélio Wehrmeister received the Ph.D.
degree in Computer Science from the Federal University of Rio Grande do Sul (Brazil) and the University of Paderborn (Germany) in 2009 (double degree). In 2009, he worked as Lecturer and Postdoctoral Researcher for the Federal University of Santa Catarina (Brazil). From 2010 to 2013, he worked as a tenure-track Professor with the Department of Computer Science of the Santa Catarina State University (UDESC, Brazil). Since 2013, he has been working as a tenure-track Professor with the Department of Informatics of the Federal University of Technology - Paraná (UTFPR, Brazil). From 2014 to 2016, he was the Head of the M.Sc. course on Applied Computing of UTFPR. In 2015, Prof. Dr. Wehrmeister was a Visiting Fellow (short stay) with the School of Electronic, Electrical and Systems Engineering of the University of Birmingham (UK). Prof. Dr. Wehrmeister's thesis was selected by the Brazilian Computer Society as one of the six best theses on Computer Science in 2009. He is a member of the special commission on Computing Systems Engineering of the Brazilian Computer Society. Since 2015, he has been a member of the IFIP Working Group 10.2 on Embedded Systems. His research interests are in the areas of embedded and real-time systems, aerial robots, model-driven engineering, and hardware/software engineering for embedded systems and robotics. Prof. Dr. Wehrmeister has co-authored more than 70 papers in international peer-reviewed journals and conference proceedings. He has been involved in various research projects funded by Brazilian R&D agencies.

Andre Schneider de Oliveira holds a degree in Computer Engineering from the University of Vale do Itajaí (2004), a master's degree in Mechanical Engineering from the Federal University of Santa Catarina (2007), and a Doctorate in Automation and Systems Engineering from the Federal University of Santa Catarina (2011). He is currently Assistant Professor at the Federal Technological University of Paraná - Curitiba campus.
He has carried out research in Electrical Engineering with emphasis on Robotics, Mechatronics and Automation, working mainly with the following topics: navigation and positioning of mobile robots; autonomous and intelligent systems; perception and environmental identification; and control systems for navigation.

João Alberto Fabro is an Associate Professor at the Federal University of Technology - Paraná (UTFPR), where he has been working since 2008. From 1998 to 2007, he was with the State University of West-Paraná (UNIOESTE). He has an undergraduate degree in Informatics from the Federal University of Paraná (UFPR, 1994), a Master's degree in Computing and Electrical Engineering from Campinas State University (UNICAMP, 1996), a Ph.D. degree in Electrical Engineering and Industrial Informatics (CPGEI) from UTFPR (2003), and recently worked as a Post-Doc at the Faculty of Engineering, University of Porto, Portugal (FEUP, 2014). He has experience in Computer Science, especially Computational Intelligence, actively researching the following subjects: Computational Intelligence (neural networks, evolutionary computing and fuzzy systems) and Autonomous Mobile Robotics. Since 2009, he has participated in several robotics competitions in Brazil, Latin America and the World RoboCup, both with soccer robots and service robots.

Flying Multiple UAVs Using ROS

Wolfgang Hönig and Nora Ayanian

Abstract This tutorial chapter will teach readers how to use ROS to fly a small quadcopter both individually and as a group. We will discuss the hardware platform, the Bitcraze Crazyflie 2.0, which is well suited for swarm robotics due to its small size and weight. After first introducing the crazyflie_ros stack and its use on an individual robot, we will extend scenarios of hovering and waypoint following from a single robot to the more complex multi-UAV case. Readers will gain insight into physical challenges, such as radio interference, and how to solve them in practice.
Ultimately, this chapter will prepare readers not only to use the stack as-is, but also to extend it or to develop their own innovations on other robot platforms.

Keywords ROS · UAV · Multi-Robot-System · Crazyflie · Swarm

1 Introduction

Unmanned aerial vehicles (UAVs) such as the AscTec Pelican, Parrot AR.Drone, and Erle-Copter have a long tradition of being controlled with ROS. As a result, there are many ROS packages devoted to controlling such UAVs as individuals.1 However, using multiple UAVs creates entirely new challenges that such packages cannot address, including, but not limited to, the physical space required to operate the robots, the interference of sensors and network communication, and safety requirements. Multiple UAVs have been used in recent research [1–5], but such research can be overly complicated and tedious due to the lack of tutorials and books. In fact, even with packages that can support multiple UAVs, documentation focuses on the single UAV case, not considering the challenges that occur once multiple UAVs are used. Research publications often skip implementation details, making it difficult to replicate the results. Papers about specialized setups exist [6, 7], but rely on expensive or commercially unavailable solutions. This chapter will attempt to fill this gap in documentation.

1 E.g., http://wiki.ros.org/ardrone_autonomy, http://wiki.ros.org/mavros, http://wiki.ros.org/asctec_mav_framework.

W. Hönig (B) · N. Ayanian
Department of Computer Science, University of Southern California, Los Angeles, CA, USA
email: [email protected]
URL: http://act.usc.edu
N. Ayanian
e-mail: [email protected]

© Springer International Publishing AG 2017
A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_3
In particular, we try to provide a step-by-step guide on how to reproduce results we presented in an earlier research paper [3], which used up to six UAVs.2 We focus on a small quadcopter — the Bitcraze Crazyflie 2.0 — and how to use it with the crazyflie_ros stack, particularly as part of a group of 2 or more UAVs. We will assume that an external position tracking system, such as a motion capture system, is available because the Crazyflie is not able to localize itself with just onboard sensing. We will discuss the physical setup and how to support a single human pilot. Each step will start with the single UAV case and then extend to the more challenging multi-UAV case. We begin with an introduction to the target platform, including the setup of the vendor's software and the crazyflie_ros stack. We then show teleoperation of multiple Crazyflies using joysticks. The usage of a motion capture system allows us to autonomously hover multiple Crazyflies. We then extend this to multiple UAVs following waypoints. The chapter will also contain important insights into the crazyflie_ros stack, allowing the user to understand the design in depth. This can be helpful for users interested in implementing other multi-UAV projects using different hardware or adding extensions to the existing stack. Everything discussed here has been tested on Ubuntu 14.04 using ROS Indigo. The stack and discussed software also work with ROS Jade (Ubuntu 14.04) and ROS Kinetic (Ubuntu 16.04).

2 Target Platform

As our target platform we use the Bitcraze Crazyflie 2.0 platform, an open-source, open-hardware nano quadcopter that targets hobbyists and researchers alike. Its small size (92 mm diagonal rotor-to-rotor) and weight (29 g) make it ideal for indoor swarming applications. Additionally, its size allows users to operate the UAVs safely even with humans or other robots around.
The low inertia causes only a few parts to break after a crash — the authors had several crashes from a height of 3 m onto a concrete floor with damage only to cheaply replaceable plastic parts. A Crazyflie can communicate with a phone or PC using Bluetooth. Additionally, a custom USB dongle called Crazyradio PA, or Crazyradio for short, allows lower-latency communication. The Crazyflie 2.0 and Crazyradio PA are shown in Fig. 1. A block diagram of the Crazyflie's architecture is shown in Fig. 2. The communication system is used to send the setpoint, consisting of thrust and attitude, to tweak internal parameters, and to stream telemetry data, such as sensor readings. It is also possible to update the onboard software wirelessly. The Crazyflie has a 9-axis inertial measurement unit (IMU) onboard, consisting of gyroscope, accelerometer, and magnetometer. Moreover, a pressure sensor can be used to estimate the height. Most of the processing is done on the main microcontroller (STM32). It runs FreeRTOS as its operating system, and state estimation and attitude control are executed at 250 Hz. A second microcontroller (nRF51) is used for the wireless communication and as a power manager. The two microcontrollers can exchange data over the syslink, which is a protocol using UART as a physical interface. An extension port permits the addition of extra hardware. The official extensions include an inductive charger, LED headlights, and a buzzer. Finally, it is possible to use the platform on a bigger frame if higher payload capabilities are desired.

2 Video available at http://youtu.be/px9iHkA0nOI.

Fig. 1 Our target platform Bitcraze Crazyflie 2.0 quadcopter (left), which can be controlled from a PC using a custom USB dongle called Crazyradio PA (right). Image credit: Bitcraze AB

Fig. 2 Components and architecture of the Crazyflie 2.0 quadcopter. Based on images by Bitcraze AB
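To make the setpoint idea concrete, the following Python sketch packs a thrust/attitude setpoint into a binary payload of the kind sent over the radio. The field layout (three little-endian floats for roll, pitch and yaw rate, followed by an unsigned 16-bit thrust) mirrors the commander packet of the Crazyflie firmware, but treat it as an assumption and consult the firmware sources for the authoritative definition:

```python
import struct

def pack_setpoint(roll, pitch, yawrate, thrust):
    """Pack a setpoint into a binary payload.

    Assumed layout: little-endian floats roll, pitch, yawrate
    followed by an unsigned 16-bit thrust value.
    """
    if not 0 <= thrust <= 0xFFFF:
        raise ValueError('thrust must fit into an unsigned 16-bit integer')
    return struct.pack('<fffH', roll, pitch, yawrate, thrust)
```

In the crazyflie_ros stack this low-level packing is handled inside the driver, so users normally never build such payloads by hand.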
Extensions are called "decks" and are also used by the community to add additional capabilities.3 The schematics as well as all firmware are publicly available.4 The technical specifications are as follows:

• STM32F405: main microcontroller, used for state estimation, control, and handling of extensions. We will call this the STM32. (Cortex-M4, 168 MHz, 192 kB SRAM, 1 MB flash).
• nRF51822: radio and power management microcontroller. We will call this the nRF51. (Cortex-M0, 32 MHz, 16 kB SRAM, 128 kB flash).
• MPU-9250: 9-axis inertial measurement unit.
• LPS25H: pressure sensor.
• 8 kB EEPROM.
• µUSB: charging and wired communication.
• Expansion port (I2C, UART, SPI, GPIO).
• Debug port for the STM32. An optional debug-kit can be used to convert to a standard JTAG connector and to debug the nRF51 as well.

3 https://www.hackster.io/bitcraze/products/crazyflie-2-0.

The onboard sensors are sufficient to stabilize the attitude, but not the position. In particular, external feedback is required to fly to predefined positions. By default, this is the human who teleoperates the quadcopter either using a joystick connected to a PC, or a phone. In this chapter, we will use a motion-capture system for fully autonomous flights. The vendor provides an SDK written in Python which runs on Windows, Linux, and Mac. It can be used to teleoperate a single Crazyflie using a joystick, to plot sensor data in real time, and to write custom applications. We will use ROS in the remainder of this chapter to control the Crazyflie; however, ROS is only used on the PC controlling one or more Crazyflies. The ROS driver sends the data to the different quadcopters using the protocol defined in the Crazyflie firmware. The Crazyflie has been featured in a number of research papers. The mathematical model and system identification of important parameters, such as the inertia matrix, have been discussed in [8, 9].
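The need for external feedback can be illustrated with a deliberately simplified outer loop: a purely proportional controller that turns a position error (e.g. from motion capture) into attitude and thrust setpoints, which the onboard attitude controller can then stabilize. The gains, axis mapping, and hover thrust below are invented for the example and are not the controller used by crazyflie_ros:

```python
def position_to_setpoint(pos, target, kp_xy=10.0, kp_z=5000.0, hover_thrust=36000):
    """Toy proportional outer loop: position error -> (roll, pitch, thrust).

    All gains and the hover thrust are illustrative assumptions; a real
    controller would at least add integral and derivative terms.
    """
    ex, ey, ez = (t - p for t, p in zip(target, pos))
    pitch = kp_xy * ex    # forward error -> pitch command
    roll = -kp_xy * ey    # lateral error -> roll command
    thrust = int(hover_thrust + kp_z * ez)
    # Clamp thrust to the 16-bit range accepted by the firmware.
    thrust = max(0, min(0xFFFF, thrust))
    return roll, pitch, thrust
```

At the hover point the commands reduce to zero attitude and the hover thrust, and large altitude errors saturate at the 16-bit thrust limit.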
An updated list of research papers and applications can be found on the official webpage.5

4 https://github.com/bitcraze/.
5 https://www.bitcraze.io/research/.
6 http://wiki.ros.org/indigo/Installation/Ubuntu.

3 Setup

In this section we will describe how to set up the Crazyflie software. We cover both the official Python SDK and how to install the crazyflie_ros stack. The former is useful for reconfiguring the Crazyflie as well as for troubleshooting, while the latter will allow us to use multiple Crazyflies with ROS. We assume a PC with Ubuntu 14.04 as operating system, which has ROS Indigo (desktop-full) installed.6 It is better to install Ubuntu directly on a PC rather than using a virtual machine for two reasons: First, you will be using graphical tools, such as rviz, which rely on OpenGL and therefore do not perform as well on a virtual machine as when natively installed. Second, the communication using the Crazyradio would have additional latency in a virtual machine since the USB signals would go through the host system first. This might cause less stable control. In particular, we will follow these steps:

1. Configure the PC such that the Crazyradio will work for any user.
2. Install the official software package to test the Crazyflie.
3. Update the Crazyflie's onboard software to the latest version to ensure that it will work with the ROS package.
4. Install the crazyflie_ros package and run a first simple connection test.

The later sections in this chapter assume that everything is set up as outlined here to perform higher-level tasks.

3.1 Setting PC Permissions

By default, the Crazyradio will only work for a user with superuser rights when plugged in to a PC. This is not only a security concern but also makes it harder to use with ROS.
In order to use it without sudo, we first add a group (plugdev) and then add ourselves as a member of that group:

$ sudo groupadd plugdev
$ sudo usermod -a -G plugdev $USER

Now, we create a udev rule, setting the permissions such that anyone who is a member of our newly created group can access the Crazyradio. We create a new rules file using gedit:

$ sudo gedit /etc/udev/rules.d/99-crazyradio.rules

and add the following text to it:

# Crazyradio (normal operation)
SUBSYSTEM=="usb", ATTRS{idVendor}=="1915", ATTRS{idProduct}=="7777", MODE="0664", GROUP="plugdev"
# Bootloader
SUBSYSTEM=="usb", ATTRS{idVendor}=="1915", ATTRS{idProduct}=="0101", MODE="0664", GROUP="plugdev"

The second entry is useful for firmware updates of the Crazyradio. In order to use the Crazyflie when directly connected via USB, you need to create another file named 99-crazyflie.rules in the same folder, with the following content:

SUBSYSTEM=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="5740", MODE="0664", GROUP="plugdev"

Finally, we reload the udev rules:

$ sudo udevadm control --reload-rules
$ sudo udevadm trigger

You will need to log out and log in again in order to become a member of the plugdev group. You can then plug in your Crazyradio (and follow the instructions in the next section to actually use it).

3.2 Bitcraze Crazyflie PC Client

The Bitcraze SDK is composed of two parts. The first is crazyflie-lib-python, a Python library to control the Crazyflie without any graphical user interface. The second is crazyflie-clients-python, which makes use of that library and adds a graphical user interface.
We start by installing the required dependencies:

$ sudo apt-get install git python3 python3-pip python3-pyqt4 python3-numpy python3-zmq
$ sudo pip3 install pyusb==1.0.0b2
$ sudo pip3 install pyqtgraph appdirs

To install crazyflie-lib-python, use the following commands:

$ mkdir ~/crazyflie
$ cd ~/crazyflie
$ git clone https://github.com/bitcraze/crazyflie-lib-python.git
$ cd crazyflie-lib-python
$ pip3 install --user -e .

Here, the Python package manager pip is used to install the library only for the current user. The library uses Python 3; in contrast, ROS Indigo, Jade, and Kinetic use Python 2.

Similarly, crazyflie-clients-python can be installed using the following commands:

$ cd ~/crazyflie
$ git clone https://github.com/bitcraze/crazyflie-clients-python.git
$ cd crazyflie-clients-python
$ pip3 install --user -e .

To start the client, execute the following:

$ cd ~/crazyflie/crazyflie-clients-python
$ python3 bin/cfclient

You should see the graphical user interface, as shown in Fig. 3.

Fig. 3 Screenshot of the Bitcraze Crazyflie PC Client

Versions Might Change
Since the Crazyflie software is under active development, the installation procedure and required dependencies might change in the future. You can use the exact same versions as used in this chapter by running the following commands after git clone. For crazyflie-lib-python:

$ git checkout a0397675376a57adf4e7c911f43df885a45690d1

and for crazyflie-clients-python:

$ git checkout 2dff614df756f1e814538fbe78fe7929779a9846

If you want to use the latest version, please follow the instructions provided in the README.md file in the respective repositories.

3.3 Firmware

Everything described in this chapter works with the Crazyflie's default firmware. You can obtain the latest compiled firmware from the repository;7 this chapter was tested with the 2016.02 release. Make sure that you update the firmware for both the STM32 and nRF51 chips by downloading the zip-file.
Execute the following steps to update both firmwares:

1. Start the Bitcraze Crazyflie PC Client.
2. In the menu select "Connect"/"Bootloader."
3. Turn your Crazyflie off by pressing the power button. Turn it back on by pressing the power button for 3 seconds. The blue tail lights should start blinking: the Crazyflie is now waiting for a new firmware.
4. Click "Initiate bootloader cold boot." The status should switch to "Connected to bootloader."
5. Select the downloaded crazyflie-2016.02.zip and press "Program." Click the "Restart in firmware mode" button after it is finished.

If you prefer compiling the firmware yourself, please follow the instructions in the respective repositories.8

7 https://github.com/bitcraze/crazyflie-release/releases.
8 https://github.com/bitcraze/crazyflie-firmware, https://github.com/bitcraze/crazyflie2-nrf-firmware.

3.4 Crazyflie ROS Stack

The crazyflie_ros stack contains the driver, a position controller, and various examples. We will explore the different possibilities later in this chapter and concentrate on the initial setup first. We first create a new ROS workspace:

$ mkdir -p ~/crazyflie_ws/src
$ cd ~/crazyflie_ws/src
$ catkin_init_workspace

Next, we add the required packages to the workspace and build them:

$ git clone https://github.com/whoenig/crazyflie_ros.git
$ cd ~/crazyflie_ws
$ catkin_make

In order to use your workspace, add the following line to your ~/.bashrc:

source ~/crazyflie_ws/devel/setup.bash

This will ensure that all ROS-related commands will find the packages in all terminals. To update your current terminal window, use source ~/.bashrc, which reloads the file. You can test your setup by typing:

$ rosrun crazyflie_tools scan

This should print the uniform resource identifier (URI) of any Crazyflie found in range. For example, the output might look like this:

Configured Dongle with version 0.54
radio://0/100/2M

In this case, the URI of your Crazyflie is radio://0/100/2M. Each URI has several components.
Here, the Crazyradio is used (radio). Since you might have multiple radios in use, you can specify a zero-based index of the device to use (0). The next number (100) specifies the channel, which is a number between 0 and 125. Finally, the datarate (2M; one of 250K, 1M, 2M) specifies the speed to use in bits per second. There is an optional address component as well, which we will discuss in Sect. 5.1.

Versions
As before, the instructions might be different in future versions. Use the following to get the exact same version of the crazyflie_ros stack:

$ git checkout 34beecd2a8d7ab02378bcdfcb9adf5a7a0eb50ea

Install the following additional dependency in order to use the teleoperation:

$ sudo apt-get install ros-indigo-hector-quadrotor-teleop

If you are using ROS Jade or Kinetic, you will need to add the package to your workspace manually.

4 Teleoperation of a Single Quadcopter

In this section we will use ROS to control a single Crazyflie using a joystick. Moreover, we will gain access to the internal sensors and show how to visualize the data using rviz and rqt_plot. This is a useful first step towards understanding the cmd_vel interface of the crazyflie_ros stack. Later, we will build on this knowledge to let the Crazyflie fly autonomously. Furthermore, teleoperation is useful for debugging; for example, it can be used to verify that there is no mechanical hardware defect.

In the first subsection, we assume that you have access to a specific joystick, the Microsoft XBox360 controller. We show how to connect to the Crazyflie using ROS and how to eventually fly it manually. The second subsection relaxes this assumption by discussing the steps required to add support for another joystick.

4.1 Using an XBox360 Controller

For this example, we assume that you have an XBox360 controller plugged into your machine. We will show how to use different joysticks later in this section.
Use the following command to run the teleoperation example:

$ roslaunch crazyflie_demo teleop_xbox360.launch uri:=radio://0/100/2M

Make sure that you adjust the URI based on your Crazyflie. The launch file teleop_xbox360.launch has the following structure:

teleop_xbox360.launch (listing omitted)

In line 4 the crazyflie_server is launched, which accesses the Crazyradio to communicate with the Crazyflie. Lines 5–16 contain information about the Crazyflie we want to control. First, the Crazyflie is added with a specified URI. Second, the joy_node is launched to create the joy topic. This particular joystick is configured by including xbox360.launch; this file launches a hector_quadrotor_teleop node with the appropriate settings for the XBox360 controller. Furthermore, controller.py is started; this maps additional joystick buttons to Crazyflie-specific behaviors. For example, the red button on your controller will cause the Crazyflie to turn off all propellers (emergency mode). Finally, lines 17–19 start rviz and two instances of rqt_plot for visualization.

Figure 4 shows a screenshot of rviz as it visualizes the data from the inertial measurement unit as streamed from the Crazyflie at 100 Hz. The other two rqt_plot instances show the current battery voltage and radio signal strength indicator (RSSI), respectively. If you tilt your Crazyflie, you should instantly see the IMU arrow changing in rviz.

You can now use the joystick to fly. The default uses the left stick for thrust (up/down) and yaw (left/right) and the right stick for pitch (up/down) and roll (left/right). Also, the red B-button can be used to put the ROS driver in emergency mode. In that case, your Crazyflie will immediately turn off its engines (and if it was flying it will fall to the ground).

Fig. 4 Screenshot of rviz showing the IMU data
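To make the cmd_vel interface more concrete: the teleop node turns normalized joystick axes into an attitude-and-thrust command carried by a geometry_msgs/Twist message. The following is a rough, self-contained sketch of such a mapping; the axis order, units, and scaling constants are illustrative assumptions, not the exact values used by the stack:

```python
def joy_to_cmd_vel(axes, max_angle_deg=30.0, max_yaw_rate_deg=200.0,
                   max_thrust=60000.0):
    """Map normalized joystick axes (each in [-1, 1]) to a cmd_vel-style
    command. In this sketch, linear.x/y carry pitch/roll angles, angular.z
    the yaw rate, and linear.z a PWM-like thrust value in [0, max_thrust].
    The axis order (roll, pitch, yaw, thrust) and all limits are
    illustrative assumptions."""
    roll, pitch, yaw, thrust = axes
    return {
        "linear.x": pitch * max_angle_deg,
        "linear.y": roll * max_angle_deg,
        "angular.z": yaw * max_yaw_rate_deg,
        # The thrust stick idles at -1; rescale [-1, 1] -> [0, max_thrust].
        "linear.z": (thrust + 1.0) / 2.0 * max_thrust,
    }
```

In a real teleop node, the resulting values would be copied into a Twist message and published on cmd_vel; inverting an axis (as discussed for z_axis below) simply corresponds to negating the joystick value before scaling.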
4.2 Add Support for Another Controller

Support for another joystick can easily be added, as long as it is recognized as a joystick by the operating system. The major difference between joysticks is the mapping between the different axes and buttons of a joystick and the desired functionality. In the following steps we first find the desired mapping and then configure the crazyflie_ros stack accordingly.

1. Attach your joystick. This will create a new device file, e.g., /dev/input/js0. You can use dmesg to find details about which device file was used in the system log.
2. Run the following command in order to execute the joy_node:

$ rosrun joy joy_node _dev:=/dev/input/js0

3. In another terminal, execute:

$ rostopic echo /joy

This will print the joystick messages published by joy_node. Move your joystick to find the desired axes mapping. For example, you might increase the thrust on your joystick and see that the second number of the axes array decreases.
4. Change the axis mapping in xbox360.launch (or create a new file) by updating the parameters x_axis, y_axis, z_axis, and yaw_axis accordingly. You can use negative axis values to indicate that this axis should be inverted. For example, for the thrust change observed above, you would choose −2 as the value for z_axis.
5. Update the button mapping in crazyflie_demo/scripts/controller.py to change which button triggers high-level behavior such as emergency.

The PS3 controller is already part of the crazyflie_ros stack and its mapping was found in the same way as described above.

5 Teleoperation of Multiple UAVs

This section discusses the initial setup: how to assign unique addresses to each UAV, how to communicate using fewer radios than UAVs, and how to find good communication channels to decrease interference between UAVs as well as between UAVs and existing infrastructure such as WiFi.

Flying multiple Crazyflies is mainly limited by the communication bandwidth.
One way to handle this issue is to have one Crazyradio per Crazyflie and to use a different channel for each of them. There are two major disadvantages to this approach:

• The number of USB ports on a computer is limited. Even if you add additional USB hubs, this adds latency because USB operates serially.
• There are 125 channels available; however, not all of them might lead to good performance, since the 2.4 GHz band is shared. For example, Bluetooth and WiFi operate in the same band.

Therefore, we will use a single Crazyradio to control multiple Crazyflies and share the channels used. Hence, we will need to assign unique addresses to each Crazyflie to avoid cross-talk between the different quadcopters.

5.1 Assigning a Unique Address

The communication chips used in the Crazyflie and Crazyradio (nRF51 and nRF24LU1+, respectively) permit 40-bit addresses. By default, each Crazyflie has the address 0xE7E7E7E7E7 assigned. You can use the Bitcraze PC Client to change the address using the following steps:

1. Start the Bitcraze PC Client.
2. Make sure the address field is set to 0xE7E7E7E7E7 and click "Scan." The drop-down box containing "Select an interface" should now have another entry containing the URI of your Crazyflie, for example radio://0/100/2M (see Fig. 5, left). Select this entry and click "Connect."
3. In the "Connect" menu, select the item "Configure 2.0." In the resulting dialog (see Fig. 5, right) change the address to a unique number, for example 0xE7E7E7E701 for your first Crazyflie, 0xE7E7E7E702 for the second one, and so on. Select "Write" followed by "Exit."
4. In the PC Client, select "Disconnect."
5. Restart your Crazyflie.
6. Update the address field of the client (1 in Fig. 5, left) and click "Scan." If everything was successful, you should now see a longer URI in the drop-down box containing radio://0/100/2M/E7E7E7E701.

If it does not work, verify that you have the latest firmware for both nRF51 and STM32 flashed.
This feature might not be available or working properly otherwise. The address (and other radio parameters) are stored in EEPROM and therefore will persist even if you upgrade the firmware.

Fig. 5 Left To connect to a Crazyflie, first enter its address, click "Scan", and finally select the found Crazyflie in the drop-down box. Right The configuration dialog for the Crazyflie to update radio-related parameters

Scanning Limitation
The scan feature of both the ROS driver and the Bitcraze PC Client assumes that you know the address of your Crazyflie (it is not feasible to try 2^40 different addresses during scanning). If you forget the address, you will need to reset the EEPROM to its default values by connecting the Crazyflie directly to the PC using a USB cable and running a Python script.a

a https://wiki.bitcraze.io/doc:crazyflie:dev:starting#reset_eeprom.

5.2 Finding Good Communication Parameters

The radio can be tuned by changing two parameters: datarate and channel. The datarate can be 250 kBit/s, 1 MBit/s, or 2 MBit/s. A higher datarate has a lower chance of collision with other networks such as WiFi, but less range. Hence, for indoor applications the highest datarate (2 MBit/s) is recommended.

The channel number defines the offset in MHz from the base frequency of 2400 MHz. For example, channel 15 sets the operating frequency to 2415 MHz and channel 80 refers to an operating frequency of 2480 MHz. If you selected 2 MBit/s as datarate, the channels need to have a spacing of at least 2 MHz (otherwise, a 1 MHz spacing is sufficient).

Unlike WiFi, there is no channel hopping implemented in the Crazyflie. That means that the selected channel is very important, because it will not change over time. On the other hand, interference can change over time; for example, a WiFi router might switch channels at runtime. Therefore, it is best if, during your flights, you can disable any interfering signals such as WiFi or wireless mice/keyboards which use the 2.4 GHz band.
If that is not possible, you can use the following experiments to find a set of good channels:

• Use the Bitcraze PC Client to teleoperate the Crazyflie in the intended space. Look at the "Link Quality" indicator on the top right. This indicator shows the percentage of successfully delivered packets. If it is low, there is likely interference.
• If you teleoperate the Crazyflie using ROS, there will be a ROS warning if the link quality is below a certain threshold. Avoid those channels. Additionally, rqt_plot shows the radio signal strength indicator (RSSI). This value, measured in -dBm, indicates the signal strength, which is affected both by distance and interference. A low value (e.g., 35) is good, while a high value (>80) is bad. For example, the output in Fig. 6 suggests that another channel should be used, because the second half of the plot shows additional noise caused by interference.

Fig. 6 Output of rqt_plot showing the radio signal strength indicator. The first 15 s show a good signal, while the second half shows higher values and noise caused by interference

Once you have found a set of good channels, you can assign them to your Crazyflies using the Bitcraze PC Client (see Sect. 5.1 for details). You can share up to four Crazyflies per Crazyradio with reasonable performance. Hence, the number of channels you need is about a quarter of the number of Crazyflies you intend to fly.

Legal Restrictions
In some countries the 2.4 GHz band is limited to certain channels. Please refer to your local regulations before you adjust the channel. For example, in the United States frequencies between 2483.5 and 2500 MHz are power-restricted and as a result frequently not used by WiFi routers. Hence, channels 84 to 100 might be a good choice there. Channels above 2500 MHz are not allowed to be used in the United States.

5.3 ROS Usage (Multiple Crazyflies)

Let's assume that you have two Crazyflies with unique addresses, two joysticks, and a single Crazyradio.
You can teleoperate them using the following command:

$ roslaunch crazyflie_demo multi_teleop_xbox360.launch uri1:=radio://0/100/2M/E7E7E7E701 uri2:=radio://0/100/2M/E7E7E7E702

This should connect to both Crazyflies, visualize their state in rviz, and plot real-time data using rqt_plot. Furthermore, each joystick can be used to teleoperate one of the Crazyflies. The launch file looks very similar to the single-UAV case:

multi_teleop_xbox360.launch (listing omitted)

In particular, we still have a single crazyflie_server (which now manages both Crazyflies). However, we have two different namespaces (crazyflie1 and crazyflie2). The content of those namespaces is nearly identical to the single-UAV case (compare lines 5–16 in teleop_xbox360.launch, Sect. 4) and thus not repeated here for clarity. In order to teleoperate more than two Crazyflies, you simply need to add more groups with different namespaces to the launch file.

If you want the ROS driver to use a different Crazyradio, you can adjust the first number in the URI. For example, radio://1/100/2M/E7E7E7E701 uses the second Crazyradio (or reports an error if only one is plugged in). It is important to consider the following when using multiple radios:

• For improved performance, use the same channel per Crazyradio. This avoids the radio changing channels whenever it switches between sending to different Crazyflies.
• If you do not need the IMU raw data, disable it by setting enable_logging to False when you include the crazyflie_add.launch file. This saves bandwidth and allows you to use more than two Crazyflies per radio.

Depending on your packet drop rate, you can use up to two Crazyflies per Crazyradio if logging is enabled and up to four otherwise. A higher number works as well, but you will see decreasing controllability, since the radio is used in a time-sliced fashion.

6 Hovering

A first important step for autonomous flight of a quadcopter is hovering in place.
This also requires the ability to take off from the ground and land after the flight. All of these basic motions require a position controller, which takes the Crazyflie's current position as input in order to compute new commands for the Crazyflie. Hence, this position controller replaces the teleoperating human we had before. This section describes the crazyflie_controller package and how it is used for autonomous take-off, landing, and hovering. As before, the single-UAV case is considered first and later extended to the multi-UAV case. Furthermore, this section covers working strategies for using the Crazyflie with optical motion capture systems such as VICON9 and OptiTrack.10 Due to the small size of the UAV, this is a non-trivial task, particularly for swarming applications.

6.1 Position Estimate

We assume that there is already a way to track the position, and preferably yaw, of the Crazyflies at a rate of at least 30 Hz. It is possible to use a Microsoft Kinect,11 AR tags, or ultra-wideband localization [10] for this task. However, those solutions are not as accurate as specialized motion capture systems, which can reach sub-millimeter accuracy. We want to fly many small quadcopters, perhaps in a dense formation, and hence need very accurate position feedback. Therefore, we will discuss the usage of optical motion capture systems such as VICON or OptiTrack. We run our experiments in a space of approximately 5 m × 4 m equipped with a 12-camera VICON MX motion capture system.

Optical motion capture systems typically require spatially unique marker configurations for each object to track, such that it is possible to identify each object.12 Otherwise, occlusions or a short-term camera outage would result in unrecoverable tracking failures. For a small platform like the Crazyflie, there are not many ways to place markers uniquely on the existing frame.
In particular, if you need more than four Crazyflies, you will need to add additional structures where you can place the markers:

• Propeller guards. They are commercially available for the Hubsan X4 toy quadrotor,13 which has identical physical dimensions. Moreover, you can use a 3D printer to print your own guard based on published files on thingiverse.14
• Custom motor mounts. OpenSCAD15 files can be found in the official mechanical repository.16
• Spatial extensions in the form of sticks, either mounted on the rotor arms or on top of the Crazyflie as an extension board.

9 http://www.vicon.com/.
10 https://www.optitrack.com/.
11 https://github.com/ataffanel/crazyflie-ros-kinect2-detector.
12 Some solutions, like the Crazyswarm project [5], use identical marker configurations.
13 You can search on amazon for "propeller guard hubsan x4."
14 http://www.thingiverse.com/search?q=crazyflie&sa=.

Fig. 7 Left Crazyflie with four optical markers (6.4 mm) attached and no additional guard used. Right Crazyflie with markers on propeller guard to allow a higher number of unique marker configurations

For small groups of up to four Crazyflies, we place the markers directly on the Crazyflies. We flew up to six Crazyflies (using three Crazyradios) using the propeller guard approach. However, this significantly reduces flight times and changes the flight dynamics. Figure 7 shows examples of Crazyflies equipped with markers. The exact method is highly dependent on your motion capture system, so there will be some experimentation involved. Similarly, the best markers to use depend on the system as well. We successfully use 6.4 and 7.9 mm spherical traditional reflective markers from B&L Engineering.17 A smaller size impacts the flight dynamics less (and fits underneath the rotors) and is preferred as long as the motion capture system is able to detect the markers properly.
We use the No-Base option of the markers and small pieces of Command Poster Strips18 to attach them to the Crazyflie.

15 http://www.openscad.org/.
16 https://github.com/bitcraze/bitcraze-mechanics/tree/master/cf2-mount-openscad.
17 http://www.bleng.com/.
18 http://www.command.com.

If you use VICON, it is best to install the vicon_bridge ROS package using the following steps:

$ cd ~/crazyflie_ws/src
$ git clone https://github.com/ethz-asl/vicon_bridge.git
$ cd ~/crazyflie_ws
$ catkin_make

This will add the source to your workspace and compile it. The package assumes that you have another PC with VICON Tracker running in the same network, accessible under the hostname vicon and with no firewalls in between. You can test your installation by running:

$ roslaunch vicon_bridge vicon.launch

In another terminal, execute:

$ rosrun tf view_frames

and open the resulting frames.pdf file to check your transformations. It should look like Fig. 8.

Fig. 8 Output of view_frames for two objects named crazyflie1 and crazyflie2, respectively

If you use OptiTrack (or any other motion capture system which supports VRPN19), you can install the vrpn_client_ros package using:

$ sudo apt-get install ros-indigo-vrpn-client-ros

In order to test it, you will need to write a custom launch file, similar to the sample file provided in the package.20 Afterwards, you can check if it works using view_frames.

19 https://github.com/vrpn/vrpn/wiki.
20 https://github.com/clearpathrobotics/vrpn_client_ros/blob/indigo-devel/launch/sample.launch.

Coordinate System
It is important to verify that your transformations match the ROS standard.a That means we use a right-handed coordinate system with x forward, y left, and z pointing up. One way to check is to launch rviz and add a "TF" visualization. Move the Crazyflie around in your hand, while verifying that the visualization in rviz matches the expected coordinate system.

a http://www.ros.org/reps/rep-0103.html.
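The right-handedness convention can also be checked numerically: in a right-handed frame, the cross product of the x- and y-axis directions must equal the z-axis direction. A minimal sketch (the helper names are ours):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def is_right_handed(x, y, z):
    """True if (x, y, z) form a right-handed triad, i.e., x cross y = z.
    For the ROS convention (REP 103): x forward, y left, z up."""
    return cross(x, y) == tuple(z)
```

For example, is_right_handed((1, 0, 0), (0, 1, 0), (0, 0, 1)) holds for the ROS convention, while swapping y and z (as some motion-capture exports do) fails the check.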
6.2 ROS Usage (Single Crazyflie)

Here, we assume that you already have a working localization for a single Crazyflie. We assume that there is a ROS transform between the frames /world and /crazyflie1 and that radio://0/100/2M/E7E7E7E701 is the URI of your Crazyflie. With VICON you can launch the following:

$ roslaunch crazyflie_demo hover_vicon.launch uri:=radio://0/100/2M/E7E7E7E701 frame:=crazyflie1 x:=0 y:=0 z:=0.5

Once the Crazyflie is connected, you can press the blue (X) button on the XBox360 controller to take off and the green (A) button to land. If successful, the Crazyflie should hover at (0, 0, 0.5). Use the red (B) button to handle any emergency situation (or unplug the Crazyradio to get the same effect). The launch file starts rviz as well, visualizing both the Crazyflie's current position and goal position (indicated by a red arrow). If you are using OptiTrack, you can use hover_vrpn.launch rather than hover_vicon.launch.

The launch file is similar to before, but adds a few more elements:

hover_vicon.launch (listing omitted)

We start by defining the arguments (not shown here for brevity) and launching the crazyflie_server (line 3). Within the group element, we include crazyflie_add (not shown). Now we get to a few differences: in line 7 we set use_crazyflie_controller to True to enable the takeoff and landing behavior using the joystick. Moreover, we add a position controller node by including crazyflie2.launch (lines 9–11). The static goal position for this controller is published on the /crazyflie/goal topic in lines 12–18. The group ends by publishing a static transform from the given frame to the Crazyflie's base link. This allows us to visualize the current pose of the Crazyflie in rviz using the 3D model provided in the crazyflie_description package (lines 19 and 22).
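Conceptually, the position controller closes the loop per axis: it compares the goal position with the motion-capture estimate and outputs attitude/thrust commands. A much-simplified single-axis sketch of such a loop is given below; the class is our own illustration, and the gains are not the tuned values shipped in crazyflie_controller:

```python
class PID:
    """Minimal PID loop, illustrating what a position controller does per
    axis. Gains and the integral limit are illustrative placeholders."""

    def __init__(self, kp, ki, kd, integral_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0
        self.integral_limit = integral_limit

    def update(self, goal, current, dt):
        """Return the control output for one timestep of length dt."""
        error = goal - current
        # Clamp the integral term to avoid wind-up during takeoff.
        self.integral = max(-self.integral_limit,
                            min(self.integral_limit,
                                self.integral + error * dt))
        derivative = (error - self.previous_error) / dt
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the real package, one such loop per axis (plus yaw) runs at a fixed rate, and the outputs are sent over the cmd_vel topic; the actual gains live in crazyflie_controller/config/crazyflie2.yaml.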
6.3 ROS Usage (Multiple Crazyflies)

The main difference between the single-UAV and multi-UAV case is that the joystick should be shared: takeoff, landing, and an emergency should trigger the appropriate behavior on all Crazyflies. This allows us to have a single backup pilot who can trigger an emergency power-off in case of an issue. The low inertia of the Crazyflies makes them very robust to mechanical failures when dropping from the air. We have had crashes from heights of up to 4 m on a slightly padded floor, with only propellers and/or motor mounts needing replacement. (Replacement parts are available for purchase separately.)

The crazyflie_demo package contains an example for hovering two Crazyflies. You can run it for VICON by executing:

$ roslaunch crazyflie_demo multi_hover_vicon.launch uri1:=radio://0/100/2M/E7E7E7E701 frame1:=crazyflie1 uri2:=radio://0/100/2M/E7E7E7E702 frame2:=crazyflie2

There is also an example using VRPN (multi_hover_vrpn.launch). The launch file is similar to the single-Crazyflie case:

multi_hover_vicon.launch (listing omitted)

In this case, we only need a single joystick; the joy node for it is instantiated in lines 4–6. In order to use that topic, we need to supply controller.py with the correct topic name (line 11).

We can summarize what we have learned so far by looking at the output of rqt_graph, as shown in Fig. 9. It shows the various nodes (ellipses), namespaces (rectangles), and topics (arrows). In particular, we have two namespaces: crazyflie1 and crazyflie2. Each namespace contains the nodes used for a single Crazyflie: joystick_controller to deal with the user input, pose to publish the (static) goal position for that particular Crazyflie, and controller to compute the low-level attitude commands based on the high-level user input. The attitude commands are transmitted using the cmd_vel topics.

W. Hönig and N. Ayanian

Fig. 9 Visualization of the different nodes and their communication using rqt_graph

There is only one node, the crazyflie_server, which listens on those topics and transmits the data to both Crazyflies using the Crazyradio. The joy node provides the input to both namespaces, allowing a single user to control both Crazyflies. Similarly, the vicon node is shared between Crazyflies, because the motion-capture system provides feedback (in terms of tf messages) of all quadcopters. The baselink_broadcaster nodes are only used for visualization purposes, allowing us to visualize a 3D model of the Crazyflie in rviz.

More than two Crazyflies can be used by duplicating the groups in the launch file accordingly. This will result in more namespaces; however, the crazyflie_server, vicon, and joy nodes will always be shared between all Crazyflies.

7 Waypoint Following

The hovering of the previous section is extended to let the UAVs follow specified waypoints. This is useful if you want the robots to fly specified routes, for example for delivery systems or construction tasks. As before, the single-UAV case is presented first, followed by the multi-UAV case. Here, we concentrate on the ROS-specific changes in a toy example where the waypoints are static and known upfront. Planning such routes for a group of quadcopters is a difficult task in itself and we refer the reader to related publications [11–13].

The main difference between hovering and waypoint following is that, for the latter, the goal changes dynamically.
First, test the behavior of the controller for dynamic waypoint changes:

$ roslaunch crazyflie_demo teleop_vicon.launch

Here, the joystick is used to change the goal pose rather than influencing the motor outputs directly. The visualization in rviz shows the current goal pose as well as the quadcopter pose to provide some feedback.

Waypoint following works in a similar fashion: the first waypoint is set as the goal position and, once the Crazyflie reaches its current goal (within some radius), the goal is set to the next position. This simple behavior is implemented in crazyflie_demo/scripts/demo.py. Each Crazyflie can have its own waypoints defined in a Python script, for example:

#!/usr/bin/env python

from demo import Demo

if __name__ == '__main__':
    demo = Demo(
        [
            # x  ,  y ,  z  , yaw, sleep
            [ 0.0, 0.0, 0.5 , 0, 2],
            [ 1.5, 0.0, 0.5 , 0, 2],
            [-1.5, 0.0, 0.75, 0, 2],
            [-1.5, 0.5, 0.5 , 0, 2],
            [ 0.0, 0.0, 0.5 , 0, 0],
        ]
    )
    demo.run()

Here, x, y, and z are in meters, yaw is in radians, and sleep is the delay in seconds before the goal switches to the next waypoint. Adjust demo1.py and demo2.py to match your coordinate system and run the demo for two Crazyflies using:

$ roslaunch crazyflie_demo multi_waypoint_vicon.launch

The paths of the two Crazyflies should not overlap, because simple waypoint following does not make any timing guarantees. Hence, it is possible that the first Crazyflie finishes much earlier than the second one, even if the total path length and sleep times are the same. This limitation can be overcome by generating a trajectory for each Crazyflie and setting the goal points dynamically accordingly. The launch file looks very similar to before:

multi_waypoint_vicon.launch (listing omitted)

Instead of publishing a static pose, each Crazyflie now executes its own demo.py node, which in turn publishes goals dynamically.
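The switching rule just described, advancing once the quadcopter is within some radius of the current goal, can be sketched as follows. This is a simplification of what demo.py does (the function name and default radius are our own; the actual script additionally honors the per-waypoint sleep time):

```python
import math

def advance_waypoint(position, waypoints, index, radius=0.1):
    """Return the index of the waypoint to pursue next: move on to the
    following waypoint once `position` is within `radius` meters of the
    current one (ignoring yaw); otherwise keep the current index.
    `waypoints` uses the same [x, y, z, yaw, sleep] rows as demo.py."""
    gx, gy, gz = waypoints[index][:3]
    dist = math.sqrt((position[0] - gx) ** 2 +
                     (position[1] - gy) ** 2 +
                     (position[2] - gz) ** 2)
    if dist < radius and index + 1 < len(waypoints):
        return index + 1
    return index
```

Calling this at every position update yields exactly the behavior above: the goal stays put until the quadcopter gets close, and the last waypoint is held indefinitely.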
An example video demonstrating six Crazyflies following dynamically changing goals is available online.21

21 http://youtu.be/px9iHkA0nOI.

8 Troubleshooting

As with most physical robots, debugging can be difficult. In order to identify and eventually solve the problem, it helps to simplify the failing case until it is easier to analyze. In this section, we provide several actions which have helped us resolve issues in the past. In particular, we first identify whether the issue is on the hardware or software side, and provide recipes to address both kinds of issues.

1. Verify that the position estimate works correctly. For example, use rviz to visualize the current pose of all quadrotors, move a single quadrotor manually at a time, and make sure that rviz reflects the changes accordingly.

2. Check the wireless connection between the PC and the Crazyflies. If the packet drop rate is high, the crazyflie_server will output ROS warnings. Similarly, you can check the LEDs on each Crazyradio; ideally, the LEDs show mostly green. If there is a communication issue, the LEDs will frequently flash red as well. In that case, try a different channel by following Sect. 5.2.

3. Work your way backwards: if a swarm fails, test the individual Crazyflies (or subgroups of them). If waypoint following fails, test hovering and, if there is an issue there as well, test teleoperation using ROS followed by teleoperation using the Bitcraze PC Client.

4. Issues with many Crazyflies but not with smaller subgroups can occur if there are communication issues or if the position estimate suddenly worsens. For the first case, try reducing the number of Crazyflies per Crazyradio and adjusting the channel. For the second case, try to estimate the latency of your position estimator.
If you have multiple objects enabled, there might be axis flips (marker configurations might not be unique enough), or the computer doing the tracking might add too much latency for the controller to operate properly.

5. If waypoint following does not work, make sure that you visualize the current waypoint in rviz. In general, the waypoints should not jump around very much. The provided controller is a hover controller which works well if the goal point is within a reasonable range of the Crazyflie's current position.

6. If hovering does not work, you can try to tune the provided controller. For example, if you have a higher payload you might increase the proportional gains. You can find the gains in crazyflie_controller/config/crazyflie2.yaml.

7. If teleoperation does not work or it is very hard to keep the Crazyflie hovering in place, there is most likely an issue with your hardware. Make sure that the propellers are balanced22 and that the battery is placed in the center of mass. When in doubt, replace the propellers.

22 https://www.bitcraze.io/balancing-propellers/.

9 Inside the crazyflie_ros Stack

This section covers the stack in more detail. The knowledge you gain will not only help you better understand what is happening under the hood, but also provide the foundations to change or add features. Furthermore, some of the design insights given might be helpful for similar projects. We start with a detailed explanation of the different packages that compose the stack and their relationship. For each package, we discuss important components and the underlying architecture. For example, for the crazyflie_driver package we explain the different ROS topics and services, why there is a server, and how the radio time-slicing works. Guidelines for possible extensions conclude the section.
9.1 Overview

The crazyflie_ros stack is composed of six different packages:

crazyflie_cpp contains a C++11 implementation of drivers for the Crazyradio as well as the Crazyflie. It supports the logging framework for streaming data in real time and the parameter framework for adjusting parameters such as PID gains. This package has no ROS dependency and only requires libusb and boost. Unlike the official Python SDK, it supports multiple Crazyflies over a shared radio.

crazyflie_tools contains standalone command-line tools which use the crazyflie_cpp library. Currently, there is a tool to find any Crazyflies in range and tools to list the available logging variables and parameters. Because there is no ROS dependency, the tools can be used without ROS as well.

crazyflie_description contains the URDF description of the Crazyflie for visualization in rviz. The models are currently not accurate enough to be used for simulation.

crazyflie_driver contains a ROS wrapper around crazyflie_cpp. The logging subsystem is mapped to ROS messages and parameters are mapped to ROS parameters. One node (crazyflie_server) manages all Crazyflies.

crazyflie_controller contains a PID position controller for the Crazyflie. As long as the position of the Crazyflie is known (e.g., by using a motion capture system or a camera), it can be used to hover or execute (non-aggressive) flight maneuvers.

crazyflie_demo contains sample scripts and launch files for teleoperation, hovering, and waypoint following for both the single and multi Crazyflie cases.

The dependencies between the packages are shown in Fig. 10. Both crazyflie_tools and crazyflie_demo contain high-level examples. Because crazyflie_cpp does not have any ROS dependency, it can be used with other frameworks as well. We will now discuss the different packages in more detail.

Fig. 10 Dependencies between the different packages within the crazyflie_ros stack

9.2 crazyflie_cpp

The crazyflie_cpp package is a static C++ library, with some components being header-only to maximize type safety and efficiency. The library consists of four classes:

Crazyradio This class uses libusb to communicate with a Crazyradio. It supports the complete protocol23 implemented in the Crazyradio firmware. The typical approach is to configure the radio (such as the channel and datarate to use) first, followed by the actual sending and receiving of data. The Crazyradio operates in Primary Transmitter Mode (PTX), while the Crazyflie operates in Primary Receiver Mode (PRX). This means that the Crazyradio sends data (with up to 32 bytes of payload) over the radio and, if the data is successfully received, receives an acknowledgment from the Crazyflie. The acknowledgment packet might contain up to 32 bytes of user data as well. However, since the acknowledgment has to be sent immediately, it is not a direct response to the request sent. Instead, the communication can be seen as two asynchronous data streams, with one stream going from the Crazyradio to the Crazyflie and another stream for the reverse direction. If a request-response-like protocol is desired, it has to be implemented on top of the low-level communication infrastructure. The Crazyradio will automatically resend packets if no acknowledgment has been received.
Below is a small example of how to use the class to send a custom packet:

```cpp
Crazyradio radio(0); // Instantiate an object bound to the first Crazyradio found
radio.setChannel(100); // Update the base frequency to 2500 MHz
radio.setAddress(0xE7E7E7E701); // Set the address to send to
// Send a packet
uint8_t data[] = {0xCA, 0xFE};
Crazyradio::Ack ack;
radio.sendPacket(data, sizeof(data), ack);
if (ack.ack) {
  // Parse ack.data and ack.size
}
```

Exceptions are thrown in case of errors, for example if no Crazyradio could be found or if the user does not have the permission to access the USB dongle.

Crazyflie This class implements the protocol of the Crazyflie24 and provides high-level functions to send new setpoints and update parameters. In order to support multiple Crazyflies correctly, it instantiates the Crazyradio automatically. A global static array of Crazyradio instances and mutexes is used. Whenever the Crazyflie needs to send a packet, it first uses the mutex to lock its Crazyradio, followed by checking if the radio is configured properly. If not, the configuration (such as the address) is updated and finally the packet is sent. The mutex ensures that multiple Crazyflies can be used in separate threads, even if they share a Crazyradio. The critical section of sending a packet causes the radio to multiplex the requests in time. Therefore, the bandwidth is split between all Crazyflies which share the same radio.

23 https://wiki.bitcraze.io/projects:crazyradio:protocol.
24 https://wiki.bitcraze.io/projects:crazyflie:crtp.
Below is a small example demonstrating how multiple Crazyflies can be used with the same radio:

```cpp
Crazyflie cf1("radio://0/100/2M/E7E7E7E701"); // Instantiate first Crazyflie object
Crazyflie cf2("radio://0/100/2M/E7E7E7E702"); // Instantiate second Crazyflie object
// Launch two threads and set a new setpoint at 100 Hz
std::thread t1([&] {
  while (true) {
    cf1.sendSetpoint(0, 0, 0, 10000); // send roll, pitch, yaw, and thrust
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
});
std::thread t2([&] {
  while (true) {
    cf2.sendSetpoint(0, 0, 0, 20000); // send roll, pitch, yaw, and thrust
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
});
t1.join();
t2.join();
```

First, two Crazyflie objects are instantiated. Then two threads are launched using C++11 and lambda functions. Each thread sends an updated setpoint consisting of roll, pitch, yaw, and thrust to its Crazyflie at about 100 Hz.

LogBlock This set of templated classes is used to stream out sensor data from the Crazyflie. The logging framework on the Crazyflie allows the creation of so-called log blocks. Each log block is a struct with a maximum size of 28 bytes, freely arranged based on global variables available for logging in the Crazyflie firmware. The list of available variables and their types can be queried at runtime (requestLogToc method in the Crazyflie class). This templated version provides maximum type safety, at the cost that you need to know at compile time which log blocks to request.

LogBlockGeneric This class is very similar to LogBlock but also allows the user to create log blocks dynamically at runtime. The disadvantages of this approach are that it does not provide type safety and that it is slightly slower at runtime.

9.3 crazyflie_driver

We first give a brief overview of the ROS interface, including services, subscribed topics, and published topics. In the second part we describe the usage and internal infrastructure in more detail.
Most of the services and topics are within the namespace of a particular Crazyflie, denoted with crazyflie. For example, if you have two Crazyflies, there will be namespaces crazyflie1 and crazyflie2. The driver supports the following services:

add_crazyflie Adds a Crazyflie with known URI to the crazyflie_server node. Typically, this is used with the helper application crazyflie_add from a launch file. Type: crazyflie_ros/AddCrazyflie

crazyflie/emergency Triggers an emergency state, in which no further messages are sent to the Crazyflie. The onboard firmware stops all rotors if it did not receive a message for 500 ms, causing the Crazyflie to fall shortly after the emergency was requested. Type: std_srvs/Empty

crazyflie/update_params Uploads updated values of the specified parameters to the Crazyflie. The parameters are stored locally on the ROS parameter server. This service first reads the current values and then uploads them to the Crazyflie. Type: crazyflie_ros/UpdateParams

The driver subscribes to the following topic:

crazyflie/cmd_vel Encodes the setpoint (attitude and thrust) of the Crazyflie. This can be used for teleoperation or automatic position control. Type: geometry_msgs/Twist

The following topics are published:

crazyflie/imu Samples the inertial measurement unit of the Crazyflie every 10 ms, including the data from the gyroscope and accelerometer. The orientation and covariance are not known and therefore not included in the messages. Type: sensor_msgs/Imu

crazyflie/temperature Samples the temperature as reported by the barometer every 100 ms. This might not be the ambient temperature, as the Crazyflie tends to heat up during operation. Type: sensor_msgs/Temperature

crazyflie/magnetic_field Samples the magnetic field as measured by the IMU every 100 ms. Currently, the onboard magnetometer is not calibrated in the firmware. Therefore, external calibration is required to use it for navigation.
Type: sensor_msgs/MagneticField

crazyflie/pressure Samples the air pressure as measured by the barometer every 100 ms, in mbar. Type: std_msgs/Float32

crazyflie/battery Samples the battery voltage every 100 ms, in V. Type: std_msgs/Float32

crazyflie/rssi Samples the Radio Signal Strength Indicator (RSSI) of the onboard radio, in -dBm. Type: std_msgs/Float32

The crazyflie_driver consists of two ROS nodes: crazyflie_server and crazyflie_add. The first manages all Crazyflies in the system (using one thread for each), while the second is just a helper node for adding Crazyflies from a launch file. It is possible to launch multiple crazyflie_servers, but these cannot share a Crazyradio. This is mainly a limitation of the operating system, which limits the ownership of a USB device to one process. In order to hide this implementation detail, each Crazyflie thread operates in its own namespace. If you use rostopic, the topics of the first Crazyflie will be in the crazyflie1 namespace (or whatever tf_frame you assigned to it), even though the code is actually executed within the crazyflie_server context. Each Crazyflie offers a topic cmd_vel which is used to send the current setpoint (roll, pitch, yaw, and thrust) and, if logging is enabled, topics such as imu, battery, and rssi. Furthermore, services are used to trigger the emergency mode and to re-upload specified parameters. The values of the parameters themselves are stored within the ROS parameter server. They are added dynamically once the Crazyflie is connected, because parameter names, types, and values are all dynamic and depend on your firmware version. For that reason, it is currently not possible to use the dynamic_reconfigure package, because in this case the parameter names and types would need to be known at compile time. Instead, a custom service call needs to be triggered, containing a list of parameters to update, once a user has changed a parameter on the ROS parameter server.
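As a simple consumer of the battery topic, the logic of a low-voltage monitor could look as follows. This is an illustrative sketch, not part of the stack; the threshold and window values are assumptions. In a ROS node, the check method would be driven by a subscriber on crazyflie/battery (std_msgs/Float32):

```python
class BatteryMonitor:
    """Flags low battery voltage based on the periodic voltage samples
    (illustrative sketch; threshold and window are assumed values)."""

    def __init__(self, low_threshold=3.2, window=5):
        self.low_threshold = low_threshold  # volts
        self.window = window                # consecutive low samples required
        self.low_count = 0

    def check(self, voltage):
        """Feed one voltage sample; returns True once the voltage has
        stayed below the threshold for `window` consecutive samples."""
        if voltage < self.low_threshold:
            self.low_count += 1
        else:
            self.low_count = 0
        return self.low_count >= self.window
```

Requiring several consecutive low samples avoids false alarms from the voltage dips that occur under high thrust.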
The following Python example can be used to turn the headlight on (if the LED expansion is installed):

```python
import rospy
from crazyflie_driver.srv import UpdateParams

rospy.wait_for_service("/crazyflie1/update_params")
update_params = rospy.ServiceProxy("/crazyflie1/update_params", UpdateParams)
rospy.set_param("/crazyflie1/ring/headlightEnable", 1)
update_params(["ring/headlightEnable"])
```

After the service has become available, a service proxy is created and can be used to call the service whenever a parameter needs to be updated. Updating a parameter sets it to a new value, followed by a service call which triggers the upload to the Crazyflie.

Another important part of the driver is the logging system support. If logging is enabled, the Crazyflie advertises a number of fixed topics. In order to receive custom logging values (or at custom frequencies), you either need to change the source code or use custom log blocks. The latter has the disadvantage that it is not typesafe (it just uses an array of floats as message type) and that it is slightly slower at runtime. You can use custom log blocks as follows (an excerpt from customLogBlocks.launch):

```
genericLogTopics: ["log1", "log2"]
genericLogTopicFrequencies: [10, 100]
genericLogTopic_log1_Variables: ["pm.vbat"]
genericLogTopic_log2_Variables: ["acc.x", "acc.y", "acc.z"]
```

Here, additional parameters are used within the crazyflie_add node to specify which log blocks to request. The first log block only contains pm.vbat and is sampled every 10 ms. A new topic named /crazyflie1/log1 will be published. Similarly, the /crazyflie1/log2 topic will contain three values (x, y, and z of the accelerometer), published every 100 ms. The easiest way to find the names of variables is by using the Bitcraze PC Client: after connecting to a Crazyflie, select "Logging Configurations" in the "Settings" menu. A new dialog will open and list all variables with their respective types.
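When composing custom log blocks, the sizes of the variable types determine whether the block fits into the 28-byte limit mentioned above. A small helper to budget a block before trying it on hardware (the helper itself is not part of the stack; the type sizes follow the corresponding C types):

```python
# Sizes in bytes of common logging variable types (C types)
TYPE_SIZES = {
    "uint8_t": 1, "int8_t": 1,
    "uint16_t": 2, "int16_t": 2,
    "uint32_t": 4, "int32_t": 4,
    "float": 4,
}

MAX_BLOCK_SIZE = 28  # maximum payload of one log block, in bytes

def block_fits(variables):
    """variables: list of (name, type) pairs; returns (size, fits)."""
    size = sum(TYPE_SIZES[t] for _, t in variables)
    return size, size <= MAX_BLOCK_SIZE
```

For example, a block with the three accelerometer floats uses 12 bytes, well within the limit, whereas eight floats (32 bytes) would not fit.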
Each log block can only hold up to 28 bytes and the minimum update period is 10 ms. You can also use the listLogVariables command-line tool, which is part of the crazyflie_tools package, to obtain a list of variables with their respective types.

9.4 crazyflie_controller

The Crazyflie is controlled by a cascaded PID controller. The inner attitude controller is part of the firmware. Its inputs are the current attitude, as estimated using the IMU sensor data, and the setpoint (attitude and thrust), as received over the radio. This controller runs at 250 Hz. The crazyflie_controller node runs another, outer PID controller, which takes the current and goal position as input and produces a setpoint (attitude and thrust) for the inner controller. This cascaded design is typical if the sensor update rates are different [11]. In this case, the IMU can be sampled much more frequently than the position. A PID controller has proportional, integral, and derivative terms on an error variable:

u(t) = K_P e(t) + K_I \int_0^t e(\tau) \, d\tau + K_D \frac{de(t)}{dt},

where u(t) is the control output and K_P, K_I, and K_D are scalar parameters. The error e(t) is defined as the difference between the goal and the current value. The crazyflie_controller uses four independent PID controllers for x, y, z, and yaw, respectively. The controller also handles autonomous takeoff and landing. The integral part of the z-PID controller is initialized during takeoff with the estimated base thrust required to keep the Crazyflie hovering. The takeoff routine linearly increases the thrust until the takeoff is detected by the external position system. A state machine then switches to the PID controller, using the current thrust value as the initial guess for the integral part of the z-axis PID controller. This avoids retuning a manual offset in case the payload changes or a different battery is used. The current goal can be changed by publishing to the goal topic.
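A discrete-time version of this controller, as one would implement it per axis, can be sketched as follows. The gains, the time step, and the class name are placeholders, not the values from crazyflie2.yaml:

```python
class PID:
    """Discrete PID controller for a single axis (illustrative sketch)."""

    def __init__(self, kp, ki, kd, integral_init=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        # The integral can be pre-loaded, e.g. with the hover thrust
        # estimated during takeoff (as described for the z axis).
        self.integral = integral_init
        self.prev_error = None

    def update(self, goal, current, dt):
        """One control step; dt is the sample period in seconds."""
        error = goal - current
        self.integral += error * dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Four such instances (for x, y, z, and yaw) driven by the position feedback would together produce the attitude-and-thrust setpoint for the inner controller.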
However, since the controller makes the hover assumption, large jumps between different control points should be avoided. The various parameters can be tuned in a config file (crazyflie_controller/config/crazyflie2.yaml), or a custom config file can be loaded instead of the default one (see crazyflie_controller/launch/crazyflie2.launch for an example).

9.5 Possible Extensions

The overview of the crazyflie_ros stack should allow you to reuse some of its architectural ideas or to extend it further. For example, you can use the Crazyradio and crazyflie_cpp for any other remote-controlled robot which requires a low-latency radio link. The presented controller of the crazyflie_controller package is a simple hover controller. A non-linear controller, as presented in [14] or [11], might be an interesting extension to improve the controller performance. Higher-level behaviors, such as following a trajectory rather than just goal points, could make more interesting flight patterns possible. Finally, including simulation support for the Crazyflie25 could help research and development by enabling simulated experiments.

25 E.g., adding support to the RotorS package (http://wiki.ros.org/rotors_simulator).

10 Conclusion

In this chapter we showed how to use multiple small quadcopters with ROS in practice. We discussed our target platform, the Bitcraze Crazyflie 2.0, and guided the reader step-by-step through the process of letting multiple Crazyflies follow waypoints. We tested our approach on up to six Crazyflies, using three radios. We hope that this detailed description will help other researchers use the platform to verify algorithms on physical robots. More recent research has shown that the platform can even be used for swarms of up to 49 robots [5]. In the future, we would like to provide a similar step-by-step tutorial about the additional steps required to guide other researchers in working on larger swarms.
Furthermore, it would be interesting to make the work more accessible to a broader audience once more inexpensive but accurate localization systems become available.

References

1. Michael, N., J. Fink, and V. Kumar. 2011. Cooperative manipulation and transportation with aerial robots. Autonomous Robots 30 (1): 73–86.
2. Augugliaro, F., S. Lupashin, M. Hamer, C. Male, M. Hehn, M.W. Mueller, J.S. Willmann, F. Gramazio, M. Kohler, and R. D'Andrea. 2014. The flight assembled architecture installation: Cooperative construction with flying machines. IEEE Control Systems 34 (4): 46–64.
3. Hönig, W., Milanes, C., Scaria, L., Phan, T., Bolas, M., and N. Ayanian. 2015. Mixed reality for robotics. In IEEE/RSJ Intl Conference on Intelligent Robots and Systems, 5382–5387.
4. Mirjan, A., Augugliaro, F., D'Andrea, R., Gramazio, F., and M. Kohler. 2016. Building a bridge with flying robots. In Robotic Fabrication in Architecture, Art and Design 2016, 34–47. Cham: Springer International Publishing.
5. Preiss, J.A., Hönig, W., Sukhatme, G.S., and N. Ayanian. 2016. Crazyswarm: A large nano-quadcopter swarm. In IEEE/RSJ Intl Conference on Intelligent Robots and Systems (Late Breaking Results).
6. Michael, N., D. Mellinger, Q. Lindsey, and V. Kumar. 2010. The GRASP multiple micro-UAV testbed. IEEE Robotics and Automation Magazine 17 (3): 56–65.
7. Lupashin, S., Hehn, M., Mueller, M.W., Schoellig, A.P., Sherback, M., and R. D'Andrea. 2014. A platform for aerial robotics research and demonstration: The Flying Machine Arena. Mechatronics 24 (1): 41–54.
8. Landry, B. 2015. Planning and control for quadrotor flight through cluttered environments. Master's thesis, MIT.
9. Förster, J. 2015. System identification of the Crazyflie 2.0 nano quadrocopter. Bachelor's thesis, ETH Zurich.
10. Ledergerber, A., Hamer, M., and R. D'Andrea. 2015. A robot self-localization system using one-way ultra-wideband communication. In IEEE/RSJ Intl Conference on Intelligent Robots and Systems, 3131–3137.
11. Mellinger, D.
2012. Trajectory generation and control for quadrotors. Ph.D. dissertation, University of Pennsylvania.
12. Kushleyev, A., D. Mellinger, C. Powers, and V. Kumar. 2013. Towards a swarm of agile micro quadrotors. Autonomous Robots 35 (4): 287–300.
13. Hönig, W., Kumar, T.K.S., Ma, H., Koenig, S., and N. Ayanian. 2016. Formation change for robot groups in occluded environments. In IEEE/RSJ Intl Conference on Intelligent Robots and Systems.
14. Lee, T., Leok, M., and N.H. McClamroch. 2010. Geometric tracking control of a quadrotor UAV on SE(3). In IEEE Conference on Decision and Control, 5420–5425.

Author Biographies

Wolfgang Hönig has been a Ph.D. student at the ACT Lab at the University of Southern California since 2014. He holds a Diploma in Computer Science from Technical University Dresden, Germany. He is the author and maintainer of the crazyflie_ros stack.

Nora Ayanian is an Assistant Professor at the University of Southern California. She is the director of the ACT Lab at USC and received her Ph.D. from the University of Pennsylvania in 2011. Her research focuses on creating end-to-end solutions for multi-robot coordination.

SkiROS—A Skill-Based Robot Control Platform on Top of ROS

Francesco Rovida, Matthew Crosby, Dirk Holz, Athanasios S. Polydoros, Bjarne Großmann, Ronald P.A. Petrick and Volker Krüger

Abstract The development of cognitive robots in ROS still lacks the support of some key components: a knowledge integration framework and a framework for autonomous mission execution. In this chapter, we discuss our skill-based platform SkiROS, which was developed on top of ROS in order to organize robot knowledge and behavior. We show how SkiROS offers the possibility to integrate different functionalities in the form of skill 'apps' and how SkiROS offers services for integrating these skill apps into a consistent workspace. Furthermore, we show how these skill apps can be executed automatically based on autonomous, goal-directed task planning.
SkiROS helps developers to program and port their high-level code over a heterogeneous range of robots, while the minimal Graphical User Interface (GUI) allows non-expert users to start and supervise the execution. As an application example, we present how SkiROS was used to vertically integrate a robot into the manufacturing system of PSA Peugeot-Citroën. We discuss the characteristics of the SkiROS architecture which make it not limited to the automotive industry but flexible enough to be used in other application areas as well. SkiROS has been developed on Ubuntu 14.04 LTS and ROS Indigo and can be downloaded at https://github.com/frovida/skiros. A demonstration video is also available at https://youtu.be/mo7UbwXW5W0.

Keywords Autonomous robot · Planning · Skills · Software engineering · Knowledge integration · Kitting task

This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no 610917 (STAMINA).

F. Rovida (B) · A.S. Polydoros · B. Großmann · V. Krüger
Aalborg University Copenhagen, A.C. Meyers Vænge 15, 2450 Copenhagen, Denmark
e-mail: [email protected]
A.S. Polydoros e-mail: [email protected]
B. Großmann e-mail: [email protected]
V. Krüger e-mail: [email protected]
D. Holz
Bonn University, Bonn, Germany
e-mail: [email protected]
M. Crosby · R.P.A. Petrick
Heriot-Watt University, Edinburgh, UK
e-mail: [email protected]
R.P.A. Petrick e-mail: [email protected]

© Springer International Publishing AG 2017
A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_4

1 Introduction

In robotics, the ever increasing level of system complexity and autonomy naturally demands a more powerful system architecture to relieve developers from recurring integration issues and to increase the robot's reasoning capabilities.
Nowadays, several middleware-based component platforms, such as ROS, are available to support the composition of different control structures. Nevertheless, these middlewares are not sufficient, by themselves, to support the software organization of a full-featured autonomous robot (Fig. 1). First, the presence of a knowledge-integration framework is necessary to support logic programming and increase software composability and reusability. In traditional robotic systems, knowledge is usually hidden or implicitly described in terms of if-then statements. With logic programming, the knowledge is integrated in a shared semantic database and the programming is based on queries over that database. This facilitates further software composition, since the robot's control program does not need to be changed, and the extended knowledge automatically introduces more solutions. Reusability is also improved, because knowledge that has been described once can now be used multiple times for recognizing objects, inferring facts, or parametrizing actions.

Fig. 1 SkiROS and the kitting pipeline ported on 3 heterogeneous industrial mobile manipulators

Second, the complex design process and the integration of different robot behaviors require the support of a well-defined framework. The framework is not only necessary to simplify the software integration, but is also fundamental to extend scripted behaviors with autonomous task planning based on context awareness. In fact, task planning in robotics is still not widely used, due to the complexity of defining a planning domain and keeping it constantly updated with the robot's available capabilities and sensor readings. In the course of a larger project on kitting using mobile manipulators [1–3], we have developed a particularly efficient pipeline for automated grasping of parts from pallets and bins, and a pipeline for placing parts into kits.
To integrate these pipelines, together and with others, into different robot platforms, the Skill-based platform for ROS (SkiROS) was developed. The platform defines tasks as sequences of skills, where skills are identified as the recurring actions that are needed to execute standard operating procedures in a factory (e.g., operations like pick 'object' or place at 'location'). Embedded within the skill definitions are the sensing and motor operations, or primitives, that accomplish the goals of the skill, as well as a set of condition checks that are made before and after execution to ensure robustness. This methodology provides a process to sequence the skills automatically using a modular task planner based on a standard domain description, namely the Planning Domain Definition Language (PDDL) [4]. The planning domain is automatically inferred from the robot's available skill set and therefore does not need to be stated explicitly by a domain expert. In this research chapter we present a complete, in-depth description of the platform, how it is implemented in ROS, and how it can be used to implement perception and manipulation pipelines, using mobile-robot depalletizing, bin picking, and placing in kits as examples.

The chapter is structured as follows: Section 2 discusses related work in general, with a focus on existing ROS applications. Section 3 discusses the theoretical background of the software architecture. Section 4 holds a tutorial on the graphical user interface. Section 5 holds a tutorial on plug-in development. Section 6 discusses the theoretical background of the task planner and provides a tutorial on planner plug-in development. Section 7 presents an application to a real industrial kitting task. Section 8 draws relevant conclusions.

1.1 Environment Configuration

SkiROS consists of a core package set that can be extended during the development process with plug-ins.
The SkiROS package and some initial plug-in sets can be downloaded from, respectively:

• https://github.com/frovida/skiros, the core package
• https://github.com/frovida/skiros_std_lib, an extension with the task planner, drive-pick-place skills, and a spatial reasoner
• https://github.com/frovida/skiros_simple_uav, an extension with drive-pick-place for UAVs, plus takeoff and landing skills

SkiROS has been developed and tested on Ubuntu 14.04 with ROS Indigo, and compilation is not guaranteed to work in a different setup.

Dependencies SkiROS requires the Oracle database and the Redland library to be installed on the system. These are necessary for the world model activity. To install all dependencies, you can use the script included in the SkiROS repository, skiros/scripts/install_dependencies.sh. Other dependencies, necessary for the planner, can be installed by running the script skiros_std_lib/scripts/install_dependencies.sh. After these steps, SkiROS can be compiled with the standard "catkin_make" command. For a guide on how to launch the system after compilation, refer to Sect. 3.1.

2 Related Work

During the last three decades, three main approaches to robot control have dominated the research community: reactive, deliberative, and hybrid control [5]. Reactive systems rely on a set of concurrently running modules, called behaviours, which directly connect input sensors to particular output actuators [6, 7]. In contrast, deliberative systems employ a sense-plan-act paradigm, where reasoning plays a key role in an explicit planning process. Deliberative systems can work with longer timeframes and goal-directed behaviour, while reactive systems respond to more immediate changes in the world. Hybrid systems attempt to exploit the best of both worlds, through mixed architectures with a deliberative high level, a reactive low level, and a synchronisation mechanism in the middle that mediates between the two [8].
Most modern autonomous robots follow a hybrid approach [9–12], with researchers focused on finding appropriate interfaces between the declarative descriptions needed for high-level reasoning and the procedural ones needed for low-level control. In the ROS ecosystem we find ROSCo,1 SMACH2 and pi_trees,3 which are architectures for rapidly creating complex robot behaviors in the form of Hierarchical Finite State Machines (HFSM) or Behavior Trees (BT). These tools are useful to model small concatenations of primitives with fairly reactive behavior. The approach can be used successfully up to a level comparable to our skills' executive, but does not scale to highly dynamic contexts: the architectures allow only static composition of behaviors and cannot adapt them to new situations during execution. At the time of writing, and to the best of the authors' knowledge, the ROS ecosystem offers only one maintained package for automated planning: ROSPlan4 (TREX is no longer maintained). In ROSPlan, the planning domain has to be defined manually by a domain expert. With our approach, the planning domain is automatically inferred at run-time from the available skill set, which results in higher flexibility and usability. Knowledge representation plays a fundamental role in cognitive robotic systems [13], especially with respect to defining world models. The most relevant approach for our work is the cognitivist approach, which highlights the importance of symbolic computation: symbolic representations are produced by a human designer and formalised in an ontology. Several modern approaches for real use-cases rely on semantic databases [14–16] for logic programming. This allows robotic systems to remain flexible at run-time and easy to re-program.

1 http://pwp.gatech.edu/hrl/ros-commander-rosco-behavior-creation-for-home-robots/.
2 http://wiki.ros.org/smach.
3 http://wiki.ros.org/pi_trees.
4 https://github.com/KCL-Planning/ROSPlan.
A prominent example of knowledge processing in ROS is the KnowRob system [17], which combines knowledge representation and reasoning methods for acquiring and grounding knowledge in physical systems. KnowRob uses a semantic library which facilitates loading and accessing ontologies represented in the Web Ontology Language (OWL). Despite its advanced technology, KnowRob is a framework with a bulky knowledge base and a strict dependency on the Prolog language. For our project, a simpler and more minimal implementation has been preferred, still compliant with the widely used OWL standard. Coupled with KnowRob there is CRAM (Cognitive Robot Abstract Machine) [18]. Like SkiROS, CRAM is a software toolbox for the design, implementation, and deployment of cognition-enabled autonomous robots that do not require the whole planning domain to be stated explicitly. The CRAM kernel consists of the CPL plan language, based on Lisp, and the KnowRob knowledge processing system. SkiROS shares a similar theoretical background with CRAM, but differs in several implementation choices. For example, SkiROS does not provide a domain-specific language such as CPL to support low-level behavior design, but relies on plain C++ code; moreover, the planner in CRAM is proprietary, whereas in SkiROS it is modular and compatible with every PDDL planner.

3 Conceptual Overview

The Skill-based platform for ROS (SkiROS) [19] helps to design and execute the high-level skills of a hybrid behavior-deliberative architecture, commonly referred to as a 3-tiered architecture [9–12]. As such, it manages the executive and deliberative layers, which are expected to control a behavior layer implemented using ROS nodes. While the theory regarding 3-tiered architectures is well known, it is still an open question how to build a general and scalable platform with well-defined interfaces.
In this sense, the development of the SkiROS platform has been carried out taking into consideration the needs of two key stakeholders: the developer and the end-user. This approach derives from the field of interaction design, where the human's needs are placed as the focal point of the research process. It is also included in the ISO standards [ISO 9241-210; ISO 16982]. Briefly, SkiROS provides: (i) a workspace to support the development process and software integration between different sources and (ii) an intuitive interface to instruct the robot with missions. The main idea is that developers can equip the robots with skills, defined as the fundamental software building blocks operating a modification on the world state. The end-user can configure the robot by defining a scene and a goal; given this information, the robot is able to plan and execute autonomously the sequence of skills necessary to reach the required goal state. The possibility of specifying complex states is tightly coupled with the number of skills that the robot is equipped with and the richness of the robot's world representation. Nevertheless, developing and maintaining a large skill set and a rich knowledge base can be an overwhelming task, even for big research groups. Modular and shareable skills are mandatory to take advantage of the network effect - a phenomenon occurring when the number of developers of the platform grows. When developers start to share skills and knowledge bases, it becomes possible to develop a robot able to understand and reach highly articulated goals. This is particularly achievable in the industrial case, where the skills necessary to fulfill most use-cases have been identified by different researchers as a very compact set [20, 21]. The ROS software development approach is well suited to developing a large variety of different control systems, but lacks support for reusing effective solutions to recurrent architectural design problems.
Consequently, we opted for a software development approach based on App-like plug-ins, which limits the developer to programming a specific set of functionalities, specifically: primitives, skills, conditions, task planners and discrete reasoners. This approach partially locks the architecture of the system, but ensures straightforward re-usability of all the modules. On the other side, we also modularized the core parts of the system into ROS nodes, so that the platform itself does not become a black box with respect to ROS and can be re-used in some of its parts, e.g. the world model. Several iterative processes of trial and refinement have been necessary in order to identify:

• how to structure the system
• the parts of the system that need to be easily editable or replaceable by the developer
• the interface required by the user in order to control and monitor the system, without the necessity of becoming an expert on all its parts

The application to a real use-case has been fundamental to carry out these iterations.

3.1 Packages Structure

SkiROS is a collection of ROS packages that implements a layered architecture. Each layer is a stand-alone package, which shares few dependencies with other layers. The packages in the SkiROS core repository are:

• skiros - the SkiROS meta-package contains ROS launch files, logs, ontologies, saved instances and scripts to install system dependencies
• skiros_resource, skiros_primitive - these packages are still highly experimental and are not taken into consideration in this chapter. The primitives are currently managed together with skills, in the skill layer.
• skiros_skill - contains the skill manager node and the base classes for skills and primitives
• skiros_world_model - contains the world model node, C++ interfaces to the ROS services, utilities to handle ontologies and the base classes for conditions and reasoners
• skiros_common - shared utilities
• skiros_msgs - shared ROS actions, services and messages
• skiros_config - contains definitions of URIs, ROS topic names and other reconfigurable parameters
• skiros_task - the highest SkiROS layer, contains the task manager node and the base class for task planner plug-ins
• skiros_rqt_gui - the Graphical User Interface, a plug-in for the ROS rqt package

Each layer implements core functionalities with plug-ins, using the ROS ‘pluginlib’ system. The plug-ins are the basic building blocks available to the developer to design the robot behavior and tailor the system to his specific needs. This methodology ensures a complete inter-independence of the modules at compile time. Every node has clear ROS interfaces with the others so that, if necessary, any implementation can be replaced. The system is also based on two standards: the Web Ontology Language (OWL) standard [22] for the knowledge base and the Planning Domain Definition Language (PDDL) standard [4] for the planner. The platform architecture is visualized in Fig. 2. The complete platform consists of three ROS nodes - task manager, skill manager and world model - plus a Graphical User Interface (GUI). It can be executed using the command:

roslaunch skiros skiros_system.launch robot_name:=my_robot

where my_robot should be replaced with the desired semantic robot description in the knowledge base (see Sect. 5.1). The default robot model loaded is aau_stamina_robot.
In the skiros_std_lib repository there is an example of the STAMINA use-case specific launch file:

roslaunch skiros_std_lib skiros_system_fake_skills.launch

This launch file runs the SkiROS system with two skill managers: one for the mobile base, loading the drive skill, and one for the arm, loading pick and place skills.

Fig. 2 An overview of the SkiROS architecture, with squares representing ROS nodes and rectangles representing plug-ins. The robot presents an external interface to specify the scene and receive a goal, which can be accessed by the GUI or directly from a factory system. Internally, the task manager dispatches the generated plans to the skill managers in each subsystem of the robot. A skill manager is the coordinator of a subset of capabilities, keeping the world model updated with its available hardware, skills and primitives. The world model is the focal point of the system: all knowledge is collected and shared through it

3.2 World Model

Generally speaking, it is possible to subdivide the robot knowledge into three main domains: continuous, discrete and semantic. Continuous data is extracted directly from sensors. Discrete data are relevant features that are computed from the continuous data and are sufficient to describe a certain aspect of the environment. Semantic data is abstract data that qualitatively describes a certain aspect of the environment. Our world model stores semantic data. It works as a knowledge integration framework and supports the other subsystems' logic reasoning by providing knowledge on any relevant topic. In particular, the robot's knowledge is organised into an ontology that can be easily embedded, edited and extracted from the system. It is defined in the Web Ontology Language (OWL) standard, which ensures greater portability and maintainability. OWL ontology files usually have a .owl extension and are based on XML syntax.
An ontology consists of a set of definitions of basic categories (objects, relations, properties) which describe the elements of the domain of interest, their properties, and the relations they maintain with each other [23]. Ontologies are defined in Description Logic (DL), a specialisation of first-order logic, which is designed to simplify the description of definitions and properties of categories. The knowledge base consists of a terminological component (T-Box), which contains the description of the relevant concepts in the domain and their relations, and an assertional component (A-Box), which stores concept instances (and assertions about those instances). The SkiROS core ontology skiros/owl/stamina.owl gives a structure to organize the knowledge of 3 fundamental categories:

• the objects in the world
• the robot hardware
• the robot's available capabilities (skills and primitives)

The knowledge base can be extended by the developer at will. It is possible to modify the default OWL loading path skiros/owl by specifying the parameter skiros/owl_workspace. All the OWL files found in the specified path are automatically loaded by the world model at boot and merged with the SkiROS knowledge core - which is always loaded first. The world model node can be executed individually with the command:

rosrun skiros_world_model world_model_node

At run-time, the world model allows all the modules to maintain a shared working memory in a world instance, or scene, which forms a database complementary to the ontology database. The scenes are managed in the path specified in the skiros/scene_workspace parameter (default: skiros/scene). It is possible to start the world model with a predefined scene, by specifying the skiros/scene_name parameter. It is also possible to load and save the scene using the ROS service or the SkiROS GUI. An example of the scene tree structure is shown in Fig. 3.

Fig. 3 An example of a possible scene, with the robot visualized in rviz (left) and the corresponding semantic representation (right). The scene includes both physical objects (blue boxes) and abstract objects (orange boxes)

The ontology can be extended automatically by the modules, to learn new concepts in a long-term memory (e.g. to learn a new grasping pose). The modules can modify the A-Box but not the T-Box. It is possible to interface with the world model using the following ROS services and topics:

• /skiros_wm/lock_unlock shared mutex for exclusive access (service)
• /skiros_wm/query_ontology query the world model with SPARQL syntax (service)
• /skiros_wm/modify_ontology add and remove statements in the ontology. New statements are saved in the file learned_concepts.owl in the OWL workspace path. The imported ontologies are never modified (service)
• /skiros_wm/element_get get one or more elements from the scene (service)
• /skiros_wm/element_modify modify one or more elements in the scene (service)
• /skiros_wm/set_relation set relations in the scene (service)
• /skiros_wm/query_model query relations in the scene (service)
• /skiros_wm/scene_load_and_save save or load a scene from file (service)
• /skiros_wm/monitor publishes any change done to the world model, both ontology and scene (topic)

A C++ interface class skiros_world_model/world_model_interface.h that wraps the ROS interface is also available and can be included in every C++ program. This interface is natively available for all skill and primitive plug-ins (see Sect. 5.2).

3.3 Skill Manager

The skill manager is a ROS node that collects the capabilities of a specific robot subsystem. A skill manager is launched with the command:

rosrun skiros_skill skill_manager_node __name:=my_robot

where my_robot has to be replaced with the identifier of the robot in the world model ontology.
Since many of the skill manager's operations are based on the information stored in the world model, it requires the world model node to be running. Each skill manager in the system is responsible for instantiating its subsystem information in the world scene: hardware, available primitives and available skills. Similarly, each primitive and skill can extend the scene information with the results of robot operation or sensing. To see how to create a new robot definition refer to Sect. 5.1. A skill manager, by default, tries to load all the skills and primitives that have been defined in the pluginlib system.5 It is also possible to load only a specific set by defining the parameters skill_list and module_list. For example, the parameters can be set so that the robot my_robot will try to load the pick and place skills, and the arm_motion and locate primitives. If the modules are loaded correctly, they will appear in the world model, associated with the robot name. It is possible to interface with the skill manager using the following ROS services and topics:

• /my_robot/module_command command execution or stop of a primitive (service)
• /my_robot/module_list_query get the primitive list (service)
• /my_robot/skill_command command execution or stop of a skill (service)
• /my_robot/skill_list_query get the skill list (service)
• /my_robot/monitor publishes execution feedback (topic)

A C++ interface class skiros_skill/skill_manager_interface.h that wraps the ROS interface is also available and can be included in every C++ program, as well as a high-level interface class skiros_skill/skill_layer_interface.h to handle multiple skill managers. Note that, on every skill manager, the same module can be executed only once at a time, but different modules can be executed concurrently.

5 http://wiki.ros.org/pluginlib.

3.4 Task Manager

The task manager acts as the general robot coordinator.
It monitors the presence of the robot's subsystems via the world model and uses this information to connect to the associated skill managers. The task manager is the interface for external systems, designed to be controlled by a GUI or the manufacturing execution system (MES) of a factory. The task manager is launched individually with the command:

rosrun skiros_task task_manager_node

It is possible to interface with the task manager using the following ROS services and topics:

• /skiros_task_manager/task_modify add or remove a skill from the list (service)
• /skiros_task_manager/task_plan send a goal to plan a skill sequence (service)
• /skiros_task_manager/task_query get the skill sequence (service)
• /skiros_task_manager/task_exe start or stop a task execution (topic)
• /skiros_task_manager/monitor publishes execution feedback (topic)

3.5 Plugins

The plug-ins are C++ classes, derived from an abstract base class. Several plug-ins can derive from the same abstract class. For example, any skill derives from the abstract class skill base. The following system parts have been identified as modules:

• skill - an action with pre- and postconditions that can be concatenated to form a complete task
• primitive - a simple action without pre- and postconditions, which is concatenated manually by an expert programmer inside a skill. The primitives support hierarchical composition
• condition - a desired world state. It is expressed as a boolean variable (true/false) applied to a property of an element (property condition) or a relation between two elements (relation condition). The plug-in can wrap methods to evaluate the condition using sensors
• discrete reasoner - a helper class necessary to link the semantic object definition to the discrete data necessary for the robot operation
• task planner - a plug-in to plan the sequence of skills given a goal state. Any planner compatible with PDDL and satisfying the requirements described in Sect. 6 can be used

These software pieces are developed by programmers during the development phase and are inserted as plug-ins into the system.

3.6 Multiple Robots Control

SkiROS can be used in multi-robot systems in two ways. In the first solution, each skill manager is used to represent a robot in itself, and the task manager is used to plan and dispatch plans to each one of them. This solution is simple to implement, and the robots have a straightforward way to share information via the single shared world model. The main limitation is that skill execution is, at the moment, strictly sequential: the task manager will move the robots one at a time. The second solution consists in implementing a high-level mission planner and using it to dispatch goals to the SkiROS system running on each of the robots. The latter solution is the one currently used for the integration in the PSA factory system [24].

4 User Interface

To allow any kind of user to run and monitor the execution of the autonomous robot, the support of a clean and easy-to-use UI is necessary. At the moment, the interaction between human and robot is based on a graphical UI, which in the future can be extended with more advanced and intuitive means of interaction, like voice or motion capture. The full GUI is presented in Fig. 4. It consists of 4 tabs:

• Goal - from this tab it is possible to specify the desired goal state and trigger action planning
• Task - this tab visualizes the planned skill sequence and allows editing it
• Module - this tab allows running modules (skills and primitives). It is principally used for testing purposes
• World model - from this tab it is possible to load, edit and save the world scene

Fig. 4 The full GUI showing the task tab

The GUI is structured for different levels of user expertise. The most basic user is going to use the Goal tab and the World model tab.
First of all, he can build up a scene, then specify goals, plan a task and run or stop the execution. More advanced users can edit the planned task or build it by themselves from the Task tab. System testers can use the Module tab for module testing.

4.1 Edit, Execute and Monitor the Task

From the task tab presented in Fig. 4 it is possible to edit a planned task or create a new one from scratch. The menu on the left allows adding a skill. First, the right robot must be selected from the top bar (e.g. /aau_stamina_robot). After this, a skill can be selected from the menu (e.g. place_fake). The skill must be parametrized appropriately and can then be added to the task list by clicking the ‘Add’ button. The user can select each skill on the task list and remove it with the ‘Remove skill’ button. On the top bar there are two buttons to execute and stop the task execution and the ‘Iterate’ check box, which can be selected to repeat the task execution in a loop (useful for testing a particular sequence). At the bottom, the execution output of all the modules is visualized, with the fields:

• Module - the module name
• Status - this can be: started, running, preempted, terminated or error
• Progress code - a positive number related to the progress of the module execution. A negative number indicates an error
• Progress description - a string describing the progress

4.2 Plan a Task

From the goal tab it is possible to specify the desired goal state and automatically generate a skill sequence, which will then be available in the task tab. The goal is expressed as a set of conditions required to be fulfilled. These can be chosen from the set of available ones. The available condition set is calculated at run-time depending on the robots' skill set and can be updated using the ‘Refresh’ button. It is possible to specify as goals only conditions that the robot can fulfill with its skills or, in other words, conditions that appear in at least one of the skills.
In the example in Fig. 5, we require an abstract alternator to be in Kit-9 and we require the robot to be at LargeBox-3. By abstract we refer to individuals that are defined in the ontology, but not instantiated in the scene. Specifying an abstract object means specifying any object that matches the generic description. The InKit condition allows abstract types, whereas RobotAtLocation can be applied only to instantiated objects (objects in the scene). For more details about conditions the reader can refer to Sect. 5.3.

Fig. 5 The goal tab

4.3 Module Testing

From the Modules tab it is possible to execute primitives. The procedure is exactly the same as presented previously for the skills, except that the primitives are executed individually. The module tab becomes handy to test the modules individually or to set up the robot, e.g. to teach a new grasping pose or to move the arm back to the home position (Fig. 6).

Fig. 6 The Modules tab

4.4 Edit the Scene

From the world model tab (Fig. 7) it is possible to visualize and edit the world scene. On the left, the scene is visualized in a tree structure. The limits of the tree structure do not allow visualizing the whole semantic graph, which can contain several relations between the objects. We opted to limit the visualization to a scene graph, a general data structure commonly used in modern computer games to arrange the logical and spatial representation of a graphical scene. Therefore, only the spatial relations (contain and hasA) are visible, starting from the scene root node. On the right there are buttons to add, modify and remove objects in the scene. When an element in the tree is selected, its properties are displayed in the box on the right.

Fig. 7 The World Model tab

In the figure, for example, we are displaying the properties of the LargeBox 76. It is possible to edit its properties by clicking on the ‘Modify object’ button.
This opens a pop-up window where properties can be changed one by one, removed (by leaving the field blank) or added with the ‘+’ button at the bottom. It is also possible to add an object by clicking on the ‘Add object’ button. When an object is added to the scene, it becomes a child of the selected element in the tree. The properties and objects are limited to the set specified in the ontology. This not only helps to avoid input mistakes, but also gives the user intuitive feedback on what it is possible to put in the scene. Once the scene is defined, the interface on the bottom left allows saving the scene for future use.

5 Development

In this section we discuss how to develop an action context for the robot planning, using as an example a simplified version of the kitting planning context. The development process consists of two steps: specifying the domain knowledge in the ontology and developing the plug-ins specified in Sect. 3.5. Some plug-ins and templates are available together with the SkiROS core package (see Sect. 1.1).

5.1 Edit the Ontology

Before starting to program, an ontology must be defined to describe the objects in the domain of interest. This regards in particular:

• Data - define which kinds of properties can be related to elements
• Concepts - define the set of types expected to be found, using a taxonomy: a hierarchical tree expressing the notion of types and subtypes
• Relations - define a set of relations between elements
• Individuals - some predefined instances with associated data, e.g. a specific device or a specific robot

The world model is compliant with the OWL W3C standard, for which several tools for managing ontologies exist. A well-known open-source program is Protégé (http://protege.stanford.edu/). To install and use it, the reader can refer to one of the several guides available on the internet.6 As introduced in Sect. 3.2, it is possible to have several custom ontologies in the defined OWL path; these get automatically loaded and merged with the SkiROS knowledge core at boot. The reader can refer to the uav.owl file in the skiros_simple_uav repository to see a practical example of how to create an ontology extension. The launch file in the same package provides an example of how to load it. Note in particular the importance of providing the right ontology prefix in the launch file and, in general, when referencing entities in the ontology (Fig. 8). The entities defined in the ontology are going to constrain the development of skills and primitives. For example, a place skill will require as input only objects that are subtypes of Container. To avoid the use of strings in the code, which are impossible to track down, a utility has been implemented in the skiros_world_model package to generate an enum directly from the ontology. It is possible to run this utility with the command:

rosrun skiros_world_model uri_header_generator

This utility updates skiros_config/declared_uri.h, automatically included in all modules. Using the generated enums for the logic queries produces a compile-time error if a name changes or is missing.

Create a new robot definition

The semantic robot structure is the information necessary for the skill manager to manage the available hardware. E.g., if the robot has a camera mounted on the arm, it can move the camera to look better at an object. The robot and its devices must be described in detail with all the information that the developer wants to have stated explicitly. The user should use Protégé to create an ontology with an individual for each device and an individual for the robot, collecting the devices using the hasA relation.
To give a concrete example, let's consider the aau_stamina_robot, the smaller STAMINA prototype used for laboratory tests:

• NamedIndividual: aau_stamina_robot
• Type: Robot
• LinkedToFrameId: base_link
• hasA → top_front_camera
• hasA → top_left_camera
• hasA → top_right_camera
• hasStartLocation → unknown_location
• hasA → ur10

6 e.g. http://protegewiki.stanford.edu/wiki/Protege4GettingStarted.

Fig. 8 The taxonomy of spatial things for the kitting application

The robot has a LinkedToFrameId property, related to the AauSpatialReasoner, and a start location, used for the drive skill. For more information about these properties, refer to the plug-in descriptions. The robot hardware consists of 3 cameras and a robotic arm (ur10). If we expand the ur10 description we find:

• NamedIndividual: ur10
• Type: Arm
• MoveItGroup: arm
• DriverAddress: /ur10
• MotionPlanner: planner/plan_action
• MotionExe: /arm_controller/follow_joint_trajectory
• hasA → arm_camera
• hasA → rq3

The arm has an additional camera and an end-effector, rq3. Moreover, it has useful properties for the configuration of MoveIt. Thanks also to the simple parametrization of MoveIt, we have been able to port the same skills to 3 heterogeneous arms (UR, KUKA and Fanuc) by changing only the arm description as presented here.

5.2 Create a Primitive

A SkiROS module is a C++ software class based on the standard ROS plug-in system. In particular, those who are experienced in programming ROS nodelets7 will probably find it straightforward to program SkiROS modules. Developing a module consists of programming a C++ class derived from an abstract template. The basic module, or primitive, usually implements an atomic functionality like opening a gripper, locating an object with a camera, etc. These functionalities can be reused in other primitives or skills, or executed individually from the module tab. A primitive inherits from the template defined in skiros_skill/module_base.h.
It requires specifying the following virtual functions:

//! \brief personalized initialization routine
virtual bool onInit() = 0;
//! \brief module main
virtual int execute() = 0;
//! \brief specialized pre-preempt routine
virtual void onPreempt();

The onInit() function is called when the skill manager is started, to initialize the primitive. The execute() function is called when the primitive is executed from the GUI or called by another module. The onPreempt() function is called when the primitive is stopped, e.g. from the GUI. All primitives have protected access to a standard set of interfaces:

//! \brief Interface with the parameter set
boost::shared_ptr<ParamHandler> getParamHandler();
//! \brief Interface with the skiros world model
boost::shared_ptr<WorldModelInterface> getWorldHandler();
//! \brief Interface to modules
boost::shared_ptr<SkillManagerInterface> getModulesHandler();
//! \brief Interface with the ROS network
boost::shared_ptr<ros::NodeHandle> getNodeHandler();

The ParamHandler allows defining and retrieving parameters. The WorldModelInterface allows interacting with the world model. The SkillManagerInterface allows the primitive to interact with other modules. The primitive's parameter set must be defined in the class constructor and never modified afterwards.

7 http://wiki.ros.org/nodelet.

World model interface

Modules' operations apply over an abstract world model, which has to be constantly matched to the real world. There is no space in this chapter to describe the world model interface in detail. Nevertheless, it is important to present the atomic world model data type, defined as element. In fact, the element is the most common input parameter for a module.
An element structure is the following:

// Unique Identifier of the element in the DB
int id;
// Individual identifier in the ontology
std::string label;
// Category identifier in the ontology
std::string type;
// Last update time stamp
ros::Time last_update;
// A list of properties (color, pose, size, etc.)
std::map<std::string, skiros_common::Param> properties;

The first 3 fields are necessary to relate the element to the ontology (label and type) and the scene database (id). The properties list contains all the relevant information associated with the object.

Parameters

Every module relies on a dynamic set of parameters to configure its execution. The parameters are divided into the following categories:

• online - parameters that must always be specified
• offline - usually configuration parameters with a default value, such as the desired movement speed, grasp force, or stiffness of the manipulator
• optional - like an offline parameter, but can be left unspecified
• hardware - indicates a robot device the module needs to access. This can be changed at every module call (e.g. to locate with different cameras)
• config - like a hardware parameter, but specified when the module loads and not changeable afterwards (e.g. an arm motion module bound to a particular arm)
• planning - a parameter necessary for pre- and postcondition checks in a skill. This is set automatically and does not appear in the UI

To understand the concept, we present a code example. First, we show how to insert a parameter:

getParamHandler()->addParam<double>("myKey", "My description", skiros_common::online, 3);

Here we are adding a parameter definition, specifying in order: the key, a brief description, the parameter type, and the vector length. The template argument has to be specified explicitly too.
In the above example we define an online parameter as a vector of 3 doubles. The key myKey can be used at execution time to access the parameter value:

    std::vector<double> myValue = getParamHandler()->getParamValues<double>("myKey");

The parameter state is defined as initialized until its value gets specified; after this the state changes to specified. A module cannot run until all unspecified parameters are specified. It is also possible to define parameters with a default value:

    getParamHandler()->addParamWithDefaultValue("myKey", true, "My description", skiros_common::offline, 1);

In this case, the parameter will be initialized to true. When an input parameter is a world element, it is possible to apply a special rule to limit the input range. In fact, sometimes a module requires a precise type of element as input. For example, a pick skill can pick up only elements of type Manipulatable. In this case, it is possible to use a partial definition. For example:

    // '/skiros_std_lib/skiros_lib_dummy_skills/src/pick.cpp' L.34:
    getParamHandler()->addParamWithDefaultValue("object", skiros_wm::Element("Manipulatable"), "Object to pick up");

In this example, only subtypes of Manipulatable will be valid as input for the object parameter. Every module can have a customized amount of parameters. The parameters support any data type that can be serialized in a ROS message, i.e. all the standard data types and all the ROS messages. ROS messages require a quick but non-trivial procedure to be included in the system, which is excluded from the chapter for space reasons.

Invoke modules Each module can recursively invoke other modules' execution. For example, the pick skill invokes the locate module with the following:

    1 // '/skiros_std_lib/skiros_lib_dummy_skills/src/pick.cpp' L.155:
    2 skiros::Module locate(getModulesHandler(), "locate_fake", this->moduleType());
    3 locate.setParam("Camera", camera_up_);
    4 locate.setParam("Container", container_);
    5 locate.exe();
    6 locate.waitResult();
    7 v = getWorldHandle()->getChildElements(container_, "", objObject.type());

Here, line 2 instantiates a proxy class for the module named locate_fake. Lines 3 and 4 set the parameters and line 5 requests the execution. The execution is non-blocking, so it is possible to call several modules in parallel. In this case, we wait for the execution to end and then retrieve the list of located objects.

5.3 Create a Skill

A skill is a complex type of module, which inherits from the template defined in skiros_skill/skill_base.h. A conceptual model of a complete robot skill is shown in Fig. 9. A skill extends the basic module definition with the presence of pre- and postcondition checks. By implementing pre- and postcondition checking procedures the skills themselves verify their applicability and outcome. This enables the skill-equipped robot to alert an operator or task-level planner if a skill cannot be executed (precondition failure) or if it did not execute correctly (postcondition failure). A formal definition of pre- and postconditions is not only useful for robustness, but also for task planning, which utilizes the preconditions and prediction to determine the state transitions for a skill, and can thus select appropriate skills to be concatenated for achieving a desired goal state. A skill also adds two more virtual functions, preSense() and postSense(), where sensing routines can be called before evaluating pre- and postconditions. Internally, a skill results in a concatenation of primitives, which can be both sequential and parallel.
The planned sequence of skills forms the highest level of execution, which is dynamically concatenated and parametrized at run-time. The further hierarchical expansion into primitives is scripted by the developer, but still modular, so that its parts can be reused in different pipelines or replaced easily. Still, if the developer has some code that he does not want to modularize, e.g. a complete pick pipeline embedded in a ROS action, it is allowed, but not recommended, to implement a skill as a single block that only makes the action call.

Fig. 9 The model of a robot skill [21]. Execution is based on the input parameters and the world state, and the skill effectuates a change in the world state. The checking procedures before and after execution verify that the necessary conditions are satisfied

Create a condition Preconditions and postconditions are based on sensing operations and expected changes to the world model from executing the skill. The user defines the pre- and postconditions in the skill onInit() function, after the parameter definitions. The conditions can be applied only to input parameters that are world model elements. While some ready-to-use conditions are available in the system, it is also possible to create a new condition by deriving it from the condition templates in skiros_wm/condition.h. There are two base templates: ConditionRelation and ConditionProperty. The first puts a condition on a relation between two individuals. The second puts a condition on a property of an individual. When implementing a new condition, two virtual functions have to be implemented:

• void init() - defines the property (or the relation) on which the condition is applied
• bool evaluate() - returns true if the condition is satisfied or false otherwise

Once the condition is defined, every skill can add it to its own list of pre- or postconditions in the onInit() function.
For example, in the pick skill:

    // '/skiros_std_lib/skiros_lib_dummy_skills/src/pick.cpp' L.62:
    addPrecondition(newCondition("RobotAtLocation", true, "Robot", "Container"));

This command instantiates a new condition RobotAtLocation between Robot and Container and adds it to the list of preconditions. Note that Robot and Container refer to the keys defined in the input parameters. In this case, the skill requires the Robot parameter to have a specific relation with the Container parameter. If this relation doesn't hold, the skill returns a failure without being executed.

5.4 Create a Discrete Reasoner

The world's elements are agnostic placeholders where any kind of data can be stored and retrieved. Their structure is general and flexible, but this flexibility requires that no data-related methods are implemented. The methods are therefore implemented in another code structure, called a discrete reasoner, which is imported into the SkiROS system as a plug-in. Any reasoner inherits from the base class skiros_world_model/discrete_reasoner.h. The standardized interface allows to use the reasoners as utilities to (i) store/retrieve data to/from elements and (ii) reason about the data to compare and classify elements at a semantic level.

Spatial reasoner A fundamental reasoner for manipulation is the spatial reasoner, developed specifically to manage position and orientation properties. The AauSpatialReasoner, an implementation based on the standard 'tf' library of ROS, is included in the skiros_std_lib/skiros_lib_reasoner package. An example of the reasoner use is the following:

    1 container_ = getParamHandler()->getParamValue<skiros_wm::Element>("Container");
    2 skiros_wm::Element object;
    3 object.type() = concept::Str[concept::Compressor];
    4 object.storeData(tf::Vector3(0.5, 0.0, 0.0), data::Position, "AauSpatialReasoner");
    5 tf::Quaternion q; q.setRPY(0.0, 0.0, 0.0);
    6 object.storeData(q, data::Orientation);
    7 object.storeData(string("map"), data::BaseFrameId);
    8 tf::Pose pose = object.getData<tf::Pose>(data::Pose);
    9 std::set<std::string> relations = object.getRelationsWrt(container_);

In this example, we first get the container variable from the input parameters. Then we create a new object instance and use the AauSpatialReasoner to store a position, an orientation and the reference frame. Note that it is necessary to specify the reasoner only on the first call. At line 8, we get back the object pose (a combination of position and orientation). At line 9 we use the reasoner to calculate semantic relations between the object itself and the container. The relations set will contain predicates like front/back, left/right, under/over, etc. It is also possible to get relations with associated literal values for more advanced reasoning. It is up to the developer to define the supported data structures in I/O and which relevant semantic relations are extracted.

Example To give an example of some of the concepts presented in this section, let's consider the code of a skill to start the flight of a UAV (note: the code is slightly simplified w.r.t. the real file):

    1  // '/skiros_simple_uav/simple_uav_skills/src/flyToAltitude.cpp':
    2  class FlyAltitude : public SkillBase
    3  {
    4  public:
    5      FlyAltitude()
    6      {
    7          this->setSkillType("Drive");
    8          this->setVersion("0.0.1");
    9          getParamHandle()->addParamWithDefaultValue("Robot", skiros_wm::Element(concept::Str[concept::Robot]), "Robot to control", skiros_common::online);
    10         getParamHandle()->addParamWithDefaultValue("Altitude", 1.0, "Altitude to reach (meters)", skiros_common::offline);
    11     }
    12     bool onInit()
    13     {
    14         addPrecondition(newCondition("uav:LowBattery", false, "Robot"));
    15         addPostcondition("NotLanded", newCondition("uav:Landed", false, "Robot"));
    16         return true;
    17     }
    18     int preSense()
    19     {
    20         skiros::Module monitor(getModulesHandler(), "monitor_battery", this->skillType());
    21         monitor.setParam("Robot", getParamHandle()->getParamValue<skiros_wm::Element>("Robot"));
    22         monitor.setParam("f", 10.0);
    23         monitor.exe();
    24         return 1;
    25     }
    26     int execute()
    27     {
    28         double altitude = getParamHandle()->getParamValue<double>("Altitude");
    29         this->setProgress("Going to altitude " + std::to_string(altitude));
    30         ros::Duration(2.0).sleep(); // Fake an execution time
    31         setAllPostConditions();
    32         return 1;
    33     }
    34 };
    35 // Export
    36 PLUGINLIB_EXPORT_CLASS(FlyAltitude, skiros_skill::SkillBase)

Let's go through the code line by line:

• Constructor - lines 7-8 define constants describing the module itself. Lines 9-10 define the required parameters.
• onInit() - line 14 adds the precondition of having a charged battery, line 15 adds a postcondition of having the robot no longer on the ground. Note the prefix 'uav:' in the condition names; this is because the conditions are defined in the uav.owl ontology.
• preSense() - invokes the monitor_battery module, to update the condition of the battery.
• execute() - at line 28 the parameter Altitude is retrieved. Line 29 prints out a progress message. Lines 30 and 31 stand in place of a real implementation; in particular, the setAllPostConditions() command sets all postconditions to true, in order to simulate the execution at a high level. Line 32 returns a positive value, to signal that the skill terminated correctly.

At the very end, line 36 exports the plug-in definition.

6 Task Planner

The Task Planner's function is to provide a sequence of instantiated skills that, when carried out successfully, will lead to a desired goal state that has been specified by the user or some other automated process. For example, the goal state may be that a certain object has been placed in a kit that is being carried by the robot, and the returned sequence of skills (the plan) may be to drive to the location where the object can be found, to pick the object, and then to place the object in the kit. Of course, real goals and plans can be much more complicated than this, limited only by the relations that exist in the world model and the skills that have been defined.

This section discusses the general translation algorithm to the Planning Domain Definition Language (PDDL) as well as the additions tailored for the STAMINA use-case. PDDL is a well-known and popular language for writing automated planning problems and, as such, is supported by a large array of off-the-shelf planners. The PDDL files created with the basic Task Planner options only use the types requirement of the original PDDL 1.2 version and are therefore suitable for use with almost all existing planners.

Fig. 10 An overview of the Task Planning pipeline. The inputs are the set of goals and the world scene. The output is a sequence of instantiated skills. The solid arrows represent execution flow. The dashed arrows pointing towards the data structures represent writing or modification while the dashed arrows pointing away from the data structures represent read access
Some extra features of the Task Planner (see Sect. 6.3) can be employed which introduce fluents and durative actions, and therefore must be used with a PDDL 2.1 compatible temporal planner. The Task Planner is built as a plug-in to SkiROS that generates generic PDDL, so that the developer can insert whichever external planning algorithm they wish to use.

6.1 Overview and Usage

The Task Planner exists as a plug-in for SkiROS (skiros_std_lib/task_planners/skiros_task_planner_plugin) that creates a PDDL problem using the interface provided in skiros_task/pddl.h. The Task Planner contains two constants, robotParameterName and robotTypeName, which default to Robot and Agent respectively and must be consistent with the robot names used in the skill and world model definitions. Additionally, pddl.h contains a const bool STAMINA that toggles the use of specific extra features (explained in Sect. 6.3). Operation of the Task Planner is split into five main functions, as shown in Fig. 10 and described below:

• initDomain - This function causes the Task Planner to translate the skill information in the world model into the planning actions and predicates found in the planning domain. While this translation is based on the preconditions and postconditions defined in the skill, it is not direct, and details of the necessary modifications are given below.
• setGoal - This function sets the goal, or set of goal predicates, to be planned for. These can be provided as SkiROS elements or as PDDL strings.
• initProblem - This function takes no arguments and tells the Task Planner to query the world model to determine the initial state of the planning problem. It must be called after the previous two functions, as it relies on them to work out which parts of the world model are relevant to the planning problem.
• outputPDDL - This function prints out a PDDL domain file domain.pddl and problem file p01.pddl in the task planner directory.
• callPlanner - This function invokes an external planner that takes the previously output PDDL files as input and returns a plan. It must be implemented by the user for whichever external planner they wish to use. The planner has been tested with Fast Downward8 for the general case and Temporal Fast Downward9 for the STAMINA use-case. Any plan found must then be converted to a vector of parameterised skills, so any extra parameters created for internal use by the planner must be removed at this point.

The user only needs to specify the external call to the planner in the callPlanner function. The setGoal function is the only one that takes arguments and requires a goal set comprised of SkiROS elements or PDDL strings. The other functions interface directly with the world model and require no arguments.

6.2 From Skills to PDDL

The design of the SkiROS skill system facilitates the translation to a searchable A.I. planning format. However, a direct translation from skills to planning actions is not possible, or even desirable. The preconditions and postconditions of skills should be definable by a non-planning expert based on the precondition and postcondition checks required for safe execution of the skill, along with any expected changes to the world model. Therefore, the translation algorithm is required to do some work to generate a semantically correct planning problem. We will briefly discuss two important points: how it deals with heterogeneous robots, and implicitly defined transformations.

Heterogeneous Robots There may be multiple robots in the world, each with different skill sets. For example, in the STAMINA use-case, the mobile platform is defined as a separate robot to the robotic arm. The mobile platform has the drive skill while the gripper has the pick and place skills.

8 http://www.fast-downward.org/.
9 gki.informatik.uni-freiburg.de/tools/tfd/.
To ensure that each skill 's' can only be performed by the relevant robot, a 'can_s' predicate is added as a precondition to each action, so that the action can only be performed if 'can_s(r)' is true for robot 'r'. 'can_s(r)' is then added to the initial state of the problem for each robot 'r' that has skill 's'. This way the planner can plan for multiple robots at a time.

Implicit Transformations The skill definitions may include implicit transformation assumptions that need to be made explicit for the planner. For example, the STAMINA drive skill is implemented with the following condition:

    addPostcondition("AtTarget", newCondition("RobotAtLocation", true, "Robot", "TargetLocation"));

That is, only a single postcondition check: that the robot is at the location it was meant to drive to. For updating the SkiROS world model, setting the location of the robot to the TargetLocation will automatically remove it from its previous location. However, the planner needs to explicitly encode the deletion of the previous location, otherwise the robot will end up in two places at once in its representation. Of course, it is possible to add the relevant conditions to the skill definition. However, this is not ideal, as it would mean that the robot performs verification that it is not at the previous location at the end of the drive skill. It would also prohibit the robot from driving from its location to the same location, as the new postcondition check would fail. Whether this is a problem or not, allowing the Task Planner to automatically include explicit updates reduces the pressure on the skill writer to produce a skill definition that is correct in terms of both the robot and the internal planning representation. The transformation is performed in a general manner that works for all spatial relations. The skills in the planning library are iterated over and checked against the spatial relations defined in SkiROS.
If spatial relations are found to be missing in either the preconditions or delete effects of the action (i.e., no predicate with matching relation and subject, as in the case of the drive skill), then a new predicate with the same spatial relation and the same object, but a new subject variable, is created and added to the preconditions and delete effects of the action. If a related spatial relation exists in just one of the preconditions and delete effects, then it is added (with the same subject) to the other. More details of the planning transformation algorithm can be found in [25].

6.3 Additional Features

The Task Planner contains additional features used in the STAMINA problem instance that can be enabled or disabled depending on user preference. These extra features include sequence numbers for ordering navigation through the warehouse, for which the planner employs numeric fluents and temporal actions, and abstract objects, for which the planner must add internal parameters to ensure correct execution.

Sequence Numbers In the use-case for STAMINA, the robot navigates around a warehouse following a strict path. This is enforced, following the previous setup, to ensure that human workers will always exit in the same order they entered, therefore preserving the output order of the kits they are creating. The locations that it is possible to navigate to are given a sequence number (by a human operator) and these numbers are used to determine the shortest path based on the particular parts in the current order.

Abstract Objects In the world model for STAMINA, parts are not instantiated until they are actually picked up. The pick skill is called on an abstract object because it is not known before execution which of the possibly numerous objects in a container will be picked up. On the other hand, the place skill is often called for an instantiated object, as, at the time of execution, it is a particular instance of an object that is in the gripper (Fig. 11).
7 Application Example

7.1 Overview

As an example application, we focus on a logistic operation and specifically consider the automation of an industrial kitting operation. Such an operation is common in a variety of manufacturing processes. It involves the navigation of a mobile platform to various containers, from which objects are picked and placed in the corresponding compartments of a kitting box. Thus, in order to achieve this task we have developed three skills, namely drive, pick and place, which consist of a combination of primitives such as:

• locate - roughly localize an object on a flat surface using a camera
• object registration - precisely localize an object using a camera
• arm motion - move the arm to a desired joint state or end-effector pose
• kitting box registration - localization of the kitting box using a camera
• gripper ctrl - open and close the gripper

Fig. 11 The two different hardware setups that have been used for evaluation

Ontology The ontology that represents our specific kitting domain has been defined starting from the general ontology presented in Fig. 8. We extended the ontology with the types of manipulatable objects (starter, alternator, compressor, etc.) and boxes (pallet, box, etc.). Second, the following set of conditions has been defined: FitsIn, EmptyHanded, LocationEmpty, ObjectAtLocation, Carrying, Holding, RobotAtLocation.

Learning primitives Learning primitives are needed to extend the robot's knowledge base with some important information about the environment. The learning primitives, in our case, are:

• object train - record a snapshot and associate it to an object type specified by the user
• grasping pose learn - learn a grasping pose w.r.t. an object snapshot
• placing pose learn - learn a placing pose w.r.t. a container
• driving pose learn - learn a driving pose w.r.t. a container

The primitives are executed from the GUI module tab during an initial setup phase of the robot.
The snapshots and the poses taught during this phase are then used for all subsequent skill executions.

7.2 Skills

The Drive Skill is the simplest skill. It is based on the standard ROS navigation interface, therefore the execution consists of an action call based on the move_base_msgs::MoveBaseGoal message. The details of the navigation implementation are out of the scope of this chapter, but more implementation details can be found in [26].

The Picking Skill pipeline is organized in several stages; it

1. detects the container (pallet or box) using one of the workspace cameras,
2. moves the wrist camera over the detected container to detect and localize parts, and
3. picks up a part using predefined grasps.

Low cycle times of roughly 40-50 s are achieved by using particularly efficient perception components and pre-computing paths between common poses to save motion planning time. Figure 15 shows examples of part picking using different mobile manipulators in different environments. The picking skill distinguishes two types of storage containers: pallets, in which parts are well separated, and boxes, in which parts are stored in unorganized piles. Internally, the two cases are handled by two different pipelines which, however, follow the same three-step procedure as mentioned above. In case of pallets, we first detect and locate the horizontal support surface of the pallet and then segment and approach the objects on top of the pallet for further object recognition, localization and grasping [3]. For boxes, we first locate the top rectangular edges of the box and then approach an observation pose above the box center to take a closer look inside and to localize and grasp the objects in the box [27]. In the following, we will provide further details about these two variants of the picking pipeline and how the involved components are implemented as a set of primitives in the SkiROS framework.
An example of this three-step procedure for grasping a part from a transport box is shown in Fig. 12.

The Placing Skill is responsible for reliable and accurate kitting of industrial parts in confined compartments [28]. It consists of two main modules, the arm motion and the kit locate. The first is responsible for reliable planning and execution of collision-free trajectories subject to task-dependent constraints, while the kit locate is responsible for the derivation of the kitting box pose. The high precision and reliability of both is crucial for a successful manipulation of the objects in the confined compartments of the kitting box.

7.3 Primitives

The locate primitive is one of the perception components in the picking pipeline and locates the horizontal support surfaces of pallets. In addition, it segments the objects on top of this support surface, selects the object to grasp (the object candidate closest to the pallet center) and computes an observation pose to take a closer look at the object for recognition and localization.

Fig. 12 Example of grasping a tube connector from a transport box (a): after detecting the box and approaching the observation pose (b), the part is successfully grasped (c)

The detection of the horizontal support surface is based on very fast methods for computing local surface normals, extracting points on horizontal surfaces and fitting planes to the extracted points [3]. In order to find potential object candidates, we then select the most dominant support plane, compute both convex hull and minimum area bounding box, and select all RGB-D measurements lying within these polygons and above the extracted support plane. We slightly shrink the limiting polygons in order to neglect measurements caused by the exterior walls of the pallet.
The selected points are clustered (to obtain object candidates), and the cluster closest to the center of the pallet is selected to be approached first. After approaching the selected object candidate with the end effector, the same procedure is repeated with the wrist camera in order to separate potential objects from the support surface. Using the centroid of the extracted cluster as well as the main axes (as derived from principal component analysis), we obtain a rough initial guess of the object pose. Thanks to the subsequent registration stage, it does not matter if objects are not well segmented (connected in a single cluster) or if the initial pose estimate is inaccurate.

The registration primitive accurately localizes the part and verifies whether the found object is the correct part or not. The initial part detection only provides a rough estimate of the position of the object candidate. In order to accurately determine both position and orientation of the part, we apply a dense registration of the extracted object cluster against a pre-trained model of the part. We use multi-resolution surfel maps (MRSMAPs) as a concise dense representation of the RGB-D measurements on an object [29]. In a training phase, we collect one to several views of the object, whose view poses can be optimized using pose graph optimization techniques. The final pose refinement approach is then based on a soft-assignment surfel registration. Instead of considering each point individually, we map the RGB-D image acquired by the wrist camera into an MRSMAP and match surfels. This needs several orders of magnitude fewer map elements for registration. Optimization of the surfel matches (and the underlying joint data-likelihood) yields the rigid 6 degree-of-freedom (DoF) transformation from scene to model, i.e., the pose of the object in the coordinate frame of the camera. After pose refinement, we verify that the observed segment fits the object model for the estimated pose.
We can thus detect wrong registration results, e.g., if the observed object and the known object model do not match or if a wrong object has been placed on the pallet. In such cases the robot stops immediately and reports to the operator (a special requirement of the end-user). For the actual verification, we establish surfel associations between segment and object model map, and determine the observation likelihood similarly as in the object pose refinement. In addition to the surfel observation likelihood, we also consider occlusions by model surfels of the observed RGB-D image as highly unlikely. Such occlusions can be efficiently determined by projecting model surfels into the RGB-D image given the estimated alignment pose and determining the difference in depth at the projected pixel position. The resulting segment observation likelihood is compared with a baseline likelihood of observing the model MRSMAP by itself. We determine a detection confidence from the re-scaled ratio of both log likelihoods, thresholded between 0 and 1.

The arm motion primitive exploits many capabilities of the MoveIt software10 such as the Open Motion Planning Library (OMPL), a voxel representation of the planning scene and interfaces with the move-group node. In order to achieve motions that are able to successfully place an object in compartments which in many cases are very confined, we have developed a planning pipeline introducing two deterministic features which can be seen as planning reflexes. The need for this addition arises from the stochasticity of the OMPL planners, which makes them unstable, as shown in preliminary benchmarking tests [28]. The Probabilistic Roadmap (PRM), Expansive-Spaces Tree (EST) and Rapidly exploring Random Tree (RRT) algorithms were evaluated. Based on the results, PRM performs better on the kitting task. However, its success rate is not sufficient for industrial applications.
Another deterrent factor in motion planning is the Inverse Kinematics (IK) solutions that derive from MoveIt. Although there exist multiple IK solutions for a given pose, the MoveIt functions provide only one, which is not always the optimal one, i.e. the closest to the initial joint configuration. We deal with this problem by sampling multiple solutions from the IK solver with different seeds. The IK solution whose joint configuration is closest to the starting joint configuration of the trajectory is used for planning. The developed planning pipeline achieves repeatable and precise planning by introducing two planning reflexes, the joint and operational space linear interpolations. The first ensures that the robot's joints rotate as little as possible and can be seen as an energy minimization planner. This happens by linearly interpolating, in the joint space of the robot, between the starting and the final configurations. Additionally, the operational space interpolation results in a linear motion of the end-effector in its operational space. This is achieved by performing a linear interpolation between the starting and final poses. Furthermore, spherical linear interpolation (slerp) is used for interpolating between orientations. This linear motion is desirable for going into the narrow compartments of the kitting box. The path that is created from the two reflexes is evaluated for collisions, constraint violations and singularities. If any of these occur, then the pipeline employs the more sophisticated PRM algorithm for solving the planning problem.

The kitting box registration primitive is responsible for locating the kitting box in whose compartments the grasped objects have to be placed.
Usually, the pose estimation of the kitting box has to be executed whenever a new kitting box arrives in the placing area of the robot. However, since slight changes in its position or orientation can occur during the placing task, the kitting box registration can also be used as part of any skill, e.g. at the beginning of the placing skill.

10 http://moveit.ros.org.

For the pose estimation of the box, an additional workspace camera with a top view of the kitting area is used. Due to the pose of the camera and objects already placed in the kitting box, most of the box is not visible except for the top edges. Additionally, parts of the box edges are distorted or missing in the 3D data because of sensor noise. The kitting box registration therefore implements an approach based on 2D edge detection on a cleaned and denoised depth image. It applies a pixelwise temporal voting scheme to generate potential points belonging to the kitting box. After mapping these back to 3D space, a standard ICP algorithm is used to find the kitting box pose (cf. Fig. 13). A more detailed description of the algorithm and a performance evaluation are presented in [28].

7.4 Results

We have evaluated the presented architecture with two robotic platforms that operate in different environments: a stationary Universal Robots UR10 that operates within a lab environment, and a Fanuc M-20iA that is mounted on a mobile platform and operates within an industrial environment.

Fig. 13 The 3D scene shows a highly cluttered kitting box with the final registered kitting box (blue) overlaid. The images at the bottom (left to right) show different steps of the registration process: a raw RGB image, b normalized depth image, c after edge extraction with box candidates overlaid in green, d final voting space for box pixels

Both robotic manipulators are equipped with a Robotiq 3-Finger Adaptive Gripper and a set of RGB-D sensors.
The drivers used for control of both robotic manipulators and the gripper are available from the ROS-Industrial project.11

Kitting Operation with UR10 We evaluate the whole kitting task on the UR10 robot with various manipulated objects. The task was planned using the Graphical User Interface presented in Sect. 4 and is a concatenation of the pick and place skills. Figure 14 illustrates the key steps of the kitting task. Detailed results on the performance of the kitting operation can be found in [2, 28].

Kitting Operation with Fanuc on a mobile platform In addition to the UR10 test case, where the robot is stationary and operates in a lab environment, we have applied the presented architecture on a Fanuc robot mounted on a mobile base. In this case the kitting operation consists of three skills: drive, pick and place. The robot navigates to the location of the requested object, picks it and then places it in the kitting box. This sequence is illustrated in Fig. 15. Using the presented architecture, the mobile manipulator is able to perform kitting operations with multiple objects located at various spots.

Fig. 14 Sequence of a kitting operation in the lab environment. The top row illustrates the execution of the pick skill and the bottom row the execution of the placing skill

11 http://rosindustrial.org/.

Fig. 15 Sequence of a kitting operation in an industrial environment. The top row illustrates the drive skill, the middle row the picking skill and the bottom row the placing skill

8 Conclusions

In this research chapter we presented SkiROS, a skill-based platform to develop and deploy software for autonomous robots, and an application example on a real industrial use-case. The platform eases software integration and increases the robot's reasoning capabilities with the support of a knowledge integration framework.
The developer can split complex procedures into 'skills', which are composed automatically at run-time to solve goal-oriented missions. We presented an application example where a mobile manipulator navigated in the warehouse, picked parts from pallets and boxes, and placed them in kitting boxes. Experiments conducted in two laboratory environments and at the industrial end-user site gave a proof of concept of our approach. ROS combined with SkiROS allowed porting the pipelines to several heterogeneous mobile manipulator platforms. We believe that using SkiROS, the pipelines can integrate easily with other skills for other use-cases, e.g. for assembly operations. The code released with the chapter allows any ROS user to try out the platform and plan with a fake version of the drive, pick and place skills. The user can then add their own skill definitions and pipelines to the platform and use SkiROS to help implement and manage their own robotics systems. References 1. Pedersen, M.R., L. Nalpantidis, R.S. Andersen, C. Schou, S. Bøgh, V. Krüger, and O. Madsen. 2015. Robot skills for manufacturing: From concept to industrial deployment. Robotics and Computer-Integrated Manufacturing. Available online. 2. Holz, D., A. Topalidou-Kyniazopoulou, F. Rovida, M.R. Pedersen, V. Krüger, and S. Behnke. 2015. A skill-based system for object perception and manipulation for automating kitting tasks. In Proceedings of the IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). 3. Holz, D., A. Topalidou-Kyniazopoulou, J. Stückler, and S. Behnke. 2015. Real-time object detection, localization and verification for fast robotic depalletizing. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 1459–1466. 4. McDermott, D. 2000. The 1998 AI planning systems competition. Artificial Intelligence Magazine 21 (2): 35–55. 5. Kortenkamp, D., and R. Simmons. 2007.
Robotic systems architectures and programming. In Springer Handbook of Robotics, ed. B. Siciliano, and O. Khatib, 187–206. Heidelberg: Springer. 6. Arkin, R.C. 1998. Behavior-based Robotics, 1st ed. Cambridge: MIT Press. 7. Brooks, R.A. 1986. A robust layered control system for a mobile robot. Journal of Robotics and Automation 2 (1): 14–23. 8. Firby, R.J. 1989. Adaptive Execution in Complex Dynamic Worlds. Ph.D. thesis, Yale University, USA. 9. Gat, E. 1998. On three-layer architectures. In Artificial Intelligence and Mobile Robots, MIT Press. 10. Ferrein, A., and G. Lakemeyer. 2008. Logic-based robot control in highly dynamic domains. Robotics and Autonomous Systems 56 (11): 980–991. 11. Bensalem, S., and M. Gallien. 2009. Toward a more dependable software architecture for autonomous robots. IEEE Robotics and Automation Magazine 1–11. 12. Magnenat, S. 2010. Software integration in mobile robotics, a science to scale up machine intelligence. Ph.D. thesis, École polytechnique fédérale de Lausanne, Switzerland. 13. Vernon, D., C. von Hofsten, and L. Fadiga. 2010. A Roadmap for Cognitive Development in Humanoid Robots. Heidelberg: Springer. 14. Balakirsky, S., Z. Kootbally, T. Kramer, A. Pietromartire, C. Schlenoff, and S. Gupta. 2013. Knowledge driven robotics for kitting applications. Volume 61., Elsevier B.V. 1205–1214 15. Björkelund, A., J. Malec, K. Nilsson, P. Nugues, and H. Bruyninckx. 2012. Knowledge for Intelligent Industrial Robots. In AAAI Spring Symposium on Designing Intelligent Robots: Reintegrating AI. 16. Stenmark, M., and J. Malec. 2013. Knowledge-based industrial robotics. In Scandinavian Conference on Artificial Intelligence. 17. Tenorth, M., and M. Beetz. 2013. KnowRob: A knowledge processing infrastructure for cognition-enabled robots. The International Journal of Robotics Research 32 (5): 566–590. 18. Beetz, M., L. Mösenlechner, and M. Tenorth. 2010. CRAM - A Cognitive Robot Abstract Machine for everyday manipulation in human environments. 
In IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings, 1012– 1017. 19. Rovida, F., and V. Krüger. 2015. Design and development of a software architecture for autonomous mobile manipulators in industrial environments. In 2015 IEEE International Conference on Industrial Technology (ICIT). 20. Huckaby, J. 2014. Knowledge Transfer in Robot Manipulation Tasks. Ph.D. thesis, Georgia Institute of Technology, USA. 21. Bøgh, S., O.S. Nielsen, M.R. Pedersen, V. Krüger, and O. Madsen. 2012. Does your robot have skills? In The 43rd International Symposium of Robotics (ISR). 22. Bechhofer, S., F. van Harmelen, J. Hendler, I. Horrocks, D.L. McGuinness, P.F. Patel-Schneider, and L.A. Stein. 2004. OWL Web Ontology Language reference, 10 Feb 2004. http://www.w3. org/TR/owl-ref/. 23. Lortal, G., S. Dhouib, and S. Gérard. 2011. Integrating ontological domain knowledge into a robotic DSL. In Models in Software Engineering, ed. J. Dingel, and A. Solberg, 401–414. Heidelberg: Springer. 24. Krüger, V., A. Chazoule, M. Crosby, A. Lasnier, M.R. Pedersen, F. Rovida, L. Nalpantidis, R.P.A. Petrick, C. Toscano, and G. Veiga. 2016. A vertical and cyber-physical integration of cognitive robots in manufacturing. Proceedings of the IEEE 104 (5): 1114–1127. 25. Crosby, M., F. Rovida, M. Pedersen, R. Petrick, and V. Krueger. 2016. Planning for robots with skills. In Planning and Robotics (PlanRob) workshop at the International Conference on Automated Planning and Scheduling (ICAPS). 26. Sprunk, C., J. Rowekamper, G. Parent, L. Spinello, G.D. Tipaldi, W. Burgard, and M. Jalobeanu. 2014. An experimental protocol for benchmarking robotic indoor navigation. In ISER. 27. Holz, D., and S. Behnke. 2016. Fast edge-based detection and localization of transport boxes and pallets in rgb-d images for mobile robot bin picking. In Proceedings of the 47th International Symposium on Robotics (ISR), Munich, Germany. 28. Polydoros, A.S., B. Großmann, F. 
Rovida, L. Nalpantidis, and V. Krüger. 2016. Accurate and versatile automation of industrial kitting operations with skiros. In 17th Conference Towards Autonomous Robotic Systems (TAROS), (Sheffield, UK). 29. Stückler, J., and S. Behnke. 2014. Multi-resolution surfel maps for efficient dense 3D modeling and tracking. Journal of Visual Communication and Image Representation 25 (1): 137–147. Author Biographies Francesco Rovida is a Ph.D. student at the Robotics, Vision and Machine Intelligence Lab (RVMI), Aalborg University Copenhagen, Denmark. He holds a Bachelor’s degree in Computer Science Engineering (2011), and a Master’s degree in Robotic Engineering (2013) from the University of Genoa (Italy). He did his Master’s thesis at the Istituto Italiano di Tecnologia (IIT, Genoa, Italy) on the development of an active head with motion compensation for the HyQ robot. His research interests include knowledge representation and software integration for the development of autonomous robots. Matthew Crosby is a Postdoctoral Research Associate currently working at Heriot Watt University on high-level planning for robotics on the EU STAMINA project. His background is in multiagent planning (PHD, Edinburgh) and Mathematics and Philosophy (MSci, Bristol). More details can be found at mdcrosby.com. Dirk Holz received a diploma in Computer Science from the University of Applied Sciences Cologne in 2006 and a M.Sc. degree in Autonomous Systems from the University of Applied Sciences Bonn-Rhein-Sieg in 2009. He is currently pursuing the Ph.D. degree at the University of Bonn. His research interests include perceiving, extracting and modeling semantic information using 3D sensors as well as simultaneous localization and mapping (SLAM). Athanasios S. Polydoros received a Diploma in Production Engineering from Democritus University of Thrace in Greece and a M.Sc. degree with Distinction in Artificial Intelligence from the University of Edinburgh, Scotland. He is currently a Ph.D. 
student at the Robotics, Vision and Machine Intelligence (RVMI) Lab, Aalborg University Copenhagen, Denmark. His research interests are focused on machine learning for robot control and cognition and model learning. Bjarne Großmann graduated with a dual M.Sc. degree in Computer Science in Media from the University of Applied Sciences Wedel (Germany) in collaboration with the Aalborg University Copenhagen (Denmark) in 2012. He is currently working as a Ph.D. student at the Robotics, Vision and Machine Intelligence (RVMI) Lab in the Aalborg University Copenhagen. The main focus of his work is related to Robot Perception - from Human-Robot-Interaction over 3D object recognition and pose estimation to camera calibration techniques. Ronald Petrick is a Research Fellow in the School of Informatics at the University of Edinburgh. He received an MMath degree in Computer Science from the University of Waterloo and a PhD in Computer Science from the University of Toronto. His research interests include planning with incomplete information and sensing, cognitive robotics, knowledge representation and reasoning, and applications of planning to human-robot interaction. His recent work has focused on the application of automated planning to task-based action and social interaction on robot platforms deployed in real-world environments. Dr. Petrick has participated in a number of EU-funded research projects under FP6 (PACOPLUS) and FP7 (XPERIENCE and STAMINA). He was also the Scientific Coordinator of the FP7 JAMES project. Volker Krüger is a Professor at Aalborg University, Denmark where he has worked since 2002. His teaching and research interests are in the area of cognitive robotics for manufacturing and industrial automation. Since 2007, he has headed the Robotics, Vision and Machine Intelligence group (RVMI). Dr. 
Krueger has participated in a number of EU-funded research projects under FP4, FP5, and FP6, coordinated the FP7 project GISA (under ECHORD), and participated in the EU projects TAPAS, PACO-PLUS, and CARLoS. He is presently coordinating the FP7 project STAMINA. Dr. Krueger has recently completed an executive education at Harvard Business School related to academic industrial collaborations and knowledge-exchange.

Control of Mobile Robots Using ActionLib

Higor Barbosa Santos, Marco Antônio Simões Teixeira, André Schneider de Oliveira, Lúcia Valéria Ramos de Arruda and Flávio Neves Jr.

Abstract Mobile robots are very complex systems and involve the integration of various structures (mechanical, software and electronics). The robot control system must integrate these structures so that it can perform its tasks properly. Mobile robots use control strategies for many reasons, like velocity control of wheels, position control and path tracking. These controllers require the use of preemptive structures. Therefore, this tutorial chapter aims to clarify the design of controllers for mobile robots based on ROS ActionLib. Each controller is designed in an individual ROS node to allow parallel processing by the operating system. To exemplify controller design using ActionLib, this chapter will demonstrate the implementation of two different types of controllers (PID and Fuzzy) for position control of a servo motor. These controllers will be available on GitHub. Also, a case study of scheduled fuzzy controllers based on ROS ActionLib for a magnetic climbing robot used in the inspection of spherical tanks will be shown.

Keywords ROS · Mobile robots · Control · ActionLib

H.B. Santos (B) · M.A.S. Teixeira · A.S. de Oliveira · L.V.R. de Arruda · F. Neves Jr. Federal University of Technology—Parana, Av. Sete de Setembro, 3165 Curitiba, Brazil e-mail: [email protected] M.A.S. Teixeira e-mail: [email protected] A.S. de Oliveira e-mail: [email protected] L.V.R. de Arruda e-mail: [email protected] F. Neves Jr. e-mail: [email protected]

© Springer International Publishing AG 2017 A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_5

1 Introduction

Mobile robots have great versatility because they're free to move around their application environment. However, this is only possible because this kind of robot carries a great variety of exteroceptive and interoceptive sensors to measure its motion and interact with the environment around it. Much of this information is used for the robot's odometry or environment mapping. Thus, these signals are the robot's sense of its own motion and its means to correct it.

Fig. 1 Interface of ActionLib. Source [1]

Robot control is a complex and essential task which must be performed throughout navigation. Several kinds of controllers can be applied (like proportional-integral-derivative, predictive, robust, adaptive and fuzzy). Their implementations are very similar and can be developed with the use of the ROS Action Protocol via ActionLib. ActionLib provides a structure to create servers that execute long-running tasks and interact with clients through specific messages [1]. This chapter aims to explain how to create ROS controllers using the ActionLib structure. The chapter is structured in five sections. In the first section, we will discuss the ActionLib structure in detail; this section introduces the development of preemptive tasks with ROS. ActionLib works with three main messages, as can be seen in Fig. 1: the goal message is the desired value for the controller, like its target or objective; the feedback message is the measurement of the controlled variable, usually updated by means of a robot sensor; and the result message is a flag that indicates when the controller reaches its goal. The second section will demonstrate the initial requirements for creating a controller using the ActionLib package.
The third section will present an implementation of a classic Proportional-Integral-Derivative (PID) control strategy using ActionLib. This section aims to introduce a simple (but powerful) control structure that can be applied to many different purposes, like position control, velocity control, flight control, adhesion control, among others. It will be shown how to create your own package, set up the action message, structure the server/client code, compile the created package and, finally, the experimental results of the PID controller. The fourth section will present the design of a fuzzy controller. The controller is implemented using ActionLib and an open-source fuzzy library. Fuzzy logic enables the implementation of a controller without knowledge of the system's dynamic model. Finally, the last section will show a case study of an ActionLib-based control for the second generation of a climbing robot with four steerable magnetic wheels [2], called Autonomous Inspection Robot 2nd generation (AIR-2), as shown in Fig. 2. AIR-2 is fully compatible with ROS and has a mechanical structure designed to provide high mobility when climbing on industrial storage tanks. The scheduled fuzzy controllers were designed to manage the speed of AIR-2.

Fig. 2 Autonomous inspection robot (AIR-2)

2 ActionLib

Mobile robots are very complex; they have many sensors and actuators that help them move around and localize in an unknown environment. Controlling the robot isn't an easy task, and it requires parallel processing. The mobile robot must handle several tasks at the same time, so preemption is an important feature for robot control. Often, robot control is multivariable, that means multiple-input multiple-output (MIMO) systems. Therefore, developing an algorithm with these characteristics is hard. Robot control covers various functions of a robot.
For example, obstacle avoidance is very important for autonomous mobile robots; [3] proposed a fuzzy intelligent obstacle-avoidance controller for a wheeled mobile robot. Balancing is another relevant aspect in robotics; [4] designed a cascaded PID controller for movement control of a two-wheel robot. On the other hand, [5] presented an adhesion force control for a magnetic climbing robot used in the inspection of storage tanks. Robot control is therefore essential to ensure the robot's operation, whether for navigation or obstacle avoidance. ROS has libraries that help in the implementation of control, like ros_control. Another library is ActionLib, which enables creating servers that execute long-running tasks and clients that interact with those servers. Given these features, the development of a controller using this library becomes easy. On the other hand, the ros_control package is harder to use: it presents a control structure that requires many configurations for the implementation of a specific controller, for example, a fuzzy controller.

Fig. 3 Client-server interaction. Source [1]

ActionLib provides a simple protocol in which the client sends goals and the server executes them. The server executes long-running goals that can be preempted. Client-server interaction using the ROS Action Protocol is shown in Fig. 3. The client-server interaction in ActionLib is provided by the messages displayed in Fig. 1:

• goal: client sends a goal to the server;
• cancel: client sends a cancellation of the goal to the server;
• status: server notifies the client of the status of the goal;
• feedback: server sends goal information to the client;
• result: server notifies the client when the goal was achieved.

Thus, the ActionLib package is a powerful tool for the design of controllers in robotics. In the next section, the initial configuration of the ROS workspace for using the ActionLib package will be presented.
3 ROS Workspace Configuration

For the implementation of the controller it's necessary that ROS Indigo is properly installed. It's available at: http://wiki.ros.org/indigo/Installation/Ubuntu

The next step is the ROS workspace configuration. If it isn't configured on your machine, it'll be necessary to create it:

$ mkdir -p /home/user/catkin_ws/src
$ cd /home/user/catkin_ws/src
$ catkin_init_workspace

The catkin_init_workspace command sets the catkin_ws folder as your workspace. After that, you must build the workspace. For this, you need to navigate to your workspace folder and then type the command catkin_make, as shown below:

$ cd /home/user/catkin_ws/
$ catkin_make

To add the workspace to your ROS environment, you need to source the generated setup file:

$ source /home/user/catkin_ws/devel/setup.bash

After the workspace configuration, we can start creating a PID controller using ActionLib, which will be shown in the next section.

4 Creating a PID Controller Using ActionLib

There are various types of algorithms used for robot control, but PID control is the most used, due to its good performance for linearized systems and easy implementation. The PID controller is a feedback control loop widely used in various applications; its diagram can be seen in Fig. 4.

Fig. 4 PID control

Equation 1 shows the PID law:

u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt   (1)

e(t) = r(t) − y(t)   (2)

where u(t) is the output, Kp is the proportional gain, e(t) is the error (the difference between the setpoint r(t) and the process output y(t), as shown in Eq. 2), Ki is the integral gain and Kd is the derivative gain. The controller calculates the error and, by adjusting the gains (Kp, Ki and Kd), seeks to minimize it. In this section, the PID control implementation using ActionLib will be shown. The PID will control the angle of a servo motor. The servo motor was simulated in the robot simulator V-REP.
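Before turning to the ROS implementation, Eq. 1 can be sketched in discrete time: the integral becomes a running sum and the derivative a finite difference. This is a standalone sketch separate from the tutorial's ROS code; the gains, limits and class name are illustrative:

```cpp
#include <algorithm>

// Discrete-time PID step for Eq. 1. dt is the controller period in seconds.
class Pid {
 public:
  Pid(double p, double i, double d, double lo, double hi)
      : kp(p), ki(i), kd(d), minOut(lo), maxOut(hi) {}

  double step(double setpoint, double measured, double dt) {
    double error = setpoint - measured;       // e(t) = r(t) - y(t), Eq. 2
    errSum += error * dt;                     // integral term accumulator
    double dErr = (error - lastError) / dt;   // finite-difference derivative
    lastError = error;
    double u = kp * error + ki * errSum + kd * dErr;
    return std::min(maxOut, std::max(minOut, u));  // clamp to output limits
  }

 private:
  double kp, ki, kd;
  double minOut, maxOut;
  double errSum = 0.0;
  double lastError = 0.0;
};
```

Called at a fixed rate with the setpoint and the measured position, step() returns the clamped control signal; the ActionLib server shown later follows the same structure.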
V-REP is a simulator based on a distributed control architecture; it allows the modeling of robotic systems close to reality [6]. The controller has been implemented in accordance with Fig. 4, in which the setpoint is the desired angle (goal) and the feedback is provided by the servo encoder. The PID controller is available on GitHub and can be installed in your workspace:

$ source /opt/ros/indigo/setup.bash
$ cd /home/user/catkin_ws/src
$ git clone https://github.com/air-lasca/tutorial_controller
$ cd ..
$ catkin_make
$ source /home/user/catkin_ws/devel/setup.bash

In the following, its creation will be shown step by step.

4.1 Steps to Create the Controller

1st step: Creating the ActionLib package Once the workspace is created and configured, let's create the package using ActionLib:

$ cd /home/user/catkin_ws/src/
$ catkin_create_pkg tutorial_controller actionlib message_generation roscpp rospy std_msgs actionlib_msgs

The catkin_create_pkg command creates a package named tutorial_controller which depends on actionlib, message_generation, roscpp, rospy, std_msgs and actionlib_msgs. Later, if you need other dependencies, just add them in CMakeLists.txt. This will be detailed in the fifth step. After creating the package, we need to define the message that is sent between the server and the client.

2nd step: Creating the action messages Continuing the steps to create the controller, you must define the action messages. The action file has three parts: goal, result and feedback. Each section of the action file is separated by three hyphens (---). The goal message is the setpoint of the controller; it is sent from the client to the server. The result message is sent from the server to the client; it tells us when the server completed the goal. It acts as a flag to indicate that the controller has reached the goal, but it has no purpose for the control itself. The feedback message is sent by the server to inform the client of incremental progress toward the goal.
Feedback would be the information from the sensor used in the control. To create the action messages, you must create a folder called action in your package:

$ cd /home/user/catkin_ws/src/tutorial_controller
$ mkdir action

After creating the folder, you must create an .action file (Tutorial.action) in the action folder of your package. The first letter of the action name should be uppercase. This information is placed in the .action file:

#Define the goal
float64 position
---
#Define the result
bool ok
---
#Define a feedback message
float64 position

The goal and feedback are defined as float64. The goal will receive the desired position of the servo motor and the feedback will be used to send the information acquired from the servo encoder. The result is defined as bool, but it won't be used in the control. This action file will be used in the controller examples shown in this chapter. The action messages are generated automatically from the .action file.

3rd step: Create the action client In the src folder of your package, create ControllerClient.cpp; it'll be the client of ActionLib. First, we'll include the necessary libraries of ROS, the action message and the action client:

#include <ros/ros.h>
#include <tutorial_controller/TutorialAction.h>
#include <actionlib/client/simple_action_client.h>
#include "std_msgs/Float64.h"

tutorial_controller/TutorialAction.h is the action message library; it gives access to the messages created in the .action file. actionlib/client/simple_action_client.h is the action library used for implementing a simple action client. If necessary, you can include other libraries. Continuing the code, the client class must be set up. The action client constructor defines the topic on which the messages are published, so you need to specify the same topic name as your server; in this example, pid_control was used:

class ControllerClient {
public:
  ControllerClient(std::string name):
    //Set up the client. It's publishing to topic "pid_control", and is set to auto-spin
    ac("pid_control", true),
    //Stores the name
    action_name(name)
  {
    //Get connection to a server
    ROS_INFO("%s Waiting For Server...", action_name.c_str());
    //Wait for the connection to be valid
    ac.waitForServer();
    ROS_INFO("%s Got a Server...", action_name.c_str());
    goalsub = n.subscribe("/cmd_pos", 100, &ControllerClient::GoalCallback, this);
  }

The ac.waitForServer() call makes the client wait for the server to start before continuing. Once the server has started, the client reports that it has established communication with it. Then a subscriber (goalsub) is defined to provide the setpoint of the control. The doneCb function is called every time a goal completes. It provides the state of the action server and the result message. It'll only be called if the server has not been preempted, because the controller must run continuously.

  void doneCb(const actionlib::SimpleClientGoalState& state,
              const tutorial_controller::TutorialResultConstPtr& result){
    ROS_INFO("Finished in state [%s]", state.toString().c_str());
    ROS_INFO("Result: %i", result->ok);
  };

The activeCb is called every time the goal goes active, in other words, for each new goal received by the client.

  void activeCb(){
    ROS_INFO("Goal just went active...");
  };

The feedbackCb is called every time the server sends the feedback message to the action client.

  void feedbackCb(const tutorial_controller::TutorialFeedbackConstPtr& feedback){
    ROS_INFO("Got Feedback of Progress to Goal: position: %f", feedback->position);
  };

GoalCallback is a function that forwards the goal from the topic /cmd_pos to the action server.
  void GoalCallback(const std_msgs::Float64& msg){
    goal.position = msg.data;
    ac.sendGoal(goal,
                boost::bind(&ControllerClient::doneCb, this, _1, _2),
                boost::bind(&ControllerClient::activeCb, this),
                boost::bind(&ControllerClient::feedbackCb, this, _1));
  };

The private variables of the action client: n is a NodeHandle, ac is an action client object, action_name is a string holding the client name, goal is the message used to publish the goal to the server (set in the .action file) and goalsub is a subscriber to get the goal and pass it to the action server.

private:
  actionlib::SimpleActionClient<tutorial_controller::TutorialAction> ac;
  std::string action_name;
  tutorial_controller::TutorialGoal goal;
  ros::Subscriber goalsub;
  ros::NodeHandle n;
};

Begin the action client:

int main (int argc, char **argv){
  ros::init(argc, argv, "pid_client");
  // create the action client
  // true causes the client to spin its own thread
  ControllerClient client(ros::this_node::getName());
  ros::spin();
  //exit
  return 0;
}

4th step: Create the action server The action client is now finished. Next, create an action server named ControllerServer.cpp in your src folder. Initially, it's necessary to include the libraries of ROS, the action message and the action server; the procedure is similar to the client's:

#include <ros/ros.h>
#include <actionlib/server/simple_action_server.h>
#include <tutorial_controller/TutorialAction.h>
#include <math.h>
#include "std_msgs/Float64.h"
#include "geometry_msgs/Vector3.h"
#include "sensor_msgs/JointState.h"

Now we need to set up the server class. The action server constructor starts the server. It also defines the subscriber (the control's feedback loop), the publisher (PID output) and the PID limits, and initializes the control variables.
class ControllerServer{
public:
  ControllerServer(std::string name):
    as(n, "pid_control", boost::bind(&ControllerServer::executeCB, this, _1), false),
    action_name(name)
  {
    as.registerPreemptCallback(boost::bind(&ControllerServer::preemptCB, this));

    //Start the server
    as.start();

    //Subscriber for the current position of the servo
    sensorsub = n2.subscribe("/sensor/encoder/servo", 1, &ControllerServer::SensorCallBack, this);

    //Publisher for setpoint, current position and error of control
    error_controlpub = n2.advertise<geometry_msgs::Vector3>("/control/error", 1);

    //Publisher for PID output to the servo
    controlpub = n2.advertise<std_msgs::Float64>("/motor/servo", 1);

    //Max and min output of the PID controller
    float max = M_PI;
    float min = -M_PI;

    //Initializing the PID controller
    Initialize(min, max);
  }

In the action constructor, an action server is created. A sensor subscriber (sensorsub) and a controller output publisher (controlpub) are created for the control loop. The preemptCB is called when the current goal has been canceled, either because a new goal was sent or because the action client canceled the request.

  void preemptCB(){
    ROS_INFO("%s got preempted!", action_name.c_str());
    result.ok = 0;
    as.setPreempted(result, "I got Preempted!");
  }

A pointer to the goal message is passed to the executeCB function. This function defines the rate of the controller; you can set the frequency in the rate argument of your control. Inside the while loop, the controller function should be called. It is passed the setpoint (goal->position) and the sensor feedback (position_encoder).
void executeCB(const tutorial_controller::TutorialGoalConstPtr& goal){
  prevTime = ros::Time::now();

  //If the server has been killed, don't process
  if(!as.isActive() || as.isPreemptRequested()) return;

  //Run the processing at 100Hz
  ros::Rate rate(100);

  //Setup some local variables
  bool success = true;

  //Control loop
  while(1){
    std_msgs::Float64 msg_pos;

    //PID controller
    msg_pos.data = PIDController(goal->position, position_encoder);

    //Publishing the PID output to the servo
    controlpub.publish(msg_pos);

    //Auxiliary message
    geometry_msgs::Vector3 msg_error;
    msg_error.x = goal->position;
    msg_error.y = position_encoder;
    msg_error.z = goal->position - position_encoder;

    //Publishing setpoint, feedback and control error
    error_controlpub.publish(msg_error);

    feedback.position = position_encoder;

    //Publish feedback to the action client
    as.publishFeedback(feedback);

    //Check for ROS kill
    if(!ros::ok()){
      success = false;
      ROS_INFO("%s Shutting Down", action_name.c_str());
      break;
    }

    //If the server has been killed/preempted, stop processing
    if(!as.isActive() || as.isPreemptRequested()) return;

    //Sleep for the rate time
    rate.sleep();
  }

  //Publish the result if the goal wasn't preempted
  if(success){
    result.ok = 1;
    as.setSucceeded(result);
  }
  else{
    result.ok = 0;
    as.setAborted(result, "I Failed!");
  }
}

Initialize is a function that sets the initial parameters of the controller: the PID output limits and gains.

void Initialize(float min, float max){
  setOutputLimits(min, max);
  lastError = 0;
  errSum = 0;
  kp = 1.5;
  ki = 0.1;
  kd = 0;
}

The setOutputLimits function sets the control limits.

void setOutputLimits(float min, float max){
  if (min > max) return;
  minLimit = min;
  maxLimit = max;
}

The PIDController function implements the PID equation (Eq. 1). It can be used as a template to design your own control algorithm.
float PIDController(float setpoint, float PV){
  ros::Time now = ros::Time::now();
  ros::Duration change = now - prevTime;

  float error = setpoint - PV;

  errSum += error*change.toSec();
  errSum = std::min(errSum, maxLimit);
  errSum = std::max(errSum, minLimit);

  float dErr = (error - lastError)/change.toSec();

  //Do the full calculation
  float output = (kp*error) + (ki*errSum) + (kd*dErr);

  //Clamp output to bounds
  output = std::min(output, maxLimit);
  output = std::max(output, minLimit);

  //Required values for the next round
  lastError = error;
  prevTime = now;

  return output;
}

The sensor callback is a subscriber that provides sensor information, e.g., the position or wheel velocity of a robot. In this case, it receives the position of the servo motor encoder.

void SensorCallBack(const sensor_msgs::JointState& msg){
  position_encoder = msg.position[0];
}

The protected variables of the action server:

protected:
  ros::NodeHandle n;
  ros::NodeHandle n2;

  //Subscriber
  ros::Subscriber sensorsub;

  //Publishers
  ros::Publisher controlpub;
  ros::Publisher error_controlpub;

  //Actionlib variables
  actionlib::SimpleActionServer<tutorial_controller::TutorialAction> as;
  tutorial_controller::TutorialFeedback feedback;
  tutorial_controller::TutorialResult result;
  std::string action_name;

  //Control variables
  float position_encoder;
  float errSum;
  float lastError;
  float minLimit, maxLimit;
  ros::Time prevTime;
  float kp;
  float ki;
  float kd;
};

Finally, the main function creates the action server and spins the node. The action will be running and waiting to receive goals.

int main(int argc, char** argv){
  ros::init(argc, argv, "pid_server");

  //Just a check to make sure the usage was correct
  if(argc != 1){
    ROS_INFO("Usage: pid_server");
    return 1;
  }

  //Spawn the server
  ControllerServer server(ros::this_node::getName());

  ros::spin();

  return 0;
}

5th step: Compile the created package

To compile your controller, you will need to add a few things to CMakeLists.txt. First, specify the libraries needed to compile the package.
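Before moving on to the build step, it can be useful to check the arithmetic of PIDController in isolation. The sketch below restates the tutorial's update law outside ROS; the Pid struct, its member layout and the explicit dt argument are illustrative choices of this example, not part of the tutorial package. With the tutorial's gains (kp = 1.5, ki = 0.1, kd = 0) and limits of ±π it behaves like the ROS version.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Plain restatement of the tutorial's PID update: integral accumulation with
// clamping, backward-difference derivative, and output saturation.
struct Pid {
  double kp, ki, kd;         // gains, as set in Initialize()
  double minLimit, maxLimit; // output and integral bounds
  double errSum = 0.0;       // integral state
  double lastError = 0.0;    // previous error, for the derivative term

  double step(double setpoint, double pv, double dt) {
    double error = setpoint - pv;
    // Integrate and clamp, as PIDController does with std::min/std::max
    errSum = std::min(std::max(errSum + error * dt, minLimit), maxLimit);
    double dErr = (error - lastError) / dt;
    lastError = error;
    double output = kp * error + ki * errSum + kd * dErr;
    return std::min(std::max(output, minLimit), maxLimit);
  }
};
```

For example, with kp = 1.5, ki = 0.1, kd = 0, a unit setpoint from rest and dt = 0.01 s, the first output is 1.5·1 + 0.1·0.01 = 1.501; a very large error saturates at the upper limit.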
If you require any other library that wasn't mentioned when creating your package, you can add it to find_package() and catkin_package().

find_package(catkin REQUIRED COMPONENTS
  actionlib
  actionlib_msgs
  message_generation
  roscpp
  rospy
  std_msgs
)

catkin_package(
  CATKIN_DEPENDS actionlib actionlib_msgs message_generation roscpp rospy std_msgs
)

Then, specify the action file for message generation.

add_action_files(
  DIRECTORY action
  FILES Tutorial.action
)

And specify the message packages that the .action file depends on.

generate_messages(
  DEPENDENCIES actionlib_msgs std_msgs
)

Include the directories that your package needs.

include_directories(${catkin_INCLUDE_DIRS})

The add_executable() macro creates the executables of your server and client. The target_link_libraries() macro links the libraries used by the action server and client at build and/or run time. The add_dependencies() macro creates a dependency between the messages generated from the .action file and your executables.

add_executable(TutorialServer src/ControllerServer.cpp)
target_link_libraries(TutorialServer ${catkin_LIBRARIES})
add_dependencies(TutorialServer ${tutorial_controller_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

add_executable(TutorialClient src/ControllerClient.cpp)
target_link_libraries(TutorialClient ${catkin_LIBRARIES})
add_dependencies(TutorialClient ${tutorial_controller_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

Additionally, the package.xml file must include the following dependencies:

<build_depend>actionlib</build_depend>
<build_depend>actionlib_msgs</build_depend>
<build_depend>message_generation</build_depend>
<run_depend>actionlib</run_depend>
<run_depend>actionlib_msgs</run_depend>
<run_depend>message_generation</run_depend>

Fig. 5 PID controller server
Fig. 6 PID controller client

Now, just compile your workspace:

$ cd /home/user/catkin_ws
$ catkin_make

And refresh your ROS environment:

$ source /home/user/catkin_ws/devel/setup.bash

Your PID controller is now ready to be used.

6th step: Run the controller

Once compiled, your package is ready for use.
To run your package, open a terminal and start ROS:

$ roscore

After ROS starts, you must start the server in a new terminal, as shown in Fig. 5. Then, in a new terminal, start the client, as shown in Fig. 6. The PID controller client waits for the server and notifies you when the connection between them is established.

An alternative to rosrun is roslaunch. To use the roslaunch command, you need to create a folder named launch in your package:

$ cd /home/user/catkin_ws/src/tutorial_controller
$ mkdir launch

After creating the folder, create a launch file (tutorial.launch) in the launch directory of your package, starting the server and client nodes:

<launch>
  <node pkg="tutorial_controller" type="TutorialServer" name="pid_server" output="screen"/>
  <node pkg="tutorial_controller" type="TutorialClient" name="pid_client" output="screen"/>
</launch>

For roslaunch to work, it is necessary to add the roslaunch package to find_package() in CMakeLists.txt:

find_package(catkin REQUIRED COMPONENTS
  actionlib
  actionlib_msgs
  roslaunch
)

And add the following line below find_package() in CMakeLists.txt:

roslaunch_add_file_check(launch)

Thus, CMakeLists.txt would look like this:

cmake_minimum_required(VERSION 2.8.3)
project(tutorial_controller)

find_package(catkin REQUIRED COMPONENTS
  actionlib
  actionlib_msgs
  message_generation
  roscpp
  rospy
  std_msgs
  roslaunch
)

roslaunch_add_file_check(launch)

catkin_package(
  CATKIN_DEPENDS actionlib actionlib_msgs message_generation roscpp rospy std_msgs
)

add_executable(TutorialClient src/ControllerClient.cpp)
target_link_libraries(TutorialClient ${catkin_LIBRARIES})
add_dependencies(TutorialClient ${tutorial_controller_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

Save the CMakeLists.txt file and compile the workspace:

$ cd /home/user/catkin_ws
$ catkin_make

Don't forget to add the workspace to your ROS environment:

$ source catkin_ws/devel/setup.bash

Now, to run your package, you just need this command:

$ roslaunch tutorial_controller tutorial.launch

Please note that roslaunch starts ROS automatically, as shown in Fig. 7. Therefore, the roscore command isn't required before running the controller.
With the rqt_graph command, you can see all the published topics and the interaction between the nodes in ROS:

$ rqt_graph

Fig. 7 Roslaunch running PID controller
Fig. 8 Interaction between nodes of PID controller

The rqt_graph package provides a GUI plugin for visualizing the ROS computation graph [7]. You can visualize the topics used for communication between the client and server. The messages exchanged between the client and server are shown in Fig. 8.

The rqt_plot package provides a GUI plugin to plot 2D graphics of ROS topics [8]. Open a new terminal and enter the following command:

$ rqt_plot /control/error/x /control/error/y

The variable x is the controller setpoint and y is the feedback signal (sensor information). The rqt_plot plugin is shown in Fig. 9. If you need it, the variable z is the control error and it can also be added to rqt_plot. Just add the topic /control/error/z via the command:

$ rqt_plot /control/error/x /control/error/y /control/error/z

Or add the error topic directly in the rqt_plot GUI: enter the desired topic, as in Fig. 10, and click on the + symbol to add it to the plot.

Fig. 9 rqt_plot
Fig. 10 Add the error topic in rqt_plot
Fig. 11 PID control of the servo's position

4.2 Experimental Result of PID Controller

For controller validation, 4 different setpoints were sent to the controller. Figure 11 presents the results of the servo position PID control using ActionLib; the controller shows a good response.

5 Creating a Fuzzy Controller Using ActionLib

In this section, the implementation of a fuzzy controller using ActionLib will be shown. The fuzzylite library was used to design the fuzzy logic control. fuzzylite is a free and open-source fuzzy logic library programmed in C++ for multiple platforms (Windows, Linux, Mac, iOS, Android) [9]. QtFuzzyLite 4 is a graphical user interface for the fuzzylite library; you can implement your fuzzy controller using this GUI.
Its goal is to accelerate the implementation process of fuzzy logic controllers by providing a useful and functional graphical user interface that allows you to easily create and directly interact with your controllers [9]. This GUI is available at: http://www.fuzzylite.com/download/fuzzylite4-linux/. The graphical user interface, QtFuzzyLite 4, can be seen in Fig. 12. From QtFuzzyLite, you can export the C++ code of your controller, as shown in Fig. 13.

The fuzzy controller uses the same application example used for the PID control. Figure 14 shows the fuzzy control diagram of the servo motor position, where β is the setpoint, βs is the encoder servo information and u is the fuzzy output.

Fig. 12 QtFuzzyLite 4 GUI
Fig. 13 Export fuzzy to C++ in QtFuzzyLite 4
Fig. 14 Fuzzy controller for a servo motor
Fig. 15 Membership functions for the servo fuzzy controller of a error and change of error and b angle increment

The servo fuzzy controller designed for the linear and orientation motion control is presented in Fig. 15. The inputs are 'e' (angle error) and 'ce' (angle change of error), and the output is an angle increment, which is accumulated (u[k] = u[k − 1] + Δu[k]). The rules for the fuzzy controller are shown in Table 1.

The fuzzy controller is available on GitHub and can be installed in your workspace:

$ source /opt/ros/indigo/setup.bash
$ cd /home/user/catkin_ws/src
$ git clone https://github.com/air-lasca/tutorial2_controller

Table 1 Rule table

  e \ ce |  N  |  Z  |  P
  -------+-----+-----+-----
    N    |  NB |  NS |  Z
    Z    |  NS |  Z  |  PS
    P    |  Z  |  PS |  PB

5.1 Steps to Create the Controller

The creation of the fuzzy controller follows the same steps as the PID one, but it has some peculiarities that are presented below.
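The rule base in Table 1 can also be evaluated outside fuzzylite with ordinary min/max fuzzy operations. The sketch below is only a hand-rolled illustration of how the 3 × 3 table maps (e, ce) to an output: the triangular membership centers (−1, 0, 1 on a normalized universe) and the output singletons standing in for NB…PB are assumptions of this example, not values taken from the QtFuzzyLite project.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Triangular membership centered at c with half-width w (normalized universe).
static double tri(double x, double c, double w) {
  return std::max(0.0, 1.0 - std::abs(x - c) / w);
}

// Evaluate the rule table of Table 1 with min for AND and a weighted
// average of output singletons for defuzzification.
static double fuzzyIncrement(double e, double ce) {
  const double center[3] = {-1.0, 0.0, 1.0};  // N, Z, P (assumed)
  // Rows: e = N, Z, P; columns: ce = N, Z, P.
  // Singletons for NB, NS, Z, PS, PB = -1, -0.5, 0, 0.5, 1 (assumed).
  const double rule[3][3] = {{-1.0, -0.5, 0.0},
                             {-0.5,  0.0, 0.5},
                             { 0.0,  0.5, 1.0}};
  double num = 0.0, den = 0.0;
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j) {
      double w = std::min(tri(e, center[i], 1.0), tri(ce, center[j], 1.0));
      num += w * rule[i][j];
      den += w;
    }
  return den > 0.0 ? num / den : 0.0;
}
```

Zero error and zero change of error fire only the (Z, Z) rule and yield a zero increment; a large positive error with a large positive change of error yields the PB output.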
1st step: Creating the ActionLib package

Create the package:

$ cd /home/user/catkin_ws/src/
$ catkin_create_pkg tutorial2_controller actionlib message_generation roscpp rospy std_msgs actionlib_msgs

2nd step: Creating the action messages

The action file will be the same one used by the PID controller, but with a different name: FuzzyControl.action.

3rd step: Create the action client

The structure of the client will be the same as the PID client; only the client's name (FuzzyClient.cpp), action (FuzzyControl.action), topic (fuzzy_control) and package (tutorial2_controller) change.

4th step: Create the action server

The server will be named FuzzyServer.cpp, and you will need to include the fuzzylite library:

#include <ros/ros.h>
#include <actionlib/server/simple_action_server.h>
#include <tutorial2_controller/FuzzyControlAction.h>
#include "std_msgs/Float64.h"
#include "geometry_msgs/Vector3.h"
#include "sensor_msgs/JointState.h"

//Fuzzylite library
#include "fl/Headers.h"

using namespace fl;

The changes mentioned when creating the fuzzy client should also be made here. The control algorithm also changes; just copy it from the FuzzyServer.cpp available on GitHub.

5th step: Compile the created package

Before you compile the fuzzy controller package, you must copy the libfuzzylite library to /usr/lib/. Then, add fuzzylite's source files (the fl folder) to the include directory of your package. The libfuzzylite.so file and the fl folder can be downloaded from: https://github.com/air-lasca/tutorial2_controller.

The CMakeLists.txt and package.xml follow the instructions specified for the PID controller. Only, in CMakeLists.txt, you will need to add the include folder and link the libfuzzylite library, because the server needs it to build:

include_directories(${catkin_INCLUDE_DIRS} include)
target_link_libraries(FuzzyServer ${catkin_LIBRARIES} libfuzzylite.so)

Then, you can compile the package:

$ cd /home/user/catkin_ws/
$ catkin_make
$ source /home/user/catkin_ws/devel/setup.bash

Fig. 16 Fuzzy controller server
Fig. 17 Fuzzy controller client
Fig. 18 Getting fuzzy goal information

6th step: Run the controller

To run the package, open a terminal and start ROS:

$ roscore

Start the server in a new terminal. In Fig. 16, the fuzzy controller server waits for the client. Then start the client in a new terminal (Fig. 17). The goal information can be seen in Fig. 18: type of message, publisher and subscriber. Figure 19 shows the exchange of messages between the server and client. You can also use roslaunch to start the controller:

$ roslaunch tutorial2_controller fuzzycontrol.launch

Fig. 19 Interaction between nodes of fuzzy controller
Fig. 20 Fuzzy control of the servo's position

5.2 Experimental Results of Fuzzy Controller

The results of the servo position fuzzy control are shown in Fig. 20. The fuzzy controller didn't present overshoot in its response curve, even though it had a considerable response time due to the delay of the encoder.

6 Scheduled Fuzzy Controllers for Omnidirectional Motion of an Autonomous Inspection Robot

The scheduled fuzzy controller of AIR-2 is based on the linear velocities ẋG and ẏG and the angular velocity θ̇G, as presented in Eq. 3. According to the inputs, a switcher (MUX) chooses which controller should be activated. If the inputs are the linear velocities, the linear motion controller is activated. When the input is only the angular velocity, the orientation controller is enabled. And when the inputs are linear and angular velocities, the free motion controller is activated. The scheduled controllers can be seen in Fig. 21. The experimental results were simulated using V-REP. The control was implemented with ActionLib and the fuzzylite library.

ξG = [ẋG  ẏG  θ̇G]ᵀ    (3)

The feedback loop of each controller is related to each control variable. The linear velocities ẋR and ẏR of AIR-2 give feedback to the linear motion controller and the angular velocity θ̇R of AIR-2 provides feedback to the orientation motion controller.
Meanwhile, the linear velocities of AIR-2 and the angular velocities β̇R of the servo motors give feedback to the free motion controller; due to the side slip constraint, AIR-2 can't reorient while moving. Each motion controller (linear, orientation and free motion) is composed of 8 fuzzy controllers, in which 4 controllers perform the velocity control of the brushless motors and the other 4 controllers are responsible for the angle control of the servo motors.

Fig. 21 Scheduled fuzzy controllers of AIR-2
Fig. 22 AIR-2 path in the LPG sphere
Fig. 23 Desired and obtained ẋ
Fig. 24 Desired and obtained ẏ
Fig. 25 Desired and obtained θ̇

A path with five different setpoints was generated for the experimental results, which can be seen in Fig. 22. In the first, second and third setpoints, AIR-2 was set to linear motion; the setpoint was, respectively, ẋ, ẏ and ẋ + ẏ. The fourth was a free motion with ẋ and θ̇. And the fifth setpoint was orientation motion, that is, only θ̇. The slow response of the brushless and servo motors produces the overshoots that can be seen in Figs. 23 and 24, caused by the sampling frequency of the encoders in V-REP. Figure 25 shows the controller response to θ̇. It has a small oscillation due to the data provided by the IMU; even filtered, these data present significant noise. Even so, the angular velocity control features a good response. In free motion, the high delay of the controller is caused by the reorientation of the wheels: it is necessary to stop the brushless motors so that the servo motors can reorient. The servo motors of the front and rear wheels are positioned at 90 degrees and those of the left and right wheels at zero degrees. Then, the brushless motors are actuated to control the angular velocity of AIR-2. The overshoots and delay times presented in the speed control don't affect the inspection, since the operating velocities of the inspection robot are low. The experimental results can be seen in a YouTube video available at: https://youtu.be/46EKARdyP0w.
7 Conclusion

ActionLib has proved that it can be used to implement controllers, exhibiting good results, as shown in the examples. The simple design of the package allows you to make any adjustments to your controller and even to implement new control algorithms. The main disadvantage of ActionLib is that it is not real-time, but its preemptive features allow an almost periodic execution of the controller.

References

1. ROS Wiki. 2016. actionlib. http://wiki.ros.org/actionlib/.
2. Veiga, R., A.S. de Oliveira, L.V.R. Arruda, and F.N. Junior. 2015. Localization and navigation of a climbing robot inside a LPG spherical tank based on dual-LIDAR scanning of weld beads. In Springer Book on Robot Operating System (ROS): The Complete Reference. New York: Springer.
3. Ren, L., W. Wang, and Z. Du. 2012. A new fuzzy intelligent obstacle avoidance control strategy for wheeled mobile robot. In 2012 IEEE International Conference on Mechatronics and Automation, 1732–1737.
4. Pratama, D., E.H. Binugroho, and F. Ardilla. 2015. Movement control of two wheels balancing robot using cascaded PID controller. In International Electronics Symposium (IES), 94–99.
5. de Oliveira, A., L. de Arruda, F. Neves, R. Espinoza, and J. Nadas. 2012. Adhesion force control and active gravitational compensation for autonomous inspection in LPG storage spheres. In Robotics Symposium and Latin American Robotics Symposium (SBR-LARS), 2012 Brazilian, 232–238.
6. Coppelia Robotics. 2016. V-REP: Create. Compose. Simulate. Any robot. http://www.coppeliarobotics.com/.
7. ROS Wiki. 2016. rqt_graph. http://wiki.ros.org/rqt_graph.
8. ROS Wiki. 2016. rqt_plot. http://wiki.ros.org/rqt_plot.
9. Rada-Vilela, J. 2014. fuzzylite: a fuzzy logic control library. http://www.fuzzylite.com.
Parametric Identification of the Dynamics of Mobile Robots and Its Application to the Tuning of Controllers in ROS

Walter Fetter Lages

Abstract This tutorial chapter explains the identification of the parameters of the dynamic model of wheeled mobile robots. Those parameters depend on the mass and inertia of the parts of the robot and, even with the help of modern CAD systems, it is difficult to determine them precisely: the designed robot is not built with 100% accuracy; the actual materials do not have exactly the same properties as modeled in the CAD system; there is cabling whose density changes over time due to robot motion; and there are many other differences between the CAD model and the real robot. To overcome these difficulties and still have a good representation of the dynamics of the robot, this work proposes the identification of the parameters of the model. After an introduction to the recursive least-squares identification method, it is shown that the dynamic model of a mobile robot is a cascade between its kinematic model, which considers velocities as inputs, and its dynamics, which considers torques as inputs, and then that the dynamics can be written as a set of equations linearly parameterized in the unknown parameters, enabling the use of recursive least-squares identification. Although the example is a differential-drive robot, the proposed method can be applied to any robot model that can be parameterized as the product of a vector of parameters and a vector of regressors. The proposed parameter identification method is implemented in a ROS package and can be used with actual robots or robots simulated in Gazebo. The package for the Indigo version of ROS is available at http://www.ece.ufrgs.br/twil/indigo-twil.tgz. The chapter concludes with a full example of identification and the presentation of the dynamic model of a mobile robot and its use for the design of a controller.
The controller is based on three feedback loops. The first one linearizes the dynamics of the robot by using feedback linearization, the second one uses a set of PI controllers to control the dynamics of the robot, and the last one uses a non-linear controller to control the pose of the robot.

W.F. Lages (B)
Federal University of Rio Grande do Sul, Av. Osvaldo Aranha, 103, Porto Alegre RS 90035-190, Brazil
email: [email protected]
URL: http://www.ece.ufrgs.br/~fetter

© Springer International Publishing AG 2017
A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_6

Keywords Parametric identification · Dynamic model · Recursive least-squares · Controller tuning · Feedback linearization · Non-smooth controller

1 Introduction

Like any robot, a wheeled mobile robot is subject to kinematics and dynamics. Its kinematic model depends only on geometric parameters, while its dynamic model depends on geometric parameters and on masses and moments of inertia. However, differently from manipulator robots, the kinematic model of a wheeled mobile robot is represented by differential equations. The output of the kinematic model of a mobile robot is its pose (position and orientation), while its inputs are velocities. Depending on how the model is formulated, its inputs may be the velocity of each wheel, the angular and linear velocities of the robot, or any other variable which is homogeneous to a velocity [9]. Based on those properties of the kinematic model of mobile robots, many controllers for mobile robots command velocities to the robot, under the assumption that those commanded velocities are instantaneously imposed on the robot. Of course, that assumption is only valid if the actuators are powerful enough with respect to the mass and moments of inertia of the robot. That is indeed the case for small robots or robots using servo motors, which are commanded in velocity, as actuators.
In this case, the controller can be designed based on the kinematic model alone, whose parameters are usually well-known. Note that, as the kinematic model of a mobile robot is given by a differential equation, it is often called a dynamic model (because it is described by dynamic, i.e. differential, equations) although it does not actually model the dynamics of the robot. In this chapter, that model is referred to as the kinematic model; the term dynamic model is reserved for models that describe the dynamics of the robot.

However, for larger robots, or robots whose actuators are not powerful enough to impose very fast changes in velocities, it is necessary to consider the dynamics of the robot in the design of the controller. Then, a more sophisticated model of the robot, including its kinematics and its dynamics, should be used. The output of this model is the robot pose, as in the kinematic model, but its inputs are the torques on the wheels. The parameters of this model depend on the masses and moments of inertia of the parts of the robot and, even with the help of modern CAD systems, it is difficult to know some of those values with good precision: the designed robot is not built with 100% accuracy; the actual materials do not have exactly the same properties as modeled in the CAD system; there is cabling whose density changes over time due to motion; and there are many other problems. To overcome the difficulties of considering all constructive details in a mathematical model and still have a good representation of the dynamics of the robot, it is possible to obtain a model by observing the system output given proper inputs, as shown in Fig. 1. This procedure is called system identification [10] or model learning [18].

Fig. 1 Basic block diagram for system identification: a perturbation/noise signal and the input u(t) are applied to the system to be identified; its input u(t) and output y(t) are fed to the identification method, which produces the system model

It is shown that the dynamic model of a mobile robot can be properly parameterized such that the recursive least-squares method [10] can be used for parameter identification. The proposed parameter identification method is implemented in a ROS package and can be used with actual robots or robots simulated in Gazebo. The package for the Indigo version of ROS can be downloaded from http://www.ece.ufrgs.br/twil/indigo-twil.tgz. See Sect. 3 for details on how to install it. The identified parameters and the respective diagonal of the covariance matrix are published as ROS topics, to be used in the off-line design of controllers or even online to implement adaptive controllers. The diagonal of the covariance matrix is a measure of confidence in the parameter estimates and hence can be used to decide whether the identified parameters are good enough. In the case of an adaptive controller, it can be used to decide whether the adaptation should be shut off or not.

The chapter concludes with a complete example of identification and controller design. Note that, although the example and the general method are developed for differential-drive mobile robots, they can be applied to any robot, as long as it is possible to write the model in a way such that the unknown parameters are linearly related to the measured variables, as shown in Sect. 2.1. More specifically, the remainder of this chapter covers the following topics:

• a background on identification
• a background on modeling of mobile robots
• installing the required packages
• testing the installed packages
• description of the package for identification of mobile robots.

2 Background

2.1 Parametric Identification

In order to design a control system, it is generally necessary to have a model of the plant (the system to be controlled).
In many cases, those models can be obtained by analyzing how the system works and using the laws of Physics to write the set of equations describing it. This is called the white-box approach. However, sometimes it is not possible to obtain the model using this approach, due to the complexity of the system or uncertainty about its parameters or operating conditions. In those cases, it might be possible to obtain a model through the observation of the system behavior, as shown in Fig. 1, which is known as the black-box approach and formally called system identification.

In this chapter, the focus is on identification methods which can be used online, because they are more convenient for computational implementation and can be readily used for implementing adaptive controllers. When the parameter estimation is performed online, it is necessary to obtain a new updated estimate in the period between two successive samples. Hence, it is highly desirable for the estimation algorithm to be simple and easily implementable. A particularly interesting class of online algorithms are those in which the current estimate θ(t) is computed as a function of the former estimates, so that it is possible to compute the estimates recursively.

Let a single-input, single-output (SISO) system be represented by its ARX¹ model:

y(t + 1) = a1 y(t) + · · · + ap y(t − p + 1) + b1 u(t) + · · · + bq u(t − q + 1) + ω(t + 1)    (1)

where t is the sampling period index², y ∈ R is the system output, u ∈ R is the system input, ai, i = 1, 2, . . . , p and bj, j = 1, 2, . . . , q are the system parameters and ω(t + 1) is a Gaussian noise representing the uncertainty in the model.

¹ AutoRegressive with eXogenous inputs.
² Note that in system identification theory it is common to use t as the independent variable even though the model is a discrete-time one.

The model (1) can be rewritten as:

y(t + 1) = φT(t)θ + ω(t + 1)    (2)

with

θ = [a1 · · · ap b1 · · · bq]T, the vector of parameters

and

φ(t) = [y(t) · · · y(t − p + 1) u(t) · · · u(t − q + 1)]T, the regression vector.

The identification problem consists in determining θ based on the information (measurements) about y(t + 1) and φ(t) for t = 0, 1, . . . , n. To solve this problem, it can be formulated as an optimization problem with the cost to minimize:

J(n, θ) = (1/n) Σ_{t=0}^{n−1} (y(t + 1) − φT(t)θ)²    (4)

where y(t + 1) − φT(t)θ is the prediction error. More formally:

θ̂(n) = arg min_θ J(n, θ)    (6)

Figure 2 shows a block diagram of the identification system implementing (6): the prediction ŷ(t, θ) computed with the current parameters θ̂(t) is compared to the output of the system to identify, and the prediction error e(t, θ) drives the algorithm minimizing J(n, θ).

Fig. 2 Block diagram of system identification

In order to solve the minimization (6), it is convenient to write it as:

θ̂(n) = arg min_θ (Y(n) − Φ(n)θ)T (Y(n) − Φ(n)θ)    (7)

with

Y(n) = [y(1) y(2) · · · y(n)]T

and

Φ(n) = [φ(0) φ(1) · · · φ(n − 1)]T

Then, (6) can be solved by making the differential of J(n, θ) with respect to θ equal to zero:

∂J(n, θ)/∂θ |_{θ=θ̂(n)} = 0 = −2ΦT(n)Y(n) + 2ΦT(n)Φ(n)θ̂(n)    (10)

θ̂(n) = (ΦT(n)Φ(n))⁻¹ ΦT(n)Y(n)    (11)

or

θ̂(n) = (Σ_{t=0}^{n−1} φ(t)φT(t))⁻¹ Σ_{t=0}^{n−1} φ(t)y(t + 1)    (12)

Expression (12) is the solution of (6) and can be used to compute an estimate θ̂ of the vector of parameters θ at time instant n. However, this expression is not in a recursive form and is not practical for online computing, because it requires a matrix inversion and a recomputation over all past data for each update of the estimate. Furthermore, n keeps increasing without bound, thus increasing computation time and memory requirements. For online computation it is convenient to have a recursive form of (12), such that at each update time the new data can be assimilated without the need to compute everything again.
To obtain such a recursive form, define:

P(n) = (Σ_{t=0}^{n} φ(t)φT(t))⁻¹    (13)

then, from (12):

θ̂(n + 1) = P(n) Σ_{t=0}^{n} φ(t)y(t + 1)    (14)

On the other hand:

P⁻¹(n) = Σ_{t=0}^{n−1} φ(t)φT(t) + φ(n)φT(n) = P⁻¹(n − 1) + φ(n)φT(n)    (15)

hence

P(n) = (P⁻¹(n − 1) + φ(n)φT(n))⁻¹    (16)

By using the Matrix Inversion Lemma³ with A = P⁻¹(n − 1), B = φ(n), C = 1 and D = φT(n), it is possible to compute:

P(n) = P(n − 1) − (P(n − 1)φ(n)φT(n)P(n − 1)) / (1 + φT(n)P(n − 1)φ(n))    (17)

which, replaced in (14), results in:

θ̂(n + 1) = P(n) (Σ_{t=0}^{n−1} φ(t)y(t + 1) + φ(n)y(n + 1))

By expanding this product, delaying (14) by one sampling period to replace the term Σ_{t=0}^{n−1} φ(t)y(t + 1) with P⁻¹(n − 1)θ̂(n), and grouping together the terms in φ(n)y(n + 1), the update can be rewritten as:

θ̂(n + 1) = θ̂(n) + (P(n − 1)φ(n)) / (1 + φT(n)P(n − 1)φ(n)) (y(n + 1) − φT(n)θ̂(n))    (28)

The term multiplying the prediction error can be regarded as the optimal gain of the identification algorithm.

³ Matrix Inversion Lemma: (A + BC D)⁻¹ = A⁻¹ − A⁻¹B(C⁻¹ + D A⁻¹B)⁻¹ D A⁻¹.
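The recursive form just derived amounts to only a few lines of code. The sketch below implements it for a two-parameter ARX model (p = q = 1), so θ = [a1 b1]ᵀ and φ(t) = [y(t) u(t)]ᵀ; the class name and the initialization P(−1) = 1000·I are illustrative choices of this example, not part of the chapter's ROS package. Fed noiseless data from a known system with a sufficiently rich input, the estimate converges to the true parameters after a few samples.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Recursive least-squares for theta = [a1, b1] of y(t+1) = a1*y(t) + b1*u(t).
struct Rls2 {
  std::array<double, 2> theta{{0.0, 0.0}};         // parameter estimate
  double P[2][2] = {{1000.0, 0.0}, {0.0, 1000.0}}; // covariance, P(-1) = c*I

  // One update with regressor phi = [y(t), u(t)] and measured output y(t+1).
  void update(const std::array<double, 2>& phi, double y) {
    double Pphi[2] = {P[0][0] * phi[0] + P[0][1] * phi[1],
                      P[1][0] * phi[0] + P[1][1] * phi[1]};
    double denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1];
    double K[2] = {Pphi[0] / denom, Pphi[1] / denom};        // gain
    double e = y - (phi[0] * theta[0] + phi[1] * theta[1]);  // prediction error
    theta[0] += K[0] * e;                                    // estimate update
    theta[1] += K[1] * e;
    double newP[2][2];                                       // covariance update
    for (int i = 0; i < 2; ++i)
      for (int j = 0; j < 2; ++j)
        newP[i][j] = P[i][j] - K[i] * (phi[0] * P[0][j] + phi[1] * P[1][j]);
    for (int i = 0; i < 2; ++i)
      for (int j = 0; j < 2; ++j) P[i][j] = newP[i][j];
  }
};
```

For instance, driving a system with a1 = 0.8, b1 = 0.5 and an input that keeps the regressor persistently exciting makes theta approach (0.8, 0.5).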
Hence, the solution for the problem (6) in a recursive form is given by:

\hat{\theta}(n+1) = \hat{\theta}(n) + K(n) \left[ y(n+1) - \varphi^T(n)\hat{\theta}(n) \right] \qquad (29)

K(n) = \frac{P(n-1)\varphi(n)}{1 + \varphi^T(n)P(n-1)\varphi(n)} \qquad (30)

P(n) = \left[ I - K(n)\varphi^T(n) \right] P(n-1) \qquad (31)

Expression (29) is an update of the previous parameter estimate θ̂(n) by an optimal gain K(n), from (30), multiplied by the prediction error y(n + 1) − ŷ(n + 1). Note that ŷ(n + 1) = φ^T(n)θ̂(n) is the prediction of the system output. It can be shown that P(n), as computed by (31), is the covariance of the prediction error and hence a measure of the confidence in the parameter estimates.

Algorithm 1 details the procedure for parameter identification:

Algorithm 1 Recursive Least-Squares.
Initialize φ(0), θ̂(0) and P(−1) = cI.
At sampling time n + 1:
1. Read the system output y(n + 1) from the sensors.
2. Compute the prediction of the system output ŷ(n + 1):
   \hat{y}(n+1) = \varphi^T(n)\hat{\theta}(n)
3. Compute the gain K(n):
   K(n) = \frac{P(n-1)\varphi(n)}{1 + \varphi^T(n)P(n-1)\varphi(n)}
4. Update the parameter vector estimate:
   \hat{\theta}(n+1) = \hat{\theta}(n) + K(n) \left[ y(n+1) - \hat{y}(n+1) \right]
5. Store θ̂(n + 1) for use, if necessary.
6. Update the covariance matrix:
   P(n) = \left[ I - K(n)\varphi^T(n) \right] P(n-1)
7. Wait for the next sampling time.
8. Increment n and return to step 1.

W.F. Lages

2.2 Mobile Robot Model

The model of the mobile robot used in this chapter is described in this section. Figure 3 shows the coordinate systems used to describe the mobile robot model, where X_c and Y_c are the axes of the coordinate system attached to the robot and X_0 and Y_0 form the inertial coordinate system.

Fig. 3 Coordinate systems

The pose (position and orientation) of the robot is represented by x = [x_c y_c θ_c]^T. The mobile robot dynamic model can be obtained based on the Lagrange-Euler formulation [9] and is given by:

\dot{x} = B(x)u
H(\beta)\dot{u} + f(\beta, u) = F(\beta)\tau \qquad (36)

where β is the vector of the angles of the caster wheels, u = [v ω]^T is the vector of the linear and angular velocities of the robot and τ is the vector of input torques on the wheels.
B(x) is a matrix whose structure depends on the kinematic (geometric) properties of the robot, while H(β), f(β, u) and F(β) depend on the kinematic and dynamic (mass and inertia) parameters of the robot. Although this chapter is based on a differential-drive mobile robot, the model (36) is valid for any type of wheeled mobile robot. See [9] for details and examples for other types of wheeled mobile robots.

Fig. 4 Cascade between dynamics and the kinematic model
Fig. 5 The Twil mobile robot

Note that the dynamic model of the robot is a cascade between its kinematic model (the first expression of (36), with velocities as inputs) and its dynamics (the second expression of (36), with torques as inputs), as shown in Fig. 4.

This chapter is based on the Twil mobile robot (see Fig. 5), which is a differential-drive mobile robot, but the results and the ROS package for parameter identification can be used directly for any other differential-drive mobile robot, as they do not depend on Twil characteristics. For other types of wheeled mobile robots, the model has the same form as (36) and, given its particular characteristics such as the location of the wheels with respect to the robot reference frame, the model (36) can be customized and rewritten in a form similar to the one used here for differential-drive robots. Then, the same procedure can be used for parameter estimation.

The matrices of the model (36) customized for a differential-drive robot such as Twil are:

B(x) = \begin{bmatrix} \cos\theta_c & 0 \\ \sin\theta_c & 0 \\ 0 & 1 \end{bmatrix} \qquad (37)

H(\beta) = I \qquad (38)

f(\beta, u) = f(u) = -\begin{bmatrix} 0 & K_5 \\ K_6 & 0 \end{bmatrix} \begin{bmatrix} u_1 u_2 \\ u_2^2 \end{bmatrix} \qquad (39)

F(\beta) = F = \begin{bmatrix} K_7 & K_7 \\ K_8 & -K_8 \end{bmatrix} \qquad (40)

where I is the identity matrix and K_5, K_6, K_7 and K_8 are constants depending only on the geometric and inertia parameters of the robot. Note that for this robot H(β), f(β, u) and F(β) do not actually depend on β.
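A quick way to see the cascade structure of (36)–(40) is to integrate it numerically. The sketch below uses simple Euler integration and placeholder values for K5–K8 (these are not the identified Twil values, just plausible magnitudes for illustration):

```cpp
#include <cmath>

// Euler integration of the cascade (36) with the differential-drive matrices
// (37)-(40): the dynamics produce [vdot; wdot] from the wheel torques, and
// the kinematics integrate the pose [x; y; th] from [v; w].
struct State { double x, y, th, v, w; };

State simulate(double tau1, double tau2, double T, int steps) {
    // Placeholder parameters (assumed, not Twil's identified values).
    const double K5 = 0.01, K6 = 0.2, K7 = 20.0, K8 = -15.0;
    State s{0, 0, 0, 0, 0};
    for (int k = 0; k < steps; ++k) {
        // Dynamics, second expression of (36): udot = -f(u) + F*tau.
        double vdot = K5 * s.w * s.w + K7 * (tau1 + tau2);
        double wdot = K6 * s.v * s.w + K8 * (tau1 - tau2);
        // Kinematics, first expression of (36): xdot = B(x)*u.
        s.x  += T * s.v * std::cos(s.th);
        s.y  += T * s.v * std::sin(s.th);
        s.th += T * s.w;
        // Advance the velocities last (explicit Euler).
        s.v  += T * vdot;
        s.w  += T * wdot;
    }
    return s;
}
```

Applying equal torques to both wheels makes the K8 (difference-of-torques) term vanish, so ω stays at zero and the robot accelerates along a straight line, which matches the physical reading of (40).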
Note that only the dynamics of the robot depends on the mass and inertia parameters, which are difficult to know with precision. The parameters of the kinematics depend only on the geometry of the robot and can be obtained with good precision by calibration. Furthermore, u is the vector of linear and angular velocities of the robot, which can easily be measured, while x is the robot pose, which is more difficult to obtain. Therefore, only the part of the model regarding the dynamics is used for parameter estimation, taking u as output and τ as input.

In the following it is shown that the dynamics of the robot can be written as a set of equations in the form y(k + 1) = φ^T(k)θ(k), where y is the acceleration (measured or estimated from velocities), φ is the vector of regressors (measured velocities and applied torques) and θ is the vector of unknown parameters to be identified. Then, it is possible to obtain an estimate θ̂ for θ by using the recursive least-squares algorithm [10] described in Sect. 2.1.

The parameters K5, K6, K7 and K8 depend on the geometric and mass properties of the robot in a very complex way. Even for a robot simulated in Gazebo the same problem arises, as the model (36) is simpler than a typical robot described in URDF, which usually includes many constructive details for a realistic and good-looking animation. On the other hand, the model described in URDF is not available in a closed form such as (36), whose structure can be exploited in the design of a controller. Also, it is not trivial to obtain a model in the form of (36) equivalent to a URDF description. To overcome the difficulty of considering all constructive details in an algebraic model such as (36) and still have a good representation of the dynamics of the robot, the parameters of the model are identified.

In obtaining a model in a form suitable to be identified by the recursive least-squares algorithm described in Sect. 2.1, it is important to note that only the second expression of (36) depends on the unknown parameters. Therefore, only the second expression of (36) will be used for parameter estimation, taking u as output and τ as input.

By using (37)–(40), the second expression of (36) for the Twil robot can be written as:

\dot{u} = \begin{bmatrix} 0 & K_5 \\ K_6 & 0 \end{bmatrix} \begin{bmatrix} u_1 u_2 \\ u_2^2 \end{bmatrix} + \begin{bmatrix} K_7 & K_7 \\ K_8 & -K_8 \end{bmatrix} \tau \qquad (41)

Although (41) seems somewhat cryptic, its physical meaning can be understood by noting that u̇ = [v̇ ω̇]^T is the vector of linear and angular accelerations of the robot. Hence, the term K_5 u_2^2 represents the centrifugal acceleration and the term K_6 u_1 u_2 represents the Coriolis acceleration. Also, as the linear acceleration of the robot is proportional to the average of the torques applied to the left and right wheels, 1/K_7 represents the robot mass and, as the angular acceleration of the robot is proportional to the difference of torques, 1/K_8 represents the moment of inertia of the robot.

For the purpose of identifying K_5, K_6, K_7 and K_8 it is convenient to write (41) as two scalar expressions:

\dot{u}_1 = K_5 u_2^2 + K_7 (\tau_1 + \tau_2) \qquad (42)
\dot{u}_2 = K_6 u_1 u_2 + K_8 (\tau_1 - \tau_2) \qquad (43)

Then, by discretizing (42)–(43) with sampling period T, it is possible to obtain two recursive models: one linearly parameterized in K_5 and K_7 and another linearly parameterized in K_6 and K_8:

y_1(k+1) = \dot{u}_1(k) \approx \frac{u_1(k+1) - u_1(k)}{T} = K_5 u_2^2(k) + K_7 \left( \tau_1(k) + \tau_2(k) \right)

y_1(k+1) = \begin{bmatrix} u_2^2(k) \\ \tau_1(k) + \tau_2(k) \end{bmatrix}^T \begin{bmatrix} K_5 \\ K_7 \end{bmatrix} \qquad (46)

y_2(k+1) = \dot{u}_2(k) \approx \frac{u_2(k+1) - u_2(k)}{T} = K_6 u_1(k) u_2(k) + K_8 \left( \tau_1(k) - \tau_2(k) \right)

y_2(k+1) = \begin{bmatrix} u_1(k) u_2(k) \\ \tau_1(k) - \tau_2(k) \end{bmatrix}^T \begin{bmatrix} K_6 \\ K_8 \end{bmatrix} \qquad (49)

Note that it is easier and more convenient to identify two models depending on two parameters each than to identify a single model depending on four parameters.
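The two-parameter sub-model (46) can be exercised end-to-end with the recursive least-squares update (29)–(31). The sketch below is an illustration, not the chapter's node: it generates noiseless data from the discretized (42) with assumed "true" values for K5 and K7, uses a PRBS-like torque pattern for excitation, and works with plain 2×2 arrays instead of Eigen:

```cpp
#include <array>
#include <cmath>

// RLS (29)-(31) on sub-model (46): y1(k+1) = phi1^T(k)*theta1,
// phi1 = [u2^2(k), tau1(k)+tau2(k)], theta1 = [K5, K7].
// K5 and K7 below are arbitrary values used only to generate the data.
std::array<double, 2> identifyK5K7(int steps) {
    const double K5 = 0.004, K7 = 18.0, T = 0.01;   // assumed truth + period
    double th[2] = {0, 0};                          // theta estimate
    double P[2][2] = {{1e6, 0}, {0, 1e6}};          // P(-1) = c*I, large c
    double u1 = 0, u2 = 0;
    for (int k = 0; k < steps; ++k) {
        double tau1 = (k % 7 < 3) ? 1.0 : -1.0;     // PRBS-like inputs
        double tau2 = (k % 5 < 2) ? 1.0 : -1.0;
        double phi[2] = {u2 * u2, tau1 + tau2};
        // Advance the "plant": u1(k+1) = u1(k) + T*(K5*u2^2 + K7*(tau1+tau2)).
        double u1Next = u1 + T * (K5 * u2 * u2 + K7 * (tau1 + tau2));
        u2 += T * 0.5 * (tau1 - tau2);              // any angular motion will do
        double y = (u1Next - u1) / T;               // y1(k+1), as in (46)
        // Gain (30), parameter update (29) and covariance update (31).
        double Pp[2] = {P[0][0]*phi[0] + P[0][1]*phi[1],
                        P[1][0]*phi[0] + P[1][1]*phi[1]};
        double den = 1.0 + phi[0]*Pp[0] + phi[1]*Pp[1];
        double K[2] = {Pp[0]/den, Pp[1]/den};
        double e = y - (phi[0]*th[0] + phi[1]*th[1]);
        th[0] += K[0]*e;  th[1] += K[1]*e;
        double Pn[2][2];
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                Pn[i][j] = P[i][j] - K[i]*(phi[0]*P[0][j] + phi[1]*P[1][j]);
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j) P[i][j] = Pn[i][j];
        u1 = u1Next;
    }
    return {th[0], th[1]};
}
```

Since the data are noiseless and the inputs are persistently exciting, the estimates converge essentially to the generating values; with measurement noise they would converge to a neighborhood of them instead, with the covariance P indicating the remaining uncertainty.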
Then, by defining:

\varphi_1(k) = \begin{bmatrix} u_2^2(k) \\ \tau_1(k) + \tau_2(k) \end{bmatrix} \qquad (50)

\theta_1(k) = \begin{bmatrix} K_5 \\ K_7 \end{bmatrix} \qquad (51)

\varphi_2(k) = \begin{bmatrix} u_1(k) u_2(k) \\ \tau_1(k) - \tau_2(k) \end{bmatrix} \qquad (52)

\theta_2(k) = \begin{bmatrix} K_6 \\ K_8 \end{bmatrix} \qquad (53)

it is possible to write (46) and (49) as:

y_1(k+1) = \varphi_1^T(k)\theta_1(k) \qquad (54)
y_2(k+1) = \varphi_2^T(k)\theta_2(k) \qquad (55)

and then it is possible to obtain an estimate θ̂_i for θ_i by using a standard recursive least-squares algorithm such as described in Sect. 2.1:

\hat{y}_i(n+1) = \varphi_i^T(n)\hat{\theta}_i(n) \qquad (56)

K_i(n) = \frac{P_i(n-1)\varphi_i(n)}{1 + \varphi_i^T(n)P_i(n-1)\varphi_i(n)} \qquad (57)

\hat{\theta}_i(n+1) = \hat{\theta}_i(n) + K_i(n) \left[ y_i(n+1) - \hat{y}_i(n+1) \right] \qquad (58)

P_i(n) = \left[ I - K_i(n)\varphi_i^T(n) \right] P_i(n-1) \qquad (59)

where ŷ_i(n + 1) are estimates of y_i(n + 1), K_i(n) are the gains and P_i(n) are the covariance matrices.

3 ROS Packages for Identification of Robot Model

This section describes the installation of some packages useful for the implementation of the identification procedure described in Sect. 2. Some of them are not present in a standard installation of ROS and should be installed. Also, some custom packages with our implementation of the identification should be installed.

3.1 Setting up a Catkin Workspace

The packages to be installed for implementing ROS controllers assume an existing catkin workspace. If it does not exist, it can be created with the following commands (assuming a ROS Indigo version):

source /opt/ros/indigo/setup.bash
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace
cd ~/catkin_ws
catkin_make
source ~/catkin_ws/devel/setup.bash

3.2 ros_control

The ros_control meta-package includes a set of packages to implement generic controllers. It is a rewrite of the pr2_mechanism packages to be used with all robots and not just with the PR2. This package implements the base architecture of ROS controllers and hence is required for setting up controllers for the robot. In particular, for the identification method proposed here, it is necessary to actuate the robot directly, that is, without any controller, neither an open-loop nor a closed-loop controller.
This can be done in ROS by configuring a forward controller, a controller that just replicates its input in its output. This meta-package includes the following packages:

control_toolbox: contains classes that are useful for all controllers, such as PID controllers.
controller_interface: implements a base class for interfacing with controllers, the Controller class.
controller_manager: implements the ControllerManager class, which loads, unloads, starts and stops controllers.
hardware_interface: base classes for implementing the hardware interface, the RobotHW and JointHandle classes.
joint_limits_interface: base class for implementing the joint limits.
transmission_interface: base class for implementing the transmission interface.
realtime_tools: contains a set of tools that can be used from a hard real-time thread.

The ros_control meta-package is not included in the standard ROS desktop installation, hence it should be installed. On Ubuntu, it can be installed from Debian packages with the command:

sudo apt-get install ros-indigo-ros-control

3.3 ros_controllers

This meta-package is necessary to make the forward_command_controller and the joint_state_controller controllers available. The robot is put in open loop by using the forward_command_controller controller, and then the desired input signal can be applied for identification. The joint_state_controller controller, which by its name may seem to be a state-space controller in the joint space, is actually just a publisher for the values of the positions and velocities of the joints. The topic where it publishes is then used to obtain the output of the robot system to be used in the identification. More specifically, ros_controllers includes the following:

forward_command_controller: just a bypass from the reference to the control action, as they are the same physical variable.
effort_controllers: implements effort controllers, that is, SISO controllers in which the control action is the torque (or an equivalent physical variable) applied to the robot joint. There are three types of effort_controllers, depending on the type of the reference and controlled variable:
effort_controllers/joint_effort_controller: just a bypass from the reference to the control action, as they are the same physical variable.
effort_controllers/joint_position_controller: a controller in which the reference is joint position and the control action is torque. The PID control law is used.
effort_controllers/joint_velocity_controller: a controller in which the reference is joint velocity and the control action is torque. The PID control law is used.
position_controllers: implements SISO controllers in which the control action is the position (or an equivalent physical variable) applied to the robot joint. Currently, there is just one type of position_controllers:
position_controllers/joint_position_controller: just a bypass from the reference to the control action, as they are the same physical variable.
velocity_controllers: implements SISO controllers in which the control action is the velocity (or an equivalent physical variable) applied to the robot joint. Currently, there is just one type of velocity_controllers:
velocity_controllers/joint_velocity_controller: just a bypass from the reference to the control action, as they are the same physical variable.
joint_state_controller: implements a sensor which publishes the joint state as a sensor_msgs/JointState message, the JointStateController class.

The ros_controllers meta-package is not included in the standard ROS desktop installation, hence it should be installed.
On Ubuntu, it can be installed from Debian packages with the command:

sudo apt-get install ros-indigo-ros-controllers

3.4 gazebo_ros_pkgs

This is a collection of ROS packages for integrating the ros_control controller architecture with the Gazebo simulator [12], containing the following:

gazebo_ros_control: Gazebo plugin that instantiates the RobotHW class in a DefaultRobotHWSim class, which interfaces with a robot simulated in Gazebo. It also implements the GazeboRosControlPlugin class.

The gazebo_ros_pkgs meta-package is not included in the standard ROS desktop installation, hence it should be installed. On Ubuntu, it can be installed from Debian packages with the command:

sudo apt-get install ros-indigo-gazebo-ros-pkgs ros-indigo-gazebo-ros-control

3.5 twil

This is a meta-package with the packages for the identification of the Twil robot. It contains a URDF description of the Twil mobile robot [4], the implementation of the identification, and some controllers used for identification or using the identified parameters. More specifically, it includes the following packages:

twil_description: URDF description of the Twil mobile robot.
twil_controllers: implementation of a forward controller, a PID controller and a linearizing controller for the Twil mobile robot.
twil_ident: ROS node implementing the recursive least-squares identification of the parameters of a differential-drive mobile robot.

The twil meta-package can be downloaded and installed in the ROS catkin workspace with the commands:

cd ~/catkin_ws/src
wget http://www.ece.ufrgs.br/twil/indigo-twil.tgz
tar -xzf indigo-twil.tgz
cd ~/catkin_ws
catkin_make
source ~/catkin_ws/devel/setup.bash

4 Testing the Installed Packages

A simple test for the installation of the packages described in Sect. 3 is performed here.
The installation of the ROS packages can be tested by loading the Twil model in Gazebo and launching the joint effort controllers with the commands:

source /opt/ros/indigo/setup.bash
source ~/catkin_ws/devel/setup.bash
roslaunch twil_controllers joint_effort.launch

The robot should appear in Gazebo as shown in Fig. 6. Then, start the simulation by clicking the play button in the Gazebo panel, open a new terminal and issue the following commands to move the robot:

source /opt/ros/indigo/setup.bash
source ~/catkin_ws/devel/setup.bash
rosrun twil_controllers test_openloop.sh

If everything is right, the Twil robot should move for some seconds and then stop, as shown in Fig. 7. In this simulation, the Twil mobile robot is driven by standard ROS controllers implementing a bypass from the reference to the output. This is equivalent to driving the robot in open loop, that is, without any feedback controller. The effort_controllers/JointEffortController controller implements just a bypass from its input to its output, as its input is effort and its output is effort as well. The example uses one such controller in each wheel of the Twil mobile robot, effectively keeping it in open loop. Hence, the reference applied to the controllers is directly the torque applied to each wheel.

Figure 8 shows the computation graph for this example. The controllers themselves are not shown because they are plugins loaded by the controller manager and hence are not individual ROS nodes. The right wheel controller receives its reference through the /twil/right_wheel_joint_effort_controller_command topic and the left wheel controller receives its reference through the /twil/left_wheel_joint_effort_controller_command

Fig. 6 Twil mobile robot in Gazebo
Fig. 7 Gazebo with Twil robot after test motion

topic. The /joint_states topic is where the state of the joints (wheels) is published by the JointStateController controller.
In the next sections, the data published in this topic will be used to identify the parameters of the Twil mobile robot. For a good identification, adequate signals, as detailed in Sect. 5.3, will be applied to the /twil/right_wheel_joint_effort_controller_command and /twil/left_wheel_joint_effort_controller_command topics.

The test_openloop.sh script is an example of how to set the reference for the controllers, in this case the torque on the right and left wheels of the Twil robot. The script just publishes the required values by using the rostopic command. In a real application, probably with a more sophisticated controller, those references would be generated by a planning package, such as MoveIt! [24], or a robot navigation package, such as the Navigation Stack [15, 16]. In the case of an identification task, as discussed here, the references for the controllers are generated by a node implementing the identification algorithm.

Fig. 8 Computation graph for Twil in open loop

5 Implementation of Parametric Identification in ROS

In this section, the twil ROS meta-package is detailed. This meta-package consists of a URDF description of the Twil mobile robot (twil_description), the implementation of some controllers for Twil (twil_controllers) and a ROS node implementing the parametric identification (twil_ident). Although the parametric identification launch file is configured for Twil, the source code for the identification is generic and should work directly with any differential-drive mobile robot and, with minor modifications, with any wheeled mobile robot. Hence, in most cases, the package can be used with any robot by just adapting the launch file.

5.1 twil_description Package

The twil_description package has the URDF description of the Twil robot.
Files in the xacro directory describe the geometric and mass parameters of the many bodies used to compose the Twil robot, while the meshes directory holds the STereoLithography (STL) files describing the shapes of the bodies. The files in the launch directory are used to load the Twil model into the ROS parameter server. The twil.launch file just loads the Twil model into the parameter server and is intended to be used with the actual robot, while the twil_sim.launch file loads the Twil model into the parameter server and launches the Gazebo simulator.

It is beyond the scope of this chapter to discuss the modeling of robots in URDF. The reader is directed to the introductory ROS references for learning the details about URDF modeling in general. However, one key point for simulating ROS controllers in Gazebo is to tell it to load the plugin for connecting with ros_control. In the twil_description package this is done in the top-level URDF file, within the <gazebo> tag, as shown in Listing 1. See [13] for a detailed description of the plugin configuration.

Listing 1 Plugin description in twil.urdf.xacro (only the /twil namespace and the 0.001 s control period are shown; the XML structure follows the gazebo_ros_control plugin configuration):

<gazebo>
  <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
    <robotNamespace>/twil</robotNamespace>
    <controlPeriod>0.001</controlPeriod>
  </plugin>
</gazebo>

5.2 twil_controllers Package

The twil_controllers package implements the controllers for Twil. In particular, a Cartesian linearizing controller is implemented as an example of using the results of the parametric identification. The files in the config directory specify the parameters for the controllers, such as the joints of the robot associated with the controller, its gains and sampling rate. The script directory has some useful scripts for setting the reference for the controllers, which can be used for testing them. Note that although only the CartLinearizingController is implemented in this package, Twil can use other controllers, such as those implemented in the ros_controllers package. In particular, the controller used for identification (effort_controllers/JointEffortController) comes from the ros_controllers package.
The files in the src directory are the implementation of the controllers for Twil, in particular the CartLinearizingController, derived from the Controller class, while the include directory holds the files with the declarations of those classes. The twil_controllers_plugins.xml file specifies that the classes implementing the controllers are plugins for the ROS controller manager. The files in the launch directory are used to load and start the controllers with the respective configuration files. The detailed description of the implementation of controllers is not in the scope of this chapter. See [13] for a detailed discussion about the implementation of controllers in ROS.

5.3 twil_ident Package

The twil_ident package contains a ROS node implementing the parameter identification procedure described in Sect. 2.1. Again, the source code is in the src directory and there is a launch file in the launch directory which is used to load and start the node. The identification node can be launched through the command:

roslaunch twil_ident ident.launch

The launch file is shown in Listing 2. Initially, there are remaps of the topic names used as references for the controllers of the right and left wheels; then another launch file, which loads Gazebo with the Twil simulation, is called. The next step is loading the controller parameters from the configuration file effort_control.yaml into the parameter server, and the controller manager node is spawned to load the controllers for both wheels and the joint_state_controller, which publishes the robot state. Finally, the identification node is loaded and the identification procedure starts.

Listing 2 Launch file ident.launch.

Fig. 9 Computation graph for parameter identification

Figure 9 shows the computation graph used for the identification. The difference with respect to the computation graph in Fig. 8 is that the torques to be applied to the wheels are published on the /twil/right_wheel_command and /twil/left_wheel_command topics by the ident node.
These topics are the same ones described in Sect. 4, but their names were changed to force the connection between the /gazebo and the /twil/ident nodes. The /twil/ident node implements the parameter identification: it subscribes to the /joint_states topic in order to obtain the joint (wheel) velocities necessary for the identification algorithm and publishes the torque commands for the wheels in the /twil/right_wheel_command and /twil/left_wheel_command topics. The torques follow a Pseudo Random Binary Sequence (PRBS) pattern [21] in order to ensure the persistent excitation needed for a good identification of the parameters.

The identified parameters and the respective diagonal of the covariance matrix are published on the /twil/ident/dynamic_parameters topic. Those values can be used offline, for the implementation of non-adaptive controllers, or online, to implement adaptive controllers. In this case, the controller can subscribe to the /twil/ident/dynamic_parameters topic to receive updates of the identified parameters and the respective diagonal of the covariance matrix, which is a measure of confidence in the parameter estimation and hence can be used to decide whether the adaptation should be shut off or not. This way, the mass and inertia parameters of the robot would be adjusted online for variations due to changes in workload, for example. In order to ensure the reliability of the identified values, parameters would be changed only if the associated covariance is small enough.

The Ident class is shown in Listing 3.
The private variable members of the Ident class are:

node_: the ROS node handle
jointStatesSubscriber: ROS topic subscriber to receive the joint state
dynParamPublisher: ROS topic to publish the identified parameters
leftWheelCommandPublisher: ROS topic to publish the reference for the left wheel controller
rightWheelCommandPublisher: ROS topic to publish the reference for the right wheel controller
nJoints: the number of joints (wheels) of the robot
u: vector of joint velocities
thetaEst1: vector of estimated parameters θ̂1
thetaEst2: vector of estimated parameters θ̂2
P1: covariance of the error in estimates θ̂1
P2: covariance of the error in estimates θ̂2
prbs: vector of PRBS sequences used as input to the robot for identification
lastTime: time of the last identification iteration

Listing 3 Ident class.

class Ident {
public:
    Ident(ros::NodeHandle node);
    ~Ident(void);
    void setCommand(void);

private:
    ros::NodeHandle node_;
    ros::Subscriber jointStatesSubscriber;
    ros::Publisher dynParamPublisher;
    ros::Publisher leftWheelCommandPublisher;
    ros::Publisher rightWheelCommandPublisher;
    const int nJoints;

    Eigen::VectorXd u;
    Eigen::VectorXd thetaEst1;
    Eigen::VectorXd thetaEst2;
    Eigen::MatrixXd P1;
    Eigen::MatrixXd P2;

    std::vector prbs;
    ros::Time lastTime;

    void jointStatesCB(const sensor_msgs::JointState::ConstPtr &jointStates);
    void resetCovariance(void);
};

The jointStatesCB() function is the callback for receiving the state of the robot and running an identifier iteration, as shown in Listing 4.

Listing 4 jointStatesCB() function.
void Ident::jointStatesCB(const sensor_msgs::JointState::ConstPtr &jointStates)
{
    ros::Duration dt=jointStates->header.stamp-lastTime;
    lastTime=jointStates->header.stamp;

    Eigen::VectorXd y=-u;                     // y(k+1)=(u(k+1)-u(k))/dt

    Eigen::VectorXd Phi1(nJoints);
    Eigen::VectorXd Phi2(nJoints);
    Phi1[0]=u[1]*u[1];                        // u2^2(k)
    Phi2[0]=u[0]*u[1];                        // u1(k)*u2(k)

    Eigen::VectorXd torque(nJoints);
    for(int i=0;i < nJoints;i++) {
        u[i]=jointStates->velocity[i];        // u(k+1)
        torque[i]=jointStates->effort[i];     // torque(k)
    }
    y+=u;
    y/=dt.toSec();

    Phi1[1]=torque[0]+torque[1];
    Phi2[1]=torque[0]-torque[1];

    double yEst1=Phi1.transpose()*thetaEst1;
    Eigen::VectorXd K1=P1*Phi1/(1+Phi1.transpose()*P1*Phi1);
    thetaEst1+=K1*(y[0]-yEst1);
    P1-=K1*Phi1.transpose()*P1;

    double yEst2=Phi2.transpose()*thetaEst2;
    Eigen::VectorXd K2=P2*Phi2/(1+Phi2.transpose()*P2*Phi2);
    thetaEst2+=K2*(y[1]-yEst2);
    P2-=K2*Phi2.transpose()*P2;

    std_msgs::Float64MultiArray dynParam;
    for(int i=0;i < nJoints;i++) {
        dynParam.data.push_back(thetaEst1[i]);
        dynParam.data.push_back(thetaEst2[i]);
    }
    for(int i=0;i < nJoints;i++) {
        dynParam.data.push_back(P1(i,i));
        dynParam.data.push_back(P2(i,i));
    }
    dynParamPublisher.publish(dynParam);
}

In the callback, first the time interval since the last call (dt) is computed; then the φ1(t) and φ2(t) vectors are assembled in the variables Phi1 and Phi2 and the system output y(t + 1) is assembled. Then, the parameter estimates and their covariances are computed from (56)–(59) and, finally, the parameter estimates and the covariance matrix diagonals are published through dynParamPublisher.

The main() function of the ident node is shown in Listing 5. It is just a loop running at 100 Hz publishing torques for the robot wheels through the setCommand() function.

Listing 5 ident node main() function.
int main(int argc,char* argv[])
{
    ros::init(argc,argv,"twil_ident");
    ros::NodeHandle node;

    Ident ident(node);

    ros::Rate loop(100);
    while(ros::ok()) {
        ident.setCommand();
        ros::spinOnce();
        loop.sleep();
    }
    return 0;
}

Torques to be applied to the robot wheels are published by the setCommand() function shown in Listing 6. A PRBS signal with amplitude switching between −5 and 5 Nm is applied to each wheel.

Listing 6 setCommand() function.

void Ident::setCommand(void)
{
    std_msgs::Float64 leftCommand;
    std_msgs::Float64 rightCommand;
    leftCommand.data=10.0*prbs[0]-5.0;
    rightCommand.data=10.0*prbs[1]-5.0;
    leftWheelCommandPublisher.publish(leftCommand);
    rightWheelCommandPublisher.publish(rightCommand);
}

While the identification procedure is running, the estimates of parameters K5, K6, K7 and K8 and their associated covariances are published as a vector in the /twil/ident/dynamic_parameters topic. Hence, the results of the estimation can be observed by monitoring this topic with the command:

rostopic echo /twil/ident/dynamic_parameters

The results for the estimates of parameters K5, K6, K7 and K8 can be seen in Figs. 10, 11, 12 and 13, respectively. Note that, for a better visualization, the time horizon is not the same in all figures. Figures 14, 15, 16 and 17 show the evolution of the diagonal of the covariance matrix related to the K5, K6, K7 and K8 parameters, respectively.

Fig. 10 Evolution of the estimate of the K5 parameter
Fig. 14 Diagonal of the covariance matrix related to the K5 parameter

Although the identified values keep changing over time due to noise, it is possible to consider that they converge to an average value and stop the identification algorithm. The resulting values are shown in Table 1, with the respective diagonal of the covariance matrix. Those values were used for the implementation of the feedback linearization controller.

Parametric Identification of the Dynamics of Mobile Robots …

Table 1 Twil parameters obtained by identification

Parameter | Value    | Covariance diagonal
K5        | 0.00431  | 7.0428 × 10⁻¹²
K6        | 0.18510  | 1.0870 × 10⁻⁹
K7        | 18.7807  | 1.9617 × 10⁻⁶
K8        | −14.3839 | 1.9497 × 10⁻⁶

Given the results in Table 1 and recalling the model (36) and (37)–(40), the identified model of the Twil mobile robot is:

\dot{x} = \begin{bmatrix} \cos\theta_c & 0 \\ \sin\theta_c & 0 \\ 0 & 1 \end{bmatrix} u

\dot{u} = \begin{bmatrix} 0 & 0.00431 \\ 0.18510 & 0 \end{bmatrix} \begin{bmatrix} u_1 u_2 \\ u_2^2 \end{bmatrix} + \begin{bmatrix} 18.7807 & 18.7807 \\ -14.3839 & 14.3839 \end{bmatrix} \tau \qquad (60)

In principle, it sounds pointless to identify the parameters of a simulated robot. However, the simulation performed by Gazebo is based on the Open Dynamics Engine (ODE) library [23] with parameters derived from a URDF description of the robot, which is more detailed than the model used for identification and control. Due to the model non-linearities and the richness of details of the URDF description, it is not easy to compute the equivalent parameters to be used in a closed-form model useful for control design. Hence, those parameters are identified. Note that this situation is analogous to a real robot, where the actual parameters are not the same as the theoretical ones due to many details not being modeled.

6 Controller Design

The model (60) can be used for the design of controllers. Although now all parameters are known, it is still a non-linear model and a cascade of the dynamics and the kinematics, as shown in Fig. 4. Also, as discussed in Sect. 1, there are in the literature many publications dealing with the control of mobile robots using only the kinematic model. Furthermore, the non-holonomic constraints associated with mobile robots, with the exception of the omnidirectional ones, are associated with the first expression of (60), while the second expression is a holonomic system. For this reason, most difficulties in designing a controller for a mobile robot are related to its kinematic model and not to its dynamic model.
In order to build up on the many methods developed to control mobile robots based on the kinematic model alone, the control strategy proposed here considers the kinematics and the dynamics of the robot in two independent steps. See [6, 7] for a control approach dealing with the complete model of the robot in a single step.

The dynamics of the robot is described by the second expression of (60) and has the form:

\dot{u} = f(u) + F\tau \qquad (61)

and a state feedback linearization [11, 13] with the control law:

\tau = F^{-1} \left( \nu - f(u) \right) \qquad (62)

where ν is a new control input, leads to:

\dot{u} = \nu \qquad (63)

which is a linear, decoupled system. That means that each element of u is driven by a single element of ν, that is, u̇_i = ν_i.

For a differential-drive mobile robot such as Twil, the elements of u = [v ω]^T are the linear and angular velocities of the robot. For other types of wheeled mobile robots, the number and meaning of the elements of u would not be the same, but (63) would still have the same form, eventually with larger vectors. Anyway, the transfer function for each element of (63) is:

G_i(s) = \frac{U_i(s)}{V_i(s)} = \frac{1}{s} \qquad (64)

Fig. 18 Block diagram of the controller for the dynamics of the mobile robot: PI controllers on the errors e_i = u_{r_i} − u_i generate ν, which the linearization maps to the torques applied to the mobile robot

In other words, by using the feedback (62), the system (61) is transformed into a set of independent systems, each one with transfer function G_i(s). Each one of these systems can be controlled by a PI controller; then:

\nu_i = K_{p_i} e_i + K_{i_i} \int e_i \, dt \qquad (65)

where e_i = u_{r_i} − u_i, u_{r_i} is the reference for the i-th element of u and K_{p_i} and K_{i_i} are the proportional and integral gains, respectively.
The transfer function of the PI controller (65) is:

$$C_i(s) = \frac{V_i(s)}{E_i(s)} = \frac{K_{pi} s + K_{ii}}{s} \qquad (66)$$

Then, by remembering that $E_i(s) = U_{ri}(s) - U_i(s)$ and using (64) and (66), it is possible to write the closed-loop transfer function as:

$$H_i(s) = \frac{U_i(s)}{U_{ri}(s)} = \frac{C_i(s) G_i(s)}{1 + C_i(s) G_i(s)} = \frac{K_{pi} s + K_{ii}}{s^2 + K_{pi} s + K_{ii}} \qquad (67)$$

Figure 18 shows the block diagram of the proposed controller, which is implemented by using (62) and (65). Note that (65) can be implemented by using the Pid class already implemented in the control_toolbox ROS package, by just making the derivative gain equal to zero.

The performance of the controller is determined by the characteristic polynomial (the denominator) of (67). For canonical second-order systems, the characteristic polynomial is given by:

$$s^2 + 2\xi\omega_n s + \omega_n^2 \qquad (68)$$

where ξ is the damping ratio and ωn is the natural frequency. Hence, it is easy to see that $K_{pi} = 2\xi\omega_n$ and $K_{ii} = \omega_n^2$. Furthermore, the time it takes for the control system to settle to a precision of 1% is given by [19]:

$$T_s = \frac{-\ln 0.01}{\xi\omega_n} = \frac{4.6}{\xi\omega_n} \qquad (69)$$

Therefore, by choosing the damping ratio and the settling time required for each PI controller, it is possible to compute $K_{pi}$ and $K_{ii}$.

Again, for the Twil mobile robot and all differential-drive robots, $u = [v\ \omega]^T$; hence, for this type of robot there are two PI controllers: one for controlling the linear velocity and the other for controlling the angular velocity. In most cases it is convenient to tune both controllers for the same performance; therefore, as the system model is the same, the controller gains would be the same. In robotics, it is usual to set ξ = 1.0, to avoid overshoot, and Ts is the time required for the controlled variable to converge to within 1% of error of the reference. At first one may think that Ts should be set to a very small value. However, the trade-off here is the control effort.
A very small Ts would require a very large ν and hence very large torques on the motors, probably above what they are able to provide. Therefore, Ts should be set to a physically sensible value. By making Ts = 50 ms, from (69), with ξ = 1:

$$\omega_n = \frac{4.6}{\xi T_s} = \frac{4.6}{50\times10^{-3}} = 92\ \text{rad/s} \qquad (70)$$

and

$$K_{p1} = K_{p2} = 2\xi\omega_n = 184 \qquad (71)$$

$$K_{i1} = K_{i2} = \omega_n^2 = 8464 \qquad (72)$$

By using the above gains, the control system shown in Fig. 18 ensures that u will converge to u_r in a time Ts. Then, by commanding u_r, it is possible to steer the mobile robot to the desired pose. To do this in a robust way, it is necessary to have another control loop using the robot pose as feedback signal. By supposing that Ts is selected to be much faster than the pose control loop (at least five times faster), the dynamics (67) can be neglected, and the resulting system (equivalent to seeing the system in Fig. 18 as a single block) can be written as:

$$\dot{x} = \begin{bmatrix} \cos\theta_c & 0 \\ \sin\theta_c & 0 \\ 0 & 1 \end{bmatrix} u_r \qquad (73)$$

It is important to note that using (73) for the design of the pose controller is not the same as using only the kinematic model (first expression of (36)). Although the equations are the same, and hence the same control methods can be used, now there is an internal control loop (Fig. 18) that forces the commanded u_r to be effectively applied to the robot despite the dynamics of the robot. The pose controller used here follows the one proposed in [1] and is a non-linear controller based on Lyapunov theory and a change of the robot model to polar coordinates. Also, as most controllers based on Lyapunov theory, it is assumed that the system should converge to its origin.
However, as it is interesting to be able to stabilize the robot at any pose $x_r = [x_{cr}\ y_{cr}\ \theta_{cr}]^T$, the following coordinate change [2] is used to move the origin of the new system to the reference pose:

$$\bar{x} = \begin{bmatrix} \bar{x}_c \\ \bar{y}_c \\ \bar{\theta}_c \end{bmatrix} = \begin{bmatrix} \cos\theta_{cr} & \sin\theta_{cr} & 0 \\ -\sin\theta_{cr} & \cos\theta_{cr} & 0 \\ 0 & 0 & 1 \end{bmatrix} (x - x_r) \qquad (74)$$

By using a change to polar coordinates [5] given by:

$$e = \sqrt{\bar{x}_c^2 + \bar{y}_c^2} \qquad (75)$$

$$\psi = \operatorname{atan2}(\bar{y}_c, \bar{x}_c) \qquad (76)$$

$$\alpha = \bar{\theta}_c - \psi \qquad (77)$$

the model (73) can be rewritten as:

$$\begin{cases} \dot{e} = \cos\alpha\, u_{r1} \\[1ex] \dot{\psi} = \dfrac{\sin\alpha}{e}\, u_{r1} \\[1ex] \dot{\alpha} = -\dfrac{\sin\alpha}{e}\, u_{r1} + u_{r2} \end{cases} \qquad (78)$$

which is only valid for differential-drive mobile robots. For a similar procedure for other configurations of wheeled mobile robots, see [14]. Then, given a candidate Lyapunov function:

$$V = \frac{1}{2}\left(\lambda_1 e^2 + \lambda_2 \alpha^2 + \lambda_3 \psi^2\right) \qquad (79)$$

with λi > 0, its time derivative is:

$$\dot{V} = \lambda_1 e \dot{e} + \lambda_2 \alpha \dot{\alpha} + \lambda_3 \psi \dot{\psi} \qquad (80)$$

By replacing ė, α̇ and ψ̇ from (78):

$$\dot{V} = \lambda_1 e \cos\alpha\, u_{r1} - \lambda_2 \alpha \frac{\sin\alpha}{e} u_{r1} + \lambda_2 \alpha\, u_{r2} + \lambda_3 \psi \frac{\sin\alpha}{e} u_{r1} \qquad (81)$$

and it can be shown that the input signals:

$$u_{r1} = -\gamma_1 e \cos\alpha \qquad (82)$$

$$u_{r2} = -\gamma_2 \alpha - \gamma_1 \cos\alpha \sin\alpha + \gamma_1 \frac{\lambda_3}{\lambda_2} \frac{\sin\alpha \cos\alpha}{\alpha} \psi \qquad (83)$$

lead to:

$$\dot{V} = -\gamma_1 \lambda_1 e^2 \cos^2\alpha - \gamma_2 \lambda_2 \alpha^2 \leq 0 \qquad (84)$$

which, along with the continuity of V, assures the system stability. However, the convergence of the system state to the origin still needs to be proved. See [22] for other choices of u_r leading to V̇ ≤ 0.

Given that V is lower bounded, that V is non-increasing, as V̇ ≤ 0, and that V̇ is uniformly continuous, as V̈ < ∞, the Barbalat lemma [20] assures that V̇ → 0, which implies α → 0 and e → 0. It remains to be shown that ψ also converges to zero.

To prove that ψ → 0, consider the closed-loop system obtained by applying (82)–(83) to (78), given by:

$$\dot{e} = -\gamma_1 e \cos^2\alpha \qquad (85)$$

$$\dot{\psi} = -\gamma_1 \sin\alpha \cos\alpha \qquad (86)$$

$$\dot{\alpha} = -\gamma_2 \alpha + \gamma_1 \frac{\lambda_3}{\lambda_2} \frac{\sin\alpha \cos\alpha}{\alpha} \psi \qquad (87)$$

Given that ψ is bounded, and that from (86) it can be concluded that ψ̇ is also bounded, it follows that ψ is uniformly continuous, which implies that α̇ is uniformly continuous as well, since α̈ < ∞. Then, it follows from the Barbalat lemma that α → 0 implies α̇ → 0.
Hence, from (87) it follows that ψ → 0. Therefore, (82)–(83) stabilize the system (78) at its origin. Note that although the open-loop system described by (78) has a mathematical indetermination due to the e in the denominator, the closed-loop system (85)–(87) is not undetermined and hence can converge to zero. The indetermination in (78) is not due to a fundamental physical constraint, as it is not present in the original model (73), but was artificially created by the coordinate change (75)–(77). It is a well-known result from [8] that a non-holonomic mobile robot cannot be stabilized to a pose by using a smooth, time-invariant feedback. Here, those limitations are overcome by using a discontinuous coordinate change. Also, the input signals (82)–(83) can always be computed, as sin α/α converges to 1 as α converges to 0. Furthermore, (78) is just an intermediate theoretical step to obtain the expressions for the input signals (82)–(83). There is no need to actually compute it. If using the real robot, there is no need to use the model for simulation, and if using a simulated robot, it can be simulated with the Cartesian model (73), which does not present any indetermination.

Fig. 19 Block diagram of the pose controller considering the kinematics and the dynamics of the mobile robot

Fig. 20 Controller performance in the Cartesian plane

Figure 19 shows a block diagram of the whole pose control system, considering the kinematics and the dynamics of the robot.
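The pose control law (82)–(83) acting on the kinematic model (73) can also be checked numerically. The sketch below is self-contained; the gains γ1, γ2 and weights λ2, λ3 are arbitrary illustrative choices, not values from the chapter:

```python
import numpy as np

# Illustrative gains/weights; the chapter leaves them as design choices.
g1, g2 = 1.0, 2.0
lam2, lam3 = 1.0, 1.0

def polar_control(x, x_ref):
    """Pose control law (82)-(83), evaluated from Cartesian quantities."""
    c, s = np.cos(x_ref[2]), np.sin(x_ref[2])
    dx, dy = x[0] - x_ref[0], x[1] - x_ref[1]
    xb = c * dx + s * dy                 # coordinate change (74)
    yb = -s * dx + c * dy
    tb = x[2] - x_ref[2]
    e = np.hypot(xb, yb)                 # polar coordinates (75)-(77)
    psi = np.arctan2(yb, xb)
    alpha = tb - psi
    sinc = np.sinc(alpha / np.pi)        # sin(alpha)/alpha, finite at alpha = 0
    ur1 = -g1 * e * np.cos(alpha)
    ur2 = (-g2 * alpha
           - g1 * np.cos(alpha) * np.sin(alpha)
           + g1 * (lam3 / lam2) * sinc * np.cos(alpha) * psi)
    return np.array([ur1, ur2])

# Drive the kinematic model (73) to the origin from an offset pose
x = np.array([2.0, 1.0, 0.5])
x_ref = np.zeros(3)
dt = 1e-3
for _ in range(20000):                   # 20 s of simulated time
    u = polar_control(x, x_ref)
    x = x + dt * np.array([np.cos(x[2]) * u[0],
                           np.sin(x[2]) * u[0],
                           u[1]])
```

Note that the 1/e factors of (78) cancel out in (82)–(83), so the control law itself has no indetermination, in line with the remarks above.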
Note that although this control system can theoretically make the robot converge to any pose xr, departing from any pose and without a given trajectory (it is generated implicitly by the controller), in practice it is not a good idea to force xr too far from the current robot position, as this could lead to large torque signals which can saturate the actuators. Hence, in practice xr should be a somewhat smooth reference trajectory, and the controller will force the robot to follow it.

Figure 20 shows the performance of the robot with the proposed controller while performing an 8-shaped path. The red line is the reference path, starting at xr = [0 0 0]^T; the blue line is the actual robot position, starting at x = [0 1 0]^T; and the remaining line is the robot position estimated by odometry, starting at xest = [0 1 0]^T. Note that the starting position of the robot is outside the reference path, and that the controller forces the convergence to the reference path, which is then followed. Also note that the odometry error increases over time, but that is a pose estimation problem, which is not addressed in this chapter. The controller forces the estimated robot position to follow the reference. Unfortunately, using just odometry, the pose estimation is not good, and after some time it does not reflect the actual robot pose. Figure 21 shows the robot orientation. Again, the red line is the reference orientation, the blue line is the actual robot orientation, and the remaining line is the orientation estimated by odometry.

An adaptive version of the controller shown in Fig. 19 can be built by using the proposed controller and the identification module simultaneously, as shown in Fig. 22. Then, the mass and inertia parameters of the robot would be adjusted on-line for variations due to changes in workload, for example. Note that in this case a PRBS pattern of torques is not necessary, as the control input generated by the controller is used.
The noise and external perturbations should provide enough richness in the signal for a good identification. In extreme cases, the persistence of excitation of the control signal could be tested for, and the identification turned off while the signal is not rich enough for a good identification.

Fig. 21 Controller performance: orientation over time

Fig. 22 Block diagram of the adaptive controller

7 Conclusion

This chapter presented the identification of the dynamic model of a mobile robot in ROS. This model is the departure point for the design of advanced controllers for mobile robots. While for small robots it is possible to neglect the dynamics and design a controller based only on the kinematic model, for larger or faster robots the controller should consider the dynamic effects. However, the theoretical determination of the parameters of the dynamic model is not easy due to the many parts of the robot and the uncertainty in the assembly of the robot. The online identification of those parameters makes it possible to overcome those difficulties. The packages used for such identification were described, along with a complete example: from modeling and parameterizing the model to computing the numerical values of the unknown parameters and writing down the model with all its numerical values. The identification method was implemented as an online recursive algorithm, which enables its use in an adaptive controller, where new estimates of the parameters of the model are used to update the parameters of the controller, in a strategy known as indirect adaptive control [17]. The results of the identification procedure were used to design a controller based on the dynamics and the kinematics of the mobile robot Twil, and an adaptive version of that controller was proposed.

References

1. Aicardi, M., G.
Casalino, A. Bicchi, and A. Balestrino. 1995. Closed loop steering of unicycle-like vehicles via Lyapunov techniques. IEEE Robotics and Automation Magazine 2 (1): 27–35.
2. Alves, J.A.V., and W.F. Lages. 2012. Mobile robot control using a cloud of particles. In Proceedings of the 10th International IFAC Symposium on Robot Control, pp. 417–422. International Federation of Automatic Control, Dubrovnik, Croatia. doi:10.3182/20120905-3-HR-2030.00096.
3. Åström, K.J., and B. Wittenmark. 2011. Computer-Controlled Systems: Theory and Design, 3rd ed. Dover Books on Electrical Engineering. Dover Publications.
4. Barrett Technology Inc. 2011. WAM User Manual. Cambridge, MA.
5. Barros, T.T.T., and W.F. Lages. 2012. Development of a firefighting robot for educational competitions. In Proceedings of the 3rd International Conference on Robotics in Education. Prague, Czech Republic.
6. Barros, T.T.T., and W.F. Lages. 2014. A backstepping non-linear controller for a mobile manipulator implemented in the ROS. In Proceedings of the 12th IEEE International Conference on Industrial Informatics. IEEE Press, Porto Alegre, RS, Brazil.
7. Barros, T.T.T., and W.F. Lages. 2014. A mobile manipulator controller implemented in the robot operating system. In Proceedings of the Joint Conference of the 45th International Symposium on Robotics and the 8th German Conference on Robotics, pp. 121–128. VDE Verlag, Munich, Germany. ISBN 978-3-8007-3601-0.
8. Brockett, R.W. 1982. New Directions in Applied Mathematics. New York: Springer.
9. Campion, G., G. Bastin, and B. D'Andréa-Novel. 1996. Structural properties and classification of kinematic and dynamical models of wheeled mobile robots. IEEE Transactions on Robotics and Automation 12 (1): 47–62.
10. Goodwin, G.C., and K.S. Sin. 1984. Adaptive Filtering, Prediction and Control. Prentice-Hall Information and System Sciences Series. Englewood Cliffs, NJ: Prentice-Hall Inc.
11. Isidori, A. 1995. Nonlinear Control Systems, 3rd ed. Berlin: Springer.
12. Koenig, N., and A.
Howard. 2004. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), vol. 3, pp. 2149–2154. IEEE Press, Sendai, Japan.
13. Lages, W.F. 2016. Implementation of real-time joint controllers. In Robot Operating System (ROS): The Complete Reference (Volume 1), Studies in Computational Intelligence, vol. 625, ed. A. Koubaa, 671–702. Switzerland: Springer International Publishing.
14. Lages, W.F., and E.M. Hemerly. 1998. Smooth time-invariant control of wheeled mobile robots. In Proceedings of the XIII International Conference on Systems Science. Technical University of Wrocław, Wrocław, Poland.
15. Marder-Eppstein, E. 2016. Navigation Stack. http://wiki.ros.org/navigation.
16. Marder-Eppstein, E., E. Berger, T. Foote, B. Gerkey, and K. Konolige. 2010. The office marathon: Robust navigation in an indoor office environment. In 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 300–307. IEEE Press, Anchorage, AK.
17. Narendra, K.S., and A.M. Annaswamy. 1989. Stable Adaptive Systems. Englewood Cliffs, NJ: Prentice-Hall Inc.
18. Nguyen-Tuong, D., and J. Peters. 2011. Model learning for robot control: a survey. Cognitive Processing 12 (4): 319–340. http://dx.doi.org/10.1007/s10339-011-0404-1.
19. Ogata, K. 1970. Modern Control Engineering. Englewood Cliffs, NJ: Prentice-Hall.
20. Popov, V.M. 1973. Hyperstability of Control Systems. Die Grundlehren der mathematischen Wissenschaften, vol. 204. Berlin: Springer.
21. Press, W.H., S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. 1992. Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge: Cambridge University Press.
22. Secchi, H., R. Carelli, and V. Mut. 2003. An experience on stable control of mobile robots. Latin American Applied Research 33 (4): 379–385. http://www.scielo.org.ar/scielo.php?script=sci_arttext&pid=S0327-07932003000400003&nrm=iso.
23. Smith, R. 2005.
Open dynamics engine. http://www.ode.org.
24. Sucan, I.A., and S. Chitta. 2015. MoveIt! http://moveit.ros.org.

Author Biography

Walter Fetter Lages graduated in Electrical Engineering at the Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS) in 1989 and received the M.Sc. and D.Sc. degrees in Electronics and Computer Engineering from the Instituto Tecnológico de Aeronáutica (ITA) in 1993 and 1998, respectively. From 1993 to 1997 he was an assistant professor at the Universidade do Vale do Paraíba (UNIVAP), and from 1997 to 1999 he was an adjunct professor at the Fundação Universidade Federal do Rio Grande (FURG). In 2000 he moved to the Universidade Federal do Rio Grande do Sul (UFRGS), where he is currently a full professor. In 2012/2013 he held a postdoc position at the Universität Hamburg. Dr. Lages is a member of IEEE, ACM, the Brazilian Automation Society (SBA) and the Brazilian Computer Society (SBC).

Online Trajectory Planning in ROS Under Kinodynamic Constraints with Timed-Elastic-Bands

Christoph Rösmann, Frank Hoffmann and Torsten Bertram

Abstract This tutorial chapter provides a comprehensive and extensive step-by-step guide to the ROS setup of a differential-drive as well as a car-like mobile robot with the navigation stack in conjunction with the teb_local_planner package. It covers the theoretical foundations of the TEB local planner, package details, customization and its integration with the navigation stack and the simulation environment. This tutorial is designated for ROS Kinetic running on Ubuntu Xenial (16.04), but the examples and code also work with Indigo and Jade, and will be maintained in future ROS distributions.

1 Introduction

Service robotics and autonomous transportation systems require mobile robots to navigate safely and efficiently in highly dynamic environments to accomplish their tasks.
This observation poses the fundamental challenge in mobile robotics to conceive universal motion planning strategies that are applicable to different robot kinematics, environments and objectives. Online planning is preferred over offline approaches due to its immediate response to changes in a dynamic environment or to perturbations of the robot motion at runtime. In addition to generating a collision-free path towards the goal, online trajectory optimization considers secondary objectives such as control effort, control error, clearance from obstacles, trajectory length and travel time. The authors developed a novel, efficient online trajectory optimization scheme termed Timed-Elastic-Band (TEB) in [1, 2]. The TEB efficiently optimizes the robot trajectory w.r.t. (kino-)dynamic constraints and non-holonomic kinematics while explicitly incorporating temporal information in order to reach the goal pose in minimal time. The approach accounts for efficiency by exploiting the sparsity structure of the underlying optimization problem formulation. In practice, due to limited computational resources, online optimization usually rests upon local optimization techniques for which convergence towards the globally optimal trajectory is not guaranteed. In mobile robot navigation, locally optimal trajectories emerge due to the presence of obstacles.

C. Rösmann (B) · F. Hoffmann · T. Bertram
Institute of Control Theory and Systems Engineering, TU Dortmund University, 44227 Dortmund, Germany
e-mail: [email protected]
F. Hoffmann e-mail: [email protected]
T. Bertram e-mail: [email protected]
© Springer International Publishing AG 2017
A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_7
The original TEB planner is extended in [3] to a fully integrated online trajectory planning approach that combines the exploration and simultaneous optimization of multiple admissible, topologically distinctive trajectories during runtime. The complete integrated approach is implemented as an open-source package teb_local_planner1 within the Robot Operating System (ROS). The package constitutes a local planner plugin for the navigation stack.2 Thus, it takes advantage of the features of the established mobile navigation framework in ROS, such as sharing common interfaces for robot hardware nodes, sensor data fusion and the definition of navigation tasks (by requesting navigation goals). Furthermore, it conforms to the global planning plugins available in ROS. A video that describes the package and its utilization is available online.3 Recently, the package has been extended to accomplish navigation tasks for car-like robots (with Ackermann steering) beyond the originally considered differential-drive robots.4 To our best knowledge, the teb_local_planner is currently the only local planning package for the navigation stack that explicitly supports car-like robots with a limited turning radius. The main features and highlights of the planner are:

• seamless integration with the ROS navigation stack,
• general objectives for optimal trajectory planning, such as time optimality and path following,
• explicit consideration of kino-dynamic constraints,
• applicability to general non-holonomic kinematics, such as car-like robots,
• explicit exploration of distinctive topologies in case of dynamic obstacles,
• computational efficiency for online trajectory optimization.

This chapter covers the following topics:

1. the theoretical foundations of the underlying trajectory optimization method (Sect. 2),
2. description of the ROS package and its integration with the navigation stack (Sect. 3),
3.
package test and parameter exploration for optimization of an example trajectory (Sect. 4),
4. modeling differential-drive and car-like robots for simulation in stage (Sect. 5),
5. finally, the complete navigation setup of the differential-drive robot (Sect. 6) and the car-like robot (Sect. 7).

1 teb_local_planner, URL: http://wiki.ros.org/teb_local_planner.
2 navigation, URL: http://wiki.ros.org/navigation.
3 teb_local_planner, online video, URL: https://youtu.be/e1Bw6JOgHME.
4 teb_local_planner extensions, online video, URL: https://youtu.be/o5wnRCzdUMo.

Fig. 1 Discretized trajectory with n = 3 poses

2 Theoretical Foundations of TEB

This section introduces and explains the fundamental concepts of the TEB optimal planner. It provides the theoretical foundations for its successful utilization and customization in your own applications. For a detailed description of trajectory planning with TEB, the interested reader is referred to [1, 2].

2.1 Trajectory Representation and Optimization

A discretized trajectory b = [s1, ΔT1, s2, ΔT2, ..., ΔTN−1, sN] is represented by an ordered sequence of poses augmented with time stamps. sk = [xk, yk, βk] ∈ R² × S¹ with k = 1, 2, ..., N denotes the pose of the robot, and ΔTk ∈ R>0 with k = 1, 2, ..., N−1 represents the time interval associated with the transition between two consecutive poses sk and sk+1, respectively. Figure 1 depicts an example trajectory with three poses. The reference frame of the trajectory representation, respectively the planning frame, is denoted as map-frame.5 Trajectory optimization seeks a trajectory b∗ that constitutes a minimizer of a predefined cost function. Common cost functions capture criteria such as the total transition time, energy consumption, path length and weighted combinations of those.
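The band b can be pictured as a small data structure of alternating poses and time intervals. The following sketch is purely illustrative (the actual package implements the band in C++, with poses and time differences stored as g2o vertices):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    """A single configuration s_k = [x_k, y_k, beta_k] in R^2 x S^1."""
    x: float
    y: float
    beta: float

@dataclass
class TimedElasticBand:
    """b = [s_1, dT_1, s_2, ..., dT_{N-1}, s_N]."""
    poses: List[Pose]       # N poses
    dts: List[float]        # N-1 strictly positive time intervals

    def total_time(self) -> float:
        # One common objective: the total transition time sum of dT_k
        return sum(self.dts)

# A small band with three poses, mirroring Fig. 1
band = TimedElasticBand(
    poses=[Pose(0.0, 0.0, 0.0), Pose(0.5, 0.1, 0.2), Pose(1.0, 0.4, 0.6)],
    dts=[0.3, 0.3],
)
```

Because the ΔTk are part of the decision variables, minimizing total_time() directly trades spatial deformation against temporal stretching of the band.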
Admissible solutions are restricted to a feasible set for which the trajectory does not intersect with obstacles and conforms to the (kino-)dynamic constraints of the mobile robot. Improving the efficiency of solving such nonlinear programs with hard constraints has become an important research topic over the past decade. The TEB approach includes constraints as soft penalty functions in the overall cost function. The introduction of soft rather than hard constraints enables the exploitation of efficient and well-studied unconstrained optimization techniques for which mature open-source implementations exist.

5 Conventions for names of common coordinate frames in ROS are listed here: http://www.ros.org/reps/rep-0105.html.

The TEB optimization problem is defined such that b∗ minimizes a weighted and aggregated nonlinear least-squares cost function:

$$b^* = \operatorname*{arg\,min}_{b \setminus \{s_1, s_N\}} \sum_i \sigma_i f_i^2(b), \qquad i \in \{\mathcal{J}, \mathcal{P}\} \qquad (1)$$

The terms $f_i : B \to \mathbb{R}_{\geq 0}$ capture conflicting objectives and penalty functions. The set of indices associated with objectives is denoted by $\mathcal{J}$ and the set of indices that refer to penalty functions by $\mathcal{P}$. The trade-off among the individual terms is determined by the weights $\sigma_i$. The notation $b \setminus \{s_1, s_N\}$ indicates that the start pose $s_1 = s_s$ and the goal pose $s_N = s_g$ are fixed and hence not subject to optimization. In the cost function, $s_1$ and $s_N$ are substituted by the current robot pose $s_s$ and the desired goal pose $s_g$.

The TEB optimization problem (1) is represented as a hyper-graph in which the poses $s_k$ and time intervals $\Delta T_k$ denote the vertices of the graph and the individual cost terms $f_i$ define the (hyper-)edges. The term hyper indicates that an edge connects an arbitrary number of vertices, in particular temporally related poses and time intervals. The resulting hyper-graph is efficiently solved by utilizing the g2o-framework6 [4]. The interested reader is referred to [2] for a detailed description of how to integrate the g2o-framework with the TEB approach. The formulation as a hyper-graph benefits from both the direct capture of the sparsity structure for its exploitation within the optimization framework and its modularity, which easily allows the incorporation of additional secondary objectives $f_k$.

Before the individual cost terms $f_k$ are described, the approximation of constraints by penalty functions is introduced. Let B denote the entire set of trajectory poses and time intervals such that $b \in B$. The inequality constraint $g_i(b) \geq 0$ with $g_i : B \to \mathbb{R}$ is approximated by a positive semi-definite penalty function $p_i : B \to \mathbb{R}_{\geq 0}$ which captures the degree of violation:

$$p_i(b) = \max\{0, -g_i(b) + \epsilon\} \qquad (2)$$

The parameter ε adds a margin to the inequality constraint such that the cost only vanishes for $g_i(b) \geq \epsilon$. Combining the indices of the inequality constraints $g_i$, respectively the penalty functions $p_i$, into the set $\mathcal{P}$ results in the overall cost function (1) by assigning $f_i(b) = p_i(b), \forall i \in \mathcal{P}$. It is assumed that the choice of $g_i(b)$ preserves a continuous derivative ($C^1$-differentiability) of $p_i^2(b)$ and that $g_i(b)$ adheres to eligible monotonicity or convexity constraints. In order to guarantee the true compliance of a solution with the constraint $g_i(b) \geq 0$ by means of (2), the corresponding weights in the overall objective function (1) are required to tend towards infinity, $\sigma_i \to \infty, \forall i \in \mathcal{P}$. For a comprehensive introduction to the theory of penalty methods, the reader is referred to [5]. On the other hand, large weights prevent the underlying solver from converging properly, as they cause the optimization problem to become numerically ill-conditioned.

6 libg2o, URL: http://wiki.ros.org/libg2o.

Hence, the
The formulation as hyper-graph benefits from both the direct capture of the sparsity structure for its exploitation within the optimization framework and its modularity which easily allows incorporation of additional secondary objectives f k . Before the individual cost terms f k are described, the approximation of constraints by penalty functions is introduced. Let B denote the entire set of trajectory poses and time intervals such that b ∈ B. The inequality constraint gi (b) ≥ 0 with gi : B → R is approximated by a positive semi-definite penalty function pi : B → R≥0 which captures the degree of violation: pi (b) = max{0, −gi (b) + } The parameter  adds a margin to the inequality constraint such that the cost only vanishes for gi (b) ≥ . Combining indices of inequality constraints gi respectively penalty functions pi into the set P results in the overall cost function (1) by assigning f i (b) = pi (b), ∀i ∈ P. It is assumed that the choice of gi (b) preserves a continuous derivative (C 1 -differentiability) of pi2 (b) and that gi (b) adheres to eligible monotonicity or convexity constraints. In order to guarantee the true compliance of a solution with the constraint gi (b) ≥ 0 by means of (2) the corresponding weights in the overall objective function (1) are required to tend towards infinity σi → ∞, ∀i ∈ P. For a comprehensive introduction to the theory of penalty methods the reader is referred to [5]. On the other hand, large weights prevent the underlying solver to converge properly as they cause the optimization problem to become numerically ill-conditioned. Hence, the 6 libg2o, URL: http://wiki.ros.org/libg2o. Localization Global Map Pose and velocity b∗ Start Obstacle Goal Global Planner Trajectory Optimization t0 Start Odometry v, ω TEB approach / teb local planner Fig. 
2 System setup of a robot controlled by the TEB approach TEB approach compensates the true minimizer with a suboptimal but computationally more efficiently obtained solution with user defined weights and the specification of an additional margin . The ROS implementation provides the parameter penalty_epsilon that specifies . The TEB method utilizes multiple cost terms f i for e.g. obstacle avoidance, compliance with (kino-)dynamic constraints of mobile robots and visiting of via-points. The list of currently implemented cost terms is described in the ROS package description in Sect. 3.3. 2.2 Closed-Loop Control Figure 2 shows the general control architecture with the local TEB planner. The optimization scheme for (1) starts with an initial solution trajectory generated from the path provided by the global planner w.r.t. a static representation of the environment (global map). Instead of a tracking controller which regulates the motion along the planned optimal trajectory b∗ , a predictive control scheme is applied in order to account for dynamic environments encapsulated in the local map and to allow the refinement of the trajectory during runtime. Thus, optimization problem (1) is solved repeatedly w.r.t. the current robot pose and velocity. The current position of the robot is usually obtained from a localization scheme. Within each sampling interval7 only the first control action of the TEB is commanded to the robot, which is the basic idea in model predictive control [6]. As most robots are velocity controlled by their base controllers, low-level hardware interfaces accept translational and angular velocity components w.r.t. the robot base frame. These components are easily extracted from 7 The move_base node (navigation stack) provides a parameter controller_frequency to adjust the sampling interval. the optimal trajectory b∗ by investigating finite differences on the position and orientation part. 
Car-like robots often require the steering angle rather than angular velocity. The corresponding steering angle is calculated from the turn rate and the car-like kinematic model. In order to improve computational efficiency the trajectory optimization pursues a warm start approach. Trajectories generated in previous iterations are reused as initial solutions in subsequent sampling intervals with updated start and goal poses. Since the time differences ΔTk are subject to optimization the resolution of the trajectory is adjusted at each iteration according to an adaptation rule. If the resolution is too high, overly many poses increase the computational load of the optimization. On the other hand, if the resolution is too low, the finite difference approximations of quantities related to the (kino-)dynamic model of the robot are no longer accurate, causing a degradation of navigation capabilities. Therefore, the approach accounts for changing magnitudes of ΔT by regulating the resolution towards a desired temporal discretization ΔTr e f (ROS parameter dt_ref). In case of low resolution ΔTk > ΔTr e f + ΔThyst an additional pose and time interval are filled in between sk and sk+1 . In case of inflated resolution ΔTk < ΔTr e f − ΔThyst pose sk+1 is removed. The hysteresis specified by ΔThyst (ROS parameter dt_hyst) avoids oscillations in the number of TEB states. In case of a static goal pose this adaption implies a shrinking horizon since the overall transition time decreases as the robot advances towards to the goal. Algorithm 1 Online TEB feedback control 1: procedure TebAlgorithm(b, xs , xg , O, V )  Invoked each sampling interval 2: Initialize or update trajectory 3: for all Iterations 1 to Iteb do 4: Adjust length n of the trajectory 5: Build/update hyper-graph incl. association of obstacles O and via-points V with poses of the trajectory 6: b∗ ← CallOptimizer(b)  solve (1), e.g. 
with libg2o 7: Check feasibility return First (sub-) optimal control inputs (v1 , ω1 ) The major steps performed at each sampling interval are captured by Algorithm 1. The loop starting at line 3 is referred to as the outer optimization loop, which adjusts the length of the trajectory as described above and associates the current set of obstacles O and via-points V with their corresponding states sk of the current trajectory. Further information on obstacles and via-points is provided in Sects. 3.3 and 3.5. The loop is repeated Iteb times (ROS parameter no_outer_iterations). The actual solver for optimization problem (1) is invoked in line 6 which itself performs multiple solver iterations. The corresponding ROS parameter for the number of iterations of the inner optimization loop is no_inner_iterations. The choice of these parameters significantly influences the required computation time as well as the convergence properties. After obtaining the optimized trajectory b∗ a feasibility check is performed that verifies if the first M poses actually are collision free based on their original footprint model defined in the navigation stack (note, this is not the footprint model used for optimization as presented in Sect. 3.4). The verification horizon M is represented by the ROS parameter feasibility_check_no_poses. 2.3 Planning in Distinctive Topologies The previously introduced TEB approach and its closed-loop application are subject to local optimization schemes which might cause the robot to get stuck in local minima. Local minima often emerge due to the presence of obstacles. Identifying those local minima coincides with analyzing distinctive topologies between start and goal poses. For instance the robot either chooses the left or right side in order to circumnavigate an obstacle. 
Our TEB ROS implementation investigates the discovery and optimization of multiple trajectories in distinctive topologies and selects the best candidate for control at each sampling interval. The equivalence relation presented in [7] determines whether two trajectories share the same topology. However, the configuration and theory of this extension is beyond the scope of this tutorial. The predefined default parameters are usually appropriate for applications as presented in the following sections. For further details the reader is referred to [3]. 3 The teb_local_planner ROS Package This section provides an overview about the teb_local_planner ROS package which implements the TEB approach for online trajectory optimization as described in Sect. 2. 3.1 Prerequisites and Installation In order to install and configure the teb_local_planner package for a particular application, observe the following limitations and prerequisites: • Although online trajectory optimization approaches pursue mature computational efficiency, their application still consumes substantial CPU resources. Depending on the desired trajectory length respectively resolution as well as the number of considered obstacles, common desktop computers or modern notebooks usually cope with the computational burden. However, older systems and embedded systems might not be capable to perform trajectory optimization at a reasonable rate. • Results and discussions on stability and optimality properties for online trajectory optimization schemes are widespread in the literature, especially in the field of model predictive control. However, since these results are often theoretically and the planner is confronted with e.g. sensor noise and dynamic environments in real applications, finding a feasible and stable trajectory in every conceivable scenario cannot be guaranteed. 
However, the planner tries to detect and resolve failures to generate a feasible trajectory by post-introspection of the optimized trajectory. Its ongoing algorithmic improvement remains subject to further investigation.
• The package currently supports differential-drive, car-like and omnidirectional robots. Since the planner is integrated with the navigation stack as a plugin, it provides a geometry_msgs/Twist message containing the velocity commands for controlling the robot's motion. Since the original navigation stack is not yet intended for car-like robots, the additional recovery behaviors must be turned off and the global planner is expected to provide appropriate plans. However, the default global planners work well for small and medium-sized car-like robots as long as the environment does not contain long and narrow passages whose width is smaller than the length of the vehicle. A conversion to a steering angle has to be applied in case the car-like robot only accepts a steering angle rather than an angular velocity, i.e. if it interprets the geometry_msgs/Twist or ackermann_msgs/AckermannDriveStamped message differently from the nominal convention. The former is directly enabled (see Sect. 6) and the latter requires a dedicated conversion ROS node.
• The oldest officially supported ROS distribution is Indigo. At the time of writing the planner is also available in Jade and Kinetic. Support of future distributions is expected. The package is released for both default and ARM architectures.
• Completion of the common ROS beginner tutorials, e.g. navigating the filesystem, creating and building packages, and dealing with rviz,8 launch files, topics, parameters and yaml files, is essential. Experience with the navigation stack is highly recommended. The user should be familiar with concepts and components of ROS navigation such as local and global costmaps, local and global planners (move_base node), coordinate transforms, odometry and localization.
This tutorial outlines the configuration of a complete navigation setup. However, an in-depth explanation of the underlying concepts is beyond the scope of this chapter. Tutorials on ROS navigation are available at the wiki page2 and in [8].
• Table 1 provides an overview of currently available local planners for the ROS navigation stack and summarizes their main features.
The teb_local_planner is easily installed from the official ROS repositories by invoking in a terminal:

$ sudo apt-get install ros-kinetic-teb-local-planner

8 rviz, URL: http://wiki.ros.org/rviz.

Table 1 Comparison of available local planners in the ROS navigation stack

EBand^a
  Strategy: Force-based path deformation and path-following controller
  Optimality: Shortest path without considering kinodynamic constraints (local solutions)
  Kinematics: Omnidirectional and differential-drive robots
  Computational burden: Medium

TEB
  Strategy: Continuous trajectory optimization resp. predictive controller
  Optimality: Time-optimal (or reference-path fidelity) with kinodynamic constraints (multiple local solutions, parallel optimization)
  Kinematics: Omnidirectional, differential-drive and car-like robots
  Computational burden: High

DWA^b
  Strategy: Sampling-based trajectory generation, predictive controller
  Optimality: Time-sub-optimal with kinodynamic constraints; samples trajectories of constant curvature for prediction (multiple local solutions)
  Kinematics: Omnidirectional and differential-drive robots
  Computational burden: Low/Medium

a eband_local_planner, URL: http://wiki.ros.org/eband_local_planner
b TrajectoryPlannerROS (base_local_planner), URL: http://wiki.ros.org/base_local_planner

The distribution name kinetic might be adapted to match the currently installed one. In the following, terminal commands are usually indicated by a leading $-sign.
As an alternative to the default package installation, recent versions (albeit experimental) can be obtained and compiled from source:

$ cd ~/catkin_ws/src
$ git clone https://github.com/rst-tu-dortmund/teb_local_planner.git --branch kinetic-devel
$ cd ../
$ rosdep install --from-paths src --ignore-src --rosdistro kinetic -y
$ catkin_make

Hereby, it is assumed that ~/catkin_ws points to the user-created catkin workspace.

3.2 Integration with ROS Navigation

The teb_local_planner package seamlessly integrates with the ROS navigation stack since it complies with the interface nav_core::BaseLocalPlanner specified in the nav_core package. Figure 3 shows an overview of the main components that constitute the navigation stack and the move_base node, respectively.9 The move_base node takes care of combining the global and local planner as well as handling costmaps for obstacle avoidance.

9 Adopted from the move_base wiki page, URL: http://wiki.ros.org/move_base.
C. Rösmann et al.

[Fig. 3 ROS navigation component-view including teb_local_planner: the move_base node contains the global planner, the teb_local_planner, the global and local costmaps, an internal nav_msgs/Path handover and the recovery behaviors; it connects to the map_server ("/map"), amcl, sensor transforms ("/tf"), an odometry source, sensor topics (LaserScan/PointCloud) and the base controller ("cmd_vel"); goals arrive on "move_base_simple/goal"]

The amcl node10 provides an adaptive Monte Carlo localization algorithm which corrects the accumulated odometric error and localizes the robot w.r.t. the global map. For the following tutorials the reader is expected to be familiar with the navigation stack components and the corresponding topics. The teb_local_planner package comes with its own parameters which are configurable by means of the parameter server. The full list of parameters is available on the package wiki page, but many of them are presented and described in this tutorial. Parameters are set according to the relative namespace of move_base, e.g.
/move_base/TebLocalPlannerROS/param_name. Most of the parameters can also be configured during runtime with rqt_reconfigure, which is instantiated as follows (assuming a running planner instance):

$ rosrun rqt_reconfigure rqt_reconfigure

Within each local planner invocation (resp. each sampling interval) the teb_local_planner chooses an intermediate virtual goal within a specified lookahead distance on the current global plan. Only the local stretch of the global plan between the current pose and the lookahead point is subject to trajectory optimization by means of Algorithm 1. Hence the lookahead distance implies a receding horizon control strategy, which transitions to a shrinking horizon once the virtual goal coincides with the final goal pose of the global plan. The lookahead distance to the virtual goal is set by the parameter max_global_plan_lookahead_dist, but the virtual goal is never located beyond the boundaries of the local costmap.

10 amcl, URL: http://wiki.ros.org/amcl.

3.3 Included Cost Terms: Objectives and Penalties

The teb_local_planner determines the current control commands by minimizing the cost of the future trajectory w.r.t. the specified cost function (1), which itself consists of aggregated objectives and penalty terms as described in Sect. 2. Currently implemented cost terms fi of the optimization problem (1) are summarized in the following overview, including their corresponding ROS parameters such as the optimization weights σi.

Limiting translational velocity (Penalty)
Description: Constrains the translational velocity vk to the interval [−vback, vmax]. vk is computed from sk, sk+1 and ΔTk using finite differences.
Weight parameter: weight_max_vel_x
Additional parameters: max_vel_x (vmax), max_vel_x_backwards (vback)

Limiting angular velocity (Penalty)
Description: Constrains the angular velocity to |ωk| ≤ ωmax (finite differences).
Weight parameter: weight_max_vel_theta
Additional parameters: max_vel_theta (ωmax)

Limiting translational acceleration (Penalty)
Description: Constrains the translational acceleration to |ak| ≤ amax (finite differences).
Weight parameter: weight_acc_lim_x
Additional parameters: acc_lim_x (amax)

Limiting angular acceleration (Penalty)
Description: Constrains the angular acceleration to |ω˙k| ≤ ω˙max (finite differences).
Weight parameter: weight_acc_lim_theta
Additional parameters: acc_lim_theta (ω˙max)

Compliance with non-holonomic kinematics (Objective)
Description: Minimizes deviations from the geometric constraint that requires two consecutive poses sk and sk+1 to be located on a common arc of constant curvature. Actually, kinematic compliance is not merely an objective but rather an equality constraint. However, since the planner rests upon unconstrained optimization, sufficient compliance is ensured by a large weight.
Weight parameter: weight_kinematics_nh

Limiting the minimum turning radius (Penalty)
Description: Some mobile robots exhibit a non-zero turning radius (e.g. implied by a limited steering angle). In particular, car-like robots are unable to rotate in place. This penalty term enforces rk = vk/ωk ≥ rmin. Differential-drive and unicycle robots can turn in place (rmin = 0).
Weight parameter: weight_kinematics_turning_radius
Additional parameters: min_turning_radius (rmin)

Penalizing backwards motions (Penalty)
Description: This cost term expresses a preference for forward motions, independent of the actual maximum backward velocity vback, in terms of a bias weight. The penalty is deactivated if min_turning_radius is non-zero.
Weight parameter: weight_kinematics_forward_drive

Obstacle avoidance (Penalty)
Description: This cost term maintains a minimum separation dmin of the trajectory from obstacles. A dedicated robot footprint model is taken into account for distance calculation (see Sect. 3.4).
Weight parameter: weight_obstacle
Additional parameters: min_obstacle_dist (dmin)

Via-points (Objective)
Description: This cost term minimizes the distance to via-points, e.g. located along the global plan. Each via-point defines an attractor for the planned trajectory.
Weight parameter: weight_viapoint
Additional parameters: global_plan_viapoint_sep

Arrival at the goal in minimum time (Objective)
Description: This term minimizes ΔTk in order to seek a time-optimal trajectory.
Weight parameter: weight_optimaltime

3.4 Robot Footprint for Optimization

The obstacle avoidance penalty function introduced in Sect. 3.3 depends on a dedicated robot footprint model. The reason for not using the original footprint specified in the navigation stack resp. costmap configuration is to promote efficiency in the optimization while keeping the original footprint for feasibility checks. Since the optimization scheme is subject to a large number of distance calculations between robot and obstacles, the original polygonal footprint would drastically increase the computational load, as each polygon edge has to be taken into account. The user might still duplicate the original footprint model for optimization, but in practice simpler approximations are often sufficient. The current package version provides four different models (see Fig. 4). Parameters for defining the footprint model (as listed below) are defined w.r.t. the robot base frame, e.g. base_link, such that sk defines its origin. An example robot frame is depicted in Fig. 4d. In particular, the four models are:
• Point model: The most efficient representation, in which the robot is modeled as a single point. The robot's radial extension is captured by inflating the minimum distance from obstacles dmin (min_obstacle_dist, see Fig. 4a) by the robot's radius.
• Line model: The line model is ideal for robots whose dimensions differ in the longitudinal and lateral directions.
[Fig. 4 Available footprint models for optimization: (a) point model, (b) line model, (c) two-circles model, (d) polygon model]

Start and end of the underlying line segment, ls ∈ R² and le ∈ R² respectively, are arbitrary w.r.t. the robot's center of rotation sk as origin (0, 0). The robot's radial extension is controlled, similar to the point model, by inflation of dmin (refer to Fig. 4b).
• Two-circle model: The two-circle model is suited for robots that exhibit a cone-shaped footprint rather than a rectangular one (see Fig. 4c). The centers of both circles, cr and cf respectively, are restricted to be located on the robot's x-axis. Their offsets w.r.t. the center of rotation sk and their radii rr and rf are arbitrary.
• Polygon model: The polygon model is the most general one, since the number of edges is arbitrary. Figure 4d depicts a footprint defined by eight vertices v1, v2, ..., v8 ∈ R². The polygon is automatically completed by adding an edge between the first and last vertex.

The following yaml file contains example parameter values for customizing the footprint. All parameters are defined w.r.t. the local planner namespace TebLocalPlannerROS:

TebLocalPlannerROS:
  footprint_model:
    type: "point" # types: "point", "line", "two_circles", "polygon"
    line_start: [-0.3, 0.0] # for type "line"
    line_end: [0.3, 0.0] # for type "line"
    front_offset: 0.2 # for type "two_circles"
    front_radius: 0.2 # for type "two_circles"
    rear_offset: 0.2 # for type "two_circles"
    rear_radius: 0.2 # for type "two_circles"
    vertices: [ [-0.1,0.2], [0.2,0.2], [0.2,-0.2], [-0.1,-0.2] ] # for type "polygon"

3.5 Obstacle Representations

The teb_local_planner takes the local costmap into account as already stated in Sect. 3.2. The costmap mainly consists of a grid in which each cell stores an 8-bit cost value that determines whether the cell is free (0), unknown, undesired or occupied (255).
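As an aside to the footprint models above: the line model reduces each obstacle check to a point-to-segment distance. A self-contained sketch of that computation (illustrative Python, not the package's actual C++ routine):

```python
import math

def dist_point_to_segment(p, a, b):
    """Euclidean distance between point p and segment a-b (2D tuples).
    For the line footprint model, the trajectory must keep every obstacle
    point at least min_obstacle_dist (inflated by the robot radius) away
    from this segment."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:                 # degenerate segment: point model
        return math.hypot(px - ax, py - ay)
    # project p onto the segment and clamp the parameter to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy     # closest point on the segment
    return math.hypot(px - cx, py - cy)

# line footprint from the yaml example above: [-0.3, 0.0] to [0.3, 0.0]
print(dist_point_to_segment((0.5, 0.4), (-0.3, 0.0), (0.3, 0.0)))
```

For the point model the same computation degenerates to a single point-to-point distance, which is why it is the cheapest option.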
Besides the ability to implement multiple layers and to fuse data from different sensor sources, the costmap is well suited for local planners in the navigation stack due to their sampling-based nature. In contrast, the TEB optimization problem (1) cannot simply test discrete cell states for collisions inside its cost function, but rather requires continuous functions based on the distance to obstacles. Therefore, our implementation extracts relevant obstacles from the current costmap at the beginning of each sampling interval and considers each occupied cell as a single dimensionless point-shaped obstacle. Hence, the computation time strongly depends on the local costmap size and resolution (see Sect. 6). Additionally, custom obstacles can be provided via a dedicated topic. The teb_local_planner supports obstacle representations in terms of points, lines and closed polygons. A costmap conversion11 might be activated in order to convert costmap cells into primitive types such as lines and polygons in a separate thread. These extensions are beyond the scope of this introductory tutorial; the interested reader is referred to the package wiki page. Once obstacles are extracted and cost terms (hyper-edges) according to Sect. 3.3 are constructed, obstacles are associated with the discrete poses sk of the trajectory in order to maintain a minimal separation. In order to reduce the time spent by the solver on evaluating the cost function multiple times during optimization, each pose sk is only associated with its nearest obstacles. The association is renewed at every outer iteration (refer to Algorithm 1) in order to correct vague associations during convergence. For advanced parameters of the association strategy the reader is referred to the teb_local_planner ROS wiki page.
11 costmap_converter, URL: http://wiki.ros.org/costmap_converter.
Figure 5 depicts an example planning scenario in which the three
closest poses are associated with the polygonal obstacle.

[Fig. 5 Association between poses and obstacles: minimum distances d4, d5 and d6 between the footprint model at poses s4, s5 and s6 and the obstacle; time difference ΔT2 at pose s2]

Notice that the minimum distances d4, d5 and d6 from the robot footprints located at s4, s5 and s6 are constrained to min_obstacle_dist. The minimum distance should account for an additional safety margin around the robot, since the penalty functions cannot guarantee its strict fulfillment and small violations might cause a rejection of the trajectory by the feasibility check (refer to Sect. 2).

4 Testing Trajectory Optimization

Before starting with the configuration of the teb_local_planner for a particular robot and application, we recommend that readers familiarize themselves with the optimization process and check the performance on the target hardware. The package includes a simple test node (test_optim_node) that optimizes a trajectory between a fixed start and goal pose. Some obstacles are included as interactive markers12 which are conveniently moved via the GUI in rviz. Launch the test_optim_node in combination with a preconfigured rviz node as follows:

$ roslaunch teb_local_planner test_optim_node.launch

An rviz window should open, showing the trajectory and obstacles. Select the menu button Interact in order to move the obstacles around. An example setting is depicted in Fig. 6a. As briefly stated in Sect. 2, the package generates and optimizes trajectories in different topologies in parallel. The currently selected trajectory for navigation is augmented with red pose arrows in the visualization.

12 interactive_markers, URL: http://wiki.ros.org/interactive_markers.

[Fig. 6 Testing trajectory optimization with test_optim_node: (a) test_optim_node and its visualization, (b) rqt_reconfigure window]

In order to change parameters during runtime, invoke

$ rosrun rqt_reconfigure rqt_reconfigure

in a new terminal window and select test_optim_node from the list of available nodes (refer to Fig. 6b).
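The safety-margin advice above stems from the soft-constraint nature of the penalty terms (Sect. 3.3). A minimal sketch of such a one-sided penalty; the linear shape and the epsilon margin are our assumptions for illustration, not the package's exact implementation:

```python
def penalty_below(value, lower_bound, epsilon=0.05):
    """One-sided soft penalty: zero while value stays above
    lower_bound + epsilon, growing linearly once it drops below.
    Conceptually used to keep e.g. the obstacle distance above
    min_obstacle_dist: a small violation yields a small cost instead of
    infeasibility, which is why an extra safety margin is advisable."""
    return max(0.0, (lower_bound + epsilon) - value)

print(penalty_below(0.30, 0.25))  # satisfied including margin: zero cost
print(penalty_below(0.20, 0.25))  # violated: positive cost
```

Because the optimizer only trades such costs against each other, a trajectory can end up slightly closer to an obstacle than min_obstacle_dist and still be returned, to be caught later by the feasibility check.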
Try to customize the optimization with different parameter settings. Since some parameters significantly influence the optimization result, adjustments should be performed in small steps. In case you encounter poor performance on your target system even with the default settings, try to decrease the parameters no_inner_iterations and no_outer_iterations, or increase dt_ref slightly.

5 Creating a Mobile Robot in the Stage Simulator

This section introduces a minimal stage13 simulation setup with a differential-drive and a car-like robot. stage is chosen for this tutorial since it is commonly used in ROS tutorials and is thus expected to be available in future ROS distributions as well. Furthermore, stage is fast and lightweight in terms of visualization, which allows its execution even on slow CPUs and older graphics cards. It supports kinematic models for differential-drive, car-like and holonomic robots, but it is not intended for dynamic simulations such as those performed by Gazebo.14 However, even if the following sections refer to a stage model, the procedures and configurations are directly applicable to other simulation environments or a real mobile robot without major modifications.

13 stage_ros, URL: http://wiki.ros.org/stage_ros.
14 gazebo_ros_pkgs, URL: http://wiki.ros.org/gazebo_ros_pkgs.

Note that stage (resp. stage_ros) publishes the coordinate transformation between odom and base_link as well as the odometry information on the odom topic. It subscribes to velocity commands on the topic cmd_vel. Make sure to install stage for your particular ROS distribution:

$ sudo apt-get install ros-kinetic-stage-ros

In order to gather all configuration and launch files that will be created during this tutorial, a new package is initiated as follows:

$ cd ~/catkin_ws/src
$ catkin_create_pkg teb_tutorial

It is good practice to add teb_local_planner and stage_ros to the run dependencies of your newly created package (check the new package.xml file).
Now, create a stage and a maps folder inside the teb_tutorial package and download a predefined map called maze.png15 and its yaml configuration for the map_server:

$ roscd teb_tutorial
$ mkdir stage && mkdir maps
$ cd maps
# remove any whitespaces in the URLs below after copying
$ wget https://cdn.rawgit.com/rst-tu-dortmund/teb_local_planner_tutorials/rosbook/maps/maze.png
$ wget https://cdn.rawgit.com/rst-tu-dortmund/teb_local_planner_tutorials/rosbook/maps/maze.yaml

5.1 Differential-Drive Robot

Stage loads its environment from world files that define a static map and agents such as your robot (in plain text format). In the following, we add a world file to the teb_tutorial package which loads the map maze.png and spawns a differential-drive robot. The robot is assumed to be represented as a box (0.25 m × 0.25 m × 0.4 m). Whenever text files are edited, the editor gedit is utilized (sudo apt-get install gedit), but you might employ the editor of your choice:

$ roscd teb_tutorial/stage
$ gedit maze_diff_drive.world # or use the editor of your choice

The second command creates and opens a new file with gedit. Add the following code and save the contents to the file maze_diff_drive.world:

15 Borrowed from the turtlebot_stage package: http://wiki.ros.org/turtlebot_stage.
## Simulation settings
resolution 0.02
interval_sim 100 # simulation timestep in milliseconds
## Load a static map
model(
  name "maze"
  bitmap "../maps/maze.png"
  size [ 10.0 10.0 2.0 ]
  pose [ 5.0 5.0 0.0 0.0 ]
  color "gray30"
)
## Definition of a laser range finder
define mylaser ranger(
  sensor(
    range_max 6.5 # maximum range
    fov 58.0 # field of view
    samples 640 # number of samples
  )
  size [ 0.06 0.15 0.03 ]
)
## Spawn robot
position(
  name "robot"
  size [ 0.25 0.25 0.40 ] # (x,y,z)
  drive "diff" # kinematic model of a differential-drive robot
  mylaser(pose [ -0.1 0.0 -0.11 0.0 ]) # spawn laser sensor
  pose [ 2.0 2.0 0.0 0.0 ] # initial pose (x,y,z,beta[deg])
)

General simulation settings are defined in lines 1–3. Afterwards, the static map is defined using a stage model object. The size property is important in order to define the transformation between pixels of maze.png and their actual dimensions in the world. The bitmap is shifted by the offset defined in pose in order to adjust the bitmap position relative to the map frame. Simulated robots in this tutorial are equipped with a laser range finder, set up in lines 12–20. Finally, the robot itself is set up in lines 22–28. The code specifies a differential-drive robot (drive "diff") with the previously defined laser scanner attached, as well as the desired box size and the initial pose. The base_link frame is automatically located in the geometric center of the box specified by the size parameter, which in this case coincides with the center of rotation. A case in which the origin must be corrected occurs for the car-like model (see Sect. 5.2). In order to test the created robot model, invoke the following commands in a terminal:

$ roscore
$ rosrun stage_ros stageros `rospack find teb_tutorial`/stage/maze_diff_drive.world
[Fig. 7 Dimensions of the car-like robot for simulation: 2D contour w.r.t. the base_link frame (xr, yr), with the center of rotation, geometric center, stage box origin and wheelbase indicated]

5.2 Car-Like Robot

In this section you generate a second world file that spawns a car-like robot. The 2D contours of the car-like robot are depicted in Fig. 7 (gray-colored). For the purpose of this tutorial, only the bounding box of the robot is considered in the simple stage model. The length in yr is increased slightly in order to account for the steerable front wheels. The top-left and bottom-right corners are located at vtl = [−0.1, 0.125]T m and vbr = [0.5, −0.125]T m respectively, w.r.t. the robot's base frame base_link (defined by the xr- and yr-axis). For car-like robots with steered front wheels the center of rotation coincides with the center of the rear axle. Since the TEB approach assumes a unicycle model for planning, but with additional constraints for car-like robots such as a minimum turning radius, the robot's base frame must be placed at the center of rotation in order to fulfill this relation. The next step consists of duplicating the previous world file from Sect. 5.1 and modifying the robot model according to Fig. 7:

$ roscd teb_tutorial/stage
$ cp maze_diff_drive.world maze_carlike.world # duplicate the diff-drive world
$ gedit maze_carlike.world # or use the editor of your choice

Replace the robot model (lines 21–28) with the following one and save the file:

## Spawn robot
position(
  name "robot"
  size [ 0.6 0.25 0.40 ] # (x,y,z) - bounding box of the robot
  origin [ 0.2 0.0 0.0 0.0 ] # correct center of rotation (x,y,z,beta)
  drive "car" # kinematic model of a car-like robot
  wheelbase 0.4 # distance between rear and front axles
  mylaser(pose [ -0.1 0.0 -0.11 0.0 ]) # spawn laser sensor
  pose [ 2.0 2.0 0.0 0.0 ] # initial pose (x,y,z,beta[deg])
)

Notice that the kinematic model is changed to the car-like one (drive "car"). The parameter wheelbase denotes the distance between the rear and front axles (see Fig. 7). The size of the robot's bounding box is set in line 4 w.r.t. the box origin as depicted in Fig. 7.
Stage automatically defines the center of rotation at the geometric center, which is located at [0.3, 0.125]T m w.r.t. the box origin. In order to move the center of rotation to the correct location [0.1, 0.125]T m w.r.t. the box origin, the frame is shifted as specified by the parameter origin. Load and inspect your robot model in stage for testing purposes:

$ roscore
$ rosrun stage_ros stageros `rospack find teb_tutorial`/stage/maze_carlike.world

The robot is controlled via a geometry_msgs/Twist message even though the actual kinematics are those of a car-like robot. In contrast to the differential-drive robot, however, the angular velocity (yaw rate, around the z-axis) is interpreted as a steering angle rather than as a true velocity component.

6 Planning for a Differential-Drive Robot

This section covers the complete navigation setup with the teb_local_planner for the differential-drive robot defined in Sect. 5.1. Start by creating configuration files for the global and local costmap (refer to Fig. 3). In the following, configuration files are stored in a separate cfg folder inside your teb_tutorial package, which was created during the steps in Sect. 5.
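The conversion between an angular velocity and a steering angle (relevant for stage's car model and for real Ackermann robots, cf. Sect. 3.1) follows from the kinematic relation φ = atan(L·ω/v) for wheelbase L. A sketch of such a conversion (the helper name is ours; a real conversion node would wrap this in a subscriber/publisher pair):

```python
import math

def twist_to_steering_angle(v, omega, wheelbase=0.4):
    """Convert (v, omega) from a geometry_msgs/Twist into the front-wheel
    steering angle of a car-like robot with the given wheelbase [m].
    Derivation: turning radius r = v / omega and tan(phi) = wheelbase / r,
    hence phi = atan(wheelbase * omega / v)."""
    if v == 0.0:
        # steering is undefined at standstill; keep the wheels straight
        return 0.0
    return math.atan(wheelbase * omega / v)

# e.g. driving at 0.4 m/s while commanding 1.0 rad/s:
print(math.degrees(twist_to_steering_angle(0.4, 1.0)))
```

Note that this also shows why min_turning_radius matters: the maximum steering angle of the vehicle bounds ω/v, i.e. the curvature the planner may command.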
Create a costmap_common_params.yaml file which contains parameters for both the global and local costmap:

$ roscd teb_tutorial
$ mkdir cfg && cd cfg
$ gedit costmap_common_params.yaml

Now insert the following lines and save the file afterwards:

# file: costmap_common_params.yaml
# Make sure to preserve indentation if copied (for all yaml files)
footprint: [ [-0.125,0.125], [0.125,0.125], [0.125,-0.125], [-0.125,-0.125] ]
transform_tolerance: 0.5
map_type: costmap
global_frame: /map
robot_base_frame: base_link

obstacle_layer:
  enabled: true
  obstacle_range: 3.0
  raytrace_range: 4.0
  track_unknown_space: true
  combination_method: 1
  observation_sources: laser_scan_sensor
  laser_scan_sensor: {data_type: LaserScan, topic: scan, marking: true, clearing: true}

inflation_layer:
  enabled: true
  inflation_radius: 0.5

static_layer:
  enabled: true

Online Trajectory Planning in ROS Under Kinodynamic …

The robot footprint is specified according to the projection of the dimensions of the robot (0.25 m × 0.25 m × 0.4 m) introduced in Sect. 5.1 onto the x-y-plane. The footprint must be defined in the base_link frame, whose center coincides with the center of rotation. In this tutorial the selected map_type is costmap, which creates an internal 2D grid. If the robot is equipped with 3D range sensors, it is often desired to include the height of obstacles. This allows for ignoring obstacles beyond a specific height or tiny obstacles above the ground floor. For this purpose, the costmap_2d16 package also supports voxel grids. Refer to the costmap_2d wiki page for further information. The obstacle layer is defined in lines 9–16; it incorporates external sensors such as our laser range finder via ray-tracing. In this tutorial the laser range finder is expected to publish its range data on the topic scan. An inflation layer adds exponentially decreasing cost to cells w.r.t. their distance from actual (lethal) obstacles.
This allows the user to express a preference for maintaining a larger separation from obstacles whenever possible. Although the teb_local_planner only extracts lethal obstacles from the costmap as described in Sect. 3.5 and ignores inflation, an activated inflation layer still influences the global planner and thus the location of virtual goals for local planning (refer to Sect. 3.2 for the description of virtual goals). Consequently, a non-zero inflation_radius moves virtual goals further away from (static) obstacles. Finally, the static layer includes obstacles from the static map, which are retrieved from the map topic by default. The map maze.png is published later by the map_server node. After saving and closing the file, specific configurations for the global and local costmap are created:

$ roscd teb_tutorial/cfg
$ gedit global_costmap_params.yaml # insert content, save and close
$ gedit local_costmap_params.yaml # insert content, save and close

16 costmap_2d, URL: http://wiki.ros.org/costmap_2d.

The default content for global_costmap_params.yaml is listed below:

# file: global_costmap_params.yaml
global_costmap:
  update_frequency: 1.0
  publish_frequency: 0.5
  static_map: true
  plugins:
    - {name: static_layer, type: "costmap_2d::StaticLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

The global costmap is intended to be static, which means its size is inherited from the map provided by the map_server node (notice the loaded static layer plugin which was defined in costmap_common_params.yaml). The previously defined inflation layer is added as well.
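The "exponentially decreasing" inflation cost mentioned above follows, in costmap_2d, a decay of the form cost = 252 · exp(−cost_scaling_factor · (d − inscribed_radius)). A sketch of that decay (the formula and the default scaling factor should be verified against the costmap_2d documentation; the inscribed radius of 0.18 m corresponds to our robot):

```python
import math

def inflation_cost(distance, inscribed_radius=0.18, cost_scaling_factor=10.0):
    """Cost of a cell at `distance` [m] from the nearest lethal obstacle,
    following costmap_2d's exponential decay (254 = lethal obstacle,
    253 = definitely in collision for a circumscribed robot)."""
    if distance <= 0.0:
        return 254                      # lethal obstacle cell
    if distance <= inscribed_radius:
        return 253                      # inside the inscribed radius
    return int(252 * math.exp(-cost_scaling_factor * (distance - inscribed_radius)))

for d in (0.0, 0.1, 0.25, 0.5):
    print(d, inflation_cost(d))
```

A larger cost_scaling_factor makes the cost drop off faster, so the global planner hugs obstacles more closely, moving the virtual goals accordingly.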
The content of local_costmap_params.yaml is as follows:

# file: local_costmap_params.yaml
local_costmap:
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: false
  rolling_window: true
  width: 5.5      # -> computation time: teb_local_planner
  height: 5.5     # -> computation time: teb_local_planner
  resolution: 0.1 # -> computation time: teb_local_planner
  plugins:
    - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}

It is highly recommended to define the local costmap as a rolling window in medium-sized or large environments, since otherwise the implied huge number of obstacles might lead to an intractable computational load. The rolling window is specified by its width, height and resolution. These parameters have a significant impact on the computation time of the planner. The size should not exceed the local sensor range, and it is often sufficient to set the width and height to values of approx. 5–6 m. The resolution determines the discretization granularity, i.e. how many grid cells are allocated to represent the rolling window. Since each occupied cell is treated as a single obstacle by default (see Sect. 3.5), a small value (i.e. a high resolution) implies a huge number of obstacles and therefore long computation times. On the other hand, the resolution must be fine enough to cope with small obstacles, narrow hallways and door passages. Finally, the previously defined obstacle layer is activated in order to incorporate dynamic obstacles obtained from the laser range finder.
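The impact of the rolling-window settings on the number of potential point obstacles can be estimated directly (a rough upper bound, since only occupied cells become obstacles):

```python
def max_costmap_cells(width, height, resolution):
    """Upper bound on the number of point obstacles the planner may
    extract from the rolling window: one per occupied grid cell."""
    return int(round(width / resolution)) * int(round(height / resolution))

# settings from local_costmap_params.yaml above:
print(max_costmap_cells(5.5, 5.5, 0.1))   # 3025 cells at 0.1 m resolution
print(max_costmap_cells(5.5, 5.5, 0.05))  # 12100 cells: halving the
                                          # resolution quadruples the count
```

This quadratic growth is why shrinking the window or coarsening the resolution is usually the first remedy for excessive planner computation times.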
Prior to generating the overall launch file, a configuration file for the local planner is created:

$ roscd teb_tutorial/cfg
$ gedit teb_local_planner_params.yaml

The content of teb_local_planner_params.yaml is listed below:

# file: teb_local_planner_params.yaml
TebLocalPlannerROS:

  # Trajectory
  dt_ref: 0.3
  dt_hysteresis: 0.1
  global_plan_overwrite_orientation: True
  allow_init_with_backwards_motion: False
  max_global_plan_lookahead_dist: 3.0
  feasibility_check_no_poses: 3

  # Robot
  max_vel_x: 0.4
  max_vel_x_backwards: 0.2
  max_vel_theta: 0.3
  acc_lim_x: 0.5
  acc_lim_theta: 0.5
  min_turning_radius: 0.0 # diff-drive robot (can turn in place!)
  footprint_model:
    type: "point" # include robot radius in min_obstacle_dist

  # Goal Tolerance
  xy_goal_tolerance: 0.2
  yaw_goal_tolerance: 0.1

  # Obstacles
  min_obstacle_dist: 0.25
  costmap_obstacles_behind_robot_dist: 1.0
  obstacle_poses_affected: 10

  # Optimization
  no_inner_iterations: 5
  no_outer_iterations: 4

For the sake of readability, only a small subset of the available parameters is defined here. Feel free to add other parameters, e.g. after determining suitable parameter sets with rqt_reconfigure. In this example configuration, the point footprint model is chosen for optimization (parameter footprint_model). The circumscribed radius R of the robot defined in Sect. 5.1 follows from simple geometry: R = 0.5·√(0.25² + 0.25²) m ≈ 0.18 m. In order to compensate possible small distance violations due to the penalty terms, the parameter min_obstacle_dist is set to 0.25 m. The parameter costmap_obstacles_behind_robot_dist specifies for how many meters the local costmap portion behind the robot is taken into account when extracting obstacles from cells. After all related configuration files have been created, the next step consists of defining the launch file that starts all nodes required for navigation and includes the configuration files.
Launch files are usually added to a subfolder called launch:

```shell
$ cd ~/catkin_ws/src/teb_tutorial/
$ mkdir launch && cd launch
$ gedit robot_in_stage.launch # create a new launch file
```

Fill the newly created launch file with the components described in the following. After activating simulation time, the stage_ros node is loaded. The path to the world file previously created in Sect. 5.1 is forwarded as an additional argument. The command $(find teb_tutorial) automatically searches for the teb_tutorial package path in your workspace. Since stage_ros publishes the simulated laser range data on topic base_scan, but the costmap is configured to listen on scan, a remapping is performed here. Afterwards, the core navigation node move_base is loaded. All costmap and planner parameters are included relative to the move_base namespace. Some additional parameters are defined, such as which local and global planner plugins are to be loaded. The selected global planner is commonly used in ROS. Parameters planner_frequency and planner_patience define the rate (in Hz) at which the global planner is invoked and how long (in seconds) the planner waits without finding a valid plan before backup operations are performed, respectively. Similar settings are applied to the local planner with parameters controller_frequency and controller_patience. We also specify teb_local_planner/TebLocalPlannerROS as the local planner plugin. The two final nodes are the map_server node, which provides the maze map, and the amcl node for adaptive Monte Carlo localization. The latter corrects odometry errors of the robot by providing and adjusting the transformation between map and odom. amcl requires an initial pose, which is set to the actual robot pose as defined in the stage world file. All other parameters are kept at their default settings in this tutorial. Congratulations, the initial navigation setup is now complete. Further parameter adjustments are easily integrated into the configuration and launch files.
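Collecting the steps described above, a launch file might look as follows. This is a sketch assembled from the description: the world and map file names, the initial pose values and the planner frequencies are assumptions not stated in the text and should be adapted to your setup.

```xml
<!-- Sketch of robot_in_stage.launch; values marked in the lead-in are assumptions. -->
<launch>
  <param name="use_sim_time" value="true"/>

  <!-- Stage simulator with the world file from Sect. 5.1;
       remap the simulated laser scan from base_scan to scan -->
  <node pkg="stage_ros" type="stageros" name="stageros"
        args="$(find teb_tutorial)/stage/maze_diff_drive.world">
    <remap from="base_scan" to="scan"/>
  </node>

  <!-- Core navigation node with costmap and planner configurations -->
  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <rosparam file="$(find teb_tutorial)/cfg/costmap_common_params.yaml"
              command="load" ns="global_costmap"/>
    <rosparam file="$(find teb_tutorial)/cfg/costmap_common_params.yaml"
              command="load" ns="local_costmap"/>
    <rosparam file="$(find teb_tutorial)/cfg/local_costmap_params.yaml" command="load"/>
    <rosparam file="$(find teb_tutorial)/cfg/global_costmap_params.yaml" command="load"/>
    <rosparam file="$(find teb_tutorial)/cfg/teb_local_planner_params.yaml" command="load"/>
    <param name="base_global_planner" value="global_planner/GlobalPlanner"/>
    <param name="base_local_planner" value="teb_local_planner/TebLocalPlannerROS"/>
    <param name="planner_frequency" value="1.0"/>
    <param name="controller_frequency" value="5.0"/>
  </node>

  <!-- Map server providing the maze map -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(find teb_tutorial)/maps/maze.yaml"/>

  <!-- AMCL localization; initial pose matching the stage world file -->
  <node pkg="amcl" type="amcl" name="amcl">
    <param name="initial_pose_x" value="2.0"/>
    <param name="initial_pose_y" value="2.0"/>
    <param name="initial_pose_a" value="0.0"/>
  </node>
</launch>
```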
Start your launch file in order to test the overall scheme:

```shell
$ roslaunch teb_tutorial robot_in_stage.launch
```

Ideally, a stage window appears, no error messages occur and move_base prints "odom received!". Afterwards, open a new terminal and start rviz for visualization and for sending navigation goals:

```shell
$ rosrun rviz rviz
```

Make sure to set the fixed frame to map. Add relevant displays by clicking on the Add button. You can easily show all available displays by selecting the tab By topic. Select the following displays:

• from /map/: Map
• from /move_base/TebLocalPlannerROS/: global_plan/Path, local_plan/Path, teb_markers/Marker and teb_poses/PoseArray
• from global_costmap/: costmap/Map
• from local_costmap/: costmap/Map and footprint/Polygon (we do not have a robot model to display, so use the footprint).

Now specify a desired goal pose using the 2D Nav Goal button in order to start navigation. In the following, some parameters are modified during runtime. Keep everything running and open a new terminal:

```shell
$ rosrun rqt_reconfigure rqt_reconfigure
```

Select move_base/TebLocalPlannerROS in the menu in order to list all parameters that can be changed online. Now increase min_obstacle_dist to 0.5 m, which is larger than the door widths in the map (the robot then assumes a free space of 2 × 0.5 m). Move the robot through a door and observe what happens. The behavior should be similar to the one shown in Fig. 8a. The planner still tries to plan through the door according to the global plan, since this solution constitutes a local minimum: from the optimization point of view, the distance of each pose to the obstacles is penalized, such that poses are moved apart along the trajectory in both directions (introducing the gap). However, min_obstacle_dist is chosen such that a door passage cannot be realized. As a consequence, the robot collides. After testing, reset the parameter back to 0.25 m.
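The collision observed above follows from simple geometry: with a point footprint model, a passage is only feasible if it is wider than twice min_obstacle_dist, since the clearance is kept on both sides. The door width below is an illustrative assumption, not a value from the map:

```python
# Feasibility check for a passage given the obstacle separation kept on
# both sides of the trajectory (point footprint model assumed).
def door_passable(door_width_m, min_obstacle_dist_m):
    return door_width_m > 2.0 * min_obstacle_dist_m

print(door_passable(0.9, 0.25))  # True  -> the tutorial default works
print(door_passable(0.9, 0.5))   # False -> robot gets stuck / collides
```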
The following task involves configuring the trade-off between time optimality and global path following. Activate the via-points objective function by increasing the parameter global_plan_viapoint_sep to 0.5.

Fig. 8 Testing navigation with the differential-drive robot: (a) improper parameter value for min_obstacle_dist; (b) via-points added along the global plan

Command a new navigation goal and observe the new blue quadratic markers along the global plan (see Fig. 8b). Via-points are generated every 0.5 m (according to the parameter value global_plan_viapoint_sep). Each via-point constitutes an attractor for the trajectory during optimization. Obviously, the trajectory still keeps a certain distance to some via-points, as shown in Fig. 8b. The optimizer minimizes the weighted sum of both objectives: time optimality and reaching via-points. By increasing the via-point optimization weight weight_viapoint (via rqt_reconfigure) and commanding new navigation goals, you might recognize that the robot increasingly tends to prefer the original global plan over the fastest trajectory. Note that an excessive optimization weight for via-points might cause the obstacle cost to become negligible in comparison to the via-point cost; in that case, avoiding dynamic obstacles no longer works properly. A suitable parameter setting for a particular application is best determined in simulation.

7 Planning for a Car-Like Robot

This section describes how to configure the car-like robot defined in Sect. 5.2 for simulation with stage. The steps for setting up the differential-drive robot as described in the previous section must be completed beforehand, in order to avoid redundant explanations.
Create a copy of the teb_local_planner_params.yaml file for the new car-like robot:

```shell
$ roscd teb_tutorial/cfg
$ cp teb_local_planner_params.yaml teb_local_planner_params_carlike.yaml
$ gedit teb_local_planner_params_carlike.yaml
```

Change the robot section according to the following snippet:

```yaml
# file: teb_local_planner_params_carlike.yaml
# Robot
max_vel_x: 0.4
max_vel_x_backwards: 0.2
max_vel_theta: 0.3
acc_lim_x: 0.5
acc_lim_theta: 0.5
min_turning_radius: 0.5        # we have a car-like robot!
wheelbase: 0.4                 # wheelbase of our robot
cmd_angle_instead_rotvel: True # angle instead of the rotvel for stage
weight_kinematics_turning_radius: 1 # increase, if the penalty for min_turning_radius is not sufficient
footprint_model:
  type: "line"
  line_start: [0.0, 0.0] # include robot expanse in min_obstacle_dist
  line_end: [0.4, 0.0]   # include robot expanse in min_obstacle_dist
```

Parameter min_turning_radius is non-zero in contrast to the differential-drive robot configuration. The steering angle φ of the front wheels of the robot is limited to ±40° (≈ ±0.7 rad). From trigonometry, the relation between the turning radius r and the steering angle φ is given by r = L / tan φ [9]. Hereby, L denotes the wheelbase. Evaluating the expression with φ = 0.7 rad and L = 0.4 m reveals a minimum turning radius of 0.47 m. Due to the penalty terms, it is rounded up to 0.5 m for the parameter min_turning_radius. Since move_base provides a geometry_msgs/Twist message containing the linear and angular velocity commands v and ω, respectively, the signals must be transformed for a robot base driver that only accepts the linear velocity v and a steering angle φ. Since the turning radius is expressed by r = v/ω, the relation to the steering angle φ follows immediately: φ = atan(Lω/v). The case v = 0 is treated separately, e.g. by keeping the previous angle or by setting the steering wheels to their default position.
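The conversion described above can be sketched as a small helper function; the wheelbase value is taken from the configuration, while the default-angle handling for v = 0 is one of the two options mentioned in the text:

```python
import math

# Convert move_base velocity commands (v, omega) to a steering angle
# for a car-like robot: r = v / omega, hence phi = atan(L / r)
#                                              = atan(L * omega / v).
L = 0.4  # wheelbase [m], as configured above

def steering_angle(v, omega, phi_default=0.0):
    if v == 0.0:
        # Special case: set the steering wheels to a default position
        # (alternatively, keep the previously commanded angle).
        return phi_default
    return math.atan(L * omega / v)

print(round(steering_angle(0.4, 0.3), 3))  # 0.291 (rad)
print(steering_angle(0.0, 0.3))            # 0.0 (default position)
```

Note that for v = 0.4 m/s and the minimum turning radius of 0.5 m, ω = v/r = 0.8 rad/s yields φ = atan(0.4 · 0.8 / 0.4) ≈ 0.67 rad, just inside the ±0.7 rad steering limit.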
For robots accepting an ackermann_msgs/AckermannDriveStamped message type, a simple converter node/script is added to communicate and map between move_base and the base driver. As described in Sect. 5.2, stage requires the default geometry_msgs/Twist type but with changed semantics: the angular velocity component is interpreted as the steering angle. The teb_local_planner already provides the automatic conversion for this type of interface via the parameter cmd_angle_instead_rotvel. The steering angle φ is set to zero in case of zero linear velocity (v = 0 m/s). The footprint model is redefined for the rectangular robot (according to Sect. 5.2). The line model is recommended for rectangular-shaped robots. Instead of defining the line over the complete length (−0.1 m ≤ x_r ≤ 0.5 m), 0.1 m is subtracted at each end in order to account for the robot's expansion along the y_r-axis, since this value is added to the parameter min_obstacle_dist, similar to the differential-drive robot in Sect. 6. With some additional margin, min_obstacle_dist = 0.25 m should perform well, such that the parameter remains unchanged w.r.t. the previous configuration. Create a new launch file in order to test the modified configuration:

```shell
$ roscd teb_tutorial/launch
$ cp robot_in_stage.launch carlike_robot_in_stage.launch
$ gedit carlike_robot_in_stage.launch
```

Two modifications are required: the stage_ros node must now load the maze_carlike.world file, and the local planner parameter configuration file must be replaced by the car-like version. An additional parameter clearing_rotation_allowed is set to false in order to deactivate recovery behaviors which require the robot to turn in place. Close any previous ROS nodes and terminals and start the car-like robot simulation:

```shell
$ roslaunch teb_tutorial carlike_robot_in_stage.launch
```

If no errors occur, navigate your robot through the environment. Again, run rviz for visualization with all displays configured in Sect.
5.1 and rqt_reconfigure for playing with different parameter settings. An example scenario is depicted in Fig. 9. The markers displayed in rviz indicate occupied cells of the local costmap which are taken into account as point obstacles during trajectory optimization.

Fig. 9 Navigating a car-like robot in simulation: (a) visualization in rviz (considered point obstacles, footprint, global plan, walls, robot, optimal trajectory); (b) stage simulator preview

8 Conclusion

This tutorial chapter presented a step-by-step guide on how to set up the package teb_local_planner in ROS for navigation with a differential-drive and a car-like robot. The package implements an online trajectory optimization scheme termed the Timed-Elastic-Band approach, and it seamlessly integrates with the navigation stack as a local planner plugin. The fundamental theory and concepts of the underlying approach along with the related ROS parameters were introduced. The package provides an effective alternative to the currently available local planners, as it supports trajectory planning with cusps (backward motion) and car-like robots. To our knowledge, the latter is currently not provided by any other local planner. The package allows the user to quantify a spatio-temporal trade-off between a time-optimal trajectory and compliance with the original global plan. Further work intends to address the automatic tuning of cost function weights for common cluttered environments and maneuvers. Furthermore, a benchmark suite for the performance evaluation of the different planners available in ROS could be of large interest for the community. Benchmark results facilitate the appropriate selection of planners for different kinds of applications. Additionally, future work aims to include dynamic obstacles, support of additional kinematic models, as well as further improvements of algorithmic efficiency.

References

1. Rösmann, C., Feiten, W., Wösch, T., Hoffmann, F., and T. Bertram. 2012.
Trajectory modification considering dynamic constraints of autonomous robots. In 7th German Conference on Robotics (ROBOTIK), 74–79.
2. Rösmann, C., Feiten, W., Wösch, T., Hoffmann, F., and T. Bertram. 2013. Efficient trajectory optimization using a sparse model. In 6th European Conference on Mobile Robots (ECMR), 138–143.
3. Rösmann, C., Hoffmann, F., and T. Bertram. 2015. Planning of multiple robot trajectories in distinctive topologies. In IEEE European Conference on Mobile Robots, 1–6.
4. Kümmerle, R., Grisetti, G., Strasdat, H., Konolige, K., and W. Burgard. 2011. g2o: A general framework for graph optimization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 3607–3613.
5. Nocedal, J., and S.J. Wright. 1999. Numerical Optimization. Springer Series in Operations Research. New York: Springer.
6. Morari, M., and J.H. Lee. 1999. Model predictive control: past, present and future. Computers and Chemical Engineering 23 (4–5): 667–682.
7. Bhattacharya, S., Kumar, V., and M. Likhachev. 2010. Search-based path planning with homotopy class constraints. In Proceedings of the National Conference on Artificial Intelligence.
8. Guimarães, R.L., de Oliveira, A.S., Fabro, J.A., Becker, T., and V.A. Brenner. 2016. ROS Navigation: Concepts and Tutorial. In Robot Operating System (ROS) – The Complete Reference (A. Koubaa, ed.), vol. 625 of Studies in Computational Intelligence, 121–160. Springer International Publishing.
9. LaValle, S.M. 2006. Planning Algorithms. New York, USA: Cambridge University Press.

Christoph Rösmann was born in Münster, Germany, on December 8, 1988. He received the B.Sc. and M.Sc. degrees in electrical engineering and information technology from the Technische Universität Dortmund, Germany, in 2011 and 2013, respectively. He is currently working towards the Dr.-Ing. degree at the Institute of Control Theory and Systems Engineering, Technische Universität Dortmund, Germany.
His research interests include nonlinear model predictive control, mobile robot navigation and fast optimization techniques.

Frank Hoffmann received the Diploma and Dr. rer. nat. degrees in physics from the Christian-Albrechts University of Kiel, Germany. He was a Postdoctoral Researcher at the University of California, Berkeley, from 1996 to 1999. From 2000 to 2003, he was a lecturer in computer science at the Royal Institute of Technology, Stockholm, Sweden. He is currently a Professor at TU Dortmund, affiliated with the Institute of Control Theory and Systems Engineering. His research interests are in the areas of robotics, computer vision, computational intelligence, and control system design.

Torsten Bertram received the Dipl.-Ing. and Dr.-Ing. degrees in mechanical engineering from the Gerhard Mercator Universität Duisburg, Duisburg, Germany, in 1990 and 1995, respectively. In 1990, he joined the Department of Mechanical Engineering of the Gerhard Mercator Universität Duisburg as a Research Associate. During 1995–1998, he was a Subject Specialist with the Corporate Research Division, Bosch Group, Stuttgart, Germany. In 1998, he returned to the Gerhard Mercator Universität Duisburg as an Assistant Professor. In 2002, he became a Professor with the Department of Mechanical Engineering, Technische Universität Ilmenau, Ilmenau, Germany, and, since 2005, he has been a member of the Department of Electrical Engineering and Information Technology, Technische Universität Dortmund, Dortmund, Germany, as a Professor of systems and control engineering; he is head of the Institute of Control Theory and Systems Engineering. His research fields are control theory and computational intelligence and their application to mechatronics, service robotics, and automotive systems.
ROSLink: Bridging ROS with the Internet-of-Things for Cloud Robotics

Anis Koubaa, Maram Alajlan and Basit Qureshi

Abstract The integration of robots with the Internet is nowadays an emerging trend, as a new form of the Internet-of-Things (IoT). This integration is crucially important to promote new types of cloud robotics applications where robots are virtualized, controlled and monitored through the Internet. This paper proposes ROSLink, a new protocol to integrate Robot Operating System (ROS)-enabled robots with the IoT. The motivation behind ROSLink is the lack of ROS functionality for monitoring and controlling robots through the Internet. Although ROS allows control of a robot from a workstation using the same ROS Master, this solution is not scalable and rather limited to a local area network. Solutions proposed in recent works rely on a centralized ROS Master or robot-side Web servers, sharing similar limitations. Inspired by the MAVLink protocol, the proposed ROSLink protocol defines a lightweight asynchronous communication protocol between the robots and the end-users through the cloud. ROSLink leverages the use of a proxy cloud server that links ROS-enabled robots with users and allows the interconnection between them. ROSLink performance was tested on the cloud and was shown to be efficient and reliable.

A. Koubaa (B) · M. Alajlan Center of Excellence Robotics and Internet of Things (RIOT) Research Unit, Prince Sultan University, Riyadh, Saudi Arabia e-mail: [email protected]
M. Alajlan e-mail: [email protected]
A. Koubaa Gaitech Robotics, Hong Kong, China
A. Koubaa CISTER/INESC-TEC, ISEP, Polytechnic Institute of Porto, Porto, Portugal
M. Alajlan King Saud University, Riyadh, Saudi Arabia
B. Qureshi Prince Sultan University, Riyadh, Saudi Arabia e-mail: [email protected]

© Springer International Publishing AG 2017 A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_8
Keywords Robot Operating System (ROS) · Cloud robotics · Internet-of-Things · ROSLink · Mobile robots · Protocol design

1 Introduction

Cloud robotics [1–3] is a recent and emerging trend in robotics that aims at leveraging the use of Internet-of-Things (IoT) and cloud computing technologies to promote robotics applications from two perspectives: (i) Virtualization: providing seamless access to robots through Web and Web services technologies; (ii) Remote brain: offloading intensive computations from robots to cloud resources to overcome the computation, storage and energy limitations of robots. Nowadays, the Robot Operating System (ROS) [4] represents a de facto standard for the development of robotics applications. ROS, as a middleware, provides several levels of software abstraction for hardware and robotics resources (i.e. sensors and actuators), in addition to the reuse of open source project libraries. It has been designed to overcome the difficulties of developing large-scale service robots by reducing the complexity of robotics software construction. Although widely used in developing applications for service robots, ROS lacks native support for control and monitoring of robots through the Internet. It is possible to write ROS nodes (i.e. programs) on a remote workstation in the same local area network (LAN), where both the robot machine and the workstation use the same ROS Master Uniform Resource Identifier (URI); however, controlling the ROS nodes from a remote location is challenging. To address this limitation, many research works have been proposed focusing on client-server based architectures [5–10]. A milestone work that addressed these issues is the ROSBridge protocol [11]. It is based on a Websockets server installed on the robot side that sends the internal status of the robot, based on ROS topics and services, to Websockets clients and receives commands from them for processing.
This approach enabled the effective integration of ROS with the Internet; however, the fact that the Websockets server runs on the robot machine requires the robot to have a public IP address to be accessible by Websockets clients, which is not possible for every robot, or to be on the same local area network. Network address translation (NAT) could also be used when the robot is behind a NAT domain, but this option may still be cumbersome to deploy. In [12], the author proposed ROS as a Web service, which allows defining a Web service server on the robot accessible through the Internet. However, this solution shares the same limitation as ROSBridge, as the server is located at the robot side. This paper fills the gap and proposes ROSLink, a communication protocol that overcomes the aforementioned limitations by (i) implementing the client specification at the robot side, and (ii) providing a proxy server located on a machine with a public IP address, such as a cloud server. The idea is inspired by the MAVLink protocol [13], where the robot sends its data in serialized messages through a network client to a ground station that acts as a server, which in turn receives these messages, processes them and sends control commands to the robot. As such, it is no longer necessary for a robot to have a public IP address, whereas it remains accessible behind the proxy server. The contributions of this paper are twofold. First, we propose ROSLink, a new communication protocol that defines a 3-tier architecture: the ROSLink Bridge client executes on the robot side; the ROSLink Proxy acts as a server in the ground station; and a client application on the user side interacts with the robot through the ROSLink protocol. Second, we validate the proposed ROSLink protocol through an experimental study on the ground Turtlebot robot as well as the aerial Parrot AR.Drone.
We demonstrate the effectiveness and feasibility of ROSLink. The remainder of this paper is organized as follows. Section 2 presents motivating examples and the objectives behind the design of ROSLink. Section 3 presents the ROSLink communication protocol. Section 4 presents the experimental validation of ROSLink and the evaluation of its performance. Finally, Sect. 5 concludes the paper.

2 Motivating Problems and Objectives

2.1 Problem Statement

The motivation behind this work is to integrate ROS with the Internet of Things. ROS does not natively support monitoring and control of robots through the Internet. In fact, as illustrated in Fig. 1a, ROS allows controlling a robot from a workstation using the same ROS Master, but this solution is not scalable and is rather limited to local area network usage. The typical scenario is that every robot starts its own ROS Master node, and users can control the robot from their workstations if they configure their ROS network settings to use the same ROS Master running on the robot. This standard approach does not natively allow controlling the robot through the Internet, as robots typically do not have a public IP address. The use of port forwarding behind NAT can be considered in certain cases, but might not be possible in others, e.g. a connection through 3G/4G. One possible solution, as illustrated in Fig. 1b, is to use one ROS Master node for all robots, where the Master node runs on a central server with a public IP address on the Internet. All users connect to the same ROS Master and can access any robot by publishing and subscribing to its topics and services. However, this solution has several limitations. First, some ROS topics, services and nodes may conflict by having the same name. This issue requires a careful namespace design for ROS nodes, services and topics to avoid any conflict; with a large number of robots, this solution becomes very complex.
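The standard LAN approach in Fig. 1a boils down to pointing the workstation at the robot's ROS Master via environment variables; the hostname below is a placeholder for the robot's LAN address:

```shell
# On the workstation: use the ROS Master running on the robot
# ("robot-hostname" is a placeholder, 11311 is the default master port).
export ROS_MASTER_URI=http://robot-hostname:11311
export ROS_HOSTNAME=$(hostname)
echo "$ROS_MASTER_URI"
```

This only works while both machines share a network and the robot's address is reachable, which is exactly the limitation ROSLink removes.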
The second issue is the lack of scalability, i.e. the ROS Master might become overloaded when several robots are bound to it at a given time. Apart from the known networking issues, considering that several ROS topics are bandwidth-greedy, there is no viable solution for mapping individual users to their robots, since all topics are visible to all users.

Fig. 1 ROS operation approaches: (a) Standard approach: typical connection between ROS robots and ROS users; a user is connected to the ROS Master of the robot to control and monitor its status, typically in a local area network. (b) Centralized approach: one central ROS Master to which all robots and users are connected; this solution is not scalable and does not provide effective management of robots and users. (c) ROSLink approach: a cloud-based approach, where ROS robots and users interact through the cloud; the ROSLink cloud provides user and robot management, service-oriented interfaces, and real-time streaming services.

Our approach consists of the design of ROSLink, a lightweight communication protocol, inspired by MAVLink [13], that allows for a cloud-based interaction between ROS robots and their users, as depicted in Fig. 1c. The idea is to add a ROSLink Bridge on top of ROS for every robot, such that this bridge sends the status of the robot using JSON-serialized messages. The ROSLink Bridge is a ROS node that accesses all topics and services of interest in ROS and sends selected information in ROSLink messages, serialized in JSON format. These messages are sent to the ROSLink Cloud Proxy, which processes the messages and forwards them to the individual user and/or users of the robot. In addition, users send commands to the robot through the ROSLink Cloud Proxy using ROSLink JSON messages, which are then processed by the ROSLink Bridge, resulting in the execution of the corresponding ROS action.
The ROSLink cloud-based approach presents three major advantages: (1) it is independent of the ROS Master nodes of the robots, (2) it ensures seamless communication between users and robots through the cloud, and (3) it provides effective management of robots, users and the underlying services.

2.2 Overview

The main objective of ROSLink is to control and monitor a ROS-enabled robot through the Internet. In the literature, most related works focused on a two-tiered client/server approach, where the server is implemented in the robot and the client is implemented in the user application. In fact, most of these works are based on instantiations of the ROSBridge and ROSJS frameworks [11, 14] to build remotely controlled robots. ROSBridge represents a milestone that enabled this kind of remote control of ROS-enabled tele-operated robots. However, the drawbacks of this approach are: (1) a robot-centric design, which restricts the scalability of the system, as the server is centralized in the robot itself; (2) difficult deployment on the Internet, as the robot needs a public IP address or must be accessible through a NAT forwarding port when it is inside a local area network. To overcome these limitations, we propose ROSLink, a three-tiered client/server model, where the client is implemented in the robot and at the user, whereas the server is located in a public domain and acts as a proxy linking the robots with their users. ROSLink overcomes the two aforementioned problems. First, there is no longer a server implemented inside the robot, so the approach is no longer robot-centric. In contrast, the robot implements the client side through the ROSLink Bridge component, which is a ROS node that interfaces with ROS on the one hand, and on the other hand sends ROS data to the outside world through a network interface (UDP, TCP, or Websockets).
Besides, the server side of the ROSLink model is implemented in a publicly available server called the ROSLink Proxy, which acts as a mediator between the robots and the users. Robots and users send messages to the proxy server, which dispatches them accordingly to the other side. ROSLink makes a complete abstraction of ROS by defining a communication protocol (inspired by the MAVLink protocol) that provides all information about the robot through ROS topics/services without exposing the ROS ecosystem to the users. The user does not need to be familiar with ROS to be able to use ROSLink to send commands to the robot. ROSLink defines a set of interfaces for the users to interact with the robot, and a set of messages to be exchanged between them. ROSLink messages are constructed based on ROS topic/service parameters, either to get data or to submit data for executing an action. The messages are represented as JSON strings. JSON is chosen as the data interchange format because it is a platform-independent and language-independent data representation format [15]. In addition, it is a lighter-weight solution compared to XML, which makes it more appropriate for resource-constrained and bandwidth-constrained platforms. This allows the client application developer to choose any programming language (C++, Java, JavaScript, Python, etc.) to develop a client application that interacts with ROSLink to command and monitor ROS-enabled robots. In summary, ROSLink differs from previous works, and in particular from ROSBridge, in the following aspects:

• ROSLink implements a client in the robot as well as in the user applications, and implements a server in an intermediate proxy, whereas ROSBridge implements a Websocket server in the robot and Websocket clients in user applications.
• ROSBridge is based on the Websocket protocol, whereas ROSLink can be implemented with any transport layer protocol (TCP, UDP and Websockets). In this paper, we used UDP and Websockets interfaces to implement ROSLink.
• ROSLink does not rely on ROSBridge, as in previous works, but defines its own communication protocol between ROS and non-ROS users.

3 The ROSLink Protocol

In what follows, we present an overview of the system and software architecture.

3.1 ROSLink System Architecture

The general architecture of ROSLink is presented in Fig. 2. The system is composed of three main parts:

• The ROSLink Bridge: this is the main component of the system. It is the interface between ROS and the ROSLink protocol. This bridge has two main functionalities: (1) it reads data from messages of ROS topics and services, serializes the data in JSON format according to the ROSLink protocol specification, and sends it to a ground station, a proxy server or a client application; (2) it receives JSON-serialized data through a network interface from a ground station or a client application, deserializes the JSON string, parses the command, and executes it through ROS.
• The ROSLink Proxy and Cloud: it acts as a proxy server between the ROSLink Bridge (embedded in the robot) and the user client application. Its role is to link a user client application to a ROS-enabled robot through its ROSLink Bridge. The ROSLink Proxy is mainly a forwarder of ROSLink messages between the robot and the user. It keeps the user updated with the robot status and also forwards control commands from the user to the robot. In addition, the ROSLink Proxy interacts with the ROSLink Cloud component, which maintains and manages the list of robots and users, creates a mapping between them, and performs all management functionality, including security, quality-of-service monitoring, etc.
• The ROSLink Client Application: it basically represents a control and monitoring application for the robot. This application is intended to monitor the status of the robot, which it receives through ROSLink messages via the ROSLink Proxy. In addition, it sends commands through ROSLink messages to control the robot's activities.
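The receive path of the ROSLink Bridge (functionality (2) above) can be sketched as a small dispatch loop: deserialize the JSON string, parse the command, execute a handler. The command name, field layout and handler table below are illustrative assumptions, not the official ROSLink specification; a real bridge would publish to ROS topics rather than return strings.

```python
import json

# Hypothetical handler: a real bridge would publish a Twist on cmd_vel.
def handle_move(params):
    return "cmd_vel <- linear=%s angular=%s" % (params["linear"], params["angular"])

HANDLERS = {"move": handle_move}  # illustrative command table

def on_roslink_message(raw):
    msg = json.loads(raw)                    # (a) deserialize JSON string
    handler = HANDLERS[msg["command"]]       # (b) parse / look up command
    return handler(msg.get("params", {}))    # (c) execute through ROS

raw = json.dumps({"command": "move", "params": {"linear": 0.4, "angular": 0.0}})
print(on_roslink_message(raw))  # cmd_vel <- linear=0.4 angular=0.0
```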
3.2 ROSLink Communication Protocol

We designed the ROSLink communication protocol to allow interaction between the different parts of the ROSLink system, namely the ROSLink Bridge, the ROSLink Proxy and the ROSLink Client application. The ROSLink communication protocol is based on two main elements: (1) the transport protocol used to communicate between users, clouds and robots; (2) the message specification and its serialization in JSON format.

Fig. 2 ROSLink architecture

Transport Protocol The ROSLink Bridge, ROSLink Proxy and ROSLink Client all use a network interface to communicate. There are different options for the transport protocol, including UDP, TCP and Websockets. Communication through serial port and telemetry devices is not considered, as we only aim at communication through the Internet. In our ROSLink implementation, we considered both the UDP and Websockets transport protocols. The ROSLink Proxy implements both UDP and Websockets servers, providing different interfaces for robots and users to interact with it. On the other hand, the ROSLink Bridge and ROSLink Client can implement UDP clients, Websockets clients, or both, to interact with the ROSLink Proxy server. This gives the developer enough flexibility to choose the most appropriate transport protocol for his application. On the one hand, we opted for the UDP connection because it is a better and lighter-weight choice for real-time and loss-tolerant streaming applications compared to TCP, as is the case with the ROSLink data exchange model. In fact, the robot streams its internal status (e.g. position, odometry, velocities, etc.) in real-time to the proxy server, which delivers it to the ROSLink client application. On the other hand, the Websockets interface provides an ideal protocol for more reliable transport of data streams compared to UDP, while meeting real-time requirements.
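The UDP pattern described above, where the bridge acts as a client and the proxy as a server, can be demonstrated on the loopback interface. This is a minimal sketch, not the ROSLink implementation; the port is chosen ephemerally and the payload is an illustrative message:

```python
import socket

# The proxy binds a UDP server socket; the bridge (client) sends a
# datagram to it, matching ROSLink's client-to-proxy direction.
proxy = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proxy.bind(("127.0.0.1", 0))         # OS-assigned ephemeral port
addr = proxy.getsockname()

bridge = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bridge.sendto(b'{"type": "heartbeat"}', addr)

data, sender = proxy.recvfrom(1024)  # proxy receives the datagram
print(data.decode())
bridge.close()
proxy.close()
```

Unlike Websockets, no connection is established first; a lost datagram is simply gone, which matches the loss-tolerant streaming use case.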
In fact, Websockets is a connection-based protocol that opens the connection between the two communicating ends before data exchange, and ensures the connection remains open all along the message exchange session. The connection is closed when either of the two ends terminates the session, which makes it more reliable. It is also possible to use a TCP connection for better transfer reliability, but in our context the occasional loss of data is not critical. It might be critical for closed-control-loop applications, which are out of our scope at this stage.

ROSLink Message Types

The ROSLink communication protocol is based on the exchange of ROSLink messages. ROSLink messages are JSON-formatted strings that contain information about the command and its parameters. To standardize the types of messages exchanged, we specified a set of ROSLink messages that are supported by the ROSLink Proxy. These messages can be easily extended based on the requirements of the user and the application. There are two main categories of ROSLink messages: (i) State messages: these are messages sent by the robot that carry information about its internal state, including its position, orientation, battery level, etc. (ii) Command messages: these are messages sent by the client application to the robot that carry commands to make the robot execute some action, for example moving, executing a mission, or going to a goal location. In what follows, we identify examples of message and command types:

• Presence message: the robot should regularly declare its presence to be considered active. Typically, Heartbeat messages sent at a certain frequency (typically one message per second) are used for this purpose.
• Motion messages: in a robot mission, it is important to know the location and odometry motion parameters (i.e. linear and angular velocities) of the robot at a certain time.

Fig. 3 ROSLink message header structure
Thus, a motion message containing the position information of the robot should be periodically broadcast.
• Sensor messages: the robot needs to broadcast its internal sensor data such as IMU, laser scanners, camera images, GPS coordinates, actuator states, etc. ROSLink defines several sensor messages to exchange these data between the robot and the user.
• Motion commands: for the robot to navigate in ROS, certain commands are sent to it, like Twist messages in ROS, goal/waypoint locations, and takeoff/landing commands for drones. ROSLink specifies different types of commands to make the robot move as desired.

The aforementioned list is not exhaustive, as other types of messages can be designed based on the requirements of the users and the information available from the robot. In what follows, we present the ROSLink message structures for the main state messages and commands.

ROSLink Message Structure

A ROSLink message is composed of a header and a payload. The structure of the ROSLink message header is presented in Fig. 3.

ROSLink Message Header: The total header size is 128 bits. The roslink_version is encoded as a short int on 8 bits and specifies the version of the ROSLink protocol. This is because new ROSLink versions may be released in the future, and it is important to specify which version a message belongs to for correct parsing. The ros_version specifies the ROS version (e.g. Indigo). The system_id is an int encoded on 16 bits and specifies the ID of the robot. It helps in differentiating robots from each other at the server side. It would be possible to encode the system_id on 8 bits to reduce the header size, but this would restrict the scalability of the system to only 256 robot IDs. The message_id specifies the type of message received. It helps in correctly parsing the incoming message and extracting the underlying information. The sequence_number denotes the sequence of the packet; it identifies a single packet and avoids processing duplicate packets.
Finally, the key field is encoded on 24 bits and is used to identify a robot and to map it to a user. A user that would like to have access to a robot must use the same key that the robot is using in its Heartbeat message.

ROSLink Message Payload: The payload carries the data relevant for each ROSLink message type. ROSLink defines several state message and command types. In what follows, we give an overview of the most common state and command messages. For the complete set of messages, the reader may refer to [16]. The most basic ROSLink message is the Heartbeat message, which is sent periodically from the robot to the ROSLink Proxy server, and vice-versa. Every ROSLink Bridge should implement the periodic transmission of the Heartbeat message. The objective of the Heartbeat message is for the proxy server to ensure, upon reception of the message, that the robot is active and connected. In the same way, a robot that receives a Heartbeat message from the ROSLink Proxy server knows that the server is alive. This message increases the reliability of the system when it uses a connectionless protocol such as UDP, since both ends can make sure of the activity of the other end. Failsafe operations can be designed for when the robot loses reception of Heartbeat messages from the user, such as stopping motion or returning to the start location until connectivity is resumed. The Heartbeat message structure is defined in JSON representation in Listing 1.1. In the ROSLink protocol, the message_id of the Heartbeat message is set to zero.
Listing 1.1 Heartbeat Message Structure

{
  "roslink_version": int8,
  "ros_version": int8,
  "system_id": int16,
  "message_id": 0,
  "sequence_number": int64,
  "payload": {
    "type": int8,
    "name": String,
    "system_status": int8,
    "owner_id": String,
    "mode": int8
  }
}

The Robot Status message contains the general system state, like which onboard controllers and sensors are present and enabled, in addition to information related to the battery state. Listing 1.2 presents the Robot Status message structure, which has a message_id equal to 1.

Listing 1.2 Robot Status Message Structure

{
  "roslink_version": int8,
  "ros_version": int8,
  "system_id": int16,
  "message_id": 1,
  "sequence_number": int64,
  "payload": {
    "onboard_control_sensors_present": uint32,
    "onboard_control_sensors_enabled": uint32,
    "voltage_battery": uint16,
    "current_battery": int16,
    "battery_remaining": int8
  }
}

The Global Motion message represents the position of the robot and its linear and angular velocities. This information is sent to the ROSLink client at high frequency to keep track of the robot's motion state in real-time.
An instance of the Global Motion message structure is expressed in Listing 1.3:

Listing 1.3 Global Motion Message Structure

{
  "roslink_version": int8,
  "ros_version": int8,
  "system_id": int16,
  "message_id": int8,
  "sequence_number": int64,
  "payload": {
    "time_boot_ms": uint32,
    "x": float64,
    "y": float64,
    "z": float64,
    "vx": float64,
    "vy": float64,
    "vz": float64,
    "wx": float64,
    "wy": float64,
    "wz": float64,
    "pitch": float64,
    "roll": float64,
    "yaw": float64
  }
}

Listing 1.4 presents the Range Finder Data message, which carries information and data about laser scanners attached to the robot. The range finder sensor information enables the development of control applications on the client through the cloud, such as obstacle-avoidance reactive navigation, SLAM, etc.

Listing 1.4 Range Finder Data Message Structure

{
  "roslink_version": int8,
  "ros_version": int8,
  "system_id": int16,
  "message_id": int8,
  "sequence_number": int64,
  "payload": {
    "time_usec": int64,
    "angle_min": float32,
    "angle_max": float32,
    "angle_increment": float32,
    "time_increment": float32,
    "scan_time": float32,
    "range_min": float32,
    "range_max": float32,
    "ranges": float32[],
    "intensities": float32[]
  }
}

The following listings present a few examples of command messages that can be sent from the ROSLink client application to the robot through the cloud. The most basic command message is the Twist command message, which controls the linear and angular velocities of the robot and is illustrated in Listing 1.5. This ROSLink Twist message maps directly to the Twist message defined in ROS.
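As a hedged sketch of this mapping (not the actual bridge code), deserializing a ROSLink Twist command into the fields a ROS geometry_msgs/Twist publisher would need could look like the following; the rospy publishing step is omitted, and the sample message_id and sequence values are arbitrary placeholders.

```python
import json

def roslink_twist_to_ros(json_str):
    """Extract linear/angular velocity fields from a ROSLink Twist message.

    Payload field names (lx..az) follow Listing 1.5; mapping the result
    onto a geometry_msgs/Twist publisher is left out of this sketch.
    """
    payload = json.loads(json_str)["payload"]
    linear = {"x": payload["lx"], "y": payload["ly"], "z": payload["lz"]}
    angular = {"x": payload["ax"], "y": payload["ay"], "z": payload["az"]}
    return linear, angular

# A stop command: all velocities zero (header values illustrative only).
stop_cmd = json.dumps({
    "roslink_version": 1, "ros_version": 1, "system_id": 7,
    "message_id": 9, "sequence_number": 3,
    "payload": {"lx": 0.0, "ly": 0.0, "lz": 0.0,
                "ax": 0.0, "ay": 0.0, "az": 0.0},
})
linear, angular = roslink_twist_to_ros(stop_cmd)
```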
Once the ROSLink Twist command reaches the ROSLink Bridge, it is first deserialized from the JSON wrapper; the bridge then extracts the velocity commands and publishes them as a ROS Twist message to make the robot move.

Listing 1.5 Twist Command Message Structure

{
  "roslink_version": int8,
  "ros_version": int8,
  "system_id": int16,
  "message_id": int8,
  "sequence_number": int64,
  "payload": {
    "lx": float,
    "ly": float,
    "lz": float,
    "ax": float,
    "ay": float,
    "az": float
  }
}

To stop the robot, it suffices to send a Twist command message with all velocities set to zero. In the same way, the Go-To-Waypoint command message defines a command to send the robot to a specific goal location. The parameters x, y and z represent the 3D coordinates of the goal location. The frame_type field represents the world frame if it is set to true, and the robot frame if it is set to false.

Listing 1.6 Go-To-Waypoint Command Message Structure

{
  "roslink_version": int8,
  "ros_version": int8,
  "system_id": int16,
  "message_id": int8,
  "sequence_number": int64,
  "payload": {
    "frame_type": boolean,
    "x": float,
    "y": float,
    "z": float
  }
}

Several other command and state messages were also defined, like the takeoff/land command messages in the context of drones, and the GPS state message that provides information from the GPS sensor.

3.3 Integration of ROSLink Proxy in Dronemap Planner

As mentioned in Sect. 3.1, the ROSLink Proxy is responsible for mapping users to robots, managing them, and controlling the access of users to robots. We integrated the ROSLink Proxy into the Dronemap Planner cloud application [17]. Dronemap Planner is a modular, service-oriented, cloud-based system that was originally developed for the management of MAVLink drones and to provide seamless access to them through the Internet.
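The key-based mapping that such a proxy maintains between users and robots can be sketched minimally as follows; the class and method names here are assumptions for illustration, not Dronemap Planner's actual API.

```python
class RoslinkProxy:
    """Toy key-based forwarder: maps a shared key to robot and user addresses."""

    def __init__(self):
        self.robots = {}  # key -> robot network address
        self.users = {}   # key -> user network address

    def on_robot_message(self, key, robot_addr):
        # Refresh the robot mapping; return the mapped user (if any)
        # so the state message can be forwarded to them.
        self.robots[key] = robot_addr
        return self.users.get(key)

    def on_user_message(self, key, user_addr):
        # Refresh the user mapping; return the mapped robot (if any)
        # so the command can be forwarded to it.
        self.users[key] = user_addr
        return self.robots.get(key)

proxy = RoslinkProxy()
proxy.on_robot_message("abc123", ("10.0.0.5", 42000))         # robot heartbeat arrives
robot = proxy.on_user_message("abc123", ("10.0.0.9", 43000))  # user command arrives
# robot now holds the robot's address, so the command can be relayed to it
```

Because both sides present the same key, the proxy never needs a direct connection between user and robot; it only relays messages between the two mapped endpoints.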
The software architecture of Dronemap Planner defines a proxy layer interface that allows it to mediate between robots and users. In [17], this proxy interface was implemented as a MAXProxy component that mediates between MAVLink drones and users. As such, Dronemap Planner is a comprehensive system that allows controlling both MAVLink and ROSLink robots simultaneously. Video demonstrations of Dronemap Planner related to the control of drones over the Internet are available in [18]. We extended Dronemap Planner to also support the ROSLink protocol by implementing the proxy layer interface as a ROSLinkProxy that allows the exchange of ROSLink messages between ROS-enabled robots and their users. Dronemap Planner provides a comprehensive system for drone and user management, and session handling. In addition, Dronemap Planner provides both Websockets and UDP socket network interfaces for ROSLink robots and users. We have used Dronemap Planner to run the experiments related to the performance evaluation of ROSLink in controlling and monitoring ROS-enabled robots through the Internet, presented in Sect. 4.

4 Experimental Validation

In this section, we present an experimental study to demonstrate the effectiveness of ROSLink and to assess its impact on real-time open-loop control applications. We investigate the impact of network and cloud delays on the performance of open-loop control applications of ROS-enabled robots. With open-loop control, commands are sent to the robot without the need for any feedback from the robot. The problem can be formulated as follows: "If the control application is offloaded from the robot to the ROSLink client, what is the impact of network and cloud processing delays on the performance of the control?"

Trajectory Control Application

To address this question, we consider a prototype open-loop control application of the motion of the Turtlesim robot (the default simulator in ROS) to follow a spiral trajectory.
We have chosen a spiral trajectory motion control application because the resulting trajectory is sensitive to delays and jitters. A spiral trajectory is defined by a combination of a linear velocity that increases over time and a constant angular velocity. The general algorithm for drawing a spiral trajectory is presented in the following listing:

Listing 1.7 Spiral Trajectory Motion Control Algorithm

double constant_angular_speed = ANGULAR_SPEED_CONSTANT;
double linear_velocity_step = LINEAR_STEP_CONSTANT;
double time = SIMULATION_TIME_CONSTANT;
double rate = FREQUENCY_OF_UPDATE_CONSTANT;
int number_of_iterations = (int)(time * rate);
double linear_velocity = _INIT_LINEAR_VELOCITY;
for (int i = 0; i

e7ccb7b11eeb
...
Successfully built f2cc5810fb94
$ docker run -it ros:tutorials bash -c "roscore & rosrun roscpp_tutorials listener & rosrun roscpp_tutorials talker"
...
[ INFO] [1462299420.261297314]: hello world 5
[ INFO] [1462299420.261495662]: I heard: [hello world 5]
[ INFO] [1462299420.361333784]: hello world 6
^C[ INFO] [1462299420.361548617]: I heard: [hello world 6]
[rosout-1] killing on exit

From here, students can swap out the URLs for their own repositories and append additional dependencies. Should students encounter any build or runtime errors, Dockerfiles and/or images could be shared (from GitHub and/or Docker Hub) with the instructor or other peers, on say answers.ros.org, to serve as a minimal example capable of quickly replicating the errors encountered for further collaborative debugging. What we've shown so far has been a rather structured workflow from build to runtime; however, containers also offer a more interactive and dynamic workflow as well. As shown in this tutorial video,10 we can interact with containers directly. A container can persist beyond the life cycle of its starting process, and is not removed until the docker daemon is directed to do so.
Naming or keeping track of your containers affords you the use of isolated ephemeral workspaces in which to experiment or test, stopping and restarting them as needed. Note that you should avoid using containers to store system state or files you wish to preserve. Instead, a developer may work within a container iteratively, progressively building the larger application in increments and taking periodic respites to commit the state of their container/progress to a newly tagged image layer. This could be seen as a form of system-wide revision control, with save points allowing the developer to reverse changes by simply spawning a new container from a previous layer. All the while, the developer could also consolidate their progress by noting the setup procedure within a new Dockerfile, testing and comparing it against the lineage of working scratchwork images.

10 https://youtu.be/9xqekKwzmV8.

R. White and H. Christensen

4.2 Industry

In our previous education example, it was evident how we simply spawned all the tutorial nodes from a single bash process. When this process (PID 1) is killed, the container is also killed. This explains the popular convention of keeping to one process per container, as it is indicative of the modern paradigm of microservices architecture, etc. This is handy should we desire the life-cycles of certain deployed ROS nodes to be longer than others. Let's revisit the previous example, utilizing software defined networking to interlink the same ROS nodes and services, only now running from separate containers. Within a new directory, foo, we'll create a file named docker-compose.yml:

services:
  master:
    image: ros:indigo
    environment:
      - "ROS_HOSTNAME=master.foo_default"
    command: roscore
  talker:
    build: talker/.
    environment:
      - "ROS_HOSTNAME=talker.foo_default"
      - "ROS_MASTER_URI=http://master.foo_default:11311"
    command: rosrun roscpp_tutorials talker
  listener:
    build: listener/.
    environment:
      - "ROS_HOSTNAME=listener.foo_default"
      - "ROS_MASTER_URI=http://master.foo_default:11311"
    command: rosrun roscpp_tutorials listener

With this compose file, we have encapsulated the entire setup and structure of our simple set of ROS 'microservices'. Here, each service (master, talker, listener) will spawn a new container named appropriately, originating from the image designated or the Dockerfile specified in the build field. Notice that the environment fields configure the ROS network variables to match each service's domain name under the foo_default network, named after our project's directory. The foo_default namespace can be omitted, as the default DNS resolution within foo_default will resolve using the local service or container names. Still, remaining explicit helps avoid collisions while adding host-enabled DNS resolution (later on) over multiple Docker networks. Before starting up the project, we'll also copy the same Dockerfile from the previous example into the project's talker and listener sub-directories. With this, we can start up the project detached, and then monitor the logs as below:

ROS and Docker

~/foo$ docker-compose up -d
Creating foo_master_1
Creating foo_listener_1
Creating foo_talker_1
~/foo$ docker-compose logs
Attaching to foo_talker_1, foo_master_1, foo_listener_1
...
talker_1   | [ INFO] [1462307688.323794165]: hello world 42
listener_1 | [ INFO] [1462307688.324099631]: I heard: [hello world 42]

Now let's consider the example where we'd like to upgrade the ROS distro release used for just our talker service, leaving the rest of our ROS nodes running and uninterrupted. We'll use Docker Compose to recreate our new talker service:

~/foo$ docker exec -it foo_talker_1 printenv ROS_DISTRO
indigo
~/foo$ sed -i -- 's/indigo/jade/g' talker/Dockerfile
~/foo$ docker-compose up -d --build --force-recreate talker
Building talker
Step 1 : FROM ros:jade
...
Successfully built 3608a3e9e788
Recreating foo_talker_1
~/foo$ docker exec -it foo_talker_1 printenv ROS_DISTRO
jade

Here we first check the ROS release used in the container, and change the version used in the originating Dockerfile for the talker service. Next we use some shorthand flags to inform Docker Compose to bring the talker service up again, recreating the talker container by rebuilding the talker image. We then check the ROS distro again and see the reflected update. You may also go back to docker-compose logs and find that the counter in the published message has been reset. From here on, we can abstract our interaction with the docker engine and instead point our client towards a Docker Swarm,11 a method for one client to spin up containers from a cluster of Docker engines. Normally a tool such as Docker Machine12 can be used to bootstrap a swarm and define a swarm master. This entails provisioning and networking engines from multiple hosts together, such that requested containers can be load balanced across the swarm, and containers running from different hosts can securely communicate.

11 https://docs.docker.com/swarm.
12 https://docs.docker.com/machine.

4.3 Research

Up to this point, we've considered relatively benign Docker-enabled ROS projects where our build dependencies were fairly shallow, simply those accrued through default apt-get, and runtime dependencies without any external hardware. However, this is not always the case when an original project builds from fairly new and evolving research. Let's assume for the moment you're a computer vision researcher, and a component of your experiment utilises ROS for image acquisition and transport, culminating in live published classification probabilities from a trained deep convolutional neural network (CNN). Your bleeding-edge CNN relies on a specific release of parallel programming libraries, not to mention the supporting GPU and image capture peripheral hardware.
Here we'll demonstrate the reuse of existing public Dockerfiles to quickly obtain a running setup, stringing together the latest published images with preconfigured installations of CUDA/CUDNN and meticulous source build configurations for Caffe [5]. Specifically, we'll use a Caffe image provided by the community on Docker Hub. This image in turn builds from a CUDA/CUDNN image from NVIDIA, which in turn uses the official Ubuntu image on Docker Hub. All necessary Dockerfiles are made available through the respective Docker Hub repos, so that you may build the stack locally if you choose (Fig. 1). However, in the interest of build time and demonstration, we literally build FROM those before us. This involves a small modification and addition to the Dockerfile for ros-core. By simply redirecting the parent image structure of the Dockerfile to point to the end-chain image, with each image in the prior chain encompassing a component of our requirements, we can easily customize and concatenate the lot to describe and construct an environment that contains just what we need. For brevity, detailed and updated documentation/Dockerfiles are kept in the same repository as the ros_caffe project.13 A link to a video demonstration14 can also be found at the project repo. Shown here will be the notable key points in pulling/running the image classification node from your own Docker machine. First we'll modify the ros-core Dockerfile to build from an image with Caffe built using CUDA/CUDNN; in this case we'll use a popular set of maintained automated build repos from Kai Arulkumaran [1]:

FROM kaixhin/cuda-caffe

Next we'll amend the RUN command that installs ROS packages to include the additional ROS dependencies for our ros_caffe example package:

13 https://github.com/ruffsl/ros_caffe.
14 https://youtu.be/T8ZnnTpriC0.

Fig. 1 A visual of the base image inheritance for the ros_caffe:gpu image

# install ros packages
RUN apt-get update && apt-get install -y \
    ros-${ROS_DISTRO}-ros-core \
    ros-${ROS_DISTRO}-usb-cam \
    ros-${ROS_DISTRO}-rosbridge-server \
    ros-${ROS_DISTRO}-roswww \
    ros-${ROS_DISTRO}-mjpeg-server \
    ros-${ROS_DISTRO}-dynamic-reconfigure \
    python-twisted \
    python-catkin-tools && \
    rm -rf /var/lib/apt/lists/*

Note the reuse of the ROS_DISTRO variable within the Dockerfile. When building from the official ROS image, this helps make your Dockerfile more adaptable, allowing for easy reuse and migration to the next ROS release just by updating the base image reference.

# setup catkin workspace
ENV CATKIN_WS=/root/catkin_ws
RUN mkdir -p $CATKIN_WS/src
WORKDIR $CATKIN_WS/src

# clone ros-caffe project
RUN git clone https://github.com/ruffsl/ros_caffe.git

# Replacing shell with bash for later source, catkin build commands
RUN mv /bin/sh /bin/sh-old && \
    ln -s /bin/bash /bin/sh

# build ros-caffe ros wrapper
WORKDIR $CATKIN_WS
ENV TERM xterm
ENV PYTHONIOENCODING UTF-8
RUN source "/opt/ros/$ROS_DISTRO/setup.bash" && \
    catkin build --no-status && \
    ldconfig

Finally, we can simply clone and build the catkin package. Note the use of WORKDIR to execute RUN commands from the proper directories, avoiding the need to hard-code the paths in the commands. The optional variables and arguments around the catkin build command are used to clear a few warnings and printing behaviors the catkin tool has while running from a basic terminal session. Now that we know how this is all built, let's skip ahead to running the example. You'll first need to clone the project's git repo and then download the caffe model to acquire the necessary files to run the example network, as explained in the project README on the github repo. We can then launch the node by using the run command to pull the necessary images from the project's automated build repo on Docker Hub.
The run script within the docker folder shows an example of using the GPU version:

nvidia-docker run \
    -it \
    --publish 8080:8080 \
    --publish 8085:8085 \
    --publish 9090:9090 \
    --volume="${PWD}/../ros_caffe/data:/root/catkin_ws/src/ros_caffe/ros_caffe/data" \
    --device /dev/video0:/dev/video0 \
    ruffsl/ros_caffe:gpu roslaunch ros_caffe_web ros_caffe_web.launch

The Nvidia Docker plug-in is a simple wrapper around Docker's own run call, injecting additional arguments that include mounting the device hardware and driver directories. This permits our CUDA code to function easily within the container without baking the version-specific Nvidia driver into the image itself. You can easily see all the implicit properties affected by using the docker inspect command with the name of the generated container, and notice devices such as /dev/nvidia0 and a mounted volume driver named after your graphics driver version. Be sure you have enough available VRAM, about 500 MB, to load this network. You can check your memory usage using nvidia-smi. If you don't have a GPU, then you may simply alter the above command by changing nvidia-docker to just docker, as well as swapping the :gpu image tag with :cpu. The rest of the command is relatively clear: specifying the port mapping for the container to expose the web ROS interface through localhost, as well as mounting the volume including our downloaded caffe model. The device argument here is used to provide the container with a video capture device; one can just as easily mount /dev/bus/usb or /dev/joy0 for other such peripherals. Lastly, we specify the image name and roslaunch command. Note that we can use this command as is, since we've modified the image's entrypoint to source our workspace as well. Once the ros-caffe node is running, we can redirect our browser to the local URL15 to see a live video feed of the current published image and label prediction from the neural network, as shown in Fig. 2.
15 http://127.0.0.1:8085/ros_caffe_web/index.html.

Fig. 2 A simple ros-caffe web interface with live video stream and current predicted labels, published from containerized nodes with GPU and webcamera device access

4.4 Graphical Interfaces

One particular aspect routinely utilized by the ROS community includes all the tools used to introspect and debug robotic software through graphical interfaces, such as rqt, Rviz, or gzviewer. Although using graphical interfaces is perhaps outside of the original use case of Docker, it is perfectly possible and in fact relatively viable for many applications. Thanks to Linux's pervasive use of the file system for everything, including video and audio devices, we can expose what we need from the host system to the container. Although the easiest means of permitting the use of a GUI may be to simply use the host's installation of ROS or Gazebo, as demonstrated in this video,16 and thus set the master URI or server address to connect to the containers via the virtual networking and DNS containers described earlier, it may be necessary to run a GUI from within the container, be it for custom dependencies or accelerated graphics. There are of course a plethora of solutions for various requirements and containers, ranging from display tunneling over SSH, VNC client-server sessions, or directly mounting X-server unix sockets and forwarding alsa or pulseaudio connections. Each method of course comes with its own pros and cons, and in light of this evolving frontier, the reader is encouraged to read ROS Wiki's Docker page17 in order to follow the latest tutorials and resources. Below is a brief example of the turtlebot demo using Gazebo and RVIZ GUIs via X Server sockets and graphical acceleration from within a container.

16 https://youtu.be/P__phnA57LM.
17 http://wiki.ros.org/docker.

Fig. 3 An example of containerized GUI windows rendered from within the host's desktop environment

First we'll build from OSRF's ROS image using the desktop-full tag, as this will have Gazebo and RVIZ pre-installed. Then we'll add the turtlebot packages, the necessary world models, and a custom launch file (Fig. 3).

FROM osrf/ros:kinetic-desktop-full

# install turtlebot simulator
RUN apt-get update && apt-get install -y \
    ros-${ROS_DISTRO}-turtlebot* \
    && rm -rf /var/lib/apt/lists/*

# Getting models from [http://gazebosim.org/models/]. This may take a few seconds.
RUN gzserver --verbose --iters 1 /opt/ros/${ROS_DISTRO}/share/turtlebot_gazebo/worlds/playground.world

# install custom launchfile
ADD my_turtlebot_simulator.launch /

Note the single iteration of gzserver with the default turtlebot world, used to prefetch the model from the web and into the image. This helps cut Gazebo's start-up time, saving each deriving container from downloading and initializing the needed model database at runtime. The launchfile here is relatively basic, launching the simulation, the visualisation, and a user control interface:

For hardware acceleration using discrete graphics for Intel, we'll need to also add some common Mesa libraries:

# Add Intel display support by installing Mesa libraries
RUN apt-get update && apt-get install -y \
    libgl1-mesa-glx \
    libgl1-mesa-dri \
    && rm -rf /var/lib/apt/lists/*

For hardware acceleration using dedicated graphics for Nvidia, we'll instead need to add some hooks and variables for the nvidia-docker plugin:

# Add Nvidia display support by including nvidia-docker hooks
LABEL com.nvidia.volumes.needed="nvidia_driver"
ENV PATH /usr/local/nvidia/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}

Note how any deviation between the two setups was left to the last few lines of the Dockerfile, specifically any layers of the image that will no longer be hardware agnostic.
This enables you to share as many of the common previous layers between the two tags as possible, saving disk space and shortening build times by reusing the cache. Finally, we can launch GUI containers by permitting access to the X Server, then mounting the Direct Rendering Infrastructure and unix socket:

xhost +local:root
# Run container with necessary Xorg and DRI mounts
docker run -it \
    --env="DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --device=/dev/dri:/dev/dri \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    ros:turtlebot-intel \
    roslaunch my_turtlebot_simulator.launch
xhost -local:root

The environment variables are used to inform GUIs of the display to use, as well as to fix a subtle QT rendering issue. For Nvidia, things look much the same, except for the use of the nvidia-docker plugin to add the needed device and volume arguments:

# Run container with necessary Xorg and GPU mounts
nvidia-docker run -it \
    --env="DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    ros:turtlebot-nvidia \
    roslaunch my_turtlebot_simulator.launch

You can view an example using this method in the previously linked demo video for ros-caffe, or an older GUI demo video18 now made simpler via the nvidia-docker plugin, for qualitative evaluation.

5 Notes

As you take further advantage of the growing Docker ecosystem for your robotics applications, you may find certain methods and third-party tools useful for simplifying or becoming more efficient at common development tasks. Here we'll cover just a few helpful practices and tools most relevant for ROS users.

5.1 Best Practices and Caveats

There are many best practices to consider while using Docker, and as with any new technology or paradigm, we need to know the gotchas. While much is revealed within Docker's own tutorial documentation and helpful posts within the community,19 there are a few subjects that are more pertinent to ROS users than others.
ROS is a relatively large 'stack' compared to other codebases commonly used with Docker, such as smaller lightweight web stacks. If the objective is to distribute and share robotics-based images using ROS, it's worthwhile to be mindful of the size of the images you generate, to be bandwidth considerate. There are many ways to mitigate bloat in an image through careful thought while structuring the Dockerfile. Some of this was described while going over the official ROS Dockerfile, such as always removing temporary files before completion of the layer generated by each Docker command. However, there are a few other caveats to consider concerning how a layer is constructed. One is to never change the permissions of a file inside a Dockerfile unless unavoidable; consider using the entrypoint script to make the changes at runtime if necessary. Although a git/Docker comparison could be made, Docker only notes what files have changed, not necessarily how the files have been modified inside the layer. This causes Docker to replicate/replace the files while creating a new layer, potentially doubling the size if you're modifying large files, or potentially worse, every file. Another way to keep disk size down can be to flatten the image, or certain spans of layers. This however prevents the sharing of intermediate layers among commonly derived images, a method Docker uses to minimize the overall disk usage. Flattening images also only helps in squashing large modifications to image files, but does nothing if the squashed file system is just inherently large. When building Dockerfiles, you'll want to be considerate of the build context, i.e. the parent folder of the Dockerfile itself. For example, it's best to build a Dockerfile from a folder that includes just the files and folders you'd like to ADD or COPY into the image.

18 https://youtu.be/djLKmDMsdxM.
19 https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/.
This is because the docker client will tar/compress the directory (and all subdirectories) where you executed the build and send it to the docker daemon. Although files that are not referenced will not be included in the image, building a Dockerfile from, say, your /root/, /home/ or /tmp/ directory would be unwise, as the amount of unnecessary data sent to the daemon would slow or kill the build. A .dockerignore file could also be used to avoid this side effect. Finally, a docker container should not necessarily be thought of as a complete virtual environment. As opposed to VMs, with their own virtualized kernel and start-up processes, the only process that runs within the container is the one you command. This means there is no init system (e.g. upstart) starting syslog, cron jobs and daemons, or even reaping orphaned zombie processes. This is usually ok, as a container's life cycle is quite short and we normally only want to execute what we specify. However, if you intend to use containers as a more full-fledged system requiring, say, proper signal handling, consider using a minimal init system for Linux containers such as dumb-init.20 For most cases with ROS users, roslaunch does a rather good job signalling child processes and thus serves as a fine anchor for a container's PID 1, and so simply running multiple ROS nodes per container is reasonable. For those more concerned with using custom launch alternatives, a relevant post21 expands on this subject further.

20 https://github.com/Yelp/dumb-init.
21 https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem.

5.2 Transparent Proxy

One task you may find yourself performing frequently while building and tweaking images, especially if debugging, say, minimum dependency sets, is downloading and installing packages.
This is sometimes a painful endeavor, made even more so if your network bandwidth is anything but extraordinary, or your corporation works behind a custom proxy and time is short. One way around this is to leverage Docker's shared networking and utilize a proxy container. Squid-in-a-can22 is one such example of a simple transparent squid proxy within a Docker container. It provides every other Docker container, including containers used during the build process while generating image layers, with a local cache of any frequent HTTP traffic. By simply changing the configuration file, you can leverage any of the more advanced squid proxy features, while avoiding the tedious install and setup of a proper squid server on the various hosts' distributions.

5.3 Docker DNS Resolver

We've shown before how ROS nodes running from separate containers within a common software defined network can communicate utilising domain names given to containers and resolved by Docker's internal DNS. Communicating with the same containers from the host through the default bridge network is also possible, although not as straightforward, as the host lacks similar access to the software defined network's local DNS. We can quickly circumvent this issue as we did with the proxy, by running the required service from another container within the same network. In this case we can use a simple DNS server such as Resolvable23 to help the local host resolve container domain names within the virtual network. One word of caution: one should avoid using domain names that could collide, as in the case of running two instances of the industry networking example on the same Docker engine, e.g. two sets of roscores and nodes on different project networks, say foo and bar.
If we were to then include a Resolvable container in each project, the use of local domain names such as master or talker could then collide for the host, whereas explicit domain naming including the project's network postfix, such as foo_default, would still properly resolve.

22 https://github.com/jpetazzo/squid-in-a-can.
23 https://github.com/gliderlabs/resolvable.

Author Biographies

Ruffin White is a Ph.D.
student in the Contextual Robotics Institute at UC San Diego, under the direction of Dr. Henrik Christensen. Having earned his Masters of Computer Science at the Institute for Robotics and Intelligent Machines, Georgia Tech, he remains an active contributor to ROS and a collaborator with the Open Source Robotics Foundation. His research interests include mobile robotic mapping, with a focus on semantic understanding for SLAM and navigation, as well as advancing repeatable and reproducible research in the field of robotics by improving development tools for robotic software.

Dr. Henrik I. Christensen is a Professor of Computer Science in the Dept. of Computer Science and Engineering, UC San Diego. He is also the director of the Institute for Contextual Robotics. Prior to UC San Diego he was the founding director of the Institute for Robotics and Intelligent Machines (IRIM) at Georgia Institute of Technology (2006–2016). Dr. Christensen does research on systems integration, human-robot interaction, mapping and robot vision. He has published more than 300 contributions across AI, robotics and vision. His research has a strong emphasis on "real problems with real solutions." A problem needs a theoretical model, implementation, evaluation, and translation to the real world.

A ROS Package for Dynamic Bandwidth Management in Multi-robot Systems

Ricardo Emerson Julio and Guilherme Sousa Bastos

Abstract Communication is an important component in robotic systems. Application goals, such as finding a victim or teleoperating a robot in an obstacle-avoiding application, may be affected if there are problems in communication between system agents. The developed package, dynamic_bandwidth_manager (DBM), was designed to maximize bandwidth usage in multi-robot systems. DBM controls the rate at which a node publishes a topic, managing different channels where commands, sensory data and video frames are exchanged.
In this tutorial chapter, we present several important concepts that are crucial to working with robot communication using ROS: (1) how the increasing number of robots impacts communication, (2) the ROS communication layer (topics and services using TCP and UDP), (3) how to analyze the bandwidth consumption of a system developed in ROS, and (4) how to use DBM to manage bandwidth usage. A detailed tutorial on the developed package is presented. It shows how DBM is designed to prioritize communication channels according to environment events, and how the most important topics get more bandwidth from the system. This tutorial was developed under Ubuntu 15.04 and for the ROS Jade version. All presented components are published on our ROS package repository: http://wiki.ros.org/dynamic_bandwidth_manager.

Keywords Multi-robot · Dynamic bandwidth management · Communication

R.E. Julio (B) · G.S. Bastos
System Engineering and Information Technology Institute—IESTI, Federal University of Itajubá—UNIFEI, Av. BPS, 1303, Pinheirinho, Itajubá, MG CEP: 37500-903, Brazil
e-mail: [email protected]
URL: http://www.unifei.edu.br
G.S. Bastos
e-mail: [email protected]
© Springer International Publishing AG 2017
A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_10

1 Introduction

Multi-robot systems can be used for a set of tasks such as rescue operations [1, 2], large-scale explorations [3], and other tasks that can be subdivided between multiple robots [4]. Communication is an important component that merits careful consideration in a multi-robot system. The number of packets transmitted between agents of a system can increase as the number of sensors, actuators, and robots increases [5]. A teleoperation system is a good example that illustrates data transmission between agents.
In that sort of system, a user or an automated control device can control a swarm of mobile robots [6, 7], directly driving the robots' motors or sending targets for the robots. The important issue in this example is that video streams are transmitted to a control device while a system operator remotely controls the robots. Therefore, the number of transmitted packets increases when the number of robots increases or when video quality improves, which may affect system performance in a bandwidth-constrained environment. For this reason, bandwidth is an important component that must be considered. A loss of packets may occur when the number of packets in transmission is greater than the available bandwidth. Thus, the frequency of all sensors should be adjusted so as not to exceed the available bandwidth. The task of adjusting the communication rates can be challenging; in a static solution, the frequencies cannot be adjusted when there is a change in the environment or in the available bandwidth. In such systems it may not be necessary to transmit data from sensors all the time and at the same frequency. Considering the teleoperation example, if the robot is stopped or away from an obstacle, the operator does not need constant updating of the robot camera image. Thus, the frequency of video streaming can be decreased whenever the robot speed decreases or when no obstacles are close to the robot. In other words, the frequency of a sensor can be decreased if there is nothing relevant to the task occurring at that time in the environment [8]. The dynamic_bandwidth_manager (DBM) [9] package was designed to provide a way of controlling the frequency at which a topic publishes data. DBM can be applied to any topic in the system, and the frequencies are calculated based on topic priorities.
It helps developers to create topics with dynamic frequencies that depend on changes in the environment, such as available bandwidth and interesting events of a task (speed, distance to obstacles and so forth). However, how are speed and distance to obstacles related to bandwidth? In a system developed using ROS, sensory data are sent using topics. Usually, these topics publish data with a static frequency calculated from a design parameter, and this frequency does not change when changes occur in the robots' environment. If a robot is stopped in a teleoperation application, it may not be necessary to publish its camera image to a central computer. In a bandwidth-restricted system it may be prohibitive to send data unnecessarily, since other robots get affected. A dynamic solution to this problem is presented in this chapter; topics that send sensor data to other elements may have their frequencies dynamically adjusted by the system. Environment events such as speed and distance to obstacles can be used to set topic priorities and define which of them may have a higher frequency at a given time. As an example, we may consider a scenario with three robots in an application of identifying victims in disaster areas. Each robot moves through the area reading information from the environment, such as camera images. All information is sent to a remote monitoring center, where human operators assist in the victim identification task using the information sent by the robots. The presented scenario is shown in Fig. 1.

Fig. 1 Scenario in an application of identifying victims in disaster areas

In this example, the desired communication rate of camera images for maximum application efficiency is 16 Hz for each robot [10]. In other words, each robot must send data read by the camera 16 times every second.
This communication rate ensures that the human operator can teleoperate the robot through the disaster area avoiding obstacles, and that the monitoring system can predict with greater certainty the presence and location of victims in the area monitored by the robot's camera. In this example, the system is used in an environment with bandwidth restrictions. The network supports sending just 30 messages per second in total (considering the messages of all robots). Sharing bandwidth equally, each robot sends data at a frequency of 10 Hz. This 10 Hz rate allows a user to teleoperate a robot to identify a victim, but with a lower level of accuracy (the higher the frequency, the greater the accuracy and the lower the error in robot teleoperation). Thus, the user can teleoperate a robot, but he is subject to restrictions in the video sent by the robot. This degradation in video quality may lead to collisions with obstacles or failure to identify a victim (Fig. 2).

Fig. 2 Static communication rates in an application of identifying victims in disaster areas

As described above, bandwidth restrictions can impact the effectiveness of a solution. Setting a static frequency of 10 Hz for all robots prevents communication from exceeding the maximum available bandwidth, but it does not allow a robot to find a victim with maximum accuracy even if the other robots are far from that goal. In this case, the system could reduce the sending frequency of the robots that have not yet detected any victim to the minimum acceptable frequency. This allows a higher frequency for the robot that found a victim and now needs to find its exact location. A key contribution of this chapter is the development of the DBM package, through which some concepts about robot communication using ROS will be presented. The problem of using several topics in an environment with bandwidth constraints will be addressed, and a feasible solution for managing topics in order to minimize this problem is discussed.
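The static sharing scheme above is simple arithmetic; a small sketch (an illustrative helper, not part of the DBM package) makes the trade-off explicit:

```python
# Illustrative helper (not part of DBM): split a fixed message budget
# equally among robots, as in the static example above.
def static_rate_per_robot(total_msgs_per_sec, n_robots):
    return total_msgs_per_sec / n_robots

# 30 messages/s shared by 3 robots gives 10 Hz each,
# below the 16 Hz desired for maximum application efficiency.
print(static_rate_per_robot(30, 3))  # -> 10.0
```

No matter how the task evolves, every robot is pinned at 10 Hz; the dynamic scheme described next lifts this limitation by reallocating the same budget according to priorities.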
After a brief discussion of the motivation for this chapter, we will introduce the following topics:

• A summary of the ROS publish-subscribe mechanism, provided as essential background information for understanding the problem;
• A review of how the increasing number of robots (or topics) impacts communication in an environment with bandwidth constraints;
• A simple example of monitoring bandwidth consumption in a ROS-based system;
• A discussion on topic frequency control to maximize bandwidth usage or avoid loss of communication;
• All components of the DBM architecture, with a description of their interactions;
• A case study on how to use the developed package in a teleoperation application using a simulated environment;
• And a discussion about the results.

2 ROS Topics

As discussed in [11], topics are named buses over which nodes exchange messages. Topics have anonymous publish/subscribe semantics, which decouples the production of information from its consumption. In general, nodes are not aware of who they are communicating with. Instead, nodes that are interested in data subscribe to the relevant topic; nodes that generate data publish to the relevant topic. There can be multiple publishers and subscribers to a topic. Each topic is strongly typed by the ROS message type used to publish to it, and nodes can only receive messages with a matching type. The Master does not enforce type consistency among the publishers, but subscribers will not establish message transport unless the types match. Furthermore, all ROS clients check to make sure that an MD5 checksum1 computed from the .msg files matches. This check ensures that the ROS nodes were compiled from consistent code bases. ROS currently supports TCP/IP-based and UDP-based message transports. The TCP/IP-based transport is known as TCPROS and streams message data over persistent TCP/IP connections.
TCPROS is the default transport used in ROS and the only transport that client libraries are required to support. The UDP-based transport, known as UDPROS, is currently supported only in roscpp, and separates messages into UDP packets. ROS nodes negotiate the desired transport at runtime. For example, if a node prefers UDPROS transport but the other node does not support it, the system falls back on TCPROS transport. This negotiation model enables new transports to be added over time as compelling use cases arise. Topics are intended for unidirectional, streaming communication. Nodes that need to perform remote procedure calls (i.e. receive a response to a request) should use services instead. There is also the Parameter Server [12] for maintaining small amounts of state. The ROS Master acts as a nameservice in ROS. It stores topic and service registration information for ROS nodes. Nodes communicate with the Master to report their registration information. As these nodes communicate with the Master, they can receive information about other registered nodes and make connections as appropriate. The Master will also make callbacks to these nodes when this registration information changes, which allows nodes to dynamically create connections as new nodes are run. It is important to make clear that nodes connect to other nodes directly; the Master only provides lookup information, much like a DNS server. Nodes that subscribe to a topic will request connections from nodes that publish that topic, and will establish that connection over an agreed-upon connection protocol. In other words, when a node receives data from a topic, this communication does not pass through the ROS Master.

3 Bandwidth Consumption in Topic Publishing

Publishing topics with large messages, such as camera images, can cause problems in a ROS-based system.
The system performance can be impaired if the amount of information transmitted over the network is larger than the available bandwidth. In this case, loss or delay in the delivery of messages can occur, causing loss of information that can be crucial for the proper functioning of the system. But can we see how much bandwidth a topic is using? And how do large messages overload the network in a ROS-based system? In this section we will show how to use rostopic bw and rostopic hz to display the bandwidth and the publishing rate of a topic, how large messages can overload a system with bandwidth restrictions, and how DBM can help avoid this problem.

1 MD5 (Message-Digest Algorithm 5) is a cryptographic hash function producing a 128-bit (16-byte) hash value, commonly used to verify data integrity.

3.1 Publishing Camera Images in ROS

Every time a message is published on a ROS topic and a subscriber is running on a remote machine, data are transmitted over the network. These data consume part of the bandwidth available to the application and, depending on the size of the messages and the transmission frequency, communication can exceed the available bandwidth, causing delay or loss of information. This behavior can be demonstrated by publishing camera images to other nodes in the system. Camera images are used as an example because they are simple to simulate using a laptop with a webcam and have a significant message size, but other types of message may have the same problem, such as PointCloud, LaserScan, etc. The usb_cam_node interfaces with standard USB cameras (e.g. the Logitech Quickcam) using libusb_cam and publishes images as a ROS message of type sensor_msgs::Image (http://docs.ros.org/api/sensor_msgs/html/msg/Image.html) using the image_transport (http://wiki.ros.org/image_transport) package. In this example, we will use this node to publish camera images. The usb_cam package can easily be installed on an Ubuntu 15.04 distribution using ROS Jade.
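Before measuring anything, it helps to have a ballpark figure in mind: a topic's bandwidth is simply its message size times its publish rate. The sketch below uses a roughly 19 KB compressed image as an assumed message size, consistent with the roughly 570 KB/s at 30 Hz measured with rostopic in the next section:

```python
# Rough estimate of a topic's bandwidth: message size times publish rate.
def topic_bandwidth_kb_s(msg_size_kb, rate_hz):
    return msg_size_kb * rate_hz

# An assumed ~19 KB compressed camera image published at 30 Hz:
print(topic_bandwidth_kb_s(19, 30))  # -> 570 (KB/s)
```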
The most up-to-date information about the usb_cam package can be found on the usb_cam wiki page (http://wiki.ros.org/usb_cam). There are some steps to installing and running usb_cam:

1. Install ROS (follow the latest instructions on the ROS installation page) (http://wiki.ros.org/ROS/Installation).
2. Download the usb_cam package to the catkin src folder (i.e. ~/catkin_ws/src):

$ git clone https://github.com/bosch-ros-pkg/usb_cam ~/catkin_ws/src/usb_cam

3. Build the downloaded package:

$ cd ~/catkin_ws/
$ catkin_make

4. Setup the environment:

$ source ~/catkin_ws/devel/setup.bash

5. Run usb_cam_node:

$ roslaunch usb_cam usb_cam-test.launch

Using the image_view node we can see the camera video published by usb_cam_node. This may be done by running the following command (note that we are subscribing in compressed2 image transport mode). Access the image_view wiki page (http://wiki.ros.org/image_view) for more information about the package.

$ rosrun image_view image_view image:=/usb_cam/image_raw _image_transport:=compressed

3.2 Monitoring Bandwidth Usage in ROS

Monitoring the bandwidth consumed by topics is an important task in robot systems that rely on communication. The rostopic bw tool displays the bandwidth used by a topic, and rostopic hz displays its publishing rate. It is important to note that, as shown on the rostopic documentation page (http://wiki.ros.org/rostopic), the bandwidth reported by rostopic bw is the received bandwidth. If there are network connectivity issues, or rostopic cannot keep up with the publisher, the reported number may be lower than the actual bandwidth. The bandwidth consumption of the compressed camera topic published by usb_cam_node is given by the following commands:

1. Displays the bandwidth used by the camera topic (Fig. 3):

$ rostopic bw /usb_cam/image_raw/compressed

2.
Displays the publishing rate of the camera topic (Fig. 4):

$ rostopic hz /usb_cam/image_raw/compressed

Fig. 3 Result of the rostopic bw command
Fig. 4 Result of the rostopic hz command

As shown in Fig. 3, using a camera resolution of 640 × 480, the mean size of a camera image message is approximately 18 KB (see the "mean" field in the rostopic bw result). The default framerate of usb_cam_node is 30 FPS, i.e. the data is published at a frequency of 30 Hz (as can be seen in Fig. 4). This means that the average bandwidth consumption of the topic /usb_cam/image_raw/compressed is approximately 570 KB/s, as also described in the "average" field of the rostopic bw result.

2 The image_transport package provides transparent support for transporting images in low-bandwidth compressed formats such as PNG and JPEG. Thus, the image without any compression is published, for example, on a topic /usb_cam/image_raw, and the compressed image using PNG or JPEG on a topic /usb_cam/image_raw/compressed. Follow this link http://wiki.ros.org/image_transport for more information about raw and compressed images in ROS.

As we can see, the bandwidth consumption of only one topic publishing compressed camera images with a resolution of 640 × 480 at a frequency of 30 Hz is 570 KB/s. If the number of topics publishing camera images in the system increases, the available bandwidth can be exceeded. Table 1 shows the bandwidth consumption in a system with up to four camera image topics. If the robots communicate via a WiFi network using the 802.11b WiFi standard, the corresponding maximum speed is 11 Mbps, i.e. 1375 KB/s.

Table 1 Bandwidth consumption in a network with more camera image topics

Topics number    Bandwidth consumption (KB/s)
1                570
2                1140
3                1710
4                2280

Fig. 5 Bandwidth consumption and available bandwidth in a network with more camera image topics

If the number of robots publishing camera images increases, there may be a network overload in the system. Figure 5 shows the bandwidth consumption of camera image topics as the number of robots (or topics) in the system increases. As we can see, if the number of robots is greater than 2, the bandwidth usage exceeds the available bandwidth, and this can lead to communication problems. This example shows, in a simple way, how sending information using ROS topics can overload the network. Thus, it is necessary to develop strategies to manage the publication rate of topics in order to avoid this problem.

3.3 Install DBM Package

The step-by-step instructions for installing DBM are shown below. Before you start, the PuLP package must be installed. PuLP is a library for the Python scripting language that enables users to describe mathematical programs [13]. It is used by default_optimizer_node to solve the linear optimization problem described in Sect. 4.2. To install PuLP, follow the instructions on the PuLP installation page (https://pythonhosted.org/PuLP/main/installing_pulp_at_home.html).

1. Download the DBM package to the catkin src folder (i.e. ~/catkin_ws/src):

$ git clone https://github.com/ricardoej/dynamic_bandwidth_manager ~/catkin_ws/src/dynamic_bandwidth_manager

2. Build the downloaded package:

$ cd ~/catkin_ws/
$ catkin_make

3. Setup the environment:

$ source ~/catkin_ws/devel/setup.bash

3.4 Using DBM to Manage Bandwidth Consumption

The dynamic_bandwidth_manager (DBM) [9] package was designed to provide a way of controlling the frequency at which a topic publishes data. It helps developers to create topics with dynamic frequencies that depend on the topic priority at a given time. To show how this package works, we will use the camera images example to manage bandwidth consumption using DBM. The detailed architecture of the package is described in the following sections. After installing DBM, we need a set of topics to be controlled by DBM. Figure 5 shows that 3 topics publishing compressed camera images can exceed the available bandwidth in a system using a network with a maximum speed of 11 Mbps. Thus, in order to test DBM, we will use a system with 3 topics publishing compressed camera images from different machines with a webcam (machines A, B and C) and a network with an available bandwidth of 11 Mbps. The available bandwidth can be configured using parameters in DBM, so in this example we need not worry about the network specifications. A fourth machine (Master) must run the ROS Master and the image_view nodes so we can see the published images. Figure 6 shows how the system should be designed. There are some steps to configure the environment:

1. Run the ROS Master on machine Master:

$ roscore

2. Set up the network following this link: http://wiki.ros.org/ROS/NetworkSetup.
3. Download the usb_cam package to machines A, B and C as described in Sect. 3.1.
4. Edit the file usb_cam-test.launch to remap the image_raw topic name using the machine name (A, B and C). A good explanation of name remapping can be found at http://wiki.ros.org/roslaunch/XML/remap. Use the name /[machine_name]/usb_cam/image_raw.
5. Run usb_cam_node on machines A, B and C:

$ roslaunch usb_cam usb_cam-test.launch

Fig. 6 System design of the DBM example

6.
Run image_view on machine Master for all three topics (run each command in a different terminal):

$ rosrun image_view image_view image:=/machineA/usb_cam/image_raw _image_transport:=compressed
$ rosrun image_view image_view image:=/machineB/usb_cam/image_raw _image_transport:=compressed
$ rosrun image_view image_view image:=/machineC/usb_cam/image_raw _image_transport:=compressed

You should now be able to view the images from the three cameras in each of the image_view windows running on the Master machine. Follow the steps in Sect. 3.1 to check the frequency of the topics and the consumed bandwidth. The values should be approximately as described in Table 1.

NOTE: If you cannot build an environment with 3 different machines, DBM provides a node that publishes messages with a predetermined size. It is important to note that, as this node only publishes simulated messages, you cannot view the images using image_view. Thus, the 3 machines with a webcam can be replaced by running the following command in different terminals:

$ rosrun dynamic_bandwidth_manager fake_message_publisher_node.py topic_name:=/[machine_name]/usb_cam/image_raw _message_size_in_kb:=18 _max_rate:=30

The DBM dbm_bridge_node subscribes to a topic that has to be managed and controls its frequency based on a topic priority defined in the Parameter Server. Run one dbm_bridge_node for each published topic (on machines A, B and C) with the command below:

$ rosrun dynamic_bandwidth_manager dbm_bridge_node.py _topic_name:=/[machine_name]/usb_cam/image_raw _min_frequency:=1 _max_frequency:=30

Here _topic_name is the topic name, _min_frequency is the minimum frequency, and _max_frequency is the maximum frequency at which the topic will be published. We need to configure the available bandwidth to 11 Mbps, as defined in our previous example.
This can be done using a parameter in the Parameter Server. The following sections will further explain all the parameters used in DBM. At this point we need only run the following command:

$ rosparam set /dbm/max_bandwidth_in_mbit 11

NOTE: DBM takes into account only topics that have subscribers running on remote machines. If you are running dbm_bridge_node using only one machine, the following command should be executed:

$ rosparam set /dbm/manage_local_subscribers true

Finally, default_optimizer_node solves the linear optimization problem described in Sect. 4.2, calculating a topic frequency based on topic priority. Run default_optimizer_node with the command:

$ rosrun dynamic_bandwidth_manager default_optimizer_node.py

Running the command rostopic list, we can see that three other topics were created with the name /[machine_name]/usb_cam/image_raw/optimized. These optimized topics publish the same data, but with a frequency managed by DBM. Using the commands rostopic bw and rostopic hz we can see the bandwidth consumption and the frequency of each optimized topic. Table 2 shows these values (approximate). As can be seen, the frequency of each topic is set to 24 Hz in order not to exceed the available bandwidth. The bandwidth consumed by all the optimized topics is about 1330 KB/s. That is, the consumed bandwidth did not exceed the available bandwidth of 1375 KB/s.

Table 2 Bandwidth consumption and frequencies using DBM

Topic       Bandwidth consumption (KB/s)    Frequency (Hz)
machineA    ~443                            24
machineB    ~443                            24
machineC    ~443                            24

Table 3 Bandwidth consumption and frequencies using DBM with changes in priority

Topic       Priority    Bandwidth consumption (KB/s)    Frequency (Hz)
machineA    1           ~570                            30
machineB    1           ~570                            30
machineC    0           ~19                             1

3.5 Changing Topic Priorities

DBM sets the frequencies based on topic priorities: the highest-priority topics get higher frequencies. The priority of a topic can be changed by setting a parameter in the Parameter Server.
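The frequencies reported in Tables 2 and 3 can be approximated with a simplified proportional allocation. The sketch below is illustrative only — DBM actually solves a linear optimization problem via PuLP (Sect. 4.2) — and the 19 KB message size is an assumed figure taken from the roughly 570 KB/s measured at 30 Hz:

```python
# Illustrative sketch of priority-based frequency allocation (NOT the
# actual DBM optimizer, which solves a linear program via PuLP).
def allocate(priorities, budget_kb_s, msg_kb, f_min, f_max):
    # Every topic starts at its minimum frequency...
    freqs = [float(f_min)] * len(priorities)
    # ...and the leftover bandwidth is shared in proportion to priority.
    leftover = budget_kb_s - sum(freqs) * msg_kb
    total_p = sum(priorities)
    if total_p > 0 and leftover > 0:
        for i, p in enumerate(priorities):
            extra_hz = (p / total_p) * leftover / msg_kb
            freqs[i] = min(f_max, f_min + extra_hz)
    return [int(f) for f in freqs]

# Equal priorities: each topic gets ~24 Hz, as in Table 2.
print(allocate([1, 1, 1], budget_kb_s=1375, msg_kb=19, f_min=1, f_max=30))
# Priorities 1, 1, 0: the first two cap at 30 Hz, the last drops
# to the 1 Hz minimum, as in Table 3.
print(allocate([1, 1, 0], budget_kb_s=1375, msg_kb=19, f_min=1, f_max=30))
```

A proportional split like this does not redistribute the surplus left over when a topic hits its maximum frequency, which is one reason DBM formulates the allocation as a proper optimization problem instead.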
Run the following commands to set the priority of the topics on machines A and B to 1, and on machine C to 0:

$ rosparam set /machineA/usb_cam/image_raw/dbm/priority 1
$ rosparam set /machineB/usb_cam/image_raw/dbm/priority 1
$ rosparam set /machineC/usb_cam/image_raw/dbm/priority 0

Table 3 shows the priority, bandwidth consumption and frequency of each optimized topic after changing the topic priorities. As we can see, the frequencies of the topics on machine A and machine B are set to the maximum frequency configured for the topics, 30 Hz. The priority of the topic on machine C is 0, so its frequency is set to the minimum value, 1 Hz. This ensures that the highest-priority topics get higher frequencies, while topics with lower priorities have their frequencies adjusted to low values. A video demonstration of this example can be found in [14].

This section showed an example of how DBM controls the frequency at which a topic publishes data in order to prevent the system from exceeding the available bandwidth. The following sections present the package architecture in more detail and show how priorities may be based on environment events.

4 Event-Based Bandwidth Optimization

In this section we explore a strategy to optimize the bandwidth consumption of ROS topics. We begin with a definition of topic priority based on environment events. This priority is then applied to a linear optimization problem in order to define the best frequency for each topic managed by the package.

4.1 Event-Based Topic Priority

Topic frequencies may be dynamically controlled by the environment state and the available bandwidth. This approach is built upon the assumption that the communication rate of a topic depends on how important the topic is at a given time. In the application of identifying victims described above, the system may provide a frequency for each robot.
A robot that identifies a nearby victim must send its best camera images to enable the user to identify the exact location of the victim and to operate the robot while avoiding obstacles. Thus, the system decreases the frequency of the other robots and increases the frequency of the robots that need it most at that moment. Thereby, the victim's position is found more accurately and the rescue team does not waste time. In this case, bandwidth optimization is made according to the requested requirements, considering the environment events that are most important to the task execution. Figure 7 shows the frequencies of each robot when Robot 2 finds a victim. At this time, Robot 1 and Robot 3 do not have any evidence of victims in their monitoring areas. Therefore, they can have their frequencies adjusted to lower values.

In the DBM package, this behavior was implemented by assigning a priority to each topic based on environment events. Thus, the priority can be modeled as a function of environment events and represents how important a topic is for the application. These events are modeled depending on the application where the package is being used. Using teleoperation as an example, when an operator remotely controls a set of robots based on images sent by a camera, we can define the robot speed and the distance to obstacles as environment events. Thus, if the robot speed increases and the distance to obstacles decreases, the priority of the topic that represents the camera sensor increases. The priority p_i of the communication channel c_i is calculated as a function of the environment events e_i that affect this communication channel, such as speed, distance to obstacles, time to collision, and so forth:

p_i = f(e_1, e_2, \ldots, e_n).  (1)

Fig. 7 Dynamic communication rates in an application of identifying victims in disaster areas

The result of that function is normalized to values in the range [0 : 1], as shown in Eq.
(2), where 1 is the highest priority, representing that the communication channel must use the highest frequency within the bounds established by the application and the available bandwidth. Thus, p_i at a given time can be defined by [9]:

p_i = \frac{p_i w_i}{\sum_{k=1}^{c} p_k w_k},  (2)

where c is the number of communication channels and w_i is the message size of channel i. This normalization ensures that the message size is taken into account in the calculation of the frequencies. Without this adjustment, the optimizer would not assign frequencies in proportion to the priorities, generating results inconsistent with the package goal.

4.2 Bandwidth Management Based on Topic Priority

As described in the previous section, each topic has a priority defined by environment events such as speed and distance to obstacles. But how does the system calculate a frequency for the managed topics based on their priorities? The DBM package implements a default strategy using a linear optimization problem, as described in this section and in [9]. Other works explore this problem, as in [8, 15]. The total bandwidth consumed by all managed channels is constrained by the total bandwidth available to the system, as described in Eq. (3):

\sum_{i=1}^{n} w_i f_i \leq B_{max},  (3)

where
• w_i is the message size sent by channel i,
• f_i is the sending frequency of channel i,
• B_{max} is the total bandwidth available to the system,
• n is the number of managed communication channels.

All frequencies f_i are bounded by a minimum and a maximum value, f_i^{min} and f_i^{max}. Channel frequencies may be maximized in order to make full use of the bandwidth available to the application. The priority p_i defines which channels are more important to the application at a given time and need a larger share of the bandwidth. This is achieved by adjusting the bounds f_i^{min} and f_i^{max} according to the value of p_i. If a channel has priority p_i = 1, the bounds of frequency f_i should be calculated close to the maximum (f_i^{max}).
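The normalization of Eq. (2) can be reproduced in a few lines of plain Python (function and variable names are illustrative only, not part of the DBM API):

```python
def normalize_priorities(priorities, message_sizes):
    """Eq. (2): weight each raw priority by its channel's message size
    and normalize so that the weighted priorities sum to 1."""
    weighted = [p * w for p, w in zip(priorities, message_sizes)]
    total = sum(weighted)
    if total == 0:
        # No channel has any priority; nothing to normalize.
        return [0.0 for _ in priorities]
    return [pw / total for pw in weighted]

# Two channels with equal raw priority but different message sizes:
# the channel with larger messages keeps a larger normalized share.
print(normalize_priorities([1.0, 1.0, 0.0], [84000, 21000, 21000]))
# -> [0.8, 0.2, 0.0]
```

With equal message sizes the weights cancel out; with unequal sizes the heavier channel's priority is scaled up, which is exactly the adjustment the text describes.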
In other words, the new value of the minimum frequency f_i^{min} is a function of p_i. Thus, f_i^{min} can be defined by:

f_i^{min} = (f_i^{max} - f_i^{min}) p_i + f_i^{min}.  (4)

Equation (4) defines a minimum value for the frequency f_i at a given time based on p_i. If the priority p_i = 0, the frequency is bounded by the minimum and maximum values defined for the channel. As the priority increases, the lower bound on the frequency moves toward the maximum, making the system grant more bandwidth to the channel. The frequencies of the managed channels are then found by solving a linear optimization problem:

maximize    \sum_{i=1}^{n} w_i f_i
subject to  \sum_{i=1}^{n} w_i f_i \leq B_{max},
            f_i \geq f_i^{min},  i = 1, \ldots, n,
            f_i \leq f_i^{max},  i = 1, \ldots, n.  (5)

However, in some cases there is no solution to the problem, and the system informs the designer. Typically, in this case, the maximum bandwidth available to the system should be increased.

5 DBM Package Description

This section provides a brief explanation of the DBM package. We describe the basic architecture and give an overview of the main classes and nodes, followed by a flow chart with the basic operation of the package. All DBM source code can be found in [16]. A dynamic frequency strategy for ROS topics is discussed and some examples are shown using the main classes. Finally, some issues on how to extend the package are discussed.

5.1 Package Architecture

The package architecture is divided into four libraries (DBMPublisher, DBMSubscriber, DBMOptimizer, and DBMRate) and two nodes (default_optimizer_node and dbm_bridge_node):

• DBMPublisher is a class that inherits from ros::Publisher, receives a minimum and a maximum frequency, and creates a managed topic;
• DBMSubscriber is used to subscribe to a managed topic created by DBMPublisher;
• DBMOptimizer enables the creation of optimization strategies;
• DBMRate helps to run loops with a dynamic rate stored in the Parameter Server;
• default_optimizer_node solves the linear optimization problem of Sect. 4.2, calculating a topic frequency based on topic priority;
• dbm_bridge_node allows the use of DBM in existing projects without changing their source code.

Fig. 8 Creating a topic using DBMPublisher
Fig. 9 Subscribing to a topic using DBMSubscriber

A node that publishes messages using a managed frequency creates a topic using DBMPublisher and informs the frequency limits (minimum and maximum values). All system parameters are created in the Parameter Server when the package is creating a managed topic. Figure 8 shows the behavior of the library when creating a topic using DBMPublisher.

In order to perform dynamic bandwidth management, the system should take into consideration that the message length can change, and should not treat it as a static value. Whenever a message is sent by a managed channel, DBMPublisher checks whether the size has changed and updates the corresponding parameter for the channel. Thus, the size of the messages sent through the communication channels can change dynamically.

Another node subscribes to the topic using the DBMSubscriber class. If the node is running on the same machine where the topic is published, then DBMSubscriber subscribes to a full-rate topic.³ Otherwise, the topic with a managed sending frequency is used (Fig. 9).

The default_optimizer_node, or any other node implementing an optimization strategy using DBMOptimizer, runs the optimization algorithm at a rate configured in the parameter /dbm/optimization_rate_in_seconds and updates the topic frequencies in the Parameter Server.

³ A full-rate topic is a topic with no optimization, which publishes messages at the maximum frequency configured for the topic.

Fig. 10 Update of frequencies by the optimizer
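The allocation performed by default_optimizer_node (Sect. 4.2) can be sketched without ROS or PuLP. The routine below is not the package implementation: it is a greedy solver that, for this box-constrained problem, reaches one optimal solution of the LP, breaking ties in favor of high-priority channels. All names are illustrative:

```python
def allocate_frequencies(channels, b_max):
    """Sketch of the Sect. 4.2 allocation: raise each channel's minimum
    frequency with its priority (Eq. 4), then spend the remaining
    bandwidth on the highest-priority channels first.

    channels: list of dicts with keys w (message size), f_min, f_max,
    and p (priority in [0, 1]).  Returns a list of frequencies, or
    None if the problem is infeasible (Sect. 4.2)."""
    # Eq. (4): the adjusted lower bound moves toward f_max as p grows.
    mins = [(c['f_max'] - c['f_min']) * c['p'] + c['f_min'] for c in channels]
    freqs = mins[:]
    used = sum(c['w'] * f for c, f in zip(channels, freqs))
    if used > b_max:
        return None  # no feasible solution; b_max should be increased
    # Greedy: give the spare bandwidth to channels in priority order.
    for i in sorted(range(len(channels)), key=lambda i: -channels[i]['p']):
        c = channels[i]
        spare = b_max - used
        extra = min(c['f_max'] - freqs[i], spare / c['w'])
        freqs[i] += extra
        used += extra * c['w']
    return freqs
```

For instance, two channels with 84 and 21 KB messages, bounds 1-16 Hz, priorities 1 and 0, and a 1375 KB/s budget yield roughly the Step 3 values of Table 5 in Sect. 6.3 (16 Hz and about 1.48 Hz).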
Any node that publishes a managed topic is notified and updates its communication rate (Fig. 10). Figure 11 shows a summary of the package operation as described above.

Fig. 11 Summary of the package basic operation

5.2 Dynamic Frequency in a ROS Topic

In ROS, communication channels are represented by topics; through them, sensor data are sent to other system elements. The code below shows the creation of a rescue_node that publishes a topic called camera/image, using the ROS class ros::Rate to control the topic frequency:

Listing 1.1 Using ros::Rate in the camera/image topic

#!/usr/bin/env python
# license removed for brevity
import rospy
from sensor_msgs.msg import Image

def get_rescue_info():
    # Returns the camera image message
    pass

def run():
    pub = rospy.Publisher('/camera/image', Image, queue_size=10)
    rospy.init_node('rescue_node', anonymous=True)
    rate = rospy.Rate(15)  # Static frequency of 15 Hz
    while not rospy.is_shutdown():
        message = get_rescue_info()
        pub.publish(message)
        rate.sleep()

if __name__ == '__main__':
    try:
        run()
    except rospy.ROSInterruptException:
        pass

In ROS, topic frequencies can be controlled by the ros::Rate class. However, this class performs static rate control: the rate has to be chosen at development time. Thus, an application developed with the default ros::Rate class does not allow a dynamic topic frequency. In other words, when using ros::Rate, the frequency configured for the topic is hard-coded and there is no way to change it at execution time. To create a dynamic management system for topic frequencies in ROS, it is necessary to implement other strategies to control topic frequencies. DBM provides the DBMRate class, which maintains a dynamic rate (stored in the Parameter Server) for a loop.
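The underlying idea can be demonstrated without ROS: a rate object that re-reads its period from a shared parameter store on every cycle. In this pure-Python sketch a plain dictionary stands in for the Parameter Server, and none of the names belong to the DBM API:

```python
import time

# Stand-in for the Parameter Server: a dictionary shared between the
# sleeping loop and whatever code adjusts the frequency at runtime.
params = {'/camera/image/dbm/frequency/current_value': 15.0}

class DynamicRate:
    """Hold a loop at the frequency currently stored under `key`,
    re-reading the value on every sleep() call."""
    def __init__(self, store, key):
        self.store = store
        self.key = key
        self.last = time.monotonic()

    def sleep(self):
        period = 1.0 / self.store[self.key]  # re-read each cycle
        deadline = self.last + period
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()

rate = DynamicRate(params, '/camera/image/dbm/frequency/current_value')
# Changing the stored value mid-loop changes the loop frequency:
params['/camera/image/dbm/frequency/current_value'] = 30.0
```

DBMRate achieves the same effect against the real Parameter Server, with the caching refinements discussed next.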
DBMRate was built by inheriting all the features provided by ros::Rate; thus, any fix or improvement implemented in the base class is automatically incorporated. The name of the parameter that contains the frequency value is supplied during object construction, and the parameter is created in the Parameter Server. Every time the sleep() method is invoked, the frequency value is updated and the loop delay time is adjusted. Figure 12 shows a schema with the basic operation of the DBMRate class.

Fig. 12 DBMRate basic schema

The main issue with this approach is the number of calls to the Parameter Server. To solve this problem, frequency values are stored using Cached Parameters, which provide a local cache of the parameter. Using these versions, the Parameter Server is informed that the node would like to be notified when the parameter changes, which prevents the node from having to re-look up the value with the Parameter Server on subsequent calls. Cached parameters give a significant speed increase (after the first call), but should be used sparingly to avoid overloading the master. Cached parameters are also currently less reliable in the case of intermittent connection problems between node and master [11].

Listing 1.2 shows the rescue_node created in Listing 1.1, now using the DBMRate class. The topic frequency will be adjusted by changing a parameter stored in the Parameter Server named /camera/image/dbm/frequency/current_value.

Listing 1.2 Using DBMRate in the rescue_info topic

#!/usr/bin/env python
# license removed for brevity
import rospy
from dynamic_bandwidth_manager import DBMRate
from sensor_msgs.msg import Image

def get_rescue_info():
    # Returns the camera image message
    pass

def run():
    pub = rospy.Publisher('/camera/image', Image, queue_size=10)
    rospy.init_node('rescue_node', anonymous=True)
    # Creates a dynamic rate with key name '/camera/image',
    # minimum frequency of 10 Hz, maximum frequency of 24 Hz
    # and default frequency of 24 Hz
    rate = DBMRate('/camera/image', 10, 24, 24)
    while not rospy.is_shutdown():
        message = get_rescue_info()
        pub.publish(message)
        rate.sleep()

if __name__ == '__main__':
    try:
        run()
    except rospy.ROSInterruptException:
        pass

5.3 Creating a New Managed Topic with DBM

In ROS, topics are created using the ros::Publisher class. This class registers the topic with the Master and provides the publish method, responsible for publishing messages to the topic. However, as can be seen in Listing 1.1, the control of the topic rate is done manually in a loop. DBMPublisher allows topic messages to be published at a dynamic frequency. This class uses DBMRate and receives a minimum and a maximum frequency, plus a method returning the message to be sent. Thus, messages are published automatically in accordance with the frequency parameter stored in the Parameter Server.

Listing 1.3 creates the same node shown in Listing 1.2, but using the DBMPublisher class. The main difference is that there is no longer a need for a loop to send topic messages. The start() method receives a function returning a message to be published by the topic (get_rescue_info in this example) and, internally, the DBMPublisher class publishes messages at the configured frequency.

Listing 1.3 Using DBMPublisher in the rescue_info topic

#!/usr/bin/env python
# license removed for brevity
import rospy
import dynamic_bandwidth_manager
from sensor_msgs.msg import Image

def get_rescue_info():
    # Returns the camera image message
    pass

def run():
    # Minimum frequency of 10 Hz and maximum frequency of 24 Hz
    pub = dynamic_bandwidth_manager.DBMPublisher(
        '/camera/image', Image, 10, 24)
    rospy.init_node('rescue_node', anonymous=True)
    # Starts message publishing with a frequency stored
    # in the Parameter Server
    pub.start(get_rescue_info)

if __name__ == '__main__':
    try:
        run()
    except rospy.ROSInterruptException:
        pass

5.4 Using DBM in an Existing Package

Section 5.3 showed how to create new managed topics with the DBMPublisher class. But how can DBM be used in existing packages without changing their source code? To address this issue, DBM provides the node dbm_bridge_node. With this node it is possible to control the topic frequencies of existing packages without changes to their source code. The dbm_bridge_node subscribes to a full-rate topic that has to be managed and publishes the received data on a managed-rate topic, [topic_name]/optimized. This optimized topic works in the same way as a topic created by DBMPublisher. An explanation of how to use dbm_bridge_node is presented in Sect. 3.4. DBM does not make any changes to the full-rate topic; dbm_bridge_node only republishes the data received from the full-rate topic at a managed rate.

5.5 Implementing Other Optimization Strategies

To make the library independent of the optimization algorithm used, a module that deals with bandwidth optimization was created. DBMOptimizer is a ROS library that helps to create more complex strategies for the frequency optimization problem. This module runs the optimization algorithm at the interval defined by /dbm/optimization_rate_in_seconds and stores the resulting frequencies in the parameter [topic_name]/dbm/frequency/current_value.
This last parameter is used by DBMPublisher to recover the topic frequency. Thus, the optimization algorithms used by DBM can be replaced without changes to the library: a researcher can implement new optimization strategies independently and use them to calculate the frequencies of the managed topics. This work implements default_optimizer_node using DBMOptimizer, which performs the frequency optimization according to Sect. 4.2. The following code shows an optimization strategy using DBMOptimizer:

Listing 1.4 Optimization strategy using DBMOptimizer

#!/usr/bin/env python
import rospy
import dynamic_bandwidth_manager
import pulp
import numpy as np

def optimize(managed_topics):
    # Runs the optimization and returns a dictionary
    # {topic_name: frequency} (the managed_topics parameter
    # holds a list with all managed topics)
    pass

if __name__ == '__main__':
    try:
        rospy.init_node('default_optimizer', anonymous=True)
        optimizer = dynamic_bandwidth_manager.DBMOptimizer(optimize)
        optimizer.start()
    except rospy.ROSInterruptException:
        pass

In this example, a new optimization algorithm is created using DBMOptimizer. The method optimize(managed_topics) implements an optimization strategy for the topic frequencies. It receives as a parameter a list with all topic names managed by DBM and returns a dictionary {topic_name: frequency} with the calculated frequencies. The method is executed automatically by DBM and the frequencies are updated in the Parameter Server.

5.6 Local Topics Management

DBM adjusts topic frequencies at runtime using environment events. It allows bandwidth management and assigns more bandwidth to the most important topics at a given moment. However, the question is: how should topics that only send messages to nodes running on the same machine be managed?
In such cases, the topic does not have any impact on bandwidth utilization and should be ignored by the optimization algorithm. In order to address this problem, DBMOptimizer decides which topics should be managed by the system at a given time by checking whether there are external nodes communicating with the topic. If there is no external node registered on the topic, it is not treated as a managed topic and has its frequency set to the maximum value.

Another important issue arises when nodes running on different machines are registered on the same topic and at least one of them is running on the machine where the topic is being published. For example, node 1 and node 2 are running on machines A and B, respectively, and both are subscribed to the /camera topic. This topic is being published by node 3, which is also running on machine B. Figure 13 illustrates this example.

Fig. 13 Problem with local topics
Fig. 14 Managing remote and local topics

Node 2 receives the /camera information; however, it is not under bandwidth restrictions, since it is running on the same machine where the topic is being published. Thus, the full-rate sensor stream should still be available for local processing/logging. To address this issue, DBM creates two topics for each managed communication channel: a full-rate topic ([topic_name]) and an optimized-rate topic ([topic_name]/optimized). The decision on which topic to subscribe to is implemented by DBMSubscriber. If a node is running on the same machine where the topic is being published, it subscribes to the full-rate topic. Otherwise, when the topic is being published from a remote machine, the node subscribes to the managed topic. Figure 14 shows a scheme illustrating this behavior.

5.7 System Parameters

System parameters are a set of parameters used to improve package customization, allowing DBM to adapt to new applications without the need to change its source code.
The parameters are stored in the Parameter Server and are shared between nodes. The system parameters are described below:

• /dbm/topics: lists the names of all topics that have nodes running on remote machines; it is updated by DBMOptimizer every time the optimization algorithm runs;
• [topic_name]/dbm/frequency/current_value: current frequency of [topic_name];
• [topic_name]/dbm/frequency/min: minimum frequency for the [topic_name] topic;
• [topic_name]/dbm/frequency/max: maximum frequency for the [topic_name] topic;
• [topic_name]/dbm/priority: current priority of the [topic_name] topic;
• [topic_name]/dbm/message_size_in_bytes: message size of the [topic_name] topic;
• /dbm/max_bandwidth_in_mbit: total bandwidth of the system;
• /dbm/max_bandwidth_utilization: percentage of the bandwidth available to the application (values between [0 : 100]);
• /dbm/optimization_rate_in_seconds: the rate at which the optimization algorithm is executed.

In an application there may be messages being transmitted on the network that are not managed by DBM. Services and other unmanaged topics can be used, as well as other types of communication between system elements. Examples of unmanaged communications are task allocation to the robots, commands, or any other feature that depends on the network. In such cases, the system bandwidth defined by /dbm/max_bandwidth_in_mbit should not be fully utilized by the managed topics, and a portion of it should be reserved for the unmanaged communications. This can be done with the parameter /dbm/max_bandwidth_utilization, ensuring that only part of the total bandwidth is used in the calculation of topic frequencies.

6 Experimental Validation

In this section, as described in [9], we discuss a teleoperation application with dynamic bandwidth management using DBM.
In a teleoperation application, a user or an automated control device can control a swarm of mobile robots [6, 7], directly driving the robots' motors or sending targets to the robots. In this example, a user sends targets through commands to the robot (turn left, go ahead, and so forth) while viewing the camera image on a remote computer. The main goal is obstacle avoidance. The targets' message size is negligible for the application and does not impact bandwidth utilization; thus, to simplify the problem, the commands sent by the operator to the robots are disregarded in this example. The important issue here is that a video stream is transmitted to a control device while a system operator remotely controls the robots. The example was developed in a simulated environment running on machines with webcams. Teleoperation is a reasonable case study because it has well-defined elements with a clear instance of how communication may depend on the environment. Teleoperation has proved useful in many problems involving robots [17].

Setoodeh [18] describes a conventional teleoperation system with five distinct elements: human operator, master device, controller and communication channel, slave robot, and the environment. The human operator uses the master device to manipulate the environment through the slave robot. Communication and controllers coordinate the operation using communication channels (Fig. 15).

Fig. 15 Teleoperation application

In our application, two robots (R1 and R2) are controlled by a human operator using a workstation representing the master device. Communication channels are used to send position and other commands from master to slave and to feed visual information back from slave to master. Images from the robot cameras are sent to the master device, where the human operator is controlling the robots. The operator controls the robots by manipulating their velocity and direction.
In order to do so, the operator must receive visual feedback, that is, sufficient information to distinguish the obstacles in the environment. Chen et al. [10] demonstrated that people have difficulty maintaining spatial orientation in a remote environment under reduced bandwidth. If the rate of image transmission decreases, the operator may not be able to avoid obstacles. If the rate increases to the bandwidth limit, commands sent to the robot may get lost (or arrive late) because of packet loss in the network.

6.1 Communication Channels

The communication channel between the human operator and the robot is essential for an effective perception of the remote environment [19]. The quality of the video feeds on which a teleoperator relies for remote perception may be degraded, and the operator's performance in distance and size estimation may be compromised, under low bandwidth [20]. Chen et al. [10] studied common forms of video degradation caused by low bandwidth, which include a reduced frame rate (frames per second, or fps).

Table 4 Library system settings

camera/dbm/frequency/min          | 1 Hz
camera/dbm/frequency/max          | 16 Hz
camera/dbm/message_size_in_bytes  | 84000; 21000
/dbm/max_bandwidth_in_mbit        | 11 Mbps
/dbm/max_bandwidth_utilization    | 100%
/dbm/optimization_rate_in_seconds | 1 s

Chen et al. [10] show that the minimum video frame rate to avoid degraded video is 10 Hz; higher frame rates, such as 16 Hz, are suggested for applications such as navigation. So, in this work, we create a communication channel for the camera image (topic camera) with a minimum frequency of 1 Hz (representing cases where the operator does not need to manipulate the robot due to the stability of the environment) and a maximum frequency of 16 Hz. The imaging resolution of the robot cameras is assumed to be 640 × 480 for R1 and 160 × 120 for R2, which implies that each frame is of size 84 and 21 KBytes, respectively [15]. The application uses an available bandwidth of 11 Mbps.
This represents a data transfer rate of 1375 KBps (Table 4).

6.2 Environment Events

Section 4 describes the event-based priority, where a priority is calculated for each channel based on the environment events that affect the channel's importance to the system. In our teleoperation application, distance to obstacles and speed are the environment events monitored in order to calculate the priority of the camera image topic: the operator needs more visual feedback when driving at a higher speed or close to obstacles. Mansour et al. [15] define the maximum distance detected by the sensors on the robots as 200 cm, and the maximum speed as 50 cm/s. In this simulation, we define the same parameters in order to get realistic values in the simulated environment. Thus, the priority based on distance to obstacles and speed is defined by the functions:

t_c = \frac{distance}{speed},  (6)

p_{camera} = \begin{cases} 1, & \text{if } t_c < 3 \\ 0, & \text{if } t_c > 20 \\ \frac{20 - t_c}{17}, & \text{otherwise.} \end{cases}  (7)

Equation (7) defines the priority as a function of the expected time before collision, as defined by Eq. (6). There are other ways to calculate the priority and the function describing the environment events for the same application; they are modeled depending on where the library is being used.

6.3 Bandwidth Management

The default_optimizer_node solves the linear optimization problem every second, as set in the parameter /dbm/optimization_rate_in_seconds. We compare the results of the suggested algorithm with a fixed-rate algorithm. The static algorithm divides the available bandwidth among the robots in proportion to the size of their messages. Thus, with an available bandwidth of 1375 KBps (11 Mbps), R1 gets 1100 KBps and R2 gets 275 KBps. Respecting the bandwidth limits, the camera topics send messages at a frequency of 13 Hz. In order to evaluate the proposed bandwidth management algorithm, we present some results from the simulation using the dynamic bandwidth management library in Table 5 and Fig.
16. The system prioritizes the communication channels by increasing their frequency, keeping the total bandwidth use close to the maximum available (11 Mbps) at all simulation times (Table 6).

Step 1 shows how the library sets a greater frequency for the most important communication channels while ensuring maximum use of the bandwidth available to the system. Robot R1 has an estimated collision time of 20 s, and robot R2 has a greater priority because its estimated collision time is 10 s. The library assigned the maximum frequency to robot R2 and, in order to use the maximum available bandwidth, assigned 12.37 Hz to robot R1.

Table 5 Frequencies in the teleoperation application (priority values follow from Eq. (7))

Step | R1: tc (s) | R1: pcamera | R1: f (Hz) | R2: tc (s) | R2: pcamera | R2: f (Hz)
1    | 20         | 0           | 12.37      | 10         | 0.59        | 16
2    | 30         | 0           | 12.37      | 20         | 0           | 16
3    | 6          | 0.82        | 16         | 100        | 0           | 1.48
4    | 10         | 0.59        | 16         | ∞          | 0           | 1.48
5    | 5          | 0.88        | 16         | ∞          | 0           | 1.48
6    | 15         | 0.29        | 13         | 15         | 0.29        | 13.48
7    | 2.5        | 1           | 16         | ∞          | 0           | 1.48
8    | 36         | 0           | 12.37      | 10         | 0.59        | 16
9    | 40         | 0           | 12.37      | 7          | 0.76        | 16
10   | 50         | 0           | 12.37      | 3          | 1           | 16

Fig. 16 Priorities in the teleoperation application

Table 6 Bandwidth used by all communication channels (percentages derived from frequency × message size over 1375 KB/s)

Step | R1: tc (s) | R1: Bandwidth (%) | R2: tc (s) | R2: Bandwidth (%)
1    | 20         | 75.56             | 10         | 24.44
2    | 30         | 75.56             | 20         | 24.44
3    | 6          | 97.75             | 100        | 2.25
4    | 10         | 97.75             | ∞          | 2.25
5    | 5          | 97.75             | ∞          | 2.25
6    | 15         | 79.42             | 15         | 20.58
7    | 2.5        | 97.75             | ∞          | 2.25
8    | 36         | 75.56             | 10         | 24.44
9    | 40         | 75.56             | 7          | 24.44
10   | 50         | 75.56             | 3          | 24.44

Step 2 shows a simulation with P1 = 0, P2 = 0, f1 = 12.37 Hz, and f2 = 16 Hz. This is a case where the frequencies are not proportional to the priorities, but the result is correct, since the objective of the proposed algorithm is to maximize the bandwidth used by all communication channels (Eq. (5)). However, considering the application scenario, a division of the frequencies proportional to the priorities might be more suitable. A simple modification of the proposed algorithm can divide the bandwidth among the communication channels when the priorities are equal to zero; the frequencies in Step 2 would then be recalculated to f1 = 13 Hz and f2 = 13 Hz.
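The priority column of Table 5 follows directly from Eqs. (6) and (7); a quick pure-Python check (rounded to two decimals, as in the table; the distance/speed pairs are illustrative values chosen to reproduce the tabulated tc):

```python
def camera_priority(distance, speed):
    """Eqs. (6)-(7): camera priority from estimated time to collision."""
    tc = distance / speed          # Eq. (6)
    if tc < 3:                     # imminent collision: full priority
        return 1.0
    if tc > 20:                    # no obstacle nearby: zero priority
        return 0.0
    return (20.0 - tc) / 17.0      # linear in between

# Steps 1, 3 and 9 of Table 5:
print(round(camera_priority(200, 10), 2))   # tc = 20 s -> 0.0
print(round(camera_priority(60, 10), 2))    # tc = 6 s  -> 0.82
print(round(camera_priority(70, 10), 2))    # tc = 7 s  -> 0.76
```

The boundary cases behave as the table shows: tc = 2.5 s (Step 7) saturates at priority 1, and any tc above 20 s drops to 0.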
Steps 3, 4, 5 and 7 show cases where R1 is close to an obstacle and robot R2 is stopped. Thus, the frequency of robot R2 can be reduced, since it is not being operated and there is no risk of collision, and robot R1 can be operated with maximum visual feedback (f1 = 16 Hz, f2 = 1.48 Hz). Step 6 shows a case where the priorities are equal and the allocated frequencies are divided between the robots. Steps 8, 9 and 10 show cases where the priority of robot R2 is greater than that of robot R1. In these cases, the system assigned a greater frequency to robot R2, allowing it to be operated with higher-quality information.

The total bandwidth used by the communication channels at all simulation times is equal to the total bandwidth available to the system (11 Mbps). Raising the bandwidth use to the maximum prevents the waste of resources without exceeding the bandwidth limits. In the static algorithm, the robot frequencies are 13 Hz at all simulation times. This value is greater than the minimum defined by [10], but always lower than the frequency of 16 Hz suggested for this type of application. The best results are achieved by DBM: only the robots with priority P = 0 obtained a frequency of less than 13 Hz and, at most simulation times, the robot with the highest priority obtained a frequency of 16 Hz, providing better visual feedback and helping the operator to make decisions and avoid obstacles.

7 Conclusion

This chapter presented a dynamic bandwidth management library for multi-robot systems. The system prioritizes communication channels according to environment events and offers greater bandwidth to the most important channels. A case study on how to use the library was presented, and a comparison between a static algorithm and the proposed algorithm was shown. A video demonstration of the application running can be found on the DBM wiki page (http://wiki.ros.org/dynamic_bandwidth_manager) [21] and online at https://youtu.be/9nRitwtnBj8.
Some of the upcoming challenges will be to improve the real-time capability of the system. The proposed library runs the bandwidth management algorithm at a fixed rate. Rapid changes in the environment, or in the situations in which the robots find themselves, may therefore require a faster response to ensure the system is not applying bandwidth limits that are out-of-date with respect to the robots' situations. As developed in this work, the library assumes that the total bandwidth is known beforehand. This can degrade the system performance in practical settings, for instance with wireless links, whose bandwidth depends on the physical location of the nodes and the obstacles present in the environment. Experiments in real scenarios (and not simply in a simulated one, as done in this chapter) would therefore be needed to validate the approximation of considering the bandwidth known beforehand.

References

1. Casper, Jennifer, and Robin R. Murphy. 2003. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 33 (3): 367–385.
2. Kitano, Hiroaki. 2000. RoboCup Rescue: A grand challenge for multi-agent systems. In Proceedings of the fourth international conference on multiagent systems, 2000, 5–12. New York: IEEE.
3. Rekleitis, Ioannis, Gregory Dudek, and Evangelos Milios. 2001. Multi-robot collaboration for robust exploration. Annals of Mathematics and Artificial Intelligence 31 (1–4): 7–40.
4. Lima, Pedro U., and Luis M. Custodio. 2005. Multi-robot systems. In Innovations in robot mobility and control, 1–64. Heidelberg: Springer.
5. Balch, Tucker, and Ronald C. Arkin. 1994. Communication in reactive multiagent robotic systems. Autonomous Robots 1 (1): 27–52. Dordrecht: Kluwer Academic Publishers.
6. Fong, Terrence, Charles Thorpe, and Charles Baur. 2003. Multi-robot remote driving with collaborative control.
IEEE Transactions on Industrial Electronics 50 (4): 699–704.
7. Suzuki, Tsuyoshi, et al. 1996. Teleoperation of multiple robots through the Internet. In 5th IEEE international workshop on robot and human communication, 1996, 84–89. New York: IEEE.
8. Mansour, Chadi, et al. 2011. Event-based dynamic bandwidth management for teleoperation. In 2011 IEEE international conference on robotics and biomimetics (ROBIO), 229–233. New York: IEEE.
9. Julio, Ricardo E., and Guilherme S. Bastos. 2015. Dynamic bandwidth management library for multi-robot systems. In 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS), 2585–2590. New York: IEEE.
10. Chen, Jessie Y.C., Ellen C. Haas, and Michael J. Barnes. 2007. Human performance issues and user interface design for teleoperated robots. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 37 (6): 1231–1245.
11. Wiki ROS. http://wiki.ros.org/.
12. ROS Parameter Server. http://wiki.ros.org/Parameter%20Server.
13. Mitchell, Stuart, Michael O'Sullivan, and Iain Dunning. 2011. PuLP: A linear programming toolkit for Python. The University of Auckland, Auckland, New Zealand. http://www.optimization-online.org/DB_FILE/2011/09/3178.pdf.
14. DBM Video Demonstration. https://youtu.be/9nRitwtnBj8.
15. Mansour, Chadi, et al. 2012. Dynamic bandwidth management for teleoperation of collaborative robots. In 2012 IEEE international conference on robotics and biomimetics (ROBIO), 1861–1866. New York: IEEE.
16. DBM Source Code. https://github.com/ricardoej/dynamic_bandwidth_manager.
17. Sheridan, Thomas B. 1992. Telerobotics, automation, and human supervisory control. Cambridge: MIT Press.
18. Sirouspour, Shahin, and Peyman Setoodeh. 2005. Multi-operator/multi-robot teleoperation: An adaptive nonlinear control approach. In 2005 IEEE/RSJ international conference on intelligent robots and systems (IROS 2005), 1576–1581. New York: IEEE.
19.
French, Jon, Thomas G. Ghirardelli, and Jennifer Swoboda. 2003. The effect of bandwidth on operator control of an unmanned ground vehicle. In The interservice/industry training, simulation and education conference (I/ITSEC), Vol. 2003. NTSA.
20. Van Erp, Jan B.F., and Pieter Padmos. 2003. Image parameters for driving with indirect viewing systems. Ergonomics 46: 1471–1499.
21. DBM Wiki. http://wiki.ros.org/dynamic_bandwidth_manager.

Ricardo Emerson Julio studied Computer Science at the Federal University of Lavras (UFLA). He did his M.Sc. in Science and Computing Technology in the System Engineering and Information Technology Institute (IESTI), Federal University of Itajuba (UNIFEI). He is currently a PhD student in Electrical Engineering at UNIFEI. His research focuses on multi-agent systems, robotics, communication and ROS. He is an expert in software development and programming, with 8 years of industrial experience in the mining sector.

Guilherme Sousa Bastos studied Electrical Engineering at the Federal University of Itajuba (UNIFEI), did his M.Sc. in Electrical Engineering at UNIFEI, and his PhD in Electronic and Computation Engineering at the Aeronautics Institute of Technology (ITA), with part of the doctorate done at the Australian Centre for Field Robotics (ACFR). He is currently an associate professor at UNIFEI and coordinator of Computer Science and Technology. He has experience in Electrical Engineering and Automation of Electrical and Industrial Processes, working on the following subjects: electrical hydro plants, mining automation, optimization, system integration and modeling, decision making, autonomous robotics, machine learning, discrete event systems, and thermography.
An Autonomous Companion UAV for the SpaceBot Cup Competition 2015

Christopher-Eyk Hrabia, Martin Berger, Axel Hessler, Stephan Wypler, Jan Brehmer, Simon Matern and Sahin Albayrak

Abstract In this use case chapter, we summarize our experience during the development of an autonomous UAV for the German DLR SpaceBot Cup robot competition. The autarkic UAV is designed as a companion robot for a ground robot, supporting it with fast environment exploration and object localisation. On the basis of ROS Indigo we employed, extended and developed several ROS packages to build the intelligence of the UAV, letting it fly autonomously and act meaningfully as an explorer that discloses the environment map and locates the target objects. Besides presenting our experiences and explaining our design decisions, the chapter includes detailed descriptions of our hardware and software system as well as further references that provide a foundation for developing one's own autonomous UAV resolving complex tasks using ROS.

C.-E. Hrabia (B) · M. Berger · A. Hessler · J. Brehmer · S. Albayrak
DAI-Labor, Technische Universität Berlin, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
e-mail: [email protected]
URL: https://www.dai-labor.de/
S. Wypler · S. Matern
Technische Universität Berlin, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
e-mail: [email protected]

© Springer International Publishing AG 2017
A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_11
A special focus is given to navigation with SLAM and visual odometry, object localisation, collision avoidance, exploration, and high-level planning and decision making. Extended and developed packages are available for download; see the footnotes in the respective sections of the chapter.

Keywords Unmanned aerial vehicle · Autonomous systems · Exploration · Simultaneous localisation and mapping · Decision making · Planning · Object localisation · Collision avoidance

1 Introduction

In the German national competition SpaceBot Cup 2015, autonomous robot systems were challenged to find objects in an artificial indoor environment simulating space exploration. Two of these objects had to be collected, carried to a third object, and assembled together to build a device which had to be turned on in order to complete the task.1 In the 2015 event the ground rover of team SEAR (Small Exploration Assistant Rover) from the Institute of Aeronautics and Astronautics of the Technische Universität Berlin [1] was supplemented by an autonomous unmanned aerial vehicle (UAV) developed by the Distributed Artificial Intelligence Lab (DAI). The developed ground rover features a manipulator to perform all grasping tasks and is assisted by the UAV in exploration and object localisation. Hence, mapping the unknown environment and locating the objects are the main tasks of the accompanying UAV. The aerial system can take advantage of being faster than a ground-based system and having fewer issues with rough and dangerous terrain. For this reason, the concept was to provide an autonomous UAV that rapidly explores the environment, gathering as much information as possible and providing it to the rover as a foundation for efficient path and mission planning. The multi-rotor UAV is based on a commercial assembly kit including a low-level flight controller that is extended with additional sensors and a higher-level computation platform. The system comprises the following features:

• Higher-level position control and path planning
• Monocular simultaneous localisation and mapping based on ORB-SLAM [2]
• Vision-based ground odometry based on an extended PX4Flow sensor [3]

1 Complete task description in German at http://www.dlr.de/rd/Portaldata/28/Resources/dokumente/rr/AufgabenbeschreibungSpaceBotCup2015.pdf.
All intelligence and advanced software modules are used and developed within the Robot Operating System (ROS) environment. The UAV executes its mission in a completely autonomous fashion and does not rely on remote processing at all. The system comprises the following features: • Higher-level position control and path planning • Monocular simultaneous localisation and mapping based on ORB-SLAM [2] • Vision-based ground odometry based on an extended PX4Flow sensor [3] 1 Complete task description in German at http://www.dlr.de/rd/Portaldata/28/Resources/ dokumente/rr/AufgabenbeschreibungSpaceBotCup2015.pdf. An Autonomous Companion UAV for the SpaceBot Cup Competition 2015 Object detection using BLOB detection or convolutional neural networks [4] Collision avoidance with sonar sensors and potential fields algorithm [5] UAV attribute focused exploration module Behaviour-based planner for decision making and mission control The above components have been realized by students and researchers of the DAI-Lab in teaching courses, as bachelor or master theses or part of PhD theses. Moreover, a regular development exchange has been carried out with the corresponding students and researchers of the Aeronautics and Astronautics department that worked on the ground rover on similar challenges. In this chapter we are focusing on the UAV companion robot and present our work in a case study with detailed descriptions of our system parts, components and solutions including hardware and software. Furthermore, we describe our observations and challenges encountered during development and testing of the system, especially issues encountered with erratic components, failing hardware and components not performing as good as expected. Providing a reusable foundation for other researchers in order to support our goal of fostering further research in the domain of autonomous unmanned aerial vehicles. 
We explicitly include information and references that are sometimes excluded from publications, such as source code, the exact amount of autonomy and the extent of remote processing (or lack of it). The remainder of the chapter is structured as follows. Section 2 discusses related work in the context of the developed systems as well as the information provided by other authors. Sections 3 and 4 provide information about our hardware and software architecture. Here, the software architecture gives a general view of our system and should be read before going into the details of the following, more specific sections. Section 5 explains our navigation subsystem, consisting of SLAM (simultaneous localisation and mapping) and visual odometry. This section also covers the evaluation and selection process of a suitable SLAM package. The following Sect. 6 elaborates two alternative object detection and localisation approaches we have developed for the SpaceBot Cup competition. Section 7 presents background about our collision avoidance system. Afterwards, Sect. 8 goes into detail about several possible exploration strategies we have evaluated in order to determine the most suitable one for our use case. In Sect. 9 we briefly introduce a new hybrid decision making and planning package that allows for goal-driven, behaviour-based control of robots. This is followed by a short section about the collaboration and communication of our UAV with the ground robot of our university team. After discussing the general results and limitations of our approach in Sect. 11, we summarise our chapter and highlight future tasks in the final Sect. 12.

2 Related Work

In the field of aerial robotics, several platforms and software frameworks have been proposed for different purposes during the last years.
In this section we focus on platform and architecture descriptions for UAVs that have to autonomously solve similar tasks regarding exploration, object detection, mapping and localisation, and touch on the essential components for autonomous flight and mission execution. With the exception of [6], where ROS is only used as an interface, the works included here use ROS as a framework for their software implementation. Tomic et al. [7, 8] describe a software and hardware framework for the autonomous execution of urban search and rescue missions. Their descriptions are quite comprehensive, giving a detailed view of their hardware and software architecture, even from different vantage points. They feature a fully autonomous UAV using a popular copter base, the Pelican,2 and describe specific implemented features like navigation by keyframes, topological maps and visual odometry, as well as giving background and recommendations on the most crucial aspects of autonomous flight, such as sensor synchronisation, registration, localisation and much more. One highlight is that they employ stereo vision, sped up using an FPGA, so they do not rely solely on (2D/3D) LiDAR as many other solutions do. It also appears that all relevant design decisions are sufficiently motivated, many details needed to reproduce certain experiments are given, the mathematical background for key features is provided, and surprising or important results are stressed throughout. It is one of the most complete works we could find; it only does not explicitly point to locations where the described modules can be acquired from, especially the custom ones, and so-called mission-dependent modules (e.g. domain feature recognition) are not explained, since they were presumably deemed to be out of focus. A description of a more high-end hardware architecture is presented in [9].
The authors describe their hardware and software architecture for autonomous exploration using a sophisticated copter platform and high-end (for a mobile system) components. They give an overview of their hardware architecture, mention the most important components and give some experimental results on localisation and odometry. The software architecture is not described in depth and the developed packages are not linked in the publication. Loianno et al. [6] describe a system that is comprised of consumer grade products: a commercially available multicopter platform and a smartphone that handles all the necessary high level computation for mission planning, state estimation and navigation using the built-in sensors of the copter and the phone. The software is implemented as an application ("App") for the Android smartphone. According to the publication, ROS is only used as a transport (interface) variant for sending commands and receiving data. During the first instance of the SpaceBot Cup in 2013, at least two teams also deployed a UAV in an attempt to aid their earthbound vehicles in their mission. Although the systems are not described in detail and may not be fully autonomous, they are included here since they were deployed for the same task. For 2013, the Chemnitz University Robotics Team [10] used the commercially available quadrotor platform Parrot AR Drone 2.0. Without hardware modifications this platform can only do limited on-board processing, so it was basically used as a flying camera, hovering above at a fixed position while streaming a video feed to a remote station on site that ran the mission logic and controlled the craft's position. Although the copter relies on a working communication link, the link was designed to be purely optional, and thus not mission critical if it is unstable or failing. The image was also streamed back to the ground station for mission monitoring and intervention.

2 Manufactured by Ascending Technologies.
Another contestant, Team Spacebot 21, also deployed a Parrot AR Drone during the contest in 2013 and apparently prepared a hexacopter for the event of 2015, but no description of their systems could be found. In summary, the level of detail of the descriptions varies greatly. This may be due to the space constraints imposed by the publication format. However, none of the surveyed publications provide a full software stack in ROS that could easily be integrated and adapted for use on one's own copter platform(s). While, for example, Loianno et al. [6] describe a nice complete system comprised of consumer grade products, to our disappointment the authors neither explicitly link to the advertised App (or even a demo), nor was it easily discoverable on the net. Although there are some high-quality, detailed descriptions of hardware and software platforms for autonomous UAVs available, often either some crucial implementation details are missing or the code for software modules that are described on a high level is not available.

3 Hardware Description

The main objective of our hardware concept was having a modular prototyping platform with enough payload for an advanced computation system as well as a couple of sensors, together with a reasonable flight time. Furthermore, we tried to reuse existing hardware in the lab, to limit expenses, and to keep the system handleable in indoor environments. The system and its general setup are illustrated in Fig. 1. Since our focus is not on mechanical or electrical engineering, we based our efforts on a commercially available hexacopter self-assembly kit from MikroKopter (MK Hexa XL3). The modular design simplified the required extensions. We removed the battery cage and added 4 levels below the centre platform of the kit, separated with brass threaded hex spacers.
It was required to replace the original battery cage, made from rubber spacers and thin carbon fibre plates, to have a more solid and robust base for the additional payload below. The level platforms were made from fiberglass for simple plain levels or 3D-printed for special mounts. In the case of the computation level, the platform is directly shaped by the board itself. The first layer below the centre is still the battery mount, the second layer holds the computation platform, the third layer holds an additional IMU sensor (Sparkfun Razor IMU 9DOF) and the lowest layer contains all optical sensors. The original flight controller (Flight-Ctrl V2.5) is used for basic low-level control, balancing the UAV and managing the motor speed controllers (ESC). The flight controller is remotely controlled by a more powerful computation device. Since our research focus is not on low-level control, it is also suitable to use other available flight controllers like the 3DR Pixhawk. The only requirement is support for external control of pitch, roll, yaw and thrust, respectively relative horizontal and vertical motion, through an API. During our research for a small-scale and powerful computation board we came across the Intel NUC platform, which is actually made for consumer desktops or media centres. The NUC mainboards are small (less than 10 cm square), comparably lightweight, and provide decent computation power together with plenty of extension ports. Furthermore, they can be directly powered by a 4s LiPo battery, as they support power supplies providing 12–19 V. The most powerful NUC version available at that time was the D54250WYB, which provides a dual-core CPU (Intel Core i5-4250U) with up to 2.6 GHz in turbo mode. The currently available NUC generation has even better performance and also includes a version with an Intel Core i7 CPU.

3 http://wiki.mikrokopter.de/en/HexaKopter.

Fig. 1 The UAV hardware, setup and live in the competition environment
We equipped our NUC board with 16 GB of RAM, a 60 GB mSATA SSD and a mini-PCI-E Intel Dual Band Wireless-AC 7260 card. Due to the required autonomous navigation capabilities, not relying on any external tracking system, the UAV has two visual sensors for autonomous navigation on the lowest layer. The first sensor provides the input for the SLAM component and is a monocular industrial-grade camera with global shutter and high frame rate (Matrix Vision mvBlueFox GC200w) with a 2 mm fisheye lens (Lensagon BF2M2020). The camera is attached to a tiltable mount allowing for a static 45° camera angle. The second sensor looks towards the ground and is a Pixhawk PX4FLOW module that provides optical odometry measurements [3]. The PX4FLOW sensor is accompanied by an additional external ultrasonic range sensor (Devantech SRF08) pointing towards the ground. The additional range sensor and IMU are required to compensate for the weak performance of the PX4Flow; further details are given in Sect. 5.4. The competition scenario was not expected to have many obstacles at our intended flight altitude of ~2 m, in fact almost exclusively pillars supporting the roof structure, so we decided to use three lightweight ultrasonic range sensors (Devantech SRF02) for collision avoidance. The sensors are attached to the protection cage of the UAV on the extension axes of the three forward-facing arms. All ultrasonic range sensors are connected to the NUC USB bus with a Devantech USB-I2C adapter. More demanding scenarios may require the use of more such sensors, or it may be necessary to replace or augment them with a 2D laser range scanner to get more detailed information about surrounding obstacles. The protection cage of the UAV is built from kite rubber connectors, fiberglass rods for the circular structure and carbon fiber rods for the inner structure. Our basic system without a battery and the protection cage weighs 2030 g.
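The three forward-facing sonars later feed the potential fields collision avoidance mentioned in the feature list (and detailed in Sect. 7). As a rough, hypothetical illustration of the principle only, each sensor reporting an obstacle closer than some threshold can contribute a repulsive velocity along its mounting direction; the mounting angles, threshold and gain below are made-up values, not the UAV's actual configuration.

```python
import math

def repulsive_velocity(ranges, angles, d0=1.5, gain=0.5):
    """Potential-field sketch: each range reading below d0 (metres) pushes
    the UAV away from the obstacle along the sensor's mounting angle
    (radians, 0 = straight ahead). Returns a (vx, vy) velocity correction."""
    vx = vy = 0.0
    for dist, ang in zip(ranges, angles):
        if dist < d0:
            mag = gain * (1.0 / dist - 1.0 / d0)  # grows as the obstacle nears
            vx -= mag * math.cos(ang)
            vy -= mag * math.sin(ang)
    return vx, vy

# Three sensors on the forward arms (assumed ±60° and straight ahead);
# an obstacle 0.5 m straight ahead yields a backwards push.
v = repulsive_velocity([3.0, 0.5, 3.0], [-math.pi / 3, 0.0, math.pi / 3])
```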
The protection cage adds 280 g and our 6600 mAh 4s LiPo battery 710 g, resulting in 3020 g in total, while providing approximately 15 min of autonomous flight time.

4 Software Architecture

The abstracted major components of our system and the directed information flow are visualised in Fig. 2. The shown abstract components consist of several sub-modules. The UAV perceives its environment through several sensor components that handle the low-level communication with external hardware modules and take care of general post-processing. Most of the sensor data and its processing is related to autonomous localisation and navigation. For this reason the navigation component fuses all the available data after it has been further processed in two distinct sub-systems for SLAM and optical-flow-based odometry. The resulting map and location information, as well as some of the sensor data, is also used in the object localisation, high-level behaviour and decision making/planning components. The object localisation tries to recognize and locate the target objects in the competition environment. The decision making/planning component selects the currently running high-level behaviour based on all available information from the navigation, object localisation, the flight controller and the range sensors, as well as from the constraints of the behaviours themselves. Depending on the executed behaviour, such as collision avoidance, exploration or emergency landing, the navigation component is instructed with new target positions. Hence, the navigation component monitors the current state and controls the low-level flight controller with new target values for pitch, roll, yaw and thrust in order to reach the desired position. The actual motor control runs on the flight controller, while the motor speed is controlled by external speed controllers.

Fig. 2 The abstract UAV software architecture
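In a strongly simplified form, the behaviour selection just described boils down to running the highest-priority behaviour whose precondition currently holds. The sketch below is only such a simplification with illustrative behaviour names and conditions; the actual RHB Planner (Sect. 9) combines behaviour preconditions and effects in a more elaborate hybrid scheme.

```python
def select_behaviour(state, behaviours):
    """Pick the highest-priority behaviour whose precondition holds in the
    current state, or None if no behaviour is applicable."""
    runnable = [b for b in behaviours if b["precondition"](state)]
    if not runnable:
        return None
    return max(runnable, key=lambda b: b["priority"])["name"]

# Illustrative behaviour set, roughly mirroring the behaviours named above.
behaviours = [
    {"name": "emergency_landing", "priority": 3,
     "precondition": lambda s: s["battery_low"]},
    {"name": "collision_avoidance", "priority": 2,
     "precondition": lambda s: s["obstacle_near"]},
    {"name": "exploration", "priority": 1,
     "precondition": lambda s: True},
]
```

With this arbitration, exploration runs by default, collision avoidance pre-empts it when a range sensor reports a nearby obstacle, and an emergency landing overrides everything.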
The abstract architecture is further detailed in the ROS architecture, see Fig. 3, showing the components including the relevant packages and the mainly used topics, services and actions. This architecture follows a common ROS approach of providing continuously generated information (e.g. sensor data) as topics, direct commands and requests (e.g. setting new targets) as services, and long running requests (e.g. path following) as actions, using the ROS actionlib. All shown components and packages correspond to one ROS node. If nothing else is stated, we used the ROS modules of version Indigo. Most information related to the core challenges of autonomous navigation and object localisation is exchanged and maintained using the tf package.

Fig. 3 The UAS ROS software architecture. Visualized are packages, components and service, action and topic interfaces. Each package or subcomponent corresponds to a node instance in the running system, except the behaviour subcomponents inside the uav_behaviour package

All higher-level behaviour is controlled by our decision making and planning framework RHB Planner (ROS Hybrid Behaviour Planner). The individual behaviours for start, land, emergency landing, collision avoidance and exploration use provided base classes of this framework and run on one node. The RHB Planner also supports a distributed node architecture for the behaviours, but we did not take advantage of it because of the computationally simple nature of most behaviours. Nevertheless, the generalised implementations of the more complex tasks of collision avoidance and exploration are separated into their own packages with corresponding nodes. The external monitoring of the UAV is realised with rqt and its common plugins for visualisation and interaction, as well as some custom plugins for controlling the position_controller and the decision making and planning component. The provided controls merely enable external intervention by a human but are strictly optional.
For accessing the monocular camera we use a ROS package from the GRASP Laboratory.4 Independent of the ROS software stack running on the x86 main computing platform is the software of the MikroKopter FlightCtrl and the PX4Flow module. Both embedded systems are interfaced through RS232 USB converters and their ROS wrappers in the packages mikrokopter and px-ros-pkg. Sensor data is pushed by the external systems after an initial request and collected by the ROS wrappers. The MikroKopter FlightCtrl firmware already contains features for providing sensor information through the serial interface as well as receiving pitch, roll, yaw and thrust commands, amongst other external control commands. We extended the existing firmware in several ways in order to enable compilation in a Linux environment and a sensor debug stream without time-limited subscription, and added a direct setter for thrust and new remote commands for arming/disarming, calibration and beeping. Our fork of versions V2.00a and V2.12a is available online.5 Due to several problems with the PX4Flow sensor, see Sect. 5.1 for more details, we forked the firmware6 as well as the corresponding ROS package.7 We changed the PX4Flow firmware in order to get more raw data from the optical flow calculation, disable the sonar and process additional MAVLink messages provided by the PX4Flow. Furthermore, the firmware fork also includes our adjusted settings as well as an alternative sonar filtering. For the communication with the ultrasonic range sensors of type SRF02 and SRF08 and for the interaction with the MikroKopter flight controller we developed new ROS packages.8

4 https://github.com/KumarRobotics/bluefox2.git and https://github.com/KumarRobotics/camera_base.git.
5 https://github.com/cehberlin/MikroKopterFlightController.
6 https://github.com/cehberlin/Flow.
7 https://github.com/cehberlin/px-ros-pkg.
8 https://github.com/DAInamite/srf_serial and https://github.com/DAInamite/mikrokopter_node.
More details of the developed or extended modules and related ROS packages for navigation, autonomous behaviour (including decision making and planning) and object detection are given in the following sections.

5 Navigation

Autonomous navigation in an unknown, unstructured environment without any external localisation system like GPS is one of the most challenging problems for UAVs, especially if only onboard resources are available to the flying system. Our approach combines two methods for localisation: we use the PX4Flow sensor module as a vision-based odometry, and a monocular SLAM for additional localisation information as well as for creating a map. The advantage of this approach is that the odometry information, calculated from the downward-looking camera's optical flow and fused with the data from a gyroscope and an ultrasonic range sensor for scaling, is available at a higher frame rate and without an initialisation period, but is prone to drift in the long term, whereas the SLAM provides more accurate information but needs time for initialisation and can lose tracking. Hence, the odometry provides a backup system in case SLAM loses tracking. Moreover, using an external device (PX4Flow) for the odometry image processing has the advantage of reducing the computational load on the main computation system. An alternative solution could directly use one or more additional cameras for optical flow and odometry estimation without a special sensor such as the PX4Flow. This could be done, for instance, by applying libviso2 or fovis through the available ROS wrappers viso29 and fovis_ros.10 However, such an approach would generate more processing load on the main system. In our configuration with the PX4Flow and SLAM we achieved an update rate of ~40 Hz for the integrated odometry localisation and ~30 Hz for the SLAM localisation, while the map is updated at ~10 Hz.
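The described fallback relation between the two localisation sources can be sketched as follows. This is an illustrative 1-D reimplementation of the idea with made-up names, not code from our navigation component, which fuses full 6-D poses:

```python
class PoseFuser:
    """Prefer SLAM poses while tracking is valid; during a SLAM outage,
    continue from the last SLAM-corrected pose using the drift-prone but
    always-available integrated odometry. 1-D for brevity."""

    def __init__(self):
        self.offset = 0.0   # SLAM frame minus odometry frame at the last fix
        self.pose = 0.0

    def update(self, odom_x, slam_x=None):
        if slam_x is not None:          # SLAM tracking valid: trust it
            self.offset = slam_x - odom_x
            self.pose = slam_x
        else:                           # tracking lost: dead-reckon instead
            self.pose = odom_x + self.offset
        return self.pose
```

When SLAM re-initialises after an outage, the next valid fix simply re-anchors the stored offset, which corresponds to the recovery behaviour described above.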
Next we describe the required adjustments to the visual odometry sensor PX4FLOW in order to get suitable velocity data, followed by a section about the used SLAM library and our extensions; after that we explain how we fused the localisation information from both methods, and finally how we controlled the position of the UAV.

5.1 Odometry

Contrary to expectations, the PX4FLOW did not provide the promised performance and had to be modified in several ways to generate reasonable odometry estimates in our test environment. The implemented modifications are explained in the following. In fact, we received very unreliable and imprecise odometry measurements during our empirical tests on the UAV. To improve the performance, we first replaced the original 16 mm (tele-) lens with a 6 mm (normal-) lens to get a better optical flow performance close to the ground, which is especially a problem during take-off and landing, and to support faster movements. Second, we disabled the included ultrasonic range sensor and added another external ultrasonic range sensor (Devantech SRF08) with a wider beam and more robust measurements, because of heavy noise and unexpected peak errors with the original one. In consequence, we modified the PX4FLOW firmware (see Sect. 4 for references) to provide all the information required to calculate metric velocities externally. It would also have been possible to integrate the alternative sensor on the PX4Flow board itself, but due to time constraints and for better control of the whole process, we decided to move this computation, together with an alternative filtering (low-pass and median filter) of the range sensor and the fusion with an alternative IMU, to the main computation board.

9 http://wiki.ros.org/viso2.
10 http://wiki.ros.org/fovis_ros.
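The post-processing moved to the main board can be sketched as follows: the raw sonar height passes a median filter (to reject the peak errors mentioned above) and a first-order low-pass, and the gyro-compensated angular flow rates are scaled by the filtered height to obtain metric velocities. The class, the window size and the filter constant are illustrative assumptions, not our actual node:

```python
from collections import deque

class FlowOdometry:
    """Sketch of the external PX4Flow post-processing: median-filter the
    sonar height to reject sporadic peak errors, smooth it with a
    first-order low-pass, and scale the angular flow rates (rad/s) by the
    filtered height to get metric ground velocities."""

    def __init__(self, window=5, alpha=0.2):
        self.samples = deque(maxlen=window)
        self.alpha = alpha
        self.height = None

    def filter_height(self, raw):
        self.samples.append(raw)
        med = sorted(self.samples)[len(self.samples) // 2]
        if self.height is None:
            self.height = med
        else:
            self.height += self.alpha * (med - self.height)
        return self.height

    def metric_velocity(self, flow_x, flow_y, raw_height):
        h = self.filter_height(raw_height)
        return flow_x * h, flow_y * h   # m/s in the ground plane
```

A single outlier reading (e.g. a sonar peak error) is discarded by the median stage before it can disturb the height estimate or the velocity scaling.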
This additional IMU board (Sparkfun Razor IMU 9DOF) was required because the PX4Flow did not provide a valid absolute orientation after integration, which is caused by the very simple filtering mechanism of the only included gyroscope. It was not possible to replace it with the sensors of the MikroKopter flight controller either, as that board does not include a magnetometer and would thus not have been able to provide accurate orientation data. Instead, the additional IMU board enabled us to use a 3D accelerometer, gyroscope and magnetometer for orientation estimation. The resulting odometry information, that is, velocity estimates for all six dimensions (x, y, z, pitch, roll, yaw) and the distance to ground, is integrated over time into a relative position and published as transforms (tf) by the optical_flow_to_tf node of the position_controller package, see also Fig. 3 in Sect. 4. The coordinate frame origin is given by the starting position of the system.

5.2 Localisation and Mapping

One of the most challenging problems in developing an autonomous UAV that does not rely on any external tracking system or computation resources is finding an appropriate SLAM algorithm that can be executed on the very limited onboard computation resources together with all other system components. Since our focus is not on further advancing the state of the art in SLAM research, we evaluated existing ROS solutions in order to find a suitable one that we could use as a foundation for our own extensions. In doing so we especially focused on fast and reliable initialisation and robust localisation with good failure recovery (recovery after lost tracking) in a dynamic environment with changing light conditions and sparse features. All of that is favoured over the mapping capabilities. This is motivated by our approach of enabling autonomous navigation in 6D (translation and orientation) in the first place, while the created map is only considered in the second place.
The created map has minor priority, since prior knowledge of the competition area would also allow for simple exploration based on the area size, starting point and safety offsets. We evaluated the different algorithms with recorded sensor data from simulations as well as from real experiments where we manually estimated the ground truth. All experiments were executed with best knowledge from the provided documentation and with calibrated sensors. In contrast to many existing solutions, see also Sect. 2, we could not rely on a 2D laser scanner as an input sensor, because the unstructured competition environment has no surrounding walls nearby. Using a 3D laser scanner was not possible due to the financial limitations of our project. Furthermore, we determined that laser scanners have problems detecting the black molleton fabric that was used to delimit the competition area. In consequence, the border obstacles could only be detected from a distance of at most 1 m, independent of the actual maximum range of the sensor. Hence, our initial idea was to use an ASUS Xtion RGBD sensor together with RGBDSLAM V2 [11] instead of a laser scanner or RGB camera. This was based on the selection and positive experience of the rover sub-team in the first execution of the competition in 2013. In this context we also evaluated the alternative SLAM package RTAB-MAP [12]. Both packages allow configuring different feature detection and feature matching algorithms. In comparison to RGBDSLAM V2, RTAB-MAP supports several sub-maps that are created on re-initialisation after lost tracking. Such sub-maps allow for a stepwise recovery and are fused by the algorithm later on. RGBDSLAM V2 would require getting back to the latest valid position and orientation for recovery. This is a clear disadvantage, especially for an always moving aerial system.
However, in our empirical tests the pure localisation and mapping performance of both algorithms was similar after tuning the configuration appropriately. Unfortunately, we found that the required OpenNI 2 driver, together with the corresponding ROS package for accessing the raw data of the RGBD sensor, generates a very high load on our system. In fact, just receiving the raw unprocessed RGBD data from the sensor in ROS caused 47.5% load on our two-core system; in comparison, the later selected RGB camera creates just 7% CPU load. We did some experiments with different configurations and disabled preprocessing, but were not able to reduce the load significantly. Due to the high load it was not possible to run any of the existing RGBD-based SLAM solutions with an appropriate frame rate and without quickly losing tracking during motion. Though an RGBD SLAM solution would have been suitable for our indoor scenario, it would limit the portability of the system to other applications in outdoor environments, since RGBD sensors are strongly affected by sunlight. In consequence we tested other available SLAM solutions that are able to operate with RGB cameras as an alternative sensor. The RGB sensor also has the advantage of a higher detection range. Here, we especially looked at monocular approaches, since we expected less load if only one image per frame has to be processed. Table 1 summarises general properties and our experiences with the different algorithms. The direct gradient-based method used in LSD SLAM [13] has the advantage of generating denser maps.

Table 1 Comparison matrix of monocular SLAM packages.
The numbers indicate the ranked position as a qualitative comparison from 1 to 3, 1 being best. The compared criteria are position quality, orientation quality, system load and map quality; the map representations are a feature point cloud for ORB SLAM [2], a semi-dense depth map for LSD SLAM [13] and a sparse point cloud for SVO SLAM [14]. In contrast to the denser maps of LSD SLAM, the indirect ORB SLAM [2] and the semi-direct (feature-based combined with visual odometry) SVO SLAM [14] only provide very coarse point-cloud maps built from detected features. However, both LSD SLAM and SVO SLAM had difficulties getting a valid initialisation and frequently lost tracking, resulting in bad position and orientation estimations. In contrast, ORB SLAM is able to initialise quickly, small movements in hovering position are enough, and it holds the tracking robustly, while recovering very fast once it is lost in situations with few features or very fast movements. Therefore, we selected the ORB_SLAM package [2], which provided stable and fast localisation on our system, resulting in update rates of ∼30 Hz for the SLAM localisation and ∼10 Hz for the mapping. We empirically determined the following ORB SLAM configuration that differs from the provided defaults: 1000 features per image, a scale factor of 1.2 between levels in the scale pyramid, a FAST threshold of 10, enabling the Harris score and the motion model. Especially switching from the FAST score to the Harris score improved the performance in environments with sparse features and monotonic textures. Even though the package provided a good foundation, it was missing several features. For this reason we developed some extensions, which are available online in a forked repository (https://github.com/cehberlin/ORB_SLAM); the original package is available as well (https://github.com/raulmur/ORB_SLAM). We incorporated an additional topic for state information, disabled the processing of topics if they are not subscribed (especially useful for several debug topics), improved the general memory management (the original includes several memory leaks) and integrated a module for extended map generation.
The last extension allows exporting octree-based maps and occupancy maps through topics and services. The octree representation is calculated from the feature point cloud of the internal ORB SLAM map representation. The occupancy map is created from the octree-based map by projecting all voxels within a height range (a slice through the map) into a plane. In our case it was sufficient to use static limits to eliminate all points from ceiling and floor. However, this extension has potential for many optimisations and extensions; for instance, all calculations could benefit from caching and reusing formerly created maps instead of recalculating entire maps, and a more sophisticated solution for detecting floor and ceiling would simplify reuse in other scenarios. Additionally, an extension that creates new maps on lost tracking and fuses them later, as supported by RTAB-MAP, would increase the robustness and applicability of the algorithm. Furthermore, we plan to move to the new version 2 of ORB SLAM (https://github.com/raulmur/ORB_SLAM2). The new version also supports stereo camera setups, which may be manageable from the load perspective on the current NUC generation.

Fig. 4 The SLAM and odometry execution process flow

5.3 SLAM and Odometry

The SLAM module used and the position calculated from the integrated odometry data provide two distinct coordinate frames. The slam_odom_manager listens to the tf-transformations created by both navigation submodules and creates a fused transformation from them. This also requires valid static transformations from the sensors to the base frame of the robot. In this context, the slam_odom_manager also monitors the state of the modules in order to react to a changed SLAM state, e.g.
successful initialisation, lost tracking or recovered tracking, by recording transformations between the distinct coordinate frames and switching or adjusting the currently used master coordinate frame that forms the base for the resulting output transformation of the slam_odom_manager. This transformation is the reference frame for the position_controller. Due to the fact that we are only using a monocular SLAM, the resulting position and map are not scaled in real-world units. This problem can be addressed by determining the scale based on available metric sensor data, as shown by Engel et al. [15]. In our solution the challenge is addressed by using the odometry data, which is scaled based on the absolute ground distance from the ultrasonic range sensor, as a reference in an initial stage of the mission in order to get a suitable conversion from the SLAM coordinate frame to the real world. The execution flow of the slam_odom_manager process is illustrated in Fig. 4. Here, the shown parallel process execution is repeated from the dotted line once SLAM has recovered. The general idea is to get robust short-term localisation from odometry, while getting long-term localisation from SLAM. This is based on the experimentally verified assumption that the visual odometry of the PX4Flow has an increasing error over time due to repeated integration, but no initialisation stage. The SLAM, in contrast, is considered less robust in the short term, since it can lose tracking and needs an initialisation stage with a moving camera, but it is more robust in the long term, because it is able to utilise loop closures. In order to avoid jumps in the position control of the UAV after the transition from SLAM localisation to odometry localisation and vice versa, the PID controllers of the position controller are reinitialised at the handover.
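The scale conversion from the unit-free monocular SLAM frame to metric coordinates can be illustrated with a small sketch. This is an assumption about the mechanism, not the actual slam_odom_manager code: it simply compares accumulated path lengths of time-aligned odometry and SLAM trajectories collected during the initial mission stage.

```python
import math

def estimate_slam_scale(odom_positions, slam_positions):
    """Estimate the metric scale of a monocular SLAM trajectory.

    Sketch under the assumption that both lists contain time-aligned
    (x, y, z) samples from the initial mission stage; the ratio of
    accumulated path lengths maps SLAM units to metres.
    """
    def path_length(points):
        return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

    slam_len = path_length(slam_positions)
    if slam_len == 0:
        raise ValueError("SLAM trajectory too short to estimate scale")
    return path_length(odom_positions) / slam_len
```

Using path lengths rather than a single displacement makes the estimate less sensitive to the direction of the initialisation movement; a robust implementation would additionally reject samples taken while SLAM reports lost tracking.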
5.4 Position Controller

The position_controller package is responsible for the high-level flight control of the UAV. The module is available online (https://github.com/DAInamite/uav_position_controller). It generates pitch, roll, yaw and thrust commands for the flight controller based on the given input positions or path. The package is separated into three submodules or nodes. Already mentioned was the optical_flow_to_tf module that converts odometry information into tf-transformations. The path_follower is a meta-controller of the position_controller_tf that controls the execution of flight paths containing a sequence of target positions. The core module that calculates the flight control commands is the position_controller_tf. For this purpose, it monitors the velocity, the distance to ground and the x-y-position of the system based on received tf-transformations. The control of the targeted positions in space is realised with a chain of two PID controllers for acceleration and velocity for each of the four controllable parameters. Since the balance of the vehicle is maintained by the low-level flight controller, the controller only influences roll and pitch in order to move in x-y-directions, while yaw is controlled in order to hold a desired heading. All controllers can be configured with several constraints for defining maximum change rates and velocities. The PID implementation and configuration make use of the control_toolbox package, but we used our own fork of the official code base (https://github.com/cehberlin/control_toolbox) to integrate some bug fixes as well as some extensions that allow for an easier configuration using ROS services. Furthermore, the controller includes a special landing routine for a soft and smooth landing, which is activated if the target distance to ground is set to 0. This routine calculates a target velocity based on the estimated exponentially decreasing time-to-contact as presented in [16].
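The cascaded control structure can be sketched for a single axis. This is an illustrative stand-in for the control_toolbox-based implementation, not the competition code: the gains, limits and the position-to-velocity cascade shown here are our assumptions (the actual controller chains acceleration and velocity per axis).

```python
class PID:
    """Minimal PID with output clamping (illustrative stand-in)."""

    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self._integral = 0.0
        self._prev_error = None

    def update(self, error, dt):
        self._integral += error * dt
        deriv = 0.0 if self._prev_error is None else (error - self._prev_error) / dt
        self._prev_error = error
        out = self.kp * error + self.ki * self._integral + self.kd * deriv
        # Clamping realises the configurable maximum-rate constraints.
        return max(self.out_min, min(self.out_max, out))


class AxisController:
    """Cascade for one axis: position error -> target velocity -> command."""

    def __init__(self):
        # Gains and limits are illustrative, not the actual tuning.
        self.pos_pid = PID(0.8, 0.0, 0.1, -0.5, 0.5)   # caps target velocity
        self.vel_pid = PID(0.6, 0.05, 0.0, -0.3, 0.3)  # caps pitch/roll command

    def update(self, target_pos, current_pos, current_vel, dt):
        target_vel = self.pos_pid.update(target_pos - current_pos, dt)
        return self.vel_pid.update(target_vel - current_vel, dt)
```

The outer loop's output clamp is what bounds the velocity regardless of how large the position error gets, which keeps the vehicle within safe attitude commands.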
Fig. 5 The target objects. From left to right: battery pack, plastic cup and base object

6 Object Detection and Localisation

Besides autonomous flight and navigation in unknown terrain, another important key capability of the UAV is the detection of the mission's target objects. Providing additional knowledge of the terrain and the objects' positions to the ground vehicle enables quick and efficient path planning. This results in a faster completion of the search, carry and assembly tasks. In the scenario at hand, three target objects had to be found. The objects are colour coded and their colours, exact shapes, dimensions and weights were known beforehand: a yellow battery pack, a (slightly transparent) blue plastic cup and the red base object, see the illustration in Fig. 5. Two different approaches have been developed and evaluated to detect these objects: a simple blob detection and a convolutional neural network based detection and localisation. Besides that, other object recognition frameworks were evaluated regarding their applicability to the task: Tabletop Object Recognition (http://wg-perception.github.io/tabletop/index.html#tabletop), LINE-MOD (http://wg-perception.github.io/linemod/index.html#line-mod) and Textured Object Detection (http://wg-perception.github.io/tod/index.html#tod). However, neither their detection rates nor their computational complexity allowed for their use on the UAV in the given scenario. The detection rates were generally not sufficient, and the approaches tend to fail when vital constraints are violated (e.g. no flat surface can be detected, or the vertical orientation of trained objects is limited). The lessons learned and the two developed approaches are detailed in the following subsections. The discussed implementations are available online (https://github.com/DAInamite/uav_object_localisation).

6.1 Blob Detection

Motivated by the colour-coded mission objects, a simple blob detection approach seemed admissible. Hence, we used a simple thresholding for the primary colours in
the image to get regions of interest for the objects to be detected. Subsequently, contours of the resulting connected areas are extracted and analysed, using some properties of the objects such as their expected projected shapes. The implementation applies the OpenCV framework in version 2.4.8. First the image is converted to HSV colour space. In the thresholding step, the image is simply clipped to the interesting part of the hue channel associated with each object's expected colour. The thresholds for each object were manually tuned with a custom rqt interface during the preparation sessions of the competition. After the thresholding, the image is opened (erode followed by dilate) to get rid of small artefacts and to smoothen the borders of the resulting areas. Then the contours of the thresholded image are extracted (findContours) and the detected contours are simplified using the Douglas–Peucker algorithm (approxPolyDP). The resulting simplified contours are checked for certain properties to be considered a valid detection (Fig. 6).

Fig. 6 Exemplary image showing the prevalence of colours and an intermediate processing result for the battery

Although the employed method is rather simplistic, with correctly tuned thresholds it is able to detect a good amount of objects, about 99%, in our test sets (footage from recordings made during flight with rosbag, http://wiki.ros.org/rosbag), while only generating a low number of false positives (0.3%), see Table 2 for more details.

Table 2 Results of blob-based detection over image sets extracted from recorded test flights

Image set            Correctly found objects
Base                  686
Base and cup         1303
Battery              1174
Batt. and base       1407
Batt., base and cup  1359
Batt. and cup        1069
Cup                   524
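The thresholding-plus-contour pipeline can be illustrated without OpenCV. The following is a pure-Python stand-in for the cv2 calls (inRange, findContours): it thresholds a hue channel and extracts connected regions with their areas and centroids. The threshold values and the minimum-area check are hypothetical, not the tuned competition values.

```python
def detect_blobs(hue_image, hue_min, hue_max, min_area=4):
    """Stand-in for the OpenCV pipeline: threshold a hue channel and
    extract connected components with area and centroid.
    hue_image is a list of rows of hue values."""
    h, w = len(hue_image), len(hue_image[0])
    mask = [[hue_min <= hue_image[y][x] <= hue_max for x in range(w)]
            for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or seen[y][x]:
                continue
            # Flood-fill one connected component (4-connectivity).
            stack, pixels = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if len(pixels) >= min_area:  # reject small artefacts
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append({"area": len(pixels), "centroid": (cx, cy)})
    return blobs
```

The minimum-area rejection plays the role of the morphological opening in the real pipeline; the centroid is what later feeds the ground-plane projection.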
Fig. 7 Original network structure of LeNet-5 (top) and the resulting network structure for the detection task (bottom)

6.2 Convolutional Neural Network

Convolutional neural networks have gained a lot of popularity for generic object detection tasks. For the problem at hand, the tiny-cnn library (https://github.com/nyanp/tiny-cnn) has been selected. Basically the original LeNet-5 network architecture and properties were used (although layer F6 has been omitted), see Fig. 7. LeNet-5 was originally designed to solve the MNIST [17] Optical Character Recognition (OCR) challenge, where single handwritten digits have to be recognised. The images are grey-scale, 28 × 28 pixels in size and are usually padded to 32 × 32 pixels, which is the default input size of LeNet-5. The example implementation employed Levenberg–Marquardt gradient descent with 2nd-order update, a mean-squared-error loss function, an approximate tanh activation function and consisted of 6 layers: 5 × 5 convolution with 6 feature maps, 2 × 2 average pooling, partially connected 5 × 5 convolution with 16 feature maps, 2 × 2 average pooling and again a 5 × 5 convolution layer producing 120 outputs, which are fed into a fully connected layer with 10 outputs, one for each digit. This approach was selected because it builds upon a time-proven and, for today's standards, quite small codebase, allowing it to perform fast enough on the UAV. More sophisticated approaches for scene labelling or generic object detection may easily exceed the processing time requirements and computational constraints of the platform. In order to adapt the LeNet-5 architecture to our use case, we tested various modifications. At first, the input colour depth was extended to also support RGB, HSV and YUV (as well as YCrCb) colour spaces or components thereof, while keeping the rest of the network architecture as described above, except for the last layer's output, which was reduced to three neurons, one for each object class.
Of course, increasing the input depth increases the time needed for training as well as classification, but this is most likely outweighed by the benefits that the colour channels provide, because colour is expected to be a key characteristic of the mostly textureless target objects. Camera images with a resolution of 640 × 480 pixels are down-scaled and stretched vertically to fill the quadratic input. Stretching was preferred over truncating because it does not reduce the field of view and therefore keeps the search area as large as possible. Feature extractors inside the network are established during training, so it can be expected that the vertical distortion will not affect the network performance, because the network is trained with similarly stretched images. Apparently the objects do not incorporate enough structural information to learn a usable abstraction at the downscaled image size. Although the CNN was able to achieve high success rates (close to 100%) and high enough frame rates on the images from the test set, those results were highly over-fitted and not usable in practice, since the classifier apparently learned mostly features of the background instead of the depicted object. The image of an object only covers a comparatively small number of pixels, so that the background could have a similar pattern by accident. This effect is especially noticeable with grey-scale images under low light conditions and seemed to support the decision of using colour images. Next, we determined the best-suited colour representation while increasing the network's input size only slightly from 36 × 36 to 48 × 48 pixels. Results using three different popular colour spaces were quite similar, with a slight advantage of RGB and YUV over HSV. However, the resulting classifiers were still overfitting and the training results remained unusable in practice.
The advantages of the mentioned colour representations may arise from the fact that they use a combination of two (YUV) or three (RGB) channels to represent colour tone (instead of HSV having only one), which results in more features extracted from chromaticity. RGB and YUV might also be superior in this case because the object colours match different channels of those representations and thus result in good contrast in the respective channels, which is beneficial for feature extraction and may aid learning. Image normalisation did not improve the detection rate either. In the case of local contrast adjustment the results got worse, maybe because distracting background details were amplified. Combinations with more channels, like HUV, RGBUV and others, resulted in only marginal improvements. In consequence, they could not justify the additional computation. In order to make the objects easier to recognise, they need to be depicted larger, so that they contain at least a minimum amount of structure in the analysed image. Therefore, the input size has to be sufficiently large, and networks with input image dimensions of 127 × 127 pixels were successfully tested. Overfitting was reduced drastically and the solution became much better with respect to correctness, yielding false positive rates of less than 15%. Unfortunately, the runtime performance degraded vastly, so that this solution was not applicable on the UAV. During tests of larger and deeper networks with up to 4 convolutional layers and larger pooling cardinality, it turned out that the accuracy did not improve any more, but the training time increased considerably. Furthermore, shallower networks with fewer feature-extracting convolutional kernels and layers were tried. Also, the input size was reduced to an amount that was computationally feasible on the UAV but still preserved the most significant features of the objects, like edges, when they are not too small. 80 × 60 pixels appeared to be the best size.
This size also preserves the camera image aspect ratio. Experiments showed that two convolutional layers (C1 and C2, C2 partially connected) were sufficient (removing the third convolutional layer had no significant impact on accuracy) and that the Multi-Layer Perceptron (MLP) can already be fed with a larger feature map from the second pooling layer. This MLP is now 3 layers deep, having the first of them (P1) only partially connected to increase learning speed and break symmetry. However, the poor performance on background images without objects remained. This was changed by reducing the last layer's output size back to three neurons (one per object) and adding negative (empty) sample images to the training set. Previously, at least one class was correct for each image (either base, battery, cup or background). This was changed so that when a background image is trained, none of the three classes gets positive feedback; instead, the expected value for each of them is set to the minimum (−1 in a range from −1 to 1). This seemed to counter the observed overfitting. Nevertheless, testing the network, which was trained with images containing only a single object or no object at all, on images showing multiple objects belonging to different classes did not yield the desired results: instead of showing high activation for each object present in the image, the network decided for one of them, leaving the activation for the second object in the picture not significantly higher than that of an object not present in the frame. A possible alternative, which we did not evaluate, would have been to train a binary classifier for each object and afterwards combine their results. We instead addressed this by altering the training to support any combination of objects. The final network architecture is illustrated in Fig. 7 (bottom).
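The change of the training targets described above can be sketched as a small encoding helper. This is an illustration of the labelling scheme, not the tiny-cnn training code; the class order is an assumption.

```python
CLASSES = ("base", "battery", "cup")

def target_vector(labels):
    """Multi-label training target in the tanh output range [-1, 1].

    An empty label set (background image) yields -1 for every class,
    and any combination of objects can be expressed, which is the
    change that countered the observed overfitting.
    """
    unknown = set(labels) - set(CLASSES)
    if unknown:
        raise ValueError(f"unknown labels: {unknown}")
    return [1.0 if c in labels else -1.0 for c in CLASSES]
```

With this encoding, a training image showing both the base and the cup gets the target [1, -1, 1] instead of being forced into a single class.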
To further improve the detection results, make them more robust and ease automatic evaluation, purely synthetic images were generated from the MORSE simulation and added to the training sets, as well as images created by placing rendered objects on random structured images from web searches (mostly sand, rocks and similar textures). This way it was possible to obtain ground truth more easily, since it is readily accessible in the generating context, instead of manually adding object position labels to the captured images. Sample statistics after training are listed in Table 3, and an overall similar accuracy (∼94%) was observed with a larger number of images from different sources (captures, rendered and composite images).

Table 3 Confusion matrix of the CNN after 52 epochs of training (94.4% accuracy)

True\detected     None  Base  Battery  Base and batt.  Cup  Base and cup  Batt. and cup  All three
None               572     0        4               0    3             0              0          0
Base                 1   557        0               0    0            11              0          0
Battery             14     0      566               0    0             0              0          0
Base and batt.       9    33        4             538    3             2              0          1
Cup                  4     0        0               0  600             0              2          0
Base and cup         0     3        0               0    4           553              0          5
Batt. and cup        0     0        3               0   58             0            505          0
All three            0     1        0              22    9            55              9        505
Sum                600   594      577             560  677           621            516        511

6.3 Object Localisation and Results

Due to the fact that the CNN only reports the presence of objects in the provided image and not their particular positions, we implemented a sliding-window approach to localise them. About 30% overlap of consecutive image regions was used for this purpose, splitting any region with a positive detection into 9 overlapping subregions until the object is no longer found or a sufficient accuracy is reached. Even then the detection may be focused on any salient part of the object (not necessarily its centre), which introduces a certain error into the localisation. However, this process of localising the object within the image makes the approach too computationally complex to be used effectively on the UAV at this point, although it provides some room for trade-offs between time spent and resulting accuracy.
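The recursive refinement can be sketched as follows. This is an illustrative reconstruction, not the actual implementation: `classify` stands for the (hypothetical) CNN call on an image region, and the stopping size is an assumption.

```python
def refine(region, classify, min_size=16, overlap=0.3):
    """Sliding-window localisation sketch: recursively split a positive
    region into a 3x3 grid of overlapping subregions and descend into
    the positive ones. A region is (x, y, w, h)."""
    x, y, w, h = region
    if not classify(region):
        return []
    if w <= min_size or h <= min_size:
        return [region]
    # Subregion size chosen so that a 3x3 grid with the given overlap
    # exactly covers the parent: sw + 2*sw*(1-overlap) == w.
    sw = w / (3 - 2 * overlap)
    sh = h / (3 - 2 * overlap)
    step_x = sw * (1 - overlap)
    step_y = sh * (1 - overlap)
    hits = []
    for i in range(3):
        for j in range(3):
            sub = (x + i * step_x, y + j * step_y, sw, sh)
            hits.extend(refine(sub, classify, min_size, overlap))
    return hits
```

Each recursion level multiplies the number of classifier calls by up to nine, which makes the time/accuracy trade-off mentioned above explicit: a larger `min_size` means fewer calls but coarser localisation.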
Since the detection has to run integrated with all the other computations on the UAV platform and the localisation within the image is crucial for the map projection, we decided to mainly rely on the simpler but efficient blob detection approach. The localisation of the objects can be estimated efficiently with this approach: the centre position of the detected object within the image is given directly as a result of the detection. This centre position is then transformed into the world frame by projecting a ray from the camera origin, applying a pinhole camera model, onto the xy-plane. The calculation is simplified by the assumption of a flat ground and by just considering the current altitude of the UAV. All frame conversions are based on the handy ROS tf package; the projection vector is converted into the world frame, resulting in the estimated object localisation in world coordinates. Future work could explore automatic learning of the object positions using the CNN in order to reduce the overhead of the localisation within the image, as well as investigating a combination of the CNN as a first stage with the blob detection for localisation in a second stage, or running them side by side if (as is likely) they make different errors, so that the detections can be combined or validated. Moreover, the actual localisation could benefit from considering the terrain model provided by the SLAM map.

7 Collision Avoidance and Path Planning

In order to guarantee a safe flight during the competition, the UAV needs methods for collision avoidance and path planning. The term collision avoidance subsumes a number of techniques which protect the UAV from direct threats. During the flight the sensors recognise local obstacles, and the collision avoidance calculates a safe flight direction once a possibly hazardous situation is detected. Path planning algorithms usually use a map to plan a path to a desired target position.
The calculated path consists of way points that can be headed for and is preferably optimal in order to save time and energy. For unexplored and thus unpopulated regions in the map, paths to the target locations are initially planned without considering possible obstructions. In this case, local collision avoidance prevents the UAV from colliding with obstacles as they are encountered. The available ROS navigation stack (http://wiki.ros.org/navigation) targets ground vehicles only and is more complex than needed in the assessable navigation scenario of the SpaceBot Cup. For this reason a computationally lightweight solution was implemented, which is explained in the following subsections.

7.1 Collision Avoidance

Collision avoidance can be achieved using Potential Fields. The Potential Fields approach can be envisioned as an imaginary force field of attracting and repelling forces that surrounds the UAV. The target position generates a force of attraction and obstacles push the UAV away. A new heading can be calculated by means of simple vector addition. The following equation describes this relationship:

F(q) = F_{Att}(q) + \sum_{i=1}^{n} F_{Rep_i}(q)

q is the actual position of the UAV, F_{Att}(q) is a vector which points towards the target, and \sum_{i=1}^{n} F_{Rep_i}(q) describes the repulsion of all obstacles in the environment. F(q) is the resulting vector, i.e. the heading the UAV should use to move. The following functions (from Li et al. (2012) [18]) can be used to calculate the attracting and repelling forces:

F_{Rep_i}(q) =
  -\eta \left( \frac{1}{\|q_{obs} - q\|} - \frac{1}{q_0} \right) \frac{1}{\|q_{obs} - q\|^2} \frac{q_{obs} - q}{\|q_{obs} - q\|}   if \|q_{obs} - q\| < q_0
  0                                                                                                                                  if \|q_{obs} - q\| \ge q_0

F_{Att}(q) =
  \zeta (q_{goal} - q)                                      if \|q - q_{goal}\| \le d
  d \zeta \frac{q_{goal} - q}{\|q - q_{goal}\|}             if \|q - q_{goal}\| > d

q_0, d, \zeta and \eta are parameters that can be used to adjust attraction and repulsion. q_{obs} represents the position of an obstacle and \|\cdot\| is the Euclidean norm.
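The potential-field equations above translate directly into code. The following sketch implements them for 2D positions; the parameter defaults are illustrative only.

```python
import math

def _norm(v):
    return math.sqrt(sum(c * c for c in v))

def attractive_force(q, q_goal, zeta=1.0, d=2.0):
    """F_att: linear pull inside distance d, constant magnitude outside."""
    diff = [g - c for g, c in zip(q_goal, q)]
    dist = _norm(diff)
    if dist <= d:
        return [zeta * c for c in diff]
    return [d * zeta * c / dist for c in diff]

def repulsive_force(q, q_obs, eta=1.0, q0=1.5):
    """F_rep: pushes away from an obstacle inside the influence radius q0."""
    diff = [o - c for o, c in zip(q_obs, q)]
    dist = _norm(diff)
    if dist >= q0 or dist == 0.0:
        return [0.0] * len(q)
    # Negative coefficient times (q_obs - q) points away from the obstacle.
    coeff = -eta * (1.0 / dist - 1.0 / q0) / (dist ** 2)
    return [coeff * c / dist for c in diff]

def total_force(q, q_goal, obstacles):
    """F(q) = F_att(q) + sum of F_rep_i(q): the heading to fly."""
    f = attractive_force(q, q_goal)
    for q_obs in obstacles:
        rep = repulsive_force(q, q_obs)
        f = [a + b for a, b in zip(f, rep)]
    return f
```

Note how the repulsion magnitude grows without bound as the distance to an obstacle approaches zero, which is what guarantees the evasive manoeuvre, while it vanishes entirely outside the influence radius q0.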
Hence, Potential Fields efficiently calculates a heading for the UAV that leads away from the obstacles and, at best, points directly to the target. It may happen that attracting and repelling forces cancel out. In this case the UAV is caught in a local minimum. In order to address this problem, the Potential Fields approach can be combined with path planning, as presented in the next subsection.

7.2 Path Planning

Having a map of the environment that splits the space into discrete segments allows applying search algorithms for path planning to find the shortest path in the graph. Suitable representations for this purpose are Occupancy Grids [19], which divide the two-dimensional space into squares and rectangles, and Octomaps [20], which provide a volumetric representation of space in the form of cubes or voxels. In our architecture we applied D* Lite as the path planning algorithm. D* Lite is an incremental heuristic search algorithm that has been developed by Likhachev and Koenig [21]. D* Lite repeatedly determines shortest paths between the current position of the system and the goal position as the edge costs of the graph change while the UAV moves towards the goal.

Fig. 8 The behaviour of the collision avoidance in simulation

7.3 Summary

For the considered scenario it is sufficient to use Potential Fields to successfully navigate the mission area. If the UAV moves too close to an obstacle, Potential Fields will calculate the necessary evasive manoeuvres. We can ignore the problems arising from local minima, since there is only a small number of obstacles at the altitude the UAV operates in, and these obstacles do not have particularly complex shapes. However, we apply D* Lite to always have a valid flight path under the assumption that the UAV's position is adequately tracked. Changing the resolution of the map allows adjusting the computational load caused by D* Lite.
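The planning step on a discretised map can be illustrated with a deliberately simplified sketch. A full D* Lite implementation is long; the following one-shot Dijkstra search on an occupancy grid is only a stand-in that shows the planning interface, while D* Lite additionally reuses its search tree incrementally when edge costs change.

```python
import heapq

def shortest_path(grid, start, goal):
    """One-shot grid shortest path (plain Dijkstra as a simplified
    stand-in for D* Lite). grid[y][x]: True = occupied.
    Returns a list of (x, y) way points, or None if unreachable."""
    w, h = len(grid[0]), len(grid)
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx]:
                nd = d + 1.0
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    prev[(nx, ny)] = node
                    heapq.heappush(heap, (nd, (nx, ny)))
    return None
```

Coarsening the grid shrinks the search space quadratically, which is the mechanism behind trading map resolution against computational load mentioned above.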
Providing more spare computation time enables other modules to execute more intensive calculations. The separation of local and global planners has also been proposed by Du et al. [22]. This results in faster reaction times and optimal path planning. As a consequence, collision avoidance and path planning are both passive in our architecture as long as no obstacle is detected. During this normal execution the targeted path is calculated by the exploration node and given to the path_follower. When the collision avoidance becomes active (i.e. the UAV is in the influence sphere of a potential field) the original path will be overwritten by a new pose, namely the avoidance heading, to master the urgent danger (see also Fig. 8). Afterwards, if the Potential Fields approach has not yet sufficiently resolved the approaching collision, the path planning module is triggered in order to provide a suitable path to the target that takes the obstacle into account.

8 Autonomous Exploration of Unknown Areas

A main task of the UAV is exploration, which means mapping the environment and navigating to unknown areas. This section describes the analysis and evaluation of a set of exploration strategies with the goal of using the best one in our UAV software. The quality of an exploration strategy can be measured by the time it needs to cover a specified area, the required computation power and the precision of the resulting map. Since the UAV has a very limited flight time of roughly 15 min, a rapid exploration with low computation requirements is preferred and map precision analysis is omitted.

8.1 Simulation

For the purpose of gathering performance data on the exploration strategies for comparison, a simulation environment has been developed, capable of representing the UAV and its field of view in a flat 40 m × 30 m world without obstacles except the limiting outer walls. The UAV flies at a speed of 0.2 m/s and rotates at 5°/s.
Exploration modules can be switched flexibly due to a minimalistic, ROS-like interface: exploration is a function that takes the current robot configuration (PoseStamped via TransformListener) and a world representation (OccupancyGrid from the SLAM service) and returns the next best view, i.e. the favoured robot configuration (Pose). The general handling of the exploration process, like passing the target poses on to the path_follower, is handled in a simple collision avoidance behaviour, which makes use of the package discussed in this section. The strategy performance is logged in terms of map coverage over time, distance flown, rotations made and time needed to discover the whole world. Additionally, in each simulation run three objects are randomly placed in the world and their discovery times are logged.

8.2 Exploration Strategies

Five exploration strategies have been examined; a typical path for each is shown in Fig. 9. Random flight (9a) makes the UAV steer straight until a wall is reached. Then it proceeds at a random angle until there are no more frontier cells. Concentric circles (9b) is a rather static strategy, which makes the UAV fly an enlarging spiral path around the starting point. If a wall is reached, the circular path is given up and replaced by a temporary wall-following behaviour. The SRT method (9c) presented in [23] builds a Sensor-based Random Tree by probing the local safe area for unexplored cells and returning to the previous node when there are none left. The utility-based frontier approach (9d) is similar to Yamauchi's frontier-based exploration [24], but uses a utility function to decide which frontier to visit based on distance and information gain. Similar work has been done in [25, 26]. A variant with penalised rotation has also been tested (9e), addressing the generally problematic performance of determining or handling the orientation with SLAM and odometry. The last strategy uses a genetic algorithm for sampling (9f).
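All frontier-based strategies above rely on identifying frontier cells, i.e. unknown cells adjacent to already explored free space. A minimal sketch, assuming a grid encoding of -1 = unknown, 0 = free, 1 = occupied (our convention here, not necessarily the chapter's):

```python
def frontier_cells(grid):
    """Return unknown cells (-1) that border at least one free cell (0)."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != -1:
                continue  # only unknown cells can be frontiers
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    frontiers.append((r, c))
                    break  # one free neighbour suffices
    return frontiers
```

Exploration terminates when this list is empty, which matches the "no more frontier cells" stop condition used by the random-flight strategy.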
The algorithm mutates and recombines a pool of randomly generated, frontier-based robot configuration samples over a fixed number of generations. The mutation creates a new sample by a normally distributed random alteration of position and orientation. The recombination yields a sample with position and orientation taken from two different random samples.

Fig. 9 Typical exploration paths: (a) Random flight, (b) Concentric circles, (c) SRT, (d) Frontier utility, (e) Rotation penalty, (f) Sampling-based

At the end of each generation step, a fixed number of best samples is selected by the following utility function U:

U := \frac{C}{\max(t_R, t_T)}

The function estimates the expected information gain (newly discovered cells C, assuming flat ground and a given camera setup) per time needed to reach this configuration, with t_R and t_T being the rotation time and translation time respectively.

8.3 Evaluation and Results

For each strategy data has been collected over 100 runs with the same pool of randomly generated starting scenarios, consisting of an initial robot configuration and three object positions. Figure 10 shows the average proportion of visited cells over time for all strategies. The "ideal exploration" rate is the theoretical value of the UAV flying straight through unknown area at maximum allowed speed. The utility-based frontier approach finishes first, taking 50 min on average. On the other hand, the sampling-based strategy shows the highest exploration rate in the early phase of exploration, finding the objects first and having covered the most area after 10 min. Nevertheless, the evolutionary non-deterministic sampling algorithm requires more computational resources even if the processed generations are limited; see Table 4 for a comparison.

Fig. 10 Exploration progress over time

Table 4 Computation time for one waypoint, averaged over the full exploration area

Strategy             Average computation time (ms)
Sampling-based       1271
Frontier utility     28
Concentric circles   0
Random flight        7
SRT                  3
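The utility-based selection U = C / max(t_R, t_T) can be illustrated as follows. The speeds match the simulation above (0.2 m/s, 5°/s ≈ 0.0873 rad/s); the function and sample format are a hypothetical scaffold for illustration:

```python
def utility(cells_gained, angle_rad, distance_m, rot_speed=0.0873, trans_speed=0.2):
    """U = C / max(t_R, t_T): expected newly seen cells per time to reach
    a candidate configuration."""
    t_r = angle_rad / rot_speed      # rotation time t_R
    t_t = distance_m / trans_speed   # translation time t_T
    return cells_gained / max(t_r, t_t)

def best_sample(samples):
    """Pick the candidate (cells_gained, angle_rad, distance_m) with highest U."""
    return max(samples, key=lambda s: utility(*s))
```

Note how the max() in the denominator makes a distant frontier unattractive even if it promises the same information gain as a nearby one, which explains the strategy's fast early coverage.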
Hence, the utility-based frontier approach is the preferred exploration strategy for our setup in the SpaceBot Cup, because it combines a fast exploration with reasonable computation requirements. The implemented ROS package uav_exploration, including the specific simulation environment, is available online (https://github.com/DAInamite/uav_exploration.git).

9 Autonomous Behaviour

Developing systems that are able to react appropriately to unforeseen changes while still pursuing their intended goals is challenging. As discussed in [27], adaptivity in general, and fast and flexible decision making and planning in particular, are crucial capabilities for autonomous robots. Especially in the Robot Operating System (ROS) community [28], developers so far mostly use pre-scripted, non-adaptive methods of describing high-level robot behaviours or tasks. A popular package is SMACH, which allows building hierarchical and concurrent state machines (HSM) [29]. All kinds of state-machine-based approaches have the problem that a decision or reaction can only be given if a state transition was already modelled in advance. Behaviour trees, available in the pi_trees package [30], are an alternative that allows for more dynamic rules. More flexible is the BDI-based implementation CogniTAO [31], available in the decision_making package. The concept is more suitable for uncertain environments because the execution sequence is not fixed and the selection of behaviours (plans) is based on conditions. Nevertheless, it is still difficult to define mission or maintenance goals and there exist only simple protocols for plan selection. In order to provide a flexible, adaptive and goal-driven alternative we are working on a new hybrid approach with tight ROS integration that incorporates features from reactive behaviour networks and STRIPS-like planning. Even though such a system can perform less optimally, it supports execution in dynamic environments.
The behaviour network itself strongly supports the idea of an adaptive robotic system by being

• opportunistic, trying to perform the best-suited action at any time even if the symbolic planner cannot handle the situation and does not find a suitable plan;
• light-weight in terms of computational complexity;
• well-performing in dynamic and partially observable environments, under the assumption that actions taken at one point in time do not block decision paths in the future.

The first expansion stage of our concept, called ROS Hybrid Behaviour Planner (RHBP), is going to be advanced in the future, for instance with extended multi-robot support, incorporated learning and more hierarchical layers. The current architecture of our implementation contains three core layers: the behaviour network itself, represented by its distributed components, the symbolic planner and a manager module. It is available online (https://github.com/DAInamite/rhbp). The manager module supports and manages the distributed execution of several behaviours on different machines within the robot. Furthermore, it monitors and supervises the behaviour network by interpreting the provided plan and influencing the behaviour network accordingly. The following subsections provide the required background in order to understand our framework and implement autonomous behaviour with it. Furthermore, the implemented UAV behaviours are presented.

9.1 Behaviour-Network Base

The behaviour network layer is based on the concepts of Jung et al. [32] and Maes et al. [33], but incorporates other recent ideas from Allgeuer et al. [34], in particular supporting concurrent behaviour execution as well as non-binary preconditions and effects.

Fig. 11 Behaviour network components

The main components of the network are behaviours representing tasks or actions that are able to interact with the environment by sensing and acting.
Behaviours and goals both use condition objects, composed of an activator and a sensor, to model their environmental runtime requirements, see Fig. 11. The network of behaviours is created from the dependencies encoded in wishes, based on preconditions and effects. Each behaviour expresses its satisfaction with the world state (current sensor values) through wishes. A wish is related to a sensor and uses a real value in [−1, 1] to indicate both the strength and direction of a desired change; 0 indicates complete satisfaction. Greater absolute values express a stronger desire; by convention negative values correspond to a decrease, positive values to an increase. Effects model the expected influence of every behaviour on the available sensors (the environment), similar to wishes. Goals describe desired conditions of the system; their implementation is similar to behaviours except that they do not have an execution state or model effects on the network. Therefore, goals incorporate conditions that allow for the determination of their satisfaction and express wishes exactly like behaviours do. Furthermore, goals are either permanent and remain active in the system as maintenance goals, or are achievement goals that are deactivated after fulfilment. Sensors model the source of information for the behaviour network and buffer and provide the latest sensor measurements. Virtual sensors can also be used to model the world state, for instance the number of detected target objects. The type of the sensor value is arbitrary, but to form a condition a matching pair of sensor and activator must be combined. Because raw sensor values can be of arbitrary type, they need to be mapped into the behaviour network by activators. Activators compute a utility score (precondition satisfaction) from sensor values using an internal mapping function.
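How an activator maps a raw sensor value to a utility score in [0, 1] can be sketched as follows. The class names and interface are hypothetical, chosen only to mirror the threshold-based and linear mappings the framework provides:

```python
class ThresholdActivator:
    """Full precondition satisfaction once the sensor value reaches a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def activation(self, value):
        return 1.0 if value >= self.threshold else 0.0

class LinearActivator:
    """Linearly maps a scalar sensor value from [zero_value, full_value] onto [0, 1]."""
    def __init__(self, zero_value, full_value):
        self.zero, self.full = zero_value, full_value

    def activation(self, value):
        span = self.full - self.zero
        return min(1.0, max(0.0, (value - self.zero) / span))
```

A condition is then simply a pairing of such an activator with a sensor whose latest buffered value it evaluates.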
The separation of sensor and activator fosters the reuse of code and also allows the abstract integration of algorithms using more complex mapping functions like potential fields. Our implementation already comes with basic activators expressing a threshold-based and a linear mapping of one-dimensional sensors. Multi-dimensional types can either be integrated by custom activators that provide a normalisation function or by splitting their dimensions into multiple one-dimensional sensors. The key characteristics and capabilities of a behaviour network are defined by the way activation is computed from sensor readings and by the behaviour/goal interaction. Behaviours are selected for execution based on a utility function that determines a real-valued behaviour score, called activation. There are multiple sources of activation; negative values correspond to inhibition. If the total activation of a behaviour reaches the execution threshold and all preconditions are fulfilled, the planner selects it for execution; several behaviours can be executed in parallel. The behaviour network calculation is repeated at a fixed frequency that can be adjusted according to the application requirements. At every iteration all activation sources are summed to a temporary value, called the activation step, for every behaviour. After the activation step has been computed for every behaviour, it is added to the current activation of the behaviour reduced by an activation decay factor. The decay reduces the activation that has been accumulated over time if the behaviour does not fit the situation any more and prevents the activation value from becoming indefinitely large. After behaviour execution the activation value is reset to 0. Behaviours are not expected to finish instantaneously and multiple behaviours are allowed to run concurrently if they do not have conflicting effects.
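The accumulation, decay and reset of activation described above can be illustrated with a toy iteration. This is a strong simplification: the real framework computes the activation step from several sources and checks preconditions and conflicting effects before starting a behaviour.

```python
def run_step(behaviours, decay=0.9, threshold=5.0):
    """One network iteration: apply decay, add the activation step, and
    start behaviours whose total activation crosses the threshold
    (preconditions assumed satisfied in this sketch)."""
    started = []
    for b in behaviours:
        b["activation"] = b["activation"] * decay + b["step"]
        if b["activation"] >= threshold:
            started.append(b["name"])
            b["activation"] = 0.0  # reset after execution
    return started
```

A behaviour whose step stays constant thus accumulates activation over several iterations until it fires, while the decay factor prevents unbounded growth once the situation no longer supports it.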
9.2 Symbolic Planner Extension

The activation calculation is influenced by the symbolic planner based on the index position of the particular behaviour in the planned execution sequence. In order to allow for a quick replacement of the planner we based our interface on the widely used Planning Domain Definition Language (PDDL) in version 2.1. Hence, a majority of existing planners can be used. For our implementation we developed a ROS Python wrapper for the Metric-FF [35] planner, a version of FF extended by numerical fluents and, in the current version, also conditional effects. It meets all our requirements (negated predicates, numeric fluents, equality, conditional effects) and due to its heuristic nature favours fast results over optimality. In fact, the wrapper is only responsible for appropriate result interpretation and execution handling. The actual mapping and translation between the domain PDDL and the resulting plan is part of the manager. The PDDL generation on the entity level is done automatically by the behaviour, activator and goal objects themselves through a defined service interface. Moreover, the manager monitors time constraints defined in behaviours and re-plans in case of timeouts, newly available behaviours, or if the behaviour network execution order deviates from the proposed plan. This ensures that replanning is only executed if really necessary and keeps as much freedom as possible for the behaviour network layer for fast response and adaptation. The manager also handles multiple existing goals of a mission by selecting appropriate goals at the right time depending on the available information, for example if goals cannot be reached at the moment.

9.3 ROS Integration

All components of the RHBP are based on the ROS messaging architecture and use ROS services and topics for communication.
Every component of the behaviour network, like a behaviour or sensor, is automatically registered to the manager node and reports its current status accordingly; for details see Fig. 12. The application-specific implementation is simplified through provided interfaces and base classes for all behaviour network components, which are extended by the application developer and completed by filling hooks, like the start and stop of a behaviour. The class constructor automatically uses registration methods and announces available components to the manager. The ROS sensor integration is inspired by Allgeuer et al. [34] and implemented using the concept of virtual sensors. This means sensors are subscribed to ROS topics and updated by the offered publish-subscribe system. For each registered component a proxy object is instantiated in the manager to serve as a data source for the actual planning process, where the activation is computed based on the relationships arising from the reported wishes and effects. Besides the status service offered by behaviours and goals there are a number of management services available to influence the execution (e.g. AddBehaviour(), RemoveBehaviour(), AddGoal(), RemoveGoal(), Activate(), Priority(), ExecutionTimeout() and ForceStart()), see Fig. 12. Due to the distributed ROS architecture the whole system works even across the physical boundaries of individual robots on a distributed system. Furthermore, RHBP comes with generic implementations that directly support simple single-dimensional topic types for numbers and booleans to enable the direct integration of existing sensors by just configuring the topic name. Moreover, activators for some common ROS types are provided as well and are going to be extended in the future.

Fig. 12 ROS services used by the behaviour network components; arrows indicate the call direction

9.4 SpaceBot Cup UAV Behaviours

In Sect. 4 the behaviours addressing the SpaceBot Cup challenge have already been mentioned from the architectural point of view. As stated there, the more complex algorithms and computations for exploration and collision avoidance are separated into their own packages, already discussed in Sects. 7 and 8. We implemented the behaviour model illustrated in Fig. 13 by extending the provided behaviour base class for the individual behaviours. Available sensors and abstracted information of the system have been integrated as virtual sensors into the RHBP framework. For that it was necessary to implement some special sensor wrappers, which extract the needed information from complex ROS message types, like poses (TFs). Furthermore, a special distance activator was implemented to determine the activation in the network based on a geometry_msgs.msg.Pose and a desired target pose. The exploration sensor is a wrapper for the exploration module that describes the completeness of the exploration. The realised UAV capabilities are to take off and land (regularly at the landing zone after the mission is completed, the time is over, or the battery is depleted, or anywhere else in emergency situations), to select a position to move to (performed by the exploration or return-to-home behaviour and overridden by the obstacle avoidance), and to move to the selected location while maintaining a constant altitude over ground.

Fig. 13 SpaceBot Cup planning scenario. Black arrows indicate (pre-)conditions: solid black arrows indicate a desire for a high value, dashed arrows a desire for a low value. Green arrows mean positive correlation (increase, become true), red arrows negative correlation (decrease, become false). Pose sensor values encode the distance from home.

While it is operating, the UAV continuously maps the terrain and searches for objects. These activities do not need to be turned on or off explicitly. Given the initial situation that the aircraft is fully charged, on the ground at the landing zone (also referred to as "home"), and the mission starts, the network will activate the start behaviour first and then cycle between exploration (which retrieves a target location) and, if required, collision avoidance (thereby mapping the terrain and scanning for objects) until it runs out of battery or has completed its exploration mission. Finally, it will select the home location as target, move there and land.

10 Teamwork

The basic idea of the UAV as the flying eye of the rover only makes sense when both vehicles communicate about map disclosure and object positions. In an ideal setting both vehicles would send their map updates to each other and each vehicle would merge the updates with its own existing OctoMap. Due to the limited computational power of the UAV and due to the fact that the UAV moves much faster than the rover, we finally decided to only let the UAV send map updates and known object positions to the rover. Furthermore, we simplified our approach in the way that the rover knows the initial offset to the UAV (where it has been positioned before start relative to the rover's position). The UAV sends object positions in its own coordinate system. Using the tf transformation the rover can then mark the objects in its own map. For the realisation of the team communication we rely on the ROS multimaster_fkie package together with the master_discovery_fkie and master_sync_fkie packages. This enables having several independent ROS cores in one local area network.
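The simplified frame handling, where the rover knows the UAV's initial offset, boils down to a rigid 2D transform of each reported object position. A minimal sketch, normally handled by tf; the function name and a planar (x, y, yaw) offset are our assumptions:

```python
import math

def uav_to_rover(p_uav, offset, yaw):
    """Transform an object position from the UAV frame into the rover frame,
    given the UAV's start offset (x, y) and its relative yaw to the rover."""
    x, y = p_uav
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate into the rover's orientation, then translate by the known offset.
    return (offset[0] + c * x - s * y, offset[1] + s * x + c * y)
```

With this, every object pose the UAV publishes can be marked directly in the rover's map without exchanging full maps.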
The master_discovery node's multicast messaging can be used as we are in the same subnet. Hence, the UAV publishes the coordinates of discovered objects as tf, which can be received and transferred into the independent ROS system of the rover. A still open challenge is the fusion of the two independent OctoMaps in order to iteratively update a common map of both robots. This is probably a computation-intensive task and is therefore planned to be realised within the ground station. An alternative approach would be merging the two-dimensional occupancy maps instead, for instance with the not further evaluated solution in the package map_merging [36].

11 Results

Several results specific to individual modules, as well as insights gained during their development, have been presented and discussed in the module-related sections before, for instance for navigation, exploration and object localisation. In the following we consider the common capability set and limitations of our approach. In general the developed UAV is able to autonomously start, land, hover at a position, follow given trajectories and detect the target objects of the mission. Moreover, the paths or trajectories are generated by the exploration module and collisions are avoided with the potential field approach, while the whole process is controlled by a high-level goal-oriented decision-making and planning component. The capabilities have been empirically tested in simulation, in the laboratory environment and in the contest itself. The navigation and the object localisation performed robustly in the very unstructured environment without feature-rich textures. The finally integrated system with the above presented components runs successfully onboard our hardware platform, with the CPU having almost 100% load. However, the system is responsive and able to execute all modules at an appropriate refresh rate. Moreover, some modules, like SLAM, still have potential for performance improvements.
Table 5 provides a more detailed overview of the produced load and update rates on our two-core system (max. 200% load). The memory usage can be neglected; the whole system consumes less than 1 GB with an initial map. However, the table illustrates that most CPU load is generated by the processing of the visual camera data in the object detection and localisation and in mapping.

Table 5 Comparison of node CPU consumption and update frequencies. The separated categories group the nodes into sensors/actors, navigation, object localisation and higher-level behaviour (top to bottom)

Node                            CPU load (%)   Update frequency (Hz)
mikrokopter                     3              50
px4flow                         2              40
sonar_sensors                   2              25
bluefox_camera                  7              30
slam_odom_manager               1              30
orb_slam                        85             30
position_controller             6              24
optical_flow_to_tf              2              40
path_follower                   1              5
object_detection_blob           78             3
object_localisation_estimator   4              3
exploration                     6              5
collision_avoidance             2              25
uav_behaviours and planning     1              1

Before we tested our system on the hardware platform, and in order to speed up the development of individual modules, we extensively used the MORSE simulation environment in version 1.3-1 [37]. Due to the 3D engine and high-level sensor interfaces of the simulation environment we have been able to test even the computer-vision-related SLAM and object localisation modules. For testing modules based on the PX4FLOW as well as the lower-level MikroKopter control we implemented custom actuators and sensors that provide or receive data in the same ROS message formats as the originals. Accordingly, we have been able to remap ROS topics provided by the simulator to the actual names in our hardware configuration in order to simulate our mission. Due to the limited hardware capabilities of our UAV, the complete ROS software stack can also be executed together with the simulation environment on a common business notebook (Lenovo ThinkPad T440s with Intel Core i7, 16 GB RAM, SSD and integrated graphics), providing similar performance as the actual hardware.
MORSE has been favoured over alternative simulation environments, like Gazebo, because it had already been used by our partners implementing the rover robot. Furthermore, MORSE is very easy to extend due to its Python and Blender origin; for instance, complex 3D models can easily be imported and added in Blender. As expected, the system performs very well in simulation because of noise-free sensor data and a less dynamic environment. Nevertheless, our real system has some limitations and open issues, which we want to discuss in the following. The altitude hold performance suffers from the low-resolution thrust control of the flight controller (8 bit for the full motor speed range). Thus the PID controller running in the position controller has problems keeping the altitude without ongoing regulation, since the thrust difference between two adjacent values can be too large. The contest itself was executed in two stages, a qualification stage and the final competition. The qualification was held in a smaller arena with simplified and separated tasks for the robots. During the preparation as well as the two parts of the competition we experienced several defects and problems. In particular we had massive problems with the used flight controller, which had hardware problems on several of our boards, resulting in temporary IMU acceleration inversions of the z-axis. In consequence our system needed to survive some heavy crashes, even one in the qualification run, which could be fixed by replacing rotors, arms and the 3D-printed platform parts. In that sense, we do not recommend using the MikroKopter FlightCtrl for future developments and rather propose alternative solutions like the PIXHAWK platform. Furthermore, we are not satisfied with the orientation estimation of our odometry subsystem based on the PX4FLOW and the additional IMU.
The integrated orientation from the PX4FLOW gyroscope drifts over time, and the IMU together with the used package razor_imu_9dof suffers from fast movements, resulting in bad performance. This could be simplified by replacing the flight controller with one that already comes with a well-configured orientation estimation. An alternative approach could also use more advanced sensor fusion and filtering in order to improve the orientation estimation with the existing sensors, for instance by applying a Kalman filter. Also related to the navigation subsystem is the SLAM and odometry combination; here our approach works in general, but is sometimes not as robust as required. In particular the scale estimation of the monocular SLAM is prone to errors, resulting in drifts due to the uncertain reference from the odometry. Unexpected problems during the scale determination result in a deviated scale reference for the SLAM. Future improvements could consider a continuous scale update using an ongoing feedback loop during successful SLAM tracking, special initialisation flight routines, or even completely resolve the issue by replacing the monocular SLAM with a stereo-vision-based approach.

12 Conclusion

The intention of this chapter was to provide reference, insights and lessons learned on the development of an autarkic UAV on the basis of the ROS framework. The chapter exploits the DLR SpaceBot Cup scenario for the exploration of unknown terrain without an external tracking system. The UAV is thought of as an assistance robot for an autonomous ground rover that is capable of grasping larger objects from the ground. The UAV is more agile than the rover, thus acting as a supplemental sense of the ground vehicle, aiming at map disclosure and object detection.
We use ROS on the UAV as middleware and at the same time make use of the rich ROS module repository together with our own ROS modules to create an architecture that is able to control the UAV in a mission-oriented way. Some ROS modules could be used out of the box, but other existing modules have been extended and adapted to our needs (e.g. ORB_SLAM and px-ros-pkg). An instance of this architecture is deployed on extended but still limited hardware (in terms of CPU and RAM) on the UAV alone and bridges multiple sensor inputs and computational control to actuator outputs. Our architecture makes extensive use of the ROS architecture patterns topic, service and action. Topics are generally used as intended as a means for unidirectional data transfer, e.g. continuously reading ultrasonic sensors and streaming the data to the collision avoidance. Services provide responses, so we use them to change states or explicitly retrieve information from other nodes, e.g. for setting a new target and its confirmation, or getting waypoints from the exploration_node. Long-running tasks with periodic feedback are realised with the actionlib, for instance the path following. Except for the necessary standard ROS packages, our architecture comprises about 25 ROS packages including dependencies so far, as shown in Sect. 4, classified as Sensor/Actor for hardware connectivity and control, Behaviour and Planning for high-level control, Navigation for UAV flight control, Object Localisation as mission-specific code and Infrastructure for commonly used functionality. As the focus of this book is on ROS, we can summarise our experience with ROS as extremely satisfying when executing our system (also against the background that we have used other frameworks and developed our own agent-oriented middleware earlier). The ROS-based system runs stably and works as expected.
We have been able to prove that it is possible to develop an autarkic UAV with higher-level capabilities using the ROS ecosystem. However, whenever we are confronted with hardware and the real world, more problems arise. Although the flying base is robust, we could not prevent it from transport damage or from mischief during crashes. Although we are using sensors that are embedded a thousand times in other technical systems, they will conk out when mounted on a UAV. Although many people work with PIDs and develop and use SLAM algorithms, there is still a lot to be done: e.g. tuning a PID takes time, and in terms of SLAM the jury is still out. For future work we will focus on high-level and mission-guided control as well as further advancing our autonomous navigation capabilities, while addressing new applications beyond the SpaceBot Cup. Meanwhile, we can assess that multi-rotor systems and other smaller aerial vehicles can fly. But this is only a small part of what the customers want in all application areas of UAVs, from agriculture to logistics. Mission-guided control from our point of view means the user can concentrate on the parameters, goal and success of a mission, without struggling with collision avoidance and stability during flight. In terms of ROS, this means that we are working on ROS packages that contain a generic extension for mission-specific tasks, that can easily be integrated into behaviour planning, execution and control, and that can easily be monitored by human operators.

Acknowledgements The presented work was partially funded by the German Aerospace Center (DLR) with funds from the Federal Ministry of Economics and Technology (BMWi) on the basis of a decision of the German Bundestag (Grant No: 50RA1420).

References

1. Kryza, L., S. Kapitola, C. Avsar, and K. Briess. 2015. Developing technologies for space on a terrestrial system: A cost effective approach for planetary robotics research.
In 1st symposium on space educational activities, Padova, Italy. 2. Mur-Artal, R., J.M.M. Montiel, and J.D. Tardós. 2015. ORB-SLAM: A versatile and accurate monocular SLAM system. CoRR. arXiv:abs/1502.00956. 3. Honegger, D., L. Meier, P. Tanskanen, and M. Pollefeys. 2013. An open source and open hardware embedded metric optical flow CMOS camera for indoor and outdoor applications. In 2013 IEEE international conference on robotics and automation (ICRA), 1736–1741. 4. Lecun, Y., L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11): 2278–2324. 5. Li, G., A. Yamashita, H. Asama, and Y. Tamura. 2012. An efficient improved artificial potential field based regression search method for robot path planning. In 2012 international conference on mechatronics and automation (ICMA), 1227–1232. 6. Loianno, G., Y. Mulgaonkar, C. Brunner, D. Ahuja, A. Ramanandan, M. Chari, S. Diaz, and V. Kumar. 2015. Smartphones power flying robots. In 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS), 1256–1263. 7. Tomic, T., K. Schmid, P. Lutz, A. Domel, M. Kassecker, E. Mair, I. Grixa, F. Ruess, M. Suppa, and D. Burschka. 2012. Toward a fully autonomous UAV: Research platform for indoor and outdoor urban search and rescue. IEEE Robotics Automation Magazine 19 (3): 46–56. 8. Schmid, K., P. Lutz, T. Tomić, E. Mair, and H. Hirschmüller. 2014. Autonomous vision-based micro air vehicle for indoor and outdoor navigation. Journal of Field Robotics 31 (4): 537–570. 9. Beul, M., N. Krombach, Y. Zhong, D. Droeschel, M. Nieuwenhuisen, and S. Behnke. 2015. A high-performance MAV for autonomous navigation in complex 3d environments. In 2015 international conference on unmanned aircraft systems (ICUAS), 1241–1250. New York: IEEE. 10. Sunderhauf, N., P. Neubert, M. Truschzinski, D. Wunschel, J. Poschmann, S. Lange, and P. Protzel. 2014.
Phobos and deimos on mars - two autonomous robots for the DLR spacebot cup. In The 12th international symposium on artificial intelligence, robotics and automation in space (i-SAIRAS’14), Montreal, Canada, The Canadian Space Agency (CSA-ASC). 11. Endres, F., J. Hess, J. Sturm, D. Cremers, and W. Burgard. 2014. 3-d mapping with an RGB-d camera. IEEE Transactions on Robotics 30 (1): 177–187. 12. Labbe, M., and F. Michaud. 2014. Online global loop closure detection for large-scale multisession graph-based SLAM. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems, 2661–2666. 13. Engel, J., T. Schöps, and D. Cremers. 2014. LSD-SLAM: large-scale direct monocular SLAM. In Computer vision – ECCV 2014: 13th European conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part II, 834–849. Springer International Publishing, Cham. 14. Forster, C., M. Pizzoli, and D. Scaramuzza. 2014. SVO: Fast semi-direct monocular visual odometry. In IEEE international conference on robotics and automation (ICRA). 15. Engel, J., J. Sturm, and D. Cremers. 2014. Scale-aware navigation of a low-cost quadrocopter with a monocular camera. Robotics and Autonomous Systems 62 (11): 1646–1656. 16. Izzo, D., and G. de Croon. 2012. Landing with time-to-contact and ventral optic flow estimates. Journal of Guidance, Control, and Dynamics 35 (4): 1362–1367. 17. Deng, L. 2012. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine 29 (6): 141–142. 18. Li, G., A. Yamashita, H. Asama, and Y. Tamura. 2012. An efficient improved artificial potential field based regression search method for robot path planning. In: 2012 international conference on Mechatronics and automation (ICMA), 1227–1232. New York: IEEE 19. Thrun, S., D. Fox, and W. Burgard. 2005. Probabilistic robotics. Cambridge: The MIT Press. 20. Hornung, A., K.M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard. 2013. 
Octomap: An efficient probabilistic 3d mapping framework based on octrees. Autonomous Robots 34 (3): 189–206. 21. Koenig, S., and M. Likhachev. 2005. Fast replanning for navigation in unknown terrain. IEEE Transactions on Robotics 21 (3): 354–363. 22. Du, Z., D. Qu, F. Xu, and D. Xu. 2007. A hybrid approach for mobile robot path planning in dynamic environments. In IEEE international conference on robotics and biomimetics, 2007. ROBIO 2007, 1058–1063. New York: IEEE. 23. Oriolo, G., M. Vendittelli, L. Freda, and G. Troso. 2004. The SRT method: Randomized strategies for exploration. In 2004 IEEE international conference on robotics and automation, 2004. Proceedings. ICRA’04, vol. 5, 4688–4694. New York: IEEE. 24. Yamauchi, B. 1997. A frontier-based approach for autonomous exploration. In Proceedings of the 1997 IEEE international symposium on computational intelligence in robotics and automation, 1997. CIRA’97, 146–151. New York: IEEE 25. Surmann, H., A. Nüchter, and J. Hertzberg. 2003. An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments. Robotics and Autonomous Systems 45 (3): 181–198. 26. Tovar, B., L. Munoz-Gómez, R. Murrieta-Cid, M. Alencastre-Miranda, R. Monroy, and S. Hutchinson. 2006. Planning exploration strategies for simultaneous localization and mapping. Robotics and Autonomous Systems 54 (4): 314–331. 27. Hrabia, C.E., N. Masuch, and S. Albayrak. 2015. A metrics framework for quantifying autonomy in complex systems. In Multiagent System Technologies: 13th German Conference, MATES 2015, Cottbus, Germany, September 28–30, 2015, Revised Selected Papers, 22–41. Springer International Publishing, Cham. 28. Quigley, M., K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A.Y. Ng. 2009. Ros: An open-source robot operating system. In ICRA Workshop on Open Source Software 3 (3.2): 5. Kobe. 29. Bohren, J., and S. Cousins. 2010. The SMACH high-level executive [ros news]. 
IEEE Robotics Automation Magazine 17 (4): 18–20. 30. Goebel, R.P. 2014. ROS by example: Packages and programs for advanced robot behaviors. Pi Robot Production, vol. 2, 61–88. Lulu.com. 31. CogniTeam Ltd. Cognitao (think as one). [Online]. Available: http://www.cogniteam.com/cognitao.html. 32. Jung, D. 1998. An architecture for cooperation among autonomous agents. PhD thesis, University of South Australia. 33. Maes, P. 1989. How to do the right thing. Connection Science 1 (3): 291–323. 34. Allgeuer, P., S. Behnke. 2013. Hierarchical and state-based architectures for robot behavior planning and control. In Proceedings of the 8th workshop on humanoid soccer robots, IEEE-RAS international conference on humanoid robots, Atlanta, USA. 35. Hoffmann, J. 2002. Extending FF to numerical state variables. In Proceedings of the 15th European conference on artificial intelligence, 571–575. New York: Wiley. 36. Yan, Z., L. Fabresse, J. Laval, and N. Bouraqadi. 2014. Team size optimization for multirobot exploration. In Proceedings of the 4th international conference on simulation, modeling, and programming for autonomous robots (SIMPAR 2014), Bergamo, Italy (October 2014), 438–449. 37. Echeverria, G., N. Lassabe, A. Degroote, and S. Lemaignan. 2011. Modular open robots simulation engine: MORSE. In Proceedings of the 2011 IEEE international conference on robotics and automation.

Author Biographies

M.Sc. Christopher-Eyk Hrabia received a degree in computer science from the Technische Universität Berlin (TUB) in 2012. After he gained some international experience as a software engineer, he started his scientific career at the DAI-Lab of TUB. He researches in the field of multiagent and multi-robot systems with a focus on the high-level control of autonomous, adaptive and self-organizing unmanned aerial vehicles. Moreover, he has developed and contributed to several ROS packages and uses ROS in the student courses he conducts.
Together with Martin Berger he led the SpaceBot Cup UAV-Team of the DAI-Lab.

Dipl.-Ing. Martin Berger, after receiving his diploma in computer science in 2012, started as a research assistant at the DAI-Lab. He is involved in several student projects that teach the practical application of robotics in competitive settings. He is a member of the RoboCup team DAInamite and frequently participates in international and national robot competitions.

Dr. Axel Hessler is head of the Cognitive Architectures working group at the DAI-Lab. He received his doctoral degree in computer science for research in intelligent software agents and multi-agent systems and how they can be developed and applied quickly and easily in various application areas. Currently he is investigating the correlation between software agents, physical agents and human agents.

M.Sc. Stephan Wypler has been working as a software engineer in industry since he finished his Computer Science (M.S.) degree at the TUB in 2016. During his M.S. degree he was a core member of the SpaceBot Cup UAV-Team of the DAI-Lab and developed core modules for the higher-level planning, autonomous behaviour and object localisation.

B.Sc. Jan Brehmer is currently studying Computer Science (M.S.) at the TUB. For his B.S. degree he researched autonomous exploration strategies for UAVs at the DAI-Lab. Formerly, he assisted as a tutor at the department for software engineering and theoretical computer science.

B.Sc. Simon Matern is currently studying Computer Science (M.S.) at the TUB. He worked on the collision avoidance algorithms for the UAV in the final project of his B.S. degree.

Prof. Dr.-Ing. Habil. Sahin Albayrak is the head of the chair Agent Technologies in Business Applications and Telecommunication. He is the founder and head of the DAI-Lab, currently employing about one hundred researchers and support staff.
Development of an RFID Inventory Robot (AdvanRobot)

Marc Morenza-Cinos, Victor Casamayor-Pujol, Jordi Soler-Busquets, José Luis Sanz, Roberto Guzmán and Rafael Pous

Abstract AdvanRobot proposes a new robot for inventorying and locating all the products inside a retail store without the need to install any fixed infrastructure. The patent-pending robot combines a laser-guided autonomous robotic base with a Radio Frequency Identification (RFID) payload composed of several RFID readers and antennas, as well as a 3D camera. AdvanRobot is able not only to replace human operators, but to dramatically increase the efficiency and accuracy in providing inventory, while also adding the capacity to produce store maps and product location. Some important benefits of the inventory capabilities of AdvanRobot are the reduction in stock-outs, which can cause a drop in sales and are the most important source of frustration for customers; the reduction of the number of items per reference, maximizing the number of references per square meter; and the reduction of the cost of capital due to overstocking [1, 7]. Another important economic benefit expected from the inventorying and location capabilities of the robot is the ability to efficiently prepare

M. Morenza-Cinos (B) · V. Casamayor-Pujol · J. Soler-Busquets · R. Pous
Universitat Pompeu Fabra, Roc Boronat 138, 08018 Barcelona, Spain
e-mail: [email protected]
URL: http://ubicalab.upf.edu
V. Casamayor-Pujol
e-mail: [email protected]
URL: http://ubicalab.upf.edu
J. Soler-Busquets
e-mail: [email protected]
URL: http://ubicalab.upf.edu
R. Pous
e-mail: [email protected]
URL: http://ubicalab.upf.edu
J.L. Sanz
Keonn Technologies S.L., Pere IV 78-84, 08005 Barcelona, Spain
e-mail: [email protected]
URL: http://www.keonn.com
R. Guzmán
Robotnik Automation S.L.L., Ciudad de Barcelona 3-A, 46988 Valencia, Spain
e-mail: [email protected]
URL: http://www.robotnik.eu

© Springer International Publishing AG 2017 A.
Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_12

on-line orders from the closest store to the customer, allowing retailers to compete with the likes of Amazon (a.k.a. omnichannel retail). Additionally, the robot makes it possible to: produce a 3D model of the store; detect misplaced items; and assist customers and staff in finding products (wayfinding).

Keywords Professional service robots · RFID · ROS · Inventory robots · Autonomous robots

1 Introduction

In this chapter a solution for smart retail that combines robotics and Radio Frequency IDentification (RFID) technology is presented. Traditional retail (a.k.a. "brick-and-mortar" retail) is facing fierce competition from on-line retail. While traditional retail still keeps some advantages (e.g. physical contact with the products or immediate fulfillment), on-line retail continues to offer more and more advantages that are increasingly appealing to customers (e.g. easy-to-find products, no stock outs, in-depth information, recommendations, user opinions, social network integration, etc.). Also, on-line retail has at its disposal a wealth of data about its customers' clickstream that can be leveraged through sophisticated analysis to offer personalized and targeted websites, against which a generic "one model fits all" physical store cannot compete. As a result, on-line retail is growing with double digits, while many retailers continue to close physical stores. RFID offers an opportunity for traditional retailers to fight back. If every product in the retail store is tagged with RFID, it is given an Electronic Product Code (EPC), which is universally unique for each item. By placing RFID equipment at the store, every relevant event of the product can be detected. Many leading retailers such as Kohl's, Decathlon, Inditex or Marks and Spencer have already deployed RFID technology in their stores.
The most obvious, and most common, application of RFID in the store is for inventory. Most commonly, RFID-based inventories are done by using handheld RFID readers, which store associates use to scan every shelf, rack and fixture in the store. A typical fashion store in a typical shopping mall, of about 1,000 m2 and about 10,000 items, can be completely inventoried by a single associate in under 60 minutes. The same process using barcode technology would typically require a team of 3–5 persons working for one or two full days (double or triple counts are typically necessary in this case to reach an acceptable accuracy). However, although the theoretical accuracy of RFID inventory using handheld readers is above 99%, retailers using this method report actual accuracies of between 80 and 90%. The difference lies in human errors. Inventory taking is a very tedious process, and the staff doing it frequently forget a shelf, an aisle, or an entire section. The layout of a retail store is typically not regular, and it is very easy, especially in larger stores, for associates to get confused and believe that they have already scanned a part of the store when they in fact have not. Also, the repetitive movements involved in scanning the store have been linked to injuries, raising an issue of health in the workplace. Whenever humans are faced with tedious repetitive physical tasks, robots are the ideal candidates to take them over, especially when there are health risks involved. Keonn Technologies, a manufacturer of RFID solutions for the retail sector, had the idea to combine a standard robotic base with an RFID payload composed of standard components to create a robotic inventory system for the retail store. The idea included some important insights on how to couple the navigation and the RFID systems to increase the efficiency and the accuracy of the inventory process.
In 2013 Keonn filed a patent and presented the first commercial prototype of an RFID inventory robot at the RFID Journal Live 2013 show, where it was selected as the winner of the “Coolest demo award” [15]. Since 2013 Keonn has taken the latest versions of AdvanRobot to the same show, where it has raised a lot of interest among retailers and in the RFID industry in general. During this time, Keonn, with the collaboration of the Ubiquitous Computing Applications Lab (UbiCA Lab) at Pompeu Fabra University, and the robotics company Robotnik has continuously improved the product, tested it extensively in large retail stores around the world, and established agreements with some of the most important players in the RFID for retail industry. AdvanRobot is the only robotic system for inventory designed by a multidisciplinary team of RFID specialists, robotics specialists, and academia. The resulting product, AdvanRobot, is now a part of Keonn’s portfolio of RFID solutions for retail. AdvanRobot is able to inventory a very large store during the 10–12 h in which a store is normally closed. For the same job, at least 4 associates with RFID handheld readers would be required, making the Return on Investment (ROI) very high. Additionally, AdvanRobot never forgets a shelf, an aisle or a section, and the measured accuracy has always been above 99.5% of all tagged products in the store. In fact, AdvanRobot is the most accurate instrument to perform an inventory. In addition, AdvanRobot is able to provide not only inventory but also the location of the products on the store layout and a 3D model of the store. This is considered of high value for retailers to detect misplaced items, to help customers find products and associates fulfill on-line orders. In this chapter, the following topics are covered: • First, we present a background section on RFID technology and its applications to retail. 
• Second, AdvanRobot's overview is given, including its design and architecture, analyzing the specific navigation strategies for inventorying a store and finishing with the human-robot interaction.
• Third, a short introduction to AdvanRobot simulation is provided.
• Fourth, we describe the results of the tests carried out in actual retail environments.
• Fifth, ongoing developments are explained. We introduce a framework for the exploration and mapping of 3D environments. The ROS package is named cam_exploration and the code is publicly available at UbiCALab's github account.1
• Sixth, we discuss the developments considered for future versions of AdvanRobot.

1 https://github.com/UbiCALab/cam_exploration.

Due to the nature of the project some of the core packages of AdvanRobot are not publicly available. Nonetheless, a set of packages for a basic simulation is available.2 A video3 of AdvanRobot in a store introduces its operation and main features.

2 Background

2.1 RFID Technology

The central component of RFID technology is the RFID tag, composed of a chip mounted on a low cost antenna (usually made from etched aluminium). When an RFID reader sends an interrogating wave, all tags within the reach of the reader's antenna respond with a unique code, which the reader communicates through a wired or wireless network to an information system that makes use of this code for a given application. Figure 1 illustrates the different components of a typical RFID system. In most cases, RFID tags are passive, and obtain the energy to respond by rectifying the interrogating signal wave. In some cases, besides a code, the RFID tag has a limited amount of user memory. RFID technology has been around for decades now. However, it was not until 1999, when the AutoID Center at MIT redefined the frequency, the protocols and the code standards [16], that RFID started to become a widely adopted technology, especially in the retail sector.
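Because every tagged item carries a universally unique code (its EPC), the core inventory step of the information system reduces to de-duplicating the stream of tag reads reported by the readers. A minimal sketch (plain Python; the read-event fields and EPC values are assumptions for illustration, not Keonn's actual data format):

```python
# Each read event is modelled as (epc, antenna_id, rssi_dbm, timestamp).
# The same tag is typically read many times from several antennas,
# so the inventory is simply the set of distinct EPCs seen.

def build_inventory(read_events):
    """Collapse raw RFID read events into a unique-item inventory."""
    inventory = {}
    for epc, antenna, rssi, ts in read_events:
        # keep the strongest read per EPC as a crude location hint
        if epc not in inventory or rssi > inventory[epc][1]:
            inventory[epc] = (antenna, rssi, ts)
    return inventory

reads = [
    ("urn:epc:id:sgtin:0614141.112345.400", 3, -58.0, 12.1),
    ("urn:epc:id:sgtin:0614141.112345.400", 7, -49.5, 12.3),  # same item, closer antenna
    ("urn:epc:id:sgtin:0614141.112345.401", 2, -63.2, 12.4),
]
inventory = build_inventory(reads)   # two distinct items despite three reads
```

Keeping the strongest read per EPC is one plausible heuristic for a location hint; real systems combine many reads with the robot's pose, as described later in the chapter.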
These standards were later acquired and are now managed by GS1,4 the global organization also managing commercial barcode standards. As opposed to previous standards that use the low and high frequency bands (LF and HF), the new standard uses the ultra high frequency band (UHF) [6]. The band in Europe (ETSI standard) is from 865.6 to 867.6 MHz, and in the USA (FCC standard) from 902 to 920 MHz. An RFID reader in the UHF band can read tags at a distance of up to 10 m, as opposed to less than 2 m in the LF and HF bands. RFID antennas have beam widths normally between 30 and 90◦ . In a typical scenario the reader can identify hundreds, sometimes thousands of tags simultaneously, for which the Gen2 protocol [4] incorporates anticollision protocols allowing read rates of hundreds of tags per second. The weight and battery constraints of handheld RFID readers limit the maximum power and in consequence the read range to between one and two meters. A robotic system, on the other hand, carries a high capacity battery, which can operate several readers at full power, each connected to several antennas, each of them with a read range of between 2–5 m. In consequence, a robotic RFID inventory system can have the equivalent reading capability of 10–20 handheld readers.

2 https://github.com/UbiCALab/advanrobot.
3 https://youtu.be/V72Ep4s9T4o.
4 http://www.gs1.org/.

Fig. 1 Components of a typical RFID system. The Reader interrogates the environment by sending an RF signal through the Antennas. Tags, attached to products, reply with their unique identifier. The Reader communicates the data to an Information System for its exploitation

2.2 Inventory Systems

Due to errors, theft, misplacements and other reasons, actual inventories diverge significantly from theoretical inventories on the shop floor, typically by 10–20% [8].
Most retailers will use barcodes for inventorying, but inventories based on barcodes are expensive, disruptive, can only be done every few months, and their accuracy is typically no higher than 95%. This situation results in frequent stock outs, frustrated customers, expensive preventive overstocking, and in the impossibility to source on-line orders directly from the stores, which would allow retailers to effectively compete with on-line retailers [14]. In contrast, RFID-based inventories are much more affordable, non-disruptive, and their accuracy is usually above 99%. There are several options to inventory a store based on RFID technology. First, handheld RFID devices may be used to accurately take inventory of objects tagged with RFID tags. Second, ceiling mounted readers with fixed or steerable beam antennas can be used to inventory RFID-tagged objects. Third, smart shelves or fixtures, incorporating RFID antennas and readers, can be used to continuously inventory the objects they contain, as long as they are tagged with RFID. And fourth, autonomous robots [13] or UAVs [17] can be used to inventory all objects in a space, also using RFID tagging. Handheld readers cannot, by themselves, provide any information about the location of the objects within the space being inventoried, while the other three methods can provide location with different degrees of accuracy. The first and fourth methods can provide frequent but non-continuous inventory, while the second and third can provide quasi real time inventory (a.k.a. "near" time inventory). The first and fourth methods do not require any fixed infrastructure installation, and the second and third methods do. The four methods also differ in the cost of hardware and the cost of labor they require. Table 1 summarizes the above comparison.

Table 1 Comparison of RFID inventory methods

Method                       Location accuracy   Inventory frequency       Fixed infrastructure   Hardware cost   Labor cost
Handheld reader              No location         Every few days or weeks   No                     –               –
Ceiling mounted readers      2 m                 Every few minutes         Yes                    –               –
Smart shelves and fixtures   50 cm               Every few seconds         Yes                    –               –
Autonomous robot             2 m                 Every day                 No                     –               –

3 AdvanRobot Overview

AdvanRobot is an autonomous mobile robot that takes inventory of RFID-labeled products in large retail stores. Therefore, by using AdvanRobot, taking inventory becomes an automated task. In addition, complementary features were revealed after its initial concept, for instance the location of the RFID-labeled products and the generation of 3D maps of the environment. This section is developed as follows: first, the AdvanRobot is described, detailing its design and characteristics; second, a high-level overview of the system architecture is given; third, the navigation strategies for inventorying a store are defined; and finally, the human-robot interaction is detailed.

3.1 Design

AdvanRobot is designed in two main systems: the robotic base and the payload. Briefly, the robotic base is a ROS-based autonomous mobile robotic base that is in charge of satisfying all the requirements that the payload needs for inventorying. It provides power, a safe and reliable navigation, and connectivity with the environment. The payload is the system in charge of performing the main task of the robot, which is taking inventory. In addition, it incorporates a web interface for the human-robot interaction that can run in any web browser. This interface allows the interaction with the user from a very high level, simplifying all tasks and ensuring that everything runs as required. The human-robot interface is detailed in Sect. 3.4.3.

Robotic Base

The robotic base is the model RB-1 manufactured by Robotnik.5 It is a circular base with differential wheels allowing excellent maneuverability in narrow aisles since its turning radius is 0. Moreover, it has a load capacity of 50 kg and provides high stability and damping.
However, note that the RB-1 base used includes ad-hoc modifications. The base consists of three subsystems: the traction subsystem; the brain; and the power subsystem. The traction subsystem includes two motorized and encoded wheels powered by servomotors, three omni-directional wheels, and two dampers for stability and overcoming floor irregularities. Due to its traction configuration it has differential drive capabilities. In addition, it has an emergency push (e-stop) button that cuts the servomotor power and immediately stops the robot. Secondly, the brain, composed of the computer and electronics subsystem, is in charge of controlling and connecting all the robot parts and providing all the intelligence required. Its main devices are the computer, a router that provides an access point for external connections, and the sensors. The only embedded sensor is an IMU with two gyroscopes, an accelerometer and a compass. The RB-1 base also uses peripheral sensors: an optical (RGB) and depth (D) camera (RGBD camera) placed on top of the payload in order to detect obstacles, and a laser range finder. Finally, it is also prepared for the installation of sonars in order to avoid those obstacles that cannot be detected by the laser range finder or the RGBD camera, such as mirrors, black surfaces and highly translucent materials, which can be present in target environments. All the peripheral sensor connectors are easily accessible for their connection and disconnection. Finally, the power subsystem consists of a lithium iron phosphate battery that provides more than 11 h of autonomy. It also includes a battery management system (BMS), which controls the charging and discharging of the battery, and the electronics for recharging on a charging station.

5 http://wiki.ros.org/Robots/RB-1_BASE.

RFID Payload

The RFID payload is the system in charge of taking inventory.
It consists of three main parts. First, AdvanRobot is equipped with 3 RFID readers,6 which control 4 antennas each. However, the robot can work with different configurations, using 1–3 readers combined with RFID multiplexers to control all the antennas. Second, AdvanRobot mounts 6 RFID antennas per side, summing up to a total of 12 RFID antennas.7 The antennas are placed side by side in a way such that their reading areas overlap. In this fashion, blind spots in the RFID readings are minimized, and there is a degree of redundancy in the RFID subsystem. This is paramount to ensure the critical inventory accuracy. In the configuration shown in Fig. 2, AdvanRobot is equipped with the aforementioned RFID antennas. However, other types of antennas can be used in order to achieve different reading behaviors and scanning patterns. Last, the structural subsystem has the main aim of providing physical support to the former, i.e. the RFID antennas and the RFID readers; in addition, the structural subsystem is foldable. AdvanRobot has been designed to read tags up to 2.75 m. As a result, its height is slightly above 2 m. Being foldable therefore allows it to traverse any door while being high enough to read all the products in a store. Besides, the RGBD camera is attached to the top of the RFID payload. The reason for that is the maximization of the usable field of view of the camera, which is crucial for obstacle avoidance. For a safe navigation, AdvanRobot should detect any obstacle up to its height. This is better achieved by observing the environment from the uppermost location of the payload. Both parts are interconnected by two USB ports for the RGBD cameras; an RJ45 port to interface the RFID system; and a connection for power supply. These connections are accessible through a small door in a side of the payload, making assembly and disassembly a very easy procedure.
Comparison with Other RFID Robots

A comparative analysis is done based on the information available regarding other commercial RFID-equipped robots. At the moment, such information is not extensive. The comparison highlights the features that are explicitly different between the involved robots, which to our knowledge are:

• Tory8 manufactured by MetraLabs
• StockBot9 manufactured by PAL Robotics

6 http://keonn.com/rfid-components/readers/advanreader-150.html.
7 http://keonn.com/rfid-components/antennas/advantenna-p22.html.

Fig. 2 AdvanRobot in operation in a store. At the bottom, the lower circular part is the robotic base. On top of it, the RFID system, which is foldable. Finally, at the top of the RFID system, the RGBD camera for obstacle detection

From a structural point of view, both AdvanRobot and Tory have a circular footprint of 25 cm radius; however, StockBot's footprint is not circular and its equivalent footprint radius is 35 cm. This directly impacts the minimum size of the aisles where the robot can navigate, limiting its versatility in some environments. Regarding height, AdvanRobot is above the others, but it can be folded. With respect to battery autonomy, Tory states providing 14 h, StockBot 8 h and AdvanRobot 11 h. In addition, AdvanRobot is the one that recharges the battery in the shortest amount of time: it requires 2 h, while StockBot needs 4 h and Tory from 3 to 6 h for a complete charge. The robots' operational availability is defined as the ratio between the time the robot is working and the total robot time (working plus charging time). Therefore, AdvanRobot accounts for an operational availability of 84.6%, Tory of 70% and StockBot of 66.6%. For instance, AdvanRobot would be operative for 84.6 h over a complete period of 100 h. The remaining 15.4 h would be used for battery charging. Focusing on the RFID system, StockBot has 8 integrated RFID antennas, while Tory only mentions that it has integrated RFID antennas.
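The operational availability figures above follow directly from the stated autonomy and charging times; the computation can be checked in a few lines (plain Python; for Tory, the worst-case 6 h charging time is assumed, since the text gives a 3–6 h range):

```python
def operational_availability(autonomy_h, charging_h):
    """Ratio of working time to total robot time (working + charging)."""
    return autonomy_h / (autonomy_h + charging_h)

advanrobot = operational_availability(11, 2)   # 11/13 ≈ 0.846
tory       = operational_availability(14, 6)   # worst-case 6 h charge: 14/20 = 0.70
stockbot   = operational_availability(8, 4)    # 8/12 ≈ 0.667
```

With Tory's best-case 3 h charge the same formula gives 14/17 ≈ 82%, so the 70% quoted in the text corresponds to the conservative end of the range.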
AdvanRobot uses 12 antennas, whose characteristics can be selected among different options, and from 1 to 3 readers, upon application and user request. Thus, AdvanRobot provides an excellent versatility to adapt the RFID system to the environment. Finally, to the best of our knowledge and from an operational point of view, AdvanRobot is the only one that is prepared to work by zones inside a shop floor. Such a feature is explained in Sect. 3.4. Table 2 summarizes the analysis.

8 http://www.metralabs.com/en/shopping-rfid-robot.
9 http://pal-robotics.com/ca/products/stockbot.

Table 2 Summary of the comparative analysis of commercial inventory robots

                                   AdvanRobot                 Tory                  StockBot
Height (cm)
Equivalent footprint radius (cm)   25                         25                    35
Battery autonomy (h)               11                         14                    8
Charging time (h)                  2                          3–6                   4
Operational availability (%)       84.6                       70                    66.6
RFID system                        1–3 readers, 12 antennas   integrated antennas   2 readers, 8 antennas

3.2 Architecture

The high-level architecture of the robot consists of 5 main blocks: User; Interface; Task manager; Navigation; and RFID. Figure 3 shows a schematic of the architecture. The Interface allows the communication between the user and the robot; it is the main component of the human-robot interaction. The Task manager basically translates the user requests into a set of actions. The Navigation block receives the actions and transforms them into commands for the movement of the robot. At the same time, the RFID system reads surrounding tags. The Navigation and RFID blocks interact in order to succeed in the selected task. The details of Navigation and RFID are explained in Sect. 3.3. Besides, the architecture follows the modularity of the robot's design. The Navigation block corresponds to the autonomous base control and the RFID block to the RFID payload.

Fig. 3 System's high-level architecture. Solid arrows indicate control while dashed arrows indicate control feedback. RFID plays a critical role in Navigation decisions in order to accomplish accuracy and time constraints
The Task Manager is implemented in a node named task_manager. It is the middleware that translates high-level user orders into lower-level control commands. It has been created to dispatch and monitor the tasks that the robot has to perform from a user perspective, meaning that it operates at a higher level than the Navigation and RFID blocks. task_manager communicates with the interface via ROS Services,10 in which the parameters of the service are the selected task and its options. In short, the node performs two main operations. First, it keeps the state of the robot, which is communicated to the user. Hence, the user knows the selected action's status and progress. Additionally, the state prevents any interaction through the interface that could interfere with the current task. This state assignment allows the robot to work as a simplified finite state machine. Second, it executes the selected task's actions, using the ROS actionlib stack.11 By means of actionlib, the node monitors the task and, if required, it can also preempt the actions. Also, task_manager is subscribed to other nodes that publish the state of relevant parts of the robot. This has two main benefits. On one hand, ROS message passing facilitates monitoring all the relevant information, from the state of the RFID readers to the temperature of the motors. On the other hand, task_manager is the node that centralizes all the information and presents it to the user in a comprehensible way via the interface. 3.3 Navigation AdvanRobot uses the ROS navigation stack12 for safe navigation in retail environments. AdvanRobot is configured to navigate in any wheelchair-accessible retail space, that is, spaces whose aisles are 70 cm wide or more [5, 9]. It uses a laser range finder for simultaneous localization and mapping, widely known as SLAM [2], and an additional RGBD camera for obstacle detection in 3 dimensions.
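The state-keeping behavior of task_manager described above can be illustrated with a plain-Python sketch of a simplified finite state machine; the state and task names below are illustrative assumptions, not the actual implementation:

```python
class TaskManager:
    """Simplified finite state machine mirroring task_manager's
    state-keeping role: a new task is only accepted when the robot
    is idle, and the current state is exposed to the interface."""

    def __init__(self):
        self.state = "IDLE"

    def start_task(self, task: str) -> bool:
        # Reject requests that would interfere with a running task.
        if self.state != "IDLE":
            return False
        self.state = task          # e.g. "RECOGNITION" or "INVENTORY"
        return True

    def finish_task(self) -> None:
        self.state = "IDLE"

tm = TaskManager()
tm.start_task("INVENTORY")         # accepted: robot was idle
tm.start_task("RECOGNITION")       # rejected: inventory already running
```

In the real node, the accepted task would additionally be dispatched and monitored through actionlib.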
It is sonar-ready for the detection of mirrors and materials that do not reflect light. Before deploying AdvanRobot, an environment survey identifies potential risks for navigation and determines the need to use sonars, which can be plugged in to clear the identified risks.
10 http://wiki.ros.org/Services.
11 http://wiki.ros.org/actionlib.
12 http://wiki.ros.org/navigation.
Fig. 4 Building blocks of AdvanRobot's navigation. In italics, the names of the ROS nodes involved in each of the sub-blocks
The navigation consists of a preparatory, human-assisted stage and a fully autonomous stage. The first is needed to get a baseline of the environment and is called the Recognition stage. During the Recognition stage AdvanRobot generates a map and records key spots for later navigation. Once the Recognition stage is completed successfully, AdvanRobot is ready for the Inventory stage, when the actual autonomous inventory taking is performed. In both stages RFID observations are used as inputs to support the optimal performance of AdvanRobot. Figure 4 shows schematically the Navigation parts explained next. Recognition Stage The aim of the Recognition stage is to provide a guided observation of the environment to AdvanRobot. By doing so, AdvanRobot learns the map of the zone intended for inventory and, by listening to the RFID readings, records the key spots where products are present. In practice, an operator brings AdvanRobot to a zone's initial spot and, using a remote control, moves AdvanRobot close to the products to inventory. At the same time, a map is generated using ROS gmapping13 and key spots are recorded by a purpose-developed ROS package: the goal_profiler. Inventory Stage AdvanRobot performs inventory taking during the Inventory stage. Simultaneously, it does a pre-computation of RFID reads as a preparation for the later offline location computation. The Inventory stage is triggered and controlled by the mission_manager node.
Following the triggering of an inventory, the key spots recorded during the Recognition stage start to be dispatched in the form of navigation goals. In order to optimally dispatch navigation goals during the Inventory stage, a ROS node, the goal_dispatcher, monitors the progress of navigation and rearranges the goals online.
13 http://wiki.ros.org/gmapping.
Fig. 5 Rqt diagram of the navigation controller node
For instance, if a key spot is visited unexpectedly due to path re-planning, its corresponding navigation goals are cleared and not dispatched again. Given a navigation goal, the linear and angular velocities sent to the motors controller are commanded by the rfid_navigation_controller node, which monitors the progress of RFID reads in order to compute the following velocities. The navigation_control node modulates the output of move_base14 in order to get the best inventory accuracy in the least time possible. For instance, if AdvanRobot moves at its maximum speed, which is 0.4 m/s, in an environment with a high density of RFID-labeled products, the RFID system has no time to identify all the products. Thus, the navigation needs an added control layer that takes into account the progress of RFID reads. Otherwise, inventory accuracy requirements are not met. Figure 5 shows the rqt diagram that relates the navigation_control node to the move_base node. In addition, the node location_precompute matches RFID reads to the corresponding identification antenna pose using ROS transform15 lookups. Each RFID read is stored along with the antenna pose at the time of the identification for a posterior location computation. The computation of location is an ongoing development explained in Sect. 6.2. Properly linking the output of the Recognition stage to the subsequent Inventory stages is addressed by the mission_manager node. A key system design feature is that AdvanRobot follows a divide-and-conquer strategy to conduct inventory missions.
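The idea behind modulating move_base velocities with the progress of RFID reads can be sketched as a simple scaling rule; the threshold and scaling law below are illustrative assumptions, not the actual controller:

```python
def modulate_speed(nominal_speed: float, reads_per_s: float,
                   target_reads_per_s: float, v_min: float = 0.05) -> float:
    """Slow the robot down while the RFID read rate is still high, so
    that no tags are missed; resume the nominal speed otherwise.

    nominal_speed      -- speed commanded by move_base (max 0.4 m/s)
    reads_per_s        -- new tag identifications per second
    target_reads_per_s -- rate below which the area is considered read
    v_min              -- floor speed so the robot never fully stalls
    """
    if reads_per_s <= target_reads_per_s:
        return nominal_speed
    # Scale down proportionally to how far above the target we are.
    scale = target_reads_per_s / reads_per_s
    return max(v_min, nominal_speed * scale)
```

For example, at the maximum 0.4 m/s, doubling the target read rate would halve the commanded speed under this rule.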
An important learning from field experience is that it is preferable to define zones of less than 1000 m2 instead of working with a single larger zone.
14 http://wiki.ros.org/move_base.
15 http://wiki.ros.org/tf.
This is explained by three main reasons. The first is the ease of operation by the user in case an inventory of a specific zone or set of zones is needed. This has been validated with users during on-site pilots. The second is a less demanding computational cost: working with big and precise maps implies a high computational cost. The third is convenient modularity in the face of considerable layout changes and the consequent need for a re-recognition. Hence, AdvanRobot is designed to work by zones, following a divide-and-conquer strategy. Given a set of zones, any combination is eligible and is defined as an Inventory mission. An Inventory mission comprises a set of consecutive Inventory stages. In this way, the user can select the zones to inventory as needed. Working with a set of zones requires proper management of the Recognition stage output, which consists of maps and goals. For each zone, a pair of map and set of goals is kept. Thus, an Inventory mission requires dispatching the proper map and goals in the appropriate order and timing to the navigation layer. The mission_manager is the ROS node that performs the tasks of triggering and monitoring Inventory stages according to a defined Inventory mission. The divide-and-conquer strategy is implemented by placing a zone identifier at the beginning of each defined zone. The zone identifier is the reference for AdvanRobot to know the current zone. At the moment, the zone identifier is a QR code which is detected by the RGBD camera using the ROS package ar_track_alvar.16 The automatic identification of zones enables the user to perform the Recognition stage and afterwards launch an Inventory mission without assistance.
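The zone bookkeeping performed by mission_manager can be sketched as follows; the zone names, data layout and file names are hypothetical, but the chaining mirrors the QR-code linking described above (the end of a zone is the start of the next):

```python
# Each zone recorded during Recognition keeps a (map, goals) pair,
# and zones are chained via their identifiers.
zones = {
    "Z1": {"map": "z1.pgm", "goals": ["g1", "g2"], "next": "Z2"},
    "Z2": {"map": "z2.pgm", "goals": ["g3"],       "next": "Z3"},
    "Z3": {"map": "z3.pgm", "goals": ["g4", "g5"], "next": None},
}

def plan_mission(zones, start, n):
    """Return the (map, goals) pairs for n consecutive zones,
    following the zone chaining starting at `start`."""
    plan, zone = [], start
    for _ in range(n):
        if zone is None:
            break
        z = zones[zone]
        plan.append((z["map"], z["goals"]))
        zone = z["next"]
    return plan

mission = plan_mission(zones, "Z2", 2)   # inventory Z2 then Z3
```

An Inventory mission then amounts to dispatching each (map, goals) pair to the navigation layer in order.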
Zone identifiers are also used by AdvanRobot to perform map transitions autonomously, commanded by the mission_manager, since they define the relations between zones. Furthermore, QR codes can be used as a support to AdvanRobot's location recovery and correction, which is discussed in Sect. 3.3.4. An essential feature for a user is knowing when the Recognition stage needs to be rerun. By using the ROS navigation stack properly, AdvanRobot is able to adapt to changes in the layout. However, if layout changes are significant, AdvanRobot may not be able to output a reliable localization and Inventory missions can fail. To determine the need for rerunning a Recognition stage, an algorithm is run by the node layout_watchdog. The algorithm uses as inputs, mainly but not only, the success and time to reach goals and the reliability of the localization during the mission. Exploration. The main challenge of the Recognition stage is suppressing the need for human assistance. For that, exploration has been considered and preliminary tests conducted. However, in retail environments, which are generally a big extension of interconnected aisles, the time to complete an unassisted exploration is prohibitive. Compared to an unassisted exploration, the current approach has two main advantages:
16 http://wiki.ros.org/ar_track_alvar.
optimizing the time it takes to recognize a zone; and empowering a user with an easy way to define the zones of interest for inventory. For the latter, an unassisted exploration would require a posterior manual intervention or the addition of beacons for the robot to identify the zones of interest. The ideal case would be an exploration without human assistance, but assisted by beacons or other technologies. Possible means for assisting explorations without human intervention include those discussed next for supporting localization. At the moment, exploration is under testing, see Sect. 6.1.
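A possible shape for the layout_watchdog decision, combining the inputs mentioned above (goal success, time to reach goals, localization reliability); the thresholds and the exact combination rule are illustrative assumptions, not the deployed algorithm:

```python
def needs_rerecognition(goal_success_rate: float,
                        avg_goal_time_ratio: float,
                        localization_reliability: float) -> bool:
    """Heuristic trigger for rerunning the Recognition stage.

    goal_success_rate        -- fraction of navigation goals reached
    avg_goal_time_ratio      -- observed / expected time to reach goals
    localization_reliability -- confidence reported by the localizer (0..1)
    All thresholds below are illustrative, not the deployed values.
    """
    return (goal_success_rate < 0.8
            or avg_goal_time_ratio > 1.5
            or localization_reliability < 0.6)
```

Any single degraded indicator is enough to recommend a re-recognition, erring on the side of caution.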
The exploration is being developed with the combined aim of granting AdvanRobot enhanced autonomy and producing 3D maps. Localization Robustness. AdvanRobot requires good localization accuracy in both the Recognition stage and the Inventory stage in order to complete its tasks. During the Recognition stage AdvanRobot does not know the map of the environment; hence, it is executing a SLAM algorithm. It has been noticed that in very regular environments and large open spaces the localization of AdvanRobot is not reliable enough to generate faithful maps. Moreover, at the beginning of the Inventory stage, AdvanRobot needs to deal with the problem known as kidnapping [3]. To cope with these issues, landmarks can act as absolute positioning references. Accordingly, the current implementation makes use of QR codes. However, the detection of QR codes relies on a direct line of sight and on lighting. Alternative means to support localization include laser reflectors, Bluetooth beacons and Battery-Assisted Passive (BAP) RFID tags. Robustness to layout changes. Significant layout changes (or the accumulation of minor layout changes) can significantly impact the performance of AdvanRobot. Faced with such changes, AdvanRobot may not be able to reliably localize itself in the environment and may fail to complete a mission. When this happens, not only do the mission failure consequences have to be assumed, but the Recognition stage also needs to be rerun. An interesting challenge is granting AdvanRobot the capability of modifying the maps and goals of a zone at the same time as it is running the Inventory stage. Hence, after several inventory iterations of a zone there would no longer be the divergence between the original observations and the actual layout that exists at the moment. In practice, at every Inventory stage, the zone's Recognition stage observations would be updated and the impact of cumulative layout changes minimized.
This would be equivalent to an assisted exploration, the assistance being the previous observations of the zone. In this way, only a very first human-assisted Recognition stage would be needed. Inventory and Location Navigation Strategies. At the moment, the navigation is optimized for the compromise between time and inventory accuracy. However, a precise RFID location requires constraints to be met in terms of navigation [11]. Combining inventory and location constraints in a single navigation strategy is one of the main challenges for navigation control. Next comes the addition of constraints for a complete 3D mapping of the environment. Optimally combining the constraints for inventory, location and 3D mapping would maximize the valuable outputs of AdvanRobot and, at the same time, minimize the time invested in the mission. 3.4 Human-Robot Interaction Human-AdvanRobot interaction mainly consists of two operational procedures that simplify the user experience and minimize human errors. Both operational procedures are guided and executed by means of a control interface described next. The first procedure empowers the user to launch a Recognition stage in order to create a map of the environment and get a set of indicative goals to follow when doing inventory. The second procedure lets a user launch the inventory of a sequence of selected zones, called an Inventory mission. The second procedure can be scheduled and managed remotely. The specific challenges and specifications related to the human-robot interface are explained next. AdvanRobot is a system that is especially suitable for large stores. Nevertheless, taking inventory of the whole store in a single mission is not always possible due to time constraints. Moreover, the user might request to take the inventory of only a collection of specific products. To cope with this, two key aspects of AdvanRobot operations are introduced: • The division of the shop floor.
The shop floor is separated into zones complying with the following:
– Zones should contain a family or a set of related families of products.
– Zones should be between 750 m2 and 1500 m2.
– Zones should encompass easily identifiable architectural features. For instance, it is not recommendable to define a zone as an island of hangers in the middle of a store, due to the problems that can arise in referencing the zone and in the robustness of the robot's localization.
Therefore, for each of the zones the robot keeps a separate map interlinked with the other zones' maps. The reason for this has been discussed previously in Sect. 3.3.
• Zones are identified using visual cues, in this case QR codes. The linking and identification of defined zones is done using QR codes placed at the start and at the end of each zone. Notably, the end of a zone is always the beginning of the following zone. Consequently, all the zones are interlinked and any sequence of zones can be selected for inventorying.
These two key aspects allow the user to easily select zones for inventorying and recognition. If the layout of the shop floor has changed considerably and the re-recognition of a zone is required, only that specific zone will need to be re-recognized, saving AdvanRobot time as opposed to having to re-recognize the whole area (the sum of all the individual zones). The user can identify such a zone easily since it is marked at its beginning and end by a QR code. Launching the Recognition Stage In order to launch the Recognition stage, the sequence of steps the user has to follow is:
• Place the robot in front of the starting QR code of the zone. Doing so, AdvanRobot recognizes which zone is about to be recognized and informs the user on the interface for confirmation.
• By means of the human-robot interface, the recognition is launched by pressing the button Start Recognition.
• Guide the robot through the zone's spots of interest, those intended for inventorying.
While it is being guided, AdvanRobot records key spots where it is reading RFID tags. Later, the key spots are used to guide the inventory mission.
• When the RGBD camera detects the final QR code of the zone, the interface pops up the options Finish Recognition and Continue Recognition. The first allows the user to end the process; the second is used to resume the guiding process in case it is not yet over even though the final QR code has been detected.
• If the option Finish Recognition is pressed, the map and key spots are stored and the Recognition stage processes are stopped.
Launching an Inventory Mission There are two procedures to start an Inventory mission. The first, which minimizes the user intervention, starts and ends AdvanRobot at its charging station. Given that the human-robot interface is a web application, it can be accessed remotely. This empowers the user to launch Inventory missions programmatically and remotely. The second is used when the user requires starting an inventory at a specific zone:
• Place the robot in front of the starting QR code of the first zone.
• Select the list of zones that comprise the Inventory mission.
• By means of the human-robot interface, the Inventory mission is launched by pressing the button Start Inventory.
• From this moment AdvanRobot is completely autonomous and its status and progress can be monitored on the human-robot interface.
Human-Robot Interface The robot's interface allows the user to interact with the robot in an intuitive and painless manner and provides feedback on its status and progress. Figure 6 shows a snapshot of the human-robot interface. The interface guides the user along a sequence of steps that ensure the robot is prepared for the selected task. For instance, it gives guidelines to the user about the proper placement of the robot in front of a QR code, or it notifies the user to pull the e-stop button when needed. Fig.
6 AdvanRobot interface
Once all the steps have been successfully completed by the user following the interface guidelines, the specific selected task and its parameters are communicated to the task_manager node (see Sect. 3.2.1) by means of ROS Services. During the task commission, the interface provides feedback by showing the progress of the task to the user. For instance, during the Recognition stage the interface shows the map together with the key spots that are being recorded; and when the robot is performing an Inventory mission it shows the progress of RFID readings and the progress of the mission itself. In addition, the interface includes other relevant indicators. For instance, the snapshot of the interface shown in Fig. 6 indicates that the robot is fully charged, with the green circle on the bottom right corner, and properly connected to the internet, with the white symbol on the bottom left corner. Also, it shows that the robot and the RFID systems are connected, and that the robot status is IDLE; hence, any task can be triggered by the user. The human-robot interface is a web application built using HTML5 and Javascript. The communication with the ROS Master is accomplished using rosbridge_suite,17 a meta-package that provides the definition and implementation of a protocol for ROS interaction with non-ROS programs. Rosbridge is implemented using WebSocket as a transport layer and provides an API which uses JSON for data interchange. Finally, it also uses the ROS package web_video_server18 for streaming the video of the RGBD camera. 3.4.4 In a large retail store there are WIFI blind spots. Therefore, the connectivity with the robot through an infrastructure network can be lost unexpectedly. If a user needs to interface with the robot and the infrastructure connection is not available, without a backup connection the interaction becomes impossible.
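Over rosbridge, a service call from the web interface boils down to a single JSON message sent over the WebSocket. A sketch of such a payload, using the rosbridge protocol's "call_service" operation; the service name and argument fields are hypothetical, not AdvanRobot's actual API:

```python
import json

# rosbridge protocol "call_service" operation, as sent over the WebSocket.
start_inventory = {
    "op": "call_service",
    "service": "/task_manager/start_task",   # hypothetical service name
    "args": {"task": "inventory", "zones": ["Z1", "Z2"]},
}

# The interface serializes the message to JSON before sending it.
payload = json.dumps(start_inventory)
```

The JSON transport is what allows the plain HTML5/Javascript interface to drive ROS nodes without being a ROS program itself.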
In order to guarantee a responsive connection at any store location, the robot includes two wireless links: one as a client and the other as an access point. Usually, the robot is linked to the infrastructure network as a client, enabling its remote access and control. Moreover, the robot periodically uploads relevant mission and status data to the cloud, keeping a historical log that can be reviewed even when the robot is not online. In case the infrastructure network is not available to control the robot or to know its status, AdvanRobot can be interfaced by means of its access point. As opposed to the infrastructure connection, this is available as long as the user stays within the robot's WIFI range. The roaming between links is performed automatically by the robot's interface, always giving priority to the robot's access point. Hence, there is a valuable degree of redundancy in the robot's connectivity. 4 Simulation The end-to-end simulation of the system has been set up using ROS and Gazebo. Simulation has been used for the validation of functionality in terms of navigation, operations and human-robot interaction. Yet, a realistic simulation of RFID is not available given that its physical model is complex. RFID electromagnetic propagation suffers strongly from the multipath effect, which means the RFID signal rebounds and is attenuated multiple times depending on the characteristics of the scenario. Modeling such behavior means taking into account each and every item and its characteristics within the reach of every single electromagnetic wave. At RFID frequencies not even ray tracing produces satisfactory results. Only a full finite-element simulation of the entire environment, prohibitive from a computational cost point of view, would output reasonable simulation results.
17 http://wiki.ros.org/rosbridge_suite.
18 http://wiki.ros.org/web_video_server.
Fig. 7 Simulation of AdvanRobot at the moment of initiating an Inventory mission.
On the left, a view of AdvanRobot standing in front of an initial QR code. On the right, the corresponding view on the interface
Even though an RFID sensor is included for simulation in Gazebo, it is not implemented considering all the RFID simulation complexities. In conclusion, it is not possible with the available tools to simulate a realistic target scenario for the use case. Accordingly, the simulation engine used for the RFID reads is not a physical but a probabilistic one. In this manner, only the throughput of RFID reads can be set to behave analogously to reality, which works for the validation of the coding of navigation strategies but not for the validation of their convenience regarding inventory accuracy and time. Hence, the validation and tuning of navigation strategies for an optimal compromise between inventory accuracy and time can only be performed in actual physical scenarios. A set of packages for a basic simulation of the system can be found in the UbiCALab github repository19 and a snapshot of the simulation is shown in Fig. 7. 5 Experimental Results AdvanRobot has been tested periodically in retail environments in every design iteration. The last version of AdvanRobot has been validated for a duration of 2 months at a retailer's facility as the preparation for a subsequent pilot. The validation targeted AdvanRobot's navigation on the shop floor; AdvanRobot's RFID identification accuracy; and the operation by store associates after a training.
19 https://github.com/UbiCALab/advanrobot.
Development of an RFID Inventory Robot (AdvanRobot)

Table 3 Maximum and minimum inventory times of the complete store throughout several iterations

               Time      Distance (m)   Effective speed (m/s)
Minimum time   23:41:33  4,485          0.052
Maximum time   31:50:51  5,422          0.047

5.1 Navigation Validation The navigation in a retail environment is not trivial due to the characteristics of shop floors.
The main concerns at the start were the validity of the layout for a robust localization of the robot; floor materials and discontinuities; the ability to plan paths optimally; the effective speed of an inventory given the intricate configuration of aisles; and the effectiveness and negotiation of navigation commands. AdvanRobot has been in operation 8 h per night for 40 nights in a 7500 m2 store. Table 3 shows the maximum and minimum inventory times of the complete store throughout the iterations. The effective speed of AdvanRobot at inventorying is roughly 0.05 m/s, which is satisfactory but still the main figure to improve. The effective speed is compromised by the constraints of the RFID system and, at the same time, by the complexity of the layout for navigation. None of the initial concerns were found critical in completing inventory missions. However, the robustness of localization is sometimes compromised by the lack of structural features within reach of the robot's observations. This is not a matter of installing more powerful sensors for localization (such as a longer-range laser), since it is usual for the robot on the shop floor to end up surrounded by display furniture. While the impact of this has not been critical to date, it is considered a key aspect to improve. There are two main approaches to tackle localization robustness. On one hand, improving localization algorithms. On the other hand, providing localization algorithms with additional observations of the environment. 5.2 RFID Identification Accuracy Measuring inventory accuracy requires a baseline for comparison. The ideal case is that of manually counting each RFID label, which is known in retail as a fiscal inventory. This seldom happens throughout a year given the workforce needed to count up to hundreds of thousands of items, a usual amount in a big retail mall. Consequently, a less demanding baseline in terms of man-hours is used.
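The effective speeds in Table 3 are simply the distance covered divided by the wall-clock inventory time; reproducing them from the tabulated figures:

```python
def effective_speed(distance_m: float, hh: int, mm: int, ss: int) -> float:
    """Distance covered divided by total inventory time, in m/s."""
    return distance_m / (hh * 3600 + mm * 60 + ss)

# Figures from Table 3: both come out at roughly 0.05 m/s.
fastest = effective_speed(4485, 23, 41, 33)
slowest = effective_speed(5422, 31, 50, 51)
```

Note that the longer run covered more distance yet yielded a lower effective speed, consistent with the text's remark that effective speed is the main figure to improve.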
Currently, retailers use RFID handheld inventory devices for stock counting. Therefore, one of the references to compute the robot's accuracy is the output of handheld devices. Moreover, retailers generally keep an inventory record which is an estimation based on item inputs and outputs but not on actual counts of stock on the shop floor. Even though such estimated inventory records diverge from reality quickly over time, they are still a good baseline for an arbitrated accuracy comparison. In conclusion, the two baselines that are used to measure the robot's accuracy are handheld devices and estimated inventory records. In order to measure the accuracy using the output of handheld devices, the baseline is computed as the union of the sets of items identified by AdvanRobot and by the handheld in a given zone. Table 4 shows the results for a set of tests at selected zones. A higher AdvanRobot accuracy is noticeable in all the compared cases. Interestingly, in the case of Women's wear the handheld accuracy is significantly lower. Likely, the explanation for that is human error during handheld inventory taking. One of the main advantages of using a robot for inventory taking is preventing such oversights. Besides, AdvanRobot's accuracy was measured using an estimated inventory record of the whole retail store as baseline. In this case, the baseline itself is less accurate, which has an impact on the robot's measured accuracy. One of the main reasons is items that are reported to be on the shop floor but are actually at back stores not visited by the robot.

Table 4 AdvanRobot and handheld comparative accuracy

Product type        Amount of RFID labels   AdvanRobot's accuracy (%)   Handheld accuracy (%)
Men's wear          39,671
Women's wear        22,277
Women's underwear   8,778
Men's underwear     1,055
Jeans               2,027

Table 5 AdvanRobot accuracy compared to the estimated inventory record

Estimated inventory count   Robot count matching   Robot count in excess   Accuracy (%)
209,465
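The handheld comparison described above uses as its baseline the union of the robot's and the handheld's reads; written out explicitly (the EPC identifiers below are dummy values for illustration):

```python
def accuracy_vs_union(device_reads: set, other_reads: set) -> float:
    """Accuracy of a device against the union-of-both baseline, as used
    for the handheld comparison: items found / items known to exist."""
    baseline = device_reads | other_reads
    # device_reads is a subset of the baseline by construction.
    return len(device_reads) / len(baseline)

robot    = {"epc1", "epc2", "epc3", "epc4"}
handheld = {"epc2", "epc3", "epc5"}
robot_acc    = accuracy_vs_union(robot, handheld)    # 4/5 = 0.8
handheld_acc = accuracy_vs_union(handheld, robot)    # 3/5 = 0.6
```

Under this measure, a device is penalized exactly for the items the other device found but it missed.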
Note that estimated inventory records report stock keeping units (SKUs) and quantities (counts) instead of unique item identifiers. This means that the comparison is not direct. Table 5 shows the accuracy of AdvanRobot measured at a store with more than 200,000 RFID-labeled items. The column Robot count matching shows the count of product SKUs identified by the robot matching the estimated inventory count. The column Robot count in excess shows the amount of references identified by the robot with a count higher than estimated. The accuracy measure is considered satisfactory by the retailer given the nature of the baseline, which is itself divergent from the actual stock. Interestingly, the data set showed an excess of 19,938 references identified by the robot and not present in the estimated inventory record. This means that the robot identified product references that were not known to the estimated inventory record for some unidentified reason. In conclusion, the validation of RFID identification accuracy was successful in all the cases. Noticeably, AdvanRobot always outputs an accuracy above 99.5%. This is even more remarkable given that the environment is highly dense in terms of RFID-labeled products. In the exposed case there were over 200,000 RFID-labeled items on a 7500 m2 surface. 5.3 Operation by Store Associates The suitability of the operational design and the usability of the human-robot interface are of utmost importance. For that, AdvanRobot was handed to store associates, after a training, for its use for a month during the pilot's preparation. On the operational side, the users' flexibility requirements are noteworthy. Each user has their own specifications depending on the details of the use case. For instance, the use of zones has proven useful in some cases while it was not necessary in others. A convenient approach is using a modular design ready for quick customization.
Since AdvanRobot is designed operationally based on a divide-and-conquer strategy, adjustments in the field are easy and quick to apply. A more complex question is human-robot interaction, since potential operators include non-skilled associates. For that, the design of a user-friendly mobile app for AdvanRobot's control and monitoring is crucial. We have noticed an initially steep learning curve, mainly due to the lack of familiarity with advanced technology and a consequent low acceptance. Thus, while the interface has not presented remarkable issues, a good communication strategy to aid in the acceptance of a robotic solution on a shop floor is key for the success of its deployment. 6 Ongoing Developments 6.1 Exploration for 3D Mapping Building a 3D model of an environment can be done with robots equipped with RGBD cameras. Combined with the ability to locate products, it opens the door to a range of interesting possibilities, such as measuring the impact of the placement of products, furniture and their combination on sales; building a virtual store with the aim of linking the offline to the online world; or verifying the layout, planogram and signage of a store. For these reasons, the generation of 3D maps is a use case of interest to potential users.
Fig. 8 Gazebo simulation of an exploration using the exploration framework. The green cells represent the frontier between the known and the unknown space. The red arrow points at the selected exploration goal
Currently, 3D mapping works by means of the ROS package rtabmap_ros20 and the exploration for 3D mapping has been validated in simulation. Given the amount of factors that influence the output of an exploration, a ROS exploration framework has been developed in order to provide a fast and easy way of measuring the performance of different exploration approaches. With the aim of generating a complete 3D map of the environment, sweeping all the space of interest is required.
For that, a proper exploration strategy has to be applied. By using a 3-dimensional exploration, the completeness of the observations of the space needed to generate the 3D map is assured. Note that while the exploration considers the 3 dimensions of the space, the navigation is limited to 2 dimensions. A number of mature techniques exist tackling the robot exploration problem. One family is frontier-based exploration, which has long been exploited since its introduction in [18]. Frontiers are regions on the boundary between open space and unexplored space. By selecting a certain frontier as the exploration target, the complete exploration of the environment is ensured. The exploration framework is intended for the frontier-based exploration technique (Fig. 8). The setup for the simulation using the developed framework includes two RGBD cameras. The second camera introduces an extra source of point clouds from a different perspective. Hence, objects are scanned from more than a single pose, improving the 3D models. Furthermore, extra cameras are also beneficial for navigation purposes, as the consequent increase in field of view adds valuable redundancy to obstacle detection. The addition of more cameras is considered as they supply extra observation sources, beneficial for the completeness and resolution of the 3D model.
20 http://wiki.ros.org/rtabmap_ros.
Fig. 9 Main cam_exploration structure with its main libraries
ROS Package: Exploration Framework As exploration for 3D mapping is a novel paradigm, the need for versatility in testing different strategies is a key requirement. Hence, to choose the appropriate sequence of 2D navigation goals, a frontier-based navigation framework has been developed as a ROS package (cam_exploration), which is publicly available at the UbiCALab github account, see fn. 4. The main source of information used for the exploration is the projection of the RGBD camera point clouds on the ground.
Obtaining this ground projection is achieved using rtabmap_ros, a package based on the work presented in [10]. Basically, the package provides a whole SLAM implementation for point cloud data. The data flow starts with the RGBD readings from the sensors, which are published as ROS point cloud messages. These messages are used by the rtabmap node to build a 3D model and to compute its projection on the ground as a map. This allows the differentiation of unexplored regions from explored ones. At this point, the cam_exploration node uses the projection on the ground for exploration.

The cam_exploration code structure is shown in Fig. 9 with the developed libraries it contains. All the map-related information is handled by the map_server library. The node also provides visual information of its state using markers, which are handled by the marker_publisher library. To keep track of the robot location and handle the interaction with the move_base node, the robot_motion library is used.

An important feature of this framework is its modularity. The main strategic decisions of an exploration that can be configured are:

• Replanning. To decide whether to send a new goal for exploration, a set of replanning conditions is used. Each condition represents a situation in which it is desirable to send a new goal. Such conditions can be combined, so that their combination is what actually determines whether to send the new navigation goal. The combination method is an OR operation, so if any condition is met, the robot sends a new goal. This prevents the robot from getting stuck and takes advantage of the information received while heading to the goal. The current implementation includes the following replanning conditions:

– not_moving: The robot is currently not heading to any goal.

– too_much_time_near_goal: Spending too much time near the goal. It votes for replanning if the robot has spent some time near its current goal and is properly oriented with respect to the goal.
– isolated_goal: The goal is not close to any frontier. It is activated when none of the cells in an arbitrarily large neighborhood of the goal corresponds to a frontier.

• Frontier evaluation. The main policy to be studied in frontier-based exploration is the choice of the goal frontier among the complete set of frontiers. To achieve that, a cost function is usually defined taking some criteria into account. Frontier evaluation methods are defined, which can be combined in a weighted sum to provide the final evaluation. The implemented frontier evaluation functions are:

– Maximum size (max_size). It favors larger frontiers over smaller ones.

– Minimum euclidean distance (min_euclidian_distance). It favors frontiers which are closer to the robot position, regardless of the obstacles in between.

– Minimum A∗ distance (min_astar_distance). The A∗ algorithm finds a minimal-cost path from a start point to a goal point in a grid. This optimal path is measured for each frontier from the robot's location and used as a distance measure. The function favors the frontiers with shorter distances in this sense.

• Goal selection. After a frontier is selected as the next exploration target, choosing a proper 2D navigation goal is not trivial, especially when working with projected point clouds. The actual 2D navigation goal is selected such that it is within the selected frontier and the robot cameras face the unexplored zone. The current implementation uses the frontier middle point (mid_point). In practice, any function can be used to select the frontier point. A likely candidate is the closest frontier point to the robot location.

All the options can be configured in the parameter server, so a simple YAML file can be used to describe the exploration strategy in use.

ROS API

In the following, the node's main subscriptions, publications and parameters are described. Note that the actual 3D map is published by the rtabmap node.

Subscribed Topics.
• /proj_map (nav_msgs/OccupancyGrid) Incoming map from rtabmap, consisting of 3D camera point cloud projections.

Published Topics.

• /goal_padding (visualization_msgs/Marker) Region considered as the goal neighbourhood for robot-goal proximity purposes.
• /goal_frontier (visualization_msgs/Marker) Target frontier.
• /goal_marker (visualization_msgs/Marker) Selected goal point.

Parameters.

• /cam_exploration/frontier_value/functions (list(string), default: []) List of frontier evaluation functions to be used. Possible values are max_size, min_euclidian_distance and min_astar_distance.
• /cam_exploration/<function>/weight (double, default: 1.0) Value used to weight the function <function>.
• /cam_exploration/min_euclidian_distance/dispersion (double, default: 1.0) Degree of locality of the function min_euclidian_distance.
• /cam_exploration/min_astar_distance/dispersion (double, default: 1.0) Degree of locality of the function min_astar_distance.
• /cam_exploration/minimum_frontier_size (int, default: 15) Minimum number of cells for a frontier to be considered a target candidate.
• /cam_exploration/goal_selector/type (string, default: "mid_point") Way of choosing one of the target frontier points as the target point. Only mid_point is implemented.
• /cam_exploration/distance_to_goal (double, default: 1.0) Distance between the actual 2D navigation goal and the target frontier point. Should be close to the usual distance from the robot footprint to the nearest 3D camera point cloud projection point.
• /cam_exploration/replaning/conditions (list(string), default: []) List of replanning conditions to be applied. Possible options are not_moving, too_much_time_near_goal and isolated_goal.
• /cam_exploration/too_much_time_near_goal/time_threshold (double, default: 0.3) Maximum time in seconds allowed for the robot to be near a goal in the too_much_time_near_goal replanning condition.
• /cam_exploration/too_much_time_near_goal/distance_threshold (double, default: 0.5) Minimum distance from the goal at which the robot is considered to be near it in the too_much_time_near_goal replanning condition.
• /cam_exploration/too_much_time_near_goal/orientation_threshold (double, default: 0.5) Maximum orientation difference between that of the robot and that of the goal to allow replanning in the too_much_time_near_goal replanning condition.
• /cam_exploration/isolated_goal/depth (int, default: 5) Minimum rectangular distance from the goal to its nearest frontier allowed without replanning in the isolated_goal replanning condition.

6.2 Location of RFID Items

The location of RFID-labeled items is a topic of current interest [12], and the contribution of robotics is paramount, since it allows the identification of items from multiple locations while knowing precisely the coordinates of those locations. Combining the latter with the detection model of the RFID sensor and applying proper probabilistic algorithms can output reasonable locations for the RFID-labeled items. At the moment, an algorithm for the location of items is under validation. The estimated accuracy of the location algorithm is between 1 and 2 m, which is expected to be improved. The basic idea for improving the accuracy is using extended detection instances and enhanced observations of the environment. For that, a precise location algorithm is being explored.

7 Future Work

In this section, a set of features in an exploratory or early development stage is discussed.

7.1 Collaborative Inventorying

The maximum area that AdvanRobot can cover in a night shift is highly dependent on product density and the complexity of the store layout. In sections with many products per square meter, AdvanRobot must slow down to allow enough time for the RFID system to read the thousands of tags that may be visible to the robot from a single pose.
Another limiting factor may be sections with very narrow and/or irregular aisles, in which the effective speed of AdvanRobot is reduced. As a result, each section of the store will require a minimum time for AdvanRobot to inventory it. It may happen that a single robot is not able to inventory the entire store in a day. In this case there are two options: to complete the inventory over several days, or to employ a multi-robot network. Several robots may benefit from machine-to-machine communication to complete the inventory. This approach is not only more general and flexible, but also much more robust, as one robot may complete the job of another robot that has malfunctioned or run out of battery.

7.2 UAVs and AdvanRobot Collaborative System

AdvanRobot achieves a 99.5% accuracy taking inventory in shop floors that are compliant with its navigation requirements. In order to extend the target scenarios, for instance to warehouses and distribution centers, and aiming at higher accuracy rates, collaboration with UAVs is considered.

Development of an RFID Inventory Robot (AdvanRobot)

Table 6 Impact of introducing UAVs in the system (comparison of AdvanRobot and UAVs in terms of autonomy, maneuverability (DOF), passage width (above or below 70 cm), reading throughput and reading height (above 2.75 m); combined impact: target scenarios extended, world observations enhanced, accuracy above 99.5%, and the robot acting as a mobile charging station)

When AdvanRobot and UAVs work together, it is foreseen that AdvanRobot will read most of the RFID tags due to its very high reading throughput, which makes it very efficient at inventorying dense environments. Conversely, while a drone can reach places that AdvanRobot cannot, its reading throughput is much lower, since the RFID system it can carry and supply cannot be as powerful as that of the robot. On the other hand, the UAV can inventory areas that are not accessible to AdvanRobot: aisles narrower than 70 cm and shelves higher than 250 cm.
Also, by observing the environment from a higher point of view, it can provide additional information to the navigation system for planning and exploration. Accordingly, the overall mission efficiency can be increased. In addition, the robot can act as a charging station for UAVs. UAVs usually have a limited autonomy due to their limited payload capacity, which implies low-capacity batteries. Hence, using AdvanRobot as a mobile charging station can improve the operational availability of UAVs. Table 6 summarizes the benefits of a collaborative system combining robots and UAVs.

7.3 Applications Derived from Product Location

The location of items on a map of the store enables the development of valuable applications both for customers and for retailers. First, by knowing the location of an item it is possible to detect whether it is misplaced. Item misplacement is a source of frustration for customers and implies a cost for retailers. An unknown misplaced item can be considered as stolen. Second, the location of items can be used to guide customers and associates to easily find a product, or to produce an optimal path to find a set of products. This is commonly known as wayfinding, and its output can greatly reduce the time needed for picking the products of orders placed online. Last, given the location of items, it is possible to analyze the profitability of product placements. For instance, a heat map of sales for a given product in different locations can be generated.

7.4 Simulation

Performing an end-to-end simulation of the system, including the RFID propagation and detection model, is a matter of interest. For the time being, there is no Gazebo plugin for the faithful simulation of an RFID system due to its complexity. Thus, bringing forward RFID simulation in Gazebo is an interesting topic for future work.

References

1. Bertolini, M., G. Ferretti, G. Vignali, and A. Volpi. 2013.
Reducing out of stock, shrinkage and overstock through RFID in the fresh food supply chain: Evidence from an Italian retail pilot. International Journal of RF Technologies 4 (2): 107–125.
2. Durrant-Whyte, H., and T. Bailey. 2006. Simultaneous localization and mapping: Part I. IEEE Robotics Automation Magazine 13 (2): 99–110.
3. Engelson, S.P. 2000. Passive map learning and visual place recognition. Ph.D. thesis, Yale University.
4. EPCglobal: EPC Radio-Frequency Identity Protocols Generation-2 UHF RFID, Specification for RFID Air Interface, Protocol for Communications at 860 MHz–960 MHz, Version 2.0.1 Ratified. 2015. http://www.gs1.org/sites/default/files/docs/epc/Gen2_Protocol_Standard.pdf.
5. European Commission: Proposal for a directive of the European parliament and of the council on the approximation of the laws, regulations and administrative provisions of the member states as regards the accessibility requirements for products and services. 2015. http://ec.europa.eu/social/BlobServlet?docId=14813&langId=en.
6. GS1. 2014. Regulatory status for using RFID in the EPC Gen 2 band (860 to 960 MHz) of the UHF spectrum. http://www.gs1.org/docs/epc/UHF_Regulations.pdf.
7. Hardgrave, B.C., J. Aloysius, and S. Goyal. 2009. Does RFID improve inventory accuracy? A preliminary analysis. International Journal of RF Technologies: Research and Applications 1 (1): 44–56.
8. Heese, H.S. 2007. Inventory record inaccuracy, double marginalization, and RFID adoption. Production and Operations Management 16 (5): 542–553.
9. House of Representatives of the United States of America. 1990. Americans with Disabilities Act of 1990.
10. Labbe, M., and F. Michaud. 2014. Online global loop closure detection for large-scale multi-session graph-based SLAM. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems, 2661–2666.
11. Miesen, R., F. Kirsch, and M. Vossiek. 2013. UHF RFID localization based on synthetic apertures.
IEEE Transactions on Automation Science and Engineering 10 (3): 807–815.
12. NASA. 2016. RFID-enabled autonomous logistics management (REALM) (RFID logistics awareness). http://www.nasa.gov/mission_pages/station/research/experiments/2137.html.
13. Nur, K., M. Morenza-Cinos, A. Carreras, and R. Pous. 2015. Projection of RFID-obtained product information on a retail store's indoor panoramas. IEEE Intelligent Systems 30 (6): 30–37.
14. Rekik, Y., E. Sahin, and Y. Dallery. 2009. Inventory inaccuracy in retail stores due to theft: An analysis of the benefits of RFID. International Journal of Production Economics 118 (1): 189–198. http://www.sciencedirect.com/science/article/pii/S0925527308002648. (Special section on problems and models of inventories, selected papers of the fourteenth International Symposium on Inventories).
15. RFID Journal. 2013. Tag-reading robot wins RFID Journal's coolest demo contest. http://www.rfidjournal.com/articles/view?10670.
16. Sarma, S., D. Brock, and D. Engels. 2001. Radio frequency identification and the electronic product code. IEEE Micro 21 (6): 50–54.
17. Wang, J., E. Schluntz, B. Otis, and T. Deyle. 2015. A new vision for smart objects and the internet of things: Mobile robots and long-range UHF RFID sensor tags. CoRR. arXiv:1507.02373.
18. Yamauchi, B. 1997. A frontier-based approach for autonomous exploration. In CIRA, 146–151. New York: IEEE Computer Society.

Author Biographies

Marc Morenza-Cinos is a PhD candidate at Universitat Pompeu Fabra. His research interests include Robotics, Wireless Communication and Data Mining. Morenza-Cinos has an MSc in Information and Communication Technologies from Universitat Politècnica de Catalunya. Contact him at [email protected]

Victor Casamayor-Pujol is a PhD candidate at Universitat Pompeu Fabra. His research interests include Robotics, Artificial Intelligence and Aerospace.
Casamayor-Pujol has an MSc in Intelligent Interactive Systems from Universitat Pompeu Fabra and an MSc in Space Systems Engineering from the Institut Supérieur de l'Aéronautique et de l'Espace. Contact him at [email protected]

Jordi Soler-Busquets is a Robotics MSc student at Universitat Politècnica de Catalunya. His academic interests include Machine Learning and Robotics. Contact him at [email protected]

José Luis Sanz is Industrial Designer at Keonn Technologies. His career has developed between conceptual design and industrial design and development for manufacturing. José Luis has a degree in Industrial Design Engineering from Jaume I University in Castellón, a degree in Design from Universitat Politècnica de Catalunya in Barcelona and a postgraduate degree in Composite Materials from the Eurecat Technological Center.

Roberto Guzmán holds the degrees of Computer Science Engineer (Physical Systems Branch) and MSc in CAD/CAM, and has been Lecturer and Researcher in the Robotics area of the Department of Systems Engineering and Automation of the Polytechnic University of Valencia and the Department of Process Control and Regulation of the FernUniversität Hagen (Germany). During the years 2000 and 2001 he was R&D Director at Althea Productos Industriales. He has run Robotnik since 2002. Contact him at [email protected]

Rafael Pous is an associate professor at Universitat Pompeu Fabra. His research interests include Ubiquitous Computing, Retail Technologies and Antenna Design. Pous has a PhD degree in Electrical Engineering from the University of California at Berkeley. Contact him at [email protected]

Robotnik—Professional Service Robotics Applications with ROS (2)

Roberto Guzmán, Román Navarro, Miquel Cantero and Jorge Ariño

Abstract This chapter summarizes new experiences in using ROS in the deployment of real-world professional service robotics applications.
These include a climbing mobile robot for windmill inspection, a mobile manipulator for general purpose applications, a mobile autonomous guided car and a robot for the detection and measurement of surface defects and cracks in tunnels. It focuses on the application development of the ROS modules, tools and components applied, and on the lessons learned in the development process.

Keywords Professional service robotics with ROS · Robots for inspection · Service robotics · Autonomous robots · Mobile robots · Mobile manipulators · RB1 · RB1Base

1 Contributions of the Book Chapter

ROS has become a standard for the development of advanced robot systems, according to the statistics presented in the ROS metrics report [1], which measures statistics related to awareness, membership, engagement, and code. The community is also growing exponentially. This chapter describes a number of professional service robotics applications developed in ROS. The number of robots using ROS in professional service robotics is continuously growing. However, even though the ROS community and the number of robotics startups using ROS in their developments keep rising, the number of publicly documented real-world applications, in particular in product development and commercialization, is still relatively low.

R. Guzmán (B) · R. Navarro · M. Cantero · J. Ariño Robotnik Automation, SLL, Ciutat de Barcelona, 3A, P.I. Fte. del Jarro, 46988 Paterna, Valencia, Spain e-mail: [email protected] URL: http://www.robotnik.eu

© Springer International Publishing AG 2017 A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_13

R. Guzmán et al.

This chapter presents as its main contribution the description of four real products that use ROS, detailing the principal challenges found from the point of view of a ROS developer.

2 ELIOT: Climbing Robot for Windmill Inspection

Eliot is a climbing robot developed to address windmill maintenance.
Maintenance of a windmill includes cleaning windmill shafts, painting blades, oil changes, tomography images of shafts, and other tasks. Eliot was developed for Eliot Systems [2], a company that has patented several automated solutions for the cleaning and inspection of windmills. The inspection robot makes use of a patented solution to climb metallic surfaces using magnetic tracks. The robot climbs in semi-autonomous or teleoperated mode to the top of the windmill mast and takes detailed pictures of cracks on the blades with a high-resolution camera. Eliot uses thermal imaging to process the information from the cracks. The robot is also able to mount several payloads to perform NDT measurement operations on the mast itself. Here is a brief summary of the system, the work done, the different components and packages (used and developed), and the developed HMI, with the problems found (Fig. 1).

Due to the nature of the project and the associated NDA (Non-Disclosure Agreement) with the end user, the packages of this robot are not available. Some of the packages used (non-protected) are listed below.

Fig. 1 ELIOT robot platform

Robotnik—Professional Service Robotics Applications with ROS (2)

multimaster_fkie (https://github.com/fkie/multimaster_fkie): a ROS meta-package that offers a complete solution for using ROS with multiple masters.
summit_xl_sim (https://github.com/RobotnikAutomation/summit_xl_sim): a simulation package for robots of the Summit family.
summit_xl_common (https://github.com/RobotnikAutomation/summit_xl_common): common packages (pad, navigation, localization, etc.) of the Summit robot.
imu_tools (https://github.com/ccny-ros-pkg/imu_tools): a set of IMU-related filters and visualizers. The imu_filter_madgwick was used to filter the raw imu data from the arduimu. The meta-package also includes a plugin for the visualization of the imu in rviz.
robotnik_arduimu (https://github.com/RobotnikAutomation/robotnik_arduimu): ROS package for the ArduIMU board.
Needs an adapted ArduIMU firmware that provides the raw data to be filtered externally.
gps_common (http://wiki.ros.org/gps_common): a package that provides common GPS-processing routines. The gpsd_client was used to read and process the gps data.
robotnik_gyro (https://github.com/RobotnikAutomation/robotnik_gyro): reads a gyroscope, a device used together with the internal IMU gyros.
axis_camera (https://github.com/RobotnikAutomation/axis_camera): contains Robotnik's basic Python drivers for accessing an Axis camera's MJPG stream, based on the axis_camera ROS driver. Also provides control for PTZ cameras.

2.1 Brief Description of the System

This section focuses on Eliot Preview. Preview is the name of the smallest robot model developed by Eliot Systems for the inspection of towers and blades of windmills. The company has developed several robots with different sizes and functionality; in this case, they selected Robotnik for the development of this unit.

2.2 Robot Configuration

The robot mounts two reinforced tracks with a set of magnets that are able to stick to the metallic tower mast. The robot has an autonomy of 4 h and is able to climb at speeds of 200 mm/s. The weight of the robot is

<include file="$(find freenect_launch)/launch/freenect.launch">
  <arg name="rgb_processing" value="true"/>
  <arg name="ir_processing" value="false"/>
  <arg name="depth_processing" value="true"/>
  <arg name="depth_registered_processing" value="true"/>
  <arg name="disparity_processing" value="false"/>
  <arg name="disparity_registered_processing" value="false"/>
</include>

Fig. 9 kinect.launch

cp /opt/ros/indigo/share/freenect_launch/launch/freenect.launch ~/catkin_ws/src/kinect.launch
Thus, eliminating the unnecessary use of circulating data on the network and reduce the number of processing. The file can be edit using the editor of your choice, for example the graphical environment gedit, and with a nano in console environment. To edit the file created, type the following command (changing the word editor by the name of the desired editor). 1 ~ / catkin_ws / src / kinect . launch Edit the file in order to look like the following in Fig. 9. Finally, to run the launch file created, simply type the command: 1 Now you can have access to the main topics provided by freenectand Kinect sensor. From now on, we will deal with the acquisition and processing of data. Examples involving data collection and maintenance will be primarily developed using the Robots Perception Through 3D Point Cloud Sensors Fig. 10 Rviz window with the topic /camera/depth/points Matlab tool, except in cases where it requires the need to develop some code in C or creating launcher’s, as it has already occurred. The Rviz is used to easily view the topics. The Rviz is a ROS tool with many interesting features such as the ability to view the outputs of a topic. The Rviz will be fully explored during this tutorial. To start, let us open Rviz by typing at the terminal: 1 rviz For everything to work properly, make sure that this kinect.launch has been initialized and that the Kinect is plugged into the computer. Now to open Rviz, click the Add button, followed by choosing the tab by topic in the floating opened window. Within this tab, you can see all the currently existing topics in ROS. Click /camera then /depth, then in /points, select the PointCloud2 and click OK. In Global Options, click in fixed frame, and select the option camera_link. After all these steps are carried out, the PCD from the Kinect is successfully displayed as shown in Fig. 10. Repeat the procedure to the topics /camera/depth_registered/points and /camera/ rgb/image_color. 
Explore the features of Rviz: switch between the displayed topics and change the size of the dots, among other activities, in order to better understand the tool.

4.2 Install SR4000

To use the 3D ToF camera SR4000, the libmesasr-dev-1.0.14-748.amd64 [29] driver was used along with the ROS swissranger_camera package [30]. This ROS package is no longer in the official repositories of ROS, therefore it will be available during the tutorial at GitHub. If you prefer, the package cob_camera_sensors can be used; a tutorial on how to install it can be found in [31], and how to use it for the SR4000 sensor can be found in [32].

M.A.S. Teixeira et al.

Table 3 Difference between drivers
Name | Recommended system
libmesasr-dev-1.0.14-747.i386.deb | 32 bits
libmesasr-dev-1.0.14-748.amd64.deb | 64 bits

For this session, it is necessary to download a specific folder from GitHub. A tutorial on how to download all the files can be viewed in the Background section. To download the full project folder, open a Linux terminal and navigate to the chosen place where the package will be saved, then type:

git clone https://github.com/air-lasca/ros_book_point_cloud

Inside this new folder, a folder for each subsection of this chapter has been created. All files required for the installation and setup of the SR4000 camera are located in /Configuringtheenvironment/. The first step is installing the driver responsible for the communication between the computer and the camera. This driver has two versions, one for 32-bit and another for 64-bit computers. The two drivers are available in the GitHub repository of this tutorial. Table 3 establishes the recommended system for each driver. The driver can also be found on the manufacturer's page: http://hptg.com/industrial/. On this page, navigate to the SR4000 camera and select the Downloads tab. The drivers can be installed in two ways: one of them is through a double click on the file, and the second is through the command line in a Linux terminal.
To install from the command line, open the terminal and type:

sudo dpkg -i libmesasr-dev-<version>.deb

This driver is required for the communication between the camera and the computer. After installing this driver, the next step is to install the package swissranger_camera; this package converts the information from the camera into topics understandable by ROS. The package swissranger_camera is no longer present in the official repository, but a copy with slight modifications is inside the folder /Configuringtheenvironment/InstallSR4000 under the name swissranger_camera. To install the package, open a Linux terminal, navigate to the /Configuringtheenvironment/InstallSR4000 folder, and enter the command:

cp -r swissranger_camera ~/catkin_ws/src/

This command copies the folder swissranger_camera to catkin_ws/src. Once this is done, the next step is to compile it; in a terminal type:

cd ~/catkin_ws/
catkin_make

If a permission error occurs, type:

cd ~/catkin_ws/src/
sudo chmod -R 777 swissranger_camera

Re-enter the commands:

cd ~/catkin_ws/
catkin_make

All packages required to use the SR4000 camera are now installed and configured. The next step is to connect the camera to a PC via the USB interface. Remember that, as with the Kinect, the camera needs an external power source to run. A tutorial on how to connect the camera to the computer can be found in [33]. With the camera connected to the computer, run the package swissranger_camera and verify that everything occurred as expected. For this, open a Linux terminal and type:

roslaunch swissranger_camera sr_usb.launch

Fig. 11 Rviz window with the SR4000 sensor information

The Rviz window, as shown in Fig. 11, should open. In this window, you can view the Point Cloud data in the center and, on the right, the same Point Cloud data as an image in gray scale. The package swissranger_camera does this conversion, and it is a user option.
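The gray-scale view on the right is essentially a range-to-intensity mapping. A minimal sketch of such a conversion is shown below; the linear scaling, the clamping and the function name are illustrative assumptions, not the exact formula used by swissranger_camera.

```python
# Illustrative depth-to-grayscale mapping: near points dark, far points
# light, in the spirit of the gray-scale image the package publishes.
def depth_to_grayscale(depths, d_min, d_max):
    """Map each depth (metres) to an 8-bit intensity in [0, 255]."""
    span = max(d_max - d_min, 1e-9)            # avoid division by zero
    out = []
    for d in depths:
        d = min(max(d, d_min), d_max)          # clamp to the valid range
        out.append(round(255 * (d - d_min) / span))
    return out
```

Feeding in the z (or range) values of a point cloud row by row yields exactly the kind of image shown in the right panel of Fig. 11.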
The following error can occur:

[1463695548.842679171]: Exception thrown while connecting to the camera: [SR::open]: Failed to open device!

This error may be caused by the sensor not being connected to power. Check on the back of the sensor whether the light is flashing green. If it is flashing, the sensor is working properly. Make sure the USB cable is connected in the right place. Another cause for this failure can be that the user does not have sufficient permission to access the camera interface. This problem can easily be solved by entering the following command:

sudo chmod a+rw /dev/tty*

This command gives read and write permission for all tty devices connected to the computer. If you know which device it is, just replace tty* with the correct port name. Then, again run the command:

roslaunch swissranger_camera sr_usb.launch

If everything goes well, the camera is in place and ready for use. To avoid the hassle of giving permission to the device each time, you can add the command to ~/.bashrc. In this way, every time you open a new Linux terminal, the permissions will be given and the device will be ready for use. To do this, type the command:

editor ~/.bashrc

With the .bashrc file open, add the permission command above at the end.

We will now develop a new custom launcher. If you install the package on a computer running Linux server, it is interesting to disable the opening of Rviz, since the system does not have graphics support. To copy the original launcher to /catkin_ws/src, type:

cp ~/catkin_ws/src/swissranger_camera/launch/sr_usb.launch ~/catkin_ws/src/sr4000.launch

This command copies the original launch file to the folder /catkin_ws/src with the name sr4000.launch. The code in this file can be seen in Fig. 12. The code responsible for opening the sensor is between the fourth and seventh lines. Lines nine to eleven contain the code responsible for opening Rviz; if you do not want to open Rviz, just remove those lines.
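Removing the Rviz lines by hand works, but the same edit can be scripted. The sketch below uses Python's standard xml.etree module; the element layout assumed matches the launch file of Fig. 12, and strip_rviz is a hypothetical helper name, not part of the package.

```python
# Illustrative helper: produce a headless launch file by dropping the
# rviz <node> element from an sr_usb.launch-style document.
import xml.etree.ElementTree as ET

def strip_rviz(launch_xml: str) -> str:
    root = ET.fromstring(launch_xml)
    for node in root.findall("node"):       # findall returns a list, so
        if node.get("pkg") == "rviz":       # removing while looping is safe
            root.remove(node)
    return ET.tostring(root, encoding="unicode")
```

Reading sr_usb.launch, passing its contents through strip_rviz and writing the result to ~/catkin_ws/src/sr4000.launch reproduces the manual edit described above.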
The new launch file will look like Fig. 13. To test the launch file, type:

roslaunch ~/catkin_ws/src/sr4000.launch

In a new terminal, type:

rostopic list

Note that several topics were created. It is noteworthy that the SR4000 camera has only one sensor, of the 3D ToF type. The content of the other topics is nothing more than the Point Cloud data with some kind of processing. Some of the topics deserve to be highlighted for providing interesting content. They are:

<launch>
  <node pkg="swissranger_camera" type="swissranger_camera" name="swissranger" output="screen" respawn="false">
    <param name="auto_exposure" value="1"/>
  </node>
  <node name="rviz" pkg="rviz" type="rviz" args="-d $(find swissranger_camera)/cfg/swissranger.rviz"/>
</launch>

Fig. 12 sr_usb.launch

<launch>
  <node pkg="swissranger_camera" type="swissranger_camera" name="swissranger" output="screen" respawn="false">
    <param name="auto_exposure" value="1"/>
  </node>
</launch>

Fig. 13 sr4000.launch

1. /swissranger/pointcloud_raw: this topic provides a message of type sensor_msgs/PointCloud and can be considered one of the main topics. All other information is processed from the Point Cloud data present in this topic;
2. /swissranger/pointcloud2_raw: this topic gives a message of type sensor_msgs/PointCloud2. The only difference between this topic and /swissranger/pointcloud_raw is that this topic is in the PointCloud2 format. In some situations it is necessary to use a message of type PointCloud, and in others of type PointCloud2; having two topics avoids the need for possible conversions. An example image of both topics can be seen in Fig. 14;

Fig. 14 PointCloud2. topic: /swissranger/pointcloud2_raw; message of type: sensor_msgs/PointCloud2

Fig. 15 Image.
topic: /swissranger/distance/image_raw, message of type: sensor_msgs/Image.

3. /swissranger/distance/image_raw: this topic provides a message of type sensor_msgs/Image. It is a conversion of the Point Cloud data into gray-scale, in which near objects appear dark and distant objects appear light. An example can be seen in Fig. 15;
4. /swissranger/confidence/image_raw: this topic provides a message of type sensor_msgs/Image, processed from the Point Cloud data. Jerky movements leave a trail on the image, recording the motion. An example can be seen in Fig. 16;
5. /swissranger/intensity/image_raw: this topic provides a message of type sensor_msgs/Image, also processed from the Point Cloud data. It brings the image intensity. An example can be found in Fig. 17.

Fig. 16 Confidence. topic: /swissranger/confidence/image_raw, message of type: sensor_msgs/Image.

5 Examples of Point Cloud Processing

This section aims to demonstrate and explain processing techniques that can be performed using the SR4000 sensor (the techniques in this section can also be executed using the Kinect). Matlab will be used to perform all processing and to view the results. The results can also be viewed in Rviz if desired.

5.1 Commands in Matlab

This subsection presents the main commands used throughout this chapter. The commands explained here are from the Matlab R2015a tool; conflicts may appear if the commands are used with other versions. First, the main commands involving the PCL in Matlab will be explained. The methodology used is as follows: first the explanation of the command, and then the command itself. The following command closes the open connection to the ROS, if one is active:

rosshutdown;

The next command opens a communication with the ROS, providing information such as the content of topics:

rosinit;

The command rossubscriber attaches a variable to the topic of reference; the variable topic then takes on the chosen message type, having the same format:
topic = rossubscriber('Topic Name');

The receiving variable gets the next data read from the topic in the message format; in the case of Point Cloud data, the format is sensor_msgs/PointCloud or sensor_msgs/PointCloud2.

Fig. 17 Intensity. topic: /swissranger/intensity/image_raw, message of type: sensor_msgs/Image.

pcloud = receive(topic);

The command readXYZ transfers to xyzData only the coordinates of each point contained in the Point Cloud data. This command is extremely important and widely used, because without it working with Point Cloud messages is difficult enough:

xyzData = readXYZ(pcloud);

The command scatter3 displays a picture containing the Point Cloud data, and you can browse it in Matlab without the need to use Rviz. This command only works with variables of type sensor_msgs/PointCloud2:

scatter3(pcloud);

The next command also enables the visualization of Point Cloud data in Matlab but, differently from what happens with scatter3, this command allows you to view data in XYZ matrix format:

showPointCloud(xyzData);

The command rospublisher creates a topic that can be seen in the ROS. You should specify the topic name and the type of message, such as RosPub = rospublisher('/pcl','sensor_msgs/PointCloud'):

RosPub = rospublisher('Topic Name','Message Type');

The next command creates a variable of the same type as an existing ROS message; simply specify the publisher, such as:

msg = rosmessage(RosPub);

It is also possible to specify the message type directly, as in:

msg = rosmessage('sensor_msgs/PointCloud');

where the variable msg becomes a variable of type sensor_msgs/PointCloud. The following command sends the contents of the created variable to the created topic. After this command, it becomes possible to verify in the ROS the data sent, for example in Rviz, if the format is compatible with the tool.
An example use of this command is send(RosPub,msg), which sends to the topic RosPub the content existing in the msg variable:

send(RosPub, msg);

5.2 ROS Subscriber with Matlab

The next step is the data collection and processing step; as mentioned earlier, the tool used for this is Matlab. Matlab allows direct communication with the ROS, and it facilitates data post-processing. First, to collect and display the results of the Kinect sensor, open Matlab and type the commands of Fig. 18. The code will be briefly explained. The command in the first line closes the connection to the ROS, if any old connection is still alive. In line two, a new connection with the ROS is made; it is through this command that it becomes possible to visualize the ROS in Matlab. In line three the topic /camera/depth/points is transferred to the variable topic, where it can be handled the same way as the topic itself. In line four, the variable pcloud receives the next value read by the variable topic; in this case pcloud becomes a variable of the type sensor_msgs/PointCloud2 specified by the ROS. Line five displays an image of the PCL obtained from the sensor, similar to the view in Fig. 19. The same commands can be used for the other messages from the Kinect, with slight changes when displaying the images. The code for the data collection of the topics /camera/depth_registered/points and /camera/rgb/image_color can be seen in Fig. 20. The result can be seen in Fig. 21. Note that, differently from what happened with the topic /camera/depth/points, this figure has color. This is characteristic of PointCloud2, which allows adding a dimension to the data matrix containing any type of information, such as color, which is the case here. The process changes to obtain data from the topic /camera/rgb/image_color, because the message type is different: the message is now of type sensor_msgs/Image.
One more step is needed to convert the message collected from the topic into an image understandable by Matlab. The code with explanations is shown in Fig. 22, and Fig. 23 brings the result. The procedure to get the Point Cloud data in Matlab for the SR4000 sensor is similar to that used for the Kinect, because it is the same message type in the ROS. The code for this procedure can be seen in Fig. 24, and Fig. 25 brings the result. For the data of the topics /swissranger/distance/image_raw, /swissranger/confidence/image_raw and /swissranger/intensity/image_raw the procedure is the same, because all these topics carry the same type of message, which is sensor_msgs/Image. The Matlab code to get a message of this type was introduced above; please see the previous section or the package available on GitHub. It is desirable that the reader already knows how to initialize the Kinect sensor or the SR4000 sensor, depending on which sensor the reader intends to use.

Fig. 18 Command in Matlab to get the Point Cloud data from the topic /camera/depth/points:

rosshutdown;
rosinit;
topic = rossubscriber('/camera/depth/points');
pcloud = receive(topic);
scatter3(pcloud);

Fig. 19 Viewer of Point Cloud from topic /camera/depth/points in Matlab.

Fig. 20 Command in Matlab to get the Point Cloud data from the topic /camera/depth_registered/points:

% Close the ROS communication
rosshutdown;
% Open the ROS communication
rosinit;
% Transfer the topic to the variable depth_registered
depth_registered = rossubscriber('/camera/depth_registered/points');
% Receive the value of the topic
pcloud = receive(depth_registered);
% Display the data
scatter3(pcloud);

The user must already know how to use Rviz to view the topics, know the difference between the types of messages provided by the sensors, know which topics belong to each sensor, and know how to get the data in Matlab.
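The gray-scale distance image described above is essentially a normalization of metric depth into pixel intensities: near means dark, far means light. As an illustration of the idea only (this is not the driver's actual code, and the values are made up), the mapping can be sketched in NumPy as:

```python
import numpy as np

def depth_to_grayscale(depth_m, max_range=5.0):
    """Map metric depth to 8-bit gray levels: near = dark, far = light.

    max_range mirrors the SR4000 maximum range (5 m); deeper readings
    are clipped to it before scaling.
    """
    clipped = np.clip(depth_m, 0.0, max_range)
    return (clipped / max_range * 255.0).astype(np.uint8)

# Toy 2x2 "depth image" in meters; the 7.0 m reading is clipped to 5 m.
img = depth_to_grayscale(np.array([[0.0, 2.5], [5.0, 7.0]]))
print(img)  # [[  0 127] [255 255]]
```

The same normalization underlies any depth-to-image conversion, whichever sensor produced the cloud.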
If any of these ideas is not clear, please review the corresponding section.

5.3 ROS Publishing with Matlab

This subsection aims to create a function that converts a matrix in XYZ format into a message of type sensor_msgs/PointCloud. The function will be developed in Matlab and will be used in the other experiments, so that their results are visible in Rviz. For the experiments and tests presented in this subsection, the SR4000 sensor shall be used. First the function code will be presented in Fig. 26, followed by an explanation of the code and of how to use the function. An explanation of this function is also given in its header.

Fig. 21 Viewer of Point Cloud from topic /camera/depth_registered/points in Matlab.

M.A.S. Teixeira et al.

Fig. 22 Command in Matlab to get the image from the topic /camera/rgb/image_color:

% Close the ROS communication
rosshutdown;
% Open the ROS communication
rosinit;
% Transfer the topic to the variable image_color
image_color = rossubscriber('/camera/rgb/image_color');
% Receive the value of the topic
sensor_msgs_Image = receive(image_color);
% Read the Image data
imageFormatted = readImage(sensor_msgs_Image);
% View figure
imshow(imageFormatted);

Fig. 23 Viewer of image from topic /camera/rgb/image_color in Matlab.

Fig. 24 Command in Matlab to get the Point Cloud data from the topic of SR4000 /swissranger/pointcloud_raw:

% Close the ROS communication
rosshutdown;
% Open the ROS communication
rosinit;
% Transfer the topic to the variable pointcloud
pointcloud = rossubscriber('/swissranger/pointcloud2_raw');
% Receive the value of the topic
pcloud = receive(pointcloud);
% Display the data
scatter3(pcloud);

Fig. 25 Viewer of Point Cloud from topic of SR4000 /swissranger/pointcloud_raw in Matlab.
It has as input the matrix containing the points in XYZ format, the desired name for the topic in ROS, the name of the reference frame, and the time to leave the topic visible. Let us now explain the code. Line 10 defines the function name, as well as the input and output attributes. This function has as output a message of type sensor_msgs/PointCloud containing the conversion made from XYZ to the message; this output is useful in other situations.

Fig. 26 Function XYZ_to_sensor_msgs_PointCloud:

 1  % This function converts an Nx3 matrix into a message
 2  % of type sensor_msgs/PointCloud understandable by ROS.
 3  % xyz = Nx3 matrix:
 4  %   Nx1 = X;
 5  %   Nx2 = Y;
 6  %   Nx3 = Z;
 7  % TopicName = topic name in ROS.
 8  % FrameName = name of the reference frame, examples: map, world.
 9  % Time = time the topic stays visible.
10  function msg = XYZ_to_sensor_msgs_PointCloud(xyz, TopicName, FrameName, Time)
11
12  xyzvalid = xyz(~isnan(xyz(:,1)),:);
13  PCL1mensage = rosmessage('geometry_msgs/Point32');
14  for i = 1:size(xyzvalid,1)
15      PCL1mensage(i).X = xyzvalid(i,1);
16      PCL1mensage(i).Y = xyzvalid(i,2);
17      PCL1mensage(i).Z = xyzvalid(i,3);
18  end
19
20  msg = rosmessage('sensor_msgs/PointCloud');
21  msg.Header.FrameId = FrameName;
22  msg.Points = PCL1mensage;
23  pub = rospublisher(strcat('/',TopicName), 'sensor_msgs/PointCloud');
24  send(pub, msg);
25  pause(Time);
26  end

Line 12 removes undesirable values, such as NaN, from the variable xyz. These values can arise when using the command readXYZ with a message of type sensor_msgs/PointCloud2: the sensor has a fixed resolution, but if some points are unavailable at the moment the information is captured, the sensor returns NaN for them. Keeping these values is not desirable, since the matrix typically is quite large and discarding these points reduces the processing cost.
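The NaN stripping performed on line 12 is a plain row mask over the point matrix. The same operation, written in NumPy purely for illustration (with made-up values):

```python
import numpy as np

# An Nx3 point matrix as returned by readXYZ; the second point could not
# be resolved by the sensor and came back as NaN.
xyz = np.array([[0.1,    0.2,    1.0],
                [np.nan, np.nan, np.nan],
                [0.3,   -0.1,    2.5]])

# Keep only rows whose X coordinate is a number -- the equivalent of
# xyzvalid = xyz(~isnan(xyz(:,1)),:) in the Matlab function.
xyzvalid = xyz[~np.isnan(xyz[:, 0])]
print(xyzvalid.shape)  # (2, 3)
```

Masking on the first column is enough here because the sensor marks an unresolved point by setting all three coordinates to NaN at once.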
Line 13 creates a message of type geometry_msgs/Point32; to be able to publish a message of type sensor_msgs/PointCloud it is necessary to convert each point of XYZ into a message of type geometry_msgs/Point32, which is the job of this function. Lines 14 to 18 create a loop intended to go through the whole XYZ matrix and convert each set of points into a message of type geometry_msgs/Point32. Line 20 creates the message of type sensor_msgs/PointCloud; this message will be published in the ROS and will be the returned message. Line 21 defines the frame according to the name sent to the function. Line 22 transfers the variable PCL1mensage, of type geometry_msgs/Point32, to the message created in line 20. Line 23 creates the topic visible in the ROS, with the name specified when calling the function; the topic is of type sensor_msgs/PointCloud. Line 24 sends the created message to the topic; at this moment the information processed by the function becomes available in the ROS. Line 25 pauses the code execution for the desired lifetime of the topic. This pause is necessary because the topic is created inside the function: when the function ends, the topic dies. A change in the code can be made if desired, by simply removing lines 23 and 24 and publishing the topic in your own code.

Fig. 27 Point Cloud data obtained from the topic /swissranger/pointcloud2_raw.

Let us test the created function. The function can be obtained through the package available on GitHub, or by simply copying the text previously provided into a text file and saving it with the name XYZ_to_sensor_msgs_PointCloud.m in your workspace. With Matlab open and the function saved in your workspace and recognized by Matlab, type in the command window:

rosshutdown;
rosinit;

These commands close any open communication with the ROS and open a new one.
We now get the Point Cloud data through a ROS topic; for this, type:

topic = rossubscriber('/swissranger/pointcloud2_raw');
pcloud = receive(topic);

replacing /swissranger/pointcloud2_raw by the desired topic; if you are using the SR4000 sensor, no changes are needed. Let us see the PCL before processing; for this, type the following command. A window appears with the PCL, similar to Fig. 27:

scatter3(pcloud);

The next step is the extraction of the Point Cloud data points; for that, enter:

xyz = readXYZ(pcloud);

Fig. 28 Point Cloud data created with the function XYZ_to_sensor_msgs_PointCloud and published on the topic /PCL.

The variable xyz now holds an Nx3 matrix containing all the Point Cloud data points. N refers to the resolution of the sensor, i.e. the number of lines multiplied by the number of columns. In the case of the SR4000 sensor, N is equal to 176 × 144, resulting in 25344 rows containing the XYZ data. The next step is to call the function; for this, type:

XYZ_to_sensor_msgs_PointCloud(xyz, 'PCL', 'map', 500);

This command calls the function XYZ_to_sensor_msgs_PointCloud, sending our variable xyz. The name set for the new topic is PCL, the desired orientation is with reference to the map, and the topic lasts 500 s. The choice of a long duration makes it possible to display the topic in Rviz. Open Rviz, add the topic and change the fixed frame of the application to map; something similar to Fig. 28 will appear. In this subsection, the Point Cloud data was read and re-created without any change; this methodology was used to illustrate the operation of the function. In the next subsections, the Point Cloud data will be modified before being published.

5.4 Creating Markers

The Markers are a type of message existing in the ROS.
They can be found isolated, in a single Marker, as with the message visualization_msgs/Marker [34], or as a vector containing multiple Markers, as with the message visualization_msgs/MarkerArray [35]. The Markers can be viewed in Rviz just like Point Cloud data [36]. The conversion of Point Cloud data into Markers is very useful in robotics, mainly in the creation of maps; the OctoMap [37], for example, can be considered a set of Markers. This subsection is not intended to create an OctoMap, but to show the conversion of Point Cloud data into Markers. We will create two functions in this subsection: a function to create an isolated Marker and a function to convert Point Cloud data into Markers. Using two functions makes the learning easier, as with a single function the code would become extensive. Let us start with the code needed to create a single Marker, shown in Fig. 29. The function is annotated and self-explanatory; its result is a single Marker.

Fig. 29 Function marker:

% Create a marker
function [marker] = marker(x, y, z, r, g, b, type, scale, id, frame)
% Create a marker message
marker = rosmessage('visualization_msgs/Marker');
% Set the type of the marker
marker.Type = type;
% Set the pose of the marker
marker.Pose.Position.X = x;
marker.Pose.Position.Y = y;
marker.Pose.Position.Z = z;
% Set the orientation of the marker
marker.Pose.Orientation.X = 0.0;
marker.Pose.Orientation.Y = 0.0;
marker.Pose.Orientation.Z = 0.0;
marker.Pose.Orientation.W = 1.0;
% Set the scale of the marker
marker.Scale.X = scale;
marker.Scale.Y = scale;
marker.Scale.Z = scale;
% Set an RGB color for the marker
marker.Color.R = r;
marker.Color.G = g;
marker.Color.B = b;
% Set the transparency
marker.Color.A = 1;
% Set the frame
marker.Header.FrameId = frame;
% Marker id
marker.Id = id;
The inputs of the function are x, y, z, r, g, b, type, scale, id and frame, referring to:

1. x, y, z: the position of the Marker; in the case of a conversion of Point Cloud data, the coordinates existing in the XYZ matrix;
2. r, g, b: the color desired for the Marker, in RGB format;
3. type: the type of the Marker. The Marker can take various formats; Table 4 brings the values the variable type must assume, together with the corresponding format of the Marker;
4. scale: the Marker size;
5. id: the id number the Marker assumes. This number is important because it makes it possible to modify a Marker already published in the ROS;
6. frame: the reference frame of the Marker, the same type of reference used to create a PointCloud.

Table 4 The variable type value and corresponding format

type | Format
0    | Arrow
1    | Cube
2    | Sphere
3    | Cylinder

Let us now create a function that transforms our XYZ data into Markers. Due to the large number of parameters used to create the Marker, our function works with fixed values for the color. The developed function is shown in Fig. 30. This function creates a message of type visualization_msgs/MarkerArray, converts points of the XYZ matrix into Markers, adds each Marker into the created message and, finally, publishes the result. The input parameters are xyz, topicName, type, scale, frame, numberPoints and time, referring to:

1. xyz: the input matrix containing the coordinates X, Y and Z;
2. topicName: the name of the topic published in the ROS;
3. type: the type of the Marker, the same as found in Table 4;
4. scale: the Marker size;
5. frame: the reference frame;
6. numberPoints: the number of points of the XYZ data that will be converted. As a Marker occupies a size larger than a point of the Point Cloud, it is not necessary to convert all points to Markers;
7. time: the time the topic stays visible in the ROS.
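The numberPoints parameter works by visiting only every jump-th row of the matrix, with jump = round(size(xyz,1)/numberPoints). A NumPy sketch of that subsampling, for illustration only, using the SR4000 resolution of 176 × 144 = 25344 points as an example:

```python
import numpy as np

def subsample(xyz, number_points):
    """Keep roughly number_points rows of an Nx3 matrix by stepping with
    a fixed jump, mirroring the loop in convertPCLtoMarkersROS."""
    jump = max(1, round(len(xyz) / number_points))
    return xyz[::jump]

# 25344 points (176 x 144, the SR4000 resolution) reduced to about 1000
xyz = np.zeros((176 * 144, 3))
reduced = subsample(xyz, 1000)
print(len(reduced))  # 1014
```

The result is only approximately numberPoints rows, because the jump is rounded to an integer; that approximation is harmless when the goal is just to keep the number of Markers manageable.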
Moving on to the experiment: copy the contents of the function and paste it into a text document in your workspace with the name convertPCLtoMarkersROS.m, and make sure it is on your path in Matlab. First, we have to get an xyz matrix; for this, type:

rosshutdown;
rosinit;
topic = rossubscriber('Topic_Name_PointCloud2');
pcloud = receive(topic);
xyz = readXYZ(pcloud);

Now just call the function; enter the following code, changing the parameters if you want:

convertPCLtoMarkersROS(xyz, 'Markers', 1, 0.08, 'map', 1000, 10);

Figure 31 brings a Point Cloud converted into Markers of the different possible types. The number of points used in the tests was 1500 and the size of the Markers was 0.08.

Fig. 30 Function convertPCLtoMarkersROS:

% Function responsible for creating a MarkerArray
function [markers] = convertPCLtoMarkersROS(xyz, topicName, type, scale, frame, numberPoints, time)
% Create the topic
pub = rospublisher(strcat('/',topicName), 'visualization_msgs/MarkerArray');
% Create the MarkerArray
markers = rosmessage('visualization_msgs/MarkerArray');
% Set the loop jump
jump = round(size(xyz,1)/numberPoints);
% Loop responsible for creating the Markers
contMarker = 1;
for i = 1:jump:size(xyz,1)
    points(contMarker) = marker(xyz(i,1), xyz(i,2), xyz(i,3), 1, 1, 1, type, scale, contMarker, frame);
    contMarker = contMarker + 1;
end
% Pass the vector of Markers
markers.Markers = points;
% Send to the ROS
send(pub, markers);
% Active time
pause(time);
end

5.5 Filter XYZ Data

In some situations, we need only parts of the Point Cloud data. This can happen when we want to remove a wall, for example, or when we want to work with only the nearest points. In mobile robotics the PCL can be used to obtain information about the distance of an object and, for that, you need to filter the PCL to decrease the amount of noise, for example.
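This kind of axis-range filtering amounts to a boolean mask over the rows of the Nx3 matrix, the same idea the find command expresses in the Matlab code of this subsection. An illustrative NumPy version with hypothetical points:

```python
import numpy as np

# Hypothetical cloud: five points spread along the X axis (meters)
xyz = np.array([[-2.0, 0.0, 1.0],
                [-0.5, 0.0, 1.0],
                [ 0.0, 0.0, 1.0],
                [ 0.8, 0.0, 1.0],
                [ 1.5, 0.0, 1.0]])

# Keep one meter to each side of the sensor center on the X axis
x_limit = 1.0
mask = (xyz[:, 0] > -x_limit) & (xyz[:, 0] < x_limit)
filtered = xyz[mask]
print(filtered.shape)  # (3, 3)
```

The same mask pattern extends to the other axes, or to all three at once, by AND-ing further conditions into it.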
This subsection develops a function in Matlab to limit the Point Cloud data on one of the XYZ axes, or even on all three axes simultaneously. Following the methodology already used in this chapter, first the code will be presented, followed by explanations of its development and examples of actual use. The Rviz tool will be used to observe the results; for that, the previously created function XYZ_to_sensor_msgs_PointCloud will be used. To develop the filter, the find [38] command will be used. The find command lets you search in a matrix for the indexes that satisfy a desired condition. A filter example on the X axis of a matrix in XYZ format can be seen in Fig. 32. The line with the find command takes all the indexes of the matrix xyz that satisfy the condition, which is having X between [-1, 1]. In this way, the Point Cloud data is cut on the X axis, leaving a total of two meters: one meter to the right and one meter to the left of the center of the sensor. The result of this code can be seen in Fig. 33. The same filter can be used on any of the axes. The Y axis is responsible for the height, having zero as the center with respect to the sensor: above the center the value is positive and below the center the value is negative. The code that performs a filter on the Y axis can be seen in Fig. 34 and its result can be seen in Fig. 35. The Z axis is responsible for the distance from the object to the sensor.

Fig. 31 (a) PCL used in this experiment. (b) Conversion of PCL into Markers of type Arrow. (c) Conversion of PCL into Markers of type Cube. (d) Conversion of PCL into Markers of type Sphere. (e) Conversion of PCL into Markers of type Cylinder.

Fig. 32 XFilter:

% ROS start
rosshutdown
rosinit('localhost')
% Get PCL
topic = rossubscriber('/swissranger/pointcloud2_raw');
pointcloud = receive(topic);
% Convert to XYZ matrix
xyz = readXYZ(pointcloud);
% Set the filter size
xFilter = 1;
% Apply filter
index = find(xyz(:,1)>(xFilter*-1) & xyz(:,1)<xFilter);
% Create a new XYZ matrix
xyzFiltred = xyz(index, 1:3);
% Display the result in Rviz
XYZ_to_sensor_msgs_PointCloud(xyzFiltred, 'xFilter', 'map', 10);

Fig. 33 Filter on the X axis: (a) Original PCL. (b) Filtered PCL.

Fig. 34 YFilter:

% ROS start
rosshutdown
rosinit('localhost')
% Get PCL
topic = rossubscriber('/swissranger/pointcloud2_raw');
pointcloud = receive(topic);
% Convert to XYZ matrix
xyz = readXYZ(pointcloud);
% Set the filter size
yFilter = 1;
% Apply filter
index = find(xyz(:,2)>(yFilter*-1) & xyz(:,2)<yFilter);
% Create a new XYZ matrix
xyzFiltred = xyz(index, 1:3);
% Display the result in Rviz
XYZ_to_sensor_msgs_PointCloud(xyzFiltred, 'yFilter', 'map', 10);

Fig. 35 Filter on the Y axis: (a) Original PCL. (b) Filtered PCL.

Fig. 36 ZFilter:

% ROS start
rosshutdown
rosinit('localhost')
% Get PCL
topic = rossubscriber('/swissranger/pointcloud2_raw');
pointcloud = receive(topic);
% Convert to XYZ matrix
xyz = readXYZ(pointcloud);
% Set the filter size
zFilter = 2;
% Apply filter
index = find(xyz(:,3)<zFilter);
% Create a new XYZ matrix
xyzFiltred = xyz(index, 1:3);
% Display the result in Rviz
XYZ_to_sensor_msgs_PointCloud(xyzFiltred, 'zFilter', 'map', 10);
The SR4000 sensor being used for the tests has a maximum distance range of five meters. Limiting the Z axis is important because it eliminates unwanted noise. Most robots are designed to operate at a distance not far from their body, so in these cases you do not need to work with all the Point Cloud data. The code for limiting the Z axis can be seen in Fig. 36, and the result of this code can be seen in Fig. 37. It is also possible to develop a filter on all three axes; Fig. 38 shows an example of how to do it.

Fig. 37 Filter on the Z axis.

Fig. 38 XYZFilter:

% Set the filter sizes
xFilter = 0.5;
yFilter = 1;
zFilter = 1.8;
% Apply filter
index = find(xyz(:,1)>(xFilter*-1) & xyz(:,1)<xFilter & xyz(:,2)>(yFilter*-1) & xyz(:,2)<yFilter & xyz(:,3)<zFilter);
% Create a new XYZ matrix
xyzFiltred = xyz(index, 1:3);
% Display the result in Rviz
XYZ_to_sensor_msgs_PointCloud(xyzFiltred, 'xyzFilter', 'map', 10);

In this code it was defined that the new Point Cloud data keeps a depth of 1.8 m, a width of 0.5 m to the left and to the right of the center, and a height of 1 m above and below the center. The result of this code can be seen in Fig. 39.

Fig. 39 Filter on the three axes: (a) Original PCD. (b) Filtered PCD. (c) Original PCD viewed from the top. (d) Filtered PCD viewed from the top. (e) Original PCD side view. (f) Filtered PCD side view.

5.6 Transformation

Sensor data needs to be adjusted when used on robots: the data must be provided in relation to the center of the robot, and the robot pose, in turn, must be converted in relation to its position on the map. To make these adjustments, the ROS has a set of tools called tf. These transformations are nothing more than geometric calculations, converting the position and rotation of the sensor relative to the robot. In this subsection, we will initialize the two cameras, Kinect and SR4000, and add them to the same robot with a stipulated distance between them. First we need to define all the transformations: the center of the robot will have the name base_link, the Kinect sensor the name kinect_link and the SR4000 sensor the name sr4000_link. The code to create these transformations can be seen in Fig. 40.

Fig. 40 createTF:

% Receive the tf tree
tftree = rostf;
pause(1);
% Create a reference sr4000_link
tfSR4000 = rosmessage('geometry_msgs/TransformStamped');
tfSR4000.ChildFrameId = 'sr4000_link';
tfSR4000.Header.FrameId = 'base_link';
% Set the distance between the robot and the sensor
tfSR4000.Transform.Translation.X = 0;
tfSR4000.Transform.Translation.Y = 1.5;
tfSR4000.Transform.Translation.Z = 0;
% Send the tf to the ROS
tfSR4000.Header.Stamp = rostime('now');
sendTransform(tftree, tfSR4000)
pause(1);
% Create a reference kinect_link
tfKinect = rosmessage('geometry_msgs/TransformStamped');
tfKinect.ChildFrameId = 'kinect_link';
tfKinect.Header.FrameId = 'base_link';
% Set the distance between the robot and the sensor
tfKinect.Transform.Translation.X = 0;
tfKinect.Transform.Translation.Y = -1.5;
tfKinect.Transform.Translation.Z = 0;
% Send the tf to the ROS
tfKinect.Header.Stamp = rostime('now');
sendTransform(tftree, tfKinect)
pause(1);

The SR4000 sensor sits at +1.5 m from the center of the robot on the Y axis, i.e. above the center, while the Kinect sensor sits at -1.5 m from the center of the robot on the Y axis, i.e. below the center. You can view the transformations with the rqt_tf_tree tool; for that, type in your terminal the command:

rosrun rqt_tf_tree rqt_tf_tree

A window as shown in Fig. 41 can be observed. Note that the two sensors are connected to base_link, which is the center of the robot.

Fig. 41 rqt_tf_tree.

The next step is to start the two sensors; for this, start the two launchers previously created by typing:

roslaunch ~/catkin_ws/src/kinect.launch
roslaunch ~/catkin_ws/src/sr4000.launch

To apply the transformation, we must first change the reference of the sensor to the created reference, and then apply the transformation. This can be seen in Fig. 42; the result of the code can be observed in Fig. 43.

Fig. 42 Apply TF:

% ROS start
rosshutdown
rosinit('localhost')
% Create the tfs
createTF;
pause(1);
% Read the topics
kinect = rossubscriber('/camera/depth/points');
sr4000 = rossubscriber('/swissranger/pointcloud2_raw');
% Create new topics
pub = rospublisher('Kinect', 'sensor_msgs/PointCloud2');
pub2 = rospublisher('sr4000', 'sensor_msgs/PointCloud2');
while true
    % Take the Point Cloud and modify its reference
    pointcloudKinect = receive(kinect);
    pointcloudKinect.Header.FrameId = 'kinect_link';
    % Apply the tf and publish
    pointcloudKinect2 = transform(tftree, 'base_link', pointcloudKinect);
    send(pub, pointcloudKinect2);
    % Take the Point Cloud and modify its reference
    pointcloudSR4000 = receive(sr4000);
    pointcloudSR4000.Header.FrameId = 'sr4000_link';
    % Apply the tf and publish
    pointcloudSR40002 = transform(tftree, 'base_link', pointcloudSR4000);
    send(pub2, pointcloudSR40002);
    pause(0.5);
end

Fig. 43 Transformation applied to the Point Cloud data seen in Rviz.

6 Conclusion

The Point Cloud can be used for various tasks in mobile robotics, for example for mapping, for SLAM and even as a location reference. The Point Cloud can also be used to identify the distance of objects or for object recognition. However, before carrying out advanced work with the Point Cloud, you must first be able to use it in simple jobs. This tutorial brought a set of techniques and tools that allow the reader to start working with the Point Cloud using Matlab. At the end, the reader knows the essentials needed to work with the Point Cloud.

References

1.
Oliver, A., S. Kang, B.C. Wünsche, and B. MacDonald. 2012. Using the Kinect as a navigation sensor for mobile robotics. In Proceedings of the 27th conference on image and vision computing New Zealand, ACM, 509–514.
2. Endres, F., J. Hess, N. Engelhard, J. Sturm, D. Cremers, and W. Burgard. 2012. An evaluation of the RGB-D SLAM system. In 2012 IEEE international conference on robotics and automation (ICRA), 1691–1696. New York: IEEE.
3. Whelan, T., M. Kaess, M. Fallon, H. Johannsson, J. Leonard, and J. McDonald. 2012. Kintinuous: Spatially extended KinectFusion.
4. Whelan, T., M. Kaess, J.J. Leonard, and J. McDonald. 2013. Deformation-based loop closure for large scale dense RGB-D SLAM. In 2013 IEEE/RSJ international conference on intelligent robots and systems (IROS), 548–555. New York: IEEE.
5. Wallace, L., A. Lucieer, Z. Malenovský, D. Turner, and P. Vopěnka. 2016. Assessment of forest structure using two UAV techniques: A comparison of airborne laser scanning and structure from motion (SfM) point clouds. Forests 7 (3): 62.
6. Wang, Q., L. Wu, Z. Xu, H. Tang, R. Wang, and F. Li. 2014. A progressive morphological filter for point cloud extracted from UAV images. In IEEE international geoscience and remote sensing symposium (IGARSS), 2014, 2023–2026. New York: IEEE.
7. Nagai, M., T. Chen, R. Shibasaki, H. Kumagai, and A. Ahmed. 2009. UAV-borne 3-D mapping system by multisensor integration. IEEE Transactions on Geoscience and Remote Sensing 47 (3): 701–708.
8. Tao, W., Y. Lei, and P. Mooney. 2011. Dense point cloud extraction from UAV captured images in forest area. In 2011 IEEE international conference on spatial data mining and geographical knowledge services (ICSDM), 389–392. New York: IEEE.
9. Matlab. 2016. Matlab. http://www.mathworks.com/products/matlab/.
10. GitHub. 2016. Installation from GitHub on Debian. https://github.com/Singular/Sources/wiki/Installation-from-GitHub-on-Debian.
11. PCL. 2016. What is PCL? http://pointclouds.org/about/.
12.
Li, L. 2014. Time-of-flight camera: An introduction. Technical White Paper.
13. PCL. 2016. Module io. http://docs.pointclouds.org/trunk/group__io.html.
14. ROS. 2016. PCL overview. http://wiki.ros.org/pcl/Overview.
15. Kreylos. 2016. Kinect hacking. http://idav.ucdavis.edu/~okreylos/ResDev/Kinect/.
16. HEPTAGON. 2016. Our history. http://hptg.com/about-us/#history.
17. ROS. 2016. About ROS. http://www.ros.org/about-ros/.
18. ROS. 2016. ROS Jade installation instructions. http://wiki.ros.org/ROS/Installation.
19. Ubuntu. 2016. About Ubuntu. http://www.ubuntu.com/about/about-ubuntu.
20. Ubuntu. 2016. Ubuntu 14.04.4 LTS (Trusty Tahr). http://releases.ubuntu.com/14.04/.
21. Ubuntu. 2016. Install Ubuntu 16.04 LTS. http://www.ubuntu.com/download/desktop/install-ubuntu-desktop.
22. ROS. 2016. Ubuntu install of ROS Indigo. http://wiki.ros.org/indigo/Installation/Ubuntu.
23. ROS. 2016. freenect_camera. http://wiki.ros.org/freenect_camera.
24. ROS. 2016. freenect_launch. http://wiki.ros.org/freenect_launch.
25. Ubuntu. 2016. Package: libfreenect-dev (1:0.0.1+20101211+2-3). http://packages.ubuntu.com/precise/libfreenect-dev.
26. ROS. 2016. openni_launch. http://wiki.ros.org/openni_launch.
27. Rethink Robotics. 2016. Kinect basics. http://sdk.rethinkrobotics.com/wiki/Kinect_basics.
28. ROS. 2016. Kinect: Using Microsoft Kinect on the Evarobot. http://wiki.ros.org/Robots/evarobot/Tutorials/indigo/Kinect.
29. HEPTAGON. 2016. SwissRanger. http://hptg.com/industrial/.
30. ROS. 2016. swissranger_camera. http://wiki.ros.org/swissranger_camera.
31. ROS. 2016. cob_camera_sensors. http://wiki.ros.org/cob_camera_sensors.
32. ROS. 2016. Care-O-bot: Configuring and using the SwissRanger 3000 or 4000 depth sensor. http://wiki.ros.org/cob_camera_sensors/Mesa_Swissranger.
33. Mesa Imaging. 2016. SR4000/SR4500 user manual. http://www.realtechsupport.org/UB/SR/range_finding/SR4000_SR4500_Manual.pdf.
34. ROS. 2016. visualization_msgs/Marker message.
http://docs.ros.org/api/visualization_msgs/html/msg/Marker.html.
35. ROS. 2016. visualization_msgs/MarkerArray message. http://docs.ros.org/api/visualization_msgs/html/msg/MarkerArray.html.
36. ROS. 2016. The Marker message. http://wiki.ros.org/rviz/DisplayTypes/Marker.
37. OctoMap. 2016. An efficient probabilistic 3D mapping framework based on octrees. http://octomap.github.io/.
38. MathWorks. 2016. find. http://www.mathworks.com/help/matlab/ref/find.html.

Environment for the Dynamic Simulation of ROS-Based UAVs

Alvaro Rogério Cantieri, André Schneider de Oliveira, Marco Aurélio Wehrmeister, João Alberto Fabro and Marlon de Oliveira Vaz

Abstract The aim of this chapter is to explain how to use the Virtual Robot Experimentation Platform (V-REP) simulation software with the Robot Operating System (ROS) to create, control, and collect signals from a generic multirotor unmanned aerial vehicle (UAV) in a simulation scene. This tutorial explains all the steps needed to select a UAV model, assemble and configure the propellers, configure the dynamic parameters, add sensors, and finally simulate the scene. The final part of the chapter presents an example of how to use MATLAB to create control scripts using ROS and also collect data from sensors such as accelerometers, gyroscopes, GPS, and laser scanners.

Keywords Multirotor simulation · Multirotor ROS · Hexacopter PID · V-REP hexacopter

1 Introduction

Unmanned aerial vehicles (UAVs) are currently one of the most interesting areas of robotics, with many studies taking place in the scientific community. This kind of research is popular because the vehicles and control electronics are becoming smaller and cheaper. One common difficulty of working with UAVs is the fragility of this type of aircraft. If a control strategy works badly, an accident can occur and damage the vehicle.
Another difficulty when beginning real tests with a UAV is the need for a large, controlled test area, which is not commonly available to research labs and education centers. Virtual environments are a powerful tool for UAV simulation and enable the careful assessment of new algorithms without the risk of damage. This kind of software minimizes the time spent on tests and the risks associated with them, making development more efficient.

A.R. Cantieri (B) · M. de Oliveira Vaz
Federal Institute of Parana, Rua Joao Negrao, 1285, Curitiba, Brazil
email: [email protected]
URL: http://www.utfpr.edu.br/
A.S. de Oliveira · M.A. Wehrmeister · J.A. Fabro
Federal University of Technology - Parana, Av. Sete de Setembro, 3165, Curitiba, Brazil
© Springer International Publishing AG 2017
A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_17

Fig. 1 Main UAV topologies

This chapter presents the development of a virtual environment for dynamic UAV simulation using the Virtual Robot Experimentation Platform (V-REP) simulation software and Robot Operating System (ROS). The chapter is divided into five sections, beginning with a basic review of multirotor flight and finishing with a summary of all the steps needed to assemble and control a virtual multirotor aircraft. Section 2 presents some basic background regarding multirotor aircraft movement and flight, explaining the concepts of roll, pitch, yaw, and throttle. It also describes how it is possible to create movement in the desired direction by changing the propeller rotations. Section 3 presents the principal topologies of UAVs, focusing on multirotor or multicopter aircraft (Fig. 1), and explaining their particular characteristics, assembly, and propulsion forces.
The objective is to present the basic concepts behind this kind of aircraft and their flight, making it easier to understand the necessary actions for motion control and stabilization. Section 4 introduces V-REP and shows how to create an environment scene and a virtual UAV model fully compatible with ROS on V-REP, as shown in Fig. 2. The step-by-step construction of an example model is shown, which includes the assembly of the UAV frame, propeller and motor system, and sensors. Some important considerations about the simulation software are discussed, showing the details of the model manipulation and its configuration parameters.

Fig. 2 Virtual hexacopter style UAV

Fig. 3 Virtual environment for the dynamic simulation of UAVs

The final section focuses on the interconnection between the virtual UAV and ROS, with careful discussion about the design of the ROS nodes (publishers and subscribers). The main objective of this section is to create nodes that are nearly identical to the ones for real equipment with respect to the message type and its update frequency. The simulation is based on a distributed PID stabilization controller running on MATLAB scripts through ROS nodes. The control signals are presented in V-REP's simulation time graphs to express the UAV behavior during flight, as shown in Fig. 3.

2 Basic Multirotor Flight Concepts

Modeling the flight of a multirotor aircraft is a complex problem. It depends on the kinematic and dynamic characteristics of the aircraft, which change for every new model or configuration. In this tutorial chapter, the three-dimensional motion of the aircraft is considered only in terms of the basic moves, roll, pitch, yaw, and throttle, created by the difference in rotation of the propellers. The study of the dynamic characteristics of the aircraft and how to create these characteristics in the model is beyond the scope of this chapter.
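To make the mapping from these four commands to individual propeller speeds concrete before the geometric discussion, here is a minimal Python sketch of a motor mixer for a "+"-style hexacopter. The motor layout, signs, gains, and the alternating spin directions are generic textbook assumptions, not values taken from this chapter:

```python
# Illustrative motor mixer for a "+"-style hexacopter (assumed layout).
# Motors are numbered 0..5 counter-clockwise, motor 0 on the front arm;
# spin alternates +1/-1 so that rotational torques cancel at hover.
import math

def mix(throttle, roll, pitch, yaw):
    """Map throttle/roll/pitch/yaw commands to six motor speeds."""
    speeds = []
    for i in range(6):
        angle = math.radians(60 * i)      # direction of arm i
        spin = 1 if i % 2 == 0 else -1    # alternating rotation directions
        s = (throttle
             - roll * math.sin(angle)     # roll: speed up one side, slow the other
             + pitch * math.cos(angle)    # pitch: bias front arms against back arms
             + yaw * spin)                # yaw: bias one spin group against the other
        speeds.append(s)
    return speeds

# Pure throttle: all motors get the same speed, so the vehicle climbs
# without rotating.
print(mix(10.0, 0.0, 0.0, 0.0))
```

Adding roll, pitch, or yaw commands biases individual motors while the total commanded thrust stays balanced, which matches the qualitative description above.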
To understand multirotor control and flight, we first need to understand the different moves of a multirotor in space. The three angles of rotation, roll, pitch, and yaw, are used to describe the aircraft's orientation in space. Figure 4 shows the orientation of these axes relative to the center of the multirotor body.

Fig. 4 Roll, Pitch and Yaw

The multirotor body rotates around these axes to move horizontally in space. For instance, to make the aircraft move forward, we rotate the frame around the pitch axis, elevating its back, which creates a force that pushes the aircraft forward. If the multirotor body rotates around the roll axis, the aircraft will move sideways. Rotation around the yaw axis reorients the front of the aircraft in another direction.

The vertical moves of the aircraft are an effect of an imbalance between gravity and thrust. When these forces are equal, the aircraft stays static at the same altitude. To make the multirotor go up, it is necessary to increase the velocity of all the propellers by the same amount. This increases the thrust and makes the aircraft gain altitude. In contrast, to make the multirotor dive, we decrease the velocity of all the propellers, making the thrust force smaller and allowing altitude to be lost.

All the spatial moves of a multirotor aircraft are achieved by changing the propeller rotation velocities. If the number of propellers of the multirotor configuration is even, as on quadcopters, hexacopters, and octocopters, the moves are created by increasing the velocity of the motors on one side and decreasing the velocity of those on the other side. In practical situations, changes must be small to avoid the loss of flight control and stability. Figure 5 shows an example pitch rotation and its associated move.

Fig. 5 Pitch rotation example

Yaw rotation occurs in a somewhat different way. This kind of rotation results from the rotational torque created by the propellers of the aircraft.
To prevent this situation, multirotor aircraft are assembled such that the rotation of each rotor on the frame compensates for another rotor that is located on the other side of the frame and rotates in the opposite direction. This neutralizes the rotational torques and keeps the yaw rotation at zero. Figure 6 shows the propellers with the opposite rotation directions to balance the rotational forces. When we want to re-orient the front of the aircraft in another direction, we must create yaw rotation by increasing or decreasing the rotation of a specific group of propellers. Figure 7 shows this effect.

Fig. 6 Rotational torques created by the propellers

Fig. 7 Unbalanced propeller velocities and yaw rotation of the aircraft

3 Multirotor Configurations and Specific Characteristics

Several models of multirotors are available today, with many frame configurations and specific characteristics. The most common models are quadcopters, hexacopters, octocopters, tricopters, and V-tails. The several kinds of configurations, construction materials, motors, propellers, and other components create a different flight performance for each aircraft. The thrust force, payload, velocities, and other characteristics will change for each aircraft, but in general, fundamental characteristics like total payload and the capacity to react to external perturbations can be compared across the basic multirotor configurations. This section briefly describes the most common configurations and their fundamental characteristics.

Fig. 8 Tricopter multirotor example

The tricopter configuration is one of the cheaper assemblies and was popular for some time. This configuration is shown in Fig. 8. It has an odd number of propellers, demanding additional care to control yaw rotation. In all other configurations, pairs of propellers rotating in opposite directions cancel each other's rotational torque. The two front propellers rotate in opposite directions, mutually canceling the rotational torque.
The back propeller does not have torque cancellation, and it is hence necessary to create a counter-torque on the frame by rotating the propeller by an appropriate angle. This is achieved by mounting this propeller on a horizontally fixed servo-motor that turns the propeller to the necessary inclination. This kind of problem can also be solved by adding a second back propeller on the frame on the same axis, but directed towards the ground and rotating in the opposite direction. This arrangement is called a Y-4 multirotor, and it decreases the mechanical complexity of the arrangement.

The currently most popular arrangement of propellers is the quadcopter. This configuration provides good thrust force and stability, and its components are relatively low cost. The quadcopter can be assembled in two different ways, in an "x" or "+" arrangement. Figure 9 shows these two configurations. Controlling the movement of a quadcopter is relatively simple because of its motor symmetry. The energy consumed by four propellers is less than that consumed by the hexacopter and octocopter configurations, but the thrust force is smaller in most cases. Another disadvantage is the fact that if one propeller fails, the electronic flight controller cannot stabilize the aircraft, resulting in a crash.

Fig. 9 Quadcopter frames X and +

Fig. 10 Hexacopter frames X and +

The hexacopter is a good option for achieving higher thrust values and flight stability. Like the quadcopter, a hexacopter can be assembled using x or + configurations, as shown in Fig. 10. The presence of six propellers assures better flight security, because if one propeller fails, the others are able to maintain the thrust force and balance necessary to avoid a loss of control. Although the power consumption is greater than that of a quadcopter, the use of larger batteries can assure equivalent flight time, as the thrust force of the six propellers can carry more weight.
This weight capacity also allows the use of a heavier sensor payload, which makes it a good choice for scientific and research applications.

The octocopter is very similar to the hexacopter, but the eight motors provide more thrust force and better immunity to external perturbation. It can be assembled in x and + configurations, like the hexacopter and quadcopter. The eight propellers consume a larger amount of energy than the other multirotor configurations, but this disadvantage is compensated for by its flight stability, which makes it a good choice for outdoor applications. Figure 11 shows common octocopter configurations.

Fig. 11 Octocopter frame

Fig. 12 V-tail multirotor

A special case of a quadcopter is the V-tail configuration. In this kind of multirotor, the two back propellers are inclined outside of the frame. This inclination offers some additional acrobatic performance, but leads to lower power efficiency and airflow interference between the two back propellers. Figure 12 shows this model.

For all the above configurations except the V-tail, it is possible, for each upward-facing propeller, to add another propeller pointing downward, resulting in a dual-propeller multirotor configuration. These configurations provide more stability and thrust force, and have the additional advantage of helping the associated propellers cancel the rotational torque. The problem with this kind of configuration is the higher cost of components and power consumption, making them less common.

4 Multirotor Model Creation and Scene Composition in V-REP

V-REP is an abbreviation of the Virtual Robot Experimentation Platform, a software platform created for professional robot systems development. The V-REP simulator offers good flexibility, strong simulation tools, and a wide number of robot and component models, which makes it an optimal choice for developing applications in all robotic areas and a good alternative to other simulation software like the Gazebo simulator.
V-REP is available for the Linux, Windows, and macOS operating systems. The educational version is free to use and can be downloaded from the Coppelia Robotics website (www.coppeliarobotics.com). For Linux users, it is sufficient to download the software and run it in a terminal using the "sh vrep.sh" command. When running, the V-REP interface will appear like the example in Fig. 13. This figure shows the components of the interface such as the simulation window, menus, and other tools used to create and run the simulation.

An interesting feature in this figure is the presence of some commercial robot models in the model browser window. These models are created and offered in the software by users or partners and provide a great opportunity to work with some expensive robots without the necessity of purchasing a real one.

Fig. 13 V-REP interface

Fig. 14 One complex scene

A good knowledge of all the tools, components, and resources available in the software is important before beginning to work with V-REP. Reading the V-REP User Manual and studying the basic tutorials available on the software's website are the best ways to begin working with the software. For a better understanding of the actions and steps shown in this tutorial, we strongly recommend this study before starting the experiments [1].

To begin the hexacopter simulation, first the scene must be created and the desired components must be included in it. A scene is the virtual environment of a simulation in V-REP, which contains all the simulation elements, objects, and scripts that make the simulation work. A default scene contains cameras and illumination objects as well as the main script for the simulation. The main script is responsible for running all the associated scripts of the simulation, and it is not supposed to be modified. Environment objects can be added to the scene as static or dynamic objects. A collection of object models is available in V-REP's model menu.
To add an object to the scene, we can drag and drop the object model to locate it at the desired position. Figure 14 shows a complex environment created in a scene with common V-REP models. The behavior of a model is described by its model script and properties. The properties of a model are configurable by accessing the "Object Properties Toolbox." There are several configuration parameters available for each model, and we strongly recommend reading the V-REP User Manual to understand them better. The most relevant parameters for the example in this chapter are as follows:

• Collidable: Allows software collision detection for the selected object.
• Measurable: Allows software minimum distance calculation for the selected measurable object.
• Detectable: Allows proximity sensor detection for the selected detectable object.
• Renderable: Allows vision sensor detection for the selected renderable object.

Fig. 15 Toggle custom user interface button

Let us now create the scene with the necessary components to perform the example simulation. To create it, the following tools are necessary:

• Four slider buttons to control the horizontal and vertical position of the hexacopter.
• Four graph components to record the position data of the hexacopter.
• Four floating view components to show the data recorded by the graphs.
• The hexacopter model.

To create the slider button, click on the "Toggle custom user interface edit module" button on the toolbar. This button is shown in Fig. 15. The pop-up menu started by this button is shown in Fig. 16. We now create the buttons by clicking on "Add new user interface," setting the cells to 10 × 4. This action will create a one-button model, as shown in Fig. 17. To create a slider on the button base, click on the cells of one line while holding the Ctrl key to select them. Next, click on "Insert merged button" and select the type "Slider." In the "Button label" box, enter the button label, which is "Throttle" for the first button.
Repeat these operations for the other three buttons, labeling them "Roll," "Pitch," and "Yaw." On the left side of the screen, the "Custom User Interface" window shows the name of this button, in this case "UI." This name can be changed by clicking on the text, but for the example simulation, it must be retained, because the hexacopter script uses this name to get the values from these buttons. In this example, another slider button is needed to control the moves of the hexacopter. Repeat the steps above, labeling these buttons "Height," "Roll," "Pitch," and "Yaw." The name of this button is "UI0."

Fig. 16 Pop up menu

Fig. 17 Slider button model

Fig. 18 Scene Object Properties window

Now we create the graphs to record the hexacopter position data. This can be done using

Add –> Graph

The graph will appear in the "Scene Hierarchy" window. Rename it to "Graph Height." To associate one data stream with this graph, click on the graph icon in the "Scene Hierarchy" window. A "Scene Object Properties" box like the one shown in Fig. 18 will appear. Click on "Add new data stream to record," select the item "Various: user defined" on the "Data Stream Type" button, and select "User Data" on the "Object/Items to record" button. In the "Data stream record list" window, rename the label "Data [User data]" to "Height_Desired [User data]." To finish, add a new data record on the same graph and rename it "Height_Controlled [User data]."

Fig. 19 Completed scene

Now it is necessary to associate the graph with a "floating view" window, where the data will be drawn. Right click on the scene and click on

Add –> Floating view

An empty floating window will appear on the screen. To associate the graph with this window, click on the window and click on

View –> Associate view with selected graph

The graph must be selected in the scene to perform this association. This step completes the creation of the height graph display window used in the example.
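The "Height_Desired" and "Height_Controlled" streams recorded here are typically fed by a height controller. The chapter's actual script is not reproduced at this point; the following Python sketch only illustrates the PID idea on a crude height model, with invented gains and plant constants:

```python
# Minimal PID height controller acting on a toy vertical-motion model.
# All gains and the plant constants below are illustrative assumptions,
# not values from the chapter's hexacopter script.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: thrust changes vertical velocity, gravity pulls down
# (0.1 kg of mass lumped into the gravity term).
pid = PID(kp=2.0, ki=0.5, kd=1.0, dt=0.05)
height, velocity = 0.0, 0.0
for _ in range(400):                  # 20 simulated seconds
    thrust = pid.step(2.0, height)    # desired height: 2 m
    velocity += (thrust - 9.81 * 0.1) * 0.05
    height += velocity * 0.05
print(round(height, 2))
```

In the real scene the same structure runs inside the hexacopter child script, with V-REP supplying the measured height, and the two graph streams would plot the setpoint against the controlled value.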
To complete the scene, repeat all the steps to create the other three graph windows for showing the roll, pitch, and yaw data. After these steps, the basic scene is ready. Figure 19 shows the result of these actions.

Once the scene is ready, the virtual multirotor model may be created. All multirotor models can be simulated in V-REP in a similar way if the motion characteristics of each kind of aircraft are correctly considered. The most common way to create this kind of model is by drawing it directly in the simulator using primitive forms. We can also import a previously designed CAD model, as done in this example. Some websites offer this kind of model for download, like the GrabCAD Community website [2], so it is easy to find some common configuration models for use in this kind of simulation. The other alternative is to draw a specific model in CAD software and import it into V-REP. The models drawn in V-REP provide better computational performance during the simulation because of their low complexity. Nevertheless, the creation of a detailed model can be laborious, and the task of importing a prepared CAD frame is attractive. Currently, the CAD data formats supported by V-REP are OBJ, STL, DXF, 3DS, and Collada. The importation is done using

File –> Import –> Mesh

The frame models shown in Figs. 8, 9, 10, 11, and 12 are available at https://sourceforge.net/projects/rosbook-2016-chapter-4/files/. All these models were created in V-REP using simple form object tools (except the propellers), and can be easily extended to more complex models. For the initial simulation tests, they offer good performance and are easy to use. For the tutorial example, a CAD model of a hexacopter + configuration was downloaded and imported into V-REP. This was done to keep the tutorial's presentation and use of the tools in V-REP simple.
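Since the number of triangles in an imported mesh directly affects simulation performance, it can be handy to inspect a CAD file before importing it. The Python helper below is hypothetical (not part of the chapter's workflow) and simply counts the facets of an ASCII STL file:

```python
# Count triangles ("facet" records) in an ASCII STL mesh.
# Hypothetical helper for inspecting a model before importing it into V-REP.
def count_stl_triangles(stl_text):
    return sum(1 for line in stl_text.splitlines()
               if line.strip().startswith("facet normal"))

# A minimal one-triangle ASCII STL for demonstration:
sample = """solid demo
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid demo"""
print(count_stl_triangles(sample))   # -> 1
```

Running a check like this on a detailed frame gives a rough idea of how much decimation will be needed once the model is inside the simulator.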
The imported CAD models are treated by the software as a single object composed of non-pure, non-convex forms, considered by V-REP to be the most complex kind of object that can be simulated. A close inspection of this model shows that it is composed of a large number of triangles assembled into a complex mesh. The more detail the model has, the more triangles are needed to make it. Figure 20 shows one example of this composition in detail.

Fig. 20 Details of the model frame composition before triangle minimization

Fig. 21 Details of the model frame composition after triangle minimization

To minimize this effect, we can reduce the complexity of the frame using the tools "decimate selected shape" and "extract inside selected shape," both available in the Edit menu. The first tool reduces the total number of triangles in the frame by merging triangles that belong to the same elements of the frame. This leads to a loss of detail, but such a loss may not be significant when simulating the mechanical behavior of the aircraft. The magnitude of the reduction can be greater or smaller according to the chosen parameter settings in the toolbox. For this example, the default setting, which keeps 20% of the triangles, downsizes the model from 16,029 to 3,205 triangles. The second tool removes the triangles that compose the inner parts of the object, which are not visible in the simulation. After applying these tools in sequence, the final number of model triangles is reduced to 351. Figure 21 shows the frame after these operations have been performed.

The next step is to associate the propellers with the frame at their specific positions. A propeller attached to the frame will create a force and a torque on it according to its position relative to the reference frame. The force is created in the propeller's Z-direction, and the torque is created in the XY-plane and applied to the center of mass of the frame.
Hence, for the hexacopter example, six upward forces are added to push the aircraft up, and six torques make the aircraft rotate around its center of mass. If the six forces are set to the same value, the hexacopter stays at a relatively static inclination but, without a control script, the random velocities present in the propeller scripts create unbalanced forces, and the hexacopter moves in a random direction. To avoid undesired yaw rotation, it is necessary to put negative and positive signs in the propeller scripts, depending on their position on the frame, just as for real aircraft propeller rotation. Figure 25 shows the hexacopter frame and the six propellers before they are attached to their positions, and the association details.

Fig. 22 Changing the object translation step factor

In the scene hierarchy, the six propellers are not yet part of the frame assembly. The next step is to associate each one with the frame by first selecting the desired propeller and then the frame model and clicking on

Edit –> Make last selected object parent

After associating all the propellers with the frame, we can rename them for better identification in the scene hierarchy. To do this, we simply double click on the object name in the hierarchy list, change the name, and hit "Enter." We must now put each propeller in the correct position on the frame. We simply select the desired propeller and click the "object shift item" toolbar button. For a more precise positioning of the propeller, the translation step size parameter can be reduced by clicking on

Tools –> Object manipulation settings

Here, the value was changed to 0.001, as shown in Fig. 22. After all the propellers have been properly positioned, the example hexacopter looks like the one in Fig. 23.

Fig. 23 Assembled hexacopter

When the simulation is running, each propeller applies an upward force on the frame at the position where it is allocated, proportional to the speed signal set by the script.
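The balance described here (equal speeds producing pure lift while the alternating spin directions cancel the yaw torque) can be checked numerically. In the following Python sketch the thrust and drag-torque constants are invented purely for illustration:

```python
# Net vertical force and yaw torque for six propellers with alternating
# spin directions. KF (thrust) and KM (drag torque) are made-up constants.
KF, KM = 1.0e-5, 2.0e-7
SPINS = [+1, -1, +1, -1, +1, -1]   # rotation direction of each propeller

def net_force_and_yaw_torque(speeds):
    force = sum(KF * w**2 for w in speeds)                       # all thrusts point up
    torque = sum(s * KM * w**2 for s, w in zip(SPINS, speeds))   # reaction torques
    return force, torque

# Equal speeds: lift is produced but the reaction torques cancel exactly.
f, t = net_force_and_yaw_torque([500.0] * 6)
# Speeding up one spin group (a yaw command) leaves a net yaw torque.
f2, t2 = net_force_and_yaw_torque([520, 480, 520, 480, 520, 480])
```

The first call yields positive lift with zero net torque, while the biased speeds of the second call produce the residual torque that rotates the frame, matching the sign convention needed in the propeller scripts.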
Thus, the mechanical behavior of the hexacopter will simulate the interactions of the six propeller forces and external forces, such as weight or inertial forces. To achieve a simulation that is closer to reality, we must define the hexacopter weight according to a real model. This is done by modifying the properties in the "Rigid Body Dynamic Properties" dialog box, in

Tools –> Scene Object Properties –> Show Dynamic Properties Dialog

This dialog box, shown in Fig. 24, allows us to change the configuration of the dynamic parameters of the object as necessary. We can specify the model mass, change the object inertia matrix, or set a specific material for its composition. The values of these properties change the way the object behaves in certain situations during the simulation.

Fig. 24 Shape dynamics properties dialog box

The calculation of the dynamic parameters is a complex issue and beyond the scope of this chapter, but for practical purposes, we can define the mass and center of mass for the hexacopter model. For this example, the mass was set to 0.5 kg and the center of mass to x = 0, y = 0, and z = 0.5 m from the origin of the coordinate system. These values were based on a real hexacopter with characteristics similar to the simulation model. If these values are difficult to determine, the alternative is to use the software to automatically calculate the parameters. The software is able to take a specific mass for the model material and use it to calculate the values of the total mass, center of mass, and inertia matrix. The calculation is performed only on a simplified model mesh, and some differences between the calculated values and the values of a real model should always be expected. To simulate the dynamic characteristics and behavior of the aircraft, a model that is closer to reality must be created, but for a great number of experiments, such as testing artificial intelligence algorithms, image-processing-based navigation, and simultaneous localization and mapping, among others, this kind of simplified aircraft model is usually adequate (Fig. 25).

To complete our model, it is necessary to associate the sensors for control and application data acquisition. V-REP has a set of interesting sensor models, including accelerometers, gyroscopes, the Global Positioning System (GPS), and some commercial sensors such as laser scanners and the Kinect. To use any of these sensors, it is only necessary to add it to the scene and, if applicable, associate it with the aircraft frame. For this example, some basic sensors were associated with the hexacopter frame: an accelerometer, a gyroscope, a GPS, and a laser pointer distance sensor, not all of which were used for aircraft control. The laser range finder and gyroscope are sufficient to control the vehicle position and stabilize the flight in this case, but other sensors can be used to improve the control and movement capacities. The group of sensors collects signals and sends them to the hexacopter control script in V-REP as well as to ROS via the ROS node. Each of these sensors runs a specific script for its work. By changing these scripts, a user is able to achieve different measuring parameters for a specific sensor, or even change its global operation.

Fig. 25 Scene showing the hexacopter frame plus six propellers

Fig. 26 Sensors on the hexacopter frame

Figure 26 shows the three selected sensors positioned at the center of the hexacopter frame. Each one of these sensors is represented as a small cube in the scene. On the left side of the image, the hierarchical list of the hexacopter objects shows the sensors and sensor scripts as a part of the frame. We now analyze the behavior of the individual sensors by looking at the sensor scripts.
The first sensor we analyze is the GPS sensor. This sensor returns the spatial X, Y, and Z position relative to the origin of the scene's world coordinates. One interesting point is that the script adds some "noise" to the position measurement to simulate the measurement error of a real GPS sensor. This sensor is not based on a commercial model, but is a simple model provided by the software, so its performance will not be close to real GPS systems. For specific experiments, more accurate models will probably be needed.

The second sensor we consider is the accelerometer. This sensor calculates the object's acceleration along the X-, Y-, and Z-axes of the hexacopter frame. This sensor does not add noise to the measured values like the GPS sensor does, but noise can be added if necessary by changing its associated script, just as for all the other sensors provided by the software.

The third sensor is the gyroscope. The gyroscope measures the angular velocity of the object about the X-, Y-, and Z-axes and returns the results. These values are essential for the correct operation of the aircraft controller.

The laser range finder sensor is added to the base of the hexacopter frame, pointing at the ground. The function of this sensor is to obtain the absolute height of the aircraft. The laser pointer sensor measures the distance between the laser emitter and the object that reflects the laser beam. In the MATLAB control script example, this sensor is pointed toward the ground and returns the height of the aircraft.

To run a simulation of the scene, it is necessary to associate the V-REP control scripts with the hexacopter model and the propellers. The scripts used in this example are available in the book's online repository. Download all the scripts, then copy and paste the text associated with each element into the appropriate script.
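As an aside before wiring the scripts up: the noise injection performed by the GPS script, described above, can be mimicked in a few lines of Python. The 0.1 m standard deviation is an illustrative assumption; the real value lives inside the V-REP sensor script:

```python
# Mimic of a simulated GPS reading: the true position plus Gaussian noise.
# The 0.1 m standard deviation is an illustrative assumption.
import random

def gps_reading(true_xyz, noise_std=0.1, rng=random):
    return [c + rng.gauss(0.0, noise_std) for c in true_xyz]

random.seed(42)                       # reproducible demo
print(gps_reading([1.0, 2.0, 0.5]))  # noisy X, Y, Z near the true values
```

Setting the standard deviation to zero recovers the exact position, which is a quick way to verify that a controller misbehaves because of noise rather than because of its own tuning.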
To open an object script such as the one for the hexacopter, click on the object script icon in the scene hierarchy window. Replace all the text present in this script with that of the downloaded one. The hexacopter script implements a simple position control based on a PID algorithm. The operation of this PID is not the focus of this chapter, but it is needed to allow the hexacopter model to fly in a relatively stable manner and to move during the simulation. To change the position of the hexacopter, move the slider buttons of the "Position controller". Small changes at each step are better, to avoid losing control of the model. The second set of slider buttons allows us to apply "perturbations" to the hexacopter position, to show how the PID algorithm works when it seeks the original position.

5 ROS Virtual Hexacopter Control

After creating the hexacopter virtual testing environment, it is necessary to make the vehicle compatible with ROS. ROS is the communication tool between the control nodes (i.e., any software, in this case MATLAB Simulink) and the real or virtual vehicle. For this example, a set of ROS topics was specified to provide information about the propeller speeds, the hexacopter linear and angular velocity twists, the transformations between the reference systems, the absolute position of the hexacopter on the map, the hexacopter position relative to the odometry reference (starting position), and the laser sensor reading, which is responsible for determining the distance from the ground. All these topics are listed below.
/hexacopter/ground_distance
/hexacopter/laserscan
/hexacopter/odom
/hexacopter/pose
/hexacopter/propeller1
/hexacopter/propeller2
/hexacopter/propeller3
/hexacopter/propeller4
/hexacopter/propeller5
/hexacopter/propeller6
/hexacopter/twist
/rosout
/rosout_agg
/tf
/vrep/info

Once the ROS topics have been specified, it is necessary to create the V-REP script that enables communication between them. Two functions are used to make the script work with ROS, one for publishing (simExtROS_enablePublisher) and the other for subscribing (simExtROS_enableSubscriber). A more detailed description of these functions can be found in the simulator tutorials in [3, 4]. The full script is shown below.

V-REP script for the hexarotor interface with ROS

if (sim_call_type == sim_childscriptcall_initialization) then

    --- Start motor velocities
    motor1 = 0
    motor2 = 0
    motor3 = 0
    motor4 = 0
    motor5 = 0
    motor6 = 0

    --- Create float signals with the propeller velocities
    simSetFloatSignal('prop1', motor1)
    simSetFloatSignal('prop2', motor2)
    simSetFloatSignal('prop3', motor3)
    simSetFloatSignal('prop4', motor4)
    simSetFloatSignal('prop5', motor5)
    simSetFloatSignal('prop6', motor6)

    --- Handles for the sensors and the frame
    hexaHandle = simGetObjectHandle('Hexacopter_ROS')        -- hexarotor handle
    hokuyoHandle = simGetObjectHandle('Hokuyo')              -- laser scan handle
    acelHandle = simGetObjectHandle('Accelerometer')         -- accelerometer handle
    gyroHandle = simGetObjectHandle('GyroSensor')            -- gyroscope handle
    laserHandle = simGetObjectHandle('LaserPointer_sensor')  -- ground sensor handle

    --- Reference handles
    odomHandle = simGetObjectHandle('odom')  -- odometry reference
    mapHandle = simGetObjectHandle('map')    -- map reference

    --- Create ROS subscribers for the propeller velocities
    simExtROS_enableSubscriber('/hexacopter/propeller1', 1, simros_strmcmd_set_float_signal, -1, -1, 'prop1')
    simExtROS_enableSubscriber('/hexacopter/propeller2', 1, simros_strmcmd_set_float_signal, -1, -1, 'prop2')
    simExtROS_enableSubscriber('/hexacopter/propeller3', 1, simros_strmcmd_set_float_signal, -1, -1, 'prop3')
    simExtROS_enableSubscriber('/hexacopter/propeller4', 1, simros_strmcmd_set_float_signal, -1, -1, 'prop4')
    simExtROS_enableSubscriber('/hexacopter/propeller5', 1, simros_strmcmd_set_float_signal, -1, -1, 'prop5')
    simExtROS_enableSubscriber('/hexacopter/propeller6', 1, simros_strmcmd_set_float_signal, -1, -1, 'prop6')

    --- Create ROS publishers for odometry, pose and the ground sensor
    simExtROS_enablePublisher('/hexacopter/odom', 1, simros_strmcmd_get_odom_data, hexaHandle, odomHandle, '')
    simExtROS_enablePublisher('/hexacopter/pose', 1, simros_strmcmd_get_object_pose, hexaHandle, odomHandle, '')
    simExtROS_enablePublisher('/hexacopter/ground_distance', 1, simros_strmcmd_get_float_signal, -1, -1, 'laserPointerData')

    --- Create ROS publishers for the transformations
    simExtROS_enablePublisher('tf', 1, simros_strmcmd_get_transform, laserHandle, hexaHandle, '')
    simExtROS_enablePublisher('tf', 1, simros_strmcmd_get_transform, gyroHandle, hexaHandle, '')
    simExtROS_enablePublisher('tf', 1, simros_strmcmd_get_transform, acelHandle, hexaHandle, '')
    simExtROS_enablePublisher('tf', 1, simros_strmcmd_get_transform, hokuyoHandle, hexaHandle, '')
    simExtROS_enablePublisher('tf', 1, simros_strmcmd_get_transform, hexaHandle, odomHandle, '')
    simExtROS_enablePublisher('tf', 1, simros_strmcmd_get_transform, odomHandle, mapHandle, '')
end

In this script, the vehicle transformation sequence is also created, according to the ROS standard specified in [5], which results in the transformation trees shown in Fig. 28. The ROS interface with MATLAB is provided by the Robotics System Toolbox, available in Simulink in the 2015 and later versions of MATLAB, for Linux operating systems only. For more information about this topic, see [6–8]. The toolbox consists of three building blocks: one to publish information to ROS (Publish), another to read information from ROS (Subscribe), and a last one to create messages (Blank Message), as illustrated in Fig. 27.
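Before moving to the Simulink implementation, the closed-loop idea behind the hexacopter controller can be sketched in plain Python: a textbook PID acting on a measured quantity (here, height) to produce a thrust correction. This is only an illustrative sketch; the gains, names and the toy plant model are invented, not the chapter's actual controller.

```python
class PID:
    """Textbook discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy height loop: drive the height toward 1 m. The "plant" below is a crude
# integrator standing in for the hexacopter, not a real vehicle model.
pid = PID(kp=2.0, ki=0.05, kd=0.1)
height, dt = 0.0, 0.05
for _ in range(200):
    thrust = pid.update(1.0, height, dt)
    height += thrust * dt
```

In the actual setup, the measurement would come from the laser range finder topic and the output would feed the propeller-mixing function, with ROS carrying both streams between V-REP and the controller.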
The hexacopter can be operated through a set of read/write actions, performed by Simulink on ROS using the Robotics System Toolbox. The result is shown in Figs. 29 and 30. These diagrams can be grouped into a subsystem that handles the connection interface of the virtual hexacopter through ROS, represented by a large block with outputs (publishers for the actuators) and inputs (subscribers for the sensors).

Fig. 27 Robotics System Toolbox

Fig. 28 Hexacopter frames (ROS TFs)

Fig. 29 MATLAB interface for ROS publishers and subscribers - 1

The odometry represents the hexacopter position and orientation relative to the initial position. This information is of type nav_msgs/Odometry in ROS. The orientation is expressed as a quaternion, a vector of four elements (qx, qy, qz, qw), from which the hexacopter orientation is extracted. The necessary calculations are done by a Simulink block called MATLAB Function. In this block, a MATLAB script was written to calculate the Euler angles, as presented below.

MATLAB script for conversion of a quaternion to RPY angles

function [roll, pitch, yaw] = quat2rpy(x, y, z, w)
    % Convert the quaternion to roll-pitch-yaw using the built-in quat2eul
    eul = quat2eul([w, x, y, z]);
    roll = eul(2);
    pitch = eul(1);
    yaw = eul(3);
end

To perform the hexacopter movements, another MATLAB function was written, also placed in a MATLAB Function block. It is needed to send the speed signals that increase or decrease each propeller velocity depending on the desired motion. The motion code for this example is shown below.
MATLAB script for the hexarotor motion

function [prop1, prop2, prop3, prop4, prop5, prop6] = hexarotor(roll, pitch, yaw, thrust)
    hovering = 0.42;
    thrust = thrust + hovering;

    prop1 = thrust + 0    - (pitch/2) + yaw;
    prop2 = thrust + 0    - (pitch/2) - yaw;
    prop3 = thrust - roll + 0         + yaw;
    prop4 = thrust + 0    + (pitch/2) - yaw;
    prop5 = thrust + 0    + (pitch/2) + yaw;
    prop6 = thrust + roll + 0         - yaw;

    prop1 = single(prop1);
    prop2 = single(prop2);
    prop3 = single(prop3);
    prop4 = single(prop4);
    prop5 = single(prop5);
    prop6 = single(prop6);
end

With the motion blocks generated, it is next necessary to create a control mechanism to stabilize the motion of the aircraft model. For this example, a classical PID controller was created in MATLAB. The objective of this example is not to discuss the design of the control algorithm, but to show how to create a virtual aircraft in V-REP and perform flight control operations on it using MATLAB via ROS, as a basis for more complex experimentation. Hence, the principal task of this controller is to hold a fixed spatial position for the hexacopter without significant variation. This is achieved by measuring the roll, pitch, yaw, and height and performing corrections on the propellers to reduce the position and inclination errors to zero. For the proposed example, a height of 1 m at the (x = 0, y = 0) position was set as the target. The result of the control algorithm is shown in Fig. 31. In the ROS context, the inclusion of all these nodes and topics results in the logical structure shown in Fig. 32. The velocity of the propellers is controlled in the MATLAB Simulink model. One way to extend this simulation example and make the hexacopter move away from a fixed point is to change the code in this block. It is a good initial task for better understanding all the simulation parts and preparing to create other kinds of applications.
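The two MATLAB Function blocks (the quaternion conversion and the propeller mixer) can be transcribed into plain Python for offline checking. This is an illustrative sketch: the ZYX Euler formulas are the standard closed-form ones (not the chapter's code), while the 0.42 hovering offset and the per-propeller signs are taken from the script above.

```python
import math

def quat_to_rpy(x, y, z, w):
    """Convert a unit quaternion (x, y, z, w) to roll, pitch, yaw in radians
    using the standard aerospace ZYX sequence."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

HOVERING = 0.42  # base thrust offset used in the chapter's script

def hexarotor_mix(roll, pitch, yaw, thrust):
    """Map roll/pitch/yaw/thrust commands to the six propeller speeds,
    mirroring the hexarotor MATLAB function."""
    t = thrust + HOVERING
    return (
        t - pitch / 2 + yaw,   # prop1
        t - pitch / 2 - yaw,   # prop2
        t - roll + yaw,        # prop3
        t + pitch / 2 - yaw,   # prop4
        t + pitch / 2 + yaw,   # prop5
        t + roll - yaw,        # prop6
    )

# Identity attitude and a pure thrust command: all propellers receive the same speed
roll, pitch, yaw = quat_to_rpy(0.0, 0.0, 0.0, 1.0)
props = hexarotor_mix(roll, pitch, yaw, 0.1)
```

Note that the yaw contributions cancel in pairs across opposite propellers, so a pure yaw command changes the attitude without changing the total thrust.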
6 Final Considerations

This chapter was meant to be a starting point for more complex applications. The objective was to provide the minimal background necessary to develop a simple simulated multirotor aircraft control, making it possible for interested users to work with the tools without spending a long time learning them. Because of this, all the examples were simplified, and the implementation of real applications based on these tools will demand some additional work. All the scenes and scripts created can be downloaded from https://sourceforge.net/projects/rosbook-2016-chapter-4/files/. The V-REP simulation software is a powerful tool for developing many robot and automation simulations. The authors strongly recommend studying V-REP's manual, which is cited in the bibliography, and also following the software tutorials before starting to work on more complex simulations. The models, scripts, code, and other materials of interest used in this chapter are accessible online in the book's digital repository. Additional questions will be accepted by the authors at any time.

Fig. 31 Hexarotor controller in Simulink

Fig. 32 ROS nodes for Simulink control of the hexarotor

References

1. Coppelia Robotics. 2015. Virtual robot experimentation platform user manual, version 3.3.0. Technical report, Coppelia Robotics. http://www.coppeliarobotics.com/helpFiles/
2. GrabCAD. 2016. GrabCAD Community Web Site. https://grabcad.com/library
3. Coppelia Robotics. 2015. V-REP RosPlugin publishers. Technical report, Coppelia Robotics. http://www.coppeliarobotics.com/helpFiles/en/rosPublishers.htm
4. Coppelia Robotics. 2015. V-REP RosPlugin subscribers. Technical report, Coppelia Robotics. http://www.coppeliarobotics.com/helpFiles/en/rosSubscribers.htm
5. Meeussen, W. 2015. REP-105: Coordinate frames for mobile platforms. Technical report, ROS.org. http://www.ros.org/reps/rep-0105.html
6.
Mathworks. 2016. Get started with ROS in Simulink. MATLAB Documentation, Mathworks. http://www.mathworks.com/help/robotics/examples/get-started-with-ros-in-simulink.html
7. Corke, P. 2015. Integrating ROS and MATLAB [ROS topics]. IEEE Robotics Automation Magazine 22 (2): 18–20.
8. Mathworks. 2016. Get started with ROS. Technical report, Mathworks. https://www.mathworks.com/help/robotics/examples/get-started-with-ros.html

Author Biographies

Alvaro Rogério Cantieri has been an Associate Professor at the Federal Institute of Paraná (IFPR) since 2010. He obtained his undergraduate degree and Master's degree in electronic engineering at the Federal University of Paraná in 1994 and 2000, respectively. He is currently studying for his Ph.D. in electrical engineering and industrial informatics at the Federal University of Technology - Paraná (Brazil). He started his teaching career in 1998 in the basic technical formation course of the Polytechnical Institute of Paraná (Paraná, Brazil) and worked as Commercial Director of the RovTec Engineering Company, focusing on electronic systems development. His research interests include autonomous multirotor aircraft, image processing, and communications systems.

André Schneider Oliveira obtained his undergraduate degree in computing engineering at the University of Itajaí Valley (2004), Master's degree in mechanical engineering from the Federal University of Santa Catarina (2007), and Ph.D. degree in automation systems from Santa Catarina University (2011). He works as an Associate Professor at the Federal University of Technology - Paraná (Brazil). His research interests include robotics, automation, and mechatronics, mainly navigation systems, control systems, and autonomous systems.

Marco Aurélio Wehrmeister received his Ph.D. degree in computer science from the Federal University of Rio Grande do Sul (Brazil) and the University of Paderborn (Germany) in 2009 (double degree).
In 2009, he worked as a Lecturer and Postdoctoral Researcher at the Federal University of Santa Catarina (Brazil). From 2010 to 2013, he worked as a tenure-track Professor in the Department of Computer Science of the Santa Catarina State University (Brazil). Since 2013, he has worked as a tenure-track Professor in the Department of Informatics of the Federal University of Technology - Paraná (UTFPR, Brazil). From 2014 to 2016, he was Head of the MSc course on Applied Computing at UTFPR.

João Alberto Fabro is an Associate Professor at the Federal University of Technology - Paraná (UTFPR), where he has worked since 2008. From 1998 to 2007, he was with the State University of West-Paraná. He has an undergraduate degree in informatics from the Federal University of Paraná (1994), a Master's degree in computing and electrical engineering from Campinas State University (1996), and a Ph.D. degree in electrical engineering and industrial informatics from UTFPR (2003), and was recently a Postdoctoral Researcher at the Faculty of Engineering, University of Porto, Portugal (2014). He has experience in computer science, especially computational intelligence, and is actively researching the following subjects: computational intelligence (neural networks, evolutionary computing, and fuzzy systems) and autonomous mobile robotics. Since 2009, he has participated in several robotics competitions in Brazil, Latin America, and the World RoboCup, with both soccer robots and service robots.

Marlon de Oliveira Vaz obtained his undergraduate degree in computer science from the Pontifical Catholic University (PUCPR, 1998) and Master's degree in mechanical engineering from PUCPR (2003). He is now a Teacher at the Federal Institute of Paraná and is pursuing a Ph.D. in electrical and computer engineering at the Federal University of Technology - Paraná. He works mainly in the following research areas: graphical computing, image processing, and educational robotics.
Building Software System and Simulation Environment for RoboCup MSL Soccer Robots Based on ROS and Gazebo

Junhao Xiao, Dan Xiong, Weijia Yao, Qinghua Yu, Huimin Lu and Zhiqiang Zheng

Abstract This chapter presents the lessons learned while constructing the software system and simulation environment for our RoboCup Middle Size League (MSL) robots. The software is built on ROS, so the advantages of ROS, such as modularity, portability and extensibility, are inherited. The tools provided by ROS, such as RVIZ, rosbag and rqt_graph, just to name a few, can improve the efficiency of development. Furthermore, the standard communication mechanisms (topics and services) and software organization method (packages and meta-packages) introduce the opportunity to share code within the RoboCup MSL community, which is fundamental to forming hybrid teams. As is known, evaluating new algorithms for multi-robot collaboration on real robots is expensive; this can instead be done in a proper simulation environment. In particular, it would be convenient if the ROS-based software could also be applied to control the simulated robots. As a result, the open source simulator Gazebo was selected, which offers a convenient interface with ROS. A Gazebo-based simulation environment was therefore constructed to visualize the robots and simulate their motions. Furthermore, the simulation has also been used to evaluate new multi-robot collaboration algorithms for our NuBot RoboCup MSL robot team.

Keywords Robot soccer · Gazebo · ROS · Multi-robot collaboration · Simulation

J. Xiao · D. Xiong · W. Yao · Q. Yu · H. Lu (B) · Z. Zheng
College of Mechatronics and Automation, National University of Defense Technology, Changsha 410073, China
e-mail: [email protected]
J. Xiao e-mail: [email protected]
D. Xiong e-mail: [email protected]
W. Yao e-mail: [email protected]
Q. Yu e-mail: [email protected]
Z. Zheng e-mail: [email protected]
© Springer International Publishing AG 2017
A.
Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_18

1 Introduction

RoboCup,1 short for Robot World Cup, is an international initiative to foster research in artificial intelligence (AI) and mobile robotics by offering a publicly appealing, but formidable, challenge. In other words, it is a perfect combination of sport and technology, and thus has attracted many researchers and students. Since it was founded in 1997, it has promoted the research field for almost two decades [1, 2]. The final goal of RoboCup is that a team of fully autonomous humanoid soccer robots will beat the human World Cup champion team by 2050 [3]. Besides soccer games, other competition stages have been introduced into RoboCup along with its growth. As a result, the contest currently has six major competition domains, namely RoboCup Soccer, RoboCup Rescue, RoboCup@Home, RoboCup@Work, RoboCup Logistics League and RoboCupJunior, and each domain has several leagues and sub-leagues. More information can be found in the Robot World Cup book series published by Springer [4, 5]. For RoboCup Middle Size League (MSL), the robots can be designed freely as long as they stay below a maximum size and a maximum weight. The game is played on a carpet field of size 18 m × 12 m, with white lines and circles as landmarks for localization. In the competition, all the robots are completely distributed and autonomous, which means they must use their own on-board sensors to perceive the environment and make decisions. According to the rules, wireless communication is allowed to share information among a team of robots, which can help cooperation and coordination. Therefore, RoboCup MSL is a standard and challenging real-world test bed for multi-robot control, robot vision and other related research subjects.
As one league with the longest history among RoboCup, lots of scientific and technical progresses have been achieved in RoboCup MSL, an overview can be found in [6]. Its games are also becoming more and more fluent and fierce. For example, the robots can actively handle the ball for stepping forwards, turning and stepping backwards, can make dynamic long passes, and their velocity can reach about 5 m/s, etc. Therefore, in recent years, the RoboCup MSL final has been serving as the grand finale of RoboCup, which gives the opportunity to all audiences and participants to enjoy the game together. A typical competition scenario has been drawn in Fig. 1. However, this brings lots of difficulties for new teams to catch-up, because it is not easy and very time-consuming to design and implement a team of RoboCup MSL soccer robots from the very beginning. Since its birth in 2010, as an open source software, ROS has attracted a huge number of robotic researchers and hobbyist, which have been serving as an active and high-productive community to boost the development of ROS. The community not only optimizes the code but also stands in the front of robotic research, i.e., implementations of state-of-the-art algorithms can be found in ROS. Therefore, ROS is becoming the de facto standard for robotic software. Built under ROS, the robot software components can be well and easily organized. Strictly speaking, the code 1 http://www.robocup.org/. Building Software System and Simulation Environment … Fig. 1 A typical scenario of the RoboCup MSL competition: a match between TU/e and NuBot in RoboCup 2014 Joao ˜ Pessoa, Brazil implementation can achieve high modularity and re-usability. Meanwhile, lots of useful tools have been provided by ROS, which can ease data logging and sharing, and code sharing among RoboCup MSL teams. In April 2014, a question is raised among our team, i.e., whether ROS is suitable for RoboCup MSL? 
As a result, we decided to drop our self-developed software framework (more than 10 years of development) and build the software system for our soccer robots based on ROS. Then, in July, we participated in RoboCup 2014 with robots with a new "soul". The competition brought two positive remarks. First, the software was more robust than ever before. Second, the work was acknowledged by other teams. In this chapter, we detail the ROS-based software, and hope to provide a valuable reference for building ROS-based software for distributed multi-robot systems. In addition, evaluating new algorithms for multi-robot collaboration using real robots is very time consuming, which demands a high-fidelity simulation environment. In particular, it would be efficient if the simulated robot had the same control interface as the real robot. In fact, there are many robotic simulators, either commercially available or open source, such as V-REP [7], Gazebo [8], Webots [9] and LpzRobots [10], just to name a few. A detailed introduction to and comparison of state-of-the-art robotics simulators can be found in [11, 12]. Among the simulators, Gazebo offers a convenient interface, i.e., from the software interface point of view, there is no difference between controlling a real robot and its Gazebo dummy. In other words, the algorithms evaluated using the simulation environment can be applied to the real robots without change. Therefore, we chose Gazebo to construct the simulation environment. This chapter covers the design and implementation of the software system, the simulation environment, and their interfaces for our RoboCup MSL robots. It is based on our previous work [13, 14]. The code has been made open source: the ROS-based software can be accessed at https://github.com/nubot-nudt/nubot_ws, while the simulator can be accessed at https://github.com/nubot-nudt/gazebo_visual. A video showing a simulated match using our multi-robot simulator can be found at https://youtu.be/rMuAZGf65AE. The remainder of the chapter consists of the following topics:
• First, the background is introduced in Sect. 2.
• Second, a brief introduction to our NuBot multi-robot system is given in Sect. 3, including the mechanical structure, the perception sensors and the electrical system.
• Third, the ROS-based software is detailed in Sect. 4.
• Fourth, the Gazebo-based simulation environment is described in Sect. 5, with a focus on how we designed the same control interface for the real and simulated robots.
• Fifth, two short tutorials are given for single-robot simulation and multi-robot simulation in Sects. 6 and 7, respectively.
• A short conclusion is given in Sect. 8.

2 Background

It is not trivial to design robots for highly competitive and dynamic environments like RoboCup MSL, so many hardware designs and software algorithms have been proposed; see [6] for an overview. In this section, we give a brief introduction from which readers can acquire more details about the achievements in RoboCup MSL, and the efforts that have been made to reduce the difficulty of developing RoboCup MSL robots. Our previous works will also be introduced in this section. RoboCup MSL has achieved scientific results in the robust design of mechanical systems, sensor fusion, tracking, world modelling and distributed multi-robot coordination. A special issue named "Advances in intelligent robot design for the RoboCup MSL" was published in 2011 [15]. In this issue, state-of-the-art research on mechatronics and embedded robot design, vision and world modelling algorithms, and team coordination and strategy was presented. The paper [6] overviews the history and current state of the RoboCup MSL competition, and also presents a plan to further boost scientific progress and to attract new teams to the league.
Surveys about team strategies, vision systems and visual perception algorithms in RoboCup MSL can be found in [16–18]. Recently, some RoboCup trustees reported on the history and state of the art of the RoboCup soccer leagues, in which they had very positive comments on RoboCup MSL [19], e.g., "This Middle Size League has had major achievements during the last few years. Middle Size League teams have developed software that allows amazing forms of cooperation between robots. The passes are very accurate and some complex, cooperatively made goals are scored after passing the ball, rather than just out-dribbling an opponent and playing individually". However, this brings lots of difficulties for new teams trying to catch up. As a result, the number of RoboCup MSL teams has not risen for years. How to draw more teams to participate in RoboCup MSL, and then contribute to it, is becoming a major problem. Facing this reality, the RoboCup MSL community has made efforts to reduce the difficulty of implementing RoboCup MSL teams. For example:
• The launch of the ROP (Robotic Open Platform) [20], which facilitates the release of hardware designs of robots and modules under an open hardware license. In the repository, the robots named Turtle of team Tech United have been fully released.
• Another remarkable proposal is to design and implement an affordable platform for RoboCup MSL, thus providing an easier starting point for any new team, i.e., the Turtle-5k project.2 With support from the Tech United team, the TURTLE-5k platform was developed based on the 2012 TURTLE robots, which won the RoboCup MSL World Championship. The Value Engineering method was employed to find the most costly parts, where the cost could be reduced.
• Real-time, efficient communication among robots is of key importance for cooperation. The CAMBADA team [21] proposed a TDMA (Time Division Multiple Access) based communication protocol, which is designed for real-time data sharing in a local network.
CAMBADA also implemented the communication protocol, which is named the real-time database tool (RTDB) [22, 23]. Furthermore, RTDB has been made open source, and several teams are using it for communication.
Although RoboCup MSL teams have made significant achievements, there are still some open problems and challenges in constructing a RoboCup MSL robot team to play with human beings:
• The robot platform should have good performance in critical aspects such as top speed and top acceleration, and be able to handle impacts. It should be easy to assemble and maintain.
• It is necessary to improve the stability of the electrical system, and the extension of sensors should be better supported.
• The robustness of the vision system should be improved to make it work reliably in both indoor and outdoor environments with highly dynamic lighting conditions.
• The software framework should support code reuse and data sharing as much as possible.

2 http://www.turtle5k.org/.

Fig. 2 The five generations of NuBot robots

3 The NuBot Multi-robot System

Our NuBot team3 was founded in 2004. As shown in Fig. 2, five generations of robots have been created since the very beginning. It can be seen that the NuBot robots have always used an omni-directional vision system [24, 25], and have been equipped with an omni-directional chassis since the second generation [26]. We first participated in the RoboCup simulation and small size leagues (SSL). Since 2006, we have been participating in RoboCup MSL actively, e.g., we have been to Bremen, Germany (2006), Atlanta, USA (2007), Suzhou, China (2008), Graz, Austria (2009), Singapore (2010), Eindhoven, Netherlands (2013), João Pessoa, Brazil (2014), and Hefei, China (2015) [27]. We have also been participating in the RoboCup ChinaOpen since it was launched in 2006. This chapter presents the software system and simulation environment for the last generation of robots.
3.1 Mechanical Platform

This section describes the mechanical platform of our NuBot soccer robots. When designing the robot platform, there are several criteria to be considered. First, it should comply with the rules and regulations of RoboCup MSL, namely its size, weight and safety concerns. Second, it should have good maneuverability in order to play against others. Lastly, because malfunctions or failures are unavoidable during the intensive and fierce RoboCup MSL games, the mechanical parts should embrace high modularity such that they are easy to assemble and maintain. To fulfil these requirements, the NuBot robots have been designed modularly, as shown in Fig. 3. Currently, the regular robot and the goalie robot are heterogeneous due to their different tasks. The regular robot should be able to do the same things as a human soccer player, such as moving, dribbling, passing and shooting. Therefore, the mechanical platform is subdivided into three main modules, as illustrated in Fig. 3a:
• the base frame;
• the ball-handling mechanism;
• the electromagnet shooting system.
For the goalie robot, the ball-handling mechanism, the electromagnet shooting device and the front vision system are removed; instead, two RGB-D cameras are integrated, as shown in Fig. 3b. Furthermore, to increase the side acceleration, the configuration of the omni-directional wheels has also been modified. Below, we give a brief introduction to each part.

3 http://nubot.trustie.net.

Fig. 3 a The regular robot. b The goalie robot

Base Frame The holonomic-wheeled platform, which is capable of carrying out rotation and translation simultaneously and independently, has been used by most RoboCup MSL teams [21, 28]. In the NuBot platform, our custom-designed omni-directional wheel has been utilized, as illustrated in Fig. 4a. Four such omni-directional wheels are uniformly arranged on the base, as shown in Fig. 4.
Despite the added weight and power consumption, the four-wheel-configuration platform can generate more traction force than a normal three-wheel-configuration one, and thus improves maneuverability. The goalie robot's main motion is moving sideways to defend against incoming balls; accordingly, the configuration of its omni-directional wheels has been modified as shown in Fig. 4c.
Ball-Handling Mechanism The ball-handling mechanism enables the robot to catch and dribble the ball. As illustrated in Fig. 5, it consists of two symmetrical assemblies, each containing a wheel, a DC motor, a set of transmission bevel gears, a linear displacement transducer and a support mechanism. Owing to gravity and the pressure from the support mechanism, the wheels are pressed against the ball when the ball is in. They can therefore apply varying friction forces to the ball and make it
Fig. 4 a The custom-designed omni-directional wheel. b Base for regular robots. c Base for the goalie robot
Fig. 5 The ball-handling mechanism of the NuBot
rotate in the desired direction and at the desired speed together with the soccer robot. During dribbling, the robot constantly adjusts the speed of the wheels in order to maintain a proper distance between the ball and the robot using a closed-loop control system. This control system takes the distance between the ball and the robot, measured by the linear displacement transducers, as the feedback signal. As the ball moves closer to the robot, the support mechanism rises and compresses the transducer; otherwise, the support mechanism falls and stretches the transducer. This system effectively solves the ball-handling control problem.
Electromagnet Shooting System The shooting system enables the robot to pass and score. There are currently three ways to construct a shooting actuator: spring mechanisms, pneumatic systems and solenoid electromagnets; an overview can be found in [29].
With spring mechanisms, the shooting power is quite hard to control. Pneumatic systems usually need a large gas tank to generate the high pressure required for strong shooting, and the number of shots generally depends on the size of the gas tank. As a result, most RoboCup MSL teams choose solenoid electromagnets, whose shooting force can be high and is easier to control. Our custom-designed solenoid electromagnet is depicted in Fig. 6. It consists of a solenoid, an electromagnet core, a shooting rod, and two linear actuators with potentiometers. The shooting rod can be adjusted in height to allow for different shooting modes,
Fig. 6 a The solenoid electromagnet. b Mechanism of the shooting system
namely flat shots for passing and lob shots for scoring. The two modes are realized by using the two linear actuators to move the hinge of the shooting rod to different positions. For more detail, please refer to [13].
3.2 Visual Perception System
For RoboCup MSL robots, the visual perception system is of great importance, as the robots must be fully autonomous. There are three kinds of visual perception sensors in our system: an omni-directional vision system, a front vision system and an RGB-D vision system. Each robot has an omni-directional vision system; each regular robot has an additional front vision system, while the goalie robot has dual RGB-D cameras, as shown in Fig. 3. Below, a short introduction to the vision sensors is given.
Omni-Directional Vision System Almost all RoboCup MSL teams use omni-directional vision systems, which are composed of a convex mirror and a camera pointing upward towards the mirror. The panoramic mirror has the greatest impact on the imaging quality, especially on the distortion of the panoramic image. Currently we use the mirror designed by the team Tech United Eindhoven [30], which has a relatively simple profile and, at the same time, is easy to calibrate.
Front Vision The front vision system is an auxiliary sensor for the regular robots: a low-cost USB camera facing down toward the ground, as shown in Fig. 3. With it, the robot can recognize and localize the ball with high accuracy when the ball is close to the robot. The position of the ball is estimated based on the pinhole camera projection model. This is of great significance for accurate ball catching and dribbling.
Dual RGB-D Cameras In current RoboCup MSL games, most goals are scored by lob shots, so accurate estimation of the shooting touchdown-point of the ball is fundamental for the goalie robot to defend against them. Although an object's 3D information can be acquired by combining the omni-directional vision system with a front vision system, the accuracy cannot be guaranteed because the imaging resolution of the omni-directional vision system is relatively low due to its large field of view (FoV). The Kinect 2 RGB-D camera streams color and depth information simultaneously at a frame rate of 30 fps, with a sensing range of up to 8 m. It is therefore an ideal sensor for obtaining 3D ball information for the goalie robot. Thus, our goalie robot is equipped with dual RGB-D cameras, as demonstrated in Fig. 3, to recognize and localize the ball, estimate its moving trace and predict the touchdown-point in 3D space.
3.3 Industrial Electrical System
As RoboCup MSL games become more and more competitive and fierce, the requirements on the robustness and reliability of the electronic system also increase. To improve the robustness of our robot control system, the electrical system of the NuBot robots is designed based on so-called PC-based control technology, whose block diagram is drawn in Fig. 7. As can be seen, the system uses an Ethernet-based field-bus system named EtherCAT [31, 32] to realize high-speed communication between the industrial PC and the connected modules.
Fig. 7 The electrical system based on PC control technology. The blue-dashed box represents the industrial PC and the Ethernet-based field-bus, which are the core modules of the PC-based control technology
All vision and control algorithms are processed on the industrial PC. The industrial electrical system has been used in the 2014 (Brazil) and 2015 (China) international RoboCup competitions and in the 2014 RoboCup China competition.
4 ROS-Based Software for NuBot Robots
Recent achievements in robotics allow autonomous mobile robots to play an increasingly important role in daily life. However, it is difficult to develop generic software for different robots; for example, it is usually difficult to reuse others' code. A solution named the Robot Operating System (ROS), launched by the Willow Garage company, provides a set of software libraries and tools for building robot applications across multiple computing platforms. ROS has many advantages: ease of use, high efficiency, cross-platform support, multiple programming languages, distributed computing and code reusability, and it is completely open source (BSD) and free to use. We also use ROS to build our NuBot software. Furthermore, our software is developed on Ubuntu, and it is also open source. For the current version, the operating system is Ubuntu 14.04, and the ROS version is Indigo. The software framework, as shown in Fig. 8, is divided into 5 main parts:
1. the Prosilica Camera node and the OmniVision node;
2. the UVC Camera node and the FrontVision node;
3. the NuBot Control node;
4. the NuBot HWControl node;
5. the RTDB and the WorldModel node.
Fig. 8 The software framework based on ROS
Fig. 9 The goalie software framework based on ROS
For the goalie robot, four Kinect-related nodes replace the FrontVision node and the UVC Camera node, i.e., a driver node and a 3D ball tracking node are required for each Kinect, as shown in Fig. 9. These nodes are described in the following sub-sections.
4.1 The OmniVision Node
Perception is the basis for realizing autonomous abilities in mobile robots, such as path planning, motion control, self-localization, action decision and cooperation. Omni-directional vision is one of the most important sensors for RoboCup MSL soccer robots. The image is captured and published by the Prosilica Camera node.4 It takes less than 30 ms to perform the computations of the algorithms below, so the OmniVision node runs in real-time.
Colour Segmentation and White Line-Points Detection The color lookup table is calibrated off-line. Because of its simplicity and low computational requirements, it is used to realize color segmentation. A typical panoramic image captured by our omni-directional vision system in a RoboCup MSL standard field is shown in Fig. 10a; the corresponding segmentation result is shown in Fig. 10b. As can be seen, this method can distinguish the ball, the green field, black obstacles and white lines in the color-coded environment. To detect white lines in the panoramic image, we search for significant color variations along a set of scan lines, exploiting the different color values of the white lines and the green field. As shown in Fig. 10b, these scan lines are radially arranged around the image center, and the red points represent the resulting white line-points.
4 http://wiki.ros.org/prosilica_camera.
Fig. 10 a The image captured by our omni-directional vision system. b The result of color segmentation for the image in a; for visualization purposes, the ball has been colored pink and the obstacles purple
Self-localization Localizing an autonomous mobile robot in a highly dynamic structured environment is still a challenge. A matching optimization algorithm is employed to realize global localization and pose tracking for our soccer robots accurately and in real-time. A brief introduction is given below; see [33] for more detail.
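To make the white line-point search concrete, the sketch below walks outward along radial scan lines over a label image and records green-to-white transitions, as described above. The label values, the list-of-lists grid and the function name are illustrative assumptions, not the team's actual implementation.

```python
import math

# Hypothetical labels produced by the color lookup table.
GREEN, WHITE = 1, 2

def white_line_points(labels, center, num_rays=60, max_r=None):
    """Scan radially from the image center and collect pixels where the
    color changes from field green to line white (cf. Fig. 10b)."""
    h, w = len(labels), len(labels[0])
    cx, cy = center
    max_r = max_r or min(h, w) // 2
    points = []
    for i in range(num_rays):
        theta = 2 * math.pi * i / num_rays
        prev = None
        for r in range(1, max_r):
            x = int(cx + r * math.cos(theta))
            y = int(cy + r * math.sin(theta))
            if not (0 <= y < h and 0 <= x < w):
                break
            cur = labels[y][x]
            if prev == GREEN and cur == WHITE:
                # Significant green-to-white variation: a line point.
                points.append((x, y))
            prev = cur
    return points
```

The resulting point set is what the self-localization step described next would match against the known field model.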
As an off-line preprocessing step, 315 samples, located uniformly in the field, are acquired as the robot's candidate positions. For real-time global localization, the orientation is obtained from an MTi motion tracker. The matching optimization localization algorithm is then used to determine the real pose among the samples. Once globally localized, the pose-tracking phase starts: encoder-based odometry provides a coarse pose, and a Kalman filter fuses the odometry with the matching optimization result. A typical localization result is illustrated in Fig. 11. During the experiment, the robot was manually pushed along straight lines on the field, shown as black lines in the figure. The red traces depict the localization result. The mean position error is less than 6 cm.
Fig. 11 The robot's self-localization results
4.2 The FrontVision Node and the Kinect Node
The FrontVision node processes the perspective image captured and published by the UVC Camera node,5 and provides more accurate ball position information when the ball is in the near front of a regular robot. The node detects the ball using a color segmentation algorithm and a region growing algorithm similar to those in the OmniVision node. We can then estimate the ball position based on the following assumptions. First, the ball is located on the ground. Second, the pinhole camera model is adopted, with the camera's intrinsic and extrinsic parameters calibrated off-line. Lastly, the height of the camera above the ground and its view direction are known.
3D information about the ball is of great significance for the goalie robot when intercepting lob shots. However, it is difficult to obtain depth information using the front vision system and the omni-directional vision system. Therefore, a dual RGB-D camera setup is employed to recognize and localize the ball and estimate its moving trace in 3D space.
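Under the three assumptions listed for the FrontVision node, the ball's ground position can be sketched as a ray-ground intersection. The camera convention (z forward, y down, camera pitched down by `tilt`) and all parameter names here are our own illustrative choices, not the team's code.

```python
import math

def ball_ground_position(u, v, fx, fy, cx, cy, cam_height, tilt):
    """Estimate the ball position on the floor from its pixel (u, v),
    using the pinhole model: cast the viewing ray through the pixel and
    intersect it with the ground plane (ball assumed on the ground).
    tilt is the camera's downward pitch in radians."""
    # Normalized ray in the camera frame (z forward, y down, x right).
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Rotate the ray by the camera pitch into the robot frame.
    fwd = math.cos(tilt) - y * math.sin(tilt)    # forward component
    down = math.sin(tilt) + y * math.cos(tilt)   # downward component
    if down <= 0:
        return None                              # ray never hits the floor
    t = cam_height / down                        # scale to the ground plane
    return (t * fwd, t * x)                      # (forward, rightward) in m
```

For example, with the ball at the principal point, a 45-degree pitch and a 0.5 m camera height, the estimate is 0.5 m straight ahead.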
The OpenNI RGB-D camera driver, which has been integrated into ROS, is employed to obtain point cloud data in the Kinect driver node. Basic point cloud processing, such as noise filtering and segmentation, is based on algorithms from the Point Cloud Library (PCL) [34]. As shown in Fig. 12, in the 3D ball processing node, the same color segmentation algorithm as in the OmniVision node is used to obtain candidate ball regions. Then, the random sample consensus algorithm (RANSAC) [35] is used to fit a spherical model to the shape of the 3D candidate ball regions. With the proposed method, only a small number of candidate ball regions need to be fitted. Lastly, to intercept the ball, the goalie's 3D ball processing node estimates the ball's 3D trajectory as a parabolic curve and predicts the touchdown-point in 3D space, using an algorithm similar to that in [36]. In total, the node takes about 30–40 ms to process a frame of RGB-D data, and therefore meets the real-time requirement of highly dynamic RoboCup MSL games.
5 http://wiki.ros.org/uvc_camera.
Fig. 12 The 3D ball processing data flow
4.3 The NuBot Control Node
At the top level of the controllers, the NuBot soccer robots adopt a three-layer hierarchical structure. To be specific, the NuBot Control node contains strategy, path planning and trajectory tracking. The design of the soccer robots aims to fulfil all tasks completely autonomously and cooperatively. Therefore, multi-robot cooperation plays a central role in RoboCup MSL. To allocate the roles of the robots and initiate cooperation, a group intelligence scheme is proposed to imitate the captain or decision-maker in a competition; see [37] for details. In our scheme, a hybrid distributed role allocation method is employed, including role evaluation, role assignment and dynamic reassignment. A soccer robot can select a proper role from the following set: attacker, defender and others.
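The touchdown-point prediction described in Sect. 4.2 can be sketched as follows. With gravity known, the vertical model z(t) = z0 + vz·t − G·t²/2 is linear in its unknowns, so a plain least-squares fit suffices; the RANSAC ball detection itself is omitted here, and the sample format and function names are illustrative.

```python
G = 9.81  # gravitational acceleration, drag neglected (parabola assumption)

def predict_touchdown(samples):
    """Predict where a lob-shot ball lands. `samples` is a list of
    (t, x, y, z) observations from the dual RGB-D cameras. Since
    z + 0.5*G*t^2 = z0 + vz*t is linear in t, an ordinary line fit
    recovers z0 and vz."""
    ts = [s[0] for s in samples]
    ws = [s[3] + 0.5 * G * s[0] ** 2 for s in samples]
    n = len(ts)
    tm, wm = sum(ts) / n, sum(ws) / n
    stt = sum((t - tm) ** 2 for t in ts)
    vz = sum((t - tm) * (w - wm) for t, w in zip(ts, ws)) / stt
    z0 = wm - vz * tm
    # Touchdown: z0 + vz*t - 0.5*G*t^2 = 0, take the later root.
    t_land = (vz + (vz ** 2 + 2 * G * z0) ** 0.5) / G

    def at_landing(vals):
        # Horizontal motion assumed uniform: line fit, evaluated at t_land.
        vm = sum(vals) / n
        k = sum((t - tm) * (v - vm) for t, v in zip(ts, vals)) / stt
        return vm - k * tm + k * t_land

    return at_landing([s[1] for s in samples]), \
           at_landing([s[2] for s in samples]), t_land
```

The goalie would then move to the predicted (x, y) before t_land expires.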
Once the roles are determined, each robot performs the corresponding tasks individually and autonomously, such as moving, defending, passing, catching and dribbling.
Path planning and obstacle avoidance are still quite a challenge in highly dynamic competition environments. To deal with this, an online path planning method based on the subtargets method and B-spline curves is proposed [38]. Benefiting from the proposed method, we can obtain a smooth path and realize real-time obstacle avoidance at a relatively high speed. The method can be summarized as follows:
• generating via-points iteratively using the subtargets algorithm;
• obtaining a smooth path between via-points using a B-spline curve; and
• optimizing the planned path subject to actual constraints, such as the maximal size of an obstacle, the robot velocity and so on.
To track the planned path at high speed with a quick dynamic response and low tracking error, Model Predictive Control (MPC) is utilized, as MPC can easily take the constraints into account and use future information to optimize the current output [26]. First, a linear full-dynamics error model based on the kinematics model of the soccer robot is obtained. Then, MPC is used to design a control law that satisfies both the kinematic and kinetic constraints. Meanwhile, Laguerre networks are used to design the MPC controller in order to reduce the computational time for online application. As illustrated in Fig. 13, the robot can track the path with a quick dynamic response and low tracking errors under our proposed MPC control law.
Fig. 13 A typical path tracking result of the proposed controller. a The robot starts at control point p0 to track the given trajectory, and finally stops at point p3. The reference trajectory and the real trajectory are shown as a red curve and a blue curve, respectively. b The speed during tracking, which is bounded at 3.25 m/s. c The tracking errors
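The smoothing step of the path planner above — turning subtarget via-points into a smooth curve — can be sketched with a uniform cubic B-spline. This shows only the spline evaluation; the subtargets generation and the constraint optimization are omitted, and the endpoint-duplication trick is an illustrative choice.

```python
def bspline_path(via_points, samples_per_seg=10):
    """Sample a smooth 2D path through via-points using a uniform
    cubic B-spline (standard basis blending per segment)."""
    # Duplicate endpoints so the curve starts and ends near them.
    p = [via_points[0]] * 2 + list(via_points) + [via_points[-1]] * 2
    path = []
    for i in range(len(p) - 3):
        for s in range(samples_per_seg):
            u = s / samples_per_seg
            # Cubic B-spline basis functions (they sum to 1).
            b0 = (1 - u) ** 3 / 6.0
            b1 = (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0
            b2 = (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0
            b3 = u ** 3 / 6.0
            x = b0 * p[i][0] + b1 * p[i+1][0] + b2 * p[i+2][0] + b3 * p[i+3][0]
            y = b0 * p[i][1] + b1 * p[i+1][1] + b2 * p[i+2][1] + b3 * p[i+3][1]
            path.append((x, y))
    return path
```

Because the basis functions sum to one, collinear via-points yield a straight sampled path, while via-points around an obstacle yield a smooth detour.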
4.4 The NuBot HWControl Node
At the bottom level of the controllers, the NuBot HWControl node performs four main tasks:
1. controlling the four motors of the base frame;
2. obtaining odometry information;
3. controlling the ball-handling system; and
4. actuating the shooting system.
The ROS EtherCAT library for our robots was developed to exchange information between the industrial PC and the actuators and sensors, e.g., the AD module, I/O module, Elmo controllers, motor encoders and linear displacement sensors. The speed control commands calculated in the NuBot Control node are sent to the four Elmo motor controllers of the base frame at 33 Hz to realize robot motion control. Meanwhile, the motor encoder data are used to calculate odometry information, which is published to the OmniVision node. For the third task, high control accuracy and high stability are achieved by feedback-plus-feedforward PD control of the active ball-handling system. The relative distance between the robot and the ball, measured with the two linear displacement sensors, is used as the feedback signal, and the robot velocity is used as the feedforward signal. For the last task, the shooting system first needs to be calibrated off-line. During competitions, the node adjusts the hinge of the shooting rod to different heights according to the command received from the NuBot Control node: flat shooting or lob shooting. Furthermore, it determines the shooting strength according to the calibration results and kicks the ball out.
4.5 The WorldModel Node
The real-time database tool (RTDB) [22, 23] developed by the CAMBADA team is used to realize robot-to-robot communication. The information about the ball, the obstacles and the robot itself provided by the OmniVision node, the Kinect node and the FrontVision node is combined with the data communicated from teammates to acquire a unified world representation in the WorldModel node.
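The feedback-plus-feedforward PD loop for the ball-handling system described in Sect. 4.4 can be sketched as below. The gains, the target distance and the class interface are illustrative assumptions; only the signal roles (measured robot-ball distance as feedback, robot velocity as feedforward) come from the text.

```python
class BallHandleController:
    """PD feedback on the robot-ball distance error plus a velocity
    feedforward term, so the dribbling wheels pre-compensate the
    robot's own motion (gains are illustrative)."""

    def __init__(self, kp=2.0, kd=0.1, kff=1.0, target=0.05):
        self.kp, self.kd, self.kff = kp, kd, kff
        self.target = target      # desired robot-ball distance (m)
        self.prev_err = 0.0

    def update(self, distance, robot_vel, dt):
        """distance: from the linear displacement sensors (m);
        robot_vel: current robot speed (m/s); returns a wheel command."""
        err = distance - self.target
        derr = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.kd * derr + self.kff * robot_vel
```

At the target distance with the robot at rest the command is zero; when the robot accelerates, the feedforward term spins the wheels up before any distance error appears.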
The information from a robot's own sensors and from other robots is of great significance for single-robot motion and multi-robot cooperation. For example, every robot fuses all of the obtained ball information, and only the robot with the shortest distance to the ball should catch it, while the others move to appropriate positions. Each robot obtains accurate positions of the obstacles and learns the positions of its teammates through communication; it can thus realize accurate teammate and opponent identification, which is important for our robots when performing man-to-man defense.
5 Gazebo Based Simulation System
In this chapter, the open source simulator Gazebo [8] is employed to simulate the motions of our soccer robots. The main reason for using Gazebo is that it offers a convenient interface with ROS, which we used to construct the software for our real robots; see Sect. 4 for detail. In addition, Gazebo features 3D simulation, multiple physics engines, high-fidelity models, a large user base, etc. Therefore,
614 J. Xiao et al.
Table 1 Properties of the robot model
Mass: 31 kg
Moment of inertia: Izz = 2.86 kg·m²; Ixx = Iyy = Ixy = Ixz = Iyz = 0
Friction coefficient: 0.1
Velocity decay: linear: 0; angular: 0
Model plugin: nubot_gazebo
the simulation system based on ROS and Gazebo can take advantage of many state-of-the-art robotics algorithms and useful debugging tools built into ROS. It can also benefit from, and contribute to, the active development communities of ROS and Gazebo in terms of code reuse and project co-development. The remainder of this section is organized as follows. Section 5.1 introduces the creation of the simulation models and a simulation world. Section 5.2 presents the realization of a single robot's basic motions by a Gazebo model plugin. Furthermore, in Sect. 5.3, the model plugin is integrated with the real robot code so that several robot models are able to reproduce the real robots' behavior. Finally, in Sect.
5.4, three tests are conducted to validate the effectiveness of the simulation system.
5.1 Simulation Models and a Simulation World
Gazebo models, which consist of links, joints (optional), plugins (optional), etc., are specified by SDF (Simulation Description Format)6 files. A simulation world, which determines the lighting, simulation step size, simulation frequency and other simulation properties, is specified by a world file.
Simulation Models The models used in this simulation system include the NuBot robot model, the soccer field model and the soccer ball model.
• Robot model: It is composed of a single chassis link without any joints. Table 1 lists some important properties specified in the robot model SDF file. Two further important properties, mesh and collision, used for visualization and collision detection respectively, are illustrated in Fig. 14. They were drawn with the 3D drawing tool SketchUp.7 Note that the collision element is not a duplicate of the model's exterior but a simplified cylinder with the same base shape and height as the model's exterior. Furthermore, we do not model the real robot's physical mechanisms, such as the omni-directional wheels, ball-dribbling, ball-kicking and omni-vision camera mechanisms; therefore, this model does not require any joints. The simplification is reasonable given the simulation purpose: to test multi-robot collaboration strategies and algorithms. The emphasis of the simulation system is therefore on the final effect of the robots' basic motions, not on the complicated physical processes involved. The capabilities of the physical mechanisms are realized by a Gazebo model plugin that will be discussed in Sect. 5.2.
6 http://sdformat.org/.
7 http://www.sketchup.com/.
Fig. 14 Mesh and collision properties of the robot model. Left mesh property; right collision property
Table 2 Properties of the simulation world
Physics engine: Open Dynamics Engine
Max step size: 0.007 s
Gravity: −9.8 m/s²
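A minimal SDF sketch of the robot model just described might look as follows. The inertial values and plugin name are taken from Table 1; the cylinder dimensions, mesh URI and file layout are illustrative placeholders, not the team's actual model file.

```xml
<!-- Sketch of a single-link robot model per Table 1 (values from the
     table; geometry sizes and mesh path are illustrative). -->
<sdf version="1.5">
  <model name="nubot">
    <link name="chassis">
      <inertial>
        <mass>31</mass>
        <inertia>
          <ixx>0</ixx><iyy>0</iyy><izz>2.86</izz>
          <ixy>0</ixy><ixz>0</ixz><iyz>0</iyz>
        </inertia>
      </inertial>
      <!-- Simplified cylinder collision, as in Fig. 14 (right). -->
      <collision name="base">
        <geometry>
          <cylinder><radius>0.25</radius><length>0.8</length></cylinder>
        </geometry>
      </collision>
      <!-- Detailed mesh for visualization, as in Fig. 14 (left). -->
      <visual name="body">
        <geometry>
          <mesh><uri>model://nubot/meshes/nubot.dae</uri></mesh>
        </geometry>
      </visual>
    </link>
    <plugin name="nubot_gazebo" filename="libnubot_gazebo.so"/>
  </model>
</sdf>
```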
• Soccer field model: Images of the goal net, field ground and field lines, together with OGRE material scripts,8 are used to construct the field model. The field is scaled according to the 2015 RoboCup MSL rules. The collision elements are composed of each part's corresponding geometry.
• Soccer ball model: The soccer ball model is built with the attributes of a FIFA (Fédération Internationale de Football Association) standard size 5 soccer ball, as played in RoboCup MSL. The pressure inside the ball is neglected, and the collision element is a sphere of the same size as the soccer ball.
The Simulation World The world file specifies the simulation background, lighting, camera pose, physics engine, simulation step size, etc. Some important properties of the simulation world are listed in Table 2. Finally, a simulation world with three robots and a soccer ball is created; see Fig. 15.
8 http://www.ogre3d.org/.
Fig. 15 The simulation world, with three robots playing a ball
5.2 Basic Motions Realization
To realize a single robot's basic motions, a Gazebo model plugin named "nubot_gazebo" was written. A model plugin is a shared library that is attached to a specific model and inserted into the simulation. It can obtain and modify the states of all the models in a simulation world.
Overview of the "nubot_gazebo" Plugin When the "nubot_gazebo" plugin is loaded at the beginning of a simulation, its tasks include:
• Obtaining parameters such as the soccer ball model's name, the ball-dribbling distance threshold and the ball-dribbling angle threshold from the ROS parameter server.
• Setting up ROS publishers, subscribers, service servers and a dynamic reconfigure server.
• Binding the model plugin's update function, which runs in every simulation iteration.
The model plugin starts running automatically when a robot model is spawned. For example, when the robot model "bot1" is spawned, a computation graph shown in Fig.
16 is created, which is visualized by the ROS tool rqt_graph. As can be seen, there is only one node, called "/gazebo", which publishes (represented by an arrow pointing outward) and subscribes to (represented by an arrow pointing inward) several topics enclosed by small rectangles. The topics inside the "gazebo" namespace are created by a ROS package called gazebo_ros_pkgs, which provides wrappers around stand-alone Gazebo and thus enables Gazebo to make full use of ROS messages, services and dynamic reconfigure. Those inside the "bot1" namespace are created by the model plugin. All the topic names are self-explanatory. For instance, messages on the /bot1/nubotcontrol/velcmd topic are used to control the robot model's velocity.
Fig. 16 The computation graph of the model plugin
Although the physical mechanism of the omni-vision camera is not simulated, the robot model is still able to obtain information about other models' positions and velocities by subscribing to the topic /gazebo/model_states. In addition, ball-dribbling and ball-kicking are realized by calling the corresponding ROS services. They are discussed in the following part.
Motion Realization A single robot's basic motions include omni-directional locomotion, ball-dribbling and ball-kicking.
• Omni-directional locomotion: Gazebo's built-in functions SetLinearVel and SetAngularVel are used to make the robot model move in any direction, given a translation vector and a rotation vector respectively.
• Ball-dribbling: If the distance between the robot and the soccer ball is within a distance threshold, and the angle from the robot's front direction to the ball-viewing direction is within an angle threshold, then the dribble condition is satisfied and the robot is able to dribble the ball. Under this condition, to realize ball-dribbling, the soccer ball's pose is directly and continuously set by Gazebo's built-in function so as to continually satisfy the dribble condition.
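The dribble condition just stated — ball within a distance threshold and within an angle threshold of the robot's front direction — can be sketched as a small predicate. The threshold values here are illustrative; the real ones are read from the ROS parameter server, as noted earlier.

```python
import math

def can_dribble(robot_pos, robot_yaw, ball_pos,
                dist_thresh=0.4, angle_thresh=math.radians(30)):
    """True iff the ball satisfies both parts of the dribble condition
    (distance AND bearing relative to the robot's front direction)."""
    dx = ball_pos[0] - robot_pos[0]
    dy = ball_pos[1] - robot_pos[1]
    dist = math.hypot(dx, dy)
    # Angle from the robot front to the ball-viewing direction,
    # wrapped to (-pi, pi].
    ang = math.atan2(dy, dx) - robot_yaw
    ang = math.atan2(math.sin(ang), math.cos(ang))
    return dist <= dist_thresh and abs(ang) <= angle_thresh
```

While this predicate holds, the plugin keeps resetting the ball pose in front of the robot each iteration, which is what makes the ball appear "dribbled".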
• Ball-kicking: Similarly, ball-kicking is realized by giving the soccer ball a specific velocity at the start of the kicking process. There are two ways of kicking, i.e., the ground pass and the lob shot. For the ground pass, the soccer ball does not lose contact with the ground, so its initial velocity vector is calculated in the field plane. For the lob shot, the soccer ball is kicked into the air, so its speed in the up-direction must also be taken into account. Since air resistance is negligible compared with the effect of gravity, it is reasonable to assume that the ball's flight path is a parabola.
Single Robot Motions Test To test a single robot's basic motions, four behavior states are defined: CHASE_BALL, DRIBBLE_BALL (including two sub-states, MOVE_BALL and ROTATE_BALL), KICK_BALL, and RESET. The robot model performs these motions following the behavior state transfer graph shown in Fig. 17. The test results, shown in Fig. 18, prove that the "nubot_gazebo" model plugin realizes the basic motions successfully.
Fig. 17 Single robot behavior states transfer graph
Fig. 18 Single robot simulation result. a Initial state; b CHASE_BALL state; c DRIBBLE_BALL state; d KICK_BALL state
5.3 Model Plugin and Real Robot Code Integration
It is preferable to use the same interface to control the real robots and the simulated robots. In this case, multi-robot collaboration algorithms can be evaluated in the simulation system, and the implementation can be applied directly to the real robots without any modification. In other words, it is important to integrate the model plugin with the real robot code. In the real robot code, there are eight nodes in total (see Fig. 8). Among them, "world_model" and "nubot_control" are closely related to multi-robot collaboration and cooperation.
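Under the parabola assumption used for the lob shot above, the initial velocity the plugin must give the ball can be computed from the projectile range formula R = v²·sin(2θ)/G. The fixed 45-degree launch angle and the function name are illustrative choices, not the plugin's actual code.

```python
import math

G = 9.81  # gravity; air resistance neglected, per the parabola assumption

def lob_shot_velocity(dx, dy, angle=math.radians(45)):
    """Initial velocity (vx, vy, vz) so the ball lands at horizontal
    offset (dx, dy) != (0, 0) when launched at the given angle."""
    r = math.hypot(dx, dy)
    v = math.sqrt(r * G / math.sin(2 * angle))  # from R = v^2 sin(2a)/G
    vxy = v * math.cos(angle)                   # horizontal speed
    return (vxy * dx / r, vxy * dy / r, v * math.sin(angle))
```

Handing this vector to the ball model at the start of the kick yields a flight that touches down at the requested offset, consistent with the flight time t = 2·vz/G.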
In addition, there is a coach program which receives and visualizes information from each robot and sends basic commands such as game-start, game-stop, kick-off and corner-ball via RTDB. To integrate the real robot code with the model plugin, the remaining five nodes, which are related to hardware, should be replaced by the model plugin. This replacement requires an appropriate interface, in other words, correct ROS message passing and service calling between them. The data flow of the integration of the real robot code and the model plugin is shown in Fig. 19. There are three noticeable changes, described as follows.
• The Coach communicates with each robot's "world_model" node via ROS messages: for convenience and reliability, the communication between the Coach and "world_model" no longer requires RTDB in the simulation scenario. Instead, they send and receive ROS messages on one local computer. In particular,
Fig. 19 The data flow graph of the integration of the real robot code and the model plugin
each robot receives messages about the game status from the Coach. However, the Coach only receives the world model information from one selected robot. Because every robot's world model information is accurate and shared in the simulation environment, there is no need for the Coach to obtain the other robots' world model information.
• An intermediary node (simulation interface) for communication among robots: in the real-world scenario, robots share their own strategy information with their teammates via RTDB. In simulation, however, it is neither practical nor necessary to use RTDB as the communication mechanism, since all robots are simulated on one computer. Therefore, an intermediary node (the simulation interface) subscribes to the collaboration strategy messages from all robots and, in return, publishes new messages containing all the strategy information.
In this way, all the robots are able to share the information without RTDB network communication. In addition, topic-name prefixing is employed in simulation to distinguish the different robots. Because all the robot models use the same model plugin and are spawned into one simulation world, they cannot distinguish their own messages and services from those of others. It is therefore necessary to use each model's name as a prefix to its topic and service names, so that the robots subscribe to their own topics and respond to their own services. These prefixes, i.e., the model names, are obtained by a bash script, which guarantees that each name is mapped to the appropriate robot model, as shown in Fig. 20. The bash script also starts the simulation interface node and spawns the models in Gazebo. It works as a mapping mechanism and a bridge between the separate components. This helps isolate the real robot code from the simulation components and so improves the adaptability of the simulation system; in other words, different robot code can easily be tested in this environment, since it does not depend on the simulation.
• Gaussian noise: Gaussian noise is added to the position and velocity information obtained by the robot model to mimic the real-world situation.
Fig. 20 The functions of the bash script
Fig. 21 The computation graph of the simulation with two robot models
Finally, two robot models, bot1 and bot2, are spawned into a simulation world; the corresponding computation graph is shown in Fig. 21. Note that all the model plugins are embedded in the /gazebo node and that the topic names are all prefixed by the corresponding model names, due to the mapping function of the bash script discussed before.
5.4 Simulation of a Match
It is also possible to simulate a match between two simulated teams, which can be used to evaluate new collaboration algorithms.
Furthermore, machine learning algorithms could be used to train the simulated robots during the simulated match, and the trained results can then be applied to the corresponding real robots. Figure 22 shows the overall structure of the setup. In total, three computers are used to simulate a soccer match between two robot teams. One computer is used for Gazebo visualization, with model plugins simulating the motions of each robot. The other two computers run the real robot code and the corresponding Coach programs. The total computation is distributed across the three computers; therefore, the simulation is fast enough to test the multi-robot coordination strategies in real time. In addition, there is only one ROS master, on computer A, which registers the nodes, services, topics and other ROS resources from all three computers. Finally, the simulation of a match (without goalies) is shown in Fig. 23.
Fig. 22 The overall structure of the configuration of two simulation teams
Fig. 23 The simulation of a soccer match by two robot teams
6 Single Robot Simulation Tutorial
Note that the single_nubot_gazebo package can simulate only ONE robot soccer player for RoboCup MSL. It is designed to demonstrate how the simulation system works, but it can be adapted for other purposes. If you want to test multi-robot cooperation strategies, please refer to the gazebo_visual package; the compilation steps in this tutorial still apply. For further information, please refer to our previous paper [14].
6.1 Get the Package
If you have git installed, you can use the command below to download the package:
$ git clone git@github.com:nubot-nudt/gazebo_visual.git
Alternatively, you can go to https://github.com/nubot-nudt/single_nubot_gazebo and download the package in zip format, then extract it on your computer.
6.2 Environment Configuration

The recommended operating environment is Ubuntu 14.04 and ROS Jade with Gazebo included. For other operating environments, please refer to the readme file at https://github.com/nubot-nudt/single_nubot_gazebo. ROS Jade ships with gazebo_ros_pkgs, so you do not have to install the package again. However, the following steps should be done to fix a bug in ROS Jade related to Gazebo:

$ sudo gedit /opt/ros/jade/lib/gazebo_ros/gazebo

In this file, go to line 24 and delete the last '/', i.e.,

setup_path=$(pkg-config --variable=prefix gazebo)/share/gazebo/

is replaced with

setup_path=$(pkg-config --variable=prefix gazebo)/share/gazebo

After these steps, try one of the commands below to check whether the fix was successful:

$ rosrun gazebo_ros gazebo

or

$ roslaunch gazebo_ros empty_world.launch

If either one runs successfully, you are ready for the following steps.

6.3 Package Compiling

(1) Go to the package root directory (single_nubot_gazebo), e.g.

$ cd ~/single_nubot_gazebo

(2) If you already have CMakeLists.txt in the src folder, you can skip this step. If not, execute the commands below:

$ cd src
$ catkin_init_workspace
$ cd ..

(3) Configure the package using the command below. In this step, you may encounter some errors related to Git; if you did not use Git, just ignore them.

$ ./configure

(4) Compile the package. The simulation system is ready once the compilation completes.

$ catkin_make

6.4 Package Overview

The robot movement is realized by a Gazebo model plugin called NubotGazebo, generated from the source files nubot_gazebo.cc and nubot_gazebo.hh. The essential part of the plugin is the realization of three motions: omnidirectional locomotion, ball-dribbling and ball-kicking.
Basically, this plugin subscribes to the topic /nubotcontrol/velcmd for omnidirectional movement, and provides the services /BallHandle and /Shoot for ball-dribbling and ball-kicking, respectively. You can customize this code for your robot based on these messages and services as a convenient interface. The types and definitions of the topics and services are listed in Table 3.

Table 3 Topics and services

  Topic/Service          Type                      Definition
  /nubotcontrol/velcmd   nubot_common/VelCmd       float32 Vx
                                                   float32 Vy
                                                   float32 w
  /BallHandle            nubot_common/BallHandle   int64 enable
                                                   ---
                                                   int64 BallIsHolding
  /Shoot                 nubot_common/Shoot        int64 strength
                                                   int64 ShootPos
                                                   ---
                                                   int64 ShootIsDone

For the /BallHandle service, when enable equals a nonzero number, a dribble request is sent. If the robot meets the conditions to dribble the ball, the service response BallIsHolding is true. For the /Shoot service, when ShootPos equals -1, the kick is a ground pass; in this case, strength is the initial speed you would like the soccer ball to have. When ShootPos equals 1, the kick is a lob shot; in this case, strength is ignored, since it is calculated by the Gazebo plugin automatically and the soccer ball follows a parabolic path into the goal area (only if the robot heads towards the goal area). If the robot successfully kicks the ball, even if it fails to score, the service response ShootIsDone is true.

There are three ways for a robot to dribble a ball:

(a) Setting the ball pose continually: this is the most accurate method; the robot would hardly lose control of the ball, but the visual effect is not very good (the ball does not rotate).
(b) Setting the ball secant velocity: this is less accurate than method (a) but more accurate than method (c).
(c) Setting the ball tangential velocity: this is the least accurate. If the robot moves fast, e.g. at 3 m/s, it would probably lose control of the ball. However, this method achieves the best visual effect under low-speed conditions.
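As an illustration of the lob-shot physics mentioned above, the launch speed needed for a ball to travel a given horizontal distance on a parabolic trajectory can be computed from standard projectile motion. This is a generic sketch, not the actual formula inside the NubotGazebo plugin; the 45° launch angle and flat-ground, no-drag assumptions are ours:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def lob_launch_speed(distance, angle_deg=45.0):
    """Initial speed so that a projectile launched at angle_deg lands
    `distance` metres away on flat ground (air drag neglected).

    Range formula: d = v^2 * sin(2*theta) / g  =>  v = sqrt(g*d / sin(2*theta))
    """
    theta = math.radians(angle_deg)
    return math.sqrt(G * distance / math.sin(2.0 * theta))

# e.g. a lob over roughly 6 m at 45 degrees
v = lob_launch_speed(6.0)
```

A plugin computing `strength` automatically would solve an equation of this kind from the robot-to-goal distance before applying the impulse to the ball.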
The package single_nubot_gazebo uses method (c) for a better visual effect, while the package nubot_gazebo uses method (a) for better control of the soccer ball.

6.5 Single Robot Automatic Movement

The robot will perform motions according to the state transfer graph shown in Fig. 17, following the steps below:

(1) Go to the package root directory.
(2) Source the setup.bash file:

$ source devel/setup.bash

(3) Use roslaunch to load the simulation world:

$ roslaunch nubot_gazebo sdf_nubot.launch

Note: step 2 has to be performed every time a new terminal is opened. Alternatively, this command can be written into the ~/.bashrc file so that step 2 is no longer required for new terminals.

Finally, the robot rotates and translates with a given trajectory, i.e., it accelerates at a constant acceleration and stays at a constant speed after reaching the maximum velocity. You can click Edit->Reset World in the menu (or press ctrl-shift-r) to reset the simulation world so that the robot performs the basic motions again. When the robot reaches its final state (HOME), its motion can be controlled with the keyboard via

$ rosrun nubot_gazebo nubot_teleop_keyboard

You can also run

$ rqt_graph

to see the data flow chart of messages/topics.

6.6 NubotGazebo API

For detailed information and usage of the NubotGazebo class, please refer to the doc/ folder.

6.7 How You Could Use It to Do More Stuff

The main purpose of the simulation system is to test multi-robot collaboration algorithms. As a precondition, the users have to know how to control the movement of each robot in the simulation. The topic publishing and service calling can be inferred by reading the source of the keyboard controlling node. In a word, controlling the movement of the robots requires publishing velocity commands on the topic /nubotcontrol/velcmd.
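A keyboard teleoperation node like nubot_teleop_keyboard essentially maps key presses to the three fields of the VelCmd message (Vx, Vy, w). A ROS-independent sketch of that mapping; the key bindings and speed values here are illustrative assumptions, not those of the actual node:

```python
# Illustrative key -> (Vx, Vy, w) bindings: linear speeds in m/s,
# angular speed in rad/s, matching the VelCmd fields in Table 3.
KEY_BINDINGS = {
    'i': (1.0, 0.0, 0.0),   # forward
    ',': (-1.0, 0.0, 0.0),  # backward
    'j': (0.0, 1.0, 0.0),   # strafe left
    'l': (0.0, -1.0, 0.0),  # strafe right
    'u': (0.0, 0.0, 1.0),   # rotate counter-clockwise
    'o': (0.0, 0.0, -1.0),  # rotate clockwise
}

def key_to_velcmd(key):
    """Return a VelCmd-like dict for a pressed key; stop on unknown keys."""
    vx, vy, w = KEY_BINDINGS.get(key, (0.0, 0.0, 0.0))
    return {'Vx': vx, 'Vy': vy, 'w': w}
```

In a real node, the returned dictionary would be copied into a nubot_common/VelCmd message and published on /nubotcontrol/velcmd; any other controller can drive the robot the same way.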
If the robot is close enough to the ball, dribble the ball by calling the ROS service named /BallHandle, and kick the ball by calling the service named /Shoot. The types and definitions of these topics and services are presented in Table 3.

7 Multi Robot Simulation Tutorial

7.1 Package Overview

The following three packages should be used together to simulate multiple robots. The nubot_ws and coach4sim packages can be downloaded at https://github.com/nubot-nudt/nubot_ws and https://github.com/nubot-nudt/coach4sim, respectively.

  Package        Description
  gazebo_visual  Robot simulation and visualization
  nubot_ws       Robot controlling
  coach4sim      Game command sending

Qt has to be installed in order to use coach4sim. However, for those who do not want to install Qt, a solution is to use the ROS command line tools for sending game commands:

$ rostopic pub -r 1 /nubot/receive_from_coach nubot_common/CoachInfo "MatchMode: 10
MatchType: 0"

In the command, MatchMode is the current game command and MatchType is the previous game command. The coding of the game commands is in core.hpp. For quick reference:

enum MatchMode {
  STOPROBOT = 0,
  OUR_KICKOFF = 1,
  OPP_KICKOFF = 2,
  OUR_THROWIN = 3,
  OPP_THROWIN = 4,
  OUR_PENALTY = 5,
  OPP_PENALTY = 6,
  OUR_GOALKICK = 7,
  OPP_GOALKICK = 8,
  OUR_CORNERKICK = 9,
  OPP_CORNERKICK = 10,
  OUR_FREEKICK = 11,
  OPP_FREEKICK = 12,
  DROPBALL = 13,
  STARTROBOT = 15,
  PARKINGROBOT = 25,
  TEST = 27
};

The robot movement is realized by a Gazebo model plugin called NubotGazebo, generated from the source files nubot_gazebo.cc and nubot_gazebo.hh. Basically, the essential part of the plugin is the realization of the basic motions: omni-directional locomotion, ball-dribbling and ball-kicking.
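When writing a coach or test scripts in Python, it can help to mirror the C++ MatchMode coding above so that the published integers stay readable. The names and values below are taken directly from the core.hpp listing; only the Python wrapper itself is our addition:

```python
from enum import IntEnum

class MatchMode(IntEnum):
    """Game commands as coded in core.hpp (note the gaps at 14 and 16-24)."""
    STOPROBOT = 0
    OUR_KICKOFF = 1
    OPP_KICKOFF = 2
    OUR_THROWIN = 3
    OPP_THROWIN = 4
    OUR_PENALTY = 5
    OPP_PENALTY = 6
    OUR_GOALKICK = 7
    OPP_GOALKICK = 8
    OUR_CORNERKICK = 9
    OPP_CORNERKICK = 10
    OUR_FREEKICK = 11
    OPP_FREEKICK = 12
    DROPBALL = 13
    STARTROBOT = 15
    PARKINGROBOT = 25
    TEST = 27

# e.g. the rostopic command above publishes MatchMode.OPP_CORNERKICK (10)
```

A script can then format the rostopic payload as `"MatchMode: %d" % MatchMode.STARTROBOT` instead of a bare magic number.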
The plugin is similar to the one in the package single_nubot_gazebo, i.e., it subscribes to the topic nubotcontrol/velcmd for omnidirectional movement, and provides the services BallHandle and Shoot for ball-dribbling and ball-kicking, respectively. For the package gazebo_visual, there is a new topic named omnivision/OmniVisionInfo, which contains messages about the soccer ball and all the robots' information, such as position, velocity, etc. Since there may be multiple robots, the names of those topics and services should be prefixed with the robot model names in order to be distinguished from each other. For example, if a robot model's name is nubot1, then the topic names are /nubot1/nubotcontrol/velcmd and /nubot1/omnivision/OmniVisionInfo, and the service names are /nubot1/BallHandle and /nubot1/Shoot accordingly. The definition of the topic nubot1/omnivision/OmniVisionInfo is:

Header header
BallInfo ballinfo
ObstaclesInfo obstacleinfo
RobotInfo[] robotinfo

As shown above, there are three new message types in the definition of the omnivision/OmniVisionInfo topic, i.e., BallInfo, ObstaclesInfo and RobotInfo. The field robotinfo is a vector. Before introducing the format of these messages, three other underlying message types Point2d, PPoint and Angle are listed below.

# Point2d.msg, representing a 2-D point
float32 x              # x component
float32 y              # y component

# PPoint.msg, representing a 2-D point in polar coordinates
float32 angle          # angle against the polar axis
float32 radius         # distance from the origin

# Angle.msg, representing the angle
float32 theta          # angle of rotation

# BallInfo.msg, representing the information about the ball
Header header          # ROS header defined in std_msgs
int32 ballinfostate    # the state of the ball information
Point2d pos            # position in the global reference frame
PPoint real_pos        # relative position in the robot body frame
Point2d velocity       # velocity in the global reference frame
bool pos_known         # ball position is known (1) or not (0)
bool velocity_known    # ball velocity is known (1) or not (0)

# ObstaclesInfo.msg, representing the obstacles information
Header header          # ROS header defined in std_msgs
Point2d[] pos          # positions in the global reference frame
PPoint[] polar_pos     # positions in the polar frame, whose origin is the
                       # center of the robot and whose polar axis is along
                       # the kicking mechanism

# RobotInfo.msg, representing teammates' information
Header header          # ROS header defined in std_msgs
int32 AgentID          # ID of the robot
int32 targetNum1       # robot ID to be assigned for target position 1
int32 targetNum2       # robot ID to be assigned for target position 2
int32 targetNum3       # robot ID to be assigned for target position 3
int32 targetNum4       # robot ID to be assigned for target position 4
int32 staticpassNum    # in a static pass, the passer's ID
int32 staticcatchNum   # in a static pass, the catcher's ID
Point2d pos            # robot position in the global coordinate system
Angle heading          # robot heading in the global coordinate system
float32 vrot           # rotational velocity in the global coordinate system
Point2d vtrans         # linear velocity in the global coordinate system
bool iskick            # robot kicks the ball (1) or not (0)
bool isvalid           # robot is valid (1) or not (0)
bool isstuck           # robot is stuck (1) or not (0)
bool isdribble         # robot dribbles the ball (1) or not (0)
char current_role      # the current role
float32 role_time      # time duration that the robot keeps the role unchanged
Point2d                # target position

7.2 Configuration of Computer A and Computer B

The recommended way to run the simulation is with two computers, running nubot_ws and gazebo_visual separately: computer A runs gazebo_visual to display the movement of the robots, while computer B runs nubot_ws to control the virtual robots. In addition, computer B should also run the coach program for sending game commands. Communication between computer A and computer B is via ROS topics and services.
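The Point2d and PPoint message types above are just Cartesian and polar representations of a 2-D point, so converting between them (e.g. between the ball's global pos and its body-frame real_pos) is plain trigonometry. A ROS-independent sketch using tuples in place of the message types:

```python
import math

def ppoint_to_point2d(angle, radius):
    """Convert a PPoint (angle in rad against the polar axis, radius in m)
    to a Point2d (x, y) in the same frame."""
    return (radius * math.cos(angle), radius * math.sin(angle))

def point2d_to_ppoint(x, y):
    """Convert a Point2d (x, y) to a PPoint (angle, radius)."""
    return (math.atan2(y, x), math.hypot(x, y))
```

In the robot code, the polar axis of the PPoint frame is along the kicking mechanism, so the heading from the Angle message has to be accounted for when moving between the body frame and the global frame.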
The following is a configuration example:

• Add each other's IP address in the /etc/hosts file;
• Run gazebo_visual on computer A;
• On computer B, export ROS_MASTER_URI and then run nubot_ws;
• On computer B, run the coach and send game commands.

8 Conclusion

In summary, we have presented the ROS based software and Gazebo based simulation for our RoboCup MSL robots. The ROS based software makes it easier to share data and code among RoboCup MSL teams and to construct hybrid teams. Further, we have also detailed the design of the interface between the robot software and the simulation, which makes it possible to evaluate multi-robot collaboration algorithms using the simulation. We expect this work to be of value to the RoboCup MSL community. On the one hand, researchers can refer to our method to design both software and simulation for RoboCup MSL robots, or even general robots. On the other hand, the NuBot simulation software can be used to simulate RoboCup MSL matches, which enables state-of-the-art machine learning algorithms to be used for multi-robot collaboration training. Lastly, the presented ROS based software and Gazebo based simulation can also be employed for multi-robot collaboration research beyond RoboCup with little modification.

Acknowledgements Our work is supported by National Science Foundation of China (NO. 61403409 and NO. 61503401), China Postdoctoral Science Foundation (NO. 2014M562648), and the graduate school of National University of Defense Technology. All members of the NuBot research group are gratefully acknowledged.

References

1. Kitano, H., M. Asada, Y. Kuniyoshi, I. Noda, and E. Osawa. 1997. Robocup: The robot world cup initiative. In Proceedings of the first international conference on Autonomous agents, 340–347. ACM.
2. Kitano, H., M. Asada, Y. Kuniyoshi, I. Noda, E. Osawa, and H. Matsubara. 1997. Robocup: A challenge problem for AI. AI Magazine 18 (1): 73.
3. Kitano, H., M. Asada, I. Noda, and H. Matsubara. 1998.
Robocup: robot world cup. IEEE Robotics Automation Magazine 5: 30–36. 4. Almeida, L., J. Ji, G. Steinbauer, and S. Luke. 2016. RoboCup 2015: Robot World Cup XIX, vol. 9513. Heidelberg: Springer. 5. Bianchi, R.A., H.L. Akin, S. Ramamoorthy, and K. Sugiura. 2015. RoboCup 2014: Robot World Cup XVIII, vol. 8992. Heidelberg: Springer. 6. Soetens, R., R. van de Molengraft, and B. Cunha. 2014. Robocup msl-history, accomplishments, current status and challenges ahead. In RoboCup 2014: Robot World Cup XVIII, ed. R.A.C. Bianchi, H.L. Akin, S. Ramamoorthy, and K. Sugiura, 624–635. Heidelberg: Springer. 7. Rohmer, E., S.P.N. Singh, and M. Freese. 2013. V-rep: A versatile and scalable robot simulation framework. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 1321– 1326. 8. Koenig, N., and A. Howard. 2004. Design and use paradigms for gazebo, an open-source multi-robot simulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings, vol. 3, 2149–2154. IEEE. 9. Michel, O. 1998. Webots: Symbiosis between virtual and real mobile robots. In the First International Conference on Virtual Worlds, (London, UK), 254–263. Springer. 10. Der, R., and G. Martius. 2012. The LpzRobots Simulator. In The Playful Machine Ralf, ed. R. Der, and G. Martius. Heidelberg: Springer. 11. Harris, A., and J.M. Conrad. 2011. Survey of popular robotics simulators, frameworks, and toolkits. In 2011 Proceedings of IEEE Southeastcon, 243–249. 12. Castillo-Pizarro, P., T.V. Arredondo, and M. Torres-Torriti. 2010. Introductory survey to opensource mobile robot simulation software. In Robotics Symposium and Intelligent Robotic Meeting (LARS), 2010 Latin American, 150–155. 13. Xiong, D., J. Xiao, H. Lu, Z. Zeng, Q. Yu, K. Huang, X. Yi, Z. Zheng, C. Loughlin, and C. Loughlin. 2016. The design of an intelligent soccer-playing robot. Industrial Robot: An International Journal 43 (1). 14. Yao, W., W. Dai, J. Xiao, H. Lu, and Z. Zheng. 2015. 
A simulation system based on ros and gazebo for robocup middle size league. In 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), 54–59. IEEE. 15. Van De Molengraft, M., and O. Zweigle. 2011. Advances in intelligent robot design for the robocup middle size league. Mechatronics 21 (2): 365. 16. Nadarajah, S., and K. Sundaraj. 2013. A survey on team strategies in robot soccer: team strategies and role description. Artificial Intelligence Review 40 (3): 271–304. 17. Nadarajah, S., and K. Sundaraj. 2013. Vision in robot soccer: a review. Artificial Intelligence Review 1–23. 18. Li, X., H. Lu, D. Xiong, H. Zhang, and Z. Zheng. 2013. A survey on visual perception for RoboCup MSL soccer robots. International Journal of Advanced Robotic Systems 10 (110). 19. Nardi, D., I. Noda, F. Ribeiro, P. Stone, O. von Stryk, and M. Veloso. 2014. Robocup soccer leagues. AI Magazine 35 (3): 77–85. 20. Lunenburg, J., R. Soetens, F. Schoenmakers, P. Metsemakers, R. van de Molengraft, and M. Steinbuch. 2013. Sharing open hardware through rop, the robotic open platform. In Proceedings of 17th annual RoboCup International Symposium. 21. Neves, A.J., A.J. Pinho, A. Pereira, B. Cunha, D.A. Martins, F. Santos, G. Corrente, J. Rodrigues, J. Silva, J.L. Azevedo, et al. 2010. CAMBADA soccer team: from robot architecture to multiagent coordination. INTECH Open Access Publisher. 22. Santos, F., L. Almeida, P. Pedreiras, and L.S. Lopes. 2009. A real-time distributed software infrastructure for cooperating mobile autonomous robots. In International Conference on Advanced Robotics, 2009. ICAR 2009, 1–6. IEEE. 23. Santos, F., L. Almeida, and L.S. Lopes. 2008. Self-configuration of an adaptive TDMA wireless communication protocol for teams of mobile robots. In IEEE International Conference on Emerging Technologies and Factory Automation, 2008. ETFA 2008, 1197–1204. IEEE. 24. Lu, H., S. Yang, H. Zhang, and Z. Zheng. 2011. A robust omnidirectional vision sensor for soccer robots. 
Mechatronics 21 (2): 373–389. 25. Lu, H., H. Zhang, J. Xiao, F. Liu, and Z. Zheng. 2008. Arbitrary ball recognition based on omni-directional vision for soccer robots. In RoboCup 2008: Robot Soccer World Cup XII, ed. L. Iocchi, H. Matsubara, A. Weitzenfeld, and C. Zhou, 133–144. Heidelberg: Springer. 26. Zeng, Z., H. Lu, and Z. Zheng. 2013. High-speed trajectory tracking based on model predictive control for omni-directional mobile robots. In 2013 25th Chinese Control and Decision Conference (CCDC), 3179–3184. IEEE. 27. Xiao, J., H. Lu, Z. Zeng, D. Xiong, Q. Yu, K. Huang, S. Cheng, X. Yang, W. Dai, J. Ren, et al. 2015. Nubot team description paper 2015. In Proceedings of RoboCup 2015, Hefei, China. 28. Rajaie, H., O. Zweigle, K. Häussermann, U.-P. Käppeler, A. Tamke, and P. Levi. 2011. Hardware design and distributed embedded control architecture of a mobile soccer robot. Mechatronics 21 (2): 455–468. 29. Zandsteeg, C. 2005. Design of a robocup shooting mechanism. University of Technology Eindhoven. 30. Martinez, C.L., F. Schoenmakers, G. Naus, K. Meessen, Y. Douven, H. van de Loo, D. Bruijnen, W. Aangenent, J. Groenen, B. van Ninhuijs, et al. 2014. Tech united eindhoven, winner robocup 2014 msl. In Robot Soccer World Cup, 60–69. Springer. 31. Jansen, D., and H. Buttner. 2004. Real-time ethernet: the ethercat solution. Computing and Control Engineering 15 (1): 16–21. 32. Prytz, G. 2008. A performance analysis of EtherCAT and PROFINET IRT. In IEEE International Conference on Emerging Technologies and Factory Automation, 2008. ETFA 2008, 408–415. IEEE. 33. Xiong, D., H. Lu, and Z. Zheng. 2012. A self-localization method based on omnidirectional vision and mti for soccer robots. In 2012 10th World Congress on Intelligent Control and Automation (WCICA), 3731–3736, IEEE. 34. Rusu, R.B., and S. Cousins. 2011. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), (Shanghai, China), 9–13 May 2011. 35. Schnabel, R., R. 
Wahl, and R. Klein. 2007. Efficient ransac for point-cloud shape detection. Computer Graphics Forum 26 (2): 214–226.
36. Lu, H., Q. Yu, D. Xiong, J. Xiao, and Z. Zheng. 2014. Object motion estimation based on hybrid vision for soccer robots in 3d space. In Proceedings of RoboCup Symposium 2014, (Joao Pessoa, Brazil).
37. Wang, X., H. Zhang, H. Lu, and Z. Zheng. 2010. A new triple-based multi-robot system architecture and application in soccer robots. In Intelligent Robotics and Applications, ed. H. Liu, H. Ding, Z. Xiong, and X. Zhu, 105–115. Heidelberg: Springer.
38. Cheng, S., J. Xiao, and H. Lu. 2014. Real-time obstacle avoidance using subtargets and Cubic Bspline for mobile robots. In Proceedings of the IEEE International Conference on Information and Automation (ICIA 2014), 634–639. IEEE.

Author Biographies

Junhao Xiao (M'12) received his Bachelor of Engineering (2007) from National University of Defense Technology (NUDT), Changsha, China, and his Ph.D. (2013) at the Institute of Technical Aspects of Multimodal Systems (TAMS), Department Informatics, University of Hamburg, Hamburg, Germany. Then he joined the Department of Automatic Control, NUDT (2013), where he is an assistant professor on Robotics and Cybernetics. The focus of his research lies on mobile robotics, especially on RoboCup Soccer robots, RoboCup Rescue robots, localization, and mapping.

Dan Xiong is a Ph.D. student at National University of Defense Technology (NUDT). He received his Bachelor of Engineering and Master of Engineering both from NUDT in 2010 and 2013, respectively. His research focuses on image processing and RoboCup soccer robots.

Weijia Yao is a Master student at National University of Defense Technology (NUDT). He received his Bachelor of Engineering from NUDT in 2015. He currently focuses on multi-robot coordination and collaboration.

Qinghua Yu is a Ph.D. student at National University of Defense Technology (NUDT).
He received his Bachelor of Engineering and Master of Engineering both from NUDT in 2011 and 2014, respectively. His research focuses on robot vision and RoboCup soccer robots.

Huimin Lu received his Bachelor of Engineering (2003), Master of Engineering (2006) and Ph.D. (2010) from National University of Defense Technology (NUDT), Changsha, China. Then he joined the Department of Automatic Control, NUDT (2010), where he is an associate professor on Robotics and Cybernetics. The focus of his research lies on mobile robotics, especially on RoboCup Soccer robots, RoboCup Rescue robots, omni-directional vision, and visual SLAM.

Zhiqiang Zheng received his Ph.D. (1994) from University of Liege, Liege, Belgium. Then he joined the Department of Automatic Control, National University of Defense Technology, where he is a full professor on Robotics and Cybernetics. The focus of his research lies on mobile robotics, especially on multi-robot coordination and collaboration.

VIKI—More Than a GUI for ROS

Robin Hoogervorst, Cees Trouwborst, Alex Kamphuis and Matteo Fumagalli

Abstract This chapter introduces the open-source software VIKI. VIKI is a software package that eases the configuration of complex robotic systems and behavior by providing an easy way to collect existing ROS packages and nodes into modules that provide coherent functionalities. This abstraction layer allows users to develop behaviors in the form of a collection of interconnected modules. A GUI allows the user to develop ROS-based software architectures by simple drag-and-drop of VIKI modules, thus providing a visual overview of the setup as well as ease of reconfiguration. When a setup has been created, VIKI generates a roslaunch file by using the information of this configuration, as well as the information from the module definitions, which is then launched automatically. Distributed capabilities are also guaranteed as VIKI enables the explicit configuration of roslaunch features in its interface.
In order to show the potential of VIKI, the chapter is organised in the form of a tutorial which provides a technical overview of the software, installation instructions as well as three use-cases with increased difficulty. VIKI functions alongside your ROS installation, and only uses ROS as a runtime dependency.

Keywords ROS · GUI · Abstraction layer · Modularity · Educational · Software architecture

R. Hoogervorst · C. Trouwborst · A. Kamphuis
University of Twente, Drienerlolaan 5, 7522 NB Enschede, Netherlands
e-mail: [email protected]
C. Trouwborst
e-mail: [email protected]
A. Kamphuis
e-mail: [email protected]
M. Fumagalli (B)
Aalborg University, A.C. Meyers Vænge 15, 2450 Copenhagen, Denmark
e-mail: [email protected]

© Springer International Publishing AG 2017
A. Koubaa (ed.), Robot Operating System (ROS), Studies in Computational Intelligence 707, DOI 10.1007/978-3-319-54927-9_19

R. Hoogervorst et al.

1 Introduction

One of the major advantages of using the Robot Operating System (ROS) [1] is the possibility to exploit the modular features that it provides, thus allowing the user to develop packages that are easy to install and highly reusable. These functionalities allow the new generation of robot developers to build complex distributed systems based on open source ROS packages in an easy and reliable manner. However, even though single packages are usually easily installed and used, the development of complex systems that use multiple existing packages remains a task that requires experience and prior knowledge of the tools necessary to properly set up and configure the environment. This means that novice developers need to first grasp different concepts, such as nodes, topics, topic types, and many more, before being able to properly configure their system. Even for experienced users, it is often hard to reuse packages efficiently.
Reusing a package in a different experiment or setup requires adding references to the package in a ROS launch file, adding lines for a large variety of settings available to the packages used, while keeping an overview of the different communication channels used. It is very easy to lose a clear perspective through several development iterations. To minimise this problem, different tools have been developed and are used by ROS users for configuration and debugging as well as for monitoring. These tools typically allow the visualization of the running setup. This is helpful for the developers and users developing and testing combinations of different ROS packages, but may yield complicated graphs. Additionally, these tools do not provide the ability to make changes inside their graphical representation. In order to reduce the complexity of creating and configuring large runtime systems with ROS, while still providing the functionality of reusability and modularity, a new software package, namely VIKI (see Fig. 1), is proposed and presented in this chapter.

Fig. 1 A screenshot of VIKI

Fig. 2 The intention of the abstraction layers within VIKI is to build a configuration using modules, while ROS is being used for runtime. The gray sections are within the abstraction layer of VIKI, the white ones are within ROS

VIKI aims at improving the ease of configuring the environment, thus minimising the problems of complexity and loss of overview for the user. In order to do this, VIKI adds a layer of abstraction which presents simplified information to the end-users and developers, and a Graphical User Interface (GUI) that allows clear and intuitive visualization of the interconnection and communication throughout the setup. By visualizing the modules, rather than all nodes, the visual overview can be greatly simplified, while still viewing the information that is important.
More precisely, the abstraction layer within VIKI utilizes meta-information about packages, and allows for the combination of (parts of) packages. This is implemented in modules and can be visualized as seen in Fig. 2. The modules represent building blocks with coherent functionality that can be combined more easily to ultimately define the overall system behaviour. While VIKI abstracts the ROS packages inside the modules, the running environment is still entirely based on these packages, keeping the entire set of ROS functionalities unaltered. As a consequence, the user can reason about the packages on a higher level of abstraction, allowing the user to focus on the system architecture while using VIKI. In order to allow an easy and intuitive use of its modules, VIKI employs extra metadata about the inputs, outputs and the packages in order to provide visual connection of modules. Besides that, VIKI provides a Graphical User Interface that allows arranging modules by dragging and dropping. This creates a visual overview of the setup, which is easily adaptable for implementing and testing new control schemes. VIKI gives the ability to adapt the arrangements easily and run them again. Because the details of the package implementation are already dealt with in the abstraction, the user can use this GUI also to reason at a higher level about the software. Since the module description files provide extra information about the nodes, VIKI can use this information to aid the user. Types of topics are specified inside these module files, and this prohibits the user from connecting wrong topic types to each other. Besides that, the GUI can provide an instant overview of the inputs and outputs that a module exposes, without consulting further documentation. Especially for the starting ROS developer, this can help avoid confusion. VIKI is open-source and released under an MIT license. The full code repository can be found at https://www.github.com/UT-RAM/viki.
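The topic-type checking described above can be illustrated with a small, ROS-independent sketch. The module names, port names and topic types are invented for the example and do not come from VIKI's actual module files:

```python
class Module:
    """A simplified VIKI-style module: named, typed inputs and outputs."""
    def __init__(self, name, inputs=None, outputs=None):
        self.name = name
        self.inputs = dict(inputs or {})    # port name -> topic type
        self.outputs = dict(outputs or {})  # port name -> topic type

def can_connect(src, out_port, dst, in_port):
    """Allow a connection only when both ports exist and their topic
    types match, mirroring how a GUI could refuse an invalid
    drag-and-drop link before any launch file is generated."""
    out_type = src.outputs.get(out_port)
    in_type = dst.inputs.get(in_port)
    return out_type is not None and out_type == in_type

camera = Module('camera', outputs={'image': 'sensor_msgs/Image'})
detector = Module('detector',
                  inputs={'image': 'sensor_msgs/Image'},
                  outputs={'pose': 'geometry_msgs/Pose'})
```

Checking connections against declared port types at design time is what lets a tool like VIKI catch wiring mistakes before roslaunch ever runs.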
Full documentation can be found at http://viki.readthedocs.io. The rest of this chapter is structured as follows. Section 2 gives an overview of the existing applications that provide functionalities similar to VIKI. Sections 3 and 4 aid the reader in setting up VIKI and running some examples, respectively. Section 5 finally provides a more detailed technical overview of the internals of VIKI.

2 Background

2.1 Existing Software

Using ROS might involve using many different tools aiding in running an environment, such as editors, compilers, make systems and package-specific tools. Many of these tools are console applications, due to the fact that console applications are in general easier to create and are sufficient to provide the experienced user with enough power. Inexperienced users, however, may be daunted by console-based applications and therefore require a more visual experience. Several existing ROS tools and packages provide such a visual experience for very specific cases. An interesting and fairly complete example is rxDeveloper [2]. This software aims at a visual interface for building ROS launch files. Although the package provides a very promising list of features, including the generation of template files for both C++ and Python, it appears to have not received the attention it deserved. The project was last updated over 4 years ago, for a currently deprecated ROS version. Using unsupported software is not recommended, as it usually leads to unmaintainable setups for the research itself. Furthermore, for new ROS users it can still be challenging to use and start with, since rxDeveloper relies on roslaunch files and lacks tools for easy configuration of these packages. Another package that comes close to the functionality of VIKI is BRIDE [3]. BRIDE looks similar to rxDeveloper, in the sense that it visualizes ROS nodes and makes it easy to connect them.
The tool seems to aim at a workflow that lets the user design visually, and generate package code based on this visual design. Similar to rxDeveloper, BRIDE does not benefit from active development or an active community. A major difference with respect to VIKI is that BRIDE places emphasis on making it easier to create your own packages, while VIKI places emphasis on reuse of already available modules and integration. In fact, BRIDE allows the generation of ROS packages based on the design, while VIKI generates the visual design based on the available modules. Another package worth mentioning is FKIE Node Manager [4], which is intended to be a GUI for managing ROS nodes, topics, services, parameters and launch files present on multiple systems. It provides the user with a clear overview of running nodes and can support the user in the design phase with its launch file editor. This text editor has syntax highlighting and allows you to insert lines from templates. This is the first big difference from VIKI, which allows connecting ROS packages through a drag-and-drop workflow. Another difference is that VIKI allows for an additional layer of abstraction through the use of VIKI modules. In short: where FKIE Node Manager can be used to get information on all the parts in a running system, VIKI excels in designing system architecture by focusing on the connections between functional blocks. Besides these complete packages, there is a large number of visual tools available for ROS. A few examples are presented in [5] and others can be found on ROS wiki pages and scattered around the web [6, 7]. Most of the examples are robot-specific and therefore offer no advantage outside the use of that specific robot, except possibly for the reuse of code and structure. More general tools are often built in the ROS GUI [8, 9] and focus on the visualization or control of a robot at runtime.
It is worth noting that for very specific use cases interesting packages are available, such as the ROS GUI for Matlab (proprietary software) [10], which provides a way of connecting to the ROS master through a Matlab GUI, and Linkbot Labs (proprietary software), which uses small linkable robots to teach students how to program [11]. Many of these tools provide functionality that is compatible with VIKI. This can be a great advantage when the user wants to use specific tools for different tasks. One could, for example, load the roslaunch file that VIKI generates into FKIE Node Manager and use FKIE's functionality for a lower-level runtime overview and for managing the launched nodes. This allows the user to design his or her environment with VIKI at a high abstraction level, while using a low abstraction level at runtime. This low-level overview could be useful for, for example, specific debugging cases. Similarly, rqt can provide visualizations at a low level, while VIKI is used to design the top-level functionality. The user is free to switch between these environments, and using VIKI does not prohibit using the lower-level tools at runtime. 3 ROS Environment Configuration VIKI is a standalone application alongside the installation of ROS. The prerequisites are that ROS is installed and a catkin workspace is available. Furthermore, it is assumed that git and Python 2 are installed. Further dependencies are installed automatically by the provided self-configuring tool, making VIKI easy to set up.1 Further configuration depends on which modules are loaded into VIKI. As stated earlier, a module uses (a set of) ROS packages. Some ROS packages require additional configuration, which is then also needed for the VIKI modules that use these packages. VIKI ensures that the modules provided in the core are either automatically set up, or easy to set up by following the documentation.
For more information visit http://viki.readthedocs.io or follow the steps below. The installation of VIKI is a two-step process2:
1 This configuration is based on version 0.2-Alice, released on 9 May 2016.
2 Installation instructions might change in future releases. For the most recent installation instructions on VIKI, read the instructions located in the github repository.
1. Clone the repository located at https://www.github.com/UT-RAM/viki into a dedicated directory. The authors suggest installing VIKI inside the home folder, although the user is free to choose any other preferred location. 2. Navigate to the installation folder in the command line and run: ./viki configure This command starts a program that guides the user through the installation of VIKI. The user will be asked to provide relevant information necessary for a proper installation of VIKI, such as the installed ROS version and the location of the catkin workspace. When in doubt about an entry, use the default value. After the installation is completed, a desktop entry is added so that it is possible to launch VIKI from the Unity Dash. The options provided by the user during installation can be changed afterwards by editing config.json. The user is advised to visit the documentation [12] for further information and troubleshooting. After completing the installation of VIKI, modules need to be added. By running ./viki add-module-repository core, VIKI installs the core module repository inside the root directory specified during installation. This is the location where future VIKI modules will be added. When the command add-module-repository core is run, modules are installed by pulling the git repository github.com/UT-RAM/vikimodules into the aforementioned directory. At completion of this step, VIKI will be able to automatically find the modules and use the module files available there.
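For reference, the installation steps described above can be summarized as the following command sequence. This is a transcript, not a script to run blindly: it requires network access, `./viki configure` is interactive, and the home-folder location is only the suggested default.

```shell
# Clone VIKI into the home folder (any other location works too)
cd ~
git clone https://www.github.com/UT-RAM/viki
cd viki

# Interactive configuration: asks for the ROS version and catkin workspace
./viki configure

# Install the core module repository (pulls UT-RAM/vikimodules)
./viki add-module-repository core
```

After these steps, launching VIKI as described in Section 4 verifies the installation.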
Verification of the installation is done by launching VIKI, as explained in the next section. If any problem is encountered during installation, a quick web search usually solves it. If you run into any VIKI-specific issues, do not hesitate to contact the developers on github or create an issue at the github repository. 4 Testdriving VIKI This section is a hands-on tutorial that lets the reader get acquainted with VIKI and its functionality by describing three test cases of increasing difficulty. The first tutorial is based on the well-known turtlesim, a 2D simulation of a turtle robot. Here, the reader is instructed on how to launch and play with the turtle from within VIKI. After that, a more realistic system is set up using VIKI, and the user is guided through launching a UAV (the Parrot A.R. Drone is used as the UAV) and controlling it with a controller or joystick. This specific example is of course difficult to carry out without the proper hardware; however, it shows how well VIKI handles this sort of task. The last section guides the reader in setting up networking capabilities for ROS by exploiting the embedded functionality of VIKI and its modules, which allows intuitive and easy configuration of the networked software architecture. In this third tutorial, the same UAV is launched from one computer, while the controls are launched from another device.
Fig. 3 Screenshot of VIKI's interface. 1 The canvas: a container where you can build your project. 2 Module palette: list of all available modules with descriptions. 3 Module specifics: shows information on the currently selected module. 4 Toolbar: buttons with program-specific actions such as save and open. 5 Run button: builds and launches your project. 6 Status pane: shows debug information on internal actions performed by VIKI
4.1 Turtlesim This first tutorial guides the reader through launching a first project with VIKI using the turtlesim. As a first step, VIKI has to be started. There are two methods to do this: • in a terminal, navigate to the installation folder of VIKI (by default /home/[user]/viki) and execute: ./viki run This launches the graphical interface of VIKI, which is illustrated in Fig. 3. • launch VIKI from the Unity Dash, which is commonly used in Ubuntu to launch applications. During the installation of VIKI an entry is created in the Unity Dash. By clicking the dash icon in the upper-left corner of Ubuntu, it is possible to search for VIKI. Note that Ubuntu may not refresh its dash immediately. Therefore, if the VIKI icon is not present in the dash after installation, it may be necessary to log off and/or reboot first.
Fig. 4 The palette inside the interface of VIKI after entering 'turtle' in the search bar
Fig. 5 The canvas in VIKI after adding two loose modules to it
For this tutorial, two modules are needed, namely the turtle simulator and the module that interprets the keyboard input and sends it to the simulator. To find an existing module, search inside the palette (indicated by number 2 in Fig. 3) by clicking in the textbox and entering e.g. "turtle". Two modules will then be displayed, as shown in Fig. 4, namely: • turtle_teleop_key • turtlesim node These modules can now be clicked and dragged onto the canvas (indicated by 1 in Fig. 3). The result should look similar to Fig. 5. Note that the two modules are still loose, meaning that no connection has yet been made between them. To connect the modules, drag the output of one module to the input of another.
In the specific case addressed by this tutorial, it is necessary to connect the output of turtle_teleop_key to the input of turtlesim node, which can be done by dragging the teleop's output node to the turtle's input node. An arrow will appear that indicates the direction of information. Note that it is not possible to start dragging from the turtle's input node, as VIKI is constructed such that the direction of information flow must be followed. After completing these steps, the setup is ready. The user can now hit the green run button at the top of the screen, indicated by number 5 in Fig. 3. This opens a new terminal that provides text feedback to the user regarding the status of the setup. More importantly for this tutorial, a window with a turtle in it should also appear. After clicking on the terminal window to give it focus, the arrow keys can be used to control the turtle in the other window. To close the turtlesim application, select the terminal window and press Ctrl + C, which kills its processes. After gracefully shutting down, the terminal window disappears, and the user is free to run the canvas setup again. It is important to point out that after pressing the launch button, VIKI actually launches a separate process in a new terminal window. In case something goes wrong during launch, it is by default impossible to see what actually went wrong, since the terminal closes automatically when the process finishes; this is the default behaviour in Ubuntu. To avoid this, and thus give the user the possibility to check the output of the processes displayed in the VIKI terminal, the following procedure needs to be completed: 1. Open a terminal window 2. In the menu bar, click on Edit → Profile preferences. A configuration window should open. 3. Click on the tab Title and Command 4.
For the last option, When command exits:, choose Hold the terminal open instead of Exit the terminal, as shown in Fig. 6. 4.2 Flying the Parrot A.R. Drone After the demonstration of the basic functionality of VIKI in the previous section, the reader is guided through a more advanced tutorial, showing the possibilities of VIKI modules and system setup. The system chosen for this tutorial is a Parrot A.R. Drone, as it is a commonly used platform for experimentation with drones, as well as a commercially affordable system for educational purposes. In this tutorial, the reader is shown how to launch this drone and fly it with a joystick. In the actual implementation of VIKI, out-of-the-box packages are provided in the form of existing VIKI modules. Where a desired module is not yet available in the VIKI module repository, the user is invited to follow the documentation available at http://viki.readthedocs.io. To complete this part of the tutorial, the hardware necessary to launch the system needs to be available and ready to be used: the Parrot A.R. Drone and the joystick are the elements that need to be set up. Important information for a proper configuration of the modules is the IP address of the drone, as well as the device location of the joystick. To set up the drone, it must be turned on and the computer running VIKI should be connected to its wireless network. This network is usually called ardrone_xxxx, with xxxx being an identifier. If all is correct, its default IP is 192.168.1.1. More complex network setups are possible, but these are out of the scope of this tutorial and will therefore not be covered here.
Fig. 6 Set the terminal window to hold open after launch to be able to read the errors that ROS may raise during runtime
The setup of the joystick requires knowing the device name. This can be discovered by opening a terminal and typing the command: ls -al /dev/input | grep js.
This provides a list of all devices that are recognized as a joystick by the machine. Most likely, the joystick device will be listed as /dev/input/js0. In case multiple joysticks are found, the correct joystick location can be found by testing with a program called jstest. After setting up the hardware for this tutorial, the software architecture can be designed in a similar manner as in the previous tutorial. The necessary steps are: • launch VIKI • drag the modules called Joystick node, Parrot A.R. Drone and Image view to the canvas (Fig. 7) • set up the desired connections as illustrated in Fig. 8: drag from cmd_vel_joy to the input cmd_vel of the Parrot A.R. Drone module.
Fig. 7 The modules that are needed for launching the Parrot A.R. Drone
Fig. 8 Modules on the canvas connected in the right way
The Joystick node reads from the joystick and publishes cmd_vel messages on the cmd_vel_joy topic. The Parrot A.R. Drone and Image view modules are wrappers for existing ROS packages, providing functionality to launch the Parrot and to visualize image topics, respectively. Note that while dragging, this input of the Parrot A.R. Drone turns green, while the reset, land and takeoff entry points turn red. VIKI colours the in- and outputs while dragging based on topic type. This prevents linking mismatched topics to each other. You should now also be able to drag the right topics from the joystick module to the Parrot module. The Parrot module also provides a video stream from its front camera. By linking this to the Image view module, VIKI shows this video stream in a window on your screen. By clicking on the Joystick module, it is possible to configure the joystick parameters (Fig. 9). On the right-hand side, in the properties panel (indicated by 4 in Fig. 3), you should see a set of possible parameters, mostly for configuring the buttons that the user may like to use.
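The joystick discovery described above boils down to the following commands. The sample js0 path is only the most common case; the actual device and the listing output differ per machine.

```shell
# List input devices recognized as joysticks
ls -al /dev/input | grep js

# If several joysticks show up, identify the right one interactively;
# jstest prints axis/button values while you move the stick
jstest /dev/input/js0
```

The device path found here is what goes into the dev parameter of the Joystick module below.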
The most important setting is the dev setting, which configures the joystick device to be used. These parameters are shown for reference in Fig. 9. Enter here the address of the device discovered previously. The Parrot should be configured correctly by default. If this is not the case, it will be necessary to reconfigure the software in order for the Parrot to respond to any command. The ardrone_autonomy package uses arguments to configure the drone. These arguments can be set using VIKI as well. To do so, select the Parrot module and click on add/edit arguments in the right column. A window should open, similar to the one in Fig. 10. Change the text field to "-ip [ip]", where the user needs to substitute the IP address of the ardrone in this field.
Fig. 9 The properties panel for the joystick, where dev can be set to /dev/input/js0
Fig. 10 The arguments panel in which extra launch arguments for VIKI can be added
Following these instructions should achieve a proper setup of the necessary hardware and VIKI software architecture. Pressing the big green RUN button launches the setup that has just been created. A window should then pop up displaying the video stream of the front camera of the A.R. Drone, thus enabling a first-person viewpoint when piloting. The user can finally save the configuration. 4.3 SSH The previous two sections have guided the reader through setting up the VIKI canvas and module configurations to launch a UAV using VIKI. In practical situations, these UAVs are often controlled on a distributed system (e.g. an onboard computer connected to a ground station). ROS provides the functionality to delegate launches to other machines, using SSH. The GUI of VIKI has support to configure this, and to generate it automatically as well, by exploiting the capabilities of the roslaunch runtime layer.
This section is based on the documentation on distributed systems, which can be found in the main documentation of VIKI. This section is split into two parts. The first part covers the network configuration that needs to be applied to run a distributed system. This makes sure that the computers can reach each other using SSH. The second part shows how to launch the complete software architecture from the centralised VIKI canvas. Network configuration: this part guides the inexperienced user through configuring two computers on the same local network. In case a different setup has to be configured, the user should make sure the PCs can be accessed by hostname. More precisely, this part guides the user through the setup of one master computer, which will run VIKI, and a slave, which will launch ROS nodes. First of all, make sure the following prerequisites are satisfied: • the computers used are connected to the same network and a wireless networking adapter is also available. • there is no firewall between the computers that may block the connections between them. On a local network, this is usually the case by default • VIKI is installed on the master computer • ROS is installed on both computers.3 For the computers to find each other, usually a DNS server is used. Since the discussed setup does not use one, the hostnames have to be added by hand to the hosts file of each computer, which takes care of resolving hostnames as well. To do so, open a terminal and run these two separate commands on each computer to retrieve the local IP address and hostname: • ifconfig: this shows, among other information, the IP address of the machine. This is located in the inet addr field, under the adapter that you are using. • hostname: this prints a single line with the default hostname of your computer. This information needs to be known to each of the computers and can be added to the /etc/hosts file.
To do this, open up this file in an editor by, e.g., sudo gedit /etc/hosts.
3 Note that the ROS versions do not need to be the same; however, VIKI follows the ROS updates and this might change in the future.
For each external host that needs to be reached from this computer, a line for that computer has to be added. If this is done correctly, it should now be possible to reach each of the other computers from this one. This can be tested by typing ping <hostname>. Once the hostnames are set up correctly, VIKI can make use of them. In case of any issue with this system configuration, the reader is invited to consult the ROS documentation on distributed systems, or the VIKI documentation itself. Launching distributed systems inside VIKI: in this last part of the tutorial, the Parrot A.R. Drone is launched from inside VIKI, which runs on the master computer, while the Parrot is connected to the slave PC. To prepare the hardware for this tutorial, make sure that the slave PC has both an ethernet and a wireless adapter. If this is the case, perform the following steps: • connect the slave PC to your local network using the ethernet adapter • connect the Parrot A.R. Drone to the slave PC using the wireless adapter. At completion of the hardware connections, perform the following steps: 1. launch VIKI on the master 2. open the configuration from the previous section, including the modules for the joystick node, Parrot and image view. 3. click the harddrive icon in the toolbar, Open Machine list, to show the list with machines. A panel as shown in Fig. 11 should open up. 4. change the hostname to the hostname of your master PC by pressing the edit sign in the corner to the right. 5. click the plus sign to add a machine, which should show a panel as in Fig. 12. 6. the name is used to reference this machine later on in VIKI.
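As a sketch, the resulting hosts entries might look as follows. The IP addresses and hostnames are made-up examples; on a real system the lines are appended to /etc/hosts with root privileges, while the snippet below uses a temporary file so it can be tried safely.

```shell
# Hypothetical addresses and hostnames -- substitute the values reported
# by `ifconfig` and `hostname` on each machine.
HOSTS_FILE=$(mktemp)          # on a real system: /etc/hosts (edit with sudo)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.10    viki-master
192.168.1.11    viki-slave
EOF
# Each machine lists every *other* machine it needs to reach; connectivity
# can then be checked with e.g.:  ping viki-slave
cat "$HOSTS_FILE"
```

The same two lines (with the real values) go into the hosts file of both the master and the slave, so that each can resolve the other by name.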
For hostname, username and password, enter the values necessary to connect to the slave PC and click Save twice to go back to the main canvas. 7. click on the Parrot module 8. on the right bar, click on Select Machine. This opens a panel as in Fig. 13. 9. select the machine you just created 10. click Save changes. The full configuration is now completed. Pressing the RUN button launches this setup. The outcome of this tutorial should behave similarly to the previous section, with the difference that the modules are now running on two different machines. We understand that this specific example is difficult to reproduce without the proper hardware available, but we believe it is necessary to include, as it shows that VIKI is very capable of handling these kinds of tasks.
Fig. 11 The overview of machines within VIKI. Note that the ROS_MASTER_URI can also be set here, as well as viewing/editing the list of remote machines
Fig. 12 The screen to add a machine within VIKI
Fig. 13 The panel in which a machine for a specific module can be chosen
5 Technical Overview This section provides a technical overview of VIKI and the components it is built of, and puts VIKI in perspective with most of the tools originally available in the ROS environment. 5.1 VIKI Architecture From a broad perspective, VIKI can be considered a tool that provides an interface between the user and low-level software. It assists the user in interacting with ROS, version control systems like git and mercurial, and the file system, as shown in Fig. 14. This allows users to use VIKI as an interface to configure, create environments and start their projects, without requiring them to reimplement or make explicit use of these specific details and tools.
Fig. 14 Overview of the structure of VIKI in the environment. The user interacts with VIKI, while VIKI interacts with ROS, git and the file system to provide the functionality
In this context, ROS is used to launch the setup and execute the software of the end user. Version control is used for updating and creating VIKI modules, and the file system is used for storing configurations and finding available modules. In this sense, VIKI aims to enhance, rather than replace, other ROS tools. More precisely, the authors aim at providing a tool that can be used by both new and experienced ROS users, without affecting the option to use existing runtime tools, such that the user can still take advantage of these.4 As Fig. 14 shows, VIKI is a level of abstraction between the user and ROS, not between ROS and the environment. After the user presses the RUN button inside the VIKI GUI, ROS is launched using an automatically generated launch file. Note that this does not prevent other applications from interacting with the ROS-based software, thus making it possible to use the developers' preferred debugging and monitoring tools. From a lower-level, architectural point of view, the structure of VIKI is defined by four distinct components: the command line interface (CLI), the graphical user interface (GUI), the configuration component and the backend component. The architecture of these four components is shown in Fig. 15. The backend handles all main functionality of VIKI. The user can access this functionality by launching the GUI. The GUI only provides an interface and does not handle any logic specific to VIKI. In order to support the GUI, a CLI is provided to aid in small configuration tasks. More precisely, the CLI is used for configuration, installation of module repositories and other support features. When the CLI has created the configuration, the GUI runs using this configuration. When modifications need to be made by the user, the CLI provides a quick and solid way to apply these changes. The configuration component is separate and is used for internal configuration purposes. It only stores information coming from the CLI and backend, but does not provide additional functionality.
4 Note that VIKI is still under active development after the first release. Some of the advanced ROS tools, such as the multi-master tools, could not yet be fully integrated at the moment of reading this chapter. For an update on the current status, the interested user may refer to the online documentation.
Fig. 15 Overview of the internal structure of VIKI. VIKI provides both a CLI and a GUI as interfaces for the user. The backend handles functionality using ROS and the file system. The GUI provides the interface to this backend for the user. The CLI is used for configuration, to be used by the backend
5.2 Modules One of the core concepts in VIKI is the module. For a user familiar with ROS it may be difficult at first to differentiate between packages or nodes and VIKI's modules. This may very well be due to the deliberate design decision that a VIKI module can act as a single node, or as a package. The goal of a module is to provide coherent functionality for a specific use case. VIKI handles the abstraction of ROS packages by providing them as modules, as shown in Fig. 16. The first level shows the available ROS packages. Using module description files, the ROS packages are predefined for the end user as modules. Using VIKI, these modules can be arranged in a configuration file, which can be processed into a ROS launch file, which is then launched using ROS. Because the essential information of these packages is predefined in the module description files, users can use these modules directly. Module description files are XML files, always named viki.xml and put into separate directories inside the ROS workspace.
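The modules → configuration → launch file pipeline described above can be illustrated with a small sketch. The function and data layout below are hypothetical, not VIKI's actual internals; they only show the idea of flattening the executables of selected modules into roslaunch XML.

```python
# Hypothetical sketch of the module -> launch-file transformation.
# VIKI's real internals differ; this only illustrates the idea.
import xml.etree.ElementTree as ET


def generate_launch(executables):
    """Turn a list of (pkg, node_type, name) tuples into roslaunch XML."""
    launch = ET.Element("launch")
    for pkg, node_type, name in executables:
        # One <node> tag per executable collected from the selected modules
        ET.SubElement(launch, "node",
                      pkg=pkg, type=node_type, name=name, output="screen")
    return ET.tostring(launch, encoding="unicode")


# The turtlesim tutorial from Section 4.1, expressed as two executables:
launch_xml = generate_launch([
    ("turtlesim", "turtlesim_node", "sim"),
    ("turtlesim", "turtle_teleop_key", "teleop"),
])
print(launch_xml)
```

The real VIKI backend additionally resolves topic connections (inserting relay nodes where needed) and machine tags, but the end product is a plain roslaunch file like the one sketched here.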
A default module existing in VIKI, for instance, called twist_from_joy, consists of two nodes: • the joy_node node from the joy package • a custom node fly_from_joy that translates joy messages into sensible twist messages for the operation of, e.g., a multicopter.
Fig. 16 Visual overview of the abstraction layer in VIKI. VIKI modules are built on top of ROS packages, embedded with extra information. With these modules, a configuration file is built, which is converted to a ROS launch file. This launch file is used by VIKI during runtime
Note that these two nodes are not in the same package, but are in the same module. The user will thus, at first sight, only see one VIKI module. This module provides a single output, which in the backend corresponds to a ROS topic on which the twist messages are published. This is an important aspect of the backend: it leaves the regular ROS structure completely intact. If one were to run a tool that visualizes the running nodes and active topics (e.g. rqt_graph), both nodes mentioned above would show up as active and connected through a uniquely named relay node. This provides experienced users with the possibility to leave their existing ROS workflow unaltered. Even though modules may be a simplification of a sub-system within a desired configuration of ROS nodes, VIKI allows full customization of how these nodes are launched by passing parameters, command line arguments or launch prefixes, similar to roslaunch. In light of the desire to run certain nodes on other machines, support for adding the required SSH tags to a launch file has been implemented. The VIKI module library will not provide every combination of functionality that a user may need. This therefore requires the user to define custom modules where necessary. Creating a module is done by creating an XML file describing the properties of the module.
This XML file contains: • Meta information, such as title and description. • Inputs and outputs of the module. These provide the interface of the module and are linked to the in- and outputs of executables. • Executables, which are ROS nodes. These contain information about the node, such as its inputs, outputs and a set of default parameters to be used. • Extra configuration options, which can be used to link executables internally or add extra options. This information is stored in a viki.xml file using the XML format, and is placed in the ROS workspace. An example of such a file is shown in Listing 1. The next section elaborates on the organisation of modules and how to increase reusability. More information about the internals of the XML file can be found in the documentation online.
Listing 1: Possible contents of a viki.xml file that holds the information about the ROS packages in the module, describing a module titled "PID controller" with the description "An example node with a PID controller" (the XML content itself is not reproduced here).
5.3 Modularity and Reusability One of the main requirements in the design of VIKI is that the additional abstraction layer should not conflict with the modularity that ROS packages and nodes offer. It in fact stimulates the user to leverage the powerful ROS structure that is built around reliable communication between independent nodes. Thanks to the ease of adding modules to the system and connecting them to other modules by simply dragging and dropping, VIKI promotes modules that have a small but well-defined functionality (as the example module twist_from_joy demonstrates). VIKI relies on two principles to maintain full modularity and promote code reusability: 1. It uses the existing communication infrastructure, thus relying on ROS topics, topic namespaces and the ROS topic_tools to abstract the process of connecting nodes. 2. It requires no change to existing node logic. Combining existing nodes and packages into a module will never require developers to change the code of their nodes.
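To give a concrete impression of the viki.xml structure enumerated above (meta information, inputs/outputs, executables), a sketch of such a file is shown below. The element and attribute names here are illustrative assumptions rather than VIKI's exact schema; consult the online documentation for the real format.

```xml
<!-- Hypothetical sketch of a viki.xml module description.
     Element and attribute names are illustrative only. -->
<module id="pid_controller">
  <meta>
    <name>PID controller</name>
    <description>An example node with a PID controller</description>
  </meta>
  <inputs>
    <input name="error" message_type="std_msgs/Float64"/>
  </inputs>
  <outputs>
    <output name="control_effort" message_type="std_msgs/Float64"/>
  </outputs>
  <executable pkg="my_pid_pkg" node="pid_node">
    <!-- Default parameters, overridable from the VIKI GUI -->
    <param name="Kp" default="1.0"/>
    <param name="Ki" default="0.0"/>
  </executable>
</module>
```

The module's inputs and outputs become the connection points shown on the canvas, while the executable entries map to the `<node>` tags in the generated launch file.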
An additional XML-formatted file instructs VIKI on how to combine existing nodes. These principles not only guarantee that VIKI will always be compatible with the existing ROS infrastructure, but also greatly promote abstracting projects into small, reusable parts. VIKI scans the specified 'root module directory' recursively for viki.xml files on startup. This means all module definition files live in subfolders inside this root folder, which gives the user flexibility in how to organise his or her personal modules. To keep an overview, VIKI encourages two separate methods of locating module files. To allow modules to be shared and used between different developers, VIKI introduces the concept of module repositories. These are git repositories that include a set of module description files. The second option is to include a viki.xml file directly in the repository of a ROS package. Parts of the modules used in projects can originate from the original, open-source VIKI module repository, while other parts can be located in a private repository of the project team. These built-in functionalities show that VIKI is designed with sharing well-defined, reusable modules in mind. Thus, there are two different methods for sharing module description files, each with its own goal. • Using module repositories. VIKI makes it possible to add different module repositories and manage them using version control. When a new module is required, the user can create a new folder inside such a repository and add a new viki.xml file to this folder. This module file can use packages that are available on the system (e.g. installed using apt-get), or include the code for a small package itself inside a subfolder. This approach is preferred when a binary installation of ROS packages needs to be added to a VIKI module, or when it is desired to combine nodes of several packages into one module.
When it is required to include a large package inside a repository, it is encouraged to put it in a separate repository and include this as a dependency in the description file, as this keeps module repositories small and easy to use. • Adding a viki.xml file to the ROS package. This is the preferred method when using the ROS package directly from source and designing a VIKI module specifically for that package. Note that in this case, it is important that the 'root module directory' is specified as the root of the catkin workspace (in the config.json file), such that all directories in the workspace are scanned. All modules for the first method live within the viki_modules directory inside the catkin workspace. A good use case for these repositories would be a project inside a research group. For such a project, a new repository could be created that includes all of its module files. Users can easily pull these repositories from remote storage locations and use them directly within VIKI to browse the different modules that are available. This gives users inside the project a quick overview of what is already available and which components can be used directly. 5.4 GUI Many tools within ROS are aimed at providing an overview after the software has launched (see Sect. 2.1). The VIKI GUI, on the other hand, aims at creating a visual building space to compose projects and complex software architectures, while providing the user with a direct overview. This is done at VIKI's abstraction level, providing an overview at a higher level. When a detailed overview at the ROS level is needed, rqt_graph can still be used. While rqt_graph provides a graphical overview of all nodes and topics after launch, VIKI provides this overview between a set of modules with a subset of topics. From within the GUI, all available modules are listed in a palette that can be searched quickly based on module name or description.
From there, modules can easily be dragged onto the canvas and connected to other modules; the canvas then provides a visual representation of the architecture. Modules are connected by dragging and dropping arrows representing data flow, and the GUI gives visual feedback on which topic types match. For every module it is possible to edit settings at the executable (ROS node) level. The GUI provides an all-in-one run button, which starts the created project. Behind the scenes, VIKI generates a ROS launch file which is launched within a separate thread. While the steps from GUI to launch are abstracted away from the end user, they are easy to run independently: it is possible to generate a launch file using VIKI, copy it to another computer and launch it from there, provided that the ROS packages are available.

5.5 Future Goals

VIKI is under heavy development. At the time of writing, the latest version is 0.2, which is the version this chapter is based on. The goal of VIKI is to reduce the time researchers, students, and software integrators spend on setting up a robotic experiment. The vision behind its development is to let users employ it as their main design tool, while still allowing easy access to and use of the most important tools provided by ROS (and compatible with it), in order to speed up the development of complex robot behaviors. For this reason, VIKI has been designed to be open source, and it is licensed under an MIT license (at the time of writing of this chapter). This was chosen to allow a community to be built around VIKI. To reach this goal, focus has to be put on integrating VIKI with existing ROS tools and packages, as well as on enhancing the VIKI module repository, to guarantee better usability for the end user and more functionality within VIKI. Development of VIKI's features will be based on the community and the feedback it provides. The authors find it important to interact with users and focus on building the features that are requested most.
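As described in Sect. 5.4, the run button works by generating an ordinary ROS launch file behind the scenes. A sketch of such a generated file is shown below; the package and node names are invented for illustration.

```xml
<!-- Hypothetical roslaunch file of the kind VIKI generates behind the
     scenes; package and node names are invented for illustration. -->
<launch>
  <node pkg="usb_cam" type="usb_cam_node" name="camera_driver" output="screen"/>
  <node pkg="image_view" type="image_view" name="viewer">
    <!-- Connect the viewer's input to the camera's output topic -->
    <remap from="image" to="/camera_driver/image_raw"/>
  </node>
</launch>
```

Because this is plain roslaunch XML, the same file can be copied to another machine and started with roslaunch directly, as noted in Sect. 5.4.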
Users can obtain the information required for development by getting into contact with the authors through GitHub. Among the major functionalities still under development is the possibility to use multi-master tools that allow full distribution of the ROS executables while minimising the use of communication bandwidth. It has been mentioned that VIKI supports launching nodes using the distributed capabilities of ROS itself, but correct functioning in the current version requires the startup of one single ROS core. Short-term future goals include incorporating at least one method of running multiple masters. Besides that, future goals also aim at better integration with the existing ROS environment and at improving the workflow during package and module development, to ensure users are not bound to VIKI alone. Features that support this might include automatic or interactive module generation, and generating VIKI configurations or modules based on launch files. Specific decisions on the implementation will be discussed with the community and tailored to its needs, as already mentioned.

Author Biographies

Robin Hoogervorst is a Master student Computer Science at the University of Twente. He has successfully completed the Bachelor Advanced Technology. Based on the research from his Bachelor Assignment, a paper called 'Vision-IMU based collaborative control of a blind UAV' has been published at RED-UAS Mexico. His main interests are in the field of Software Engineering, more specifically in creating solid and dynamic software which people love.

Cees Trouwborst is a Master student Systems and Control at the University of Twente, specializing in Robotics and Mechatronics. Before this, he finished the Bachelor Advanced Technology with a bachelor thesis on "Control of Quadcopters for Collaborative Interaction". His areas of interest include autonomous systems, machine learning, Internet-of-Things and software architecture.

Alex Kamphuis is a Mechanical Engineering student at the University of Twente.
He was awarded a Bachelor of Science degree in Advanced Technology after the completion of his thesis on the implementation of the velocity obstacle principle for 3D dynamic obstacle avoidance in quadcopter waypoint navigation. Since then he has been pursuing a Master of Science degree at the multiscale mechanics group. Part of his current research on sound propagation through granular media is conducted at the German Aerospace Center in Cologne. It entails cooperation with experienced researchers on topics such as stress birefringence and zero-gravity environments; as such, he has performed experiments in over 60 zero-g parabolas. His other interests are running, reading and music.

Matteo Fumagalli is Assistant Professor in Mechatronics within the Department of Mechanical and Manufacturing Engineering at Aalborg University. He received his M.Sc. in 2006 from Politecnico di Milano, and his PhD at the University of Genoa, where he worked in collaboration with the IIT - Istituto Italiano di Tecnologia. He has been a post-doc at the Robotics and Mechatronics group of the University of Twente, where he carried out research on advanced robotic system design and control.

Robot Operating System (ROS) - The Complete Reference (Volume 2)
Maharashtra quietly amends its RTI Rules
By karira, March 30, 2012, in RTI in Media

karira: On 16 Jan 2012 and 30 Jan 2012, Maharashtra quietly amended the RTI Rules:

1. RTI application limited to 150 words
2. RTI application restricted to one single subject matter
3. During "inspection", applicant only allowed to carry a pencil

The Gazette notifications are attached to this post (Notified Maharashtra RTI Rules Amendment Jan 2012.pdf).

Copied from email received from Krishnaraj Rao, a leading activist from Mumbai:

30 March 2012, Mumbai: There has been a startling breach of trust and public confidence by the Govt of Maharashtra. Without any public debate, the government has quietly notified an amendment to the Maharashtra RTI Rules on 16th January 2012 (see Notified Maharashtra RTI Rules Amendment Jan 2012.pdf). Shockingly, we did not even learn this from any government source such as a public notice in the dailies. Nor was it told to any RTI activist – many of whom are in regular touch with Mantralaya. We were informed by our RTI Union member Advocate Vinod Sampat, who saw this notification in the March edition of a publication he purchased outside City Civil Court. After reading it, we are left in no doubt as to the authenticity of the notification and the amendment.

THE AMENDMENT SAYS:
(i) A request for information must not ordinarily exceed 150 words
(ii) A request for information must relate to one subject matter only. If necessary, separate applications must be made if it relates to more than one.
(iii) The Public Information Officer (PIO) must allow the person inspecting the documents to take a pencil only. All other writing instruments must be deposited with the PIO.

dssampath: This is a very bad one. This will spread over to other states. Before that we have to take some steps to stay the said notification.

namitabh: MAHARASHTRA KILLS RTI ACT - sum and substance of the amendment and final result...
aslamkhan: Citizens of Maharashtra pay higher taxes than those of any other state of the country, and tax collection is also higher, but look at what citizens get in return: back-stabbing by the government of Maharashtra. Can we ask the Governor of Maharashtra about this decision under Sec 4(1)(d) of the RTI Act 2005?

natarajan.k: Members, I am going to meet the leader of the opposition, present the point of the amendment on behalf of all the RTI activists, and request him to have a debate in the ongoing assembly session in Maharashtra. If any members want to join me for the meeting, send me a private message.

digal: Great! Please do it. Today they set a limit of 150 words; tomorrow they may amend it to 150 characters... These demagogues suffocate the very cause of RTI. They amend even without a discussion! What is happening to the elected representatives?

The Governor has nothing to do with this. It is the GAD - the nodal department for RTI in Maharashtra. You can file an RTI application (under Sec 6(1)) and ask for an inspection of the entire file.

dr.s.malhotra: Is the Govt / framing department not required to put the public on notice that they intend to make such amendments, and to seek objections and suggestions from the affected public? The notification can be challenged in the High Court if the Govt refuses to take back the notification. This is deplorable.
As per a notification, dated January 16, which is floating around on emails, an applicant can ask questions only on a single subject matter and his application cannot exceed 150 words. Besides, during inspection, a person can carry only a pencil along with him. Say, for instance, you have filed an RTI application with the building proposal department. If a part of the reply involves the building construction department as well, chances are that the authorities may ask you to file a separate RTI application. RTI activists in the city have dubbed this move an “absolute breach of trust”. The clause of the ‘single subject matter’, in particular, has left them worried. “It was done only to scuttle the RTI Act. They did not even bother to tell us,” , alleged RTI activist Krishnaraj Rao. “Instead of appointing commissioners, the government is more interested in destroying the Act,” said GR Vora, who has started a signature campaign against the move. “Many scams have come out of the closet of the government. This is being done only to hide them,” alleged Bhaskar Prabhu, another RTI activist. VK Sampat, an advocate, also slammed the move. Nandukmar Jantre, former under-secretary, general administration department, confimed having prepared the notification. “It was cleared by the CM in January,” said Jantre. Each state is allowed to make rules to ensure the smooth functioning of the RTI Act, 2005. skmishra1970 77 It is very sad, any RTI Rule made by AG or PA should not be over and above of main RTI Act 2005 - as ruled by Central Information Commission. One can move to CIC with a complaint in this regard ? One side such amendments may generate more funds for PA in Govt. A/c but another side the people who have tested RTI will file numerious application which will be additional burden on PIO, anyhow first such amendments is not in the interest of general public and secondly for PIO, both should oppose this amendments. Is it possible ? 
These rules have been amended as per powers vested in the State Government under Sec 27 of the RTI Act 2005. The Central Information Commission (CIC) has no role to play in this matter.

Yogi M. P. Singh: Hon'ble members, why is the Government of Maharashtra befooling its public by claiming that the state government can enact rules for the smooth functioning of the Act? The question arises whether such an enactment generalises the Act or cuts its wings. With this enactment, the PIO gains more powers to reject an RTI communique, as follows:
1. The RTI communique has more than the prescribed number of words.
2. The RTI communique has more than one subject.
Is the state government wreaking vengeance on the RTI Act 2005 because one of the state's chief ministers was exposed through the transparency act? Here the opposition is playing a silent role and indirectly supporting the anti-people stand of the state government.

J.C.Mishra: Ridiculous indeed. The Central Information Commission, New Delhi, in Decision No. CIC/SG/A/2010/003545/11147, Appeal No. CIC/SG/A/2010/003545, ruled on 27-01-2011 that:

"Therefore, further exemptions can neither be claimed under the RTI Act nor be provided for in subordinate legislations. In other words, the rules framed by a competent authority cannot go beyond the exemptions provided for in Sections 8 and 9 of the RTI Act. The Supreme Court of India as well as various High Courts have categorically held that subordinate legislations or rules cannot go beyond the letter of the delegating legislation. The Supreme Court of India in Additional District Magistrate (Rev.), Delhi Administration v.
Shri Siri Ram AIR 2000 SC 2143 held as follows: "It is well recognised principle of interpretation of a statute that conferment of rule making power by an Act does not enable the rule making authority to make rule which travels beyond the scope of the enabling Act or which is inconsistent therewith or repugnant thereto." In other words, the Supreme Court of India has held that a rule which is inconsistent or goes beyond the scope of the enabling statute would not be valid. The Supreme Court of India has further observed in Hukam Chand v Union of India AIR 1972 SC 2427 that: "The underlying principle is that unlike Sovereign Legislature which has power to enact laws with retrospective operation, authority vested with the power of making subordinate legislation has to act within the limits of its power and cannot transgress the same. The initial difference between subordinate legislation and the statute laws lies in the fact that a subordinate law making body is bound by the terms of its delegated or derived authority…" In the instant case, the information sought by the Appellant was denied on the basis of Rule 7(vi) of the District Court Rules. Rule 7(vi) provides that the PIO will not give information which relates to a judicial proceeding, or judicial functions or the matters incidental or ancillary thereto. On review of the said Rule, the Commission observed that Rule 7(vi) provides for a much wider exemption than that stipulated under Section 8 of the RTI Act. If Rule 7(vi) is to be implemented, it would defeat the purpose of the RTI Act and reading it as valid would be tantamount to adding exemptions to the RTI Act, which were not envisaged by the Parliament. Therefore, the exemption contained in Rule 7(vi) of the District Court Rules cannot be invoked to deny information under the RTI Act as it goes beyond the scope of the exemptions provided under Sections 8 and 9 of the RTI Act.
Therefore, the Commission holds that the denial of information by the PIO by relying on the exemption contained in Rule 7(vi) of the District Court Rules is devoid of any merit and he is required to furnish the complete information as sought in the RTI application. It must be noted that no public body is permitted under the RTI Act to take upon itself the role of the legislature and import new exemptions hitherto not provided. The District Court Rules made by the competent authority under the RTI Act appear to bring in exemptions not provided for in the RTI Act and transgress the exemptions envisaged by the Parliament under Sections 8 and 9 of the RTI Act. Since the right to information is a fundamental right of the citizens, any move which constricts it should be avoided. Even Parliament is very wary of the restrictions it can place on the fundamental right of the citizen, and hence competent authorities would be well advised to ensure that they do not create any exemptions which the lawmakers did not provide."

The CIC disagreed with the rules made by a CA; it may likewise disagree with the rules made by an AG which are beyond the scope of RTI. We should try.

Dear skmishra: "Therefore, further exemptions can neither be claimed under the RTI Act nor be provided for in subordinate legislations. In other words, the rules framed by a competent authority cannot go beyond the exemptions provided for in Sections 8 and 9 of the RTI Act. The Supreme Court of India as well as various High Courts have categorically held that subordinate legislations or rules cannot go beyond the letter of the delegating legislation. ................... it may be disagree with the rules made by AG which are beyond the scope of RTI. We should try."

IC SG's view is definitely in the spirit of RTI. But he was deciding an appeal against the PIO of the Office of the District Judge-III, Tis Hazari Court, a PA which comes under the jurisdiction of the CIC.
He was deciding whether the rule framed by the PA was in the spirit of the RTI Act or not. While seeking information from the central Govt. or central PSUs, one has to go through the rules framed by the central Govt., the "appropriate Government"; for state Govts. and PSUs, one has to go through the rules framed by the "appropriate Government", i.e. the state Govts. The CIC has no role to change or amend the rule.

You mean, if the matter relates to a State, then the CIC cannot do anything, but the State Commission can, which they will not do actually because they are appointed by that govt.

mahekovoor: It is really shocking to hear this, at a time when people are expecting further relaxation in RTI procedures. Hope some of you might have seen a message being circulated among internet fraternities regarding the Lokpal bill: an anticipated news headline in 2060... "Munna Hazare, grand grand son of Anna Hazare, fasting at Jundhar Mandhir for early implementation of Lokpal bill"... From the author: if things move like this, people may have to fight again in 2060 for reintroduction of the RTI act, as one by one various states and the central govt may scrap this beautiful act in due course.
What is to be done? can these alterations be challenged with a PIL? Are we really helpless against these autocratic leaders? Are we destined to feel frustrated? Have we lost our spines? Well, somebody do something. Kishor Mehta Bureaucrat vies for top post in RTI body after ‘strangling’ act [h=3]As Indian Administrative Service officer Nandkumar Jantre eyes state info chief’s chair, wary activists say he should not be appointed for blunting rules by restricting words per RTI query[/h][h=4]Yogesh.Naik @timesgroup.com[/h] Asenior bureaucrat, who was accused of trying to throttle rules pertaining to the right to information, is now in the race for the top post in the state information body. And social activists are not too happy about this development. The Indian Administrative Service officer, Nandkumar Jantre, had incurred activists’ wrath by limiting word count of queries to 150 words. The Right to Information act is a potent tool in citizens' hands to acquire information of goings-on in government departments. Such information can stand as evidence in a court of law. The state government, which was the first in the country to introduce this act, clandestinely amended the rules in January this year. This amendment mandates that an applicant limit questions seeking information to 150 words. The notification dated January 16 was signed by Jantre who, at that time, held the post of secretary in the state’s general administration department. He said he was not sure whether the file pertaining to amendment of the rule, was sent to chief minister Prithviraj Chavan who heads the general administration department, or to the state’s top bureaucrat, chief secretary Ratnakar Gaikwad. While the chief minister did not comment, Gaikwad said he was unaware of any circular issued by Jantre. Jantre, who retired in January, confirmed he has applied for the state information commissioner’s post. 
Activists say that putting a cap on the word count for an application is a tacit way of arm-twisting information seekers. Lawyer Vinod Sampat, an information activist, said the plan to elect Jantre to the top post in the state is aimed at "curtailing the RTI movement". Sampat said activists were planning to protest the decision. "Jantre's act [of restricting word count] is against the movement and he should not be appointed to the post."

Another activist, Anil Galgali, echoed the view. "It seems the state is further trying to strangulate the act," said Galgali, adding that it was this act which had exposed irregularities in the Adarsh housing society scam and alleged corruption by senior Congress politician Kripashankar Singh. "The cost of procuring information will rise with a limit to words per application. It will mean that one will have to file at least four to five applications to seek information which could otherwise have been procured by just one application," said Galgali.

Jantre, however, justified his decision, saying applicants often write never-ending applications, which are peppered with their comment and criticism. "Officials are not interested in this Ramayan or Mahabharata in applications," Jantre said. "Queries need to be short. The act is meant to seek information, and officials' time should not be wasted."

Amid the clamour that he is not a fit candidate for the state information commissioner's post, Jantre said that his application for the top post had nothing to do with what he had done as secretary in the general administration department. "There is no reason to link the two," Jantre said, while parrying questions on the flak drawn from activists.

KEEP IT SHORT: A PLOY TO STONE-WALL?
» Indian Administrative Service officer Nandkumar Jantre (pictured), in his former capacity in the state's general administration section, issued a notification restricting the word count of each Right to Information query to a maximum of 150 words per application.
It drew flak from activists who saw it as a move to 'strangle' the act.
» Now, retired Jantre has applied for the info chief's post for the state. This move has irked activists.
(Source: e-paper)

As reported by Amarpita Banerjee in indianexpress.com on 02 April 2012 ("150 words too few, changes could hurt entire process"):

Those applying for information under the Right to Information (RTI) Act in the state will henceforth have to limit their queries to 150 words, as per an amendment made by the state government in January. Also, as per the amendment, the questions can pertain to only one subject per application. Activists across the city, who were unaware of the change till a few days ago, expressed their unhappiness at the government taking the decision without any discussion with the stakeholders.

This notification has been brought out by the state government's general administration department. However, the activists allege the decision was taken in an undemocratic way, without inviting suggestions from citizens. As per the notification by secretary to the government, Nandkumar Jantre, "A request in writing for information under Section 6 of the Act shall relate to one subject matter and it shall not ordinarily exceed 150 words. If an applicant wishes to seek information on more than one subject matter, he shall make separate applications."

Activist Vivek Velankar of Sajag Nagrik Manch said, "Fixing a word limit for RTI applications will hamper the entire process. The step was taken probably because there were complaints about applications reading like essays. However, 150 words is too less. Also, the government took the decision without consulting the public."

Shivaji Raut, a Satara-based activist, said making such changes would defeat the purpose of the Act. "An amendment like this from a government which calls itself progressive is surprising. None of us knew about these changes.
There is a need for the government to take into consideration the views and suggestions of citizens. If they had to limit the words, they could have kept it at a more agreeable number, such as 500. We plan to protest against this amendment."

While most say the word limit will hamper applicants from adequately expressing their queries, some feel the bigger issue is the way the amendment was brought about. "The amendment in itself is not a big deal. However, the government took this step without any consultation with the stakeholders. Tomorrow, they might show the same attitude towards bigger issues. This is typical bureaucratic behavior and rather than concentrating on issues such as this, the government should make sure that the aim of the Act, that of bringing about transparency in governance, is met," said Maj Gen (Retd) SCN Jatar.

samss: The funniest part of it is that no one in the General Administration Department seems to know about it, or they are just pretending to be oblivious about it. I made around 10 phone calls in that dept and they just kept passing me around (their usual business). The current under secretary's office also pretends ignorance.

CIC moved on recruitment procedure of High Court
By ganpat1956

A Supreme Court lawyer has moved the Central Information Commission seeking information on the procedure of the recruitment of class III and IV employees in the Delhi High Court after it was denied by its administration. Advocate Kamini Jaiswal approached the CIC contending that the orders of the High Court Public Information Officer and Chief Public Information Officer (First Appellate Authority) refusing to part with the information were a violation of the Right to Information Act and also of her Fundamental Rights. She alleged that information had been denied for erroneous reasons and that none of the exemptions available under Section 8 of the Act allows the authority not to part with the information sought.
The lawyer had filed the application before the Public Information Officer on September 22, 2006, seeking information regarding the number of class III and class IV employees recruited by the Court from 1990 to 2006 and the procedure followed for their recruitment. The High Court PIO, while denying the information, held that information pertaining to those decisions which were taken administratively or quasi-judicially would be available only to the affected parties. The lawyer then approached the Appellate Authority challenging the PIO's order, contending that the High Court (Right to Information) Rules were inconsistent with the provisions of the Right to Information Act and should be held void. But the Appellate Authority refused to accept the contention of the lawyer and dismissed her appeal. Now the lawyer has moved the Central Information Commission against this order.
(Source: NewKerala.Com, India News Channel)

Top score but IIM will not take her in
Bangalore, February 23, 2007

The wait is certainly agonizing for Vaishnavi Kasturi, a visually impaired student, as she knocks on the doors of the Indian Institute of Management-Bangalore to know why she could not make it despite her excellent performance in CAT 2006. On Friday, Vaishnavi's father RK Kasturi spent several hours closeted with a team of officials from IIM-B, asking them why his daughter was not called for a group discussion and personal interview. Vaishnavi cleared the CAT with a percentile of 89.29, outdoing thousands of other candidates. She was certainly eligible to sit for the next round of tests — the group discussion and interview — what with the IIM-B setting a cut-off of 86.42 percentile for the disabled. But the call never came. Disappointed, Vaishnavi's family filed a notice under the Right to Information Act, which got Kasturi the meeting with the school authorities.
At the end of the discussion, Kasturi still did not have an answer for his daughter. He told the Hindustan Times: "They told us that she did not make it because others (in the category of applicants with physical disabilities) were graduates or had work experience, etc. We had a long meeting and discussed many things because we want to understand where we stand. Let us wait till Monday (February 26). We have to attend a hearing at the RTI Commissioner's office that day. The group discussions and interviews are scheduled for April. Let us see what happens on Monday."

For Vaishnavi, a sixth semester BCom student of a local college, the doors to IIM-B may not have opened, but another prestigious institute, the MS Ramaiah Institute of Management, has offered her a free seat for a post-graduate diploma in management. Vaishnavi, however, still hopes she will qualify for the Indian Institute of Management-Bangalore at the end of the hearing at the RTI Commissioner's office in the state capital on Monday.
(Source: HindustanTimes.com, "IIM says no to top scorer")
Dayen writes: "Warren singled out by name her own Democratic colleagues who were supporting the bill, catching internal blowback from the caucus. But Warren's warnings have proven prescient, as the SunTrust-BB&T merger represents the latest in a wave of deals in the financial sector."

Elizabeth Warren. (photo: Michael Dwyer/AP)

Elizabeth Warren Was Right: New Law Is Already Making Banks Bigger
By David Dayen, The Intercept
10 February 19

The proposed $28 billion merger announced Thursday between large regional banks SunTrust and BB&T is the biggest banking tie-up since the financial crisis, creating what would become the nation's sixth-largest bank. And it's a direct result of actions taken by the Trump administration and the bipartisan group of lawmakers who passed a bank deregulation bill in 2018.

While Democrats insisted that the bank bill, S.2155, was merely about community bank regulatory relief, critics and industry experts expected that it would lead to consolidation of the sector, which began to occur almost immediately after passage. "I'm concerned about the negative impact of increased consolidation caused by S.2155 on community banks and on customers who benefit from more competition for their business," wrote Massachusetts Sen. Elizabeth Warren in April 2018, just a month after the bill's passage.

Warren singled out by name her own Democratic colleagues who were supporting the bill, catching internal blowback from the caucus. But Warren's warnings have proven prescient, as the SunTrust-BB&T merger represents the latest in a wave of deals in the financial sector.

"Once again, big bank deregulation is leading to more consolidation," said Rep. Katie Porter, D-Calif., a Warren protégé and first-term congressperson who sits on the House Financial Services Committee. "I opposed last year's bank giveaway bill, and the Trump administration's loosening of protections, precisely because it would make 'too big to fail' even worse.
This merger will do the same and end up hurting our nation’s community banks.” The merger was made possible by a series of gifts granted to banks since Donald Trump took office. The December 2017 tax cut has had perhaps its most dramatic effect in lowering the tax burden for financial institutions. The nation’s 23 largest banks, which include BB&T and SunTrust, paid around $21 billion less in taxes in 2018, according to a Bloomberg analysis. That built up massive cash reserves at these banks, most of which were channeled to investors and executives rather than line-level workers. Dividends and stock buybacks went up 23 percent, while the companies cut around 4,300 jobs, with more firings on the way. Even lending growth slowed at these big banks. Meanwhile, S.2155 weakened regulatory standards for banks between $50 billion and $250 billion in assets, a category that also includes SunTrust and BB&T. This allowed them to reduce projected spending on regulatory compliance. The combination of this and the tax windfall gave the banks operating funds to devote to mergers and acquisitions. Experts believed that firms under $250 billion in assets would feel free to grow through consolidation, without concern for a higher regulatory burden. The new bank created by the deal between SunTrust and BB&T would actually breach that threshold, creating a bank with $442 billion in assets. But an obscure — but certainly not accidental — provision of S.2155 enabled the Federal Reserve to provide relief for banks at that level too. Section 401 of S.2155 changed the authority for tailoring heightened regulatory standards for banks above $100 billion in assets. By changing one word in existing statutes, from “may” to “shall,” the bill required the Fed to assess all its rules to ensure that they’re tailored to each specific bank, based on size or amount of risk. The Fed took this loophole and drove a truck through it. 
Last October, it proposed assembling four categories of banks, with different regulatory standards provided to each. Firms between $100 billion to $250 billion would get the lowest scrutiny, followed by those between $250 billion and $700 billion, the category the new SunTrust-BB&T bank would fall into. Those banks would no longer be subject to certain capital and liquidity requirements, which the Fed labels as “advanced approaches.” They also would only be subject to company-run stress tests, where the bank must show how it would fare under adverse economic conditions, every two years, as opposed to semiannually. The American Bankers Association cheered this “right-sizing” of bank regulations. While none of the specifics were explicitly required under S.2155, the law clearly gave the Fed the excuse to engage in this further deregulation. In fact, Randal Quarles, the Fed’s vice chair for supervision, stated in a speech last July that S.2155 “requires the Board to tailor its framework of supervision and regulation of large firms.” In its proposal, Fed staff acknowledged that the changes would “slightly lower capital requirements under current conditions and reduce compliance costs related to capital planning, stress testing, and, for certain firms, the advanced approaches capital requirements.” By lowering these barriers for going above $250 billion, the Fed greased the path for SunTrust and BB&T to merge. Sixteen Democratic senators and 33 Democratic House members voted for S.2155, making the legislation one of the most notably bipartisan efforts of Trump’s first term. At the time, Democrats said the law was narrowly intended to only help community banks. But it appears to be facilitating this big bank merger. The weak supervision of the Trump administration on banking matters also makes the step up above $250 billion less burdensome and therefore more attractive. 
For example, SunTrust and BB&T would still be subject to annual supervisory stress tests, administered by the Fed. But the Fed just changed those as well, proposing to give banks more upfront information on the testing process, including the scenarios it uses to probe bank balance sheets and the results of how simulated loan portfolios perform. This is the equivalent of giving banks a cheat sheet before the test, which they can then exploit to ensure a passing grade. In theory, antitrust authorities will scrutinize this merger before it advances. In reality, there’s been an 18 percent drop in staffing at the part of the Justice Department’s antitrust division that reviews mergers since Trump’s inauguration. One final hurdle to big bank mergers is the Community Reinvestment Act, which assesses banks for lending into low- and moderate-income areas. The CRA really only has one enforcement mechanism: Regulators examine it when banks attempt to merge. But the lead regulator that would do the examining, the Office of the Comptroller of the Currency, is headed by Joseph Otting, former CEO of OneWest bank, who has expressed his disdain for the CRA both before and while in office. Otting has cited his experience from when OneWest merged with CIT, previously the largest merger since the financial crisis, as cementing his views on the CRA. “I went through a very difficult period with some community groups … who came in at the bottom of the ninth inning that tried to change the direction of our merger,” he told a banking conference last April. There is credible evidence that Otting and OneWest submitted fake public comments in support of the merger, which went through in 2015. The text of the fake supportive comments is identical to a sample letter placed on the OneWest website in 2015 encouraging customers to support the merger. 
Otting, then OneWest CEO, sent emails to his contacts on Wall Street at the time, pointing to the sample letter on the website and soliciting support. Now at OCC, Otting has proposed “modernizing” the CRA. Critics believe the changes would eliminate bank commitments to low-income residents and local communities. Even if the CRA update is not completed before the merger approval process on SunTrust-BB&T begins, it’s unlikely that Otting, a committed opponent of the CRA, would use it in any way to hold up the merger.

Meanwhile, SunTrust and BB&T have said that the merger would cut costs by $1.6 billion by eliminating jobs in accounting, legal, and bank branches. The merger combines two “stadium banks,” a moniker that came out of the debate over S.2155, referring to banks that separately aren’t big enough to be global names, but are big enough to have a stadium named after them. SunTrust Park is the home of the Atlanta Braves; BB&T Ballpark houses the Charlotte Knights; and BB&T Field is where Wake Forest University’s football team plays.

Comments:

+6 # Rodion Raskolnikov 2019-02-11 08:16
It is good that Warren called out democrats for their support of this bill. Both Demos and Repubs have supported the neo-liberal legislation that has created the mess we now have with banks and corporations in general. We know how we got here. Corporations and banks pay very little in taxes. The middle class pays most of the support for governments at all levels. Corporations and banks use their money to buy other institutions and grow larger. This is an unsustainable trend. Its trajectory points to such extreme levels of wealth inequality that most people on earth simply will not be able to live. It was in the Bush I regime that the laws preventing commercial banks from operating across state lines were repealed. Before that Sun Trust was a Virginia bank (called Virginia Bank Shares) and BBT was North Carolina Bank. Both are moving toward being national banks like Bank of America or Citibank. Profits go up, service goes down. Billionaire banks buy influence of politicians so they get more favorable legislation. Warren is especially good on this issue. I congratulate her for having the guts to take on Democrats. They are just as bad as republicans on banking issues.

+4 # Kootenay Coyote 2019-02-11 09:56
'The bigger they are, the harder they fall': & they fall on top of everyone else, too.

0 # Robbee 2019-02-11 18:05
Quoting Kootenay Coyote: harvard bank law professor Was Right: New Law Is Already Making Banks Bigger - warren? elizabeth? elizabeth warren? - where have i heard that name before?
THE RIGHT TO HEALTH HAS SIMPLY NOT YET BEEN GRANTED WHAT IS INHERENT TO IT. (Part one of two)
Written by schuftan@gmai.com
[Taken from the doctoral thesis of Eduardo Arenas Catalan entitled ‘Solidarity and the RTH in the Era of Healthcare Commercialization’, 2018]

What is missing that is inherent to it?

1. What is missing is effectively linking the many (so far) unheeded claims with the duties of the institutions responsible for making the right to health (RTH) possible. The hypothesis here is that it is solidarity, and not the further expansion of legal rights, that will eventually give this social right its distinctiveness and its impact. For the most part, said institutions do not conceive of the RTH in its more substantive way; it is rather fundamentally interpreted as a legal right.*

*: Amartya Sen has written against that ‘legally parasitic view of human rights’, arguing that human rights (HR) must be seen as an approach to ethics, which stands in contrast, for instance, to utilitarianism.

2. Conversely, if the RTH is interpreted as a solidarity mechanism, it follows that social services are to function in a way that meets collective rather than individual entitlements.

3. The predominant legalistic interpretation of the RTH shifts it from the declared goal of equal-access-to-healthcare-for-all to the goal of achieving-a-justiciable-minimum for those denied (or unable to pay for) healthcare services. And precisely this, it is contended, is what gives social rights (the RTH included) second-class status vis-a-vis civil and political rights.**

**: Social rights have a more democratic policy-making pedigree. The judiciary cannot operate actively on HR issues, but only in relative terms, i.e., it can only review a legislative decision, not (democratically) generate new policy. That is not its role.

4.
To say that the RTH is grounded in solidarity is to mean that the goal of this right is to ever-increase the path towards equal access to healthcare for all --i.e., from access conditioned by social privilege to access based on citizenship and medical needs (including non-citizens within the state’s jurisdiction).

5. Let us be clear: The obligation to fulfill the RTH entails the establishment of a non-market right of access to public healthcare services free of charge, again, based on medical need.

6. Or in other words: Conceived under the solidarity lens, the RTH concerns the designation of an area that, due to its fundamental importance, is placed outside the market and is guaranteed to all.

7. Therefore, the inability of the authoritative bodies to identify the commercialization of healthcare as the greatest threat to the RTH is a fundamental part of the problem.

Furthermore, the right to health is a construct that cannot be envisioned separate from politics. Predatory forms of globalization cannot be reconciled with human rights.

8. Beware: It is not true that the idea of neoliberalism is opposed to the state. The forces of globalized capital have successfully put the state at the service of its agenda. The real management of capitalism requires market plus state. (Samir Amin)

9. Two key questions for HR workers are: What happens when one of the RTH’s most crucial components becomes operationalized under a private logic? and What does that say about the protection-on-an-equal-footing of the HR/RTH of all claimholders? Equal treatment is a non-negotiable HR principle. In this context, solidarity means not just equal access; it also means that, rather than confining scarcity to the shoulders of those rendered poor, these scarcities are justly distributed among all members of society.

10. Human rights law must, therefore, take a more decisive stand against the commodification of healthcare.
Failure to do so is one of the most important reasons explaining the advances of the far right at the expense of HR. An area of priority action here.

11. It is thus disingenuous to defend a framework of individual legal rights; they will never lead to health equality. Unsurprisingly, those who most strongly defend such a stand are often the ones with the economic means to seek the private healthcare they need.

So beware, the predominant interpretation of the RTH including the private commercial sector is wrongly based on a ‘political neutrality’ stand

12. This said, we must defend all social rights from two different, but equally testing adversaries: a) the cultural liberal voices that zealously defend HR as long as the adversary is the state and not the market; and b) the vociferous groups that do not want to know anything about HR.

13. It has become too common to assert that “we all want the same thing, we just have slightly different ways of going about it”. This is simply a false and biased assertion. The rich do not want the same thing as those rendered poor. Those who depend on their job for their livelihood do not want the same things as those who live off investments and dividends. Those who do not need public services, because they can purchase private services, do not seek the same things as those who depend exclusively on the public sector. (Tony Judt)

Claudio Schuftan, Ho Chi Minh City
www.claudioschuftan.com
August 7 signed second the it’s car bargain considering right wholesale nfl jerseys from china

Seriously, these seats are Volvo good. It’s Jaguar’s first all-electric car. Those are all things that go into play. It sopped up the imperfections of the island roads easily, striking down the joke: You know you’re from Newfoundland when driving is better in the winter because the potholes are filled with snow. He doesn’t believe the assets to acquire Faulk would be as high considering all that the team got for Jeff Skinner. I was overwhelmed at the time, but now I’m excited, said Duchene. It can even report potential fuel waste and provide coaching to reduce it. In addition to Apple CarPlay and Android Auto support, you get Co-Pilot360, Ford’s suite of driver-assistance and active safety features. Duke is 16 all-time in Maui. The standard engine then was the 6-liter LS2, rated at 400 hp, later upping to 430 with the 6-liter LS3. It’s just as well-appointed inside as the DB11, meaning you won’t worry about spending a lot of time in the cabin. Volvo has buffed out annoyances and, while outright performance might not be the S60’s strong suit, this is a machine that’s beautifully executed and outfitted to support the vast majority of drivers in security and style. Optional Porsche Carbon Ceramic Brakes add 6-piston front calipers and 350 mm carbon-ceramic discs both front and rear. Simmons was absolutely sensational, pouring in a playoff career-high scoring total while missing only two field goal attempts and two free throws. The system can even harvest energy from the suspension motion. In our real-world testing, both the V-6 and hybrid earned 26 mpg on the highway. Girgensons’ top season thus far was an injury-maligned 2014 season.

The Ram 1500’s base model only has a rearview camera, while the F-150 comes standard with a rearview camera, automatic emergency braking, and pedestrian detection. Although it has been a tough tournament so far for Woods, he does have a chance for redemption Sunday. Lighting up Sharks goalie Antero Niittymäki, Perry proceeded to score a hat-trick to earn his 48th, 49th and 50th goals. But it’s the A7’s Virtual Cockpit and MMI Touch Response infotainment systems that will attract the most attention. This season Gabriel ranks second on the Bears with 63 receptions for 627 yards, both career highs. Overall, North Andover’s Super Bowl appearance marked the 20th time a team coached by a Dubzinski has qualified for a state title game. Limited to 500 examples, the STI’s Type RA suffix stands for Record Attempt, in reference to the highly modified WRX STI Type RA NBR that Subaru has trotted out to set particular lap records at the Nürburgring Nordschleife and other challenging venues. But we’re enjoying our time with the Odyssey, and neither it nor the Pacifica gets much time to cool off between trips. It’s still the sort of car you won’t feel compelled to drive quickly or fast, and sitting as it will in showrooms with hybrid versions of the Camry, Avalon, RAV4, and Highlander (to say nothing of the Prius, which also has a Prime plug-in variant), it continues the normalization of the powertrain type that the automaker helped popularize. However, the Sport model feels livelier, especially when the CVT is in seven-speed manual mode. We couldn’t have been more shocked by the result.

While the aging Ford Edge remains a solidly capable and utterly practical two-row SUV in the mid-size class, the stylish Chevrolet Blazer is, at this price point, a more compelling package for its contemporary execution, stronger performance, and more secure road manners, even though it, too, can feel light on value near the $50K mark. Dynamically it’s well-resolved and wonderfully polished. Stamkos opened the scoring from the left circle off a pass through the slot by Nikita Kucherov during a power play 3 into the game.
Queensland tackles Turnbull over dumping of renewable energy grants

Queensland Labor government to challenge Turnbull over moves to de-fund ARENA and remove its grants-making ability, saying it would put a halt to plans for large scale solar farms in the state.

By Giles Parkinson. Posted on 31 March 2016, updated 20 July 2016.

The Queensland Labor government says it plans to tackle the Turnbull Coalition government over plans to remove grant funding for large scale renewable energy investments, saying it threatens to hold back the nascent large scale solar sector in the state. Energy minister Mark Bailey says premier Annastacia Palaszczuk intends to raise the issue at the COAG meeting on Friday, and says that large scale solar projects will come to a halt once the grant funding is removed.

The Coalition government last week announced that the Australian Renewable Energy Agency would cease grant funding once its current round of $100 million for large scale solar farms was complete. ARENA will also lose $1.3 billion in unallocated funding – should the Coalition succeed in pushing such legislation through the Senate – and will be relegated to a role of assisting the Clean Energy Finance Corporation in allocating existing CEFC funds under a newly badged “innovation fund.”

Bailey said the Queensland Labor government had held talks with the Australian Solar Council to discuss the impact on large scale solar projects in regional Queensland. Large scale solar will provide a critical component of the Queensland Labor government’s plans to reach 50 per cent renewable energy by 2030. It has announced a “solar 60” project, intending to support 60MW of large scale solar projects in conjunction with ARENA. Ten of 22 solar projects shortlisted by ARENA are based in Queensland, although it is likely that only around half will get grant funding.
Although solar costs have fallen dramatically, the absence of long term off take agreements means that financing is difficult, and the solar industry is concerned that the removal of grant funds will prevent projects going ahead. This is despite Origin Energy announcing on Thursday that it had signed a 15-year power purchase agreement with the 56MW Moree solar farm in NSW. However, this farm has already been constructed, with the help of significant grant funds from ARENA and financing from the CEFC. Bailey said the proposed cuts to solar grant funding was a critical issue for regional Queensland. “The Premier will be looking to raise it at COAG on Friday,” Bailey said in a statement. “Right across regional Queensland there are shortlisted solar projects ready to go, to help transition to a clean energy economy. “ARENA is a key partner of the Palaszczuk Government in delivering renewable energy projects and jobs across the state. If it loses its capability for competitive grant funding then that means renewable energy projects that can deliver jobs in regional Queensland simply won’t happen once this round is complete.” Bailey said “not one” large solar project has been financed since the election of Tony Abbott in 2013. “Things are getting worse under Malcolm Turnbull. I call on the Prime Minister to urgently rethink these massive cuts, and get on board with us turning the Sunshine State into the Solar State.” John Grimes, the head of the Australian Solar Council, said the $1.3 billion cut to renewables funding is “bad news” for Queensland’s economy and for regional jobs. “The Palaszczuk Government has shown national leadership by committing to 50 per cent renewables by 2030, but Malcolm Turnbull has pulled the rug from under Queensland’s economic and environmental future,” Grimes said in the joint statement with Bailey. 
ARENA noted in a statement today applauding the Origin contract with Moree that “not one” of the 22 solar projects on its shortlist would go ahead without grant funding. Ironically, environment minister Greg Hunt said the Moree solar power purchase agreement “demonstrates that a developer can finance and build a large-scale solar plant in Australia without first securing a contract to sell the electricity generated by the plant.” “The agreement is a clear sign that the innovative approaches to financing projects are being developed and deployed since we fixed the Renewable Energy Target after Labor’s shambles.” But ARENA itself pointed out that the Moree solar project would not have been built without the grant funding. And clean energy groups have raised concerns that the ARENA cuts will hold back future solar plants and developments in other technologies. They pointed out that projects such as Carnegie Wave Energy’s world-first multiple array wave energy project off Garden Island would not have proceeded without grant funding. Hunt sought to deflect that criticism last week. When challenged on Sky News last week about the defunding of ARENA’s $1.3 billion, Hunt replied: “The Australian Conservation Foundation, the Investors Group on Climate Change, the Climate Institute, have all welcomed it.” Yes, they welcomed the government’s decision to drop their three-year-old plans to scrap the CEFC, but here’s what they said about the proposal to de-fund ARENA and remove its grants-making ability: The ACF: ““While we welcome this new clean energy fund, the removal of the Australian Renewable Energy Agency’s grant-making function and the reported decision to fund the (new) body with money reserved for the CEFC is disappointing and undermines ARENA’s role.” The Climate Institute said: “It would be a big mistake to lose ARENA’s grant making lever. 
In addition, the government will have to deal with the legislative fact that it should be putting $1.3 billion into that task.”

In a statement issued on Thursday in response to the Queensland-ASC statement, Hunt accused the Palaszczuk Government of being “utterly dishonest”. “We are continuing the large scale solar grants round – in which Queensland has the lion’s share of shortlisted projects.” A spokesman for Bailey said the Queensland government did not dispute that, but was concerned about the lack of future grant funding that may be required. “It’s disappointing that Labor and the Australian Solar Council are misleading the Australian public,” Hunt said. “The changes we have announced will drive innovation and create the jobs of the future, while delivering a financial benefit from the investment of public money.”

Giles Parkinson is founder and editor of Renew Economy, and is also the founder of One Step Off The Grid and founder/editor of The Driven. Giles has been a journalist for 35 years and is a former business and deputy editor of the Australian Financial Review.

Comments:

Reality Bites, 3 years ago: QLD Labor needs to realise that it is one thing to have an aspirational policy to achieve 50% renewables but another thing to actually fund it. QLD Labor has no money but expects the Feds to give money away to benefit QLD. Where is that Labor money tree?

Andy, 3 years ago: crowd-funded would be a solution. Better investment than having it on a bank account. Works everywhere else in the world…

MaxG, 3 years ago: For that you need smart people… which are lacking in Australia. Let’s face it, someone must have voted for the clowns in power.

Chris Fraser, 3 years ago: I think not much of that Hunt tough talk. His group hug on the floor of the House after removing a price on carbon emissions is his legacy.

Rob G, 3 years ago: Still having trouble believing anything Hunt says…

BsrKr11, 3 years ago: how shocking is it that the Liberals are doing this? I mean it is so predictable at this stage you could bet your house on more of it happening… Not shocking at all… it was very predictable, and the LNP never made a secret out of it. In other words, you and I are in agreement….

lin, 3 years ago: “Annastacia Palaszczuk intends to raise the issue at the COAG meeting on Friday, and says that large scale solar projects will come to a halt once the grant funding is removed.” For Ghunt, Talcum and the gang, this would be Mission Accomplished, would it not?

Michael Rynn, 3 years ago: Watch the pea under the cups. Where will it go? Watch how the party of Dirty Big Coal Codgers try to bamboozle the public, and nobble the renewable competition, and call it innovation. They give several times these amounts of money free to the coal corporations every year as tax subsidies and infrastructure money. Naturally the coal and CSG people return a small part of these taxpayer funded favours as large and frequent donations to the big political parties. Australian government is about as fossil-fuel corrupted as it wants to be. This is not a renewable government.

Liz Veitch, 3 years ago: This Liberal Government has profiled its policies as heralding a new focus on innovation, yet by withdrawing ARENA’s discretionary grant funding, it is stifling the projects that are best placed to spark that innovation.
Signal Processing for Radio Astronomy

van der Veen, A.J. (TU Delft Circuits and Systems); Wijnholds, S.J. (Netherlands Institute for Radio Astronomy, ASTRON); Mouri Sardarabadi, A. (University of Groningen)
Editors: Bhattacharyya, S.; Deprettere, E.; Leupers, R.; Takala, J.

Radio astronomy is known for its very large telescope dishes but is currently making a transition towards the use of a large number of small antennas. For example, the Low Frequency Array, commissioned in 2010, uses about 50 stations each consisting of 96 low band antennas and 768 or 1536 high band antennas. The low-frequency receiving system for the future Square Kilometre Array is envisaged to initially consist of over 131,000 receiving elements and to be expanded later. These instruments pose interesting array signal processing challenges. To present some aspects, we start by describing how the measured correlation data is traditionally converted into an image, and translate this into an array signal processing framework. This paves the way to describe self-calibration and image reconstruction as estimation problems. Self-calibration of the instrument is required to handle instrumental effects such as the unknown, possibly direction dependent, response of the receiving elements, as well as unknown propagation conditions through the Earth’s troposphere and ionosphere. Array signal processing techniques seem well suited to handle these challenges. Interestingly, image reconstruction, calibration and interference mitigation are often intertwined in radio astronomy, turning this into an area with very challenging signal processing problems.
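The abstract's first step, converting measured correlation data (visibilities) into an image, can be illustrated with a minimal "dirty image" sketch. This is a hedged illustration, not code from the chapter: the baseline coordinates, source position, and image grid are all made up, and the image is formed by the classical direct (matched-filter) Fourier inversion of the visibilities.

```python
import numpy as np

# Simulate noiseless visibilities for a single point source at direction
# cosines (l0, m0). Baseline coordinates (u, v) are arbitrary, in wavelengths.
rng = np.random.default_rng(0)
n_vis = 200
u = rng.uniform(-50.0, 50.0, n_vis)
v = rng.uniform(-50.0, 50.0, n_vis)
l0, m0 = 0.05, -0.03
vis = np.exp(-2j * np.pi * (u * l0 + v * m0))

# "Dirty image" by direct Fourier inversion (a matched filter per pixel):
#   I(l, m) = (1/N) * sum_k V_k * exp(+2j*pi*(u_k*l + v_k*m))
l = np.linspace(-0.1, 0.1, 101)
m = np.linspace(-0.1, 0.1, 101)
phase = l[:, None, None] * u + m[None, :, None] * v       # (101, 101, n_vis)
dirty = ((np.exp(2j * np.pi * phase) * vis).sum(axis=-1) / n_vis).real

# The brightest pixel recovers the source position; sidelobes remain because
# the (u, v) plane is only sparsely sampled.
i, j = np.unravel_index(np.argmax(dirty), dirty.shape)
print(f"peak at l={l[i]:+.3f}, m={m[j]:+.3f}, amplitude={dirty[i, j]:.3f}")
```

Real imagers replace this O(pixels × visibilities) sum with gridding plus an FFT and then deconvolve the sidelobes (e.g. with CLEAN); self-calibration, as the abstract notes, estimates the unknown element responses jointly with the image.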
Handbook of Signal Processing Systems
http://resolver.tudelft.nl/uuid:1d42b311-ef9a-49a9-ad62-263ab94386f7
https://doi.org/10.1007/978-3-319-91734-4_9
© 2019 A.J. van der Veen, S.J. Wijnholds, A. Mouri Sardarabadi
© Research Publication : Vector Biology Journal

Identification and Antibioresistance Characterisation of Culturable Bacteria in the Intestinal Microbiota of Mosquitoes

Team: Microbiota of Insect Vectors
Network: Institut Pasteur de la Guyane
Member: Mathilde Gendrin
Published in Vector Biology Journal, 09 Jan 2018

Yerbanga RS, Aminata Fofana, Mathilde Gendrin, Ibrahim Sangare, Soufiane Sanou, Soumeya Ouangraoua, Somé AF, Aly Drabo, Jacques Simpore, Thierry Lefèvre, Anna Cohuet, George Christophides and Ouédraogo JB. Vector Biol. J. 2018 Jan;2(2)

Background: The bacterial microbiota which colonize the mosquito midgut play an important role in vector-parasite interactions and can consequently modulate the level of malaria transmission. Their characterization may contribute to new strategies for the control of malaria transmission. However, these bacteria may also be eliminated in areas of high antibiotic usage. In this study, we identified candidate paratransgenesis bacteria in the gut of adult female Anopheles in Burkina Faso.

Methods: The guts of 73 semi-field mosquitoes and 28 laboratory-reared mosquitoes from two villages in Burkina Faso were analyzed by conventional in vitro culture techniques to isolate and identify bacteria of the microbiota. 16S gene sequencing was used to confirm the presence of bacteria of paratransgenesis interest. Because antibiotics affect these bacteria, we evaluated in vitro their susceptibility to antibiotics generally used for the treatment of infectious diseases.

Results: In total, eleven genera of bacteria were identified: Pantoea, Sphingomonas, Escherichia, Micrococcus, Staphylococcus, Klebsiella, Serratia, Acinetobacter, Pseudomonas, Citrobacter, Asaia. Among the bacteria isolated, Asaia sp. and Pantoea sp. have already been reported as candidates for paratransgenesis. In addition, we observed pathogenic bacteria such as Escherichia coli, Klebsiella pneumoniae and Pseudomonas luteola. Investigation of the correlation between the bacterial microbiota and malaria infection status showed that mosquitoes engorged with blood containing Plasmodium falciparum contained a higher bacterial load than non-blood-fed mosquitoes. The antibiotic susceptibility test showed that Asaia, Pantoea and Serratia, previously proposed as paratransgenesis candidates, were susceptible to the different antibiotics tested, in contrast to Escherichia coli, which was resistant.

Discussion: Midgut analysis shows that the composition of the bacterial microbiota in wild field mosquitoes exhibits a large variability, in contrast to laboratory-reared mosquitoes. The presence, among the bacteria isolated in our study, of genera already proposed as paratransgenesis candidates in previous studies suggests the possible implementation of this control strategy in Burkina Faso. Nevertheless, our data indicate that an in vivo verification of the stability of these bacteria is needed, as this strategy may be impaired by mass drug administration programs and antibiotic misuse.

https://www.scitechnol.com/peer-review/identification-and-antibioresistancecharacterisation-of-culturablebacteria-in-the-intestinalmicrobiota-of-mosquitoes-sQAS.php?article_id=7075
Published on 09 Jan 2018, modified on 16 May 2018
cc/2019-30/en_middle_0023.json.gz/line1606
__label__cc
0.674862
0.325138
Robert’s Review: Mandao of the Dead (2018) By ZED December 17, 2018 ( 1 ) ★★★ out of ★★★★★ Murder, ghosts, and astral projection! Directed by Scott Dunn. When Hollywood studios talk about making a “low-budget” movie, they’re still probably talking about a movie with a budget of over a million bucks. James Wan’s Saw (2004) was considered low-budget and cost about $1.2 million. John Carpenter’s original Halloween (1978) had a budget of only $325,000, but those were 1978 dollars. If you take inflation into account, both of those movies cost about the same to make. Low-budget in the indie world is a different story. The massive hit, The Blair Witch Project (1999), was made for a scant $60,000 and took the world by storm. Writer/director Scott Dunn and his partner in crime/producer, Gina Gomez, are becoming masters of the art of micro-budget movies. We’re talking more along the lines of Paranormal Activity (2007) and its $15,000 price tag. The moviemaking duo’s sophomore feature film, Mandao of the Dead, cost even less, but you’d never know it. The story revolves around Jay Mandao [Scott Dunn; Schlep (2016)], the underachieving son of a failed cereal executive. Living off the small royalty checks from the one mildly-popular breakfast cereal his dad managed to come up with, Jay gets stuck taking care of his 33-year-old nephew-by-marriage, Jackson [Sean McBride; Schlep (2016)], who’s even more of a slacker than Jay. Right around the time Jay discovers he has a talent for astral projection, Jackson’s disturbed ex-girlfriend Maeve [Marisa Hood; Broken Things (2012)] murders an unfortunate blood bank employee [David Gallegos; 2-Headed Shark Attack (2012)]. Using Jay’s new ability to project his spirit through time as well as space, the boys try to prevent the murder from ever happening. All of the actors in Mandao of the Dead did a great job.
The two brightest lights are Gina Gomez as the constantly-taken-advantage-of Uber driver, Fer, and Sean Liang [TV’s 9-1-1 (2018)] as Jay’s astral projecting cousin, Andy. Scott Dunn sometimes comes across as being pretty rough on Sean McBride’s character, Jackson, but there’s probably a fair bit of resentment built up there. After all, Jackson has been sleeping rent-free in a tent in Jay’s living room for a long time. The dialog flows naturally, for the most part, and the story is humorous and charming. It’s not a particularly speedy tale, though. I suspect a more ruthless editing hand could tighten things up here and there and improve the pacing but, since we’re talking about a movie about two slackers, the easy-going pace didn’t seem entirely out of place. The production quality of this movie is what’s truly impressive. The camera work is well done. The score, while occasionally a tiny bit monotonous, is decent overall. The set dressing and lighting are superb; they have to be to keep the movie looking fresh when it’s very nearly shot entirely in a single location (Jay’s apartment). And the sound quality — a personal pet peeve of mine when done poorly — is excellent regardless of where the scenes are being filmed. Interiors, exteriors, in the car, it’s flawless. Kudos to the sound department! Would I recommend Mandao of the Dead to everyone? Probably not. Micro-budget films are lost on folks who need giant movies with massive explosions and crazy special effects. However, if you’re a fan of indie horror/comedies and enjoy seeing a super tiny budget being pushed to its limit through sheer force of will, this movie is a good time. I, for one, will be keeping an eye on Team Dunn as they continue to hone their craft. Mandao of the Dead is currently available for streaming from Amazon’s Prime Video.
Categories: Reviews Tags: 2018, 2018 Horror Movies, Astral Projection, David Gallegos, Ghosts, Gina Gomez, Mandao of the Dead, Marisa Hood, Micro-budget, Scott Dunn, Sean Liang, Sean McBride
cc/2019-30/en_middle_0023.json.gz/line1624
__label__cc
0.514652
0.485348
Horror Shorts: Treevenge (2008) 🌲🌲🌲🌲🌲 out of 🌲🌲🌲🌲🌲 Directed by Jason Eisener and written by Rob Cotterill and Jason Eisener Ah…the holidays. Yes. Those holidays. The Christmas-y ones. The holidays that fill you full of joy, happiness, togetherness, peace, family…frustration, long lines, anger, resentment, hatred, and, AND, AND….CHRISTMAS TREES! The bane of the past, present, and the future. Commercialism mixed with a toxic cocktail of entitlement, greed, waste, and environmental cynicism and acrimony. Just a plain ol’ dislike for mother earth. As any good horror fan knows, comeuppance is always right around the corner. Horror is full of checks and balances and there’s no more brutish check than mother earth. She watches, she waits, and strikes when needed — and boy does she strike! In the 2008 movie Treevenge, directed by Jason Eisener, mother nature gets her revenge on purposeless consumerism, wasteful practices, and, well, a gaggle of Canadian dolts. Eisener cut his teeth as a writer on the 2007 Tarantino/Rodriguez flick Grindhouse, and you can see the outrageousness he would later bring to 2011’s Hobo with a Shotgun and 2013’s VHS 2. He’s parked rather firmly in a loving look, a freaky feel, and the tacky and squalid side of grindhouse film making. Treevenge begins with a series of Canadian loggers on a Christmas Tree farm hollering, cursing, screaming, and hacking the holy-hell out of every single tree in sight. The trees, confused and panicked, tremble with fear. They mutter to each other in hushed Ewok-like tones and sounds “…what are you doing to us?” and “help me, please help me.” The loggers don’t hear, or choose to ignore, their pathetic cries for help. Listening instead to their own horrible caterwauling “I wish I stayed in school. I WOULDN’T HAVE TO DEAL WITH THIS CRAP!” The trees and the saplings are eventually chopped to the nub and sent off to grimy parking lots throughout suburban Canada.
As the trees are enslaved and subjugated to the grotesque horrors of Christmas, the trees wait and watch…and plot. A pastoral Christmas morning arrives and TREEVENGE is set in motion! Decapitated house cats, tree branch nasal torture, squished babies, Christmas tree stars buried in people’s necks, and a leg decapitation courtesy of a rightfully pissed chainsaw-wielding Christmas tree. The gore, glory, and terror all unfolds at the end of a not-so-quiet cul-de-sac. Needless to say, this little 16-minute short is gruesome grindhouse goodness — head to toe — or rather…stump to crown. Jason Eisener really knows his way around a camera, pieces together a fun yet thought-provoking story, and has the ability to reach into the past and pull the best pieces and parts forward: the title card, Riz Ortolani’s music cribbed straight from the opening theme of 1980’s gore-party Cannibal Holocaust, and even the hyper-realized characters and their hyperactive approach to each and every scene. Treevenge is a present. Plain and simple. This is a present to horror fans and fans of cinema, and who doesn’t love presents under the tree? Um, well, trees don’t — that’s who! Categories: ShortsTags: 2008, Christmas Horror, Christmas movie, gore, gore film, Gory, grindhouse, Grindhouse horror, hobo with a shotgun, horror, horror film, horror movies, Horror Short Film, Horror Short Films, Horror Shorts, jason eisener, killer plants, plant horror, revenge, rob cotterill, Scariest Horror Movies, scariest things horror, Scariest Things Podcast, scary, Terror Evil, Treevenge
cc/2019-30/en_middle_0023.json.gz/line1625
__label__cc
0.712563
0.287437
You are currently browsing the category archive for the ‘Columbus Blue Jackets’ category. WE’RE ALL THINKING IT, BUT I’M NOT GOING TO SAY IT 10.16.2008 in Ales Kotalik, RJ Umberger, Ryan Miller, Thomas Vanek | by Cari | 3 comments I know what you’re all thinking, and that’s only because I’m thinking it, too. Remember how we started off the year two seasons ago, when you-know-what happened? See, I told you I wasn’t going to say it, but as soon as the final buzzer went last night, you know I was thinking it. ANYWAYS. Since I’m helping Kim housesit, we were originally going to have a couple of her friends over to the house to watch the game on the ginormous TV here. Instead, while we were out to pick up our usual LaNova pizza beforehand, my friend Brittany calls me and invites us over to her boyfriend’s house, where we would eventually watch the game with herself, one other girl, and five 20-something, sports-fanatical guys. This could get interesting… During the first intermission, Keith busted out his old-school XBox, which he had some sort of adapter thing that allowed him to play games from all video game systems, including Nintendo NES, so he and Scott sat through the intermission playing Ice Hockey, like 1995, or something ridiculous like that, while Phil tried to get their other TV to work in order to monitor puck drop. I haven’t seen that game since I was 7, when my brother used to pwn me in it. Yikes. But back to the game. After Vanek scored his first goal of the night, Keith wondered what he’d be on pace for for the season, since our friend is always throwing those stats out there. Well, at that point, it would’ve been 109. Scratch that, though, since he netted YET ANOTHER shortie. 136. And Tommy? WAY TO LEAD THE LEAGUE IN GOALS. And at some point, they must’ve shown the Campbell hit on Umberger, because Lucas decided to tell us a story about a friend of ours shopping in a department store around that time. 
He bumped into a little kid, and the kid stares him down, and flatly states, “BUMP INTO ME AGAIN AND I’LL UMBERGER YOU!” Only in Buffalo would you find someone saying something like that, and only in Buffalo would anyone understand it. I really don’t have too much to say that hasn’t been said already, since I didn’t post last night, since we unexpectedly went out for the game. And there isn’t a whole lot of news to go around, EXCEPT THAT PAUL GAUSTAD IS SKATING. Yay. I’ll post tomorrow morning with something good (hopefully), and maybe tomorrow night (probably) after the game. No liveblogging, as Kim and I will be there. =] Oh… My three stars… First Star : Thomas Vanek Your prowess on the PK and the PP is still amazing me. And like I said earlier, you’re #1 in the league and on a ridiculous pace to score 136 goals. Enough said. Second Star : Ryan Miller Nothing surprising here; the guy was solid. He kept the door shut when we needed him to, and that was pretty much all the time, as we only managed to get 18 shots for us. Therefore, the 20 he did stop were extremely important. Plus, it’s good to see him step in after a very early night off and maintain his game. Third Star : Ales Kotalik Again, special teams are key to winning, and with this guy on there, it’ll make a difference in Connolly’s absence. Al’s goal tied it up at 1, and kept us in the game. Until that point, the game had been ho-hum, with the Rangers shutting everything down. This broke the ice and allowed the Sabres to move in and take control. Rangers’ Star : ? I’m not going to lie; I didn’t pay much attention to anyone on that team. So they don’t get a star this time around. Shucks. ‘Til tomorrow. Great game guys!
I don’t necessarily want to go there, but… 10.15.2008 in Jochen Hecht, Michael Peca, Nathan Gerbe, Portland Pirates, Sabres Shortcomings | by Cari | 2 comments Quickly, before I run out the door to go to my Anatomy/Physiology Lab (which, by the way, is 2.5 hours, and the lab literally takes me less than 1; ridiculous), I just want to say a few things that I feel need to be said, some of which could be a bit controversial (I said a bit; that could be a stretch): There is absolutely nothing controversial about this gorgeous German. Anyways, the Sabres really need to stop giving me bad news while driving at high speeds. Last night, Kim and I drove up to the outlet mall in Niagara Falls, but then decided to go on a little adventure. So, whilst driving down the 219 to Boston, I hear my phone vibrate in my door, so I grab it, and seeing “722737” on my screen, I say, “Oh God, what did the Sabres do now?”. Then, while holding the wheel, at about 70 mph, I read: “BREAKING NEWS–Buffalo Sabres GM Darcy Regier announced that Jochen Hecht underwent surgery today on his finger and will miss a couple of weeks. Txt End 2 quit.” Wow. Enough said, pretty much. No, really, though, I’m extremely upset. Then Kim goes, “Well, just think: This could mean a call-up for Gerbe!” And then I thought rationally (why do I always have to do that?), “No, because we have Ellis, and besides, Ruff always calls up whoever had been hot down there, and that’s MARK MANCARI!!!” Which is kinda funny because Ruff said he would be the guy if they do, indeed, decide a filler guy is in order. But then he’d have to clear waivers again, and I’d be a basketcase for the entire 24-hour waiver period. Jochen, ich liebe dich. That’s “I love you” in German, if you can’t remember my story about him hearing me say that to him during practice one day. He literally turned around and looked at me. Who was mortified? This girl.
MICHAEL PECA I’ll be the first to admit that when this guy was a free agent, I wanted the Sabres to pick him up again. I love Mike Peca. True, his game is a fine line between gritty and dirty, but the guy’s smart. He didn’t win the Selke purely by chance, and there was a reason for his being named Captain during his tenure here. And the guy’s just not stupid. That being said, I don’t think whatever happened here (WARNING: Really bad fan video) is worth a 10-game suspension. He admittedly grabbed the ref’s arm, but there is no way in hell you can convince me that he hit a linesman with his stick, as was perceived. Now I realize that there’s this thing in the NHL where you’re not supposed to touch the sacred officials, but (1) they often ignore players’ pleas, especially if the person they’re coming from has a negative reputation of any sort, and (2) how were they supposed to hear anything remotely sounding like a voice when, HELLO! Dallas just scored and 18,532 people are screaming their heads off because, HEY! the Stars just scored on a 2-man advantage! So he grabbed your arm. Big deal. Give him the three games mandated by the automatic contact clause or whatever it’s called and be done with it. Peca is not so stupid as to use excessive force with an official. There was something else I had to say too, but I can’t remember what it was, and I really need to get going to lab… Anyways, since I can’t provide you with anything good to read, go read THN’s article about Nathan Gerbe that appears in the issue my household received yesterday. Actually, I can’t find it, so I’ll post about it later, or tomorrow, or something. But I love it, because the Sabres list him as 5’6″, as do the Pirates, but they (THN) say he’s 5’5″. WHICH MEANS I AM TALLER THAN HIM! That’s exciting shit right there! And speaking of the Pirates, they do Bobblehead Nights, so go vote for the guy to have his very own Bobblehead. Okay, ’til later. Have a nice day, everyone! And, oh yeah, Go Sabes!
cc/2019-30/en_middle_0023.json.gz/line1632
__label__wiki
0.930139
0.930139
How to Convert Steam Flow to Megawatts
By Brian Baer
Steam flow is typically measured in pounds per hour (lb/hr). Steam has a measure of heat that is given in British Thermal Units (Btu) per pound of steam. The heat in steam is also a function of the temperature and pressure of the steam. If the steam flow is known and the duration of flow is also known, then the steam flow can be converted into a measure of power in megawatts. Power plants use steam flow to turn steam turbines, which create electricity. Electricity generation is measured in megawatts.
Determine the heat of the steam flow using the resource listed below. Assume there is 25,000 lb/hr of steam flow at a pressure of 300 pounds per square inch (psi). This gives a heat input (enthalpy of saturated vapor) of 1,203.3 Btu/lb.
Determine the heat input per unit of time by multiplying the steam flow by the heat input (25,000 lb/hr x 1,203.3 Btu/lb), which is 30,082,500 Btu/hr.
Convert the heat input from the steam flow into a unit of power in megawatts. This is done by using the conversion 1 Btu/hr = 2.93e-7 megawatts. Multiply the total heat input of 30,082,500 Btu/hr by 2.93e-7 for a result of approximately 8.81 megawatts.
Unit Conversion: Power Conversion
efunda: Saturated Steam Tables
Brian Baer has been writing since 1982. His work has appeared on Web sites such as eHow, where he specializes in technology, management and business topics. Baer has a Bachelor of Science in chemical engineering from the University of Arkansas and a Master of Business Administration from the University of Alabama, Huntsville.
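The three steps above can be sketched as a short calculation. This is a minimal sketch, not part of the original article; the function name is ours for illustration, the enthalpy value is the article's figure for saturated steam at 300 psi, and the conversion factor is the article's rounded 2.93e-7 MW per Btu/hr:

```python
# Convert a steam flow (lb/hr) at a known enthalpy (Btu/lb) into megawatts.
# Conversion factor used in the article: 1 Btu/hr ≈ 2.93e-7 MW.
BTU_PER_HR_TO_MW = 2.93e-7

def steam_flow_to_megawatts(flow_lb_per_hr, enthalpy_btu_per_lb):
    """Heat input (Btu/hr) = flow (lb/hr) x enthalpy (Btu/lb), then scale to MW."""
    heat_input_btu_per_hr = flow_lb_per_hr * enthalpy_btu_per_lb
    return heat_input_btu_per_hr * BTU_PER_HR_TO_MW

# Worked example: 25,000 lb/hr of 300 psi saturated steam (h_g = 1,203.3 Btu/lb)
power_mw = steam_flow_to_megawatts(25_000, 1_203.3)
print(f"{power_mw:.2f} MW")  # -> 8.81 MW
```

With the slightly more precise factor 2.9307e-7 MW per Btu/hr, the same flow works out to about 8.82 MW, which is why small differences in the last digit are expected.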
cc/2019-30/en_middle_0023.json.gz/line1634
__label__wiki
0.591977
0.591977
Is movie Deadpool's fourth-wall breaking a “mutant power”? In the first Deadpool movie, Deadpool gains his powers as part of attempting to force a mutation. And he does get a mutation: the incredible healing factor. He also gains the ability to break the fourth wall and recognize he's in a comic book movie some time shortly after the mutation. However, neither movie is really clear on whether his ability to see and interact with the audience was a literal mutant power, a side effect of him going insane from the therapy, or just a running joke that isn't bound by logic. marvel x-men-cinematic-universe deadpool deadpool-2016 deadpool-2 TheLethalCarrot GGMG As far as I know, Deadpool wasn't born with any powers, which means he's officially a mutate, not a mutant. So, arguably, he can't have mutant powers, only mutate powers. I know I'm being pedantic here, which is why this isn't an actual answer, just a small clarification for the sake of accuracy. – trlkly Sep 21 '18 at 19:07 Maybe in the movies that fact is not clear, but in the comics IT IS actually a superpower, as is explained in this related question – LudovicoN Sep 21 '18 at 23:41 After reviewing the script again on the advice of a friend offline, it looks like Wade's 4th wall breaking isn't a power. While he has his collar on, he quips the following while turning and looking directly at the camera: Fun fact about the Ice Box... though no one's ever seen it, they keep a monster in the basement. Right next to a huge, steaming bowl of foreshadowing. https://www.springfieldspringfield.co.uk/movie_script.php?movie=deadpool-2
– delinear Sep 21 '18 at 14:42 @delinear Maybe, but he wouldn't know where the camera was at that specific time. – GGMG Sep 21 '18 at 14:59 True, although if he knew he was part of a movie he might reasonably expect that if he did a "monologue to camera" thing, a camera would be there to capture it XD – delinear Sep 21 '18 at 15:02 @delinear It would be fun if he was giving his monologue but he was facing the wrong way. – Arturo Torres Sánchez Sep 21 '18 at 15:47 @ArturoTorresSánchez yeah if they'd filmed it so that whenever he had his collar on, he kept wrongly guessing where the camera was meant to be. That'd be super meta, but definitely in line with the character! – delinear Sep 21 '18 at 16:03 It's not a mutation. It's actually a sign of his madness. Deadpool gained his powers through torture[1], which mentally broke him. Not only is he under the "illusion" (in-universe this is seen as a derangement) that he is a fictional character (his frequent 4th-wall breaks): Ryan Reynolds has told Empire that he's insistent on Deadpool's habit of breaking the fourth wall to carry over from the comics to the big screen. That means, in effect, that Deadpool/Wade Wilson will sometimes address the audience directly, as with Ferris Bueller, say, or the character of Paul in Funny Games. He also suffers from schizophrenia (voices in his head)[2]. And while it could be attributed to the excessive amount of pain he undergoes on a regular basis - his reaction to pain and being mutilated over and over again is not one you would expect from a sane person. [1] In Deadpool 1, Ajax explains that the Weapon X program requires the body to undergo increasing amounts of stress in order to trigger the mutation. [2] This is in the comics, but not depicted in the movies. It is possible that this was tied into the 4th wall breaks as talking to the audience as the "voices" in his head
For example in the first one he has a limited number of bullets left and uses an excessive number to take down one guy who was particularly annoying. He says something like "stupid stupid stupid... worth it", similar to how in the comics each voice gives a different take on events. – user Sep 21 '18 at 10:47 The problem is, he doesn't just believe he's a character in a movie and narrate to an imaginary audience the thoughts in his head, if he did it would be easy to write off. However, he also references accurate information from outside the world of the movie which he has no reasonable way of knowing and which we, the audience, know to be true (like when he asks whether the professor they're going to see is "McAvoy or Stewart", if he was just mentally unstable we'd have to write that off as a ridiculous coincidence, and he does it a lot). – delinear Sep 21 '18 at 14:50 @delinear That's the joke - people in-universe think he's insane, but the audience knows he's right. That's why "illusion" is in quotes in the answer. – IllusiveBrian Sep 21 '18 at 19:28 "Ajax"? Don't you mean Francis? – Steve V. Sep 21 '18 at 22:33 It's how I've always seen it. Going insane can very well involve thinking that you are being watched, and talking to imaginary friends. In Deadpool's case, it just happens to be true. – Misha R Sep 23 '18 at 8:10 Movies are a little more vague on this but if you include comic lore it is much clearer. So I have read a lot of Deadpool comics and I can tell you he is fully aware that he is in a comic book but it is not derived from his mutant powers whatsoever just a side affect of his craziness. At one point he instructs the readers to update his wiki page to show that he was offered to join the X-Men and refused after Storm asks for his help even. If you are looking for a character whose power revolves around manipulating the comic book she is in I would recommend the Gwenpool character. 
The Unbelievable Gwenpool #13 even has Deadpool and Gwen talking about being in each other's books. Brett Johnson Can you provide some direct references to support this? – DavidW Sep 21 '18 at 16:58 Ya it was Deadpool (2012) #36. Deadpool knows he is in a comic. – Brett Johnson Sep 21 '18 at 17:26 @BrettJohnson - I love the tie in with Gwenpool, but this does not "show" that the ability is not related to his superpower and therefore does not technically answer the question. Is there another reference that you know of supporting/proving the idea that either of their wall breaking in the comics is not a superpower? – Odin1806 Sep 21 '18 at 18:04 Gwenpool's powers are centered around her knowing she is in a comic so I would say her superpower is the ability to break the 4th wall. For Deadpool there is just an absence of any lore ever equating his powers to this knowledge, for him breaking the 4th wall is just something he does. So I would have to just say the complete lack of supporting material around it makes me believe it isn't associated with his forced mutation. – Brett Johnson Sep 21 '18 at 18:14 Just more proof Deadpool is insane. – Jamie Clinton Sep 21 '18 at 23:31
cc/2019-30/en_middle_0023.json.gz/line1635
__label__cc
0.629143
0.370857
What would have happened to Spock's 'soul' if he hadn't regenerated on Genesis? In 'The Search for Spock', Sarek makes it perfectly clear that it is Vulcan custom for Kirk to have returned Spock's soul (or katra) that he mistakenly believed Kirk to have been given. At the end of the movie, when the regenerated Spock is returned to Vulcan, it is stated that the ceremony (the 'fal tor pan?'), to rejoin the new Spock with his katra, has pretty much only ever been performed in legend. What, by Vulcan custom, was to have been done with Spock's katra if the unlikely event of his regeneration hadn't taken place? star-trek spock the-search-for-spock SQB johnc It is implied that most Vulcans' katras remain with whatever Vulcan receives them - the katra is treated as a part of the giver's soul, given to others to be remembered by. A piece of them will be with their closest loved one (the intended recipient of the katra) for the rest of that Vulcan's life. This is evidenced by the fact that what Spock regains is NOT complete - he has to relearn much. The seemingly full restoration he has by the end of his run most likely has more to do with his body being regenerated - the combination of his soul fragment and the restored brain (which would still retain his memories, given the regeneration on Genesis) did it. In several of the novels, the soul is transferred to a stone, where it can be mind-touched. Given some of the episodes in the Enterprise series, this approach seems to have been dropped in favor of the passing-along theory. So, either he would have...
- remained in McCoy, driving him insane
- been placed into a stone
- been transferred to someone who could handle it, almost assuredly a Vulcan.
It should be noted that as of Enterprise, it's a forbidden practice kept alive by an underground movement within Vulcan society; by the time of the TOS-Crew Movies, that faction has come to power. Fanon tends to use one or the other.
aramis It's worth noting that, unlike Star Wars novels, Star Trek novels are not canon by default. – Jeff Sep 1 '11 at 13:03 Personally, I think the term 'soul' is wrong here (though that depends on the definition). As far as I understood those scenes, Spock transferred his knowledge, memories and pretty much what makes up his personality into the brain of McCoy. During the ceremony on Vulcan it was transferred back into the brain of Spock (it must be quite a feat to split two personalities and minds). The problem was, McCoy's brain was far too primitive to handle that kind (and amount) of information. That's why he slowly went nuts, and that's most likely also the explanation why Spock had to relearn quite a few things. During the movie we can see that McCoy has problems differentiating between his mind and that of Spock. So I think that McCoy simply would have gone crazy with time, losing his and Spock's minds on the way down the rabbit hole. Now that I think about it, it seems quite logical that Spock was 'less' human after the ceremony. The priestess needed to sort out what belonged to Spock and what belonged to the human host; not knowing that Spock had ...uuuhhhh... acquired quite a few human traits, she simply assumed they belonged to McCoy.
cc/2019-30/en_middle_0023.json.gz/line1636
__label__wiki
0.779956
0.779956
‘Neighbors 2: Sorority Rising’; Available On Digital HD September 6 & On Blu-ray & DVD September 20, 2016 From Universal July 27, 2016 · by bdvdannounce · in Blu-Ray/DVD Announcements, Movies, News. · Universal Pictures Home Entertainment has officially announced and detailed the home entertainment releases of ‘Neighbors 2: Sorority Rising’, which is scheduled to arrive home early on Digital HD September 6, with the Blu-ray Combo Pack and DVD releases to follow on September 20, 2016. Hit the jump to read the full announcement including disc specs, bonus content listings and more. SETH ROGEN AND ZAC EFRON ARE BACK IN NEIGHBORS 2: SORORITY RISING, PACKED WITH HILARIOUS BONUS CONTENT INCLUDING DELETED SCENES, GAG REEL AND MORE! “ONE OF THE BEST MOVIES OF THE YEAR” – FORBES AVAILABLE ON DIGITAL HD SEPTEMBER 6, 2016 BLU-RAY™ COMBO PACK, DVD AND ON DEMAND SEPTEMBER 20, 2016 FROM UNIVERSAL PICTURES HOME ENTERTAINMENT Universal City, California, July 27, 2016 – Just when you thought it couldn’t get more outrageous, Seth Rogen (This Is The End, Pineapple Express) and Zac Efron (The Lucky One, That Awkward Moment) return in the hilarious, no-boundaries comedy, NEIGHBORS 2: SORORITY RISING, coming to Digital HD September 6, 2016 and Blu-ray™ Combo Pack, DVD and On Demand on September 20, 2016, from Universal Pictures Home Entertainment. The follow-up to Neighbors, the explosively funny blockbuster, NEIGHBORS 2: SORORITY RISING features even more hysterically raunchy banter and outrageous antics, but this time the ladies have taken charge. The Blu-ray™, DVD and Digital HD include deleted scenes, gag reel, line-o-rama and more. Now that Mac (Seth Rogen) and Kelly Radner (Rose Byrne) have a second baby on the way, they are ready to make the final move into adulthood: the suburbs.
But just as they thought they’d reclaimed the neighborhood and were safe to sell, they learn that the new occupants next door are a hard-partying, out-of-control sorority, led by Shelby (Chloe Grace Moretz). Mac and Kelly are forced to team up with their charismatic ex-neighbor and now secret weapon, Teddy (Zac Efron), since the ladies of Kappa Nu aren’t going down without a fight. It’s parenthood vs. sisterhood when the new neighbors assert their right to party just as hard as the boys. Director Nicholas Stoller (Neighbors, Forgetting Sarah Marshall) delivers an over-the-top laugh fest with the new wild neighbors. Bring home the party with the riotously funny NEIGHBORS 2: SORORITY RISING that’s “brilliantly funny” – Edward Douglas, Den of Geeks.
BONUS FEATURES on BLU-RAY™ and DVD:
Line-O-Rama
Nu Neighbors – Neighbors is back with a new direction and plenty of new cast members. Director Nick Stoller and Producers Seth Rogen and Evan Goldberg talk about their initial ideas for Sorority Rising, the traps they avoided in making their first sequel, and the big differences between the first film and Neighbors 2.
The Prodigal Bros Return – The not-so-beloved brothers of Delta Psi return… for a day. We hear from Dave Franco, Christopher Mintz-Plasse, Jerrod Carmichael and Zac Efron about their day on set while they catch us up on where they’ve been since the end of Neighbors.
Girls Rule – Neighbors 2 has added a new threat to Mac and Kelly’s household in the form of the Kappa Nu women. Learn more about the experiences of the new women on set.
The Ultimate Tailgate – A behind-the-scenes look at the tailgate sequence.
Feature Commentary with Co-writer/director Nicholas Stoller and Producer James Weaver
The Blu-ray™ Combo Pack includes a Blu-ray™, DVD and Digital HD with UltraViolet™. Blu-ray™ unleashes the power of your HDTV and is the best way to watch movies at home, featuring 6X the picture resolution of DVD, exclusive extras and theater-quality surround sound.
DVD offers the flexibility and convenience of playing movies in more places, both at home and away. Digital HD with UltraViolet™ lets you watch movies anywhere, on any device. Users can instantly stream or download movies to watch on iPad®, iPhone®, Android™, smart TVs, connected Blu-ray™ players, game consoles and more.
FILMMAKERS: Cast: Seth Rogen, Zac Efron, Rose Byrne, Chloe Grace Moretz, Dave Franco, Ike Barinholtz Casting By: Francine Maisler Music By: Michael Andrews Music Supervisor: Manish Raval, Tom Wolfe Costume Designer: Leesa Evans Edited By: Zene Baker Production Designer: Theresa Guleserian Cinematographer: Brandon Trost Executive Producers: Nathan Kahane, Joseph Drake, Ted Gidlow, Andrew Jay Cohen, Brendan O’Brien Produced By: Seth Rogen, Evan Goldberg, James Weaver Based on Characters Created By: Andrew Jay Cohen, Brendan O’Brien Written By: Andrew Jay Cohen, Brendan O’Brien, Nicholas Stoller, Evan Goldberg, Seth Rogen Directed By: Nicholas Stoller
TECHNICAL INFORMATION BLU-RAY™: Street Date: September 20, 2016 Copyright: 2016 Universal Pictures Home Entertainment Selection Number: 61170551 (US) / 61170711 (CDN) Layers: BD-50 Aspect Ratio: Widescreen 2.40:1 Rating: Rated R for crude sexual content including brief graphic nudity, language throughout, drug use and teen partying Languages/Subtitles: English SDH, Spanish and French Subtitles Sound: English DTS-HD Master Audio 5.1/Dolby Digital 2.0, Spanish and French DTS Digital Surround 5.1 Run Time: 1 Hour 33 Minutes
TECHNICAL INFORMATION DVD: Copyright: 2016 Universal Pictures Home Entertainment Layers: Dual Aspect Ratio: Anamorphic Widescreen 2.40:1 Sound: English Dolby Digital 5.1/Dolby Digital 2.0, Spanish and French Dolby Digital 5.1
About Universal Pictures Home Entertainment Universal Pictures Home Entertainment (UPHE – www.uphe.com) is a unit of Universal Pictures, a division of Universal Studios.
Universal Studios is a part of NBCUniversal, one of the world’s leading media and entertainment companies in the development, production, and marketing of entertainment, news, and information to a global audience. NBCUniversal owns and operates a valuable portfolio of news and entertainment television networks, a premier motion picture company, significant television production operations, a leading television stations group, world-renowned theme parks, and a suite of leading Internet-based businesses. NBCUniversal is a subsidiary of Comcast Corporation. Tags: Announcement, Blu-Ray, Chloë Grace Moretz, Comedy, Neighbors 2, Neighbors 2 Blu-ray, Neighbors 2 DVD, Rose Byrne, September 2016 BD Releases, September 2016 Blu-ray Releases, September 2016 Digital Releases, September 2016 DVD Releases, September 2016 Releases, Seth Rogen, Universal, Universal Pictures Home Entertainment, Zac Efron
cc/2019-30/en_middle_0023.json.gz/line1637
__label__wiki
0.992692
0.992692
Minor League Baseball Tickets – West Michigan Whitecaps Tickets
Wed Jul 17 · 6:30 pm – West Michigan Whitecaps at Beloit Snappers, Pohlman Field, Beloit, WI
Thu Jul 18 · 6:30 pm – West Michigan Whitecaps at Beloit Snappers, Pohlman Field, Beloit, WI
Fri Jul 19 · 6:30 pm – West Michigan Whitecaps at Beloit Snappers, Pohlman Field, Beloit, WI
Sat Jul 20 · 6:35 pm – West Michigan Whitecaps at Wisconsin Timber Rattlers, Fox Cities Stadium, Appleton, WI (tickets from $16)
Sun Jul 21 · 1:05 pm – West Michigan Whitecaps at Wisconsin Timber Rattlers, Fox Cities Stadium, Appleton, WI (tickets from $16)
Mon Jul 22 · 12:05 pm – West Michigan Whitecaps at Wisconsin Timber Rattlers, Fox Cities Stadium, Appleton, WI (tickets from $16)
Wed Jul 24 · 7:05 pm – Lansing Lugnuts at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $16)
Thu Jul 25 · 7:05 pm – Lansing Lugnuts at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $16)
Fri Jul 26 · 7:05 pm – Lansing Lugnuts at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $16)
Sat Jul 27 · 7:05 pm – Great Lakes Loons at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $16)
Sun Jul 28 · 6:00 pm – Great Lakes Loons at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $15)
Mon Jul 29 · 7:05 pm – Great Lakes Loons at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $16)
Tue Jul 30 · 7:05 pm – West Michigan Whitecaps at South Bend Cubs, Four Winds Field at Coveleski Stadium, South Bend, IN (tickets from $7)
Wed Jul 31 · 7:05 pm – West Michigan Whitecaps at South Bend Cubs, Four Winds Field at Coveleski Stadium, South Bend, IN (tickets from $14)
Thu Aug 1 · 7:05 pm – West Michigan Whitecaps at South Bend Cubs, Four Winds Field at Coveleski Stadium, South Bend, IN (tickets from $14)
Fri Aug 2 · 7:35 pm – West Michigan Whitecaps at South Bend Cubs, Four Winds Field at Coveleski Stadium, South Bend, IN (tickets from $23)
Sat Aug 3 · 7:05 pm – Fort Wayne TinCaps at West Michigan Whitecaps, Parkview Field, Fort Wayne, IN (tickets from $10)
Sun Aug 4 · 1:05 pm – Fort Wayne TinCaps at West Michigan Whitecaps, Parkview Field, Fort Wayne, IN (tickets from $10)
Mon Aug 5 · 7:05 pm – Fort Wayne TinCaps at West Michigan Whitecaps, Parkview Field, Fort Wayne, IN (tickets from $7)
Tue Aug 6 · 12:05 pm – Fort Wayne TinCaps at West Michigan Whitecaps, Parkview Field, Fort Wayne, IN (tickets from $8)
Wed Aug 7 · 7:05 pm – Bowling Green Hot Rods at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $16)
Thu Aug 8 · 7:05 pm – Bowling Green Hot Rods at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $16)
Fri Aug 9 · 7:05 pm – Bowling Green Hot Rods at West Michigan Whitecaps, Fifth Third Ballpark, Comstock Park, MI (tickets from $16)
Sat Aug 10 · 7:00 pm – West Michigan Whitecaps at Lake County Captains, Classic Park, Eastlake, OH (tickets from $14)
Sun Aug 11 · 7:00 pm – West Michigan Whitecaps at Lake County Captains, Classic Park, Eastlake, OH (tickets from $14)
cc/2019-30/en_middle_0023.json.gz/line1644
__label__wiki
0.868019
0.868019
Title: Sultans of String. Venue: Fergus Grand Theatre. Performance: Saturday, November 23, 2019 - 8:00 PM. 3x JUNO nominees and Billboard-charting Sultans of String are “an energetic and exciting band with talent to burn!” (Maverick Magazine UK). Thrilling audiences with their genre-hopping passport of Celtic reels, Flamenco, Gypsy-jazz, Arabic and Cuban rhythms, fiery violin dances with kinetic guitar, while funky bass lays down unstoppable grooves. Throughout, acoustic strings meet electronic wizardry to create layers and depth of sound. Since forming 10 years ago, Sultans of String’s music has hit #6 on Billboard’s world music charts, landed on the New York Times Christmas Hits list, and received multiple awards and accolades, including 1st place in the ISC (out of 15,000 entries), 3 Canadian Folk Music Awards, and a Queen’s Diamond Jubilee Medal (for bandleader Chris McKhool). They have also performed/recorded with such luminaries as The Chieftains, Sweet Honey in The Rock, and Ruben Blades. McKhool (Jesse Cook, Pavlo), a Queen’s Diamond Jubilee medal recipient, has an Egyptian-born mother who happened to play piano, teach classical theory, and feed her young son as much Middle Eastern cuisine as she did music lessons. From there, the powerful violinist developed a taste for multi-genre string sounds and found a like-minded crew of all-world enthusiasts. With founding guitarist Kevin Laliberté’s (Jesse Cook) rumba rhythm, their musical synergy created Sultans of String’s signature sound – the intimate and playful relationship between violin and guitar. From this rich foundation, the dynamic duo grew, featuring such amazing musical friends as in-the-pocket bass master Drew Birston (Chantal Kreviazuk). Sultans of String have been criss-crossing North America and the UK for the last several years at many taste-making forums such as NYC’s legendary jazz club Birdland, Boston’s Scullers, and London’s Trafalgar Square. They recently sold out Koerner Hall (Toronto’s Carnegie Hall), and performed with the Toronto, Vancouver, Edmonton, Stratford, Ontario and Niagara Symphony Orchestras, as well as with Kingsfield POPS in Maine and Maryland’s acclaimed Annapolis Symphony. Sultans of String have also been featured on MPBN’s Maine Arts!, and performed live on BBC TV, BBC Radio, Irish National Radio, and the internationally syndicated shows WoodSongs and World Cafe, and on SiriusXM in Washington.
cc/2019-30/en_middle_0023.json.gz/line1646
__label__cc
0.706258
0.293742
Aries and Taurus Compatibility: The Hero and the Lover ⋆ Astromatcha. The union of Zodiac signs Aries and Taurus brings together Mars and Venus: one is fiery and fast, the other is laid-back. Can they last? Aries and Taurus Love Match Compatibility. Possessiveness: Taurus people are famous for their stubbornness, which could trouble an Arian. Taurus is sensual, patient and gentle. Aries is attracted to these qualities; Aries sees Taurus as their rock, totally stable and loyal forever. These Signs are a good balance for each other. Which Zodiac Signs are Compatible with Aries-Taurus Cusps? Aries might sometimes play games with Taurus, playing off that Bullish laziness, or try to push Taurus into making hasty decisions, but the Bull can usually convince the Ram to slow down a bit. Aries brings excitement to the relationship, while Taurus brings security and romance. When Aries wants instant gratification, Taurus can show just how sexy and sensual slow, deliberate movement can be. Venus and Mars go well together; they represent the two necessary halves of the same relationship coin. A Cancerian would not give up on their partner because of the emotions involved; in the case of the Aries-Taurus cusp, however, it would be because not only are they absolutely faithful and committed, they would consider giving up on their partner a form of failure, or defeat. Nonetheless, with the presence of Taurus' careful analysis, understanding, and patience, these differences can be kept under control. With certain adjustments, both have the potential to be happy together. Being the last sign, Pisces possesses the traits of all the other zodiacs. It is ruled by the water element and has mutable qualities, which means it will easily adapt, or take the shape of, any container it is poured into.
The compassion comes from the in-depth understanding that a Piscean has towards others. Therefore, no matter how changeable, dominant, demanding, flustered, caring, intuitive, jealous, or unruly Aries-Taurus cuspians tend to be, a Pisces will 'always' understand. What works great for this pair is the fact that each can provide what the other desires. On the other hand, Aries-Taurians need a partner who can keep up with the instability that tends to crop up within their inner atmosphere and soothe the flares that tend to arise as the outcome. They need someone whom they can take care of, with the ultimate control in their own hands, and a Piscean would not mind that at all. The tendency of mood swings and swimming away to the world of dreams and fantasies can create a little problem. But, if these differences are taken care of, both these individuals will value each other to the core. The imbalance that they tend to feel within, due to the influence of the dual elements and planets that rule them, can very well be balanced when in the company of the sign that is known as "The Balancer or Harmonizer" of the zodiac chart. Another plus point is that Libra, too, is ruled by the planet Venus, just like Taurus, so the passion, love, indulgence, and romance just doubles with this combo. We can't say that Librans are "tameable" in the true sense of the word. They use their own brains to weigh the pros and cons of a given situation and balance out the unevenness by choosing the option that calls for peace and harmony. Having said that, they can become quite manipulative and stubborn at times, but soon enough, they realize that this tendency of theirs is sabotaging the peace and passion of the bond, and immediately they balance themselves again.
They understand that it isn't wise to categorize everything into black and white, and that there also exists a gray shade that is an inevitable part of every personality, even if it is a challenging and bold one such as the Aries-Taurus cusp. Both want the best of life; therefore, this mutual need and zest for life would make them great companions in discovering the unknown roads in the path of life. But irrespective of this, we say that this zodiac sign can be quite a good match for the Aries-Taurus cuspians. Well, first off, when you have two different elements ruling you, the imbalance can be brought to a significant level if you find someone with one of the two elements. What we are trying to say is that "two times" Taurus will successfully overshadow the fierceness and childishness that comes from the Aries influence. While the same can stand true in the case of an Aries partner, the thing is that pairing up with an Aries will be equivalent to adding fuel to the fire, and too much fire can be destructive. What this cusp needs is more of earth, and that too the same bull-owned earth, because a part of this cusp has the very same nurturing energy. What if you have someone who always agrees to one of the two options that you have in mind? Victoria and David Beckham Love Compatibility: Like most Aries women, Victoria is an alpha-female and a great project starter. She is a hardworking, hands-on mother of four children and she is well respected in the fashion community. Born May 2, David Beckham is a famous Taurus athlete who has shown himself to be a dedicated soccer player and doting father to his kids! Taurus Men are hard-working, loyal, down to earth, persistent, stubborn and yet also patient. Taurus in general are known to be finishers of long, boring projects because they have the will-power and endurance to complete tasks. So how does a Taurus Man and Aries Female get along? Very carefully… On one hand they complement each other as far as work life is concerned.
However, Aries is known for starting many things without finishing them, so Taurus partners can help keep Aries accountable for seeing something through to the finish line. David and Victoria Beckham are compatible in a very great career-related sense, and it shows! Having money together is an important part of a Taurus-Aries love match… what about the other aspects of this astrological match?
cc/2019-30/en_middle_0023.json.gz/line1650
__label__wiki
0.50771
0.50771
I needed to replace my original Fatty Patty and searched high and low for one. Found this at Spencers at a great price. Unfortunately the great price meant less product. My original Patty was the perfect BBW: large and comfy. This item from Spencers is just a glimmer of the original. Two of the three holes are OK, not great. The mouth is so small that you can fit your finger in but not any other extremity. The physical size is much less. This doll may be Patty in her very early years, before she developed into the full-sized woman she is today. At approx. two-thirds the original size she should be called "Average Patty". If you truly want a big girl, find the original Fatty Patty and pay the extra bucks to get a true BBW woman! “The main thing we told each other is we don’t want it to look like anything that exists right now, we want it to look like a boutique. We have a ton of plants — it’s very botanic and green. It was very important to us that it was not neon, not plastic. I like the word erotic as very natural and a kind of primally sexual experience, not pink and fake,” explained Kalasz. In this connection we may refer to fornicatory acts effected with artificial imitations of the human body, or of individual parts of that body. There exist true Vaucansons in this province of pornographic technology, clever mechanics who, from rubber and other plastic materials, prepare entire male or female bodies, which, as hommes or dames de voyage, subserve fornicatory purposes. More especially are the genital organs represented in a manner true to nature.
Even the secretion of Bartholin's glands is imitated, by means of a "pneumatic tube" filled with oil. Similarly, by means of fluid and suitable apparatus, the ejaculation of the semen is imitated. Such artificial human beings are actually offered for sale in the catalogue of certain manufacturers of "Parisian rubber articles."[3] Reacting to the ongoing development of "sex robots" or "sexbots",[23] in September 2015, Kathleen Richardson of De Montfort University and Erik Billing of the University of Skövde created the Campaign Against Sex Robots, calling for a ban on the creation of anthropomorphic sex robots.[24][25][26][27] They argue that the introduction of such devices would be socially harmful, and demeaning to women and children.[25] A dildo is a device usually designed for penetration of the vagina, mouth, or anus, and is usually solid and phallic in shape. Some expand this definition to include vibrators. Others exclude penis prosthetic aids, which are known as "extensions". Some include penis-shaped items clearly designed with vaginal penetration in mind, even if they are not true approximations of a penis. Some people include devices designed for anal penetration (butt plugs), while others do not. These devices are often used by people of all genders and sexual orientations, for masturbation or for other sexual activity. We have dildos in all shapes and sizes, for vaginal and anal use. From realistic, flesh-colored dildos to more abstract dildos in all kinds of funky colors and textures, there's a perfect dildo for everyone. If you're a first-time user, consider picking from our selection of small dildos and working your way up. We also have options that fit your budget, from cheap dildos to luxury dildos. This toy is awesome. The shape is intuitive, easy to hold, and provides a steady vibration.
Sometimes, when the tip is directed at your clit it can feel a bit on the pointy side, but basically, this vibe is a winning way to make coming during penetration much, much easier. Better yet? It's endorsed by none other than Alicia Silverstone herself, and was rated the most eco-friendly sex toy for its phthalate-free material and eco-friendly packaging. If it’s simple and rudimentary sex toys you seek, there’s no package better suited to your needs than JimmyJane’s Boy Meets Girl Vibrator Set. In the set, customers will receive JimmyJane’s ICONIC RING: a vibrating cock ring, as well as the ICONIC POCKET: a compact and powerful clitoral companion that can be discreetly packaged in a handbag for “pleasure on the go.” We specialize in helping you find the right products to fulfill your sexual desires. Whether you are looking for self-serve adult toys and products or wish to use a product as a couple, Jack and Jill staff members are experts when it comes to giving you the best advice.
Every purchase from our shop is guaranteed with our discreet shipping policy. Feel free to reach out to us by phone, email, or Facebook chat. We are always here for you. From veiny and realistic to nubby and glass, we have every dildo you can possibly imagine! Thrust, ride, and gyrate your way to a more satisfying orgasm using one of our pleasurable dongs. Whether you are looking for a real-feel toy to keep you company or a tantalizingly textured penis sex toy that can satisfy you in ways you have never known, you can find it here. We also offer a variety of hollow, curved, and vibrating strap-on dildos for men and women to give you the most thrilling sexual experience ever! Here is everything a woman needs to boost her intimate pleasure and sexual wellness, all in one place! All of the sex toys perfect for women that you could ever imagine! This is a complete guide to all of our feminine products. It is designed by women for women so you have quick and easy access to the adult sex toys, romantic wear, and libido-improving sex aids you need to experience greater sexual satisfaction. Browse our gorgeous lingerie that will make you feel like the sexy, confident woman you are or get yourself an amazing personal massager that will give you the most explosive orgasms of your life! In this modern world, there is a tool available to assist with just about every task. You use a knife or a mandoline to slice up food. You use a rake or leaf blower to clean up the leaves in your yard. You use a screwdriver or a power drill to install a screw. Now imagine how ridiculous it would be to perform these tasks without any tools to help you. Why would anyone try to twist a screw into the wall with their fingers when they could use a drill? We like to apply that same logic to the bedroom. With all the tools available to improve your sexual experience, does it really make sense not to take advantage of them?
In February 2008, a federal appeals court overturned a Texas statute banning the sales of dildos and other sexual toys, deeming such a statute as violating the Constitution's 14th Amendment on the right to privacy.[33] The appeals court cited Lawrence v. Texas, where the Supreme Court of the United States in 2003 struck down bans on consensual sex between gay couples, as unconstitutionally aiming at "enforcing a public moral code by restricting private intimate conduct." Similar statutes have been struck down in Kansas and Colorado. Alabama is the only state where a law prohibiting the sale of sex toys remains on the books.[34] Man, I was really disappointed with the We-Vibe 4 Plus. It's known as the No. 1 couple's sex toy because the idea is so ingenious: it's a hands-free vibrator that is inserted into the vagina and then remains in place during penetrative sex. Because it's sort of hooked in there — with one half of it inside, and the other outside, hitting your clit — this toy is intended to stimulate both the inside and outside funparts of your vagina. Even with the Lelo logo, the Mia 2 vibrator looks more like a mascara or lip gloss than it does a sex toy. Not to worry though, despite its under-the-radar appearance, this rechargeable vibe definitely isn't lacking in power or intensity. The flat side of the vibe offers pinpointed vibration and six settings, so you'll definitely find your sweet spot. Plus, if you uncap the toy, there's a USB stick built in for easy charging on the go. No more lost cords! Thinking about making your bedroom scene kinky and erotic? Bondage sex toys for couples are the perfect addition to your sex life. From masks to cuffs, you can find bondage accessories that will bring the kink factor to your bedroom and allow you to indulge in light BDSM play with your partner.
Whether you want to do simple bondage moves or go all out with a full BDSM scene, these couples bondage sex toys and accessories give you that forbidden pleasure you desire. The Hitachi Magic Wand is referred to as the "Cadillac of vibrators" for a reason. It's big, it's powerful, and it's reliable. Originally created to relax muscles, the wand quickly gathered a cult following as a vibrator for its undeniable ability to relax people (especially those with clits) in other ways. While the sex toy works wonders for solo play, it's fun to use in a relationship to help a partner with a vagina reach orgasm during penetrative sex. For those into BDSM, the magic wand is often used by the dominant on the submissive partner to bring the sub to orgasm while they're bound or tied up. ©News Group Newspapers Limited in England No. 679215 Registered office: 1 London Bridge Street, London, SE1 9GF. "The Sun", "Sun", "Sun Online" are registered trademarks or trade names of News Group Newspapers Limited. This service is provided on News Group Newspapers' Limited's Standard Terms and Conditions in accordance with our Privacy & Cookie Policy. To inquire about a licence to reproduce material, visit our Syndication site. View our online Press Pack. For other inquiries, Contact Us. To see all content on The Sun, please use the Site Map. The Sun website is regulated by the Independent Press Standards Organisation (IPSO). Conventionally, many dildos are shaped like a human penis with varying degrees of detail; others are made to resemble the phallus of animals. Not all, however, are fashioned to reproduce the male anatomy meticulously, and dildos come in a wide variety of shapes. They may resemble figures, or simply be practical creations which stimulate more easily than conventional designs. In Japan, many dildos are created to resemble animals or cartoon characters, such as Hello Kitty, so that they may be sold as conventional toys, thus avoiding obscenity laws. Some dildos have textured surfaces to enhance sexual pleasure, and others have macrophallic dimensions including over a dozen inches long.[2] There is a common misconception that vibrators are exclusive to female solo play, but the G-Gasm Delight delivers vibrations of adjustable intensity that can just as easily be used for prostate stimulation and couples play. And unlike the massage wand, the G-Gasm Delight can be used internally without any additional attachments. The G-Gasm Delight boasts a superior design with an ovular tip that lends itself to increased coverage and an inclined neck that maximizes reach. This kit is ideal for beginners because it includes four plug sizes that allow you to start smaller and then explore plugs of increasing size. These plugs are designed with narrow tips that gradually widen for easy insertion and suction cups at the base for hands-free play. They can help to prepare for anal sex, be used to achieve double penetration or even used in masturbation to stimulate anal nerve endings.
However you choose to use them, be sure to pair the Real Vibes Anal Training Kit plugs with plenty of lube. Artificial vaginas, also known as "pocket pussies" or "male masturbators", are tubes made of soft material to simulate sexual intercourse. The material and often textured inner canal are designed to stimulate the penis and induce orgasm. The male masturbators come in many shapes and styles; they can be shaped like vulvas, anuses, mouths, or as non-descriptive holes. Some male masturbators are disposable and some can be washed and used repeatedly. Some are equipped with sex-machine options that work similar to milking machines.[6]
Buy Dildo Online Whether you're looking to ~make a statement~ or are just the sort of person who needs the convenience of literally wearing your vibrator around your neck (we're all that person sometimes, TBH), the Vesper has you covered. It's designed to look like an actual piece of jewelry and fits in the palm of your hand. If you're worried something that tiny won't be enough to get you going, this is one of the strongest clit vibes out there. Maybe it's something about that metal on skin contact, or maybe it's the three settings this little guy operates on, but it's great for a really strong clit orgasm. Get you a vibe that can do both (make you orgasm and double as a sleek piece of jewelry).
Penis Dildo If your partner has a penis, this aptly-named "Clone-A-Willy" lets you create a silicone mold of it — one that vibrates. Ideal for couples in long-distance relationships, the toy comes in a variety of skin tones, so you can get as realistic as you'd like. (You also have the option to make it neon pink and glow-in-the-dark, which…yes.) If creating a penis clone sounds daunting, fear not: According to customer reviews, the Clone-A-Willy comes with detailed and easy to follow instructions. Silver Bullet Vibrators In Japan one can purchase inflatable love pillows or "dakimakura" that are printed with a life-size picture of a porn star or anime character. Other less common novelty love dolls include overweight, intersex, elderly and alien dolls, which are usable for pleasure but also tend to be given as gag gifts. Some inflatable dolls even have the form of children. Some of our best-selling clitoral vibrators include rabbits, bullets and The Womanizer massagers in all shapes and sizes. If you're a frequent traveler, then you can bring racy thrills with you with a discreet vibrator tucked in your luggage. Whether you're a pro looking for a high-tech, multi-speed rabbit vibrator, or a beginner just looking for your first bullet vibrator, Spencer's offers a wide selection that's sure to have something for every experience level.
Luckily, if you’re new and you’re unsure of where to start, we’ve got a selection of vibrators for beginners to help anyone who’s looking to experiment with a vibrator for the first time. Best Clit Vibrators Not only does the new Magic Wand mean you're no longer tethered to a wall, but it's also lighter and quieter while still maintaining the same powerful rumbly vibrations women have been relying on for 47 years. Attachment heads for more direct clitoral stimulation and penetration are also sold separately, so you can always pimp your wand out later like an after-market car stereo. And don't worry, there's still a plug-and-play option if you neglected to charge it but need an orgasm, like, now. The Bullet has earned its reputation for good reason: It's small enough to tuck into a pocket and take anywhere (making it a great option for globe-trotting couples), and it only costs $16 (but is made by one of the highest-quality sex-toy companies in the game, Jimmyjane). Opting for the Bullet is like getting your socks from Ralph Lauren. Sure, you could grab some from Hanes, but the few dollars extra is worth the added comfort. Our wide selection of sex toys has something for everyone, whether you’re getting a toy for the first time or you’re an absolute pro. From a wide selection of vibrators and dildos to strokers and cock rings, our sex toys offer a large variation so that you can find the perfect toy for your needs. Perhaps you want to experiment with a dildo or butt plug, or you crave the pleasurable vibrations from a rabbit vibrator or a vibrating cock ring. Some of our toys are even remote-controlled to provide a totally “hands free” experience—you could even give the remote to your lover and see how much it turns the both of you on. Sexytoy Because I have the best job ever, I decided to test seven luxury couples sex toys over the past year. Some were so good they died on me from overuse, and some simply didn't do it for me at all. 
Of course, this is just the way my body responded — everyone is different. That said, I hope my findings will help you make an informed investment in your multi-orgasmic future. Because while these are all non-refundable and a bit of a splurge, finding the right luxury sex toy is truly priceless. Adam And Eve Catalogue Take fantasy masturbation to new levels of pleasure with one of our high-quality female sex dolls. These are inflatable or non-inflatable realistic love dolls made with high-quality, body-safe materials. Each one has a single-, double- or triple-entry design to simulate lifelike oral, anal or vaginal sex on demand. SexToy carries the super popular Pipedream Extreme Dollz in tons of positions with highly realistic Fanta Flesh openings for an amazingly arousing experience. Worn on the finger, the Frisky Ripples Finger Bang takes foreplay to the next level. This discreet, travel-friendly and powerful accessory can be effective for both genders and is very easy to use: just slip it on your finger and press the power button on the tip. That’s it! The Finger Bang can also be used during intercourse to stimulate the clitoris during penetration. Vr Sex Toy If you're curious about adding BDSM to your relationship, but unsure of where to start, this starter kit is just what you need. It contains wrist and ankle cuffs, a padded leather blindfold, rope for tying one another up, a whip for light spanking, and a tickler for teasing. Just be sure to learn about some BDSM best practices, and then get ready to dip your toes into the world of kink. Sex Machines Sale Many other works of bawdy and satirical English literature of the period deal with the subject. Dildoides: A Burlesque Poem (London, 1706), attributed to Samuel Butler, is a mock lament to a collection of dildos that had been seized and publicly burnt by the authorities. 
Examples of anonymous works include The Bauble, a tale (London, 1721) and Monsieur Thing's Origin: or Seignor D---o's Adventures in London, (London, 1722).[23] In 1746, Henry Fielding wrote The Female Husband: or the surprising history of Mrs Mary, alias Mr. George Hamilton, in which a woman posing as a man uses a dildo. This was a fictionalized account of the story of Mary Hamilton.[24] If your partner has a penis, check out this remote-controlled toy. The Pulse III comes with six different vibration patterns, and is marketed in a unique and inclusive way: It can be used on both a flaccid or erect penis, for those who have erectile dysfunction or mobility issues. Plus, the Duo model is specifically designed for couples, and features an additional motor on the outside to stimulate the receiving partner during penetration. When it comes to adult sex toys, there’s something out there for everyone! Spencer’s knows that each person is unique when it comes to their toy of preference, and that not everyone is going to love the same kind of sex toys. Our adult customers have many sexual interests, which is why we offer favorite sex toys like rabbit or G-spot vibrators, body wands, dildos of all sizes, plus men’s sex toys including strokers, cock rings and penis pumps. As your go-to online sex shop, we make sure to continually add new and exciting adult toys, for men and women alike, so you can have the best sex of your life whenever and wherever. These intimate, adult sex toys are designed to make everything from masturbation to partner sex, bondage, anal sex, and strap-on play all the more pleasurable. To go along with the Afterglow Massage Oil Candle (or just on its own), the Contour M is a great rubbing stone for couples massage. Straight up, sometimes our partners want us to rub their backs, but we're feeling tired and kind of lazy and don’t want to show it. This stone is the savior. 
It cuts down the work, while also giving your partner a strong, enjoyable massage (which probably beats your lazy hands, anyway). Plus, the Contour M holds to body temperature, and is ergonomic. While some people are uncomfortable with the idea of anal sex, it's quite the pleasurable experience for those who enjoy it. Using butt plugs is often done as a secondary stimulation, while you and your partner can focus on your other erogenous areas. If you're new to the world of anal play, then you should consider one of our anal stretching kits, which includes different sized butt plugs for you to use. Start with the smallest size, and work your way up at a comfortable pace. Remember that your anus is not self-lubricating, so we highly recommend using lubricant during anal sex for the most pleasurable experience possible. G Spot Viberators "House Passes Donovan's "CREEPER Act" to Ban Child Sex Dolls". United States House of Representatives. June 13, 2018. Today, the U.S. House of Representatives unanimously passed Congressman Dan Donovan's legislation to help better protect innocent children from predators. The bipartisan Curbing Realistic Exploitative Electronic Pedophilic Robots (CREEPER) Act will ban the importation and transportation of child sex dolls. At the middle market price-range ($100 to approximately $1,000), dolls are made of thicker vinyl or heavy latex without welded seams or a polyurethane and silicone mixture, typically surrounding a foam core. Most have plastic mannequin-style heads and styled wigs, plastic or glass eyes, and occasionally properly moulded hands and feet. Some vinyl dolls can contain water-filled body areas such as the breasts or buttocks. Latex dolls were made in Hungary, China and France but only the French manufacturer Domax now remains in production. Finally, we're getting into some stuff my finicky vag liked. The Lyla 2 can be used in bed, held against your clit, or inserted.
Because it has a remote-control option, you can also take it out on the town for a night of secret sexiness. While I'm sorry to say I haven't been to a club loud enough to try that at yet, the idea of being able to stick this in your panties and have your partner control it while you go dancing does sound fun. Shin Takagi, founder of the company Trottla, manufactures lifelike child sex dolls in the belief that doing so provides a safe and legal outlet for men expressing pedophilic desires.[7][8] This has been disputed by paraphilia researcher Dr. Peter J. Fagan, who argues that contact with the products would likely have a reinforcing effect, increasing the risk of pedophilic action being taken.[8] Since 2013, Australian officials have confiscated imported shipments of juvenile sex dolls legally classified as child exploitation material.[9] The first dildos were made of stone, tar, wood and other materials that could be shaped as penises and that were firm enough to be used as penetrative sex toys. Scientists believe that a 20-centimeter siltstone phallus from the Upper Palaeolithic period 30,000 years ago, found in Hohle Fels Cave near Ulm, Germany, may have been used as a dildo.[10] Dildo-like breadsticks, known as olisbokollikes (sing. olisbokollix),[11] were known in Ancient Greece prior to the 5th century BC.[12] Chinese women in the 15th century used dildos made of lacquered wood with textured surfaces. Nashe's early-1590s work The Choice of Valentines mentions a dildo made from glass.[13] Couples Dildos
2017 Allied Command Operations Military Member of the Year Awards
Supreme Allied Commander Europe, General Curtis M. Scaparrotti, introduces the award ceremony of ACO Military Member of the Year, June 21, 2018 at Allied Joint Force Command Naples.
NAPLES, Italy – Supreme Allied Commander Europe (SACEUR), General Curtis M. Scaparrotti, and Command Senior Enlisted Leader (CSEL), Command Sergeant Major Davor Petek, presented four Allied Command Operations (ACO) military personnel with ACO Military Member of the Year awards on Thursday, June 21, 2018 at Allied Joint Force Command Naples. The recipients this year include British Army Corporal Adam Wilson employed with NATO Communication and Information Group (NCISG), Canadian Army Sergeant Mark Hall employed with Allied Joint Force Command Brunssum (JFCBS), a United States Navy Member employed with the NATO Special Operations Forces (NSHQ) and United States Air Force Captain Amanda Zenner employed with NATO's Airborne Early Warning and Control Force (NAEW&C Force). "Command Sergeant Major Davor Petek and I would like to congratulate the recipients of the 2017 Allied Command Operations Military Member of the Year Award. They have all worked extremely hard and should be very proud of this achievement," said General Scaparrotti. "We enjoyed congratulating them and their families during the ceremony today. I would like to personally thank the supervisors, Command Senior Enlisted Leaders and Commanders for their efforts in highlighting the accomplishments of all our outstanding military personnel." Recipients of this award are nominated by their unit in order to recognise their superior performance and professional excellence. It is only awarded to junior ranks, non-commissioned officers and junior officers for their significant contributions towards the success of Alliance operations.
The Allied Command Operations Military Member of the Year programme was implemented in 2013. Story by SHAPE Public Affairs Office. You can view photos of the award ceremony here.
Rockin' on the River Cruises Shark in the Park Shark App Shark on Alexa Shark Newsletter Shark Schedule Aaron "A-Train" Lapierre Robby Bridges Shark Mobile App Shark on Google Home Shark Club VIP Breakfast Bar 2019 Year of Service Awards Pro Partners Send Feedback & Requests Nikki CruzNikki Cruz Tom Petty’s Daughter Responds to Cause of Death Report: ‘He Is an Immortal Badass’ Streeter Lecka, Getty Images Following the official statement from Tom Petty's family revealing that his death last October was the result of an accidental overdose of pain medication, Petty's daughter AnnaKim has reached out to fans to stress that, in her eyes, her father was not addicted to opioids. As previously reported, Petty's family said he'd been prescribed "various pain medications for a multitude of issues including Fentanyl patches" prior to his death, at least in part to help him battle through a hip fracture during his 40th anniversary tour with the Heartbreakers. Adding that they recognized the news "may spark a further discussion on the opioid crisis" and expressing hope that "in some way this report can save lives," the family's statement shed painful new light on Petty's final days. It was not, however, intended to suggest that Petty had lapsed into addiction before his death. As his daughter AnnaKim wrote in an Instagram post days later, Petty wasn't using his meds recreationally — he was simply doing his best to cope with a crippling injury that was only getting worse, all in order to live up to his touring commitments. "His recent death is tragic, yet he died from doing what he loved and what will continue to keep his spirit alive," reads the post in part. "Touring with a broken hip because he would have it no other way. He loved performing. There are no hypothetical questions I love my dad and feel he is an immortal badass. The amount of pain his hip caused was beyond a normal surgery." 
The post, which you can read in its entirety below, stands as the latest in a series of fierce tributes from AnnaKim to her father — and concludes with a reminder to embrace and express the love that really matters while you still can. "He passed away with his family in a room filled with love. I feel very connected to him," she added. "Give love to everyone you meet ... we are connected by love." Thomas Earl Petty lived a million lifetimes in one. He over came much psychic pain from an abusive childhood by transforming his anger into the greatest rock n roll band ever. My dad in the past openly overcame a crippling drug problem with no shame. His recent death is tragic yet he died from doing what he loved and what will continue to keep his spirit alive. Touring with a broken hip because he would have it no other way. He loved performing. There are no hypothetical questions I love my dad and feel he is an immortal badass. The amount of pain his hip caused was beyond a normal surgery. He is at peace out of painI thank you for respecting my family’s privacy and inviting love during this shocking new chapter. My dad loved his life and left behind so much love in his music for us to share. Invite love listen to Tom Petty. He passed away with his family in a room filled with love. I feel very connected to himgive love to everyone you meet we are connected by love #invitelove much love to you and much love to u dad⚡️ A post shared by Annakim (@annakimwildflower) on Jan 19, 2018 at 5:08pm PST
102.1 & 105.3 The Shark is part of the Loudwire Network, Townsquare Media, Inc. All rights reserved.
Shawn Hitchins entertainer. author. ginger. Ginger Nation The Ginger Spring By shawnhitchins.com on 05/06/2014 05/21/2014 Hello Shawntourage! I know it’s been a long time… but a lot of things have been brewing since taking Edinburgh by storm last August. Of course there was a huge adjustment period after the Festival and I binge watched a lot of Netflix, but soon it was time to get back to work. The winter was all about planning and working towards undertaking the next phase and I’m excited to share some news… The Ginger Spring! In This Update: Hitchins Signs First Television Deal. Sing-a-long-a Grease Screenings in Ontario. Ginger Nation comes to Victoria, BC. 2014 Ginger Pride Walk in Edinburgh GINGER NATION TAKES ON PRIMETIME After months knotted like macramé, I’m happy to announce that this spring I signed my first television deal with DHX Media. It’s a unique production and a series development deal that will bring the live stage version of Ginger Nation to an exciting new audience. Although a lot of details have yet to be released, the newly expanded Team Ginger is working towards a fall filming in Toronto. It will be an incredible experience performing this show in front of a hometown crowd and creating a moment that will be enjoyed and shared with a mass audience. A press release will be issued soon(ish)… please hold. (Team Ginger is unable to comment on this exciting endeavour until everything is perfect and awesome.) All Hail! The Red, Orange & Pale! SING-A-LONG-A GREASE This winter saw many hosting opportunities throughout Ontario with the Sing-a-long-a Sound Of Music and Grease . I’m happy again to be hosting four upcoming Grease screenings in Belleville (May 10 – Empire Theatre) and Toronto (May 16-18, TIFF Bell Lightbox) . Grease is a fun event filled with pink ladies and beehives, join me and let’s sing! For tickets click the hyperlinks above! 
GINGER NATION @ UNO FEST Team Ginger is quickly getting the show back on its feet in time for two shows in Victoria, BC. I’m flying out to the west coast of Canada for the first time and hope to not only rock the stage but also indulge in the beauty of Vancouver Island. Rally the Troops! Join The Ginger Nation! May 30 & 31 @ Uno Fest Click here for show and ticket information. THE GINGER PRIDE WALK Okay! I know this is what a lot of people want to know… Unfortunately, because I dedicated a lot of my resources towards inking a TV deal, I am unable to bring a new show to the Edinburgh Fringe. I plan on returning with a new show in Summer 2015. However, I had such a blast last year… why not fly to Edinburgh to walk with y’all? I’m starting to send query emails to various parties in Edinburgh towards organizing another Ginger Pride Walk. If the walk DOES happen, I’ll need to enlist a Red Army to make the event bigger, better and redder. The Ginger Pride Walk must be owned by the community and so there will be a call for volunteers. The Ginger Pride Walk will be confirmed by July 1. A dusk to dawn singalong at TIFF Bell Lightbox during Scotiabank Nuit Blanche and of course the holiday Sing-a-long-a Sound Of Music – now celebrating its fourth year at TIFF.
Tag: things three
Things Everyone Should Do When Looking To Hire An Attorney
By Johnny Coleman
In 2011, a Texas Family Court Judge was informed that he had a 14-year-old son. In other words, if possible, it is best to list specific laws that back up your not being liable for the attorney fees. Illinois licensed attorney with exceptional research and legal drafting skills needed to assist attorney work in several areas of law, including immigration, bankruptcy, mortgage foreclosure and loan modification, family, criminal, among others. Admittedly, there is such a thing as over-scheduling an attorney's day, and there is also such a thing as an attorney taking on more work than he or she can handle. I could not agree more; the so-called officers of the law who out of sheer negligence let a child rapist/murderer go free because they would not do their jobs and look at the evidence ought to be the ones in jail! There is a powerful protection available under federal law for an employee with Irritable Bowel Syndrome (IBS) who is worried that missed time from work is going to place his or her job in jeopardy – the Americans with Disabilities Act (ADA). Even if an assistant calls to say that the lawyer has your message and will call back as soon as possible. The attorney can fill you in on the results of the pre-sentence investigation, let you know what mood the judge is in and what to expect will be asked of you. Whoever you name to be the one in your power of attorney will be legally acting on your behalf and is called the agent or attorney-in-fact. In 2014, the National Trial Lawyers granted Attorney Fowler the Top 40 Under 40 award.
In a search warrant filed in February 2015, the victim had indicated that she had left Giansante in 2012 "…due to his jealousy and possessiveness towards her", stating that the relationship was unhealthy. In 2006, Attorney Reza Torkzadeh graduated from Thomas Jefferson School of Law with a Juris Doctor degree in Law. The Texas college with an all-black starting 5 defeated an all-white Kentucky team, 72-65.
By Johnny Coleman
The criminal defense attorneys at the Sammis Law Firm in Tampa, Hillsborough County, FL, created this blog to discuss criminal justice issues, statistics and policy. The attorneys of the Cochran Firm have over 30 years of experience in criminal defense law. Our attorneys aggressively defend those clients charged with criminal offenses in Federal, State, Juvenile and Military courts. For the entrapment defense to work, the defendant must not be predisposed to commit the alleged criminal act. Fighting for the best result usually requires hiring an attorney as early in the process as possible, before memories fade and favorable evidence is lost. If you're involved in an incident, go to a defense attorney as quickly as possible. In Ulster County, three men and a sixteen-year-old woman were traveling together from Detroit to New York City. James Sullivan has won many jury trials in criminal and juvenile cases, including on charges of aggravated robbery, aggravated sexual assault of a child, aggravated assault, negligent homicide, burglary of a habitation and others. Defendants charged with CPW2, CPW3 or other felony charges involving weapons occurring within the city of Rochester will be arraigned in Part 5. The PH date will be set.
At Shrager Defense Attorneys, we treat every case as our most important case and will work aggressively to help you have your charges reduced or withdrawn. However tempting it is to sit back and simply exhale after giving a defense summation, neither the defendant nor the defense counsel can afford to have counsel stop working during the prosecutor's summation, which is quite a critical stage of the trial. Incarceration drains not only patient health but the criminal justice till as well. Luis appealed to the Supreme Court arguing that the money was not related to the claims of Medicare Fraud and that by freezing her assets, she was unable to pay for the lawyer that she wished to represent her in the case. The criminal defense lawyer / defense attorney performs a distinct role from the prosecutor, who has the duty of proving in court that the defendants have committed the offence.
Emergency and Escape: Explaining Derogation from Human Rights Treaties International Organization, Vol. 65, p. 673, Fall 2011 35 Pages Posted: 9 Jun 2010 Last revised: 20 Feb 2015 See all articles by Emilie Marie Hafner-Burton Emilie Marie Hafner-Burton UCSD School of Global Policy and Strategy Laurence Helfer Duke University School of Law; University of Copenhagen - iCourts - Centre of Excellence for International Courts Christopher J. Fariss University of Michigan at Ann Arbor - Department of Political Science Date Written: August 19, 2011 Several prominent human rights treaties attempt to minimize violations during emergencies by authorizing states to “derogate” - that is, to suspend certain civil and political liberties - in response to crises. The drafters of these treaties envisioned that international restrictions on derogations and international notification and monitoring mechanisms would limit rights suspensions during emergencies. This article analyzes the behavior of derogating countries using new global datasets of derogations and states of emergency from 1976 to 2007. We argue that derogations are a rational response to domestic political uncertainty. They enable governments facing serious threats to buy time and legal breathing space from voters, courts, and interest groups to confront crises while signaling to these audiences that rights deviations are temporary and lawful. Our findings have implications for the studies of treaty design and flexibility mechanisms and compliance with international human rights agreements. Keywords: International Law, Human Rights, Treaties, Flexibility, Derogation, Compliance, Emergencies, Crises, ICCPR Hafner-Burton, Emilie Marie and Helfer, Laurence and Fariss, Christopher J., Emergency and Escape: Explaining Derogation from Human Rights Treaties (August 19, 2011). International Organization, Vol. 65, p. 673, Fall 2011. 
Available at SSRN: https://ssrn.com/abstract=1622732 Emilie Marie Hafner-Burton (Contact Author) UCSD School of Global Policy and Strategy ( email ) 9500 Gilman Drive La Jolla, CA 92093-0519 HOME PAGE: http://gps.ucsd.edu/ehafner/ Duke University School of Law ( email ) 210 Science Dr. +1-919-613-8573 (Phone) HOME PAGE: http://law.duke.edu/fac/helfer/ University of Copenhagen - iCourts - Centre of Excellence for International Courts ( email ) University of Copenhagen Faculty of Law Karen Blixens Plads 16 Copenhagen S, DK-2300 HOME PAGE: http://jura.ku.dk/icourts/ University of Michigan at Ann Arbor - Department of Political Science ( email )
Posted on April 9, 2013 by wildbow

Tattletale stood at the very edge of the floor, with a twenty-five story drop just in front of her. The wind whipped her hair around her, and she didn’t even have a handhold available. Shatterbird had cleared out all of the window panes, long ago. She lowered her binoculars. “He’s gone. If he was going to pull something off, he’d want to watch and make sure everything went off without a hitch.” “I could have gone with them,” Imp said. “Listened in.” “Not without us knowing their full set of powers,” Tattletale said. Imp folded her arms, pouting, “I thought you were one of the cool ones.” “Othello’s a stranger,” Tattletale said. “I’d think he has an imaginary friend who can mess around with us, but I didn’t see any sign of anyone invisible walking around.” “Isn’t that the point?” Regent asked. “No dust or glass being disturbed, none of that. I might think his ‘friend’ is invisible and intangible, but then what’s the point? Accord tends to have people with good powers. Citrine, only bits I could figure out were that she’s got an offensive power, something with substance, and her focus was in a strange place. She was more focused on places in the room where the strongest powers were clustered, and her focus was fairly indiscriminate beyond that. Either her power wasn’t anything that anyone here would have been able to defend against, like Flechette’s arrows or a controlled version of Scrub’s blasts, or she’s a trump classification.” “What’s that?” Regent asked. “Official classification for capes who can either acquire new powers on the fly,” Tattletale gestured towards Grue, “Have an interaction with other powers that can’t be categorized or they nullify powers.” “She’s powerful, then,” Regent said. “She acts like she’s powerful,” Tattletale said, “And she probably is. But that database of PRT records we had didn’t have anything in it about those two.
I don’t know where he finds those guys, but Accord collects some damn heavy hitters.” Parian broke her spell of silence. “You keep talking like we’re going to fight them.” “Threat assessment,” Tattletale said. She made her way back to her chair, sitting at the long table. “Be stupid not to know what we’re getting into, especially with someone like him.” “Not to mention we’ve gotten in fights with pretty much everyone who ever set up shop in the ‘Bay,” Regent commented. “There’s nothing imminent,” Grue said. “Let’s focus on more immediate problems.” He turned his attention my way. “Me?” I asked. “He’s right. We’ve been so busy preparing for possible fallout that we haven’t had time to discuss this,” Tattletale said. “I’m a non-factor. The damage is done, and it’s a question of the dust settling,” I said, staring down at my gloves. I’d altered some of my costume, but the real adjustments would have to wait until I had time. I’d made up the extra cloth in an open area of my territory I was devoting to the purpose, but hadn’t had time to turn it into something to wear for tonight. Some of my mask, the back compartment of my armor and my gloves were more streamlined. Or less streamlined, depending on how one looked at it. Sharper lines, convex armor panels that flared out more, gloves with more edges for delivering damage if I had to get in a hand to hand fight. I’d only done some of the armor, pieces of my costume that were already battered and worn. My gloves, my mask and the back compartment of my armor tended to take the most abuse. I’d update the rest later. “I’m not sure it’s that simple,” Grue said, his voice quiet. He reached across the table and gripped my hand, squeezing it. “Have we double checked to see what bridges they’ve burned for us? My parents aren’t showing any sign of interference.” “Mom wouldn’t care either way,” Aisha said. 
“She might try to capitalize on the attention with appearances on television if she could get money for it.” “Yeah,” Grue agreed. “My family wouldn’t care,” Tattletale said. “I’d be surprised if they didn’t already know. They’d choose to ignore it, I’d bet. Parian? You’ve covered your bases.” “Most of my family is dead. The ones who aren’t dead already know,” Parian said. She looked out toward the window, at the city lights under the night sky. Tattletale nodded, “Let’s see… Rachel isn’t a problem, not really. Never had a secret identity.” Rachel shrugged. Her attention was on her dogs. They were shrinking, their extra mass sloughing away. She already had Bastard sitting next to her, his fur spiky and wet from the transformation. “And if they tried to come at me through my family, they’d get what they deserved,” Regent said. “Why?” Parian asked. “His dad’s Heartbreaker,” Tattletale said. “Oh. Oh wow.” “Funny thing is,” Regent said, “If you think about it, we might be bigger than Heartbreaker, now. People all over America know who we are, and I’m not sure if Heartbreaker is known that far to the south or west.” “That’s not our focus right now,” Grue said, squeezing my hand. “It’s good that we’re talking about safeguards and damage control, but discussing villains and the rest of America can wait. They came after Skitter while she was out of costume.” “How are you coping?” Tattletale asked, leaning forward over the table. “You were pretty heavy-handed tonight. We discussed it, sure, but I thought you’d at least pretend to play ball with them.” “I didn’t need superpowered intuition to figure out they weren’t going to cooperate no matter what I said,” I replied. “But you were provoking them. Valefor especially. You up for this, with all the other distractions?” “This is what I’ve got left, isn’t it? The good guys decided to play their biggest card. They couldn’t beat Skitter, so they beat Taylor. 
As far as I’m concerned, there’s no reason not to throw myself into this, to deal with both heroes and villains as a full-time thing. I lay down the law, because now I’ve got time to enforce it. I can be stricter with the local villains, back you guys up if they cause trouble, and dedicate the rest of my time to my territory.” “Dangerous road to travel down,” Tattletale said. “You need to rest, to have downtime.” “And do what? Go to a movie? I’m not sure if any theaters are open-” “They are,” Tattletale said. “-And I couldn’t go even if they were. My face is plastered all over the news, and I’ve got a tinker who might be watching every computer system and surveillance camera in the city, because she’s not willing to go against her bosses. I can’t go shopping, can’t leave my territory unless I’m in costume and ready for a fight.” “More time to go after them,” Regent said. “You can’t let this slide.” “I’m not planning to,” I said, standing from my seat. “Hold on,” Grue said, as my hand came free of his grip. “Walk with me,” I said. “All of you. The city may be getting better, but there shouldn’t be lights on in this building, and it’s only a matter of time before one of the local heroes decides to stop by and see why.” “We can take them,” Rachel said, from the rear of the group. “We can, and we will,” I said, entering the stairwell. “On our terms, not theirs.” “There’s enough enemies to fight,” Parian said. She had to hurry around the table to catch up. “We don’t need more.” “I agree,” Grue said. “Not that I don’t understand the need for some response, but you’re talking aggression.” “I’m feeling aggressive,” I said. “I think. I don’t know. Hard to pin it all down.” “Might be better to wait until you have a better idea of what you’re feeling,” Grue said. “It doesn’t matter,” I said, stepping down onto the staircase. “Logically, there’s no choice but to act on this. You heard Valefor. 
The villain community won’t respect us until we answer the PRT, and the so-called good guys won’t have a reason to think twice about doing it again.” “The rest of us aren’t as vulnerable as you are,” Regent said. “Don’t want to sound disrespectful or anything, but we don’t have the same kinds of civilian lives to protect.” “There’re others,” I said. “Part of the reason we uphold these rules is because it sets precedents. Other villains hold to the rules and we benefit, the opposite is true.” “The flip side of it,” Tattletale said, “Is that we’re risking an escalation in conflict.” “I don’t see how they can escalate,” I said. “As I see it, they played the last card they have. The harder we hit them now, the more clear it is to outsiders that the PRT doesn’t have an answer. I can show that it doesn’t bother me, and the effect is the same.” “Doesn’t it, though?” Tattletale asked. “Doesn’t it bother you?” “Yes,” I said. “In terms of me, I don’t know. I can’t say for sure whether it’s justified or not. But they went after my dad.” “I get that,” Grue said, “I’d be pissed if they went after Aisha. God, you know, when I was swallowed up by Echidna, and she was filling my head with all the worst stuff I could think of, revised memories, it-” He stopped, and I paused to glance back up the stairs at him. “Bro?” Aisha asked. He took a second to compose himself, then said, “I get what you’re saying, Taylor. Believe me. I was buried in it. If anyone here knows what it’s like to want to protect people-” “That’s not it,” I cut him off. “It’s not about me wanting to protect my dad from the aftermath of all of this. That’s done, and right now he’s hurting more than he has since my mom died. Some of that’s on me, and some of it’s on the people who sent Defiant and Dragon into the fray. The damage is done.” “And you want to go after the non-capes who made the call?” “Yeah,” I said. “I’m sick of being on the defensive. 
I hate waiting for the other shoe to drop, because there’s always another shoe, and always a bigger threat. Speaking of, what’s your interpretation on the company we had tonight, Tattletale? How do you think they’re going to play this?” “The Ambassadors are on the up and up, as far as I can guess their direction. Accord’s unpredictable, which is kind of ironic. I’d say they’re lower priority.” “They’re going to stick to the deal?” “Until Accord’s neurosis pushes him to break it,” Tattletale said. “Then who’s a higher priority? The Teeth?” “Lots of aggressive powers. Butcher’s at the forefront of it all. Spree has rapid fire duplicate generation, Vex has the ability to fill empty spaces with small, razor-sharp forcefields, Hemorrhagia is a limited hemokinetic with some personal biokinesis, Animos can transform for limited times and packs a power nullification ranged attack while in his other shape. There’s two or three others.” “I’m asking about their goals,” I said. “Any clue what they’re thinking? Are they going to come after us?” “Probably. We seem weak and unbalanced right now, especially with Parian not doing the absolute best job protecting her territory.” “I’m trying,” Parian said. “You’d be doing better if you’d accept help,” Tattletale retorted. “Except you don’t want to do that because you haven’t committed to this.” “I will. I’m still figuring out the more basic stuff you guys figured out ages ago.” “Commitment on a mental level, P. That’s more than just coming to meetings. You don’t have to like us, but respect us, get to know us, trust us and maybe allow for the occasional intimate moment.” Parian snapped her head around to stare at Tattletale, in a way that was rather more dramatic than the statement warranted. “Not that kind of intimate. Sorry hon. Trust me when I say we’re all pretty accepting here, and there’s no reason to lie; none of us girls here bat for the other team.” “I didn’t say anything.” “Of course,” Tattletale said, smiling. 
“But I was talking about letting us see more of the girl behind the mask. Share those vulnerabilities, let us give you a shoulder to cry on.” “I don’t need one,” Parian said, “And that has nothing to do with me defending my territory.” “More than you think,” Tattletale said. She glanced at me, “They’re the type to prey on weakness, and Parian’s capable of only protecting a short section of her perimeter.” “Hire people?” I asked. “Henchmen, mercenaries.” “I don’t want to put innocents in the line of fire,” Parian said. “You don’t want others to suffer if the Teeth come after the people you wanted to protect, either,” I said. “I don’t know what you want me to do. If I call for help, they’ll retreat, and we wind up wasting your time, while leaving me looking and feeling useless.” “There’s an alternative,” I said. “What I was talking about before. Going on the offensive. Only it’s not about just the good guys. I’m talking about targeting our enemies, wiping them out before they hurt us and give us cause.” “That’s dangerous,” Grue said. “You guys keep saying things along those lines,” I responded. “I shouldn’t be so strict with our enemies, I shouldn’t ratchet up my involvement in things, I shouldn’t be aggressive. It’s more dangerous to leave them loose, to always give our enemies the first move.” “The flip side to that coin is that it gives everyone else we deal with less reason to play ball. We need to get other villains to parley if we’re going to seriously hold this territory. The Ambassadors are only step one,” Grue said. “If some other group comes into town and they’re considering joining us, are they going to look at whatever humiliating defeat we visit on the Fallen and feel it’s better to attack us first?” “Escalation,” Parian echoed Tattletale’s earlier statement. I sighed. Atlas had descended from his vantage point above the building, and flew in to land next to me. I ran my hand along his horn. 
“We’re not… the idea here isn’t to attack you, Taylor,” Tattletale said. “Hell, what they did was low. You said it yourself, in that cafeteria. But you’re talking about changing our dynamic, and it’s a dynamic that’s been working. We’ve already been through some high-tension, high-conflict scenarios. A bunch of times when we went days without a chance to breathe. You want to ratchet that up?” “Not entirely,” I said. “If we do this right, if we play this smart, then this should reduce the amount of conflict. I need to know if you guys are on board.” “Yeah,” Rachel said. “I’m in,” Regent replied. Imp nodded. “My- my vote doesn’t count,” Parian said. “I only wanted a show of force, to see if we couldn’t scare the Teeth. Only I think it had the opposite effect, because what you guys were saying about Butcher is spooking me. If you guys want to help me with them, okay. But I don’t want to commit to anything major here, and I can’t tell you guys how to operate, because I’m new to this. Skip my vote.” “Okay,” I said. “Tattletale? Grue?” “I’ve already said my bit,” Tattletale said. “You call the shots in the field, and act as the face of the group, I do the behind the scenes stuff. That’s how we worked it out. I’m kosher with that.” Grue said, “I have one thing to say. Think it over, or keep it in mind. We made it further than most groups do. Some villains set their sights high, and they fall. Others try to eliminate their enemies and get eliminated in turn. Still others set their mind on a goal and they strive for it, only to get worn down along the way.” He paused, glancing away. I didn’t interrupt. Picking the right words? Thinking about himself, as one of the ones who were worn down by circumstance? Or maybe he was thinking about me in that light. “Maybe part of the reason we made it this far was because you weren’t striving for that. When we were villains, you were trying to be the good guy, behind the scenes. 
When we were trying to take out some pretty nightmarish opponents, your focus was on surviving more than it was on attacking. I didn’t get the impression you craved to be team leader or to rule the city, but you took on the job because you knew the alternative would be disaster.” I nodded. Even if I’d wanted to say something in response, I wasn’t sure what I would’ve wanted to say. “Maybe the reason I’m less comfortable with this is that it’s not your usual pattern. I feel like you’re wanting to be aggressive because you’re hurt and angry. There’s nothing to temper it. Think about it, okay? I won’t tell you not to do this. Despite everything I just said, I do trust your instincts, and I’m not sure I trust mine these days.” “Grue-” “I don’t. That’s me being honest. Do what you have to do, but do it with your eyes wide open.” “Okay,” I said. “I’ll try.” I had a sudden impulse to hug him, to hold him as close and as tight as our costumes allowed, my arms tight around his broad back, his muscled arms holding me just as tight. The idea alone made me feel like I might suddenly burst into tears, and I found it startling, inexplicable. I didn’t hug Grue; I wasn’t sure enough about what I was feeling or why, didn’t want to come across as anything but a leader. Leading this team was something I could do. Something concrete, with real dividends. Why had I brought Atlas here? Had I already been thinking about running? Avoiding further contact with these guys? Avoiding Grue? It was disconcerting to think about. Tattletale was staring at me. Could she read what I was experiencing, or get a sense of the emotions that were warring inside me? “Okay,” I said, and I was surprised at how normal I felt. “We’re playing this much like we did against the Nine, only we aren’t waiting for better excuses to do it. Groups of three, one group active at a time, one target at a time.” “Who are we fighting?” Rachel asked. 
“The Fallen, the PRT, and the Teeth.” “And you’re in this group of three for tonight’s mission?” Tattletale asked. “Yeah.” I needed a release, to do something. She glanced at Grue, and I suspected there was some kind of unspoken agreement there. She met my eyes, or the opaque yellow lenses that covered my eyes. “I’ll come.” “You’re ops,” I said, “I thought the whole point of that was that you’d stay behind the scenes and out of trouble.” “I’ll come,” she repeated herself. No argument, no manipulation. Only the statement. “Me too,” Rachel said. “Not sure that’s a good idea,” Tattletale said. “Maybe someone more subtle?” “No,” I said. “It’s fine.” Subtlety wasn’t what I had in mind.

Bentley crashed into the side of the PRT van. The vehicle rocked, but it was set up to be in the field amid villains with superstrength and literally earth-shattering powers. It didn’t tip over. Two more dogs crashed into the side of it, and the thing fell. The PRT officer in body armor fell from the turret at the top, his armor absorbing just enough of the impact that he wasn’t badly hurt. The containment foam sprayers might have been an issue, but none of the uniforms were in a position to use the stuff. I’d come prepared, and each sprayer was either thoroughly snagged on spider silk at the top of the equipped trucks, or the PRT agents who were wearing the portable tanks were bound, blind and under siege by massed bugs. Dovetail flew after Atlas and I, a trail of luminous slivers of light falling in her wake. She was good at maneuvering in such a way that the sparks didn’t fall on the PRT uniforms and heroes below, even with my swarm crawling over her head, shoulders and arms. Where the slivers touched something solid, they ballooned out into what Tattletale had described as soft force fields, encasing the subject.
Anyone could push hard enough against the force fields to break them, even with multiple fields layered over one another, but it impeded movement, and she could hover over a target to keep reinforcing the forcefields until the victim could be smothered in more permanent containment foam. It might have been a crummy power, but she was fast. If she could have thrown the forcefield-generating slivers further than she did when she flung her arms out, she might have had us. It was to my advantage that it was easier to dodge pursuit than to match someone else’s course exactly. Didn’t hurt that she had bugs in her nose, ears and mouth, and that she was being bound by silk, limiting her range of movement with every passing second. She was already unable to use the compact containment foam sprayer she had built into her costume. Nothing I did would stop her from flying, but so long as she was blind and unable to use her arms, I didn’t see her being too much of a threat. She wasn’t making headway on the offense, but retreating wouldn’t change her circumstances. I’d still bind her in silk, blind and choke her. Her costume had a flared collar, and my bugs were crawling inside, between skin and cloth. That attack was as much about the psychological effect as about getting to more skin to inflict bites. I wasn’t sure if it was just me, but her movements were bordering on the frantic, now. No holding back. I only had so many wasps and hornets, but I did what I could. Mosquitoes were a good one. Welts. Leaving a mark. Rachel’s dogs knocked over another one of the vans that had been circled around the PRT headquarters. The van was knocked into the side of the building, bending the bars that were supposed to protect the windows. Each window cracked, with the lines spiderwebbing out between the hexagonal sections, but they didn’t break. Adamant got into close quarters combat with the dog, slashing at it with pieces of his armor and driving the animal back. 
Rachel whistled, shrill, and two dogs tackled him. He delivered one good swipe before the other blindsided him. The disadvantage of forming a full covering of armor was that it limited his peripheral vision. She wasn’t going even two seconds without giving a command. There were five dogs in the field, or four dogs and one young wolf, and many were lacking in serious training, so she managed them with lengths of chain between their collars and Bentley’s, and by giving enough commands that they wouldn’t have time to get creative and go after one of the PRT uniforms. Sere was indoors, along with Triumph. Binding Sere had been a first priority, and I’d achieved it in much the same way. He’d done what he could to target the bugs managing the threads, and to disentangle himself, but time spent on that was time he wasn’t moving outdoors and shooting me or one of the dogs. As with Dovetail, I’d managed to make enough progress that he was more or less out of the fight. She was blind, he was immobile. The other heroes would be arriving soon. I double-checked Dovetail wasn’t in a position to give pursuit, then ventured inside, entering through an open window on the uppermost floor. I felt calm, which was odd, given the scene. Bugs swarmed every employee, from the official heroes to the kids who might have been interns. Some howled in pain, others screamed more out of fear, or yelped as bugs periodically bit them. The bugs gave me a sense of the route I needed to take, my destination. There were offices in the back corner, but I had a sense of where I was going. I’d been here before, when Piggot had been director. I saw the labels on the door. Commissioner. Deputy Director. Director. I opened the last door. Director Tagg. He held a gun, but he didn’t point it my way. There was a woman behind him, using him as a shield. I’d had statements ready, angry remarks, any number of things I could have said to him, to punctuate what my swarm was doing to his assembled employees. 
Statements, maybe, that could have surprised him, woken him up to what he’d done to me. Then I saw the steel in his eyes, the sheer confidence with which he stood in front of the woman… they had matching wedding bands. His wife. I knew in an instant that there wouldn’t be any satisfaction to be had that way. Rather, the word that left my mouth was a quiet, “Why?” His eyes studied me, as though he were making an assessment. His words were gruff, the gravelly burr of a long time smoker. He very deliberately set the gun down on the desk, then replied, “You’re the enemy.” I paused, then pulled off my mask. I was sweating lightly, and my hair was damp around the hairline. The world was tinted slightly blue in a contrast to the coloring of my lenses. “It’s not that simple.” “Has to be. The ones at the top handle the compromising. They assess where the boundaries need to be broken down, which threats are grave enough. My job is to get the criminals off the streets and out of the cities.” “By starting fights in schools.” “Didn’t know it was a school until the capes were already landing,” he replied. “Had to choose, either we let you go, and you were keeping an eye out for trouble from then on, or we push the advantage.” “Putting kids at risk?” “Dragon and Defiant both assured me you wouldn’t risk the students.” I sighed. Probably right. Someone behind me screamed as a group of my hornets flew to him to deliver a series of bites across his face. “Barbaric,” Director Tagg said. “Inflicting pain isn’t the point.” “Seem to be doing a good job of it,” he commented. “There are heroes on their way back from patrol, your guys called them in. But there’s also news teams on the way here. We called those guys in. They’ll find your employees covered in welts, every PRT van damaged or trashed. Your employees won’t be able to get any cars out of the parking lot, so they’ll have to walk, which will make for some photo opportunities. A handful of heroes will be a bit the worse for wear. 
You can try running damage control, but some of it’s bound to hit the news.” “Uh huh,” he said. “I couldn’t let you get off without a response from us.” “Didn’t expect you to.” “This was as mild as I could go,” I said. “I think you know that. I’m not looking to one-up you or perpetuate a feud. I’m doing what I have to, part of the game.” “Game? Little girl, this is a war.” His voice took on a hard edge. I stopped to contemplate that. Rachel was destroying the last containment van, and Tattletale was saying something to her about incoming heroes. I was low on time. “If it is a war, my side’s winning,” I said. “And the world’s worse off for it. You can’t win forever,” he said. I didn’t have a response to that. He must have sensed he had some leverage there. “All of this goes someplace. Do you really see yourself making it five more years without being killed or put in prison?” “I haven’t really thought about it.” “I have. Bad publicity fades with time. So do welts and scabs. Five or ten years from now, provided the world makes it that long, nobody will remember anything except the fact that we fought back. Good publicity will overwrite the bad, carefully chosen words and some favors called in with people in the media will help whitewash any of our mistakes. We’re an institution.” “So you think you automatically win? Or you’re guaranteed to win in the long run?” “No. They didn’t pick me to head this city’s PRT division because I’m a winner, Ms. Taylor. They picked me because I’m a scrapper. I’m a survivor. I’m the type that’s content to get the shit kicked out of me, so long as I give the other guy a bloody nose. I’m a stubborn motherfucker, I won’t be intimidated, and I won’t give up. The last few Directors in Brockton Bay met a bad end, but I’m here to stay.” “You hope.” “I know. You want to fight this system? I’ll make sure it fights back.” “So you want to escalate this? Despite what I said before?” “Not my style. I’m thinking more about pressure. 
I could pull your dad in for questioning every time you pull something, for example. Doesn’t matter where, doesn’t matter who it’s directed at. You or your team do anything that gets an iota of attention, I drag the man into the building, and grill him for a few hours at a time.” I felt a knot in my stomach. “That’s harassment.” I was aware of Tattletale approaching me from behind. She leaned against the doorframe, arms folded. “It’s a war of attrition,” Tagg said. “I’ll find the cracks, I’ll wear down and break each of you. If you’re lucky, then five years from now they’ll remember your names, speaking them in the same breath as they talk about the kid villains who were dumb enough to think they could keep a city for themselves.” “He’s playing you,” Tattletale murmured. “He knows he’s got you on a bad day. Best to just walk away. Remember, the Protectorate hasn’t had a good day against us yet.” I thought about asking him about Dinah, but there wasn’t a point. It was something he could use against me, and I already knew the answer. I approached the desk and turned around the photo frames. The second showed Tagg with his wife and two young women. A family portrait. “You have daughters,” I said. “Two, going to universities halfway across the world.” “And you don’t feel an iota of remorse for hurting a father through his daughter?” “Not one,” he replied, staring me in the eye. “I look at you, and I don’t see a kid, I don’t see a misunderstood hero, a girl, a daughter or any of that. You’re a thug, Taylor Hebert.” A thug. His mindset was all ‘us versus them’. Good guys versus the bad. It wasn’t much, but it served to confirm the conclusion I’d already come to. Dinah had volunteered the information. Whatever else Director Tagg was, he wasn’t the type to abuse a girl who’d been through what Dinah had. “We should go,” Tattletale said. “Rachel’s downstairs with all her dogs, we can run before the reinforcements collapse in on us.” “Yeah,” I said. “Nearly done. 
You, back there. Are you Mrs. Tagg?” The woman stepped a little to one side, out from behind her husband. “I am.” “Visiting him for the night?” “Brought him and his men donuts and coffee. They’ve been working hard.” “Okay,” I said. “And you stand by your husband? You buy this rhetoric?” She set her jaw. “Yes. Absolutely.” I didn’t waste an instant. Every spare bug I had flowed into the room, leaving Director Tagg untouched, while the bugs flowed over the woman en masse. She screamed. He reached for his gun on the desk, and I pulled my hand back. The thread that I’d tied between the trigger guard and my finger yanked the weapon to me. I stopped it from falling off the desk by putting my hand on top of the weapon. Tagg was already reaching for a revolver at his ankle. “Leave it,” I said. He did. Slowly, he straightened. “I’m illustrating a point,” I said. My bugs drifted away from Mrs. Tagg. She was uninjured, without a welt or blemish. She backed into the corner as the bugs loomed between her and her husband. “Not sure why. Doesn’t change my mind in the slightest,” Tagg said. I didn’t respond. The swarm shifted locations and dogpiled him. Stubborn as he professed to be, he started screaming quickly enough. I picked up the gun from the edge of the desk, joining Tattletale. We marched for the exit together, moving at a speed between a walk and a jog, passing by twenty or so PRT employees, each covered in bugs, roaring and squealing their pain and fear to the world as they stumbled blindly and thrashed in futile attempts to fight the bugs off. Nothing venomous, the wasps and hornets weren’t contracting their bodies to squeeze the venom sacs. There was nothing that could put their lives at risk. It was still dramatic enough. “He’s right,” Tattletale commented. “You won’t change his mind with a gesture like that. Sparing his wife.” “Okay,” I replied. I opened a drawer and put Director Tagg’s service weapon inside, while Atlas ferried Tattletale down to the ground floor.
Atlas returned to me, and I took to the air, flying just above Lisa and Rachel and the dogs as we fled the scene. I made a point of leaving every single bug inside the PRT headquarters, to infest it until they had the place exterminated, which would only be another photo opportunity for the media, or to serve as a perpetual reminder as it took weeks and months for all of the bugs to be cleared out. The news teams were already arriving on the scene. No doubt there was a camera following us. I remembered Director Tagg’s threat, to bring my father into custody. Only a threat, going by his wording, but it did make me think about how every activity, every thing I did that brought me into the public consciousness, it would be a little twist of the knife that I’d planted in my dad’s back. Not a good feeling. Maybe the little demonstration I’d done with Tagg’s wife hadn’t been for him. It could just as easily have been me trying to prove something to myself.

This entry was posted in 21.01 and tagged Adamant, Bastard, Bentley, Bitch, Dovetail, Grue, Imp, Parian, Regent, Sere, Tagg, Tattletale, Taylor by wildbow. Bookmark the permalink.

leinadrengaw on April 9, 2013 at 00:02 said:
Every time I try to load this at midnight and fail I panic

anonymus on April 9, 2013 at 09:48 said:
you ninja’d wildbow?!??!? Your fast! And I’m late!

RazorSmile on April 9, 2013 at 00:03 said:

Random Lurker on April 9, 2013 at 00:06 said:
Hoping for a quick typo catch: “She’d talk about me, not you,” Imp said. How can Aisha interrupt herself?

Psycho Gecko on April 9, 2013 at 02:18 said:
Man, her power is good.

AVR on April 9, 2013 at 00:12 said:
“were the strongest powers were clustered” First were should be where.
“Parian snapped her head around to stare at Tattletale, in a way that rather more dramatic than the statement warranted.” Missing a was.
Packbat on April 9, 2013 at 00:12 said:
She was more focused on places in the room were the strongest powers were clustered, and her focus was pretty indiscriminate beyond that.

were -> where?

Extra line after next/previous chapter line at end.

endgame on April 9, 2013 at 00:13 said:
Why is “Subtlety wasn’t what I had in mind.” located past the chapter section?

Hobbes on April 9, 2013 at 00:17 said:
“Subtlety wasn’t what I had in mind” repeats after the “Last/Next Chapter” link.

Was distracted earlier today, little time to write. Fixed typos. Thanks.

At the risk of making (or perhaps for the sole purpose of making) a really bad/obvious pun, you didn’t tag Tagg. Or anyone, actually.

On the content of the chapter: Really loving Grue in this chapter. He’s right — being outed is really screwing with Taylor’s head, putting her in the same kind of space she was in when he had to stop her from trying to fight Burnscar head-to-head. Not as severe — her tactical instincts are still hella good — but she’s not thinking strategically, not really.

chaos985 on April 9, 2013 at 00:20 said:
Whats a “Comfmitment”?

Like a f’ing commitment, but a little mixed up. Spell check failed me.

Icarus on June 14, 2017 at 16:08 said:
A specific type of covfefe.

johnnythexxxiv on August 12, 2017 at 18:45 said:
I love seeing the recent comments pop up every now and again. It’s awesome to see that I’m not the only one on the internet still getting guided to wildbow’s work years after it’s finished. The reference just made it even better

Etraque on March 27, 2018 at 07:24 said:

Pinkhair on April 9, 2013 at 00:25 said:
Hrm, I’m surprised that the PRT fell so easily- but I suppose there was at least some planning and tattletaling skipped over on the trip!

““Comfmitment on a mental level, P. ” Commitment.

“spare bug had” Either an extra space or a missing ‘I’.

“every singly bug” Single.

Typos are embarrassing.
Aunt & her boyfriend were by this weekend, and they left just before noon, then there was an unexpected drop-in by furnace maintenance guys, and that ate up another hour. And I gave it -more- proofreading than usual, about an hour and a half of reading through, spellchecking, before watching the latest episode of Game of Thrones.

And only feedback thus far is typo corrections. Feels bad, man. 😦

No One in Particular on April 9, 2013 at 00:34 said:
Anyone have a plate of virtual cookies? I’m fresh out.

Just realized how ambiguous that was. I meant to give to wildbow. *smacks head, because is one of few people who actually do do that*

randomsoul2 on April 9, 2013 at 00:36 said:
If it helps, I liked it! Every time a new power is introduced or explained, I squee a little. I love the mechanics of the Wormverse.

Oh good. Thank you.

Trusting on April 9, 2013 at 00:51 said:
same here (though all the characters means theres no way i’ll ever get to draw a chibi of everyone ) , and of course I love puzzle pieces and seeing how new information fits in with earlier observations and enigmas . enjoyed the chapter and look forward to watching the various factions start going at it in ernest .

No offence intended. Typo spotting’s quick to do. But yes, interesting, particularly the comments in character on the last interlude.

> No offence intended. Typo spotting’s quick to do.

Plus, it makes it easier for later readers to enjoy the chapter without distraction.

aflamingostolemyparasol on July 30, 2016 at 00:05 said:
And we later readers thank you for that.

Indeed we do

australday on October 26, 2017 at 03:08 said:

Sorry! I actually do feel bad about doing that- but I wouldn’t want to leave the typos until I had something intelligent to say, I suppose, and sometimes that takes a bit longer; it was much easier when I was doing the archive crawl to stick them in with more thoughtful commentary.
Anyway, I did enjoy the chapter, though as I mentioned I am surprised that it went down the way it did- I suppose after all they’ve been through it is fair to give them a ‘And then they kicked ass’ moment and not show every step of the plan, since I’m sure not everyone is a fan of that sort of thing. After all, it is certainly plausible that they could do it even with the PRT expecting something of the sort.

Skitter’s head is an interesting place right now, and it was definitely good to hear it being called out, commented upon- and perhaps a first step to keeping her from doing something bad.

Parian is gonna be a real terror if/when she embraces her abilities… but she is also potentially the weak link until then. Of course, the best of both worlds would be her bringing in Flechette.

I have to wonder what Dinah’s game is- and whether the pieces of paper even survived the inside of Noelle. Would be quite the thing if only Assault knew what they said. I’d not be surprised if she hadn’t learned a lot from Coil, and she’s been through the horrid effects of a multiverse of withdrawal symptoms so she might be able to tough through the occasional faked or perhaps simply massaged statistic these days.

Also, I should perhaps mention that I look forward to each update eagerly. Worm, where every month is NaNoWriMo!

At the conclusion of the Noelle arc, Taylor has the papers and crumples them.

Fake Name on April 9, 2013 at 01:48 said:
Any comments on why you’ve left that dangling for so long?

I did a few times (end of last arc) in a few drafts, but decided it would’ve been distracting/misleading/lost in the jumble.

I mean, as opposed to just letting us read them when Taylor did.

Well, Taylor very specifically didn’t refer to any legible writing being on them. I’ve been through much less punishment and had the contents of my pockets ruined =P

Took longer than expected to write out my comment, otherwise I’d have been there sooner.
*huggledysnugglez*

beyondperformant on April 9, 2013 at 02:56 said:
Even a perfectionist can’t always be perfect.

Interesting direction. Can’t see Taylor being overly satisfied with the results. Tagg is a blunt instrument who doesn’t let facts get in the way of his world view. If he does follow through on his threats to Taylor’s father things are going to go very badly for him.

Rika Covenant on April 9, 2013 at 03:11 said:
Amazing. You can feel the seething hurt that’s built up without an outlet inside Taylor in this chapter, a scared- yes, scared, not for herself but for what it means, how it’ll affect her father even more than before; Not just all the little lies told every time but now the pain of watching /his little girl/ doing what she’s doing- teenager who is coming into herself at the same time as this fuster cluck is going off around her.

She’s trying to redirect that pain and aggression outwards, to take all the pain and sadness that’s been inflicted on her and rejected it, when really she needs to stop and get introspective, and confront the source of the pain inside her.

Thank you for yet another beautiful chapter of Worm, Wildbow.

Never feel embarassed at making typos. I still find them in professionally editted paperbacks and hardcovers from every period of time I’ve read from, from seventy year old printings to current hot-off-the-presses iterations, including reprints. Heck, even the bible isn’t immune to typos (King James Bible, anyone?). Everyone makes mistakes. We’re just helping with the editting process. ^_^

Always helps to have additional eyes on the work. Gives a degree of separation that a first-tier inspection might miss because the brain just glosses over it and fits it in because you wrote it in your mind, but never actually put it to page- Lord knows the number of times I’ve done that, with both written and spoken words.

In short: It’s okay to not be perfect, we like you just the way you are Wildbow.
❤ *hugs*

its a good chapter, first feedback is usually going to be typo corrections because last time i read a chapter all the way though, then posted the noticed typos, i was number 7 to post the same one. also, i like to think on a chapter for a bit before commenting on the content.

Gilgamesh on October 16, 2018 at 05:57 said:
Speaking of game of thrones, I wonder if Worm would ever end up as an HBO series. 🤔

Patrick Reitz (@dreamfarer) on April 9, 2013 at 00:29 said:
Taylor’s emotions are surprising her more and more often it seems. I read that as showing how she’s breaking in some pretty scary ways after all of the insanely traumatic stress she’s been through. I also notice though that her plight isn’t escaping those around her. Brian and Lisa both seem to be very aware of what’s up, maybe even more than Taylor is.

There may be no Endbringers on the immediate horizon but I’m betting there’s some of roughest seas we’ve seen so far lying ahead in the next few weeks.

I agree completely. This is almost exactly what I was about to say. The key, though, is that she’s “breaking,” rather than “broken.” There’s still time for her…I suppose redemption *is* the word I would use, really.

She lived up to Director Tagg’s description of her today.

Anzer'ke on April 9, 2013 at 00:47 said:
I’m not sure how. Considering what they did, the PRT are really no better than a gang of thugs themselves by now. They almost certainly just engaged in all manner of unfortunate tactics regarding those students (especially the ones Clockblocker got trampled), are being purposefully antagonistic (choosing a guy like Tagg, going after secret identities) and most of all are now on the wrong side of the moral line. Not just in the public eye, also literally. The Undersiders did more for Brockton Bay by far.

The PRT are now truly threatened and thus showing some really unpleasant true colours. The Undersiders continue to avoid targeting civilian identities despite certainly being better at it.
While the heroes have stopped even pretending. Which means we now have the PRT relying on and taking advantage of, the Undersiders’ morals. That’s pretty clear villain behaviour, no wonder Parian’s not going for the hero option.

TheAnt on April 9, 2013 at 00:51 said:
A thug. Hmmm, well its not like the PRT can sit on their high horse for too much longer. The institution is about to be forever marred with the whole crimes against humanity thing.

Plus his whole little war mindset could backfire big time. Even in war, there are rules or codes of conduct. So go ahead director Tagg, break the unwritten rules. The villains are going to break them in a heartbeat right afterward. That means going after families, and you guys are the ones who started it.

He is also very wrong in that her act of Mercy changed nothing. If Taylor really wants to beat him, she has to beat him by proving that she is the better person. The students DID favor her over the heroes after all. So if she keeps taking down villains, and refuses to stoop to their level, people are going to notice.

But he is right that it is unrealistic for them to keep doing this for a decade and expect to get away unscathed. As unrealistic as I think the 9 are for never being stopped, even they had to constantly get new members.

After this I can sort of see a birdcage arc. A hero gets lucky and she gets put away just in time for the breakout.

I’m pretty sure that Dragon would arrange for Skitter to not end up in the Birdcage, push come to shove.

Well I really hope Dragon is okay and she lets a few prisoners go. I think her nature prevents her from being predicted by the Smurf, so if she lets Canary/Panacea maybe she can prevent the inevitable escape from being as bad as it could be.

Don on April 9, 2013 at 01:24 said:
It might not be her call, anymore. Remember, she’s… indisposed, aside from the whole quitting the PRT thing.

Can Dragon get someone out of the Birdcage at all?
I thought it was a one way trip in terms of she can send you in, but no one can get you out (barring cheaters like the Simurgh maybe),

From Interlude 15 (where Panacea was sent into the Birdcage) Dragon said “She’ll be transported there and confined for the remainder of her life, barring exceptional circumstance.” That implies there are ways if necessary…

Dragon is the world’s greatest tinker — if she wanted to send in an elevator capable of lifting a passenger out of the Birdcage, she could. The other six hundred inmates might have objections, though.

And Trickster is in there now. If anyone were to go up an elevator, he could swap himself out no problem.

izoughe on April 11, 2017 at 11:36 said:
Not necessarily; he’d need to have line of sight. I’d assume the prisoner in question would be removed as subtly as possible so as to prevent riots, and if Dragon took that line of action, there’s a very good chance Trickster wouldn’t even know about it until after the prisoner was already removed.

If Trickster were aware of this with enough time to spare, he would absolutely make sure he had line of sight and swap himself out.

>Nothing venomous, the wasps and hornets weren’t contracting their bodies to squeeze the venom sacs.

This contradicts something earlier:

>Someone behind me screamed as one of my bullet ants was flown to him to deliver a bite.

Pretty sure bullet ants have venom- and if Taylor is being “nice” enough not to properly sting with wasps and hornets she sure as hell wouldn’t be using bullet ants (except maybe on the Director).

Bullet ants, as far as I’m aware, can’t cause anaphylactic shock. There’s a tribe in the same area where the bullet ants can be found, which harvests the ants and sews them into what basically looks like an oven mit, their pincers facing inwards. It’s a rite of passage to wear these gloves with dozens or hundreds of ants and dance for hours.
http://www.asktheexterminator.com/ants/Bullet_Ant.shtml

> In the off chance you experience a bullet ant sting, have someone take you to the emergency room immediately. You cannot drive because of the pain that will hit you about ten minutes after being stung. Also, take an antihistamine. Many people are allergic to bullet ant stings and may suffer severe allergic reactions, including anaphylactic shock.

My research failed me. Well damn. Ok. Will fix.

Remember, they apparently have to keep those glove things on their arms, with all those ants, and don’t wind up in shock over it. I don’t know how many she’s using, but maybe not any more than that tribe does.

alexanderthesoso on April 9, 2013 at 17:30 said:
simply put, if the venom has protein in it, it can cause anyphylactic shock. Even though most ant venom is mostly formic acid, there is still a bit of protein that some people are allergic to.

Individuo on April 9, 2013 at 00:42 said:
Those ants are in spot N1 in the Pain scale, but as far as i know, no anaphylactic shock.

Fire Ants are a 1.2 on the Schmidt pain scale… Red Carpenter Ants are a 3.0. The pain from them is described as, “A drill excavating an ingrown toenail.”

Bullet Ants are a 4.0+… and are described as such: “Pure, intense, brilliant pain. Like fire-walking over flaming charcoal with a 3-inch rusty nail grinding into your heel.”

And on TOP of that, the pain lasts for hours (It’s colliquially known as the ’24 Hour Ant’) and can cause temporary paralysis and uncontrollable shaking for days preceding the bite event.
http://upload.wikimedia.org/wikipedia/commons/d/d6/Paraponera_clavata_MHNT.jpg is an image of one of the buggers… ;________;

I actually feel sorry for anyone bitten by them, now… And anyone who has been bitten by them in story should probably have been screaming and writhing in pain far louder and longer than otherwise- Possibly even dead of anaphylactic shock, considering the extreme pain from multiple bites would likely cause people to pass out.

Get Tagg in the nuts!

All references to bites should actually be stings; Apparently they’re like some sort of wasp-ant that stings to inject venom- the venom isn’t from their bites. However, their bites are still plenty strong enough (the ants are/were actually used as sutures in India; have the ant bite the wound, twist the head off. pincers keep the wound closed).

I don’t think you (Rika) mean “preceding the bite” unless the pain is so bad that it reaches backwards in time and starts before the ant actually stings you. Mind you in the Wormverse, I could actually see that being possible. Hmm, and now I’m thinking of the Nine’s newest member “Temporal Bullet Ant”.

Patrick, that’s what I was thinking too. Pain so bad it hits you days before you even get stung!

That I didn’t. But that’s what happens when I type at 3:50 AM and am sleepyish. 😛

Clarvel on April 9, 2013 at 00:31 said:
I think its crazy just how unstoppable the undersiders are when they really want to be. This is the second time they’ve assaulted the PRT, and the second time they walk away unhindered. They, with the travelers kept the pressure on the s9 largely by themselves, all the heros and villains they’ve taken down. I think its time for tattletale to start posting some of those secrets they know.

Good to see how wonderfully this latest tactic worked out for the PRT. I suppose their next idea will have to really go for broke if they want to top it.
Maybe they could blackmail their own already tenuous, vitally important allies with deeply personal threats…oh wait a Dragon. Maybe they could continually taunt people who show more moral fibre than… You know I’m struggling to think of a new low. Though I’m sure they’ll find one.

Easy. They try to defend what Cauldron did.

Come on Tattletale use that blackmail material. There are probably plenty of nasty things about them without spilling the beans on Cauldron.

Already done. A lot actually.

Wow they have done that a lot. Unless you mean the mainstream PRT, however given that Cauldron are villains I’d say them publicly defending Cauldron would be completely dropping the act.

Though Tagg was witness to Dragon being blackmailed into doing immoral things with threat of putting a mercenary in charge of a prison, in order to avoid damaging Cauldron…so yeah, they are already doing it.

I really hope Dragon records everything around her. She can try and go above their head.

I suppose Vista could be raped (by them or one of the worse groups in town) and they just decide to blame it on one of the Undersiders so they can issue a kill order. Well, looks like they’re going after Grue or Regent next.

>Rape

Too far, PG. Too far.

Not far enough. They could, say, blame undersiders for killing the family members capes, for example, like saying Regent took them over and used them to attack the hero. Or they could accuse Skitter of murdering a PRT director – oh wait she already did. Or they could go after their families – oh wait they already did.

Rape is so cliche in grimdark setting though I doubt wb would use them – in part due the above examples of things that are easier (Vista doesn’t have to testify) and worse than the action.

Mind you, they already took Dinah pretty soon after she was given back to parents, so they actually would have someone stand in for an Undersider and do that to Vista. It is just that they could do worse.

For PG nothing is too far.
for examples read what he did to holdout.

“Rape, murder, arson, and rape.”
“You said rape twice.”
“I like rape.”

“A little roleplay never goes wrong. Don’t you watch Law and Order?”

“I don’t watch anything BUT Law & Order. Rape is all I know. I just paid for lunch in rape dollars.”

Dear Psycho Gecko,
Imply rape towards a little girl agian and this will be your fate.
http://www.youtube.com/watch?v=R3Mt2E1M6dU
From, Skitter.

I set out to suggest the worst thing possible the PRT could do next and from your responses, it seems I have found it. Rika especially seems to think that would be an absolutely horrible thing for the PRT to do. Also, due to Wildbow’s sensibilities, I doubt that would be done anyway. Rika, I like you and I’m not a very rapey person. Despite the dialogue from Blazing Saddles up there, didn’t even do that to dear little Holdout, may he rest in pieces.

It should be noted that I am not intimidated by Skitter. If we met, I’m sure the encounter would turn out quite fun for me. But since we’re volleying threats about: http://www.youtube.com/watch?v=1ve-iyV5zns

Huggles and snuggles,
Psychopomp Gecko

PS: If you use a Dimension Bomb to destroy a planet connected to another by a portal that’s always open, what do you suppose happens to that second planet too?

Should have used Dinner with the Arkhams, but some things are spoiled by that point. Ah well, I can just go bear hunting and the whole mess will work itself out.

Ooch…that chapter was like a punch in the gut on several levels. I swear, before I got to the part where Taylor revealed she hadn’t hurt the wife, I wasn’t sure what to think about Taylor anymore. Still not sure what to think. Is this just her lashing out, hurt and wild, or a permanent change?
This chapter was uncomfortable, because this is really the first time I’m completely starting to feel like Taylor has started to do things for the wrong reasons as well as the right ones,

After all that Tt/skitter shipping, I found tt’s response to Parian equally as hilarious as parians response when she misunderstood. First time I’ve laughed in a while, so thanks.

What was really interesting about this chapter is I felt like wildbow was talking to us though it. Shooting down skitter and tattletale romance, disproving any ideas Parian has started taking a more active part, telling us what a trump is, discussing skitters feelings right now…it feels like I’m reading a built in explanation that answered some questions as I had them, and others that had been lurking for a while. Which isn’t a bad thing; certainly an interesting writing style.

Looking forward to the next update, as always.

I actually wonder if Tattletale threw in the word “intimate” purely to confirm her gaydar reading. Of course, like a lot of Tt’s moves, it’s not the best long-term thinking — I bet Parian would have been more comfortable coming out on her own terms.

I guess her gaydar is more powerful than anyone else. Will Flechette and Parian be our hero/villain romeo and juliet?

Anyone else think maybe Lisa has a bit of a thing for Taylor? Like an older sister vibe?

Yeah. I thought Lisa’s revelation about why she recruited Taylor made that explicit. It was to save her from suicide- Something she wasn’t able to do for her brother.

That’s what I took from that revelation. But she seems to be more and more taking on a bigger role as the big sister to Taylor, it seems. And I like it.

I was hoping for something less sisterly between Skitter and Tattletale, but at least there was intercourse between Tattle and Parian this time around.

Re: parian, it’s even more interesting when you consider she might actually *be* a lesbian.
Puts that entire conversation in a different light – especially the line, “none of us girls here bat for the other team”.

Ninja’d. AGAIN. I’ve been refreshing before every comment on a typo/whatnot, just in case, and been preceded EVERY TIME. Blarg! My first comment would have been when there were only 2 up.

I caught the story right away, 5 minutes after midnight. Still ninja’d.

This drives home that the Director position is cursed. Does the application form require violent insanity and delusions of grandeur or is it just highly valued?

I’m kinda sad the undersiders didn’t attack when the directors were reviewing the school incident. THAT would have been an interesting meeting.

I present, Tagg’s best line from that meeting. “If you would have cut off the feed, deleted the footage from phones, we would have had time to do damage control.” Yeah. What a wonderful guy.

So to recap: Piggot was a racist (and a mild sociopath, going by some of her actions); Calvert was Coil (a complete sociopath); and Tagg seems to have modeled himself after all the military generals from those old war movies; specifically, all the generals people hate for their aggressive mercilessness and warmongering (and mild sociopathy).

“So, you’d like the new PRT Director position in Brockton Bay huh?”
“Yes. There has been a decided lack of discipline here.”
“Well, the last guy just died. Something about a wound on his knee becoming gangrenous. Imp has started using a bow. How about we go ahead and give you a week or two probation in the spot. If you impress us, or just survive, we’ll keep you.”
“It sounds like I shall have to act quickly to put everything back in order.”
“Alright, so how do you say your last name again?”
“Umbridge. My name is Dolores Umbridge.”

Naeblis on April 9, 2013 at 02:43 said:
The horror! D:

Now thats funny!

….Somehow, the idea doesn’t really seem all that out there.
After all, there are those who use magic- Why not a prim and proper “lady” teacher/tutor/educator figure who models herself after her, with ‘Magic’?

The problem being that the administrators of the PRT program are not admitted to the position unless they have a complete lack of powers. A non-magical Umbridge would be plausible though.

soulpelt on April 9, 2013 at 08:38 said:
Mother of God….. ._.

Pandemonious Ivy on April 9, 2013 at 12:15 said:
Imp using a bow is a novel idea, actually

It gets worse. Tagg used to be the director until…?

Retsam on March 28, 2015 at 16:19 said:
… he took an arrow to the knee?

Did it really take 2 years for someone to comment on that joke?

Psycho Gecko on March 29, 2015 at 21:46 said:
Congratulations! This is the Gecko Automated Comment service! As the first of no-doubt many people to answer correctly, you now win the special rigged lottery numbers for the lotto to be held on the next drawing, April 10th, 2013!

[Image Expired]

Enjoy this fabulous prize!

Wow, was beginning to think you’d gone AWOL. Missing your input over at Pact, guy! (Yes, I’m *still* running behind. Shaddap. xD)

I did a little with Pact, but then stopped at one point. Kept meaning to go back after there were more updates, but then it wound up ending.

Scolopendra on April 9, 2013 at 09:13 said:
I believe there is a trope for this. It’s called the “General Ripper”. Basically, a batshit insane general that pursues conflict for no good reason.

But General Ripper doesn’t care about body count, while I think Tagg does care, at least a little. If just for the PR side of things.

Tagg acknowledged that he became aware that the operation was at a school when contact was made. At any point, he could have ordered a withdrawal based on the off-chance that students may be targeted or injured. He chose to take that risk based on an assumption about Taylor’s personality. To him, the risk of innocent casualties was one worth taking.
No, I don’t think Tagg gives a damn about the body count, so long as he’s a winner. He is also aware that Taylor is teamed up with known killers who don’t really have the same restraint. If Bitch had shown up with her dogs on a rescue attempt, there is practically no way there wouldn’t have been students getting hurt left and right. Again, he went through with this insane plan knowing there was at least a chance of bad things happening. His attitude and lack of regard qualify him as a General Ripper as far as I’m concerned.

Dis on April 9, 2013 at 20:41 said:
Seems oddly appropriate, what with the bug theme:

I watched a snail crawl along the edge of a straight razor. That’s my dream. That’s my nightmare. Crawling, slithering, along the edge of a straight razor … and surviving.

*Accidentally drops a salty fry on the snail, killing it.*

trey on April 9, 2013 at 00:35 said:
I’m really surprised she didn’t start stripping Tagg to the bone with whatever insects she had available. Or give him the Triumph treatment. _Never_ do a foe a small harm.

Zyaode on April 9, 2013 at 00:48 said:
I’m surprised Tattletale didn’t turn his world upside down more than anything else. I wonder if any of the other undersiders left surprises behind for the PRT – this seems far too light for violating the truce

Nonsensical Nonsense on April 9, 2013 at 01:04 said:
I think Tattletale went with Taylor for the sole purpose of making sure she wouldn’t do something stupid, there is no doubt that Tat could read how bad a place Taylor was in and she was focusing more on her than trying to mess with Tagg.

Agreed, emphatically. I think that was the unspoken agreement Grue and Tattletale had.

Yeah, you’re probably right about her having tunnel vision – still, this means this attack accomplished little towards convincing the PRT it was a Bad Idea to play games with the truce so casually.
Nothing of any real value was broken, nobody died (though I didn’t really expect anyone to) and Taylor’s further entrenched Tagg’s opinion of her as a thug by threatening his wife. Tattletale was in the right place to make a big difference with Tagg, but keeping Taylor from doing anything exceptionally stupid was just as important.

Tagg may be in for a rude awakening next time the PRT needs villainous aid if he keeps this up – and given what just happened I think he will. As he said, bruises and scabs heal quickly – all they’ve done is injure his pride and set up for a more spectacular PRT collapse later.

Indigo on April 9, 2013 at 00:37 said:
Taylor is going down a very dark path.

Taylor: “Then I’ll bring fireflies.”

…Mental image of Skitter working in tandem with River Song during one of her Badass moments. ._.

Song? Tam* Though Song would be awesome too.

If she were younger, Summer Glau would make a great Dinah.

Skitter, sweetie, I hate to BUG you, but those beasts are about to eat the Doctor. Have you figured out how those Rigellian Centipedes spit their acid ye… ohh, dear me, that’s wonderful! Lets go cause some property damage!

rmctagg09 on April 9, 2013 at 00:37 said:
Taylor’s not in a good place right now, it’s frightening me.

Mabelode on April 9, 2013 at 00:40 said:
Director Tagg really needs to brush up on the subtle differences between the police force and the military. Also, the strange new concept called ‘rules of engagement’.

I imagine that the PRT and police are highly militarized to say the least. The differences are almost purely cosmetic, probably.

But there needs to be. A military and police are two very different things. Police are there to apprehend the bad guys and kill only as a last resort if necessary to protect themselves or others. A military is trained to kill and defeat the enemy.

I honestly can not believe how shortsighted the guy is.
Does he not remember that they need the truce to fight the Endbringers or that the villains only play by the rules because they do as well?

I can’t help but notice how the Wards considered themselves in a war in that psychologist, can’t remember her name, interlude. I don’t think they started out that way till after Leviathan. I hope they aren’t acting the same way in other cities.

I had something written expressing some of these, but there was a portion it took me to work on, so yours came first.

The PRT is a peace enforcing agency. You don’t pull out a gun and automatically shoot every criminal, then go hunt down his family and friends and shoot any of them that have a history of being criminals.

Plus, his actions have made it to where she can’t go back. Because her identity is out, she can never take this to a peaceful end where she just stops, backs away, and makes something of her life without crime. She’s got nothing left she can do but this kind of stuff. Her life is now on the line because her fates are either the Birdcage or death, so there’s no reason for her to hold back.

I just realised that this is probably a major part of stuff Tattletale was talking about where they catch but don’t unmask, even with really bad guys. As long as the secret identity remains, it’s possible for someone to simply retire. If Skitter wasn’t so noble it would have been entirely believable for her to just take a huge pile of money and go take care of Brian somewhere.

I imagine that this kind of thinking is why even the ruthless stick to the code. It means an enemy can always leave peacefully. Just look at the Pure. They are completely unable to just go be normal (if a tinsy bit, disgustingly bigoted) people. Whereas Purity at least might well have ended up retreating from it all to take care of Aster.

Tagg: “Bah! Good is good, and evil must die, there’s nothing else to it!
Now, if you’ll excuse me, I’m trying to read Les Misérables in one sitting and I keep getting distracted and have to start over, so shut up! Ahh, Inspector Javert, only you understand my view of law and morality.” endgame on April 29, 2013 at 10:13 said: Since it’s been 20 days since I posted this and no one has replied to it, I’ll just explain the joke (spoilers for Les Misérables): Inspector Javert was a lawman with a very strict black-and-white view of morality/the law. At the end of the book (which Tagg hasn’t gotten to yet), Javert realizes that, since Jean Valjean is both a criminal and a good person (sound familiar?), his aforementioned views are wrong. Unable to cope with this, he kills himself. To paraphrase one of Sun Tzu’s first lessons: do not enter into a protracted war. “It’s a war of attrition,” Tagg said. “I’ll find the cracks, I’ll wear down and break each of you.” I believe Sun Tzu also said to always allow your enemy a path of escape, as a cornered foe will fight all the harder. By eliminating Taylor’s civilian identity, and thus her escape, he’s just made it that much harder for himself. Ah, but don’t they also say you should burn your bridges before you cross them? No, wait… Mmm… but aren’t they a paramilitary force acting in concert with the heroes, who ARE a military force? I know many a time it’s been referenced that heroes are used as military by many countries, and it doesn’t seem any different in Brockton/the US. In Hannah’s interlude she made note of the differences between the PRT/Protectorate and actual military parahuman organizations; apparently it was a large enough difference that she prefers being in the Protectorate paramilitary, not military. So they are twice as bad? (For shame, PG, letting me beat you to the pun.) First, I’d like to congratulate those people who guessed “Imago” as the story arc title. Taylor really has changed due to what happened last arc. Whether it’s permanent or a passing thing remains to be seen.
The fact that the rest of the Undersiders sense the problem just underscores it. Tattletale insisted on going with Skitter. It makes sense, knowing the backstory, and her reasons for helping Taylor. She probably doesn’t want to let Taylor out of her sight. How will that affect their friendship, I wonder. I know this is from Taylor’s point of view, but seeing Director Tagg here made me actively dislike him. I know he’s trying his best to take a hard stance on crime, and I can see his reasoning in taking such a cold-hearted appearance toward Taylor. Heck, I bet much of it was bluff and bluster, just like Skitter’s old tactics. But he just reinforced Taylor’s “us versus them” view of society, something Dragon, an A.I., knew to avoid very early on. He might not know the full circumstances regarding Taylor (Coil being his predecessor, Sophia/Shadow Stalker being the source of all the problems), which would make him incompetent, since a good director should look into things. More likely, considering the bonus interlude, he’s actively complicit and unwilling to change things for the better, which means he’s just as bad as the others before him. To be honest, the chapter felt shorter than others, since not much actively occurs. Things are set up, and previous plots haven’t started yet. But that’s the fun of coming back to Worm and reading. three rights make a left. on April 9, 2013 at 00:41 said: Maybe it’s just me, but I feel like Taylor is teetering on the edge of a dark abyss. throwaawy on April 9, 2013 at 00:45 said: definitely not just you. she’s… kinda scary right now. i’m almost getting dissonant serenity vibes from her… Yeah, this arc feels like it is going to build up to something big. I’m guessing there are two ways for it to go. She pushes herself past a moral event horizon and maybe tries to go back, or she is faced with a difficult choice and ultimately chooses the good route no matter how bad the consequences, restoring our faith in her.
But yeah, the Fallen are in for a rude awakening if they mess with her right now. If the chapter names are anything to go by, Skitter is going to have a very hard decision to make, and it is going to change how she acts from then on. I think we’re finally going to see what Dinah meant by Skitter being different in 2 years. Any guesses on how she is different? 1. More hardened/jaded/willing to kill-if an Undersider or her dad was killed I could see this. 2. Has 2nd trigger event-kind of unlikely if Noelle’s stomach didn’t do it, and there’s the possibility she already had it. 3. A true villain-no more grey for her. I think there are at least six of us who have posted expressing similar feelings before one a.m. If I could give Skitter orders right now, I’d tell her to kill an energy drink (electrolytes are important!), get ten hours of sleep, and then write letters to her dad and to Dinah Alcott. Let one of her minions post them and forward any replies back to her. Like Burnscar said, it’s incredibly crappy and anyone else would find it completely pathetic, but it’s the best option she has left to keep the kind of normal human contact Taylor needs. Definitely. I think just the act of putting the words on paper would really help get some of those stressful thoughts and feelings off her chest. Let her explain things to her dad. Yeah, given that she openly recognised that bad communication kills with Weld and MM, it seems she has forgotten her own wisdom. It’s not like it can put him in any more danger. Though if he gets killed because the PRT outed her, we may well see her completely lose it. You forget things when you’re pissed. I also figure her once again failing to stand up for herself adequately, as opposed to how she did against Emma and Dragon, is because she hadn’t thought all this through as much as she’d have liked to. She got pissed, she wrecked some stuff, hadn’t thought it all through. Reminds me of this time with a homeless man, a scooter, and a portapotty.
Okay, now that I’ve actually read it: – Imago, ha! Called it! – “A bunch of times when we went days without a chance to breathe.” Heh. I see what you did there – Man, Tattletale is ridiculous. She’s a grotesquely unfair force-multiplier for any side she’s on – Grue makes a fine consigliere. Almost as fine as Tattletale. They’re like the devil and angel on Skitter’s shoulders … except that the devil is a gorgeous blond and the angel is a skull-faced form obtenebrated in black mist. Oddly fitting for Skitter’s life. – Director Tagg is … well, he’s not wrong — but. – To quote Ambrose Chase of Planetary fame, “This is going to get damn ugly.” Typo Hunt (unless I’ve been ninja’d of course): – “Comfmitment” – you have “Subtlety wasn’t what I had in mind.” repeated at the very end. Either some kind of epilogue or a typo. – “As with Dovetail, I’d managed to make enough progress that he was more or less out of the fight.” Dovetail ain’t no dude, dude(tte) :p Oops, the ‘he’ referred to Sere. Apologies. Also Tagg gives speech about surviving. Taylor laughs “Survive this” Poisonous Bugs swarm Tagg biting him. Jguy on April 9, 2013 at 00:46 said: Is anyone else afraid that Regent is giving Skitter advice and Skitter seems to be agreeing/taking it? Oddly, that doesn’t bother me, but that might be because I see glimmers of a decent human being hiding away in Regent and I think hanging out with Skitter is helping that to emerge. That boy needs a hug. Not just Skitter – Imp, of all people. Nothing like a real peer to give you perspective, want to lift yourself up. I really liked that moment when we found out that Imp had protected a bunch of kids. Combined with her Interlude, it says that she’s nicer than she likes to let on. 
Before the portal appeared I had a hunch that eventually the PRT would cut their losses and abandon the city, what with their constant inability to hold ground against the Undersiders. Now that there is a portal, I keep thinking they are going to have to place Brockton Bay under martial law at some point if they really want to keep that portal secure. I imagine at some point a group is going to make a grab for that portal simply because it’s a strange, unknown thing getting a lot of attention. I think that’s the primary reason that the Undersiders are courting villains like Accord — they want to have more capes in town with a vested interest in maintaining the status quo re: the portal. Reveen on April 9, 2013 at 00:53 said: I really wonder what Miss Militia and her team would think of this joker’s little WAR ON CRIME FUCK YAR boner. Whether they’re sick of the conflict escalating and escalating while they end up traumatized and the city gets trashed in the crossfire. The city that they’re supposed to be protecting. This wasn’t a war until this jackass made it a war by saying the rules don’t matter. Motherfucker is threatening the family of a villain; if Triumph in particular lets this shit fly without a word then he’s pretty much saying to the Undersiders “Yeah, go after my family again. I mean, acceptable losses, right?” I mean, it’s not like wars ever involve things like ceasefires and peace treaties. Nope, fuck it. Total war! Let’s party like it’s 1914! I swear, this guy just makes me like Piggot even more. At least she wasn’t self-righteous while being a limp-dick about it. She would’ve smashed through the school’s ceiling with a gunship, Ride of the Valkyries playing, PRT stormtroopers running in wearing codpieces with DEAL WITH IT written on ’em. Coffee and donuts, for fuck’s sake. People fighting real wars in real warzones don’t get coffee and donuts delivered by their wives, you chucklefuck. I agree completely with that. Even in war there are rules.
Things like how you treat prisoners, use of chemical weapons, no civilian targets, etc. Granted, not everyone follows them, but keeping to the rules lets them have the moral high ground with the world. Plus, let’s face it, if you break the unwritten rules, the villains are not going to hesitate. This stupid stunt might have broken the truce, and invited a rash of attacks on families. I wonder what he will say if the Undersiders remain the only ones not to cross that line. What can Miss Militia do? What can Triumph do? They don’t call the shots. Scrambles on April 9, 2013 at 01:09 said: They can refuse to pull the trigger. Literally, in Miss Militia’s case. They have plenty of power. They can publicly come out against the PRT’s choices and call them out on their stupid decisions. People have to notice all the heroes leaving. There is nothing stopping them from breaking away and being heroes on their own terms. Let’s see him fight his stupid war when all his capes refuse to work with him. Okay, I can see that. Now that is a hilarious thought. Just him and a few others sitting in their office. No capes, no foam-armed troops (cause no Dragon) and no shits given about them. She’s the head of the superhero team, chain of command or no; if she lets this guy go hog wild without at least protest then she’s basically worthless as a leader. I mean, it endangers her team after all. Eddie on April 9, 2013 at 01:40 said: Reveen wins. Douchetagg seems to be driving the PRT directly towards a schism. As mentioned by others, I don’t think many of the capes will stand for it. Especially given Tattletale’s all-singing all-dancing power and the Undersiders’ penchant for surviving. I think the capes will soon realize that Tagg needs to be removed, before he pushes the Undersiders and Skitter to the dark side. If they really wanted to, I don’t doubt that the Undersiders could do just as much damage as the Endbringers did. More than that really, since Skitter can hit you from blocks away.
If she really started feeling backed into a corner, full-on man-against-the-world mode, she could probably kill most of Brockton Bay within a day. Probably many of the heroes see what’s happening: the good guys are not so good any more and the bad guys are getting mad. Tagg is setting it up for the entire board to get knocked off the table, and he’s dragging everyone else down with him. He’s going to alienate most of his allies, and more importantly the public. While he’s raging after the Undersiders like a tyrannosaurus with a hard-on, the “villains” are the ones making the city cleaner and safer, and feeding and washing the unwashed hungry masses. I smell a confrontation coming. Tagg said it himself, recent good press will overwrite old bad press. I really hope to see a moment when the people of the city look at the PRT and ask “What have you done for us lately?” The PRT is looking very much like a never-ending cycle of using a greasy, dirty cloth to clean up a spill. They’re just making it worse and spreading the dirt around. It’s like wrestling with a pig: you’re going to get dirty, and even if you win you lose more than the pig did. I love Worm so much. So few other stories capture my interest like this one. I don’t really like how the encounter with Tagg goes. I’m fine with them assaulting the PRT- it feels a bit un-Taylor-ish, but she’s a bit off balance and I can see why she feels like she has to do it. But what Taylor does to Tagg’s wife? It feels really awkward. And I can understand the need for some direct retaliation against him personally, but simply attacking him feels really crude. I was expecting Tattletale to take the lead and try to pick him apart or attempt to find some dirt about him to reveal (even if they don’t find anything). I don’t think Taylor was thinking straight. I also wanted her to just humiliate them, and then have Tattletale spill the beans on a few of their secrets.
I don’t necessarily think everything should go perfectly- I just expected them to at least _try_ something like that. It’s kind of Tattletale’s thing. The chapter is kind of depressing, and I get the feeling that it’s supposed to feel like an empty victory (or at least, that’s how it feels to me). I don’t want to suggest that everything should be sunshine and rainbows- so I want to make it clear that I’m fine with TT failing to find or say anything damaging (like they did with Piggot). Really, the main thing I’m trying to say is not “They should have done X” but rather that walking up to Tagg, having that conversation, making it clear that she easily could (but won’t) harm his wife, and then attacking him and walking away feels really weird to me. I can’t really put into words what it is. I know Taylor is going down a bit of a dark path, and while I don’t like that, I’m alright with that happening in the story. It makes a lot of sense. I want to say that it’s not _that_ which is bugging me, but I can’t really put into words what it is. Showing she could attack the wife but doesn’t shows “I AM better than you. I stick to the rules. No civilians. No family. This is just between you guys, the PRT, and us, the Undersiders,” if not so eloquently as I just noted. I know that’s the intention- but it feels like the worst possible way of trying to show that. It’s clumsy and awkward. Drachomen on April 9, 2013 at 15:27 said: Actually, it seems well thought out to me. She has teams from the local, and probably national, news already called in. When every member of the PRT is seen with welts and bites, Director Tagg hit the worst, yet the “innocent bystander”/off-limits family member is completely unscathed, it will do a lot to reinforce the notion that the Undersiders still follow the code. Seriously, EVERYONE is hurt except the wife? That’s one hell of a statement to the press.
@Drachomen- I was talking specifically about when Taylor swarms his wife after she says that she stands by his rhetoric. I think it might be that her actions against the wife still feel like an attack. If purely a mental one. Maybe? It was done without a lot of the forethought that Taylor has been known for. She makes what is basically a childish move (well within her rights, given the situation, but still) and loses her temper. She hasn’t really done anything like that since Mannequin, and she didn’t have any other choice in that instance. Bobby on April 9, 2013 at 01:18 said: “Trust me when I say we’re all pretty accepting here, and there’s no reason to lie; none of us girls here bat for the other team.” The meaning of this is clear. ALL the Undersider girls are lesbians. Quick, get Psycho Gecko to write a fiveway orgy with Parian! (I’m sorry.) I think the Undersiders could do better than a 5-way, given Regent’s power to take control of bodies, and then if Grue took Regent’s power with his smoke… THE POSSIBILITIES, THE SHIPPING, THEY ARE ENDLESS He will, you know. Well, with Skitter’s relationship with Grue, does that mean she’s bisexual? nvm, read that wrong Fans on April 9, 2013 at 06:42 said: Or that Grue has secretly been a woman all along? Ye Gods…..PG will have a field day with this! Hey Wildbow, once Worm is published in some form you need to give a shout out to PG, he deserves it. xD I don’t think I am very high on the shout out list. Haven’t donated, haven’t done anything near Packbat’s work on the trope page, haven’t touched the wiki, can’t draw, my stories haven’t been related to Worm except for rewrites, the forum I advertised Worm in was for a superhero game that got shut down, I didn’t even make the Parahumans Online forum, and Wildbow doesn’t find me funny unless I am discussing how what Bonesaw did to Blasto with her spine is related to one of my fetishes. You’re just the backbone of the comments section community, that’s all.
😛 that joke was out of alignment! I can vertabarely stand it. This arc is looking to be quite scary. I have to say the implications of having Taylor’s identity revealed kinda just started to catch up to me; it makes me really want to see how she deals with her territory now. Tagg’s threats toward her dad also made me realize how dangerous her picking fights with people like the Fallen or Teeth, who probably wouldn’t think twice about using her father as leverage against her, is. Overall I also think Tagg is right about how as long as the PRT stays, no matter what the condition, in the long run they will win. Regardless of how it really is, people will always view the PRT as the “heroes”, and the more power the Undersiders get, the more people will just start seeing them as villainous dictators. Taylor is the closest the Undersiders have to a “hero”, given her popularity and general attitude about her territory, but with all this stuff happening to her it is only a matter of time before she snaps. The funny thing is, the PRT is explicitly an organization; they think in terms of asset management, not saving lives. Remember how they didn’t try to help finish off the S9 when the Undersiders had their location pinned down, whatever they did to Dinah, attacking a known dangerous villain in a school expecting them to take hostages, etc. To be honest, I thought the death of her father was inevitable at some point. Worm always felt like the type of story that had great highs and bad lows. Once I realized that she is going to stay a villain, I expected that big choice down the line that is going to define her. My guess, which for once I hope is completely wrong, is that her dad is killed by another villain to hurt her. She goes on her roaring rampage of revenge, and then chooses whether to cross that final line or not. I have yet to predict the elusive Wildbow, so who knows. But the arc title is very suspicious.
I hope it doesn’t come to that, but it might be bearable if Skitter or Regent (team’s sociopath) kills Tagg in the aftermath. If Danny DOES die then I’m scared for the poor bloke who kills him. I see the sky being darkened by swarms, people stripped clean of their flesh, the PRT scrambling, and failing, to keep the peace as the swarms focus on the individual who took her father’s life and turn him or her into a living hive, maybe hire a Tinker to find a way to make something that allows that…….or just kill him in the most public way possible. Though honestly I see the PRT trying to protect Danny as much as they can, to use as a barrier against Taylor. Though one thing I still want to see is Emma right now. I want to know how badly her mind broke. I NEED to see it. >:3 ….I can see Skitter approaching Bonesaw and Jack, the two of them dropping into combat-ready stances, or at least as much as either reacts like that… Only for Atlas to dump the barely-living body of her father’s murderer on the ground in front of her, bugs writhing all over him, biting, stinging, tearing at him even as he lays there, still, unmoving. “Bonesaw, right? We let you leave the city, so you owe me. Fix him. Make him live. Make him unable to die, even if I kill him, over and over.” Her voice heavy with pain and hatred, shaking with the visceral need to take the few strides over to the hapless man and tear into him once again… …Or, alternatively, falling to her knees before Bonesaw, weeping, begging for her to save the life of the man she brought… the bugs clearing away to reveal Danny’s lifeless body. Sobbing, pleading, saying she will do anything, anything at all, just to save his life. Or, alternatively, Skitter enters Nilbog’s domain, alone, and begs the monster who can create life to give her back her father DUNNNN Worm: One More Day, by Joe Quesada. I’d rather see Skitter paralyzed and slowly eaten by rats. Except Bonesaw is no Mephisto.
And Skitter would no doubt be inducted into the S9 as their newest member, getting the ‘standard squishy package’… before being forced to kill her father, again and again, each time Bonesaw reviving him only for her to be forced to kill him again- because he’s going berserk, or because he’s in excruciating pain that will never end unless he dies, or because he begs her to- even if it’s entirely an act put on by the meat puppet that Bonesaw makes of the corpse. Broken, mentally and spiritually, Skitter would be an easy victim of Jack’s particular wiles, especially if used in combination with some special triggers Bonesaw would implant, much like the Cherish triggers. This IS Worm, after all. Camo005 on April 9, 2013 at 01:45 said: Man, Skitter is going straight to the Dark Side, isn’t she. I’m not entirely sure how I feel about that. Still, I can’t wait for this “War”. The rules have been broken, and now all hell gets to break loose. *Throws open the gates of hell with a laugh, singing Grace For Sale as he leads an army of demons and tortured, unrecognizable souls past Camo. They grab him and pull him along as they…hit up the Gulf Coast for Spring Break! Ghouls gone wild! Gecko grills for you, often having to swat Beelzebub away from the burgers and wieners while Azazel runs around sounding campy and luring in bikini-clad women with free makeovers. Mammon sells lemonade by the side of the road while keeping track of the local horseracing from his phone. Meanwhile, Asmodeus is yapping away on the cellphone with the writer he manages, Stephenie Meyer. Satan bangs away at the bathroom door, desperate to sit on the porcelain throne, but unfortunately Belphegor has fallen asleep on the john. Down by the water, Lucifer lays out, sunning himself, failing to account for just how quickly sunburn can set in.* *Gecko brings over your burger, some fries, and hands you a small plastic pitchfork* Welcome to the party in the comments section. Stay a while and enjoy yourself.
What’s the worst that could happen? *Cue the evil laughter…coming from Belphegor as he holds the door shut, now that Satan has gotten into the bathroom, found that Belphegor left it quite fragrant, and is pounding on the other side to be let out.* I expected another The Villain Has A Point moment, right after this: ““And the world’s worse off for it. You can’t win forever,” he said.” So many possibilities here. Instead, she proves his point. Threatening his wife was a very thuggish thing to do. I really hope she takes a moment, or a bunch of moments, to realize that she has GOT to do some serious work on institutionalizing herself. She’s now a government. If she isn’t going to lose everything she’s fought so hard for, a city that has hope and is rebuilding, then she has to get the real government on board somehow. Or if not them, the people. I wonder – what’s her best first step in doing this? By institutionalizing herself, I mean making herself the institution. Not, you know, Baker Acting herself. You know, Skitter in the nuthouse could make for some interesting reading. It could actually be what she needs, depending on the fallout. Many of the elements of her being in the birdcage, but without the whole “trapped inside the perfect jail” thing. Also, Wildbow, I have given you Rare Candy. Use it wisely. In “attacking” the wife but NOT actually attacking, she’s showing that she does have the power to do so, IF Tagg pushes her, but that she’s better than he and his PRT are. She isn’t involving THEIR family, like he did and is threatening to do again to hers. Also, by definition she already was a thug when she first became a villain; thug means criminal or ruffian, and ruffian means violent or lawless person, of which the latter applies regardless of the former. 😀 Yeah, I was thinking the same. Her point was not to threaten, but to show that she’s better than that. By the way, this is another very practical reason why you don’t just unveil someone’s identity in public.
She doesn’t have very much to lose now at all. They’ve made a permanent version of Battle Royale Gitmo the only possible ending for her aside from death. But as Sun Tzu says, roughly, always leave your enemy a way out because if death appears certain, they will fight much harder with their lives definitely on the line. I admit I don’t always follow it, but my fights are not at that level of conflict. Man, this Tagg guy just strikes me as a soldier. Do anything to win the war, let someone else put a pretty caption on the images. Thinks he’s right just because he serves some greater institution. The problem is that conflict within the bounds of one society is not the same as war. That’s because this is actually a peace, and maintaining peace requires a different set of skills. It might be very informative for him to have a very long talk with Armsmaster. Armsy was an ass, but he was still better than this buffoonish, cretinous, deplorable, egregious, fetid, goonish, halfwitted, ignoble, jabbering, knurly, licentious, malignant, nitwitted, odious, putrescent, quixotic, rancorous, splenetic, trollish, useless, verminous, witless, xerotic, yecchy zealot! Still, nice to see some reprisal here. Sweet, sweet revenge. There needs to be some sort of humiliation added for all to see on the outside of the building. Like posting the wife’s cell number along with “Call for a good time.” Was there supposed to be something before buffoonish that started with ‘a’, or was @$$ supposed to fill that role? Also, I’m thinking of calling him “General Tagg” from here on out; what do you think? No need to self-censor here. And don’t demean the title of general by applying it to Tagg; It makes him seem to have greater power and potential than he really has. Just call him Douchetagg. I actually tend to avoid swearing as a general rule. 
And I’m not demeaning the title or trying to give him more power/potential; the guy just clearly seems to think himself one, so I’m using it sarcastically since ‘Director’ just doesn’t fit this guy. Armsy was an ass. If you’re going to avoid swearing, avoid the swear entirely, then, please? If you’re going to say ass, say ass. Don’t say at sign dollar sign dollar sign and look like a goof. 😛 Duly noted. Yeah, I’m loving Douchetagg. Kytin on April 9, 2013 at 03:53 said: I think that is a slander against all good and noble Generals. General Douchetagg has a nice ring to it, as he’s a general douchebag. I am not sure he fits the model. I doubt he has information vegetable, animal, and mineral or knows the kings of England or quotes fights historical from Marathon to Waterloo in order categorical. It’s doubtful he’s very well acquainted, too, with matters mathematical or understands equations, both the simple and quadratical. About binomial theorem, he’s out of clues, and has no facts about the square of the hypotenuse. He’s not good at integral and differential calculus; he doesn’t know the scientific names of beings animalculous: In short, in matters vegetable, animal, and mineral, he doesn’t fit the model of a modern general. He doesn’t know mythic history, King Arthur’s, or the inventor of Crocs. He can’t answer hard acrostics and has no taste for paradox. Nor does he quote in elegiacs all the crimes of Madonna’s tourbus, and in conics he is floored by peculiarities parabolous. He can’t tell undoubted Raphaels from Donatello or even Afghanis and doesn’t know the Python bit with the assaulting Brit grannies. He can’t hum a fugue of which I’ve heard the music’s din before or know the significance of Astley’s pompadour. He can’t translate my washing bill from Babylonic cuneiform, or even tell me all the details of Superman’s uniform. In short, in matters vegetable, animal, and mineral, he is NOT nearly the model of a modern general.
He doesn’t know the difference between Behemoth and Leviathan and can’t tell at sight a Dragonsuit from a javelin. When such affairs as sorties and surprises he’s been beaten at, so badly he’d be executed by competent commissariat. And his wife is such a whore and example of pure cun*ery, she might as well heed Shakespeare and “get thee to a Nunnery”. In short, when Skitter’s blown him up to Earth’s apogee, you’ll be forced to offer up a sincere apology. Despite his military knowledge, he only thinks he’s plucky and adventury, and he’ll be feeling what’s next for close to a century, but still in matters vegetable, animal, and mineral, he’s not the very model of a modern times general. …I applaud your ability to Wormify Gilbert & Sullivan. Great Greedy Guts on April 9, 2013 at 13:04 said: Jenna K. Moran on April 9, 2013 at 02:05 said: Dang. This is a pretty big loss for the Undersiders, IMO; Taylor’s just finished getting out of an impossible situation and (depending on how Dragon’s code changes go) possibly changing the world by sheer force of principle and it still left her damaged enough to willingly give up some of that principle shortly thereafter. I mean . . . I know that the public can’t really tell the difference between this and their previous PRT raids, so in public relations terms it’s not a huge disaster. And she’s closer to the ground so I’m sure her tactical sense that something had to be done is meaningful. Maybe inaccurate, but meaningful. But she’s spent a long time not being what authority kept telling her she was. Not being nothing despite the bullies who wanted her to be nothing; not being a soulless villain despite the PRT pushing her to be; and now this guy gets in his big swively (presumably) chair and decides that she should be a thug, and he makes her one. It’s tragic. A minor tragedy, I guess, but tragic. 
(Though I guess on some level I’m really just reading that from the emotions she has here— it’s not the worst or least defensible thing she’s ever done, it’s just that she’s no longer narrating from the part of her headspace that’s trying to not be a villain.) I guess another part of this, ironically, is that she’s losing the underdog status. There’s no win to cheer here, no clever pulling-off-a-victory-despite-the-PRT’s-overwhelming-power (which I think is what some people were hoping for from Tattletale—not a more definitive tactical victory, but to somehow pull out a moral victory using her talents.) There’s just a show of strength. Guh. I hope that something gets her off of this path. Er, empathically hope, I mean; she’s still a great protagonist and I don’t mind reading tragedy. (Though a happy ending would be even better.) Happy ending?! HA! This is WORM we’re talking about. There are no ‘happy’ endings. Just endings with a little less emotional trauma, and sometimes physical trauma. storryeater on April 9, 2015 at 12:01 said: But she does! Blink and you miss it, because, indeed, she is not in her right headspace, but if she let the PRT get away with it, she would be projecting weakness to the other villains. A big thank-you goes out to Edward for the generous donation. Scheduled another bonus chapter. Thanks Edward! All your bonus chapters are belong to me! I’ll see if I can rummage up something else next paycheck. Ajoxer on April 9, 2013 at 02:50 said: Yes, Taylor is in a dark place. She’s had her life taken away from her, and been essentially locked into living her life only as a supervillain. After all the work she did, the heroes took it away from her- And casually, for no real reason, and to no real gain. Part 1: The disproportionate responses. There was the discussion, early on, between Skitter and, if I recall correctly, Tattletale.
The discussion of the kinds of people who do this, the kinds of people who get kill orders, and the kinds of people who get a light sentence. Skitter, it is important to remember, is NOT someone who has done others a great deal of harm. She has killed one man who was in a position of power so great that leaving him alive would’ve been a death sentence on her and those she cared about. She has nearly killed a hero in a desperate attempt to keep his father from shooting her and imprisoning her, and to keep some extremely dark shit from going completely unopposed. Skitter may very well have made the difference in the fight against Echidna. The world could have ended if not for her. She has, on the whole, done significantly less harm than many of the heroes we’ve seen. She’s made some calls that would require review by a board of inquiry: letting the Merchant die, shooting Coil, lying to Sundancer and having her kill four innocent people along with Noelle. If I were on that board, I’d acquit her. She had no duty to help the Merchant, and Coil is the sort of person, with the sort of power, that you should absolutely not allow to just go around. And she made a sacrifice for the sake of humanity; four people, chosen completely at random, to save the world. Would she have made the same call if it were Tattletale, or Grue, or someone else she knew personally and cared about? Maybe. That’s important, sometimes. She has the potential to do incredible harm. She has always been able to stop short of doing this, but we can understand why people would be scared. But the reason that the people in power are scared of Taylor isn’t because she might slip up and kill someone. The reason they’re scared is because she is a Leader. She’s a leader in a way that very few people are, and it’s becoming more and more clear. Her stunt in the previous story, showing that she was supported by the people, should by all rights have the PRT shitting its pants. Part 2: The PRT’s responses.
If the PRT were wise, they’d want her to be on good terms with them. She is an extraordinarily moral person, and frankly, they can’t pull that whole ‘Well, so you say’ thing on her anymore and sound convincing. She just had the PRT break the unwritten rules in a truly horrible way, and she proved that she and her friends could completely cripple the branch of the PRT, effortlessly, without losses- And without having to put a single person at risk of death. They could be the Slaughterhouse Six if they wanted to be. But they aren’t. So, there are two reasons why the members of the PRT are against them. The people on the lower levels, who are not related to Cauldron, have the Tagg mentality. ‘Your cause is not as righteous as ours, so you’re going to lose, and it doesn’t matter what we do, because we have the right.’ This is, arguably, the more dangerous mentality, because it justifies hideous actions without considering them for a moment. They believe that their cause justifies all action, to the point where they do not have to justify their cause. The other reason, and the one driving anyone in a position of power, is the people behind Cauldron. They’ve been using the excuse ‘We get more villains than heroes from trigger events’ to justify themselves for a long time, and they’ve started to forget what, exactly, it means when they say heroes and villains. Taylor is a powerful, charismatic, strategically brilliant cape, who just happens to be a ‘villain’, and has a tremendously powerful moral code. She could bring down their power, and they will oppose that, whether because they think the world will descend into anarchy without their support, or because they’ll lose their power over the world. Regardless, they have been convinced theirs is the only way for nearly thirty years. This does not seem about to change. Part 3: The likely directions. A lot of people state that they think Skitter is going to go to a very dark place in the coming days.
I agree, but I think that she’ll be leading a charge into Hell. The Birdcage isn’t going to hold. Come on, if we make it to the end of this story and there has not been a massive breakout from the Birdcage or something then- Okay, let’s face it, the Birdcage has practically screamed out loud ‘BREAK OUT OF ME’ for as long as we’ve been here. We’ve got 600 capes in an incredibly confined space, Amy’s in there, this shit is going to go completely bananas. We’ve had our sixth S-class threat mentioned. I don’t know if Nilbog’s gonna be important- It’d surprise me if he didn’t show up at some point, but meh. But we have had Sleeper referenced, and let’s face it, that’s a Chekhov’s gun if I’ve ever seen one. The next Endbringer attack is going to be coming up soon enough. And considering the current state of the Hero-Villain truce, it sounds like things are going to go very poorly. And we have Scion being told to go all-out. The Slaughterhouse Nine is going hog-wild on some DNA. This is clearly a Bad Thing. A dimensional portal has been opened to another world. Skitter has been getting trained to be a general from the very beginning. Her power is that of the ultimate general. Complete, and very-difficult-to-disrupt, battlefield awareness. Communication that can overcome most forms of interference. A weak soldier who requires tremendous planning and careful marshaling, demanding constant research and development and careful examination of her tactical choices. And a level of multitasking awareness that is superior to absolutely anyone’s, except maybe Dragon’s. And she can make the hard choices. This may not be so easy if it’s her personal friends, or if she has to choose deliberately to put people into a place where she absolutely knows that they’re going to die, but we shall see how she adapts to that. Look back at the Leviathan fight. It was a brawl. They set down a loose battle plan, but once things were started, they didn’t have a clear leader.
Compare that to the fight against Echidna. Once things hit their stride, they were capable of working together, fighting, and killing her. With a single strong, intelligent general linking people together, using their talents intelligently, and able to be trusted with an overall strategy, the Endbringers could be destroyed. The Slaughterhouse Nine could be wiped out to the last. Taylor’s becoming a general. She’s living for the war. That’ll have a dark ending- Either the world ends and the war is lost, or the war is won and she’s left as the old hero, perhaps one of the most tragically painful results you can have. But this is her imago. She’s a general, taking control of her troops, and facing a war that goes beyond anything that has ever been known to humanity. It is NOT all up to her- A general is desperately important, but they are the glue that makes a collection of people more, rather than less, than the sum of their parts. Every troop under them makes a difference, and everyone is important. So, I think that things are going to get dark. But I think that Taylor’s going to save the world. Because this isn’t the kind of story that ends with ‘And then the Endbringers won and everything was fucked’, and I’m grateful for that. There are going to be losses, sacrifices, and I’d be truly astounded if Taylor ended up being truly happy. But she’s going to save the world. God I ramble on a lot. To elaborate a little bit more after reading more. Many people feel that this was Skitter going out and betraying her principles in some way, but I am not sure that I entirely see that. She went out and picked a fight with the PRT, but she used the minimum necessary force, and despite being provoked significantly, she showed, conclusively, that she could do this, that she could find his wife, and that she could hurt her- But that she wasn’t going to.
She’s capable of hurting people very badly, and she’s often threatened to do it, and people are terrified when they’re threatened that way, but ultimately, she doesn’t. She’s not a Bad Person. She’s not following the law; but if you think that, alone, makes her a bad person… Additionally, it’s important to realize that the entire group has, effectively, been on a war footing for a long, long time. The entire world is on a war footing. The last time the Undersiders had anything that could be treated like a ‘play fight’ was probably about the time they crashed the heroes’ fundraising dinner. Leviathan, the Slaughterhouse Nine, Dragon’s massive presence, Coil, Echidna: they have faced no fewer than three S-class threats, the preeminent Tinker in the world, and a man with an almost unbelievably dangerous set of abilities and resources. And none of this is going to change. The veneer of civilization is still there, but it’s hollow and worn. The worst that Taylor has done is scare the hell out of people with what she could do. Frankly, to call her a villain is, at this point, kind of hilarious. The world is wounded. The wound may be mortal; it is bad, and the war that is engulfing it is severe. A single bad decision by a bulimic girl created a creature that could have wiped out humanity in its rage and horror. This is a world where the current authority is not working. And yet more rambling Very well thought out. I think Wildbow mentioned he won’t bring in Sleeper/Nilbog in case he does a sequel, so there is more of the universe to explore. Her teaming up to attack the Birdcage seems unlikely. She might try to help rescue those that don’t belong in there, like Canary or Amy, but she probably would try to take down any other escaping prisoners. I’m thinking that quite a few prisoners will come to the bay. Lung, Marquis, and Glaistig would do very nicely as new big bads. Well, I think we can all agree that Taylor is going to change after this arc.
Whether good or bad remains to be seen. I agree with TheAnt, that was very well worded. I just have a bone to pick regarding the Birdcage as well. IF it opens, it’s likely to be at the very end of the story, because so far the *expected* has either not happened or appeared at unexpected times. Also, Marquis breaking out would be great; he is easily my favorite out of the entire Wormverse. You guys are gonna make me blush. Pshaw. Thoughts on opening the Birdcage. I do not think that Taylor’s going to break into the Birdcage. That seems the most unlikely. But people breaking out of the Birdcage, or the Birdcage getting voluntarily opened? Now that seems much more likely. The thing is that a lot of the people in the Birdcage are unfit for living in polite society. But going back to what I have reiterated, again and again… They don’t really have to be, anymore. The earth is now in a state of war against itself, and as has been mentioned- I think perhaps even recently? Lung may have thought it, I cannot recall precisely- there are patterns of behavior that would’ve been much more acceptable in the course of human history than they are currently. This is not to say this isn’t going to be a gruesome moment, letting some of these people free. They’re thugs and murderers, and sometimes much, much worse. They can do some gruesome damage to the world. But the earth cannot afford to pick and choose its defenders so carefully. Someone like Canary? Someone like Marquis? Hell, even someone like Lung? They cannot do anything remotely comparable to what one of the Endbringers can do. And I think that a fair number of them, given the choice between ‘rot in the Birdcage forever’ or ‘risk your life fighting an Endbringer for the sake of your freedom’, would go for the latter. This is a classic idea, the prisoner given a suicide mission; that’s because it’s an idea that is appealing to the mind, and creates an interesting relationship. It’s not without its risks.
These people are unstable and you need to keep a close hand on them, because they could cause serious damage if you don’t. But with generals like Skitter and forensic psychologists like Tattletale, they can make the difference. Thoughts on justice. Add onto this the fact that, to our understanding, the Birdcage is a ridiculously hideous perversion of justice. Canary caused a man significant wounds- maybe death- through negligence. This is something that’s punished with a prison stay of quite a few years, but she was given life imprisonment, with absolutely no chance of parole, in a hellhole of a prison. Amy suffered a mental breakdown, driving another cape insane and making her look really hideous, and volunteered for the Birdcage. Nobody suggested ‘Hey, you know, maybe we should put the transcendentally skilled healer into the asylum where she can come to terms with her mental issues and become a great cure for the world.’ My personal moral view probably should be explained. In my view, the most horrible thing you can do to someone is to kill them. There are things that are essentially the same- wiping someone’s mind and personality so thoroughly that they are essentially gone. But these are the great crimes, because they destroy someone’s future impact on the world, and they leave no hope. Next up are injuries. These are things that lessen you in some way. For example, if you sever a tendon and lose a great deal of motor ability in your hand. If your back gets snapped. If you get your brain rewired to be incredibly attracted to your little sister. These are things that change you in a long-term sense, and damage your capabilities. These are like little deaths, but they can be overcome. Almost everything else that could be considered bad is in the area of ‘hurting people’. It feels really bad while it’s happening, but ultimately, you’re not lessened permanently as a person. Skitter hurts people a lot, in many different ways.
She makes the heroes feel helpless, she makes the ordinary people feel watched and nervous, she makes her foes feel terrified. But she doesn’t kill people, and she actively avoids risking their death. Now, let’s ask: which of these two things is worse? Scaring the shit out of someone and giving them an incredibly painful bite, or putting them in prison for five years? Five years of being locked up away from society among people who are in an almost animalistic state of mind, and when you get out, you have been reclassified as a second-class citizen. Or you get a bite from a bullet ant and get told to get out of town. This may be just me. But I can handle pain. Pain fades. Injuries are worse. They’re the sort of thing that laws usually inflict: chopping off hands, or putting someone away for a stay in prison. The Birdcage is, for all intents and purposes, a death sentence. Sure, you continue living, but your life is forever diminished, a life in a small circle of incredibly unpleasant people. It’s a mortal injury, and it’ll never stop or be allowed to heal. Amy injured people. Canary injured a person. Skitter has been remarkably good at avoiding ever injuring someone. Look at Lung- she was in full knowledge of his regenerating capabilities, so she inflicted admittedly frightening pain upon him, because she knew that he would recover from it. She’s killed, once. I think it’s justified; reasonable minds may disagree. But neither of them deserves to be in prison. And frankly, they cannot afford to leave Amy in prison. So long as a personality understands and fears consequences, and is given reason to do so, it can be redeemed. And Amy goes beyond that. She genuinely wants to do good. She was literally pushing herself to the absolute limit, and even pushed herself past her mental blocks and fears to save her adopted father, and the world was completely hideous to her in response. One mistake, in the worst possible circumstances, and bam, that’s it. No redemption.
They can’t afford to take a hard line of no redemption. This whole artificial sideshow of Villains and Heroes has intruded on the fight between Humanity and Extinction, and it simply cannot afford to do that anymore. There’ll be time for Nuremberg trials when and if they win. This is gonna be a super controversial discussion, but I think it’s one that can be divorced from its political basis down to the essential elements. A large part of the War on Terror is that it hasn’t had much impact on Americans. Compare it to World War 2, or any war, really, where there’s expected to be shortages. Materials are put into strict rationing, and everyone is expected to make their effort. The War on Terror was different. The whole idea was ‘Don’t change your lives! We’ll fight wars, but we’re not going to let it affect our day-to-day lives!’ A continued consumption chain, and humans being at war with themselves. Now, this is a bad thing in our world. War must be terrible so as to keep people from desiring it too much. But it’s worse in a world like Earth Bet. I’m not saying every nutjob needs to be wiped out. I’m not even suggesting that they need to go root out Nilbog and the Slaughterhouse Nine and never ever rest their eyes, or that every Super must be conscripted. But there are presumably thousands of Capes. There’s six hundred in the Birdcage alone. The Endbringers are a colossal threat. They are a massive, terrifying, extreme threat, and they are everyone’s business. Humanity is at war with a force that is essentially dropping a nuclear weapon in a random place every /two months/. And they don’t act like it; they act as though things are proceeding as normal. A girl with heavy body issues drank half of a serum and became a monster that could have wiped out a city, maybe a whole planet. This would have psychological costs, existing in a state of war against these things. But I don’t think any more than exist currently.
The world is currently adopting a cold war state of mind, and frankly, that’s enough to put subtle but intense pressures on any mind. An active war would create a united front, which would likely relieve stress. The best and brightest minds, putting work into mass-producing and reverse-engineering brilliant Tinker devices. Human beings working together to accomplish a great goal. I don’t know. I think it might work. Extinction of humanity is everyone’s business, and everyone should be motivated to put a stop to it. And when the Endbringers are brought down? Things go back to Cops and Robbers. Villains Vill, Heroes Hero, and the Rogues make a fabulous living. They keep themselves sharp. People who abused their privileges during the war get psychological counselling, because the least you can do for a soldier is make sure whether they’re actually, truly determined to hurt other human beings, or simply hurting from the deep strain. My rambling continues. I think a lot about this stuff. I didn’t say it earlier, but I’ve been reading each of your posts, and I have to say, amazingly well thought out. I agree with most if not all of your points intrinsically, though maybe with a few disagreements superficially. Your mind is a beautiful place *drools slightly* Reminds me of some of my longer discussions and is in line with almost all my thoughts on the subjects. Aside from the wording, it really reminds me of times when I’ve had longer posts that people generally disliked due to lack of comedy. I’m especially fond of the idea that when the apocalypse is a serious possibility, villains will unite against it. After all, it’s hard to rob a bank when the money’s worthless, and what are you going to buy with it when the Earth is destroyed or under alien rule? I don’t know for sure that the Birdcage will suffer a breakout, though. It was mentioned, by Dragon I think, that people can leave the Birdcage again. She’s getting more independent, so she may have need of those prisoners.
Dump them all on Leviathan and see what happens, that kind of thing. I will bring up that the things she’s done to hurt people can cause psychological injuries. I don’t think she’s traumatizing everyone, though. Also, we know the Sleeper isn’t going to be a part of this. Wildbow is saving him for a sequel. I am unsure if that means the world will survive two years down the line. Nilbog may pop up again, but I only expect he’ll be a part of the story again if an Endbringer pays him a visit. They ARE putting the Birdcage into the hands of what are, essentially, mercenaries. And as is well known, the loyalty of a mercenary always goes to the highest bidder… It was Hookwolf who thought about the whole ‘time and place for people like me’, in the interlude where he fought Shatterbird. @Gnarker: I don’t think the Birdcage automatically goes to the Dragonslayers if they get the position. It’s been mentioned that Dragon owns the land around and built the Birdcage, so I would think Dragon would still own it and be the warden. And if you think about it, no one else would have the proper monitoring capabilities Dragon has, so if she leaves and takes all her stuff with her, I doubt the Dragonslayers will be able to do anything with the site. @PG: I think an Endbringer could take Nilbog, especially if it’s Behemoth. But he would likely survive and have to find a new place to live. hnnnng Yeah, I think an Endbringer could take Nilbog too, but not without a surprising fight. After all, if Behemoth tries to set Nilbog’s minions on fire, that’ll just create more of them.
I think an Endbringer against Nilbog would just end up a stalemate; Nilbog can generate life, so he could quite conceivably make his new lifeforms resistant to whatever it is that any particular Endbringer brings to the table- He’s already insane, and likely can just change himself to ignore the Simurgh’s telepathy; he can make his creations super-dense and naturally near zero kelvin so that Behemoth’s combustion powers just excite their bodies from ‘hibernation’ so they can fight back; and Leviathan could be stopped by high-pressure aquatic beings. Honestly, if one attacked Nilbog, I’d expect a retreat, much like against Scion. You know, there seems to be something about this chapter that makes it easy to miss exactly what Taylor did to Tagg’s wife. When I first read it, I missed the reveal where she showed that she hadn’t actually hurt her. It made me think that Taylor was going a fair bit darker than she actually was. It wasn’t until I read some of the comments that I realised I must be missing something. Judging by some of the things people said, I don’t think I’m the only one. You know, this chapter is really interesting. Here we have Taylor, a villain, using villain tropes (like justifying her actions: “I HAD to inflict grievous pain on all those people who did absolutely nothing to me and whose only crime is joining an organization so they could help and protect others, it’s all part of the GAME!”), and we have Tagg, part of the Good Guys ™, using good guy tropes (like calling villains on their bullshit: “this isn’t a game, and you’re not a misunderstood hero, you’re just a thug!”), a traditional scene many have read from the other side and cheered for. And yet readers, understandably, side with Taylor and bemoan its occurrence this time. It’s all so deliciously morally grey.
The Sandman on April 9, 2013 at 07:25 said: I think the big difference is that Taylor actually realizes that it’s a fight between grey and gray, while Tagg thinks it’s black and white (and like pretty much everybody who does that, he puts himself on the side of the angels). I find the problem to be that once you widen focus, the PRT look pretty awful at this point. Hell, it’s not even the whole of the Undersiders attacking, it’s just Skitter, Tattletale, and Bitch (and Tattletale likely isn’t helping take down the bad guys). So really, this is another example of how powerless the PRT is: two villains can basically waltz right in and incapacitate the whole PRT branch. To be fair, the PRT didn’t have all of its members there for whatever reason. I imagine if the Undersiders had attacked and they had all been present, things would have gone differently. Not to mention that two of the heroes present had already been curbstomped by Skitter before and apparently had learned nothing from the previous encounter. That said, if it had been the entire local PRT versus the full Undersiders, I’m pretty sure the Undersiders probably wouldn’t have even broken a sweat. The Undersiders have been there this long and the PRT still hasn’t figured out any effective counters to their abilities? Seriously, how difficult would it be to look up in their roster heroes who might actually be able to accomplish something and dispatch them to Brockton Bay? They already tried bringing in super counters: either Dragon or Defiant outright stated that Sere was supposed to be untouchable by Skitter, Adamant has his huge full covering of armor, and I’d imagine Dovetail’s forcefields were thought to hinder the bugs’ flight or something. Easy for a human to break, much harder for an ant. Logically, if you have been utterly beaten over and over again by a superior foe, then you should either tactically withdraw or make peace with your enemy. Yet the PRT keeps trying the same stupid tactics over and over again.
I take it back: they are augmenting their already stupid tactics with innovative new ways to fail miserably. For example, the last time I checked, the only people ever captured in Brockton Bay by use of containment foam have been… the PRT themselves. The PRT knows that Skitter trounced Mannequin, another target that she should not have stood a chance of harming, yet they bring in Sere thinking he’ll be better? Did anyone notice that she beat Sere with essentially a dumbed-down version of the same tactics she used on Mannequin? If something has been proven to be ineffective, why keep doing it? They completely ignored history and didn’t bother to do their homework on their opponent. I’d also like to make mention that the PRT built their plan on data received from a precog, a Thinker class. It’s been mentioned before that Thinker powers tend to interfere with one another, as evidenced with Accord and Tattletale. Piggot has also remarked that Skitter possessed “Prescience” when she anticipated an attack from behind and was able to avoid it. That would imply precognition on some level, meaning there is a possibility that Skitter can muck with other precogs. All of this data is known to the PRT, yet they apparently ignored it and went through with their moronic plan anyway. That in mind, the PRT has not deployed any actual counters, as they had already been proven ineffective or invalid well in advance. Hell, two of their smarter members (who had actually faced off against the Undersiders) told them explicitly that it wouldn’t work, and why, yet they still ignored it. That said, the PRT could probably work up an effective counter against Skitter if they weren’t such arrogant dumbasses. @Scolopendra Generally agree, but a couple of things: 1. Dragon managed to catch Skitter using containment foam (granted, with the assistance of Bitch’s backstabbing).
Skitter only escaped that one because Dragon let her escape rather than be caught in the impending explosion of Kid Win’s stuff. 2. Pretty sure Piggot was crediting Skitter with possible prescience before they realised she was seeing through her bugs. Until they worked that out, it must’ve seemed like some sort of clairvoyance/danger sense… @Scolopendra: Honestly, I assumed that the PRT didn’t actually know the details of the attacks they weren’t present for. It would be stupid — police organizations need to use informants to generate leads, and military organizations need outright spies — but given that Calvert wouldn’t need any PRT intelligence and Tagg can’t count to eleven with his shoes on, it’s eerily possible that you’re right and they simply didn’t do any homework. If the PRT (still under Piggot at the time) had interviewed those present at Mannequin’s first attack on Skitter, they would have found out what Skitter had done and at least have made note. Seeing as there were casualties that ended up being taken to the morgue or hospital after the encounter, the PRT would have been able to debrief them offsite. Piggot was smart enough to have at least drawn up a report on the incident. So, it’s almost certain that the PRT had at least some intelligence to make them think “maybe assuming a person is surrounded by an impervious shell/field isn’t a good idea”. Honestly, the PRT’s failures can be more attributed to hubris and incompetence than to the Undersiders’ skill. If they ever got someone in there with a level head and actual leadership and competence, the PRT might actually be somewhat threatening. Right, of course. Forum Explorer on April 9, 2013 at 04:08 said: Very interesting chapter and a good showing of some of the personal fallout that revealing Skitter’s identity will have. I mean, who would have thought that removing access to a normal stable social life would have negative mental repercussions on a potentially unbalanced villain? Certainly not the PRT!
And I don’t consider her actions to be thuggish. It truly was the minimal response she could take, as she couldn’t afford to do nothing. She could have fully revealed pretty much all of the heroes’ personal identities with Tattletale’s help, or the PRT’s dirty secrets. She doesn’t even hurt the Director’s wife, even though they brought her father into this mess first and continued to threaten him. I think that was the point of that last bit, saying to both the PRT and herself that she’s still better than them. Oh, and Director Tagg should be glad Skitter considers this a game. She waged war against the S9, and look at the damage she did then. If she was waging war, I imagine that most of the heroes in the city would have died tonight. And a good portion of the PRT’s rank-and-file people. So go ahead, Director, push the team that has regularly gone up against S-Class threats and not only survived but won. I’m sure it won’t lead to your messy destruction. ereshkigala on April 9, 2013 at 04:23 said: Come on guys. Skitter isn’t going Dark Side – she’s enforcing the Geneva Convention. You know, the whole “a lawful combatant wears a uniform” and “no war on civilians” and “no military actions against one’s own country” and “if you violate this, we round you up and briefly try you for war crimes before executing you” thingy. The PRT is not waging a lawful war right now. It is waging an unlawful war, performing major war crimes in the process – at least as far as war crimes are defined in international law. A policing force cannot wage war, period. A military force could – but they can’t also be a policing force unless someone declares martial law. Yeah, Skitter is a big proponent of staying nice and civilized, what with how she’s set herself up as an unlawful dictator who enforces the law with violence and maintains her power with threats.
Not to mention she doesn’t respect her people’s right to privacy and routinely performs illegal searches without warrants for (admittedly illegal) stuff like weapons and whatever. And they wonder why that one guy in the online interlude thought Skitter’s territory was too creepy to stay in. And despite all that, she’s still not only more moral but more lawful than her opponents the PRT. Not helping your case. Yes, she really is. As long as you don’t kick the hive (so to speak), she’s downright polite and reasonable. Also, don’t kid yourself: that’s how every government that has existed, does exist, or will ever exist maintains its power. The law is a complicated way to say that if you do something we don’t like, you will be hurt. Eventually most governments start to allow the people living in their territory some semblance of control over what those things are. Not seeing much of that from anyone in this party. She can’t really be blamed for that, given that it’s as natural to her as breathing. This is like holding a guy with permanent ‘see through walls’ vision to such a standard. Except nowadays, we kind of expect a government to “punish” criminals with imprisonment and therapy and rehabilitation, not torture. (Ok, I might be a bit optimistic here. SOME people expect it. Hopefully. Maybe? Goddamnit, Real Life governments, you’re not supposed to wreck my point.) Also stuff like the separation of state and judiciary, etc. No matter how benevolent she is, Skitter’s territory is a police state where you never know when you might be under surveillance, where infractions are brutally punished, and where all the power is consolidated in a single person’s hands. Sure, she’s VERY effective at stopping crime, but does that make it okay? Because from where I’m standing, living there is pretty much sacrificing freedom for security, and I’m sure all the dutiful and patriotic Americans in attendance know what Ben Franklin had to say about that.
Yes, Skitter does good, helps and protects people. Yes, she’s certainly more moral than Cauldron. But bringing up the Geneva convention while ignoring how many crimes she has under her belt? There’s such a thing as whitewashing her actions too much. I doubt Skitter would fare well if she was judged by the United Nations. And isn’t it said somewhere that she actively sends out bugs to search people’s belongings and make sure her people are the only ones with weapons? I might be misremembering. She hasn’t committed that many crimes, in all honesty- Unless you count each individual officer of the PRT who has tried to fight her and been swarmed as a single case of assault, but that’s playing the books, especially when they would do far worse to her than just cause superficial damage- Namely, ending her life. Sure they may not do it directly, i.e. execution, but they would end it all the same. And the government certainly has been dropping the ball every step of the way along her path; Looking the other way when Sophia was involved, deliberately giving them a light sentencing when it was finally forced into the open, Armsmaster alienating her from day one and treating her like dog crap he stepped in- A hero, and that’s how he treats someone able to take down *Lung* on her very first night? Someone offering an inside job on the most dangerous local team of supervillains? And let’s not get started on how they’ve treated her at every opportunity when she has played fair and tried to get their help/helped them with a massive threat. She does the right thing consistently and gets slammed for it, simply because of who she associates with. And now, breaking the rules and being massively stupid at the same time by effectively killing Taylor, leaving only Skitter. Prejudice, disrespect, being treated as less than a person- You really think a government that fails this badly at treating ONE person deserves to exist? Deserves respect? It deserves to fall.
As for her territory: No peddling drugs, no assaulting people. Typical citizens don’t do those things. Show respect, get respect. Live and let live. She has fought off armed invasions of her area to protect civilians, to the point of FIGHTING OFF MANNEQUIN IN A FISTFIGHT to protect her people. The only way you’re going to cause an infraction is if you decide to beat someone up or sell drugs and show no respect. Don’t be a dumbshit, in other words. Skitter’s territory is a dictatorship. That is completely true. Dictatorships aren’t necessarily bad. They usually are because most people can’t handle getting absolute power and start to abuse the crap out of it. The other problem is the matter of succession. Skitter seems to be a classic example of a benevolent Dictator. Her people don’t have any political freedoms. They get no input on laws, punishments, or who leads them. They have plenty of personal freedoms and the laws are clearly laid out and enforced. There also isn’t anything preventing someone from leaving her territory whenever they want. Also her efforts have made her territory the safest and most prosperous in the city. Which is another point to her being a dictator, they tend to be efficient, for better or worse.

Wageslave on April 9, 2013 at 04:28 said: “We’re giving you a promotion. A full Directorship in a very active city with key strategic interests in place we need secured.” “If it is two words and the last one rhymes with ‘pay’ I’m in.” …Seriously, I think that they’re going through the on-line applications to the PRT to find someone to run the organization in Brockton Bay. Because the Internet never lies, right? P.G. : “If I determine the enemy’s disposition of forces while I have no perceptible form, I can concentrate my forces while the enemy is fragmented.
The pinnacle of military deployment approaches the formless: If it is formless, then even the deepest spy cannot discern it, nor the wise make plans against it.” “Put them in a spot where they have no place to go, and they will die before fleeing. If they are to die there, what can they not do? Warriors exert their full strength. When warriors are in great danger, then they have no fear. When there is nowhere to go they are firm, when they are deeply involved they stick to it. If they have no choice, they will fight.” –Sun Tzu, “The Art of War” Ah, good quote. And Skitter’s reasoning for this chapter, you may have noticed. Except for maybe the first part. Felt like that was directed at me. It’s the five years bit I find most fascinating of all. I hope Tattletale got a recording of that one. The PRT’s newest tactic. Give up for half a decade and hope the enemy screws themselves over in that time frame.

umthemuse on April 9, 2013 at 10:34 said: Especially funny when you consider that the world is supposed to end in two years. “If it is two words and it rhymes with ‘pay’ I’m in” http://wikirhymer.com/words/pay/pure-rhymes I can’t see anything that rhymes with pay that he’d be interested in… Unless he was being sillystupid and was meaning that ‘Pay rhymes with pay, of course!’…

Jakinbandw on April 9, 2013 at 10:56 said: Brockton *BAY* I believe the rhyme that you’re looking for here is “bay,” as in Brockton Bay. This spot would be a godsend for the ambitious, because if someone were able to stop the notorious criminal gang overrunning the city, that person would probably enjoy muchos kudos. Aaaaaaaaaand I’m dumb. >_> Or tired. Can’t believe I didn’t get that… Does that make you Rika Covenant the Unbeliever? aaaaagh –Dave, my eyes, the pain! my spleen! …You people make me feel all warm and fuzzy inside, knowing such great series, characters, and other works of art. You, however, are the very VERY first to ever reference from where I got the inspiration for my screen name.
You get a tray of peanut butter and jam cookies (Peanut butter cookies with a thumbful of (my personal choice is usually raspberry) jam dolloped into the center).

A on April 9, 2013 at 04:31 said: You know they don’t have to damage PRT property or hit any heroes in retaliation. They could kidnap the director and the person right under him, to get information. They can drain his bank accounts and even hire a type of hacker to make life in Brockton Bay impossible. Or she could let Imp, Regent and Tattletale have fun with the CO’s until they leave the country. Naah. Best thing to do would be to just vanish him some Tuesday night while he’s home sleeping with his wife. No news, no fuss, no taking credit/blame, no clues left. To paraphrase some pretty scary guy “the PRT won’t think to look for his corpse on Neptune”. (or in this case, in some alternate reality or in the stomach of bugs) Well at this point, Tagg is probably paranoid as hell about his security considering what happened to the last guys. Yeah, all that security and a PRT building full of capes and non-capes trained in taking down other capes couldn’t stop Bitch from wrecking his assets, and Skitter from going straight to his office and attacking him. Really his only protection at this point is Skitter’s reluctance to really deal with him. Eventually the PRT will have to instate someone the Undersiders like, or they are going to run out of directors. Given how this went, that’s unlikely to help much. Remember the old days, when the Undersiders were actually worried about getting foamed by PRT agents and somehow escaping from the heroes? Well, see, that’s what happens when you fight opponents whose Challenge Ratings are way higher than yours — you level up really quickly.

Mazzon on April 9, 2013 at 04:56 said: Another PRT director, another psychopath with zero regard for law, morals or collateral damage. Kind of like Piggot, except with some of the nihilism swapped for hostility.
In fact, it seems to me Coil was out of place as a PRT director because even though he was a psychopathic villain just like the rest, he was actually trying to do some good for the city. It’s interesting how accepting the lesser evil is totally okay and acceptable if it means the heroes should be lenient with Skitter (didn’t arc 20 start with her torturing a bunch of thugs to consolidate her reign of terror?) and someone who doesn’t do that is an asshole, but it is DEFINITELY not okay when dealing with Cauldron/PRT and compromise is a sign of weak principles. Not saying it’s wrong. Just interesting. Skitter is a villain. She is expected to have no regard for the law as she is an outlaw. By definition she does not follow the law. The PRT is the organization tasked with enforcing the law. They are part of the system. They have no regard for those laws and even break worse rules than Skitter ever did. They are inherently worse because they are tasked with upholding those laws and instead hypocritically uphold them only for people who aren’t them. This makes them little better in practice than some Asian dictator publicly hating on the immoral West while enjoying fine Western liquor and pornography. Given what we know of their corruption, how do we know they haven’t already done everything Skitter has ever done and then covered it up because of and with good PR? If the people who enforce the law also break it all the time, they’ve become little more than a particularly well-armed and well-financed gang. Firstly it’s hypocrisy. Secondly Skitter isn’t anything close to Cauldron’s level of evil. There’s a difference between making peace with someone who can be violent but is ultimately quite moral (they are outright relying on them not being willing to kill at this point) and handing the reins to people who commit crimes against humanity and refuse to tell anyone their real goal. Also Skitter has screwed up much less.
mc2rpg on April 9, 2013 at 14:37 said: It amuses me that you are claiming that Skitter is ultimately moral this far into the story. Just because she is good in comparison to people like the Teeth that doesn’t make her willingness to torture people or threaten to kill to change governmental policy into something that is moral. Taylor tumbled down the slippery slope to save Dinah quite some time ago. She is most certainly still ethical and moral. Certainly she is no paragon of virtue, able to not harm any of those who work for those in the position of being truly evil within the story as we see it so far, but she still follows a set of principles of moral conduct. Coil was irredeemably evil, willing to use and discard anyone and everyone to get his way, break his oaths multiple times, keep a little girl in a situation where she was constantly getting drugged to the point of addiction so he could keep her manageable, and otherwise be a general rat bastard. Her morals refused to allow such a being to continue to exist because of how many times he broke her ethics; he betrayed her and her family and her friends over and over again. Thus, he was killed. She has not killed a SINGLE other person. She has maimed and injured those who regenerate, and so are capable of healing the damage (namely, Lung on the first count by accident, the second on purpose to shut him down), and otherwise only used enough force to disable her targets. Even now, when attacking, she holds back on the venom that would cause potential deaths or grievous injury. She sticks to her mores. Your standards for morality baffle me. You require absolute perfection in line with your own ideals for Taylor. However the PRT are allowed to use ends -> means methodology as much as they like without upsetting you. @Anzer’ke: You might be reading more into mc2rpg’s remark than is there — Skitter can be a villain without the PRT being heroes.
mc2rpg: What would you suggest Skitter do, instead, to reach the highest feasible apex of morality? How would you have handled things in her position? I’m not criticizing, I’m genuinely curious. Interesting? Definitely. However there’s a few important points at play here: 1) As PG already said, claiming to be the good guy sets the bar higher. A lot higher. 2) Lesser evil my hairy rump. Sure, Taylor’s a villain and should go to jail, but she’s not a mass murderer with rapidly climbing body count or anything. Therefore catching her shouldn’t be prioritized high enough to risk civilian casualties or undermine the co-operation in Endbringer scenarios. And frankly, I think there’s something pretty badly askew when supposed law-enforcers base their strategy on the villains not being willing to stoop to their level. Or maybe Tagg just really hates his daughters. I am really curious why Dinah volunteered that information about Skitter. I’m curious why she thinks a guy like that would decide not to be messing with Dinah since he doesn’t care about little things like “laws” or “rules” anyway. Because Dinah seems similar enough to his daughters that she doesn’t trip his “RAR SMASH KILL” instincts and therefore being nice to her is one of the lies that helps him sleep at night. He had no problem seeing Taylor as different enough despite attacking her at school. Because he didn’t see her as a person there. She was simply the villain Skitter incognito, in his eyes. Which is the kind of “my enemy above all” mentality that really does lead to some horrible stuff. …you’re right — I could absolutely see Tagg applying the metaphorical corkscrews to compel Dinah to provide precognitive assistance. Heck, Dinah might be cooperating precisely to prevent Tagg from doing whatever he would do to force her to. Dinah can use the power for herself, maybe she saw that volunteering the information led to a more favourable outcome in another area.
Maybe it keeps Skitter alive, maybe it gives her a better chance to save the situation when the world begins to end. I’m guessing Dinah is manipulating Skitter into doing something important or trying to prepare Skitter in some way. Or trying to eliminate the PRT. I wouldn’t be surprised if Dinah lied about the numbers on their success of actually capturing Skitter. Dinah can’t lie about her predictions. This has been made clear.

Forum Explorer on April 10, 2013 at 13:27 said: Oh? I don’t remember that. Time for an archive binge! She technically can, but it messes up her power for a while and causes extreme pain. We still don’t know if she can accept someone asking her a question and then just use her power on a DIFFERENT question, though. She said it’s easier for her if someone else asks the question, but that she can do it herself just fine, too.

agreyworld on April 9, 2013 at 07:02 said: I hope she had a voice recorder on her, considering this was a PR stunt and he said quite a lot of stupid stuff. People never bother to record their conversations where the villains confess though T___T This your first time on the comments?

agreyworld on April 11, 2013 at 05:12 said: It is not, I think you might have missed my earlier comments PG *talks into a radio* Stand down people. No, we won’t need the elephant anymore. Well I don’t care that you already fed it the laxative, I told you to wait until we painted the moon pink. Hey, shit rolls downhill. You don’t like it, you should have stood uphill from the cage. We’ll talk later. Darn. Haven’t always been welcoming people to the comments, so obviously I’ve missed a bunch.

simeraz on April 9, 2013 at 07:48 said: I am sorry to say it, maybe in another story I would have been on his side, but I hope Tagg meets a bad end and Piggot comes back

Muse on April 10, 2013 at 09:45 said: Piggot was kind of a bitch but she wasn’t a horrible director.
Of course I might just think that because her successor was a supervillain and his successor is apparently clinically retarded. I don’t think you have to compare her to anyone to make her look competent — look at the way she used her knowledge of Tattletale’s access and Coil’s mercenaries to perform counterintelligence activities. She was completely callous about capes in general — unperturbed by the thought of firebombing the Undersiders and Travelers even when they were helping against an S-class threat, indifferent to the idea that the Wards should receive therapy even in the wake of Leviathan — but even knowing that she was given the job as a sinecure she did it well. Also, consider the fact that it was Piggot who was literally the only person who ever figured out how to defeat Crawler. She ain’t stupid. Eh, figuring “If we throw these tinker bombs at it en masse, and we know at least one turns matter into crystal- We should be able to kill it.” doesn’t take much brainpower. I mean, she thought firebombs might take him out. Firebombs. @Rika: Like Tattletale said, “Stupid? No. Genius? No.” Apparently, you guys forgot about the phone call. How did she get Crawler to stick around long enough to get bombed? Be assured that I did not forget that phone call. …I did forget to add it to the Batman Section on TV Tropes, though, so thanks for the reminder. 😀 Batman Gambit section, sorry. Awesome! Director Tagg had better think twice about messing with Taylor’s dad. As Grey said I hope Taylor recorded this. It’d be perfect to give to the news. ‘PRT harassing Mr. Hebert? Find out why Director Tagg seems to be breaking all the rules between Capes.’ Can’t wait till the next update. I see the Fallen getting lots of bugs in their soup, also I see one of the Undersiders getting hurt in one of the ‘Blitzkrieg’ attacks. Maybe even seriously hurt.
Oh and I found Manni’s Theme: http://www.youtube.com/watch?v=ZLUj-jh_UyQ Man in the Box (Alice in Chains) Crawler’s – http://www.youtube.com/watch?v=26dKVC1hnp8 Harder Better Faster Stronger xD I love boredom. :3

Glassware on April 9, 2013 at 08:56 said: I was surprised not just by them attacking the headquarters and rolling about four full-fledged superheroes to do it, but by what they didn’t do. See, if I was in their position and I wanted to make the PRT’s position in Brockton Bay even more untenable than it is? I’d kidnap the heroes and Director Tagg. And then I’d leave them in a room with Regent for about three hours, with him occasionally exercising his power. Then I’d let them go. Maybe call the news services and have them take some pictures, show the Undersiders being merciful. Bam. As far as the PRT is concerned, that’s four heroes and a director that are now security risks so long as they stay in Brockton Bay. It neutralizes them as a threat while still leaving the PRT as a whole intact enough to do its job and fight the Endbringers. Alternatively, if you do need the heroes, then just kidnap Tagg. It’s his policies that did this, after all, and once he’s gone the new guy will think twice before crossing you. Besides, at this point being kidnapped by the Undersiders is practically a rite of passage for Directors assigned to Brockton Bay. “And all I got was this stupid T-Shirt” I can see Regent giving out a T-Shirt for kidnappings. But then I continue to daydream about Imp doing stuff like stealing one out of each pair of socks. Or copying someone’s handwriting and leaving them notes from themselves they never wrote, messing up their diaries and planners. Or maybe just moving in to their house for a couple weeks…months…just plain not leaving before them. “Girls Gone Regent!” Actually, Regent would be a great excuse.
“Mom, I was going to stay home and study for my big exam, but next thing I know Regent made me go to a wild rave where I met this girl named Rika and tried to recreate the Serenade short from The Lorax.” Pft. Regent’s not my type. Till he controls your nervous system Dear Rika. ;D Then he MAKES you act like he’s your type. 😛 Primarily directed at Packbat and Psycho Gecko: What do we know about Doctor Mother? Interlude 15 indicates that she is a black woman, with long hair. Anything else? Nothing else, I don’t think. I’d have to re-read some interludes though. I know she was there when Alexandria was ‘made’, and I think Battery too. Could be wrong…..Oh and I think she’s either the leader of Cauldron or in the higher echelon of their organization. I think Legend’s interlude makes it pretty clear that she’s in charge. And yes, she was present for both Alexandria and Battery taking their doses. Yes, there’s just not a lot on her yet, except that she’s got Contessa as a bodyguard and Alexandria’s her bitch, along with a portal opener and the super accountant. I got the sense that she was claiming to not be in charge from Battery’s interlude, but I haven’t reread it since. With years gone by, she might have even taken charge since then, though she struck me as more of a pro than to pull that. Highest rung on the ladder we’ve seen. “Doctor Mother” is tagged in three chapters: Interlude 12.5, Interlude 14.5, and Interlude 15 (Donation Bonus #3) — Battery’s, Legend’s, and Alexandria’s. In terms of physical description I found the following: In 12.5 (~2005, if I added up the years correctly): “Leaving? After coming all this way?” The voice was female, rich with hints of a French accent, but the English was probably better than her own. She turned, then stepped a few feet in front of her car to look inside the barn. A woman stood there, dark-skinned, with her hair cut into a short style that was more utilitarian than stylish.
She wore a doctor’s lab coat and held a white plastic clipboard with both hands. In 14.5 (2011): The Doctor: dark-skinned, hair tied into a prim bun with chopsticks stuck through it, wearing a short white dress beneath a white lab coat. (Also in 14.5: the ‘end of the world’ is suggested to be in twenty-three years. This is the same time Accord said in Interlude 20. I thought at the time it was a measure of how long his plans would take to reach fruition, but now it looks like it’s a time limit.) In 15 (1986): A black woman with long hair in a doctor’s get-up was messing with the IV bag. Details? Not many. She’s at least a few years older than Alexandria, possibly not yet graying. Thanks, Bat! I figured that was all we had available, and all I could find tagged for her as well, that didn’t involve just talking (mostly lying). You know, if the Undersiders wanted to be rid of Tagg, they could just choose to make him an utter liability to the Protectorate. Just let Regent take control of him for a while, have fun, and then let him go. Since Regent can assume control any time he wants after that, the Protectorate would have no choice but to get rid of Tagg. Ninja’d by Glassware! *shakes tiny fist* Curse you Longaberger!

UnlikelyLass on April 9, 2013 at 10:30 said: If there’s a theme I’ve picked up on in Worm, it’s one of asymmetric power relations, and the (mis)use of them. Right from the get-go, Taylor & her bullies, Taylor & the school administration, and even Taylor and her father (where there wasn’t misuse, but which helped inform some of the other relationships). Likewise, things like Coil & the Undersiders, the ABB and everybody, the nature of the Birdcage, Dragon’s limitations, the true nature of the PRT as a tool of Alexandria & Cauldron — it all ends up commenting on the themes of power and authority and relative powerlessness. And now we have Tagg, who is a thug calling a thug a thug.
You have the PRT’s shifting moral authority revealing more and more its brutal thuggish underbelly. And you have Taylor, no longer simply a bullied HS student, now suddenly Public Enemy #1 AND criminal mastermind/ruler of the (local) underworld. Is she making the best choices? Probably not. But Wildbow is a good enough writer that he’s actually using the reversal of situation to further explore the theme, not just turning it into Revenge Porn, or making out like Taylor is somehow instantly more perfect in how she exercises her authority than anybody else who has had it in the story so far. Well done, Wildbow! Neat analysis there. The title of this arc fits in there as well. Imago is the final stage of a metamorphosis. Is warlord Skitter the adult form of Taylor though? Skitter represents power, where Taylor was characterized by vulnerability. The previous chapter gave us a shift in the power balance within our protagonist, vulnerability being cast off (the loss of being “Taylor” anymore) with only power remaining. That’s not a terribly healthy way to live, and we’re seeing that right away in this chapter as she heads down a path that’s darker than even the rest of the Undersiders are comfortable with walking. If that’s her “adult form” though, where else can she go? Maybe nowhere, but there is Taylor’s line that “Skitter” never really fit her. She doesn’t know what other name she would want to be called yet, but it does speak to Skitter being the larval stage of what she’s ultimately going to become. Well I am curious what her adult form is. Despite her darker place, I don’t think Taylor will change too much. She is more comfortable with violence now, and is probably willing to kill other villains to protect herself/others, but she still doesn’t like to hurt people. In terms of changing I would like to see her throw a press conference/give an interview.
Give her side of the story, and let the public know some of the PRT’s dirty secrets and let them decide if they’re on the side of angels. There is always a 2nd trigger event too, though chances are low if Noelle’s stomach didn’t do it. Looking at her trigger event, I don’t think claustrophobia was too big a part of it. It was the fact that no one would help her, that the school failed to help her, and just the injustice she probably felt toward the world. Now the authorities are blatantly taking advantage of the fact that she doesn’t like to hurt people and has a moral code, and still claim the moral high ground despite all the things they have done/Cauldron. If something happens to her dad due to it, I can see that same anger/despair at how ugly the world is and she has her 2nd trigger and then has two choices. She can say fuck it, the world is going to treat me like this despite the things I have done and the fact I play by the rules, well let them have a monster. Or she can decide to truly stay on the moral side no matter what the consequences, despite all the crap she goes through. The hair is wrong but otherwise this seems pretty spot on for Taylor at the moment in terms of her troubles with the PRT and lack of “quality time” with Brian. http://phobso.tumblr.com/post/47527420219/evil-villain-is-evil I think an original character by Phobs (the artist) but I could be wrong. I believe the character’s name is The Citizen, a villain from a Russian comic called Major Thunder, drawn by Phobs. Or at least, that’s what I’ve gleaned from the tumblr and assorted media I’ve been able to dredge up. http://25.media.tumblr.com/b9b24532012ee84e500328ceef94458b/tumblr_mksxlwUwcB1s4s0zko3_500.png seems to be a ‘bird form’ of the same character. http://25.media.tumblr.com/c6ce2912bb50ab18bad205838bfdc454/tumblr_mksxlwUwcB1s4s0zko1_500.png is the main cover of Major Thunder issue #7.
http://25.media.tumblr.com/84c35c2d33b96dcc4b2a3a42b231ce19/tumblr_mk8kh9u4sp1s4s0zko3_500.png Marguerita, a female white crow and pet of the Citizen, trying to wake a sleepyhead. :3 This hair a bit better? The thing about this chapter that stood out most to me (besides the title and how hurt Taylor seems) is the shifting relationships within the Undersiders. Since the beginning Lisa has been on Taylor’s side, with Brian coming in at a close second, with Alec seeming ambivalent but ultimately supportive, Aisha being civil but wary (regarding her brother’s feelings), and Rachel fluctuating between outright hostility and cool respect. But in this chapter (probably because I am currently in the Parasite arc during my rereading) it highlights how much that has shifted along with Taylor’s position and feelings:
– Alec seems to be coming into a right-hand man status who genuinely seems to care about her as a friend and teammate, something that was first explored with this line “Says the person who tried to slit my teammate’s throat,” Regent spoke. (Parasite 10.1)
– Aisha stopped antagonizing Taylor as a whole and has grown into her own as a mature (relatively speaking) member of the Undersiders who also holds some compassion for its other members. I believe on the TVTropes page for Worm, there was an entry for Band of Brothers, and this is reflected in how she initially was petulant, annoying, and really only relevant for her ability. Now, she actually has some presence both within the group and without (she was referenced in the previous arc with defending civilians from gangs)
– Rachel, to keep this brief, has been willing to follow wherever Taylor leads as of late. Keeping in mind her whole canine-mindset, she appears to have wholeheartedly accepted Taylor as her alpha and has been looking after her the best she can.
Basically, I see those three backing up a new, more dangerous, less merciful Skitter and possibly causing increasing friction with the other three members.
WMG dictates that I also say that it’s possible the two groups will have a schism and maybe even a fight. tl;dr I love the changes the Undersiders have undergone as they have fought, becoming more family than just a ragtag bunch of misfits, even if that means that the status quo will be upset within their group.

Lost Demiurge on April 9, 2013 at 13:18 said: Well now. Taylor was right. The strike against PRT headquarters was the mildest form of punishment she could scrape up. And she got a chance to talk with Tagg, see who she was dealing with. Hoo boy… “Strong man, making hard choices! My way Semper Fi Do or Die! Rah!” Precisely the worst possible person for the PRT to put in here. I halfway wonder if it’s not deliberate… Cauldron pulling a Machiavelli ploy, to make the handoff of the city to Taylor a bit easier. True, he rattled her a bit, but not so much that he scored a victory there. No victories there. And she kept composure well enough that nothing was lost. Glad Lisa went with, too. She made the right call there, was the best person to rein Taylor in, keep her together. Now. Now Taylor has to think above the short term. Now, if this arc is to live up to its name, she’s going to have to choose a new goal beyond mere survival. She’s going to have to sit down and figure out what she WANTS, and how to get it. And then how to hold on to it, make it sustainable. There’s one path to winning here that I can see, and Accord saw it too. If she can get the city on her side, drive the PRT to break more and more of its rules, show that the heroes are no better than the Undersiders and in many cases often worse, then she can turn this thing around. Of course, things are going to get tougher before it’s all done. Gonna be darker, and I honestly don’t know if Danny will survive the arc. I look forward to seeing how this one goes!
As part of getting the city on their side, the Undersiders should pull a Coil and try covertly replacing the directors in the bay’s PRT branch with their own people, perhaps get Tattletale elected mayor. Also, this line was unclear: As with Dovetail, I’d managed to make enough progress that she was more or less out of the fight. By *she* I assume you meant Sere, so yeah, sex change Also also, Triumph is mentioned but isn’t tagged and nothing happens to or with him. Add a line indicating both he and Sere were tied up, mayhaps? (Pardon any pretentious soundingness, totally unintentional) From what I’ve noticed, names aren’t tagged unless they actually make an appearance. Mentioned offhandedly by name as being in the area but nothing happens with him = not tagged, so people looking for Triumph references aren’t stuck looking at every page his name is spoken/thought. Which would make a lot of sense, but Sere is tagged and is said to have been in the same area as Triumph and had just as much influence in this particular chapter (read: none) Also, sometimes capes have been mentioned with no appearance and have been tagged; I’m specifically remembering the end of Extermination, where Lisa discusses Coil with Taylor and his name is tagged even though he isn’t even around Sere was more than mentioned, he was dealt with. His powers were referenced, and how he was dealt with was referenced. Unless my tired eyes are skipping something, she didn’t do jack all to Triumph, from what I can see- just noted he was there in the area. In the case of Coil, if he’s being discussed- focussed on- then it’s relevant and tagged. His aim was really off lately and he dried it so much it crumbled off.

archivis on April 9, 2013 at 13:49 said: Wow another good update.
🙂 Alathon on April 9, 2013 at 14:45 said: I’m kinda hoping this isn’t Taylor’s final response to the PRT and Protectorate, just the logical one she needs to get on the news, to retain respect and avoid Brockton Bay getting dogpiled by villains who smell blood. I feel like she could do so much more to destroy their ability to function, drive them out even, just by going after their equipment alone. Damned if this affront, jumping her in school and unmasking her publicly, doesn’t cry out for answer.. and I don’t feel like roughing up the PRT Director and employees really covered even a tenth of it. Tagg wasn’t terribly impressive, came off with a serious case of small man syndrome. He says there’s a war on, well sure, it’s his war for relevance, trying to keep everyone believing that people like him deserve their authority. Unlike Piggot, who was at least more-or-less competent, he thought nothing of authorizing a confrontation without context, without even knowing where the fight would be, what the potential collateral damage could be. If Tagg comes to a bad end, I don’t think anyone’s gonna shed a tear. I’m looking forward to the interlude, have to imagine there’s a lot of very large talk at the water coolers across the board. Who’d have thought we’d ever be nostalgic for Piggot’s level of competence? Yeah, he really seems like he’s been set up to lose. There are places where a hard-liner would be perfect, I think Brockton Bay is pretty much the opposite. Alexandria can’t really think that this is a winning move for the PRT. I think she’s written off the PRT, she said as much in her interlude, and is now burning them down on the way out. That’s the worst of it really, it was such a completely senseless move. The PRT gained absolutely nothing from outing Taylor. Instead they lost a great deal. In short, this was pure spite. 
Admittedly if the gamble had paid off it might have been slightly better (still terrible but easier to spin, and damn if I don’t want to see Skitter in custody. Maybe with a dash of Joker-style complete relaxation with it. Just quietly waiting for the break out, chatting and getting beat up) for them but given what they risked…just not worth it. And yeah, not knowing where it was happening made this a blank check command moment. I’ve known soldiers who really did tell stories about that kind of incompetent lunacy. She could have been visiting her sick grandmother in hospital. She could have been in a factory full of high explosives. She could have been in her base/ORPHANAGE!!! That last one would have played incredibly: “Tonight at nine, PRT slaughters orphaned children. Local girl Taylor Hebert tried to defend them from the sudden attack but could not succeed alone against their full numbers. However intervention of furious locals turned the tide and now all that’s left is to pick up the shattered fragments of Timmy’s skull.” We need more Parian, she needs a wake-up call and fast. More like she needs to give Flechette a wake-up call, the Wards are gonna be a less and less safe place to be. Yes, a wake up call, preferably in the morning with the sun peeking in to caress the tousled sheets of the bed they share, the passion of the night before now faded into a loving glow as they bask in the warmth of their long-awaited union with that fire waiting just beneath the surface to be stoked once more. acediamonds on April 9, 2013 at 16:13 said: You know, I’ve been wondering why Tagg hasn’t revealed Tattletale’s identity either. Piggot figured it out several arcs ago and Tagg seems petty enough to reveal it even though they wouldn’t benefit as much from it as revealing Taylor’s identity. The only reason I can think of is that Piggot never put the information in the system and hates Tagg as much as we do so she decided not to share. 
Tagg didn’t reveal Skitter’s identity for spite. Tagg revealed Skitter’s identity because it was supposed to maximize her probability of capture. Revealing Tattletale’s identity? That doesn’t help him at all. It’s not that he wants to protect it, it’s that he doesn’t actually care, and not caring in this case works out in Tattle’s favor. It feels very much like it was for spite, though. Revealing who she is in civilian garb just puts her (and by extension, them) between a rock and a hard place. She has nothing to lose anymore. If they’d discussed things reasonably, as she has shown she wants to do EVERY SINGLE TIME THEY TALK, then maybe this wouldn’t have happened. It feels very much like a “Father knows best” situation where the gov’t is the ‘dad’ figure and Taylor the ‘wayward child’ figure. Except the gov’t has no clue what it’s doing but panic-reacting, and the ‘child’ knows all too well what’s best and is getting forced down the worst paths each time because of having to react to really horrible stuff. Not disputing your characterization of Tagg’s attitude, but in 20.5 Taylor specifically asked Dragon and Defiant “Why out me in front of everyone?” and the reply was “A precog told us it was our best option for bringing you into custody.” Or that’s what she was told by Tagg, or whomever she was communicating with on Tagg’s payroll who was acting as a relay between Dragon and him. That last part sounds like the Baudelaire orphans. I wouldn’t exactly say it doesn’t help him at all. Tagg says he’s a scrapper; everything’s fine in his book if he can at least get a punch in there. Revealing Tattletale’s identity shows the public that yes, the PRT are in fact still fighting. Though I suppose he could try and use that strategy after this debacle. Tagg needs to remember that while quitters never win and winners never quit, we have a word that describes those who never win and never quit: idiots. Quitters never win, winners never quit, and Skitter never loses.
I fail to see how outing Tattletale shows anything of the kind. Interrupting an Undersider heist — that would show they were fighting. Seizing Undersider goods — that would show they were fighting. Actually successfully capturing one of them — that would show they were fighting. Outing one of the Undersiders? Wouldn’t even slow them down. It just shows how ineffectual they are that this is the best they can do. “we can’t stop you so we’re gonna TATTLE on you~” and said tattle-ing has the opposite of the intended effect… Did it? Depends on whose intention we are talking about. Who’s intention is to steal second, but what’s that got to do with anything? I’m thinking, what if Tagg does decide to escalate this situation? He reveals exactly who (at least those he knows) the Undersiders are in their civilian identities. We know they at least know who Lisa and Alec are, and for all of their uncaringness, Alec did reveal that he harbors a minor fear of Heartbreaker paying him a visit. Possible plot hint? Either way, it’s cool to think about in a time where there are no /obvious/ Big Bads for this arc cycle. Well, that’s discounting the leaders of the three villain groups, who are obvious enough, but I don’t think the Accord route is happening anytime soon, Valefor and Eligos might be tough, but I don’t doubt the Undersiders under Skitter, and Butcher and the Teeth might be a war on its own, but Butcher doesn’t have that *umph* factor to hold a decent amount of expectation and fear as a Big Bad. Night_stalker on April 10, 2013 at 22:30 said: Long-time follower, first-time commenter… Anyway, I can’t see Tagg retaliating in any meaningful way. Odds are the Undersiders battened down the hatches the instant the utter SNAFU that was the school attack hit the news or Skitter made a phone call, which probably meant moving any significant others or relatives to safehouses and the like, or going off the grid.
Not to mention, the higher-ups are probably going to be suppressing the urge to reassign him to Antarctica for this screwup, especially seeing how the PR department is probably getting ready to use Tagg’s picture for target practice. I mean, first an attack on a school, then they get curb-stomped at their own HQ? Not going to do any wonders for morale, or for their image. I’d be surprised if Miss Militia didn’t have some choice words for him… Then we have his attitude. Seeing how he was in the Army, I’m guessing he never really understood the concept of “Rules of Engagement”, or “Unacceptable casualties”… I mean, this could’ve very easily been a bloodbath; if they weren’t holding back, well, the PRT and capes would’ve been crippled, badly. Which, while it’d prove his point, would also show that the Undersiders aren’t to be screwed with. And then there’s his choice to leak it. Dinah suggesting it or not, that’s given them nothing but headache and nightmares. All it’s done is burn what little goodwill they had, and alienate every villain in the tri-county area, if not further. It’s like Machiavelli said: “Upon this, one has to remark that men ought either to be well treated or crushed, because they can avenge themselves of lighter injuries, of more serious ones they cannot; therefore the injury that is to be done to a man ought to be of such a kind that one does not stand in fear of revenge.” Or in layman’s terms, if you injure someone, make sure the person you’re attacking is incapable of retaliating. Welcome, dear militarily-minded newcomer, to the comments section, where we put the sis anal in analysis. Y, you ask? Because we hear she gets off on wordplay. If you enjoy a good time, I suggest midnight, if you’re looking for debate, try the end of a fishing hook, and if you’re in denial about how far I can take these, then you should know I can go straight up Mediterranean on your ass. Like an Athenian.
I’m Psycho Gecko, and like that drunken binge after exams when a friend roped you into trying a club you’ve never heard of, I’m the guy you’ll either regret knowing or find slimy, yet satisfying. Lion King aside, let me be a truthful king Gecko and say that we’re always glad to have another person along who just can’t get Worm. A natural enough consequence of some of the beef they sell at fast food places, as well as for reading this amazing story we all enjoy. who just can’t get enough Worm* The issue with Tagg, I think, isn’t just that he’s a thug. It’s that he’s a stupid thug. And he apparently forgot that Taylor’s dad is a well-known and respected figure in the local dockworker’s union, and therefore can thug back just as hard if needs be. I also can’t see him getting along well with any of the local heroes, and the only one who might theoretically be willing to put up with his attitude (Assault) is also going to hate him because there’s no way Tagg treats a former villain with anything even remotely resembling respect. He’s pretty much the Wormverse equivalent of the Pointy-Haired Boss. In fact, I think I’ll just call him PHB from now on. If Dilbert exists in the Wormverse, his employees probably already call him that when they don’t think he’s listening. On a different subject, I hope that Dragon comes out publicly as an AI soon, because Skitter’s making some very incorrect assumptions about her thanks to not knowing that vital detail. His attitude of ‘We’re right, thus, anything we do will ultimately be justified by history’ definitely seems pretty questionable. Do we have a list of the current heroes in Brockton Bay? Miss Militia… Well, I’d respect her a hell of a lot less if she doesn’t find this an affront. She seems like a Captain America sort, the kind who believes in the spirit of America, rather than the current administration. 
Clockblocker has shown clear willingness to argue with Skitter, and the argument (with a girl who is not interested in imposing her views, and while being very unfair in declaring ties) could only come up with a ‘two to one’ moral victory. I feel hope because of that. It feels as though he understands that the area he’s coming from isn’t entirely tenable, and that because of that, he may start to feel sympathy. I’m not sure who else is actually still a member of the heroes’ team, though. But the heroes, by and large, aren’t stupid thugs. They may be a bit more secure in their righteousness than they deserve to be, but they got into this business because they wanted to help people. And they’ve worked alongside Skitter. Their greatest flaw at the moment is pride, feeling ashamed that a villain is doing more to be the hero than they are. But they’re going to have to start trusting her eventually, and it’ll be a glorious day when they do. Probably also deeply tragic, but glorious. A good point. Pissing off the local villains is one thing, but pissing off organized labor? Hoo boy. I propose that the local union-affiliated mobsters change the term from “sleeping with the fishes” to “going on a date with Cherish”. Ahahahaha! Nice one. Psycho Gecko presents “You can’t unthink it.” Well, I suppose if they strap one of those fleshlite thingies to her… …So they can sob and SWEAR that they’ve never had THOSE problems before, it’s not her, it’s them… Even as her crushing despair crushes their self-worth even further, until they just give up on life, the mortification at being emasculated amplified by her emotion control ratcheted up to eleven. So, like any other date? I think I found what Skitter needs to breed http://theoatmeal.com/comics/mantis_shrimp Weaponized mantis shrimp would get her a kill order. Of course, enforcing it would require that they get past the mantis shrimp first, so it might not matter.
taliesinskye on April 10, 2013 at 15:18 said: They breathe water, sadly. Hasn’t stopped Skitter from using bugs before; have a few tanks of water moved into place by crabs, then have the mantis shrimp wrapped up in spider silk and lifted to the target and either used until dead or done in strafing runs and returned to the water. chaos985 on April 10, 2013 at 22:10 said: “OH MY GOD I’M BEING ATTACKED BY RAINBOWS!” Oooh, like we were separated at birth. Still, Psycho Shrimp is a little lackluster of a name, and I have this feeling like a lawyer demon is circling me if I consider Psycho Mantis. The thing with the claws reminds me of the Pistol Shrimp, who pull the same trick, creating a pressurized bubble in the water that very temporarily becomes as hot as the sun. They’re not as flamboyant as Mantis Shrimp though. Squivler on April 11, 2013 at 09:43 said: Same family I think. Hmm, cape with crustacean-based powers. Well Skitter can already control crabs… Crow on April 10, 2013 at 00:16 said: Why intimidate Tagg, when Tattletale can find the files? Take the juicy hard drives, crack them later at Tattletale’s leisure. Or sell them. Tagg is nothing but trouble, yet Taylor didn’t put a bullet in his head from his own gun. Foolish. Also, crippling their infrastructure would have been such a better attack. “It’s bugs in the system. No, really. They ate the insulation off and short-circuited every vehicle, computer, tinker device, and camera in our headquarters.” Also of note, whoever takes down Butcher may well end up inheriting the powers. That just seems to paint a target on Butcher. Not since it was established that if you weren’t a part of the Teeth, the voices would drive you mad; doesn’t seem worth the trouble. You really think Taylor ought to have killed Tagg? Even if you leave the moral issue aside, what good would it do? This is the third director that the Undersiders have had to deal with, and none of the previous ones were any better.
What makes you think that whoever they replace him with would be any more reasonable? Piggy may have been a bigot, but she was smart. Tagg is just a dumb but sly brute. He sees value in PR, but only so it slanders his enemies and raises his name on high, instead of serving the people he’s meant to protect. It also sends a stronger message. Fuck with an Undersider’s identity or break the agreement, get put down. I don’t pretend that it is moral. It is, in my opinion, a stronger play that would protect Taylor and many others. If her intent is to hit the suits calling the shots, then really hit them. Don’t just leave welts. It also makes the BB PRT Director position even less desirable. “So how long did my predecessor hold the job?” “Less than a week.” “Yeah, no, thanks.” I believe they did in fact cripple their infrastructure. Bitch went to town outside destroying all their big vehicles, Skitter went inside and bugged the place out. We know the bugs can cut through wiring, and that detail isn’t very important to be included really, but it can be assumed to have happened. Not to mention any emergency medical supplies, foodstuffs, and survival gear they may have had in the building has just become contaminated. They also know that Skitter can use the bugs to eavesdrop if she pleases, so the PRT is going to have to fumigate the place extremely thoroughly. Bonus points if Taylor infested the barracks with bedbugs. That does raise the question of why Tt decided to come along. I can understand if it was just for Taylor’s sake, but out of all the Undersiders, I expect her to have an ulterior motive. Tagg’s right. In the long run, not many will remember the Undersiders’ brief reign, unless they rule for *years.* Even then, maybe not. And it’s clear that they don’t have a long-term plan. Plus, regarding Cauldron… you could argue that they’re “good” from a utilitarian perspective. How many have the Triumvirate alone saved, indirectly, fighting against Endbringers alone?
I wonder if Skitter will kill Butcher, and that’s why she’s different when the apocalypse rolls around. That would actually be pretty cool. Shatterbird’s theme? http://www.youtube.com/watch?v=NRa_rEEav2E Here is a plot device. Rig it so Mrs. General Tagg kills Butcher. That way when she starts hearing voices and having uncontrollable homicidal urges she will have more than coffee and donuts for the boys. The Undersiders don’t have a long-term plan? Sure they do, it’s called “survive till next week, then we’ll make a new plan”. Give them a few months of breathing room, like they might get in a couple of years if the world doesn’t actually end, and then investing in some long-term goals becomes pretty reasonable. As for Cauldron/the Triumvirate? Even from a utilitarian perspective they’re looking pretty shaky. If the world needs more “good” parahumans, then the way they’re going about producing them seems pretty inefficient. I’d also like to see even a flashback where the non-Scion heroes actually drove off an Endbringer. So far the batting average for the world seems to be 0.000 on that score. The Triumvirate keeps getting credit as being the ones that the world will collapse without, but all we’ve seen them do is buy time for the real problem solver to show up. As a note: Armsmaster and Skitter also managed to buy time, so, yeah, not seeing Alexandria being all that critical really. Here’s a thought. Tagg tries that ‘take them in for questioning’ thing with people in Skitter’s territory and Skitter steps in to defend her people from the PRT. Last night I had a dream that all of us readers and wildbow got together for some kind of Worm convention. Nobody wore nametags or identified with their commenter names so that we didn’t know who anybody was. We were mostly trying to figure out who wildbow and PG were. I figured out who wildbow was by finding the one person not arguing about the morality of the characters XD. Never did find PG though. You’re clever in your dreams.
Honestly, that’s how I’d do it, as an outsider. Also, you were carrying a bag of ice. That may be me unconsciously stereotyping Canadians? If so, I apologize. I live in Texas, so I just assume that people who live much farther north really like cold things. Man, this really makes me want to have a Worm-Con or something. That would be so damn cool >.< Jakinbandw on April 10, 2013 at 15:04 said: I live in Winnipeg, Canada and the snow is melting right now. The rivers are still frozen though. I actually wanna turn my air conditioning on; it’s too hot and the building I live in uses the old-timey water heaters, so we either have them on full blast making the apartment too hot, or not on, making us freeze. -.-; Damn, Wildbow! They know our plans to send me in as a decoy with a bag of ice (because I’m a COOL customer) Nice ice baby, dun dun dun dun dun dun dun dun. Give them the idea like that and the Canadians will try and blend into the Southern U.S. by drinking Ice-T. Or even start flavoring their liquor on the rocks with some Vanilla Ice. But still, icy what you did there. Instead of a bag of ice, look for the Slurpee cup. There’s something I’m having a difficult time buying. Alexandria was the head of the PRT, and set it up specifically to blunt any damage between capes and the rest of the population. But… the PRT, it seems, is practically overwhelmed with bigots. How can this be? I’m not sure about this. If anyone else were in charge, I’d get it – but Alexandria is practically the essence of intelligent and observant, at least according to her description… and the PRT is her baby. There’s no way she could have overlooked this problem. What’s going on? Honestly, I’m surprised that Tagg is the first person we see who acts like this. The PRT may have been led by and revolve around superheroes, but Tagg is exactly the kind of small-minded, self-important bureaucrat that tends to rise high in government organizations.
Because they don’t care about doing the right thing, they’re better at bootlicking and backstabbing their way up the ladder while their peers are struggling to solve real issues. These are the guys who get people trying to make a difference fired so that they can take their office, who do things like run Guantanamo Bay and turn over POWs to foreign torturers so that they have extra bullet points on their resume. This can’t be emphasized enough. Corporate/bureaucratic environments reward all manner of unethical behavior and tend to punish employees unwilling to give themselves a leg up by tearing their co-workers down. “Corporate/bureaucratic environments reward all manner of unethical behavior.” No. No, they don’t. There are plenty of corrupt people in the world, but it is not the predominant trait of corporations or government entities. I’ve worked at Fortune 500s, and experienced a few small pieces of the government/military world, and the only universal constant for behavior between them was that they were all different from each other – a difference usually determined by the people in charge and the corporate culture. SOME rewarded bad behavior (a coke-snorting drill sergeant comes to mind…) while OTHERS rewarded honesty, and still others were blue and orange morality which doesn’t match up with any thorough definition of honest OR corrupt. The culture of the PRT higher-ups seems to be one of clear bigotry. And it’s jarring. It doesn’t make sense. It should not be a given that this is true. There’s something else going on here. That’s what I’m saying. Is it possible that, in the Wormverse, veterans who can’t adjust to civilian life, rather than joining the Foreign Legion or PMCs in some random country, just go to the PRT instead? It’s a self-selecting pool of unstable individuals.
Actually, corporate environments DO engender unethical behaviour, because of how people are promoted beyond their level of ability rather than staying where they are most useful. It’s known as the Peter Principle. You end up with incompetent people in the higher echelons of management, which is what any corporation will tend toward. Not out of some desire to tear itself down from the inside, but because competence is rewarded with a higher-level position until the person who got the better position is no longer capable enough to keep earning higher and better positions. A good example is a superb engineer who is taken off the floor and put into the role of a supervisor because he’s done such a good job. That requires a whole different set of responsibilities and capabilities within the management branch of talents, and many people who are raised up like that don’t have the experience or inherent ability to manage well. Even though he engineers really well, in this example, he cannot manage and is seen as mediocre, and in fact would likely do far better if he were able to return to his old job than to continue as a supervisor. There are going to be those whose skillsets are optimal for supervisory positions, but they’ll prove to be good enough at their jobs that they, in turn, will end up getting promoted, until they eventually aren’t able to handle the work they are to do beyond a mediocre level and stop getting promotions. Thus, the entire business stack tends towards mediocrity. Certainly it won’t just go to shit all at once, but over time the current model of business is set up to move that way. Executives have known about the Peter Principle for a loooong time.
Let me throw a few other terms at you: Kaizen, Six Sigma, horizontal management structures (http://www.businessweek.com/articles/2012-04-27/why-there-are-no-bosses-at-valve), TQM… Just because something has an organization chart does not mean it’s prone to inefficiency, corruption or bigotry. DARPA is a fine example: a government agency that is not what anybody really thinks it is, is amazingly productive and is probably a good model for what a *real* PRT would look like: (http://www.amazon.com/Department-Mad-Scientists-Remaking-Artificial/dp/0062000659) Fact is, there’s a competitive structure between government agencies that tends to discourage blatant idiocy at higher levels. It also tends to discourage bigots and the corrupt. Not perfectly, but much better than the popular conception of things, as illustrated by these responses. So, I repeat: I find the bigotry and tactics of the PRT jarring and questionable, given that their leader was a hyperintelligent cape whose goal was to reduce friction between norms and capes, AND given that government agencies are NOT always inherently prone to collecting idiots of this nature – at least not in real life. Small-minded, self-important bureaucrats may make for interesting literary adversaries, but IRL they are not statistically favored over open-minded and perceptive bureaucrats when promotions happen. This is a characterization device, not a fundamental law of employment. Well, there’s the reason Alexandria *says* she set up the PRT, and then there are her actual reasons. We’re privy to one, but the evidence we’re seeing points to the other being different in some way. Or, possibly, an organization such as the PRT is simply too big for someone like Alexandria to micromanage, and with the level of power it deals with, this is the best she was able to manage with the resources that were available. His eyes studied me, as though he were making an assessment. His words were gruff, the gravelly burr of a long-time smoker.
He very deliberately set the gun down on the desk, then replied, “Because you are not the law. I am the law.” “Has to be. The ones at the top handle the compromising. They assess where the boundaries need to be broken down, which threats are grave enough. My job is to get the criminals off the streets and out of the cities. I am the law.” “You started a fight in a school. What are you, a 12 year old?” “Didn’t know it was a school until the capes were already landing,” he replied. “Had to choose, either we let you go, and you were keeping an eye out for trouble from then on, or we push the advantage. I AM THE LAW!” “Sure, go ahead, trust the people who were so right about catching me before. They know all about me, don’t they? You ever notice how I beat the motherfucking odds from a goddamn oracle?” Someone behind me screamed as a group of my hornets flew to him to deliver a series of stings across his face. Behind them flew several more hornets with an MP3 player that blasted “Flight of the Bumblebees”. “Ride of the Valkyries is overused. Besides, inflicting pain isn’t the point.” “Seem to be doing a good job of it. Almost as good a job as I am doing at BEING THE LAW.” he commented. “There are heroes on their way back from patrol, your guys called them in. But there’s also news teams on the way here. We called those guys in. They’ll find your employees covered in welts, every PRT van damaged or trashed. Your employees won’t be able to get any cars out of the parking lot, so they’ll have to walk, which will make for some photo opportunities. Maybe offer the women here a calendar deal. A handful of heroes will have full underwear. You can try running damage control, but some of it’s bound to hit the news.” “Uh huh, have I ever told you how I am-,” he said. “I couldn’t let you get off without a response from us,” I interrupted. “I suppose that makes sense.” “Game? Little girl, this is a war,” his voice took on a hard edge. “War? Huh, good god, y’all. 
What is it good for?” I said. “Absolutely everything. You can’t win forever, because I. Am. The. Law,” he said. I had a response to that. “This isn’t a war. In war, people can surrender and go back home. You, you little shit, decided to make this where I can’t surrender. You gave me no choice but death or life imprisonment. You want to know what’s next? What’s next is I order some dung beetles to crawl in there and get takeout, that’s what’s next.” “The world ends in two, bitch. Try me.” “And like a lot of institutions, you’re full of crazy people with delusions of being great leaders in history.” “No. They didn’t pick me to head this city’s PRT division because I’m a winner, Ms. Taylor. Because I’m not. They picked me because I’m a scrapper. I’m a survivor. Sure, I wasn’t here in the city when all the bad shit went down. I was off patrolling the mean streets of Greenwich, Connecticut, taking down foursomes strapped with nines coming off the green. I’m the type that’s content to get the shit kicked out of me, so long as I give the other guy a bloody nose, like that time the old woman rushed the buffet at the country club. I’m a stubborn motherfucker, I won’t be intimidated, and I won’t give up, no matter how much the police, FBI, CIA, NSA, ATF, DEA, ASPCA, KFC, NCAA, EPA, ACLU, NAP, GOP, TVA, TNA, WWE, WWF, WCW, NAACP, army, navy, air force, and the so-called ‘President of the United States’ want me to. The last few Directors in Brockton Bay were like syphilis that you treated with penicillin, but I’m here to stay like herpes.” “The last few Directors gave me the same speech. One of them did this job with one kidney because she was one of two people to take on Nilbog and live. The other could split into two different universes for twice as many tries at me and was the other one to take on Nilbog and live. You’re his replacement. Good luck finding the body.” “Jenny Craig is a system. Obesity is an epidemic.” I felt a knot in my stomach.
“Oh yeah, pull in a local labor leader whose daughter was bullied into becoming a villain. Talk about being a dumbass. How many times did your parents have to drop you on your head for you to become smart enough to read?” “It’s a war of attrition,” Tagg said. “I’ll find the cracks, I’ll wear down and break each of you. If you’re lucky, then five years from now they’ll remember your names, speaking them in the same breath as they talk about the kid villains who were dumb enough to think they could conquer a city. Soon enough, you’ll go the way of Genghis Khan. Some obscure, barely-known wannabe conqueror who tried to defeat China in the Boxer Rebellion. But you’ll notice, they still wear tighty whities in China. I! AM! THE! LAW!” I approached the desk and turned around the photo frames. The second showed Tagg with his wife and two young women. A family portrait. Given the family’s looks, you could be forgiven for thinking it was a Picasso. “Two, going to universities halfway across the world. Because I’m such a loyal, ooh rah, goddamn American.” His mindset was all ‘us versus them’. Good guys versus the bad where “good” was defined as the institution. Don’t let the smoking gun be a mushroom cloud, that kind of thing. It wasn’t much, but it served to confirm the conclusion I’d already come to. Dinah had volunteered the information. Whatever else Director Tagg was, he wasn’t the type to abuse a girl who’d been through what Dinah had. He was just the type to abuse a girl who’d been through everything I had, totally ignoring the fact that I’ve been picked on, tortured, beaten on, hit on, had crappy lesbian fanfiction written about me, and I went toe to toe with Leviathan, Jack Slash, Mannequin, and Echidna while he sat behind a desk somewhere using his service pistol as a fleshlite. “Wow, some war here. What, they fly you in with the 82nd Airborne, give you a special posting with body armor? 
You at risk of being captured and raped by a bunch of medieval fundamentalists?” I said. “And you stand by your husband? You buy this bullshit?”

“I’m illustrating a point. I had your gun right there, could have shot you, and rather than do the smart thing, you reached for an ankle gun. I could have shot you, but that would have gotten blood on my costume, and you don’t get blood on my costume, motherfucker. Are you out of your mind?” I asked.

My bugs drifted away from Mrs. Tagg. She was uninjured, without a welt or blemish, but with several penises drawn onto her face by bugs tugging along markers. She backed into the corner as the bugs loomed between her and her husband.

“Not sure. Common sense doesn’t change my mind in the slightest. I am-!” Tagg said.

I didn’t respond. The swarm shifted locations and dogpiled him. Stubborn as he professed to be, he started screaming quickly enough. If the smell coming from his trousers was any indication, the dung beetles would feast tonight.

Funny. And it really illustrates some important points. But let’s be realistic here, Taylor got an empty victory. Well, in her place I would walk by the Prostated Regular Teochrats every day and destroy some of their machines. Bug their computers, destroy delicate parts of their cars. Clog the bathrooms … Just to make it expensive for the government to keep them working.

It actually crystalised a thought for me. Namely that Danny Hebert may be fragile when it comes to his family, but professionally he’s still a labour spokesperson, one who stuck it out through the Bay’s worst times. In other words a) He probably has a fair bit of credit with the kind of people who will be appreciative of the Undersiders reinvigorating the city. and b) He’s unlikely to be all that fazed by some police grilling. I got the impression that the union situation has been pretty bad; for all we know he may have undergone that kind of thing before.

Those are actually REALLY good points.
I imagine the interrogation would be more about building up a sense of intimacy, then exploiting it. You know, “Help us help your daughter,” (false) promises of leniency/amnesty, and so on. His job is organizing labor in a city that’s pretty much post-Katrina New Orleans with superpowered white supremacists and serial killers. I have no problem whatsoever with believing that Danny, Kurt and their posse can be more hardcore than this desk jockey dickhead and his goon squad. I mean, shit. It’s barely out of the Slaughterhouse Nine rampage when they were slamming back beers before heading to yell at a town meeting. What makes Tagg think that people aren’t going to stir shit if he starts messing with them?

I’d rip the hard drives out of as many computers as possible. Good source of info, AND it’ll piss them off to no end.

What is this I don’t even…

I do these sometimes. People have generally liked them, except for the one with the Travelers. The part about Skitter having crappy lesbian fanfiction has to do with other rewrites I’ve done. Which reminds me, I think there was one heavy on the Pulp Fiction and Zero Wing that I need to hunt down. At least I know which phrases to check for. In fact, looking back at the first one of these I did, waaaaaay back with Legend’s speech before fighting Leviathan, that prompted Wildbow to say something about me having my own story.

I know what it -is-, but that doesn’t make me boggle any less than I did reading it initially. >.o;

We need a button to filter Psycho Gecko comments. I haven’t decided if it’s to show them, or hide them, but we really need a filter.

There is one. But it’s a big orange button at the corner of every one of his comments that reads ‘unapprove’, and only I can see it.

Oh god no, please don’t, Wildbow, PG’s comments are hilarious.

If you want to see PG’s comments, just use the find function on your browser.
Fans on April 11, 2013 at 02:41 said: What if he does do that and what we see is only an edited Psycho Gecko? The pure stuff may be too ruinous for sane people’s health.

It’s gotten so bad that sometimes the comments I post that don’t have any links in them will be held up to be approved. But psh, you know you love me. In like a strictly heterosexual, “Grunt just fell out of a hospital, climbed the Krogan memorial statue, lit a C-Sec aircar on fire, jacked the flaming cop car, and avoided getting put in containment foam because he was on fire” kind of way.

I believe there IS a way to highlight individual users the way your posts are highlighted, but I do not know the method in the particular plugin you are using.

Think it’s cos Wildbow is the author.

I kinda wish PG would just link his stories instead of posting them in whole on the forums.

You wound me. That’s no fun, especially since there’s no link to World Domination in Retrospect here. I thought if I asked you’d set that up for me like with Tieshaunn. I guess that’s just the way the bowling ball bounces.

Please, let it stay as it is. PG tales are a fun way to wait for the next chapter. Those that do not want to read them may simply skip his comments.

Mishie on April 10, 2013 at 10:25 said: What I’m curious about is what Tagg is actually planning on doing to deal with the Undersiders, because sure, threatening Danny sounds big and scary, but what exactly is it meant to do? Even if he did go through with the threat of bringing Danny in every time the Undersiders do something, all it does is waste his time, and if they do anything more than that they’ll cause a shitstorm. Thing is though, Tagg isn’t stupid; sure, he took a gamble with targeting Skitter as Taylor, but he was taking advantage of a huge opportunity that he didn’t know would ever happen again, with Skitter being isolated from not only the rest of the Undersiders, but most likely a large amount of her bugs.
When you add in the fact that a reliable precog told him that they had an almost 100% chance of capturing her, it was the perfect start to his Directorship. Sure, there would have been some bad PR about targeting her when she wasn’t Skitter, but it would have been easy to spin it so that she was attacking somebody/something whilst she wasn’t in costume to be stealthy or something.

The way I see it, the PRT probably has teams of profilers going over what they know about the Undersiders and are using the new information about Taylor to try to get inside her head to figure out how to take her down. Hell, even though they didn’t capture her, we’ve already seen how much it’s messed with her head, and when you add in the threat to Danny, things may start to fall apart for Skitter. Or it will just force her to become stronger to deal with the new problems, 50/50 :P. AA

Bacontaters on April 10, 2013 at 11:54 said: Happy Birthday Wildbow!

Thank you. 😀

Happy Birthday, Wildbow!!! Thanks for the many gifts you give us! Be well!

Feliz aniversário! (happy birthday) Muito dinheiro no bolso e saúde para dar e vender. (Lots of money in the pocket and health enough to give and to sell).

Yes, happy birthday! Happy wormday! Happy all the days!

Here’s your birthday song 😀 Psycho Gecko puts on a vinyl version of the 1812 Overture, except that when it comes time for the guns to fire the first five times, each one coincides with an explosion at Mt. Rushmore replacing a face with Tattletale, Grue, Regent, and Skitter. Followed by everyone wondering who the younger girl is who got her mug blasted onto the mountain.

A big cake is wheeled in and the next blasts coincide with first Scion, then Assault and Battery, and then Miss Militia all popping out of it each time Wildbow tries to go for the cake, being helplessly knocked away each time while going “Nuuuuu!”, leaving a much smaller cake behind.
*boom boom!* Cut to the Statue of Liberty destroyed and the Simurgh standing there with a tiara on, holding a torch. *na na na na na na na, boom boom!* Behemoth bursts out of the ground under the Eiffel tower while wearing a beret, destroying it. *na na na na na na na, boom!* Leviathan crashes through the Tower of London and grabs a human skull delicately, kneeling and raising it up as if to speak to Yorick. *Boom!* Hookwolf pulls himself out of the water, covered in hooked fish. *Boom!* Jack Slash preps and cuts them into sushi. *Boom!* Bonesaw’s spiders bring out the trays of sushi.

The rest of the booms are concerned when they all rush to your birthday party, slipping on deflated cake and crashing into each other. Simurgh winds up on Behemoth’s shoulder, singing something about 15 Men on a Dead Man’s Chest, while Behemoth wears an eyepatch and Leviathan is left hobbling on a peg leg with a hook in place of one claw. Jack Slash steps on a rake, smacking him in the nose. Miss Militia accidentally shoots the punchbowl and gets knocked down as it flows out, and is tripped over by a stumbling Jack Slash. Hookwolf accidentally activates a magnet, flying out of the room. He’s soon joined by Assault, winding up in a compromising position. Battery starts throwing lit birthday candles at Hookwolf’s head for messing with her man. Bonesaw hops on Skitter’s back for a spontaneous piggyback ride. She’s batted off by Tattletale, who tries to get Skitter into a kissing contest between her and Grue. Her attempts are interrupted by Imp, who plants one on Skitter and activates her power before Grue can wallop her with a plate of sushi. Instead, he hits Regent, whose power backfires and causes Tattletale to start disco dancing. Above the chaos is Wildbow, taking notes, being held aloft by Scion.
The pair share a look, with Scion rolling his eyes like “I don’t know what’s up with these wacky people.” And just then Psycho Gecko leaps out with a “Ta daa!” holding a big banner that says “Happy Birthday Wildbow” until Scion zaps him, leaving only a pile of ash and the Happy Birthday banner behind.

Fap Fap Fap.

Happy birthday Overlord Wildbow! 😀

Happy B-day, mate.

Great Greedy Guts on April 10, 2013 at 18:17 said: Happy Birthday! Huzzah!

Oh, alright. I’ll say it too. Happy Birthday, Wildbow, you sadistic little Canuck!

Congratulations! Like a fine wine, you get better as the years go by! And to many, many more!

underwhelmingforce on April 10, 2013 at 21:10 said: The happiest birthday! I would say to take a day off if I thought I could handle the suspense.

I took today off. Been a busy couple of weeks, with Easter weekend, family coming to visit the weekend immediately after, and my birthday/father’s birthday this week. No bonus chapter tonight. Then I hope to do one bonus chapter a week for the next few weeks to get caught up. Or, if people want, perhaps regular chapters on an updated schedule, for less in the way of interruptions to the main narrative.

I would prefer the regular chapters on an updated schedule, too many possibilities for where this is going to wait!

Perhaps a mix, alternating between them. Interludes are easier to write than regular chapters. Less attention needed to continuity, in large part.

A mix is good — but so is extra regular chapters. And so is just regular bonus interludes. And so is building up a buffer and having another interludes week. Any of it would be awesome. 😀

Aww… but happy birthday nonetheless! I try not to work on my birthday when I can help it 🙂

Camo005 on April 10, 2013 at 23:52 said: Happy Birthday, just started reading Worm this past week, can’t wait for the next chapter, which would be… Saturday? I don’t really know your schedule

@Camo005: Indeed – regular updates are Tuesday and Saturday.
I update one minute after midnight, so it’s essentially Monday/Friday at midnight, EST (except a minute or two later than that), sometimes Wednesdays, as time allows.

Happy birthday! Enjoy it. I think you’ve worked your ass off enough, you’re making the rest of us look bad! May you have many more.

Happy birthday! In lieu of a birthday present, I’ll go add Worm to a bunch of trope pages. 😛 (p.s. if anyone wants to help: open the page in one tab/window and the ‘related to’ page in another and look for tropes on the former and not the latter. I’m going to do the A’s tonight.)

This man knows what’s up. I figure an adult human being should be able to pick up on a hint after having it spelled out explicitly for him the fourth or fifth time in a row. 😀 Well, could probably stand to do away with “Needs More Love” at this point.

@Psycho Gecko: …probably. I imagine at this point it doesn’t need that one extra wick.

Happy birthday, mate.

Yay, happy birthday!

Happy birthday… oh yeah, it’s Thursday. Well, hope you had a good one.

You know, I started reading Worm almost immediately after my birthday, funny enough, so this was a wonderful “present”. Can’t thank you enough in a thousand years for this story, so I’ll just say I hope Worm and your birthday brought you as much joy as reading this brings me.

Happy Birthday! (sorry if it’s a day late; because of where I am, the time zone difference makes it confusing)

Three Lefts Make a Right on April 11, 2013 at 02:17 said: It would appear I’m late to the party. … Ah well, better late than never.

What I want to see is what the group Weld was leading does (I don’t really remember what they were called at the moment.) I’ve been wondering how they’re going to respond to the situation. They are technically heroes, but the PRT has pretty well screwed them over by now, on top of pulling a pretty dirty stunt with Taylor/Skitter’s identity.
Honestly, I just think that group provides such an opportunity for drama (the good kind) that they are most definitely going to feature at least occasionally, but I’m not sure where their allegiances lie (also, how do they get funding?)

Judging by the (non-canon) interlude a while back, they’re going to be working as superheroes on a mercenary basis. Probably hiring themselves out to local governments. I expected them to come into conflict with the Undersiders. I wouldn’t begrudge them an attempt to oust them with some good old-fashioned superheroing without the PRT’s dirty tactics. But now trying to take Skitter in would just make it look like they’re cleaning up the heroes’ messes for them.

Perhaps the Undersiders could hire them.

That… wouldn’t work.

It’s a Luke Cage and Iron Fist deal, they’re the good guys except they get paid for it. It’s probably unfair to call them mercenaries. Because normal heroes get paychecks too, the Irregulars are just freelancing. Brockton Bay needs good guys, and why should the Undersiders be stuck dealing with everything if they’ve got the money to sponsor a hero team to pick up the slack? It’d be sketchy as hell, sure. It’d also be a hilarious way to thumb their nose at the more reactionary elements of the PRT, though it didn’t look like Weld wanted to go that way.

They could also do it à la Coil, and have maybe Grue or Imp fund them in civvies. Most of the Undersiders’ work nowadays is helping rebuild or going after worse villains than they are, which really isn’t villainous at all.

They’re called The Irregulars. They do, in fact, get government resources, presumably out of the PRT/Protectorate/Wards budget. I have no idea what their plans are at present.

If they’re willing to stomach it, I bet there’s plenty of money to be had plugging holes in cape towns where there’s been a lot of turmoil among the Protectorate or Wards. It’d have a neat sort of vibe to it, too… “Take your money? Sure. Do your jobs? Sure.
Accept you as [i]leaders[/i]? Just… no.”

I must have missed that, Packbat. Where did it say they were getting government resources? I thought they’d struck out totally on their own.

Interlude 19 (Donation Bonus #2). ► Weld (Verified Cape) (Irregulars) Replied on July 6th, 2011: I dunno, I’m optimistic. 😉 For the record, I harbor no animosity toward the Protectorate. We’re still attached, and we’re receiving equipment, funding and contacts through them. They were very respectful as a whole, but we got a chance to interact a few weeks ago, and we collectively agreed that while the Protectorate’s plan to build a rapport between us Case-53s (as the Protectorate terms us) and the public was sound (making me leader of the Brockton Bay Wards, for example), it was too slow, and we could do more as a group.

I suppose he could be lying, but it doesn’t seem in character.

AVR on April 11, 2013 at 23:26 said: How do you do quotes/formatting in these comments, BTW?

@AVR: Some HTML tags are allowed. I believe em, i, b, strong, a href=, and all work — I don’t recall trying any others.

wildbow: > Also del, ins, ul, ol, li, code, lookup.

What’s “lookup”? del, ins, ul & li, ol & li, and code are all familiar, but “lookup” I haven’t heard of.

@Packbat – realized after the fact that it was a built-in way to search a dictionary of what the individual tags meant. (Posted my comment, then went to edit it to see the menu where the buttons with individual features were listed, tried each, realized it wasn’t that useful, removed).

Weird. I clearly misremembered that part. I got something different out of it. (Of course, there’s so much to absorb in that interlude, it shouldn’t surprise me.)

It’s implicit in the “internet forum” of a few chapters ago. Weld does not tell the truth about why he left and states that they are getting funds.

Oh, damnation… I just HAD to look up what “imago” meant. Did not need to see that wikipedia page at all…

What?! What do you mean?
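(Side note on the comment-formatting question a few replies up: here’s a minimal sketch of how those tags would look in a comment box. This assumes the standard WordPress allowed-tag list; whether blockquote is also permitted isn’t confirmed in the thread, so treat that part as a guess.)

```html
<!-- Quoting another commenter; blockquote being allowed is an assumption -->
<blockquote>Happy Birthday, Wildbow!</blockquote>

<!-- Emphasis: em/i render italic, strong/b render bold -->
Seconded, <em>emphatically</em> and <strong>loudly</strong>.

<!-- A link, via the a href= mentioned above -->
See <a href="https://parahumans.wordpress.com/">the table of contents</a>.

<!-- Struck-out and inserted text via del/ins -->
<del>Tuesday</del> <ins>Saturday</ins> update hype!
```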
That has to be one of the coolest things I’ve ever seen!!!!

I think it looked pretty awesome. Those wings looked awesome. And green.

*reads this* Well dang, now I /have/ to see what they’re talking about. *one wikipedia viewing later* … Y’know, you could’ve mentioned the picture moved.

Pictures don’t move, it’s a .gif of timelapsed images.

It could still have been mentioned, is all I’m saying.

Thank you to Max for the donation.

Thanks Max!

Last night I realized something based on the following facts:

* Alexandria, and by extension the PRT, work for Cauldron
* Cauldron’s plans were improved by Coil taking over Brockton Bay
* Cauldron seems to be fine with Skitter and the Undersiders running/ruling the bay
* Accord is working directly for Cauldron
* The new PRT director apparently doesn’t have much common sense

Clearly Cauldron wants capes in positions of power, see: Alexandria’s positions, the Triumvirate, Coil’s ambitions, and now they are pinning their plans on the Undersiders, who are trying to rule over a city. Now let’s look at their actions recently: they install an incompetent director in the very city where the Undersiders have both begun becoming popular and have taken down 2 previous directors. If anything, this move will likely lead to the PRT branch becoming ineffectual and allow the Undersiders to increase their hold over the city. However, Cauldron isn’t relinquishing influence on the city, because they have Accord and his flunkies (of whom he is getting even more recruits). Accord is definitely planning on dispatching the Undersiders at some point in the future, judging by his conversation with Citrine and Othello last chapter. Once he’s done that, he, and by extension Cauldron, will have control over Brockton Bay, more so than they had as the PRT. They will also have control over the portal. Now we have to ask what Cauldron gains by creating and experimenting on capes, and how does gaining control over cities help whatever cause they have?
Well, for starters, they are creating an army of capes indebted to them (many of whom are charismatic and powerful), and are developing stronger-powered capes each time. They are likely constantly learning more and more about how powers work and what causes them, and probably know much more than Bonesaw and Tattletale combined (well, not if they were really combined, but both of their current knowledge about powers combined). If I recall correctly, we’ve seen a trigger event from a vial, and the ‘passengers’ do NOT like being forced onto a person. They already have the power to cross realities relatively freely, and it is assumed the ‘passengers’ come from one of these realities (or possibly in the space between them). With the Brockton Bay portal, they eventually will have another way to cross realities (though I doubt they would have known about this beforehand).

It has been speculated that Endbringers are essentially these passengers in crystalline form, and we’ve seen the damage one of these monstrosities can do. Now, I haven’t verified this, but I would assume that because natural triggers produce more villains, and Cauldron is actively making capes, in any fight against an Endbringer, said Endbringer brings down more Cauldron capes than natural capes. It’s not that big of a leap to imagine Cauldron accidentally made the Endbringers when they were first working out the powers in a bottle, especially because the Endbringers and powers seem to be linked, as evidenced by the fact that neither have been seen in other realities despite the potential. I would hazard a guess that Cauldron is planning on attacking/defending against/harnessing the main power that gives powers to capes, and their activities at the moment are geared around consolidating power in this reality and gaining more powered troops to rely on when said event happens in ~2 years. But that’s just what I thought everything pointed to; I could be completely wrong.
Wow, this is actually kinda brilliant. If Wildbow has something else in store for us, I would be surprised if it deviated largely from the main points you outlined. Well said.

I honestly hope I’m wrong on multiple accounts, because this story is too damn good for me to accidentally spoil. I will admit there are a lot of plot holes in going this route, but I was trying to tie in everything I know of the story so far, general tropes for a story of this caliber, and we clearly still don’t have the whole script. This is just one of the ways I think it could go down. Also, see every curveball Wildbow’s thrown at us with this story, even with several commentators who clearly enjoy cape literature and tropes (see Skitter NOT being arrested by D&D last chapter); I’m fairly certain most of what I wrote is wrong, but I will be happy to argue semantics and theories till Saturday.

This is why I tossed Wildbow an email with potential spoilers before putting it up here, to make sure it’s okay. 😛

Hm, that’s actually a good idea, but seeing as none of us have any idea besides Wildbow, I think I’d rather discuss possible ideas than have them be confirmed/denied by Wildbow directly. I don’t think anything here is necessarily spoiler-ish, mainly because I am basing everything off of what I remember from everything that has been posted before. If I had direct confirmation from Wildbow, that WOULD be spoilers, because I could be confident that it will happen later in the story.

For the record, I won’t/can’t say anything particular in relation to emails like that. As Clarvel states, it spoils, and I’m not willing/wanting to spoil anything prior to it happening in-story. Two exceptions, really. I have an online buddy, a fellow writer, who I’ve outlined the main story arc to (insofar as I had anything sketched out), but he doesn’t read Worm (that may be a bad sign). I also have one buddy who I explained the Guts & Glory storyline to, up to her being possibly incarcerated.
He does/did read Worm, but rarely comments. As such, the emails don’t really do anything. I appreciate the sentiment, but there’s no point where I can say, “That’s not ok,” without revealing that there’s something that’s problematic/spoilery in the content of the email. As far as I’m concerned, speculate away. If you guess right, then that’s ok. I’ll cope.

It’s well thought out, but I don’t think it’s entirely correct in Cauldron’s goals. I think that Cauldron wants to create a situation where people are being led/ruled by an individual with powers and the people accept it. So what they want is an example of a prosperous place being led by a parahuman who is popular with said heroes. *said civilians in that place. Not heroes.

Right, but once you have a model city like that, it’s not a stretch to see other capes following said model, as that’s what models are for. This would then lead to multiple cities with capes in charge, and ties back into what I was talking about.

I only disagree on one thing: Cauldron doesn’t have to force out the Undersiders and take over directly. It’s in their interests to farm local management out to locals who give a damn and who have the legitimacy to keep things running in good order. Why should Cauldron do the legwork of running the world? It’s enough to have the power and influence to see their policies implemented.

Mrmdubois on April 11, 2013 at 22:19 said: I’ve been thinking on this thing with Skitter’s civilian identity revealed, and it’s not the worst that could happen. She’s not internationally known in all likelihood, and the PRT is mainly based in America, Canada and soon Mexico. So that means she can pull the same move a lot of successful criminals do and move to the Bahamas whenever she feels done with Brockton Bay. Or if she feels like moving closer to home, just skip through the portal into the unpopulated dimension on the other side.
Also, if the PRT is right about her being a Thinker on top of being a Master, that means she has the power to screw over precogs. No wonder she beat Dinah’s odds. If her thinking power is subsidized by her swarms, then the bigger her swarm, the higher her Thinker score goes, and so far we haven’t seen an upper limit to her swarm control. I mean, this is pure speculation, but if true she could screw over the Simurgh too. This gives her an edge against Accord as well, of course.

As for where things are going now, I felt kind of disconnected from Taylor this chapter, but on rereading a few times she seems kind of disconnected from herself, so that’s ok. I am kind of worried about what she’s going to do about the PRT; the other two threats, the Teeth and the Fallen, are basically just the same old threats, but with the PRT she runs a serious risk of crossing too many lines. I think she’ll be okay if she sticks to her usual M.O. and continues trying to provide for Brockton Bay and build up public support though. She’s definitely a villain, but the city has had worse villains.

If you think Skitter is scary in Brockton Bay, put her in a rainforest.

Vanishing and moving wouldn’t get her relationship with her Dad back. Also, as Skitter in Brockton Bay she has a reputation, good friends and allies, minions, a share in a portal to another universe … it’d be a lot to give up and recreate elsewhere. She just needs to get Tagg replaced somehow, then work out a truce with the PRT. And survive or stop the end of the world, of course.

Alvan K. Fleming on April 12, 2013 at 19:31 said: The only thing I don’t like about this story, besides the folks from Cauldron, is the fact that now I have caught up to your postings and have to wait for the next chapter. Thank you for sharing your imaginations so freely with me.

Glad you’ve enjoyed, Alvan.

Welcome to the Wormling fold, Alvan. 🙂

Hey Wildbow, do you have anything in particular planned for the second anniversary of Worm?

Good question.
Nothing in particular, but depending on how fast the next few arcs progress, there’s one possibility. I don’t want to force it, though. I’d rather get things right in the meantime than rush/prolong the story to time it properly.

Mmkay, sounds great. Also, completely unrelated, but how likely is it that we’ll get a Nilbog POV interlude?

Not very? The one where he was introduced was almost his interlude, despite not being shown from his perspective. No ideas spring to mind with that fellow.

Okay, thanks for the responses. Had plenty of questions, but restricted them to the ones that were least likely to be spoilerish.

No need to step on eggshells around me. If I can’t answer one way or the other, I’ll let you know.

1) It’s mentioned that Weld has properties which render quite a few powers ineffective against him; does this qualify as a Trump ability?
2) Is a Circus Interlude in the cards?
3) Regarding Jack Slash’s ability, does it seriously /only/ apply to knife edges? Because it seems like a spatial warping function that could have way more applications than he uses it for. Or, alternatively, has he decided to not explore any other possibilities with it?
4) When did the first iteration of the S9 form? Who led them?
5) Did Noelle experience what was called a “Deviation scenario” as was mentioned in 11.7?
6) Is it possible we’ll see Earth Aleph through Genesis’ or Sundancer’s eyes? Since Ballistic already had an Interlude, if I’m not mistaken.
7) Does Scrub have some kind of mental deficiency related to his powers? He seems… off.
8) Is a POV of a Guild member likely?
9) Are there any internationally known capes of the beneficent variety, besides Scion, that aren’t affiliated with a group?

Sorry for the list, I still tried to trim the fat, so to speak.

1) No
2) Possible.
3) Not just knife edges, but by and large, it’s edged weapons only. He finds knives are easier/faster to handle.
4) The first iteration formed twenty years ago.
They were a very different group, led by King (mistakenly called Rex in a previous chapter). Jack was a junior member, as was Gray Boy.
5) Yes and no. Deviation scenarios include Weld and Gully.
6) Prrrobably not. I won’t rule anything out, and I can envision one scenario, but it probably wouldn’t be what you’re picturing, and only then if it was specifically asked for.
7) Yes and no. His deficiency isn’t directly due to powers.
8) Possible, but no big ideas in mind.
9) Not really, unless you count, say, a European cape in a geographical region with lots of individual, smaller countries as being ‘internationally known’. It’s a result of the Endbringers, related but not directly linked to the Protectorate. It’s not feasible/economical/efficient to contact a bunch of big-name solo operatives in a time of crisis and arrange to bring them to a specific location, so they either band together/gather others under them and fall in line with the basic preparations that have been set in place, or they fall by the wayside.

4) Would this history be unveiled in story?
9) This actually makes a lot of sense; I’m surprised I didn’t think of it beforehand.

– Would Faultline’s power work on an Endbringer?
– Do you have backstories for all of the characters, named and otherwise?
– Do you have a favorite?
– If she tried, could Bitch use her powers on other creatures besides canines?
– Is there a list somewhere (mental or otherwise) on the particular mindstates that each of the Undersiders were in when they triggered?
– Also, the temp. power increases; is every parahuman privy to them?

Also, thank you very much for humoring me. I really appreciate it.

4) Possibly, probably not.
9) That’s why I’m the writer. 😉
– No.
– No. Many though.
– Yes.
– As it stands, no.
– Yes. In my head.
– Yes and no.

Belated typo: With no “said” after the quote, should probably be a period after “war” and “his” capitalized.
Intelligent Fool on July 18, 2013 at 20:41 said: Just recently found PH/Worm, and started reading from about the point when the S9 happened (I was bored when I tried reading from the start). Nothing has made me feel compelled to comment as much as Tagg. I hate him. Hate. He’s everything wrong with any of the “heroes” in the universe bundled into one: Corrupt, Absolutist, Self-Righteous, Uncooperative, Aggressive, Ruthless. He fits more with the S9 than a PRT position. His mindset doesn’t need any tweaking – he just needs a reason to want the world to end rather than for it to be ruled by the PRT.

I suspect an “or” is missing somewhere.

Reread and realized that there’s no error. Probably.

And Taylor continues down the slippery slope. Once you start attacking people to make a point, you’re past the point of no return.

Just realized what’s been bugging me for a while when I read the scene with Dovetail: Taylor’s bugs have an arbitrary flying speed. Few insects can even outrun a person, let alone a flying cape.

I note the PRT still holds the idiot ball with regards to not using sealed suits.

Just spent a couple of hours playing through a scenario where a young Sakura (from a Naruto fanfic; used chakra strings through ninja wire a lot) gets transported here and joins the Wards. Was fun imagining the Undersiders getting their just deserts from an opponent that’s an excellent counter to them. 😀

Thinking of Taylor’s required secondary powers: Arbitrary amounts of insects in whatever ratios are desired. Insects with arbitrary flying speed. Stills the wind nearby. More silk than possible (I recall her making a rope the width of her arm once on the fly, worth tens of millions of spiders’ entire silk stock) faster than possible. Causes every opponent nearby to become retroactively stupid and incompetent (that was an incredibly shoddy defence by the heroes, no containment foam on the dogs?
insect swarms just allowed into the building without being hit by cans’ worth of bug spray, foam or powers?). Poor Dragon. 😦 She didn’t start the story with these (she compared her earlier battles to this most recent one) but seems to have picked them up over time. It’s a real shame; this was a good story until the Sueishness started ramping up. Might continue on reading for a few chapters to see if anything changes. Tagg feels like an unrealistic, brainless caricature. Probably my least favourite chapter so far (read up to 21-2). Yes, Tagg. I’m sure your PRT can take down the Undersiders with their new member (or members, depending on when you count from) and their alliance with another powerful supervillain group. Why don’t we chat with Lung, or Armsmaster, or the entire pre-Leviathan Brockton Bay Wards team, or half the local Protectorate, or Coil, or Jack “Fucking” Slash? Oh, right. Birdcaged, humiliated and then unofficially imprisoned, soundly defeated, embarrassed and defeated, killed, and driven out of town with a fraction of his team remaining and most of the losses taken out in large part by the Undersiders. I’m sure you can take her, Tagg. Take her down the slippery slope with you. I imagine she’ll be on top by the bottom. When you edit this stuff, you might want to make sure to cut down on the overuse of the two expressions “make a/the call” and “get on somebody’s case”. Tagg didn’t know Taylor was at a school? When she was (most likely) discovered by the principal pulling up school records? Um . . . Why didn’t Tattletale call him on that BS? Maybe I’ll find out in the next chapter. oliverwashere on May 7, 2014 at 00:43 said: I always thought that the Protectorate went after Taylor at Arcadia because Clockblocker recognized a Skitter lookalike practicing villainy against an apparently innocent bystander named Greg. Now I’m curious about whether they went to Dinah for odds on a plan, or if Dinah saw something she didn’t like and intervened first.
Either way, it seems like Dinah’s still on Taylor’s side, based on the fact that Taylor escaped. To me, though, Taylor’s downward spiral towards becoming a ruthless bully isn’t ideal, even if it might be necessary to avert the end of the world (or whatever Dinah and pertinent Cauldron members might individually be after for them to push this situation into being). There should be other, more morally digestible methods available to the precogs messing around. And if there aren’t any win-wins, the precogs probably aren’t abusing their powers enough. Tangentially, if power use in tandem produces exponential effects like the hole between Brockton Bay and the virgin Earth, what would happen if precogs cooperated? The pessimist in me says they’ll get into a feedback loop, and I’m starting to remember why a non-linear interpretation of time gives me headaches. On Taylor, it’s a shame she can’t recognize how mentally broken she is. I hope she finds help for her reflexive outrage whenever she perceives bias and favoritism by authority figures that aren’t herself. Grue’s attempts seem to be desensitizing her to criticism, though, and it’s like looking at an inevitable train wreck. I’ve just realized that “bat for the other team” is an even less precise declaration of orientation than “homosexual” or “heterosexual”. Homosexual and heterosexual define orientation in terms of the subject’s gender (without specifying that gender); this defines it in terms of another person’s orientation, without specifying that orientation or even that person. There are supposedly only two teams, but if you don’t know which team is “this” one, how do you know which is “the other” one? Anyway. I’m GUESSING that Lisa is saying “none of the girls here are into girls”. Which is disappointing.
But then, Lisa isn’t always as right as she thinks she is… As far as I am concerned, Tagg is the worst type of scum. For future reference, this is how I categorise morality, which is based both on the heinousness of the person and his capability to repent. Notice that batshit amoral insane is not in the list; it’s hard to judge blue-and-orange morality. Intentions are categorised, not results, because I do not believe the road to hell is paved with truly good intentions, and because we could talk about optimal decisions for years. I also won’t categorise persons who are arguably (I know you wanna argue, slider) not accountable for their actions, like Bonesaw and Simurgh victims. Anyway, here: paragon (person who does what s/he thinks is good regardless of what society says, but listens to others and tries to understand all people, thinks about the consequences and keeps second-guessing him/herself; Taylor falls in this category for some reason) > beta paragon (same, but because this is too hard, s/he has some hard rules s/he cannot break, which cause him/her to be ineffective or stupid sometimes, but s/he would break them if they became a liability; Dragon is this) > hero (person who goes above and beyond the call of duty to help others, but is not really that invested in it; Clockblocker is this) > nice guy (person who tries to understand others and not antagonize them even if he does not completely agree, but is not really determined to do much; I, myself, think I am this, and Parian is too) > principled (person who sticks to some rules s/he set for him/herself, consequences be damned; pre-Bonesaw Panacea was this) > unglorifying faux hero (a person who is perfectly aware that what he does is not moral, and that s/he might even be biased, but keeps doing it because somebody has to do the dirty work; Piggot is this) > uncommitted (person who does not really care much; Kid Win and Regent are this) > justified villain (person who is justified in his villainy but is nevertheless evil and has failed to rise above his/her situation, lashing out instead of becoming a bigger person; pre-Taylor Bitch was this) > ignoble hero (person who does good deeds for the wrong reason, but holds no illusions about it; Armsmaster was this and Accord is this) > noble villain (person who understands he is evil but still has some rules he won’t break; Marquis is this) > unprincipled egotist (person who does not care about others, only about himself, and the optimal way to please himself; Coil and Crawler were this. Note that more heroic egotists are uncommitted or ignoble heroes, so this is the worst kind) > sadist (person who goes out of his/her way to torture or cause suffering to others regardless of reason, though the most common reason is reaffirming his ego; unfortunately very common, Emma and Shadow Stalker were this) > entropic (“some persons just want to watch the world burn”; not very realistic, so unlikely to evoke deep hatred, but still more evil than everyone behind him; Jack Slash is this) > fanatic (person who does what s/he thinks is good, regardless of what society says, and regardless of logic, rules, other people’s opinions, consequences, etc.; Tagg is this. They tend to have a mentality of “we are right, they are wrong” for arbitrary reasons, such as their races or religions, and allow themselves to do evil things because when they do it, they are good, because they are against villains). You might have understood why I am so baffled at people calling Taylor evil, but even if you didn’t, let’s see the points about her so-called “evil actions” and counter them.
Inheritance is not really an issue here; for the Undersiders, laws and democracy were established not because they are some kind of “abstract good”, but because no ruler or oligarchy can be assured to be moral, a problem Taylor does not have; thus setting up a benevolent, non-hereditary dictatorship under extreme conditions is moral here. Her so-called “torture” is more merciful than the criminal justice system, as it does not destroy the criminal’s function in society, and even after the guy does evil it is only applied when he does not heed her warnings. Triumph is on Trickster’s head; her actions were panicked and necessary, and no real harm was done. She is, instead, judged by people who do not show her any good alternatives (sure, they give her bad ones, like surrendering to an obviously corrupt government, or saving Triumph and then being shot in the back, letting Dinah suffer for the rest of her life and provoking the Undersiders into a rampage that would likely kill the mayor, courtesy of Bitch). Her only actions that were not morally optimal, but still not evil, were joining the Undersiders (downright stupid, sealed her fate in a few choices, but she was not yet experienced enough to make solid strategic judgements), the bank (again, stupid; she was not her current self), the crashing of the party (notice a pattern? she seems to become smarter only after her fight with Leviathan) and this very chapter (still, her anger was understandable; 9/10ths of the commenters wanted to do worse things to Emma, nobody was really hurt, and it was not morally optimal only because she did it partway for revenge. She still had to do it; not taking revenge on them after her unmasking would undermine her image, thus creating danger for her people. I bet that, in her place, none of the commenters would be as merciful to Tagg.)
The eagle-eyed might have noticed that the first sentence of the paragon and the fanatic is the same. This is intentional: imo the only difference between the best of heroes and the worst of villains is that the world of the worst of villains is one made to fit their arbitrary image, while the best of heroes have enough introspection not to be truly arbitrary in their perfect world, even though it would be by no means flawless either, or even truly better for people… A fanatic would feel glee at the headmaster’s punishment of her tormentor Emma, because she was evil while the fanatic was good (though even a hero of my classification could feel glee about this, to be fair); a paragon would only feel he sits on a different side of the same corrupt system, and would only feel a hollow guilt. Heck, many people would justify Triumph with “I did what I had to do”, and they would be right; only a paragon or a beta paragon would feel guilt under the same circumstances. Emil W. on April 14, 2015 at 23:34 said: I guess we’re due for a bullheaded, straightforward antagonist, but I can’t help but wonder what Tagg thinks he’s going to possibly accomplish. If he’s going to put the Undersiders in jail no matter what, they’re not going to have any other choice but to deal with him in a permanent fashion. Maybe, let’s say, incarcerate him. (With Regent, a three-hour sentence is going to last the rest of his life.) It’s just a question of how hard he has to push before they stop being polite. Walking right into his office and covering him and the missus in a carpet of spiders without leaving a mark should illustrate that point perfectly to any reasonable person. You’re going to need to work with them like you would, for example, Saudi Arabia; a state you might not particularly like but which nonetheless wields enough military and economic power that making peace with them means everyone wins. Pretty rash of Skitter to attack out of anger. Understandable, but still brash.
She shouldn’t have done that; she’s really walking down the path of villainy to the point of no forgiveness. A better retaliation would have been to reveal that Thomas Calvert was actually Coil. The PRT’s image would’ve been hurt further, and the whole “identity for an identity” unofficial rule would’ve been upheld. Taylor used to have a dead man’s switch on his information; she could’ve easily revealed it without much contradiction. I’m also wondering if Tagg will do something unforgivable just to give Taylor “a bloody nose”. I haven’t read ahead, but I could imagine that Tagg might decide to give Emma powers just to piss off Skitter. Nah… I agree with her. They had to hit back or look bad. The difference being, since Tagg decided to treat it as a war and wouldn’t back down, I’d have killed him. Skitter proved she has more patience and good will than even a paladin. inventorfrog on December 1, 2015 at 14:45 said: I think you used ‘he’ instead of ‘she’ to refer to Dovetail: “As with Dovetail, I’d managed to make enough progress that he was more or less out of the fight.” I might have misinterpreted. No, it’s right as is. It’s saying: “As with Dovetail, I’d managed to make enough progress that he (Sere) was more or less out of the fight.” Ah, I see. Thanks. The character of Tagg doesn’t fit right for a military man. They are way more experienced at dealing with shades of gray than we civilians are. The “us against them” thing is more of a Hollywood construction. That’s probably less true of Worm’s Earth. It’s a crapsack world that doesn’t seem to bring out the best in anyone. After decades of dealing with things like the S9, Nilbog and the Endbringers, I’m guessing there’s been a big increase both in the army’s willingness to shoot first and ask questions later and in the public’s willingness to support that… Mirrorscissors on October 23, 2016 at 23:40 said: Oh, man. This arc is starting off exciting!
If it is anything like Chrysalis, I might not be able to pace myself from reading this story so voraciously. As exciting as it may be, I do hope the relationship between Taylor and Rachel is fleshed out some more. The last memorable interaction between them was when Taylor made Rachel a costume. The story is nearing the end, so I fear their friendship won’t develop any more. I hope I’m wrong. Anyway, I’ve been really enjoying Worm. It’s inspired me to think about writing my own web serial one day. The only problem with that is the writing part. I’m not really confident in my ability to put pen to paper, or finger to key. Maybe I’ll get over it, who knows? «Dovetail flew after Atlas and I» Should be “after Atlas and me*”: “me” is the object of the preposition. Daryl Katana Algarra on July 12, 2017 at 04:50 said: Any chance for my Rachel x Taylor ship just got smashed to shit by a goddamn Endbringer. World Weaver on January 2, 2018 at 09:37 said: “Maybe the little demonstration I’d done with Tagg’s wife hadn’t been for him.” OI, FUCKING IDIOT! THAT’S ONLY GOING TO CONVINCE (I can’t spell when I’m angry) THE PRT TO DROP FUCKING NUKES ON YOUR STUPID FUCKING ASS. Who the fuck are Haven? And I really wish that she would just kill them all to make a point and not go about it so weakly. He obviously doesn’t understand anything. Max Leviton on June 22, 2019 at 22:20 said: Damn, I forgot just how much I utterly despise Tagg. The man is almost as bad as Emma. It’s like he wakes up in the morning and goes “How can I possibly make things even worse today? Great, let’s go do that! No, wait, let’s do WORSE! Yeah, excellent plan, go me.” And then he pats himself on the back for casually threatening to essentially psychologically torture teenagers and their families, despite those people doing a lot to help the city. I miss the creepy, possibly child-molesting, sadistic Calvert…
On June 3, 2019 / Featured, News, Press Release
Flying Foot Forum’s emotional, dance-filled musical returns
Connie Shaver, shaver@parksquaretheatre.org
Park Square Theatre and Flying Foot Forum present Heaven, on the Proscenium stage from May 31 through June 23, 2019. Created and directed by Joe Chvala, HEAVEN is a choreographic blend of frenzied dancing, music and theatre set in war-torn Bosnia during the 1990s. The production features Orkestar Bez Ime, a Balkan party band; music by Chan Poling (The Suburbs, Glensheen), Joe Chvala, Victor Zupanc, and Natalie Nowytski; and actors who sing and speak in English and Serbo-Croatian as this story steeped in history celebrates Bosnian culture and the dignity of those who lived through the war. Heaven was first presented in March 2011 at the Guthrie’s Dowling Theatre. That production was ironically timed, as the Arab Spring uprising was then engulfing Egypt. Chvala finds the messages of the play as timely today as they were in 2011. “Heaven is a show of sharp contrasts, filled with beautiful music, love stories and raucous dancing, as it brings us inside the violence of the Bosnian War,” said Chvala. “It is a cautionary tale of the need to find common ground rather than fight those who are different from us.” The play begins in a café, the crowd singing Bosnian songs. Photojournalist Peter Adamson is documenting the war and is frustrated that his photos are not prompting the world to take action. He finds himself on a journey with his translator Faruk to save Faruk’s wife. Amid the horrors of war, there is humor, love and hope as the characters try to maintain their humanity.
“This is theater that grabs you by the shoulders and shakes you in your seat.” (How Was the Show)
The production is enriched by a series of public events created in partnership by Park Square Theatre, Flying Foot Forum, World Without Genocide, the St. Paul Rotary Club and the Minneapolis University Rotary Club to take audiences into the heart of the Bosnian War and genocide, and through the transitional justice efforts that have followed in the decades since the war ended.
Next up: Exhumations and Justice, a post-show conversation following the Sunday, June 16 performance. The discussion will be led by Dr. Andrew Baker, Hennepin County Medical Examiner, who served as a forensic pathologist in Kosovo.
Sex Trafficking and Genocide: a screening of the film The Whistleblower and an FBI talk on sex trafficking. Tuesday, June 11, 7:00 p.m., Mitchell Hamline School of Law, 875 Summit Avenue, St. Paul. This 2010 biographical crime drama starring Rachel Weisz and Vanessa Redgrave is the story of Kathryn Bolkovac, a Nebraska police officer recruited as a UN peacekeeper for DynCorp International in post-war Bosnia in 1999. While there, she discovered a sex trafficking ring serving (and facilitated by) DynCorp employees, with the UN’s SFOR peacekeeping force turning a blind eye. A post-film talk will be moderated by FBI Special Agent Michael Melcher of the sex trafficking unit.
The production team for Heaven includes Emma Lai (Assistant Director); Jake Endres (Music Director); Robin Mcintyre (Scenic Design); Cindy Forsgren (Costume Design); Kirby Moore (Properties Design); Marcus Dilliard (Lighting Design); Cody Anderson (Sound Design); Steve Campbell (Video Design); Stela O’Center (Language and Culture Consultant); Joe Papke (Dialect Coach); Rachel Lantow* (Stage Manager); and Paran Kashani (Assistant Stage Manager).
Cast: Jeremy Bensussan, Jan Campbell, Joe Chvala, Peter Colburn, Michelle de Joya, Ariel Donahue, Kevin Dustrude, Mary Gantenbein, Karla Grotting, Liam Hage, Christian LaBissoniere, Cooper Lajeunesse, Helena Magalhaes, Riley McNutt, Natalie Nowytski, Charles Robison, Jessica Staples, Molly Stoltz, Nicolas Sullivan, Lara Trujillo, Eric Webster, Joe Weismann and Mabel Weismann.
Musicians: Colleen Bertsch, Jeffrey Gram, Jake Endres, Scott Keever and Eric Ray.
Ticket prices: Previews: $20-$37. Regular Run: $25-$60. Discounts are available for seniors, military personnel, those under age 30, and groups. Tickets are on sale at the Park Square ticket office, 20 W. Seventh Place, or by phone: 651.291.7005 (12 noon to 5 p.m., Tuesday through Friday), or online at parksquaretheatre.org. #PSTHeaven
*Member, Actors’ Equity Association
Park Square’s Proscenium Stage. Previews: May 31 – June 6, 2019. Opening Night: June 7. Regular Run: June 7 – 23, 2019. Tickets: Previews: $20-$37; Regular Run: $25-$60.
PARK SQUARE THEATRE, 20 W. Seventh Place, Saint Paul. Ticket office: 651-291-7005 or parksquaretheatre.org
UP NEXT ON THE PARK SQUARE PROSCENIUM STAGE: The world premiere of: Book and Lyrics by Keith Hovis. Directed by Laura Leffler. Previews: June 14 – 20, 2019. Opening Night: June 21. Regular Run: June 21 – July 28, 2019. PARK SQUARE THEATRE, 20 W. Seventh Place, Saint Paul. Ticket Office: 651.291.7005. www.parksquaretheatre.org
Advice from Tea Sommelier The right temperature, right amount of tea, and the correct steeping times were some of the methods used by this year’s Tea Infusionist Champion, a title bestowed on the “sommelier” of teas. There is a craft to brewing the perfect cup of tea, experts say, one that involves understanding the subtle alchemy that takes place when pouring hot water over delicate, fragrant tea leaves. And the tea brewmaster who best understood this equation was Steven Downer of Sipping Streams Tea Company in the US state of Alaska, whose matcha was brewed at the right temperature, was the “perfect balance of water and tea,” and gave off excellent aromas, judges said. Competitors were given 15 minutes to brew four cups of tea, including a Darjeeling, matcha, oolong and a puer tea from China. Cups were judged on the quality of infusion — color, taste, smell and ‘leaf agony’ (the term used to describe how the tea leaves unfurl in agony once hit with hot water) — and technical skills, like timing, volume and overall presentation. “It’s not just about water temperature or steeping times,” said Kim Jage, executive vice president of World Tea Expo. “It’s about moving beyond the basics of tea. The contenders of the event, all tea experts, approach tea methodically and imaginatively; they embrace the tea’s nuances.” The World Tea Expo wrapped up in Las Vegas last week, and also awarded the best bottled iced teas this year. In the category of Ready-to-Drink-Flavored green tea, Snapple Green Tea came out on top, followed by Gourmetti Brands’ Chaitea Passion Fruit. Numi’s Earl Grey Pu-erh tea was awarded the top prize for being the best Ready-to-Drink-Flavored Black tea, and the Ready-to-Drink-Flavored sweetened black tea award went to Milo’s No Calorie Tea Gallon. 
The highest marks were given to teas that were unique in flavor and boasted a distinctive taste “with brilliant style.” Meanwhile, in a bid to reduce the art of tea brewing to a science, the Royal Society of Chemistry in the UK created a formula they said resulted in the perfect cup. Among some of their more “controversial” recommendations was to pour the milk into the cup first to achieve a color that’s “rich and attractive.” Here are some other tea brewing tips, courtesy of science:
Avoid hard water.
For optimum results, use a ceramic tea pot with loose tea. Two grams, or one teaspoon, per cup of water is the general rule for loose tea.
Tea infusion needs to be performed at a high temperature, and that includes a pre-warmed tea pot. Fill the pot at least a quarter full of boiling water and keep it there for half a minute. Or, heat water in the tea pot in the microwave.
Steep tea for three to four minutes.
Add milk before pouring in the tea. The reasoning behind this is that the heat from the tea is liable to degrade milk proteins.
Add sugar to taste.
The perfect drinking temperature is between 60 C (140 F) and 65 C (149 F). Any hotter, and this leads to the very uncouth habit of “slurping” one’s tea. If the tea is too hot, an effective cooling method is to leave a teaspoon in the tea for a few seconds.
Tags: tea
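For the numerically inclined, the measurable parts of these tips (two grams of tea per cup, a 60-65 C drinking window) can be sketched as a tiny Python helper. This is purely illustrative; the function names and structure are my own, not part of any published formula.

```python
# Toy encoding of the article's brewing numbers (illustrative only).

GRAMS_PER_CUP = 2.0             # two grams (about one teaspoon) of loose tea per cup
SERVE_RANGE_C = (60.0, 65.0)    # ideal drinking temperature window, Celsius

def tea_needed(cups: int) -> float:
    """Grams of loose tea for a given number of cups."""
    return cups * GRAMS_PER_CUP

def serving_advice(temp_c: float) -> str:
    """Classify a cup's temperature against the 60-65 C serving window."""
    low, high = SERVE_RANGE_C
    if temp_c > high:
        return "too hot: leave a teaspoon in the cup for a few seconds"
    if temp_c < low:
        return "too cool"
    return "perfect"

print(tea_needed(4))         # 8.0 grams for a four-cup pot
print(serving_advice(70.0))  # too hot: leave a teaspoon in the cup for a few seconds
print(serving_advice(62.0))  # perfect
```

A cup fresh off a three-to-four-minute steep will read well above the window, which is exactly why the teaspoon trick earns a mention.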