Q: Precompiled header not found

In my program I have three files right now: main.cpp, DirectXTemplatePCH.h, and DirectXTemplatePCH.cpp. The only line in DirectXTemplatePCH.cpp is the include for the header. In the project properties, with All Configurations selected, under Configuration Properties > C/C++ > Precompiled Headers, the Precompiled Header option is set to Use (/Yu) and the Precompiled Header File is DirectXTemplatePCH.h. Then, for just the DirectXTemplatePCH.cpp file, Precompiled Header is set to Create (/Yc). This should be all I need to do, but whatever I try it keeps failing with "fatal error C1083: Cannot open include file: 'DirectXTemplatePCH.h': No such file or directory". All the .cpp and .h files are in the same folder.
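For reference, here is a minimal sketch of the project layout the question describes. The file names come from the question itself; /Yc and /Yu are MSVC's standard create/use precompiled-header options. The key constraint, which is the usual cause of this error, is that every translation unit compiled with /Yu must include the precompiled header, spelled exactly as configured, as its first include:

```cpp
// ---- DirectXTemplatePCH.h (the precompiled header) ----
// Put rarely changing includes (Windows/DirectX headers, STL) here.
#pragma once
#include <cstdio>

// ---- DirectXTemplatePCH.cpp (compiled with /Yc"DirectXTemplatePCH.h") ----
// Its only purpose is to produce the .pch file.
#include "DirectXTemplatePCH.h"

// ---- main.cpp (compiled with /Yu"DirectXTemplatePCH.h") ----
// The PCH include must be the first line of the file; the compiler
// skips everything above it when precompiled headers are in use.
#include "DirectXTemplatePCH.h"

int main()
{
    std::puts("pch setup ok");
    return 0;
}
```

If the error persists with this layout, it is worth checking that main.cpp has not been individually overridden to Create (/Yc) or to Not Using Precompiled Headers, and that the name in the #include matches the Precompiled Header File setting character for character.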
Mammal Species

by Regina Bailey

Regina Bailey is a science writer and educator. Her work has been featured in "Kaplan AP Biology" and "The Internet for Cellular and Molecular Biologists."

Have you ever thought about what makes mammal species different from other vertebrates? If not, I'm sure that you have noticed the differences between a snake, which is a reptile, and an elephant. Being a mammal myself, I have always found this particular class of vertebrates very interesting. As you will see, mammals have certain characteristics that differentiate them from other vertebrates. Let's take a look at some of these characteristics.

Mammal Characteristics

To begin with, mammal species are in the Class Mammalia, within the Subphylum Vertebrata, under the Phylum Chordata, in the Kingdom Animalia. Now that you have that straight, let's look at some specific traits of mammals.

One main characteristic that mammals have is a feature that usually stands on end in frightening situations. Can you guess what it is? Yes, it's hair or fur, whichever the case may be. This trait is useful in maintaining the constant body temperature that is important to all endothermic animals.

Another characteristic is the ability to produce milk. This comes in handy while nourishing babies, which are usually born fully developed (exceptions are the monotremes and the marsupials). Fertilization occurs within the reproductive tract of the female, and most have a placenta that provides nutrients to the developing embryo. Mammalian young are usually slow to leave the nest, which allows for a longer period of time for the parents to teach skills that are necessary for survival.

Respiratory and circulatory features of mammals include a diaphragm for proper lung ventilation and a heart that has four chambers to ensure that blood is circulated appropriately.
Mammals can comprehend and learn things, which can be attributed to a larger brain size as compared to vertebrates of similar size. Finally, the existence of teeth that are different in size and function is a trait that is seen among mammals.

All of these characteristics (hair, maintaining a constant body temperature, production of milk, internal fertilization, young born fully developed, highly developed circulatory and respiratory systems, larger brain size, and differences in the size and function of teeth) make mammal species unique among the vertebrates.
_a_ SAVOR THE SOUTH® _cookbook_

Crabs & Oysters

SAVOR THE SOUTH® _cookbooks_

_Crabs and Oysters_, by Bill Smith (2015)
_Sunday Dinner_, by Bridgette A. Lacy (2015)
_Beans and Field Peas_, by Sandra A. Gutierrez (2015)
_Gumbo_, by Dale Curry (2015)
_Shrimp_, by Jay Pierce (2015)
_Catfish_, by Paul and Angela Knipple (2015)
_Sweet Potatoes_, by April McGreger (2014)
_Southern Holidays_, by Debbie Moose (2014)
_Okra_, by Virginia Willis (2014)
_Pickles and Preserves_, by Andrea Weigl (2014)
_Bourbon_, by Kathleen Purvis (2013)
_Biscuits_, by Belinda Ellis (2013)
_Tomatoes_, by Miriam Rubin (2013)
_Peaches_, by Kelly Alexander (2013)
_Pecans_, by Kathleen Purvis (2012)
_Buttermilk_, by Debbie Moose (2012)

_a_ SAVOR THE SOUTH® _cookbook_

# Crabs & Oysters

**BILL SMITH**

The University of North Carolina Press
Chapel Hill

© 2015 William B. Smith Jr. All rights reserved. Manufactured in the United States of America. SAVOR THE SOUTH® is a registered trademark of the University of North Carolina Press, Inc. Designed by Kimberly Bryant and set in Miller and Calluna Sans types by Rebecca Evans. The paper in this book meets the guidelines for permanence and durability of the Committee on Production Guidelines for Book Longevity of the Council on Library Resources. The University of North Carolina Press has been a member of the Green Press Initiative since 2003.

Jacket illustration: oyster, © istock.com/margouillatphotos; crab, © Ingram Image/Picturpartners

Library of Congress Cataloging-in-Publication Data
Smith, Bill, 1949 January 11–
Crabs & oysters / Bill Smith.—1 [edition].
pages cm.—(Savor the South cookbooks)
Includes bibliographical references and index.
ISBN 978-1-4696-2262-0 (cloth : alk. paper)
ISBN 978-1-4696-2263-7 (ebook)
1. Cooking (Crabs) 2. Cooking (Oysters) 3. Cooking, American—Southern style. I. Title. II. Title: Crabs and oysters.
TX754.C83S65 2015
641.6′95—dc23
2015006183

Frances and Ed Mayes's Spaghetti with Lemon and Crab recipe from _The Tuscan Sun Cookbook: Recipes from Our Italian Kitchen_, by Frances Mayes and Edward Mayes, 2012. Used by permission of Clarkson Potter/Publishers, an imprint of the Crown Publishing Group, a division of Random House LLC. All rights reserved.

Jean Anderson's Stuffed Crab au Gratin (Santola Recheada e Gratinada) recipe from _The Food of Portugal_, by Jean Anderson © 1986, 1994 by Jean Anderson. Reprinted by permission of HarperCollins Publishers and McIntosh & Otis, Inc.

_To the memory of all of those glorious meals we had at the beach and to the people who cooked them_

## Contents

INTRODUCTION

_Hors d'Oeuvres_
Deviled Crab Dip
Crab and Artichoke Dip
Dot's Crab Dip
Pickled Oysters
Roasted Beets
Crab-Stuffed Eggs
TWO CRAB-CLAW COCKTAILS
Crab Claws with Basquaise Sauce
Crab Claws St. Charles

_Soups and Stews_
Crab Bisque
Soupe Hendaye
Corn and Crab Chowder
Louis Osteen's Brown Oyster Stew
Cocktel

_Sit-Down First Courses_
Corinne Dunbar's Artichoke and Oyster Cocktail
Oysters in Champagne
Crab Aspic
Oyster Fritters
Tartar Sauce
Crabmeat Salsa
Crab and Shrimp Calas with a Riff on Tartar Sauce
Crabmeat Remoulade
Crabmeat Ravigotte

_Either/Or_
RECIPES THAT CAN BE AN APPETIZER OR A MAIN COURSE
Traditional Oyster Stew
Fried Oysters

_Out in the Yard_
Roasted Oysters
Basic Cocktail Sauce
Hard-Crab Stew
Cornmeal Dumplings

_Dinnertime_
Oyster Dressing
Deviled Crabs
Jean Anderson's Stuffed Crab au Gratin (Santola Recheada e Gratinada)
Crab and Oyster Gumbo
Stuffed Crabs
Crabes Farcis
Soft-Shell Crabs
Crabmeat Cobbler
Oyster Shortcake
My Grandmother's Crab Pilaf
Oyster Loaf, or Bread Box
Crab Soufflé
Baked Crab Sandwiches
Oyster Rarebit
Frances and Ed Mayes's Spaghetti with Lemon and Crab
Green Cabbage Slaw
TWO KINDS OF CRAB CAKES
Indochinese Crab Cakes
More Traditional Crab Cakes
Cucumber Relish

_Drinks_
Oyster Juice
TWO MICHELADAS WITH OYSTERS
Michelada Tlaxapana
Michelada Tlaxapana Obscura

_Acknowledgments_
_Bibliography_
_Index_

_a_ SAVOR THE SOUTH® _cookbook_

Crabs & Oysters

## Introduction

Legend has it that the author Colette's system had become so delicate by the time she died that she could have only oysters and champagne. I hope this happens to me.

I grew up in eastern North Carolina catching crabs, cleaning fish, and shucking oysters. I loved the beach. Still do. Today when I look back, the best memories I have of those times always include something to do with food and the beach. In the summer, my family and I would often go down to the beach after church and spend the day, and even though we only lived thirty minutes from the ocean, we would still take a beach cottage for at least a week sometime in July or August. The ocean and the coast were so much a part of our lives that we never really considered going anywhere else. People from eastern North Carolina are like that. I hope to share some of this vibe with you in the recipes here.

When I was five or six years old, I would go on Sunday afternoon "rides" with my father's brother, Alex, and his wife, Hi. These excursions usually included lunch. One of our favorite places was a seafood restaurant in the town of Sea Level in Carteret County. On one such afternoon, I ordered soft-shell crabs. My aunt Hi was sure that I had meant deviled crabs, but I wouldn't change my order. (Deviled crabs are mildly spicy crabmeat baked in the crab's shell. Soft-shell crabs are the whole beast minus the face and the guts, fried crispy, and eaten shell and all.) She ordered deviled crab just in case. When lunch came, she had in fact been right. I had meant deviled, but of course I wouldn't admit it, and the rest is history. To this day soft-shell crabs are one of my favorite foods.

Part of my family is Roman Catholic, so Fridays put seafood on the table every week. This was in the days before Vatican II, when meat was forbidden to us on that day as a small bit of penance.
Fridays were "fish days." A strange idea of punishment, I thought to myself, but then I've always been impious.

In New Bern in the 1950s our two rivers were lined with "crab factories." These were small crab-packing houses where you could buy crabs and crabmeat. Mostly, though, we caught our own with chicken necks tied on string. All you had to do was slowly coax the crabs near to the surface of the water and scoop them up with a net. They were ridiculously easy to catch. Crabs were essentially free food. We learned to clean and pick them ourselves.

Oysters were more often gifts. My father worked for the post office, and for a time his route was in rural and maritime Pamlico County, which has a long coast of marshes and bays. In those days people gave gifts to the mailman, so besides the piña colada cakes, corn, or homemade sausage, we sometimes received a bushel of oysters. Strangely, oysters will keep out of the water, under wet burlap, in a cool, dark place for a long time. I remember baskets of these in our basement. The health department says never to do this, of course. (See the instructions for storing oysters later in this chapter.)

We learned early to eat them roasted. Backyard oyster roasts were held in good weather. People built permanent pits, not for barbecue, but for oysters. You soon learned to dodge the smoke and not to burn your fingers as you helped yourself. You could of course have raw oysters as well. At oyster parties, my uncle Alex would delight and terrify us at the same time by also eating the tiny crabs that either scurry out of freshly shucked oysters or don't out of roasted ones. These tiny "pea crabs" were once a commercial product. In the March 20, 1893, edition of the _New York Times_ there is a reference to an oyster crab salad.

I have retained the _r_-month prejudice against eating oysters in warm weather (that is, oysters should be eaten only during months that contain the letter _r_).
It's the way we were raised, I guess, plus by summer they have become too big for my taste. Having said all this, I always eat them when I'm in New Orleans, whatever the time of year or whatever their size. It's almost always warm there, and a trip to Casamento's is always in season.

This is a southern book by definition, and the South has a long seacoast. Nine of the states of the old Confederacy have shorelines. Add in Maryland (some do, some don't) and there is a lot of ground, or perhaps water, to cover. And so while North Carolina will figure heavily in this work, I've looked all over the South for things to include. A large part of what I've discovered has been found in church and community cookbooks. I've come to feel that these are the true repositories of southern cooking culture. This research has been delightful, and I have learned a great deal more than I would have imagined at the start. I will never forget how to spell Worcestershire, for instance. It is in everything.

There are a host of sensible but forgotten techniques that come up again and again in these old books. Several people use a double boiler rather than a skillet to soften peppers and onions before they are incorporated into a dish. Beurre manié, used to thicken or finish sauces, shows up a lot. You rarely see it called for today, although it works like a charm. Sometimes the descriptive language is too good to be true. "Take a knob of butter the size of a walnut..." comes to mind.

I became the chef at Crook's Corner in 1993. Besides cooking, the job requires a surprising amount of travel, especially around our region, and hence I have gotten to spend lots of time in both Charleston and New Orleans, the South's two most sophisticated bastions of good food. I've also had the pleasure of visiting the Eastern Shore of Virginia to meet fisher folk, and of spending a morning on the docks in Dare County, North Carolina, hearing people talk about their work.
This book became both academic and hands-on. I had planned to organize these recipes by course—from soup to nuts, as it were. It turns out that a lot of these things aren't so easily categorized. We serve both oyster stew and fried oysters as first courses at Crook's, but both could be a main course. I've included a new (for me) slaw recipe that's good with the fried oysters, in case you decide to do that. Pickled oysters are often served at cocktail parties, but again at work, I've been tossing them with roasted beets to make a salad. A few of the crab salads could be used as the main course for a light luncheon. Oyster dressing is a side dish. Fortunately, I didn't discover any desserts using either crabs or oysters, but I did come up with a beverage or two. Two are for real and come from Mexico, and the other is a strange sort of tonic from the early 1900s.

Before we begin cooking, there are some technical things to address. Some of them deal with health and safety. Some of them have to do with the vocabulary of the trade, others with the seasonal nature of our subjects. Let me start out by saying that health department officials are a lot more skittish about seafood than I was raised to be. We learned certain commonsense precautions growing up, but by and large anything we caught we ate right away or froze for future use.

The strictures regarding crabs are much less alarming than those concerning oysters. The National Restaurant Association's course book on food safety advises that crabs must be sold alive and kept cool and moist. Processed crabmeat should come from reliable sources, kept at 41°, and used within the recommended time period. All fresh seafood must be kept cold, usually directly on ice. Never eat crabs that have died after you bought them alive. Don't eat oysters raw that you haven't seen shucked—or at least know who did it and when. When you shuck them yourself, they should be difficult to open. Only buy from reputable fish markets.
Smell will tell you a lot, as will appearance. Soft-shell crabs should have plump, lustrous gills, for instance. Even the guts you squeeze out of them should smell fresh and oceanic.

The condition of the waters from which these creatures are taken must be considered as well. There are many places where I fished as a child that are unappetizing today, to say the least. Crabs present a little less worry since they are almost always eaten completely cooked. Oysters, on the other hand, involve more peril, not only because they are eaten raw or partially cooked but because of the filtering nature of their feeding. Otherwise wholesome oysters may have ingested something unpleasant. There is the possibility that they've accumulated dangerous toxins or harmful organisms. Both of these problems are the result of the condition of the waters where the oysters live. Store live oysters at 45°.

The opinion of the food inspection community is that it is never ever completely safe to eat raw or barely cooked oysters. Harmful bacteria _can_ be killed by cooking thoroughly, but there are certain toxins not detectable by smell or taste that are not destroyed by either cooking or freezing. Again, the bottom line is to buy from reputable seafood merchants who know where their products have come from. Having said all this, though, I decided long ago to keep eating oysters until one of them gets me.

### _Grades of Crabs and Oysters_

An important point to note right away is that the crab in this book is the North American blue crab—_Callinectes sapidus_—the "beautiful swimmer." This is the crab native to the southern coast. There are 4,400 species of crab in the world, and according to _The Encyclopedia of Food_, all of them are edible.

Processed blue crabmeat comes in grades according to the part of the animal from which it comes. It is usually sold by the pound, and a pound of crabmeat goes a surprisingly long way. Jumbo lump is far and away the most expensive grade.
It should be made up of the two large pieces of meat that are on each side of the animal where the outside legs attach to the body. Regular old lump contains some of the smaller chunks in the same region of the animal. Back fin is next. This is a mixture of all of the body meat. It usually contains a lump or two right on top for the buyer to see. Claw meat is next, coming, in fact, from the arms and claws. Lastly is something called special, which comes from the whole body. It tastes fine and is made up mainly of shredded meat. It is a good value and is especially good for dips and soups, as long as it has been properly cleaned.

A sort of separate category is something called cocktail claws. These are claws that have been cracked and shelled but with the meat intact and the point of the claw left on as a handle. They are usually sold in one-pound cartons. These would seem to be a lot of work to prepare, so it is odd that they are usually inexpensive. As far as I know, crabmeat is always sold pasteurized, i.e., cooked. Obviously, the rule about eating dead crabs does not apply here. Every recipe here that uses crab starts with the instruction to pick through it carefully for bits of missed shell.

Soft-shell crabs are blue crabs that have just molted. The crab grows but its shell does not, so in the spring they shed their hard shells and are briefly entirely edible. Right before this happens, they are known as peelers. Crabbers watch for crabs at this stage and move them to shedding tables and wait. If left alone, the soft crab's new shell would harden up in a day or so. With the crab kept out of the water, this hardening is slowed down. It used to be said that crabs begin to shed at the full moon in May, but in recent years it has begun earlier than that. Consecutive warm nights are the trigger, and these now begin to happen in April.
Soft-shells are generally sold in cardboard trays lined with either straw, seaweed, or damp newspaper and come in five sizes, although I believe that the classification of "medium" is not legal to catch and keep in all jurisdictions:

mediums, 3 1/2–4 1/2 inches
hotels, 4 1/2–5 inches
primes, 5–5 1/2 inches
jumbos, 5 1/2–6 inches
whales (or whalers), 6-plus inches

Hard crabs seem to be graded by industry consensus. The most desirable is the #1 Jimmy crab. This is the mature male. All crabs are graded into five sizes:

small, 5–5 1/2 inches
medium, 5 1/2–6 inches
large, 6–6 1/2 inches
jumbo, 6 1/2–7 inches
colossal, 7-plus inches

The minimum size for harvesting is 5 inches "point to point." The other types of classification are:

#2 Jimmies, which are less meaty and sometimes have just shed. They are usually sold to crab-picking companies.
#3 Sooks are mature females.
#4 Sallys are immature females.

Sponge crabs are mature females that are carrying eggs, or roe, under their aprons. The roe is added to she-crab soup. Live crabs are sold by the piece or the bushel. The number of crabs in a bushel will, of course, vary according to their size.

Whereas crab harvesting is somewhat self-regulating due to crabs' seasonal habits, oysters need some rules. Most state laws are broad guidelines that leave the details to their fisheries departments. In North Carolina the season is roughly from mid-October to early May. There are private beds that have slightly different rules. Harvested oysters must be at least three inches long from hinge to lip. Oyster sellers must provide harvest tags that list the fisherman, the location of the oyster bed, and the date of catch, and these must be kept on file for ninety days. Restaurants that serve raw oysters must keep copies of these tags on premises for ninety days as well.

Shucked oysters are generally graded standard, select, and extra-select. A pint of standard oysters will contain around twenty.
Selects and extra-selects are progressively larger, and thus the number in a pint will go down. Remember that it's best not to eat these shucked oysters raw, although I have done it and survived. When I purchased shucked oysters for these recipes, I always got selects. If you shuck your own, you will usually get a variety of sizes, but unless they are really huge, you should be able to proceed as instructed here.

Oysters in the shell are sold either by count or by bushel. The number in a bushel will, again, depend on the size, but generally a bushel weighs around fifty pounds. Research turned up claims of from 100 to 200 oysters per bushel, so you'll need to ask when you buy. They can be very muddy. The best way to really wash them is out in the yard with the garden hose.

People are surprised to learn that virtually all of the oysters in eastern North America are the same species—_Crassostrea virginica_. So whether they're called Wellfleets, Blue Points, or Beausoleils, they are all the same animal. They are sort of like dogs, I guess. The differences in shape, size, saltiness, and flavors are the result of locale, or "meroir," as people are fond of saying these days.

### _Cleaning Crabs_

I have been cleaning crabs all my life, but describing it to someone else without having one in hand is difficult. Unless you can persuade your fish market to clean them for you, you must start with live crabs. Some people have to stop right there. Fishing is really the last instance where you have to kill what you eat. I got over this years ago.

#### SOFT-SHELL CRABS

Soft-shell crabs are less of a problem than hard-shell crabs because they can't hurt you.

1. Take a crab in one hand, holding it right-side-up in the middle of its back with the face pointed away from you. I use just my thumb and index finger to hold them.
2. Using kitchen shears, clip off the eyes. This means cutting off what would be the face. You want to get both the eyes and the mandible beneath them but as little else as possible. I think the only reason this is done is because people don't want to have dinner staring at them.
3. Lift both the pointed sides of the shell up to reveal the gills. These look like a row of soft fangs. Snip these away.
4. Cut away the tail flap. This is underneath the back of the crab. Males have a narrow flap, females a wide one.
5. Under running cool water, gently but firmly squeeze out the yellow and grey soft matter that is found just underneath the shell. Try to do this without breaking the crab.
6. Rinse thoroughly, allowing the water to run underneath the shell and inside of each crab.
7. Place on a clean kitchen towel in a place where they can drain; cover with crushed ice if you are using them right away, or refrigerate them for later use.

They can also be frozen. Carefully wrap each crab in several thicknesses of plastic wrap. Try to make time to thaw them slowly in the refrigerator when you want to cook them.

#### HARD-SHELL CRABS

Hard-shell crabs will fight back. Although you can easily snap a claw off with two fingers, they have enough strength to hurt if they can get to you first. If possible, put them in the freezer for twenty minutes before cooking or cleaning them. This renders them all but dormant. Sometimes, though, when you have a crowd to feed, this won't be practical. For my father's eightieth birthday party we made crab stew for sixty people, and suddenly I was the only person who remembered how to clean live crabs. There was no freezer large enough to hold this many, so I was on my own in the back yard. In any case, cleaning crabs is best done outside with a garden hose at hand. Hope that there won't be many mosquitos and that you don't kick over your beer. Crabs usually come in bushel baskets and often cling to one another, forming long, squirming chains. Shake them apart as best you can.

1. Grasp each crab in the same way you did the soft ones, with the face pointing away from you. You will soon learn how to stay out of their reach.
2. Coming in from behind, snap off the claws with your fingers.
3. Working quickly, pull off the back shell. It will usually come off in one piece. (Save these if you are making deviled crabs.) This will again reveal the gills and something called the devil's fingers or sometimes dead man's fingers.
4. Before cleaning out this part, I chop the bodies in two down the middle with a cleaver or large knife. There is a line on the belly to guide you. This finishes the kill.
5. Under running water, using your fingers, shears, or a paring knife, clean away the innards. At certain times of the year you will also find bright orange "coral" or roe. This can be saved to enrich sauces or soups. Its color makes it obvious and easy to extract. I never count on finding this, and if I use it, it is usually an afterthought.
6. Rinse the crabs well and set them on towels to drain for a bit. Then either put them in crushed ice to be used soon or cover and refrigerate to use later.

You can also freeze crabs once they are cleaned. Wrap each one very well in plastic wrap. I'm not crazy about frozen crabmeat, but these will be fine for stewing.

If you have saved the crab backs for deviled crabs, rinse them in cool water and then cook them in boiling water for seven minutes. Cool them in ice water, then drain. They will keep for a while in the fridge.

Alternately, when you need crabmeat but only have live crabs, toss them into a large pot of boiling water as you would a lobster. There is no nice way to do this. The water should be salted and, if you like, seasoned with seafood boil. You are salting for the amount of water, not for what is cooked in it, so taste before you add the crabs. If necessary, cook them in batches in order not to crowd them.
When the crabs are bright red (no more than ten minutes depending on size), stop the cooking by transferring them into ice water. Drain as soon as they are cool since it's not good to soak them in water indefinitely.

Picking the meat takes patience. Pull off the back shells and save the pretty ones if you are making deviled crabs. The best meat is found where the legs join the body in the part of the crab that would be the tail. This is where the largest "lumps" are, but there is meat throughout the whole body, as well as in the claws. I use kitchen scissors and a nut pick to get at it. It takes a dozen large crabs to yield a pound of crabmeat.

### _Cleaning and Opening Oysters_

Cleaning and opening an oyster is a lot less complicated than cleaning crabs.

1. Begin by cleaning the shells. This helps prevent sand or bits of shell from getting into the meat. Again, the best tool for this is a garden hose with good pressure, but you can, of course, do this in the sink. Most people who sell oysters in the shell give them at least a preliminary rinse.
2. Put them in the sink under cool running water and go over each one quickly with a scrub brush. You will never get them spotless but can remove most of the loose grit. Use an old dish towel to help you hold each oyster firmly. This also helps protect your hand from cuts either from sharp edges the shells may have or from the oyster knife you are wielding in the other hand.
3. For opening oysters, I prefer a knife designed for this purpose. Kitchen knives are too sharp and too flexible, and you will be more liable to stab yourself with them. Oyster knives are short, blunt, and sturdy. Turn the oyster so that the deep side of the shell is on the bottom. This will allow the juice to collect in the deepest side of the shell.
4. Holding the oyster in one hand, insert the blade of the knife into the hinge of the oyster and wiggle it in as far as it will go. Twist the blade with force. Sometimes the shell will give suddenly, so be careful not to jab yourself. (Oysters vary wildly in size, shape, and structure. Some have shells that are more brittle than others.) Keep wiggling the knife as you force the shell open. Take your time until you get the knack of it. If you do enough of this, you will eventually develop skill and speed.
5. Once the oyster is open, wipe the knife blade before you proceed so as not to introduce any grit inside. Muscles attach the oyster to both sides of its shell. One is much stronger than the other. Use the knife blade to sort of scrape and sever both of these so that they become detached from the shell, while trying to reserve the juice.
6. If you are not eating the oysters as you go, collect them and their juice in a clean bowl and refrigerate. Try to use them within twenty-four hours.

Oyster shells are good for the garden. All over eastern North Carolina they are piled up around the roots of fig trees for fertilizer. When that need is filled, they are used to pave driveways. These days there are also recycling programs that collect them for creating new reefs for growing more oysters.

One caveat about cooking oysters _in_ things: When they are cooked, they release an unpredictable amount of juice, or liquor, as it is also called. Sometimes it's a lot, sometimes not. This can play havoc when you are trying to thicken things. If you end up with too much juice, sometimes you can correct this by adding more of the thickening agent. Other times, reducing the liquid (by cooking it longer) might be preferable, but if you do this, remove the oysters first. Set them aside until your sauce is correct, then stop the cooking and return the oysters to the dish.

I've tried to sort the following recipes in a logical way, but as I said earlier, their place on the menu is changeable. Circumstance, season, and custom can come into play and change the norm.
It would not be unusual to be invited to an oyster roast that included only oysters and beer, and that would be great. So much for courses.

As you read and use this book, keep in mind that my intention was to entertain as well as to instruct. It was fun to include recipes that on the surface seem a little silly in this age when people are always sitting down to "a serious dining experience." You see, I was raised to see every meal as a little party.

## Hors d'Oeuvres

Hors d'oeuvres are often served to guests before they sit down at the table, as something to nibble on until everyone arrives or something to have with drinks. Likely as not they are finger foods, so if they seem at all messy, make sure there are plates and napkins at hand.

### Deviled Crab Dip

_This delicious little recipe is as simple as can be, and you're likely to have all the ingredients except for the crab in your kitchen any time_.

MAKES 10 OR SO SERVINGS

**1/2 pound fresh special crabmeat, picked over for shell**
**3 hard-boiled eggs, chopped**
**1/2 cup mayonnaise**
**1 tablespoon fresh lemon juice**
**1/2 teaspoon dry mustard**
**1/2 teaspoon onion powder**
**1/2 teaspoon salt**
**1/8 teaspoon black pepper**

Carefully fold all of the ingredients together, taking care not to break up the crab too much. Cover and chill for at least 1 hour. Serve with Ritz crackers.

### Crab and Artichoke Dip

_The real sweep of the internet was made plain to me when I decided to do a little investigating about this ubiquitous recipe. The minutiae are endless. The varieties are endless. No crab cookbook can be without it. Generally, these dips are baked and then brought to the table hot from the oven. Sometimes they are kept warm in chafing dishes, but they can become oily if they sit too long over low heat, so it is perhaps better to make them in small batches to be heated as needed. This amalgam contains Ritz crackers, but these same crackers are often recommended as the thing to be dipped as well.
Although this dip has come to be seen as somewhat pedestrian, I find this version rather luxe. It will work very well for a cocktail party that overlaps dinner time. I didn't try to break up the artichoke hearts at all. This made it more of a spread than a dip. Thick, well-toasted garlic bread would be a better vehicle than crackers_. MAKES 10 OR SO SERVINGS **1 (13 1/2-ounce) can quartered artichoke hearts, well drained** **1/4 cup mayonnaise** **10 Ritz crackers, crushed by hand** **2 tablespoons chopped pickled jalapeños** **1/2 cup freshly grated Parmesan cheese** **1/2 teaspoon fresh lemon juice** **1/4 teaspoon Worcestershire sauce** **1/2 pound fresh back fin crabmeat, picked over for shell** **Lemon wedges** **Garlic toast** Preheat the oven to 320°. Combine all the ingredients but the crabmeat and put in an attractive oven-safe serving dish. Bake for 20 minutes or so or until hot and beginning to brown. Remove the dish from the oven and quickly stir in the crab with as little mixing as possible to keep the meat somewhat intact. If it seems too dry, stir in a few more tablespoons of mayonnaise as well. Return to the oven and cook for 5 minutes more to warm the crab and to make the top pretty again. Bring to the table. Serve with lemon wedges and garlic toast. This is actually really good cold the next day. ### Dot's Crab Dip _My aunt Dot loved to entertain. At her house she always had a pot of coffee waiting should you drop by. She also always set out little buffets of snacks at holidays. I found this recipe in my father's collection. It calls for commercial French salad dressing. I have always thought that there was a place for commercially prepared products in my menu.
I would never try to make Girl Scout cookies, for instance, and where in the world would we be without Tabasco sauce?_ MAKES 10 OR SO SERVINGS **4 tablespoons unsalted butter, softened** **1/2 pound fresh special crabmeat, picked over for shell** **1/2 cup grated sharp white cheddar cheese** **1 tablespoon prepared horseradish** **4 tablespoons bottled French dressing** Carefully mix all the ingredients together and chill, covered, for at least 1 hour. Serve with—you guessed it—Ritz crackers. ### Pickled Oysters _I love pickled shrimp, but I had never had these until I began working on this book. There are many recipes from eastern North Carolina but I never encountered them growing up. There are three schools of thought here. Some recipes say to cook the oysters in their own juice and then put them, drained, into vinegar brine. Another recommends cooking them straight away in the brine. The oysters really didn't survive this well. The third had you dump the oysters, juice, and all into the brine. This was too watery for me. Here, I adopted the first method. People serve these at cocktail parties with just toothpicks and napkins, but at Crook's Corner we've turned them into a salad by tossing them with roasted beets (recipe follows)_. MAKES 1 QUART **1 quart shucked select oysters with their juice** **1 cup vinegar** **6 whole cloves** **6 bay leaves** **1 (at least) hot pepper pod (fresh or dried)** **1 teaspoon celery seeds** **1/4 teaspoon ground mace** **1 1/2 teaspoons salt** Bring the oysters to a simmer in their own liquid over medium heat and cook just until they begin to curl. At the same time, bring the vinegar to a boil with all of the seasonings in a nonreactive (i.e., enamel or stainless steel) saucepan. Strain the oysters and stir them into the vinegar. (You might save the broth for a soup or stew.) Immediately remove the pan from the heat and set it in ice to stop the cooking. Refrigerate overnight before serving.
### Roasted Beets _I use red beets mostly, but keep in mind that their color will overwhelm anything you mix them with. This amount makes enough to turn the Pickled Oyster recipe (page 19) into a salad course_. MAKES 2 SERVINGS **4 peach-size beets** **2 teaspoons olive oil** **2 bay leaves** **4 whole cloves** **1 teaspoon whole fennel seeds** **1/2 teaspoon salt** Preheat the oven to 350°. Put the beets in a bowl with the oil and the seasonings. Swirl everything around and put it into a baking dish. Cover tightly and place in the oven. Check after 40 minutes by piercing a beet with a knife. If the knife passes easily through the center, the beets are done. If not, you should be able to judge how much more time will be needed for them to finish cooking (see Note). When they are done, allow them to cool in their own juice. Peel them when they are cool enough to handle. Slice into rounds, then cut the rounds into 1/4-inch-wide strips. Chill. Toss together with the drained oysters. Add a little of the oyster brine and taste for salt. Serve as a salad. NOTE * Beets can vary a great deal in terms of how long they should be cooked. Some are dense and slow to cook, while others cook as quickly as a baked potato. The same goes for peeling: some can be peeled by rubbing them with a dishcloth, while others will require a paring knife. I have never been able to tell which kind I have before I cook them. ### Crab-Stuffed Eggs _This is a fancy and substantial version of deviled eggs that I found among my grandmother's recipes. I've taken a few liberties here. Hard-boiled eggs are an excellent platform for all sorts of elaborate snacks. It is generally thought that there can never be too many deviled eggs. This recipe uses only a dozen_.
MAKES 24 DEVILED EGGS **12 hard-boiled eggs** **1/2 cup mayonnaise** **1 teaspoon yellow mustard** **1/4 teaspoon salt** **1/8 teaspoon black pepper** **1/2 pound fresh special crabmeat, picked over for shell** **2 tablespoons diced pickled jalapeño** **A mixture of equal parts chili powder and smoked paprika to decorate the tops** Slice the eggs in half lengthwise and scoop out the yolks into a bowl. Mash them with a fork and then stir in the mayonnaise, mustard, salt and pepper. Carefully fold in the crab and pickled jalapeño. Fill the egg whites and dust tops with the paprika mixture. ### Two Crab-Claw Cocktails On a trip to New Orleans once, I noticed little dishes of crab claws were being served almost everywhere we went. When I asked about them, my friend Charles said that there isn't really a recipe for them—everyone has their own. Sometimes they are hot, sometimes cold; sometimes spicy, sometimes not. A pound carton usually contains around three dozen shelled claws. One carton will serve at least four people. Here is a hot version and a cold version. ### Crab Claws with Basquaise Sauce _I guess that this sauce is really just a salsa with meat. I suppose you could serve it with chips, and I've certainly eaten enough of it by itself. It needs to be cold, but don't try to make it too far in advance as it begins to lose its crunchiness after a time. This is also a good sauce for the Roasted Oysters (page 61)_. MAKES 4 SERVINGS **1 carton crab claws** **5 strips bacon, chopped raw, or 1 cup side meat, diced** **2 celery ribs, washed and diced (save and chop the leaves if they are pretty)** **1/2 medium green bell pepper, diced** **1/2 small red onion, diced** **2 tablespoons diced pimientos** **Zest and juice of 1 lemon (I like to use a zester that makes threads for this)** **Chopped fresh herbs** **1 cup good-quality olive oil** **Salt and black pepper, to taste** Brown the bacon in a large skillet.
Drain the bacon, leaving a little of the grease in the pan and cool it enough so that it won't cook the other ingredients. Stir in the rest of the ingredients and season with salt and pepper. Arrange the crab claws on 4 dishes with the shell ends sticking up. Spoon the dressing over them and let them sit for half an hour before serving. Rough chop the celery leaves and sprinkle on top at serving time. ### Crab Claws St. Charles _You don't really need to do anything to cocktail claws if you don't want to. Just squeeze a lemon wedge over them and eat them right out of the box. If you want to be fancier than that, try this. Basically, the sauce is the snail butter used by the French to make escargot. Make sure you have bread on hand to mop up the leftover sauce once the claws are gone_. MAKES 4 SERVINGS **1 carton cocktail claws** **1 stick unsalted butter, softened** **Juice and grated zest of 1 lemon** **3 large cloves garlic, minced** **1/2 cup fine, toasted bread crumbs** **1 tablespoon chopped fresh parsley** **1 teaspoon coarse sea salt** Preheat the broiler. Inspect the claws for bits of shell. Arrange them in 4 oven-proof ramekins, with the shell ends sticking up; set aside. Put 2 tablespoons of the butter in a small skillet and warm it. As soon as the butter melts, stir in the bread crumbs and then the garlic. Turn up the heat and cook until the garlic sizzles and smells good, being careful not to let it brown. Add the lemon juice all at once. Put the skillet in the refrigerator for a minute or two until it has cooled down enough to touch. In a small bowl, mash the rest of the butter with a fork; add the garlic mixture, lemon zest, and parsley and thoroughly combine. Add the salt, mixing just until incorporated—you want the crystals to remain whole. Refrigerate the composed butter to allow it to firm up. 
Crumble a tablespoon or so of the composed butter over each ramekin and broil until the butter is sizzling and a little browned, about 8 minutes, more or less, depending on the effectiveness of your broiler. Serve at once. Any leftover butter freezes well and will have a million uses. ## Soups and Stews Soup is a natural destination for crabs and oysters. Here are a few nice recipes, but I suspect that the list of them is legion. ### Crab Bisque _Bisques made from lobster or shrimp often start with the shells. Oddly, I didn't come across any crab bisque recipes that called for that, but if you are starting with whole crabs, you might use some of the boil for this recipe if it's not too salty. Most recipes seem to be very timid when calling for the sherry or Madeira. I am not_. MAKES 4 VERY ELEGANT SERVINGS **4 cups milk, divided** **1 tablespoon all-purpose flour** **4 egg yolks** **1 pint heavy cream** **1 cup fresh crabmeat (grade of your choice), picked over for shell** **1/4 cup plus 4 teaspoons sherry or Madeira, divided** **Salt and cayenne pepper, to taste** **4 teaspoons unsalted butter** **Paprika** Bring 3 cups of the milk to a simmer in a double boiler. In a Mason jar with a lid mix the flour into the remaining cup of cold milk. Shake vigorously to make as smooth as possible. When the milk in the pan has started to bubble around the edges, strain the flour mixture into it through a kitchen sieve, whisking constantly. In a small bowl, whisk the egg yolks and cream together completely, then whisk in a cup of the hot milk. Through the sieve, strain this slowly back into the rest of the simmering milk, stirring all the while until the mixture is hot, being careful not to let it boil. Stir in the crabmeat and 1/4 cup of the Madeira and season with salt and cayenne. To serve, put a teaspoon of butter in the bottom of each of 4 bowls. Ladle in the soup. Top each with a sprinkling of paprika and 1 teaspoon of the remaining Madeira. 
### Soupe Hendaye _This recipe comes from my days at La Residence, Bill and Moreton Neal's wonderful and, if I may opine, groundbreaking restaurant here in Chapel Hill. Bill encouraged the cooks to think of the kitchen as a "laboratory." We worked primarily with French classic recipes, but we were given permission to occasionally stray. Hendaye is the southwesternmost town in France on the Spanish border. The name was chosen because the soup is in the Basque style. We would sometimes use commercial seafood broth if we didn't have the fixings for homemade_. MAKES 6 SERVINGS **3 bell peppers (of various colors, if possible)** **1 small red onion** **1/2 cup dry white wine** **2 quarts seafood broth, at room temperature** **1 pint shucked select oysters with their juice** **Salt and black pepper, to taste** Under the broiler, in a very hot oven, or on a grill, roast the bell peppers whole. They will eventually collapse and the skin may char a little. The time this takes will vary. A grill is faster than most broilers, which in turn will be quicker than an oven. The moisture content of the peppers is also a factor. Over the flames of a grill, 5 minutes on a side should be enough time. The same under an open-flame broiler. Electric broilers will take more time, but you need to watch. Put them in a paper bag and allow them to sweat for half an hour. Peel and clean them under cool running water. Seed and cut into narrow strips. Cut the onion into similar strips. Pour the wine into a soup pot, and turn the heat to high. Stir the broth and oyster juice together and whisk into the warming wine. Bring the broth to a simmer. Skim the broth of any unsightly foam. Add the onion and cook for about 15 minutes, then add the peppers. Your soup is now ready for the oysters. Just before serving time, add the oysters to the simmering broth. Cook them just until they begin to curl, a minute at most. Season with salt and lots of pepper.
### Corn and Crab Chowder _I've been a fan of this soup ever since I first had it in 1969 at a drugstore lunch counter in a small town on the coast of Maine. ("Maine?" you ask.) For the most part, while testing these recipes, I've used commercially packed fresh crabmeat. You can, of course, catch and pick your own crabs. Twelve large crabs yield about a pound of meat. When making soups, you have the extra bonus of being able to use the boil for stock. When I first started making this soup, I finished it with heavy cream (as chowders often are). Later I just allowed the potatoes to dissolve a little, thickening the soup, and decided not to add any. Both versions are great. I have used commercial clam juice as part of the liquid in this soup as well_. MAKES 6–8 SERVINGS **1/4 pound side meat, diced** **1 small onion, peeled and diced** **2 tablespoons cornmeal or Maseca (instant corn masa flour)** **4 cups water (or 4 cups of your crab boil)** **3 baking potatoes, peeled and diced** **2 cups corn cut from the cob, cobs reserved** **1/4 teaspoon crushed red pepper flakes** **3 bay leaves** **1 pint heavy cream (optional)** **Salt and black pepper, to taste** **1/2 pound fresh special crabmeat, picked over for shell** Render the side meat in a heavy-bottomed soup pot. When it has begun to brown and to give up some grease, add the onions and cook until soft, about 5 or 6 minutes. Do not brown them. Stir in the cornmeal and cook for 3 minutes more. Whisk in the water and stir to smooth any cornmeal lumps. Bring to a simmer, then add the potatoes, corn, reserved corn cobs, bay leaves, and red pepper. Cook until the potatoes are tender, about 20 minutes. Fish out the corn cobs. Whisk in the cream, if using, and bring the chowder back to a simmer. Taste for salt and pepper. (The side meat has been seasoned already with salt and pepper so you may not need more.) If the chowder seems too thick, add a little more water or crab boil (or cream).
Fold in the crabmeat and cook for a few minutes more just to warm it. Serve at once. ### Louis Osteen's Brown Oyster Stew _I first had this stew at breakfast on the last morning of a Southern Foodways Alliance Symposium in Oxford, Mississippi. I remember wishing that I had thought of it first. This has happened to me before with Louis's cooking. He was kind enough to let me reprint his recipe from his cookbook_ Louis Osteen's Charleston Cuisine. MAKES 4 SERVINGS **4 tablespoons benne seeds** **2 tablespoons peanut oil** **2 tablespoons (about 1 ounce) very finely diced pancetta or side meat** **2 tablespoons finely minced yellow onion** **2 tablespoons all-purpose flour** **1 1/4 cups heavy cream** **24 shucked oysters, juice strained and reserved** **1 3/4 cups seafood stock (store-bought is fine)** **1 teaspoon chopped fresh thyme** **1 tablespoon fresh lemon juice** **1 teaspoon sesame oil** **2 tablespoons chopped fresh chervil or Italian parsley, or a combination of both** **Salt and freshly ground black pepper, to taste** **Buttered toast or oyster crackers** Place the benne seeds in a small, heavy-bottomed sauté pan over medium heat and dry roast them by cooking them for about 9 minutes or until they become dark and fragrant. Remove from the stove. Roughly crush half of the seeds with a spoon. Heat the oil in a heavy-bottomed saucepan over low heat. Sauté the pancetta or side meat for about 5 minutes or until crisp and lightly browned. Remove the meat with a slotted spoon and drain on a paper towel. Leave the oil and fat in the saucepan. Add the onion and the crushed benne seeds to the saucepan and sauté for about 3 minutes, stirring frequently. When the onion is slightly browned, add the flour, stir well to combine, and cook for 2 minutes. Meanwhile, in a separate pan, heat the cream to just below a simmer. Whisk the reserved oyster juice, stock, and thyme into the onions. Simmer and stir until there are no lumps.
Add the warmed cream and simmer for 5 minutes more. Add the oysters, the uncrushed benne seeds, lemon juice, sesame oil, and herbs. Cook just until the oysters begin to curl. Taste for salt and pepper. Serve in warm bowls garnished with the pancetta or side meat with buttered toast or oyster crackers on the side. ### Cocktel _This is a cold soup that is often found in Mexican restaurants. It's a cross between gazpacho and a salad. In this version I've used only crabmeat, but I have seen it served with octopus, oysters, and shrimp. The "deluxe" version of this is often finished with a can of Orange Fanta. Skip that part_. MAKES 6–8 SERVINGS **1 (46-ounce) can tomato juice** **1 large green bell pepper, seeded and diced** **1 medium red onion, diced** **2 celery ribs, diced** **2 (or more) jalapeños, finely diced** **1 large avocado, peeled, seeded, and cubed** **Juice and grated zest of 1 orange** **1 tablespoon fresh lemon juice** **1/2 pound fresh crabmeat (grade of your choice), picked over for shell** Combine everything except the crab and chill until very cold. At serving time, fold in the crab. ## Sit-Down First Courses The recipes in this section are for occasions that are a little more formal—that is, you'll want a fork and a place to sit. An aspic cannot be eaten when you are standing around a campfire. The last two recipes could be the main course at a light lunch. ### Corinne Dunbar's Artichoke and Oyster Cocktail _I salute any recipe that calls for a quarter of a stick of butter per person. This is an unusual recipe in another regard. It serves only one. I can imagine curling up in a window seat and eating this out of an antique soup cup on a cold winter afternoon_. _This recipe comes from_ Cooks from Old Brook _(Brookhaven Junior Auxiliary [1982]), a cookbook that I received as a gift over twenty years ago. It was published by the Junior Auxiliary of Brookhaven, Mississippi. The present members of the chapter voted to allow me to reprint it. 
It is one of the best of these locally published books that I've seen_. _I love this recipe, but it is sort of ugly. I sprinkled it with some chopped mint leaves to brighten it up. Any fresh herb except cilantro would work, I think_. MAKES 1 SERVING **1 small artichoke** **4 large oysters** **2 tablespoons unsalted butter** **1 teaspoon Worcestershire sauce** **Salt and black pepper, to taste** **Chopped fresh mint or another favorite herb (optional)** Boil the artichoke in salted water until done, about 16 minutes. (A sharp boning knife should pass easily through the thickest part of the base.) Stop the cooking by submerging it in ice water for 2 or 3 minutes. Set it upside down on a towel to drain for a minute or two more. Scrape the leaves and cut up the heart. Place the artichoke with the rest of the ingredients in the top of a double boiler and cook just until the oysters begin to curl. Serve hot. ### Oysters in Champagne _This recipe came from my late friend Henry Hobbs. He was also from Mississippi and was the father and grandfather of several good friends. He was a large, urbane, witty man and I always enjoyed his company_. _You can buy shucked oysters to make this, but then you won't have the shells to use as serving dishes. Also, you don't need to be extravagant when purchasing the wine. Any nice, clean-tasting one will do, but if you go too far down the list, the reduced wine will have a nasty nose. You'll need to work quickly and have everyone ready to eat these. Poached oysters don't sit well for long. If you are not comfortable cooking so many of these at once, it is fine to cook and then bring them to the table in smaller batches_.
MAKES 6 SERVINGS **3 dozen oysters, well-scrubbed and rinsed** **3 cups champagne or other sparkling wine** **4 tablespoons unsalted butter** **1 generous teaspoon curry powder (hot or mild, according to your preference)** **2 tablespoons chopped fresh parsley** **Salt and black pepper, to taste** Shuck the oysters and save them in their juice. Rinse the shells and reserve the deeper half of each one. (You can do this much in advance.) When you are ready to eat, put the empty shells in a hot oven. Put the champagne in a large skillet or wide-bottomed saucepan and bring to a gentle simmer. Arrange the warmed shells on a platter or on plates. Gently slide the oysters into the simmering wine along with a half cup or so of their juice. Cook them just until they begin to curl a little, a minute max. Put 1 oyster into each warmed shell. Turn the heat to high and quickly whisk in the curry and butter. Whisk vigorously and allow the wine to reduce and thicken just a little. Season the oysters with salt and pepper. Add the parsley to the sauce and spoon a little over each oyster. Serve at once. Serve with champagne, of course, and any of the leftover broth. Toast Colette. ### Crab Aspic _OK, OK, aspic is the joke food of southern cooking. Stuffy ladies' luncheons spring to mind. It's one of those dishes that shows up all the time even though everyone claims to dislike it. It is the fruitcake of the salad world. This recipe is an amalgam of two—one from my aunt Theresa and the other from a friend of hers from church. I tried to taste this with a dispassionate tongue and have decided that I like it. I also wonder now why everyone makes fun of these. But then I also like fruit cake_. _It's best to make this a day ahead if you can so you will be sure that the mold has set completely. The original instructions called for a "decorative ring mold." I didn't have one of those so I used a one-quart porcelain loaf pan. 
This is probably lucky because I have vague memories of decorative ring molds that wouldn't release their contents evenly, leading to irritated panics on the part of hostesses. Whatever you use, dip a paper napkin in cooking oil and coat the pan completely. You don't want it to be greasy, but you must cover every bit of it. Put the mold in the refrigerator to chill_. _One of the recipes suggests serving this in "lettuce cups." If you do, slice the mold using a thin, sharp knife; dip it in very hot water each time since the crabmeat can be uncooperative_. MAKES 6 SERVINGS **1/2 pound fresh crabmeat (grade of your choice), picked over for shell** **2 envelopes (1 1/2 tablespoons) gelatin** **1/2 cup cold water** **2 1/4 cups tomato juice** **1/4 cup cider vinegar** **1 teaspoon Worcestershire sauce** **1 teaspoon fresh lemon juice, plus a little grated rind** **1 teaspoon prepared horseradish** **1/2 teaspoon salt** **Pinch or 2 of cayenne pepper** **1 teaspoon Tabasco sauce** **1 tablespoon seeded, minced jalapeño pepper (about half a pepper)** **1/2 cup minced celery** **5 martini olives, thinly sliced into rounds** **1 cup mayonnaise spiked with 1/2 teaspoon paprika and 1/4 teaspoon chili powder** Set the crab in a sieve to drain. You will discard any liquid. Dissolve the gelatin in the cold water. Bring 1 1/4 cups of the tomato juice to a simmer in a nonreactive saucepan, then remove from the heat. Whisk the gelatin into the juice and then stir in the remaining juice and the vinegar. Whisk in the Worcestershire sauce, lemon juice and rind, horseradish, salt, cayenne pepper, and Tabasco. Set the pan, uncovered, in the refrigerator to cool for at least 1 hour. Stir from time to time. Just before the gelatin is completely set up, it will pass through a stage that resembles a runny jelly. This is when you fold in the vegetables and the crab. Use a spatula and distribute the ingredients evenly. Turn into the oiled loaf pan. 
Return to the refrigerator and allow to set, overnight if possible. To unmold, set the aspic mold in a pan of hot water that will come 3/4 of the way up the sides of the mold for 2 or 3 minutes. This will slightly melt the edges, allowing the gelatin to come free. At the same time, gently loosen the mold from the edges using your fingers. Remove the mold from the water and dry the outside of it. Place a serving platter on top and quickly flip the mold, giving it a few sharp shakes as you do. The salad should emerge jiggling and glistening. Return it to the refrigerator to recover. You may either bring the salad to the table as a piece or slice it into individual servings. Top with a dollop of the seasoned mayonnaise. ### Oyster Fritters _I've deliberately avoided recipes that call for ground or minced oysters—and there are more of them than you would think. Here is the exception. In Belhaven, North Carolina, there was a famous inn called the River Forest Manor. Its buffet was legendary. The dish most often mentioned was the oyster fritters. I couldn't find that recipe, but this reminds me of it. I've added corn as recommended in another recipe that I came across from Maryland. When really hot, these fritters almost don't need any sauce, but I've included a tartar sauce recipe for good measure (page 46). Make the tartar sauce in advance and put it in the refrigerator so it can set up a little. It'll be good with other things in this book as well_. MAKES 3–4 DOZEN **6 large shucked oysters, drained and roughly chopped** **1 1/2 cups all-purpose flour** **1 teaspoon salt** **2/3 cup milk** **4 eggs, separated** **1 tablespoon cooking oil, plus more for frying** **1/2 cup cooked corn kernels (frozen is fine or use leftover corn on the cob)** Sift the flour and salt together into a large bowl. Add the milk and mix well. In a separate bowl beat the egg yolks with the oil and then beat into the batter.
Beat the egg whites with a pinch of salt to stiff peaks and fold by thirds into the batter. Combine the oysters and the corn and carefully fold them into the batter. Let rest for half an hour in the refrigerator to "cure." Fill a straight-sided saucepan with 4 inches of oil and heat the oil to 365°. (If you don't have a thermometer, you can test the temperature with fair accuracy by dropping a few specks of the batter into it. If the oil is ready, the batter will sizzle, float, and brown quickly.) Working in batches, drop the batter into the oil by tablespoons. Don't crowd. When the bottoms get a little brown, 2 to 3 minutes, turn the fritters and cook until this side is also brown. Remove to a dish covered with paper towels and continue cooking the rest of the fritters. You will need to stir the mixture from time to time as the corn and oysters will settle to the bottom. Mostly these work, but every once in a while a fritter will fall apart in the oil. I ate these fragments as I cooked for everyone else. If too many fall apart, tighten the batter with a little more flour. (See the comments about oyster juice on page 11.) ### Tartar Sauce MAKES ABOUT 2 PINTS **2 cups mayonnaise** **1/4 cup drained and roughly chopped capers** **1/4 cup chopped fresh parsley** **1/4 cup minced scallions, green and white parts** **3 garlic cloves, minced** **1/2 cup chopped pickles or drained pickle relish** **2 tablespoons Dijon mustard** **2 teaspoons grainy mustard** **2 tablespoons whole mustard seeds, quickly toasted in a dry skillet** **1 teaspoon dry mustard** **2 tablespoons fresh lemon juice** **Pinch of cayenne pepper** Mix everything together and chill. ### Crabmeat Salsa _My Mexican friends have been feeding me variations of this for years, only instead of crabmeat, they use either canned tuna or raw hamburger—really. We eat it on fried tortillas, but it is easy to imagine it spooned over black beans and rice. 
Of course you can buy ready-to-eat chips for this, but warm, just fried tortilla wedges bump everything up a notch_. MAKES 4 CUPS **2 large ripe tomatoes, diced into 1/4-inch pieces** **1 large onion, diced into 1/4-inch pieces** **2 jalapeños, or more to taste, finely diced** **Juice and grated zest of 1 lime** **3 tablespoons good-quality olive oil** **1/4 cup chopped fresh cilantro** **Pinch of salt** **1 cup fresh crabmeat (grade of your choice), picked over for shell** **Salt and black pepper, to taste** **Fresh tortillas or store-bought tortilla chips** **Cooking oil for frying** In a medium bowl, toss the tomatoes, onions, and jalapeños with the lime juice and zest, oil, cilantro, and salt. Let the mixture sit in the refrigerator, covered, for half an hour. Fold in the crab, trying not to break it up too much. Season with salt and pepper. Fan the edges of the stack of tortillas as you would a deck of cards so that they will separate easily. Cut the whole stack into sixths. Fill a straight-sided saucepan with enough oil to float the tortilla wedges and heat the oil to about 360°. Working in batches so as not to crowd them, fry until crispy and brown. Drain them for a second in a sieve or on a towel and then toss with salt. Serve the salsa with the warm tortillas. ### Crab and Shrimp Calas with a Riff on Tartar Sauce _I first had calas (rice fritters) in New Orleans the summer before Katrina. My friend Poppy Tooker, a Louisiana food writer and radio host, served them at a meeting of the Southern Foodways Alliance. Calas are a traditional street food but they had begun to disappear. It is Poppy's mission to save them from passing out of memory and being lost. On this morning, she served a sweet dessert variety_. _Jump forward eight years. Another friend, Lolis Elie, was promoting a cookbook_ (Treme: Stories and Recipes from the Heart of New Orleans) _based on his TV series_ Treme.
_At the party we threw when he came to Chapel Hill, I wanted to serve at least one thing from that book. He and Poppy had by then come up with a savory version of calas, and this is based on their recipe. They used crayfish. I used crabmeat and shrimp for the party. They are so easy and so good that they are on my menu all the time now. I have been enlisted in Poppy's cause_. _Make the sauce first since the sour cream will need a while to recover its consistency. Grating martini olives is tedious, but for this delicious sauce it is worth it. My admiration for grated onion grows daily_. MAKES 3–4 DOZEN **FOR THE SAUCE** **1/2 small onion, grated** **1/2 small unpeeled cucumber, grated** **6 martini olives, grated** **2 cups sour cream** **Salt and black pepper, to taste** **FOR THE CALAS** **4 cups cooked rice** **12 scallions, both green and white parts, roughly chopped** **3/4 cup all-purpose flour** **1/2 teaspoon salt** **4 teaspoons baking powder** **4 eggs, well beaten** **1 cup fresh crabmeat (grade of your choice), picked over for shell** **1 cup boiled shrimp, well salted, cooled, and roughly chopped** **Cooking oil for frying** To make the sauce, put the onions, cucumber, and olives into a sieve, sprinkle with a little salt, and drain for about 10 minutes. In a small bowl, combine the sour cream with the vegetables and season with salt and pepper. Set the bowl in a bowl of ice and chill in the refrigerator for half an hour so that the sour cream can set up again. (Make a sandwich from the unused cucumber half, since it won't keep.) To make the calas, put the rice in a large mixing bowl. Purée the scallions in a food processor until almost liquid and fold into the rice. Combine the flour, salt, and baking powder and stir into the rice. Fold in the eggs, followed by the seafood. Let the batter rest in the refrigerator for half an hour. Using a small ice-cream-style scoop, form the batter into 1-inch balls.
Fill a straight-sided saucepan with enough oil to float the calas and heat it to about 360°. (If you don't have a thermometer, you can test the temperature with fair accuracy by dropping a few specks of the batter into it. If the oil is ready, the batter will sizzle, float, and brown quickly.) Place as many calas into the oil as you can without crowding them. As they cook, they will float and brown. Usually they will turn themselves over as they cook; if not, do this with tongs. Fry for 4–5 minutes. Break one open to make sure they are done through. Serve hot with the sauce. ### Crabmeat Remoulade _There was once a wonderful, old-school French restaurant on the East Side of Manhattan called La Cote Basque. The walls were painted with big murals of St-Jean-de-Luz. Bobby Short always seemed to be having lunch there. It was there that I first saw sauce painting. In the 1970s La Cote Basque was a touchstone in New York for cooks in French restaurants everywhere else. One of the many things I loved to get there was a remoulade of crab. This was the mustardy sauce of classic French cooking, not the red spicier one of Louisiana. I don't have the original recipe, but this is a close approximation. It really needs nothing else except a lettuce leaf to perch on and perhaps an olive or two on the side. Be sure to eat the leaf as well. This sauce is hard to make in smaller amounts, so this recipe makes more than you will need for a pound of crabmeat, but it keeps well and is good on other things, like hard-boiled eggs, Belgian endive, or romaine hearts_. MAKES 4 SERVINGS **1 pound fresh jumbo lump crabmeat, picked over for shell** **1 cup boiling water** **1 1/2 cups Dijon mustard** **3/4 cup fresh lemon juice** **3 cups olive oil** **1 tablespoon chopped fresh parsley** **Salt, to taste** **Lettuce leaves** **Olives (optional)** Go over the crabmeat for bits of shell, but try to leave it in large pieces. 
Usually, each lump will have a small blade of soft shell that helps hold it together. You can remove this if you like, but growing up, I just learned to spit it out. Put the water, mustard, and lemon juice in the bowl of a food processor. With the machine running, slowly drizzle in all of the oil. Mustard is often already salty, so taste for salt. Gently toss the crab with some of the dressing to just moisten it. Serve as described above. ### Crabmeat Ravigotte _Ravigotte is a French term indicating refreshment. (In Quebec years ago there were advertisements for the soft drink 7-Up that claimed "Ce Ravigotte!") I found examples of this recipe from Louisiana and from the southern parts of both Mississippi and Alabama. It is indeed refreshing_. MAKES 4–6 SERVINGS **1 pound fresh lump crabmeat, picked over for shell** **1 tablespoon fresh lemon juice** **Salt and black pepper, to taste** **1 tablespoon Worcestershire sauce** **3 tablespoons olive oil** **4 scallions, both green and white parts, finely chopped** **1/2 teaspoon finely chopped garlic** **4 heaping tablespoons mayonnaise** **1 hard-boiled egg, grated** **Lettuce leaves** In a medium bowl, combine the crabmeat with the lemon juice and season with salt and pepper. Let stand for 30 minutes. In a separate bowl, combine the Worcestershire sauce, olive oil, scallions, garlic, and mayonnaise. Divide the crab into serving portions on the lettuce leaves. Spoon some of the sauce on top of each serving. Garnish with the hard-boiled egg. ## Either/Or **RECIPES THAT CAN BE AN APPETIZER OR A MAIN COURSE** The two recipes in this section can be used either as an appetizer or a main course. A big plate of fried oysters is a fine dinner as far as I'm concerned, but both at Crook's Corner and at dinner parties, I've served smaller portions as a first course. The same is true for oyster stew. In case you decide to serve fried oysters for dinner, an excellent recipe for slaw is on page 92.
Fried seafood and slaw traditionally go together in the South. People are always whining about fried food in the South, usually because they think the breading masks the flavor of what's been fried. I have absolutely no patience with this. There is in fact a tradition of beloved and tasty commercial breading mixes here. The recipe below got its start in Louisiana, but that isn't the only place that loves prepared seafood breaders. Many of these products predate the onslaught of convenience foods that came at the end of World War II. Three milling companies in North Carolina that make them are over a hundred years old. Almost all of them contain yellow cornmeal or flour, wheat flour, and powdered eggs. Variations include the addition of cracker meal, onion powder, and various sweeteners. I tried several of these and liked them all. The flavors were familiar as well because they have always been used in seafood restaurants along our coast. ### Traditional Oyster Stew _It amazes me that something this good could be so easy. The only variation I ever encounter is chopped scallions added right at the end. Although it seems odd to me now, as a child I was often given this when I was sick_. MAKES 4 SERVINGS **4 tablespoons unsalted butter** **1 quart whole milk** **1 pint shucked oysters with their juice** **Salt and black pepper, to taste** **2 tablespoons chopped scallions (optional)** **Oyster crackers** Put a tablespoon of butter in the bottom of each of four bowls. Bring the milk to a strong simmer in a large saucepan. Gently stir in the oysters and their juice. Return the milk to a simmer but do not allow a full boil. When the oysters are just beginning to curl a little on their edges, turn off the heat. A minute in the simmering milk should be sufficient for this. Season with salt and pepper, keeping in mind that some oysters are saltier than others. Divide the oysters among the four bowls, then fill each with the warm milk. Garnish each with scallions, if using.
Don't forget the oyster crackers. ### Fried Oysters _This recipe is a happy accident. When I come home from a trip to New Orleans, I try to make a quick pass through the French Market on my way to the airport to see what's what. On one such visit I grabbed up lots of those local brand-name products that Louisianans love but are not found anywhere else. One was a seafood breader. I used it to fry flounder. The box was already in the trash before I realized how delicious it was. I fished it out of the dumpster. It revealed that it contained corn flour instead of cornmeal, which is traditional here. Thanks to the Latino grocery store around the corner, the corn flour was eventually replaced by Maseca, which is corn flour milled especially for tamales. This improved the recipe even more. Because I love crust on fried foods, I always use self-rising flour in my breadings. It gives a good puff when cooking. I serve these with Basic Cocktail Sauce (page 63) or Tartar Sauce (page 46). Lately I've taken to Sriracha sauce stirred into mayonnaise until it is the color of Thousand Island dressing_. MAKES SNACKS FOR 4 OR DINNER FOR 2 _(although I can easily eat a pint of oysters myself)_ **2 cups Maseca** **2 cups self-rising flour** **2 teaspoons coarse sea salt, plus more for dusting oysters after they are cooked** **1 teaspoon freshly ground black pepper** **1 pint shucked oysters, drained** **4 cups, more or less, oil for frying** **Lemon wedges** Combine the flours, salt, and pepper in a bowl. Taste to make sure that it is seasoned to suit you; set aside. In a straight-sided saucepan, heat the oil to 365°. The oil should be deep enough to float the oysters. If you don't have a thermometer, you can test the oil temperature with fair accuracy by dropping a little of the breading in it. If it sizzles, the oil is ready. Drain the oysters. 
Working in batches so as not to crowd the oysters in either the breading bowl or the frying pan, toss the oysters in the breading and then transfer them to the oil. Fry for a minute to a minute and a half at most. They should float and be pretty and brown when done. Let the oil recover its heat between batches. Drain the oysters in a bowl lined with a clean kitchen towel, then dust with sea salt. Serve at once with lemon wedges and a favorite sauce. ## Out in the Yard Roasted Oysters and Hard-Crab Stew are messy and thus are summertime meals that my family always served outdoors. You can of course bring them both inside if you like. I've included instructions for cooking the oysters in either place. The crab stew is cooked indoors, wherever you choose to eat it. ### Roasted Oysters _The oyster roast is a tradition all along the coastal South, and I've attended them my whole life. This isn't so much a recipe as it is a procedure, and a free-form procedure at that. Although big oyster parties are generally outdoor affairs, it is quite possible to roast oysters in the kitchen oven, so I've included instructions for cooking them in either place. The same sauces used for raw oysters all work well with roasted ones_. SERVES AS MANY AS YOU'D LIKE **At least 12 large oysters per person** **Basic Cocktail Sauce (recipe follows)** **Basquaise Sauce (page 23)** **Lemons and horseradish** You must first clean the shells. It is rare to receive oysters that couldn't use one more scrubbing. In the yard with a garden hose is the most effective, but if that isn't possible, the sink will do. Submerge the oysters in cool water to loosen the dirt, but remember that they won't like fresh chlorinated water. Left too long, they will probably die, in fact, so after a quick soak, rinse them under cold running water. Refrigerate them under damp cloths until you are ready to cook. If you are cooking outside, you will need to prepare a fire and let it burn down to glowing coals.
This can be done in your outdoor grill. We always used wood growing up, but in recent years I have seen charcoal used successfully. Again, when I was growing up, there was always a sheet of corrugated metal around to serve as a roasting pan. You can use any baking sheet or roasting pan for this, but keep in mind that they may warp on the fire, so you may want to use something that is already old and beat up. Place the oysters on the pan and cover with wet cloth. If the oysters came in a burlap bag, rinse it out and use it. It will be perfect. Cooking time will vary because of the size of the oysters and the amount of time they have been out of the sea. They are ready when they have just _barely_ begun to open. People will need to be careful retrieving these from the fire. It might be a good idea to have one person in charge of this. If you are going to do this inside, preheat your oven to high (500° in most ovens) or broil. Have your guests assembled. Place the oysters on sheet pans in single layers and cover with wet cloth. Put the oysters in the oven. Start looking at them after 8–10 minutes. People like different degrees of doneness. I like mine when they have just barely begun to open. They will be hot of course, but they don't hold well, so people will need to figure out how to handle them without getting burned. They always do. You can provide dish towels or oven mitts to help out, but these usually quickly become soaked with juice, causing diners to lose patience with them. Give everyone oyster knives and lead them to the sink. It's best to open oysters with the deeper side of the shell on the bottom, so you can drink the juices that will gather there. ### Basic Cocktail Sauce _This is ready in about three seconds. You can adjust the amounts of horseradish or Tabasco sauce to suit your need for spiciness. This is also good with any kind of fried seafood_.
MAKES 1 1/2 CUPS **1 (10-ounce) jar commercial cocktail sauce** **Juice of 1 lemon** **2 tablespoons horseradish, or to taste** **Dash of Tabasco sauce** Mix everything together and refrigerate until needed. ### Hard-Crab Stew _Hard-crab stew is always a summer supper because it is so messy to eat that no one wants to serve it in the house. My grandmother only made it once or twice a year, and it was always a big deal. We always assembled at the newspaper-covered picnic table to eat this. All the shells could be rolled up and taken to the trash when we were through. This is the second version of this recipe that I have published. The first was one I found among my grandmother's things after she died. It was in her handwriting, but it wasn't quite what I remembered. Recently, a new version has surfaced. It was given to me by an old friend of the family who had been given it years ago, again in Grandmother's handwriting. Needless to say, this discovery sent a ripple of disbelief through the family ("it was in_ her _handwriting!"), but this newer version seems like the ur-stew to me_. _There is a common variation of this recipe where the bread is omitted and replaced by white cornmeal dumplings. Some of the dumplings are broken up into the stew to thicken it, so no cornmeal is stirred in at the end. I've included a recipe for the dumplings below. Be warned. You will be eating this stew mostly with your hands. Claw crackers would be handy.
(We used to get yelled at for cracking the claws with our teeth.)_ SERVES A CROWD **1/2 pound side meat or fatback** **2 medium onions, peeled and cut into large dice** **2 dozen hard crabs, cleaned and halved** **1/2 teaspoon crushed red pepper flakes** **4 bay leaves** **1 teaspoon dried thyme** **6 baking-size potatoes, peeled and cut into eighths** **3/4 cup all-purpose cornmeal, stirred into 2 cups of cold water and shaken in a mason jar** **Salt and black pepper, to taste** **Sliced white bread or Cornmeal Dumplings (recipe follows)** Render the side meat in a large stockpot. Do this slowly on low heat, as it has a low smoking point and you want to extract as much fat as possible before it gets too brown. It will resemble crisp bacon in color when ready. Add the onions and sauté until soft but not brown. Add the crabs and cover with cold water. Add the red pepper, bay leaves, and thyme. Bring to a boil, reduce the heat and simmer for half an hour. Add the potatoes and cook until they are well-done, 15–20 minutes more. Turn up the heat a little (but you don't want a hard boil) and stir in the cornmeal and water. (Omit this step if you are using the dumpling variation.) This will be a little difficult because of the crabs. You need to mix this in thoroughly. Bring back to a simmer until the stew begins to thicken. If you are using dumplings, now is the time to add them. Tuck them around the edge of the pot and spoon a little of the soup over each one from time to time. Season the stew with salt and pepper, keeping in mind that some side meat is saltier than others. To serve, ladle into large soup bowls, giving everyone crabs, a few dumplings, and potatoes. To serve without dumplings, put a slice or two of white bread in the bottom of large soup bowls and ladle the stew, crabs and all, on top. ### Cornmeal Dumplings _These are dense, unleavened dumplings. My great grandmother also cooked them on top of her collards_. 
MAKES 12 DUMPLINGS **2 cups white cornmeal** **2/3 cup all-purpose flour** **1 teaspoon salt** **1 1/4 cups water** Sift the dry ingredients together into a large bowl, then completely mix in the water. With wet hands, divide the dough into twelve equal portions and form each into an oval-shaped dumpling. Place them around the edges of the stew pot for the last 20 or so minutes of cooking. Spoon the stew over the tops from time to time. Gently stir the stew to keep the dumplings from sticking, and break up a few of them with the spoon. ## Dinnertime Travel often leads to adventures at the dinner table, and all this talk of supper makes me remember a trip to Japan. There is a wonderful restaurant in the Gion district of Kyoto called Kappa Nawate. It's basically a lunch counter surrounding a grill on three sides. It is small, crowded, loud, and merry. I had already eaten a great deal there when I noticed an enormous red crab sitting in a display case. I pointed to it, not realizing that I would be served the whole thing. The preparation is elaborate. It ends with sake being boiled in the carapace. You drink this like soup. Then the cook tells you that you must chew on this mesh of brown fibers with red dots that is found behind the eyes. When in Rome.... It was delicious. This section actually begins with one recipe that is served with main courses—oyster dressing. It's generally served at holidays alongside a turkey or a ham. All the others are main courses. A few are peculiar, but that is deliberate. To me the cultural and the culinary are inseparable, and I wanted this book to illustrate both. The biscuits that are used with the oyster shortcake are delicious with other things as well. ### Oyster Dressing _Someone in my family must not have liked this. It's a favorite holiday side dish everywhere, but, to my memory, we never were served it. Therefore it was a pleasure to try this out. I tried this once with crumbled stale cornbread instead of white bread.
It was good but a little less solid_. MAKES A GENEROUS QUART **1 stick unsalted butter** **5 celery stalks, sliced thinly** **1 medium onion, chopped medium** **3 cups cubed and toasted white bread** **1/4 cup chopped fresh parsley** **1/2 teaspoon salt** **1/4 teaspoon black pepper** **1/2 teaspoon dried sage** **1/2 teaspoon dried thyme** **2 eggs, well beaten** **1 pint shucked standard oysters, drained** Preheat the oven to 350°. Melt the butter in a skillet and sauté the vegetables to soften but not brown. Pour the contents of the skillet into a large bowl and combine with the bread and all the herbs and seasonings. Fold in the eggs and oysters. Let it sit a minute so the bread can absorb the moisture. Spread evenly in a buttered baking dish and bake for about 20 minutes or until brown on top and firm at the center. ### Deviled Crabs _I think that there are probably as many recipes for deviled crabs as there are cooks who make them. They are the potato salad of seafood. They are also the first crabs that I clearly remember. I grew up on these. They are essentially a crab cake that has been baked in a crab shell. I went through a stack of church cookbooks to pull together this recipe. Different people have different ideas about what deviled means and about how much of it they want. I found some recipes that called for the tiniest amount of seasoning that I've ever seen. I was raised to believe that deviled meant spicy, but this spice comes from mustard rather than chilies. I did find one with horseradish, but the heat baked out of it and it tasted sour. I didn't like it_. _Growing up, we caught and picked most of our crab so we always had shells to stuff. If you're not doing that and don't live near the coast where you might be able to buy some, use buttered, four-ounce porcelain or glass baking ramekins_. 
MAKES 8–12 SERVINGS, DEPENDING ON THE SIZE OF THE CRAB BACKS YOU HAVE **1 pound fresh back fin crabmeat, picked over for shell** **3/4 teaspoon salt** **2 tablespoons unsalted butter** **1 small green bell pepper, diced (about 1 cup)** **12 saltine crackers, crumbled, plus a few more to sprinkle on top** **2 tablespoons yellow mustard** **3 tablespoons mayonnaise** **1/4 cup cider vinegar** **1/2 teaspoon Tabasco sauce** **1 tablespoon Worcestershire sauce** **2 tablespoons fresh lemon juice** **2 tablespoons cold unsalted butter** Preheat the oven to 375°. If you are not using crab backs, butter 8–12 ramekins. In a large bowl, toss the crabmeat with the salt and let rest for 10 minutes. Melt the butter in a skillet, add the bell peppers, and sauté just until soft. Pour off the excess butter. Add the cracker crumbs to the crabmeat and toss to combine; stir in the green peppers. In a separate bowl, mix the mustard, mayonnaise, and vinegar together with the seasonings, then fold this into the crab. Fill the crab backs (or ramekins) with the crab mixture. Sprinkle with more cracker crumbs and dot with the cold butter. Bake for about 15 minutes or until hot at the center. The time required will vary some according to what they're cooked in. Crab shells are very thin and will thus heat through more quickly than china ramekins. Serve at once. ### Jean Anderson's Stuffed Crab au Gratin #### _Santola Recheada a Gratinada_ _I love Portugal. Jean Anderson is absolutely nuts about it. Luckily for me, when this talented and remarkably prolific cookbook author decided to move back home to North Carolina, she came to Chapel Hill. We've become friends, sharing the occasional dinner or food-themed excursion. This recipe comes from her classic cookbook_ The Food of Portugal _(first published in 1986). This recipe struck me as perhaps a cousin of our North Carolina–style deviled crab. I especially love the black olives_. 
MAKES 6 SERVINGS **1 medium yellow onion, peeled and finely chopped** **1 medium carrot, peeled and finely chopped** **1/3 cup finely chopped sweet red peppers** **1 tablespoon olive oil** **1 tablespoon unsalted butter** **2 tablespoons water** **1 pound fresh lump crabmeat, picked over for shell, then flaked** **1 1/4 cups moderately fine soft white bread crumbs, divided** **1/3 cup mayonnaise** **3 tablespoons chopped oil-cured black olives** **3 tablespoons tawny port** **3 tablespoons light cream or half-and-half** **1 tablespoon Dijon mustard** **1 tablespoon fresh lemon juice** **1 tablespoon minced fresh parsley** **1/2 teaspoon salt** **1/4 teaspoon hot red pepper sauce** **1/8 teaspoon black pepper** **2 tablespoons grated Parmesan cheese** Preheat the oven to 400°. In a small heavy skillet over moderate heat, sauté the onions, carrots, and red peppers in the butter and oil for about 2 minutes. Add the water and cover, then turn down the heat as low as possible and let steam for 15 minutes. Meanwhile, place 1/2 cup of the bread crumbs and the rest of the ingredients, minus the cheese, in a large bowl and toss lightly to mix. Add the skillet's contents and mix well. Mound into crab backs or buttered 5- to 6-ounce ramekins. Combine the remaining bread crumbs with the cheese and sprinkle on top of the crabs. Bake, uncovered, for about 20 minutes. The tops should brown a little. ### Crab and Oyster Gumbo _I suspect that every family in Louisiana has its own recipe for gumbo. This is a version of the one that we use at Crook's Corner. Almost every kind of meat or seafood can appear in gumbo, so say you have a handful of leftover breakfast sausage with no home, for example, you can crumble that in_. 
SERVES A CROWD **12 tablespoons unsalted butter** **2 cups all-purpose flour** **1 small whole chicken (3 pounds or so)** **1 cup diced side meat or bacon** **4 cups diced onions** **4 cups diced green bell peppers** **1/2 pound andouille (or some other sausage that you like), sliced into 1/4-inch rounds** **2 tablespoons chopped garlic** **1 (28-ounce) can crushed tomatoes** **4 bay leaves** **1/2 teaspoon crushed red pepper flakes** **1/2 teaspoon dried thyme** **1/2 teaspoon dried oregano** **1/2 teaspoon dried basil** **1 pound sliced okra (fresh is great, but frozen is fine and comes in 1-pound bags)** **Salt and black pepper, to taste** **1 pound fresh special crabmeat, picked over for shell** **2 pints shucked oysters** **Cooked rice** Preheat the oven to 350°. Melt the butter in a cast-iron skillet. Whisk in the flour until completely incorporated. Stir constantly until the roux has taken on the color of peanut butter. Put the skillet in the oven and bake for 45 minutes or so, stirring from time to time, or until the roux is the color of almost-burnt toast. It can get quite dark before it begins to taste burned. Remove from the oven and let it rest for half an hour. Pour off any oil that has collected on the top. Set aside. (You can do this part days in advance. Just rewarm the roux when you are ready to resume cooking.) Bring a large pot of water to a boil (there should be enough water to float the chicken). Add the chicken and return to a boil. Cook for 15 minutes, then turn off the heat and let the chicken sit for 20 minutes more. Remove the chicken from the pot and refrigerate. When it's cool enough to touch, pick the meat from the carcass and return the meat to the refrigerator. Throw the skin and bones back into the stock and bring to a hard boil. Boil for 1 hour and then strain the stock into another pot to cool a bit and to settle. This will also give you the opportunity to degrease the stock.
(You don't have to remove every speck of grease, by the way.) Render the side meat or bacon in a heavy-bottomed soup pot until you have some grease. Add the onions and bell peppers and sauté until soft. Add the andouille and cook for 5 or so minutes. If things brown a little, that's fine. Stir in the garlic. Cook for 3 minutes, then add the tomatoes and 4 cups of the strained chicken stock. Keep in mind that you may have to add more of the stock or water later. Bring to a simmer and add the seasonings. Simmer for 45 minutes, stirring from time to time. Fold the warm roux, bit by bit, into the gumbo. You will need to stir this often since the flour tends to sink and is easy to scorch. Bring to a simmer and add the okra and chicken. Cook until the okra is done, about 10 minutes more. Taste for salt and pepper. At serving time, fold in the crabmeat and oysters. Cook just until the oysters begin to curl. Serve at once in large bowls over a scoop of rice. ### Stuffed Crabs _I came upon this recipe on a second pass through that fantastic cookbook from the Brookhaven Junior Auxiliary. It is in effect an un-deviled crab. The interesting thing here is the use of egg yolks creamed into whole butter as a thickener_. FILLS 10 CRAB BACKS **2 tablespoons unsalted butter** **1 tablespoon bacon grease** **1/2 cup finely chopped green bell peppers** **1/2 cup finely chopped celery** **2 cups heavy cream** **3 tablespoons unsalted butter, softened** **2 egg yolks** **1/2 teaspoon paprika** **1 tablespoon fresh lemon juice** **1 pound fresh crabmeat (grade of your choice), picked over for shell** **1 cup toasted bread crumbs, plus more to dust the tops** **Melted unsalted butter to baste the crabs** Preheat the oven to 400°. Heat the butter and grease in a skillet; add the bell peppers and celery and sauté until soft. Add the cream and bring to a simmer. Cream the butter, egg yolks, and paprika together with a fork until smooth and stir into the simmering cream until it thickens.
Do not boil. Add the lemon juice, then the crabmeat. Simmer for a minute or two more, stirring constantly. Fold in the bread crumbs. Fill the crab shells, dust with more bread crumbs, and drizzle with a little melted butter. Bake for 20 minutes to warm through. Serve at once. ### Crabes Farcis _This must be the Louisiana version of the stuffed crabs from Mississippi. They share that strange ingredient of egg yolks creamed into butter, which I don't recall ever seeing before this project, only this time the egg yolk has been boiled. I was surprised by how much I liked this. It is so much less rich than so many of these recipes are, and I love all of the garlic_. MAKES 6 SERVINGS **12 large crabs, boiled in salted water, or 1 pound fresh back fin crabmeat, picked over for shell** **1 tablespoon unsalted butter** **3 hard-boiled eggs, yolks and whites separated** **3 garlic cloves, minced** **2 tablespoons chopped fresh parsley** **Zest and juice of 1 lemon, divided** **2 tablespoons dry sherry** **2 tablespoons bread crumbs** **Lemon wedges** Preheat the oven to 350°. If you have boiled your own crabs, carefully pick the meat and save the 6 prettiest backs; set aside. If you are not using crab backs, butter 6 ramekins. In a small bowl, mash the butter and egg yolks together. Rough chop the egg whites and place in a large bowl. Add the crabmeat, garlic, parsley, lemon zest, sherry, and butter-and-egg-yolk paste and stir until well combined. Fill the crab backs (or ramekins), dust the tops with bread crumbs, and squeeze the lemon over them all. Bake for 15–20 minutes. (Ramekins take a little longer than crab backs to heat through.) Serve hot with lemon wedges. ### Soft-Shell Crabs _I like to serve two medium-size crabs per person. At Crook's, I generally order primes, which is the middle classification of five (see page 6). They fall between hotels and jumbos. Over the years I've switched from cornmeal, to corn flour, to Maseca for my seafood breading.
Maseca is the corn flour ground to make tamales_. MAKES 4 SERVINGS **8 cleaned soft-shell crabs** **2 cups buttermilk** **2 cups self-rising flour** **2 cups Maseca** **1/2 teaspoon salt** **1/4 teaspoon black pepper** **1/2 cup clarified butter or cooking oil** **1 stick unsalted butter** **4 tablespoons chopped garlic** **1/4 cup fresh lemon juice** **1/2 cup finely shredded basil leaves** Place the buttermilk in a large bowl and submerge the crabs in it. In a separate bowl, combine the flour, Maseca, salt, and pepper. In a large skillet over high heat, heat enough butter or oil to cover the bottom by 1/4 inch. (You don't want to crowd the crabs as they cook or they won't be crisp, so cook them in batches if need be, adding more butter or oil as you go. The cooked crabs may be held in a warm oven while you finish a batch.) When the oil begins to shimmer, working with one crab at a time, remove them from the buttermilk, shake off the excess, put them in the flour to coat completely, and shake off the excess as well. Place as many of the crabs into the pan as you can, shell side down, without crowding. They should sizzle at once. Cook for 3 or 4 minutes on one side or until brown, then flip each crab and brown the other side. If the crabs are browning too quickly, you may need to turn the heat down a little at this point, but you still want the sizzle. Once the crabs are brown and clearly cooked through, move them to a serving platter and put them in a warm oven. You may need to wipe the pan between batches if there is a lot of dark breading. When all the crabs have been cooked and are in the warm oven, wipe the pan clean and return it to high heat. Toss in the whole butter and swirl the pan as it melts. In a minute or two it will begin to brown and to smell sort of toasty. Throw in all of the garlic and swirl without ceasing. Cook just until the garlic starts to turn brown. Add the lemon juice and basil and stir to combine.
Pour the sauce over the crabs and serve at once. ### Crabmeat Cobbler _I got the idea for this recipe in a mysterious, coverless ring-bound collection that my father had. It is attributed to someone named Jay Lippitt, a person unknown to me. I had never seen a seafood cobbler before, and even more intriguing, the crust appeared to be old-fashioned drop cheese biscuits. What's more, the onions are cooked in a double boiler rather than sautéed. My first attempt produced something that resembled a particularly good tuna casserole. After several testings, I came up with this sort of potpie interpretation_. MAKES 6–8 SERVINGS **FOR THE FILLING** **24 pearl onions** **1 stick unsalted butter** **1/2 cup chopped onions** **1/2 cup sifted all-purpose flour** **1 teaspoon dry mustard** **1 cup milk** **1 cup shredded cheddar cheese** **3 hard-boiled eggs, chopped** **1 cup crabmeat** **1 1/2 cups drained diced tomatoes (fresh are good in season, otherwise use canned)** **2 teaspoons Worcestershire sauce** **1/2 teaspoon salt** **FOR THE CRUST** **1 cup all-purpose flour** **2 teaspoons baking powder** **1/2 teaspoon salt** **1/4 cup shredded cheddar cheese** **2 tablespoons shortening, lard, or butter** **1/2 cup milk** Preheat the oven to 450°. Cut off the root ends of the pearl onions and toss into boiling salted water for a few seconds. Transfer to ice water, then peel; set aside. Melt the butter in a double boiler; add the chopped onions and cook until tender, about 10 minutes. Combine the flour and mustard and stir into the onions. Follow with the milk, then the cheese. Cook, stirring constantly, until thick. Fold in the pearl onions, eggs, crabmeat, tomatoes, Worcestershire sauce, and salt and pour into a 2-quart casserole. Start the crust by sifting together the dry ingredients. Using 2 forks, thoroughly blend in first the shortening and then the cheese. Gradually stir in the milk, but add only enough to moisten the flour (you may not need the entire 1/2 cup). 
Drop the batter with a soup spoon over the top of the warm crab mixture. (You don't need to cover every square inch with crust.) Bake until brown and bubbly around the edges, about 20 minutes. ### Oyster Shortcake _This recipe makes use of an old-fashioned thickening technique—beurre manié, where butter is rubbed into flour, then whisked into hot liquids. The dish is sort of an oyster stew on toast. It calls for just four biscuits, but since there's no such thing as extra biscuits, the recipe here, which makes about twelve, will do nicely. You can also freeze the unused dough and use it for fried croutons. I used leftover biscuits for this, because we often have them at work_. MAKES 4 SERVINGS **FOR THE BISCUITS** **2 cups self-rising flour** **4 tablespoons cold unsalted butter, diced** **3/4 cup buttermilk** **FOR THE OYSTERS** **1 pint shucked oysters, drained, juice reserved** **1 1/2 cups milk** **3 teaspoons unsalted butter** **3 tablespoons all-purpose flour** **Salt and cayenne pepper, to taste** Preheat the oven to 375°. To make the biscuits, cut the butter into the flour until completely blended. I use two forks. Stir in just enough buttermilk to form a dough that is a little sticky but pulls away from the mixing bowl (you may need more or less than 3/4 cup). Turn out the dough onto a floured surface and roll out to 1/2 inch with a floured rolling pin. Cut with a biscuit cutter. Put on a baking sheet, close together but not touching, and bake for 15 minutes or until pretty and brown. Remove from the oven and turn down the oven temperature to 350°. Return the biscuits to the oven to keep warm. To prepare the oysters, in a small saucepan, scald the oyster juice and set aside. In another small saucepan, set the milk to simmer. In a bowl, using your fingers, rub the butter and flour together to form a paste. Whisk the mixture into the simmering milk and cook until it begins to thicken. 
Stir 1/4 cup of the milk mixture into the oyster juice, then strain this back into the saucepan of milk. Continue to cook until rethickened. It will be a little thinner than gravy. Stir the oysters into the milk and cook just until they begin to curl. Split 4 of the biscuits and put on individual plates. Layer the oysters and sauce between the biscuit slices and serve at once. ### My Grandmother's Crab Pilaf _I have to say first of all that I have absolutely no memory of having been served this. I found it in my father's recipe collection and altered it a tiny bit. The inclusion of salted peanuts made it irresistible_. MAKES 4 SERVINGS **6 tablespoons vegetable oil** **1 small onion, halved and thinly sliced** **1 medium green bell pepper, halved, seeded, and cut into 1/4-inch strips** **2 celery stalks, thinly sliced** **3 cups cold cooked rice** **1/2 pound fresh special crabmeat, picked over for shell** **1/4 cup salted peanuts** **Soy sauce** Heat 4 tablespoons of the oil in a large frying pan. Add the vegetables and sauté until cooked but still a bit crisp, 5–6 minutes. Remove them from the pan. Heat the rest of the oil in the pan over medium heat. Add the rice and cook, stirring, until the grains separate. Return the vegetables to the pan and toss with the rice, followed by the crab and the peanuts. Cook to heat through. Serve in soup bowls accompanied by soy sauce. ### Oyster Loaf, or Bread Box _This is a very peculiar recipe, and if it hadn't shown up in seafood cookbooks so often, I probably wouldn't have included it. I found many variations from all over. The common thread is the loaf of bread. When I first saw this mentioned, I assumed that it would be something like a meatloaf. This is nothing at all like a meatloaf_.
MAKES 6–8 SERVINGS **1 loaf unsliced bread** **Melted unsalted butter** **1 pint (2 dozen or so) Fried Oysters (page 56)** **1 large dill pickle, thinly sliced into rounds** **1 lemon, thinly sliced into rounds** **1/2 cup ketchup** Preheat the oven to 350°. Slice the top off of the loaf of bread. Cut down into the loaf about half an inch in from all four sides, to within half an inch of the bottom, being careful not to pierce through to the outside. Remove the center of the loaf and cut off a 1/2-inch slice lengthwise. Paint both sides of the slice and the loaf, inside and out, with butter. Place on a cookie sheet and bake until lightly browned and crispy, 12–15 minutes. Turn the oven down to 300°. Arrange half each of the oysters, pickles, and lemon in the bottom of the loaf. Sprinkle with half of the ketchup. Place the toasted slice on top and then another oyster, pickle, lemon, and ketchup layer. Return the top of the loaf to its place. Bake for 30 minutes. I'm not sure how you are supposed to serve and eat this. In my kitchen, we just stood around it and went at it with spoons. ### Crab Soufflé _Finally a recipe containing canned soup! This recipe was on the back of an envelope that I found in a second-hand cookbook that someone brought me from Louisiana a million years ago. This is actually a rather elegant little dish. You need to serve it the minute it comes out of the oven as it becomes greasy with age. I'm always interested when recipes call for shortening or "cooking oil" instead of butter or lard. There was no mention of either what to cook this in or at what temperature. I used a porcelain pâté terrine that I buttered and lined with parchment because that is what I had on hand. The first time I made this, I set the terrine in a shallow pan of hot water that I had waiting in the oven. This is how one ordinarily cooks soufflés, but it refused to get done at the center and got too brown on top. A soufflé dish would obviously be appropriate.
The batter has a volume of around five cups. This is so rich it doesn't really need any sauce, but a little sour cream might be nice on top of each serving_. MAKES 4 OR 5 SERVINGS **4 eggs, separated, at room temperature** **1 cup milk** **1/4 teaspoon salt** **1/8 teaspoon cayenne pepper** **1 cup canned tomato soup** **2 tablespoons cooking oil** **1 small yellow onion, finely chopped** **1 teaspoon curry powder** **1 tablespoon all-purpose flour** **1 cup cooked rice** **1 cup fresh crabmeat, picked over for shell** **A splash of cider vinegar, a pinch of salt, and a speck of cream of tartar** **Sour cream (optional)** Preheat the oven to 375°. Butter a 6-cup soufflé dish; set aside. In a large mixing bowl, beat the egg yolks, milk, seasonings, and tomato soup together well; set aside. In a large sauté pan cook the onion in the oil until soft but not brown. Whisk in the curry and allow to cook for a minute. Whisk in the flour, stirring until all of the lumps are gone (this is easier if you shake the flour into the pan with a kitchen sieve). Slowly stir the egg yolk mixture into the onions. Lower the heat and stir constantly until the liquid begins to thicken. Fold in the rice and the crab; set aside away from the heat. Rinse the mixing bowl with the vinegar and dust it with salt. Swirl the bowl around and dump it out in the sink. Add the egg whites and the cream of tartar and beat to soft peaks. Fold the egg whites by thirds into the crab and transfer to the baking dish. Bake for 40 minutes or until the top is a little brown and a knife blade inserted into the center comes out clean. Serve at once. ### Baked Crab Sandwiches _This recipe comes from my cousin Linda Morris. I discovered it while searching through a cookbook called_ Pass the Plate, _published by Christ Episcopal Church in my hometown of New Bern. I say discovered because since Linda is not an Episcopalian, I hadn't expected to find her among the contributors. 
Upon further reading, I realized that the women of that church had invited other congregations in town to join them, making the book a New Bern–wide effort. The results of the recipe vary according to the type of bread used. The first time I tried this I had a thin sliced brioche at work, which proved to be too small for the amount of custard in the original text. I've altered the instructions a little_. MAKES 8–10 SERVINGS **12 slices sandwich bread, trimmed of crust if you like, buttered** **1/2 pound fresh special crabmeat, picked over for shell** **1/2 cup grated sharp cheddar cheese** **3 cups milk** **1/2 teaspoon curry powder** **4 eggs, beaten** **1/2 teaspoon salt** Preheat the oven to 325°. Place 6 of the slices of bread, buttered side up, in the bottom of a 9 × 12-inch baking dish. Spread the crabmeat evenly on top. Cover with the rest of the bread, again buttered side up. Distribute the cheese over the bread. In a medium bowl, beat the curry into the milk until there are no lumps, then stir in the eggs and salt. Carefully pour half of this over the sandwiches. Wait 5 minutes for it to be absorbed. The bread should absorb all that you add and look wet but not be swimming (you might not need all of the rest of the egg mixture). Cover and refrigerate for at least 2 hours. (This can be done a day ahead and allowed to sit overnight.) Bake for 45 minutes or until the egg is set at the center and the top is pretty and brown. ### Oyster Rarebit _Rarebits are really just melted cheese poured over toast. My aunt Hi used to make what she called Welsh rarebit from time to time. They did it to amuse me as much as anything, because I always called it "rabbit." I remember that she and her husband were unenthusiastic about the dish. I was little then, and although I knew that people ate rabbits, I was relieved that there were none in this. In this version the bunny has been replaced with sea creatures_. 
MAKES 4 SERVINGS **1 cup shucked oysters with their juice** **2 eggs, well beaten** **Scant 1/2 cup heavy cream** **1/2 pound grated sharp cheddar cheese** **1/8 teaspoon cayenne** **Freshly grated nutmeg** **Salt and black pepper, to taste** **3 slices sandwich bread, toasted, buttered, and cut into triangles** Strain the oysters and beat their juice thoroughly into the eggs. Heat the cream in a saucepan, then whisk in the cheese until melted and thoroughly blended. (This is a little tricky, but just keep the cheese moving.) Then quickly add the egg mixture, followed by the cayenne and a few scrapes of nutmeg. You are just thickening the sauce not making scrambled eggs, so keep the heat low. Add the oysters and juice and cook just until the oysters begin to curl, about 3 minutes max. If the oysters have made the sauce too runny, fish them out and reduce the sauce without them for a minute. Taste for salt and pepper. Some cheese is saltier than others. Have the toast arranged on four plates and divide the rarebit among them. Serve at once. ### Frances and Ed Mayes's Spaghetti with Lemon and Crab _I love the way Italy cooks. It's simple and often also quick. There is a lot of truth to that old saw about it being difficult to get a bad meal in Italy. In 2012 Ed and Frances Mayes published their perfectly wonderful_ Tuscan Sun Cookbook, _and they have kindly allowed me to reprise this recipe from it. In an aside they suggest that without the crab, this would be "perfect for the day after a crippling feast." Perhaps so, but try it with the crab first_. SERVES 4–6 **1 pound spaghetti** **2 tablespoons extra-virgin olive oil** **1 pound fresh crabmeat** **1/4 cup white wine** **1/2 cup lemon juice** **1/2 teaspoon salt** **1/4 teaspoon black pepper** **1/2 cup grated Parmigiano-Reggiano** **1/2 cup chopped flat-leaf parsley** Cook the pasta in boiling salted water as directed. 
While the pasta cooks, heat the olive oil in a large skillet over low heat and cook the crabmeat just to warm it. Add the wine, bring it quickly to a boil, then immediately turn the heat back to low. Stir in the lemon juice and the seasonings. Drain the pasta, but reserve a little of the water. Pour the pasta into the pan with the crab. Toss in 1/4 cup of the Parmigiano and the parsley. If the pasta needs more liquid, add a little of the reserved pasta water. Serve in bowls, sprinkling the remaining cheese on top. ### Green Cabbage Slaw _I am a big fan of slaw and have many favorite recipes. It is for this reason that I have dubbed this one Green Cabbage Slaw. I also make carrot slaw or my grandmother's mustard slaw, and at Crook's Corner we serve a red cabbage slaw. We developed this at work at about the same time as I was rediscovering grated onions, a useful but overlooked ingredient_. MAKES ABOUT 2 QUARTS **2 heads green cabbage, trimmed, cored, and quartered** **1 medium onion, peeled and quartered** **2 green bell peppers, stemmed, seeded, and quartered** **6 carrots, peeled** **1/2 tablespoon salt** **1/4 cup mayonnaise** **1 cup cider vinegar** **1 teaspoon dry mustard** **1/2 cup sugar** **Salt and black pepper, to taste** Grate the cabbage, onions, bell peppers, and carrots in a food processor using the coarser blade. Mix all together, squeezing as you do. Stir in the salt, which will accelerate juicing, and place the vegetables in a sieve to drain for 20 minutes. Meanwhile, in a small bowl, whisk together the mayonnaise, vinegar, mustard, and sugar. Squeeze the vegetables really well one last time and transfer to a bowl. Add the dressing and combine well. Season with salt and pepper. Remember that the salt you added earlier will still be present. Chill. ### Two Kinds of Crab Cakes There seem to be two schools of thought on crab cakes. One says they should be bound with beaten egg, while the other says they should be bound, oddly to me, with mayonnaise. 
I say oddly not because I don't approve of mayonnaise but because it doesn't seem like it would work. After some poking around, I decided to present one traditional version and one that is drifting toward Asia, and lo, both call for mayonnaise. My Cucumber Relish is an excellent dressing for both. If it appeals to you, make it before you begin the crab cakes, so that it can cure in the refrigerator and will be ready when the crab cakes are done. ### Indochinese Crab Cakes _I found that frying these in daringly hot oil makes a great crust. If this spooks you, start the cooking in a frying pan and finish the cooking in a hot oven. The French brought mayonnaise to Indochina_. MAKES 5 OR 6 CRAB CAKES **Cucumber Relish (page 97)** **3 tablespoons mayonnaise** **1 tablespoon whole-grain mustard** **2 teaspoons soy sauce** **1 teaspoon fresh lemon juice** **1 tablespoon chopped fresh parsley** **1 teaspoon grated fresh ginger** **1 teaspoon salt** **1/2 teaspoon black pepper** **Pinch or 2 of cayenne pepper** **1/2 cup bread crumbs, or more as needed** **1 pound fresh special crabmeat (but use a more expensive kind if you like), picked over for shell** **1/2 cup clarified butter or cooking oil for frying (you might need more or less depending on the size of your pan)** **Toasted sesame seeds (optional)** In a large bowl, mix the mayonnaise, mustard, and soy sauce together thoroughly. Add the lemon juice, all of the seasonings, and the bread crumbs. If the mixture seems suspiciously runny, add more crumbs. (The liquid in the ginger will be a factor in this. The mixture needs to be sticky enough to bind the crab.) Carefully fold in the crab and form into 5 or 6 cakes. Chill for an hour to let them set up. Fry the crab cakes in the butter or oil for about 5 minutes on each side or until heated through. These won't brown as nicely as the other recipe. Serve with the Cucumber Relish and sprinkled with toasted sesame seeds, if using. 
### More Traditional Crab Cakes _When I was growing up I considered these the fried version of deviled crab. Recipes for both contain many of the same ingredients. Frying, of course, produces a much better crust than baking can_. MAKES 5 OR 6 CRAB CAKES **Cucumber Relish (page 97)** **4 tablespoons unsalted butter, softened a little** **1/3 cup chopped onion** **3 slices dry, stale white bread, roughly crumbled** **2 eggs, beaten** **3 tablespoons mayonnaise** **1 tablespoon yellow mustard** **1 tablespoon Worcestershire sauce** **1/2 teaspoon salt** **1/2 cup clarified butter or cooking oil for frying (you might need more or less depending on the size of your pan)** **1 pound fresh special crabmeat (but use a more expensive kind if you like), picked over for shell** In a large bowl, mix together the butter, onions, and bread. Fold in the eggs, mayonnaise, mustard, and Worcestershire sauce. Carefully fold in the crab and salt, keeping the meat intact as much as possible. Let the mixture rest for half an hour. Form the crab mixture into 5 or 6 cakes and chill for an hour so they will set up. Fry in the butter or oil until pretty and brown, about 5 minutes on each side. Serve with the Cucumber Relish. ### Cucumber Relish _This is a riff on something else I picked up in Mexico—a sort of cucumber slaw. Although it's not quite as clingy as tartar sauce or remoulade, it is a refreshing alternative to those mayonnaise-based sauces. Vary the amount of jalapeño to suit yourself_. MAKES ABOUT 2 CUPS **1 medium cucumber** **1/2 cup minced red onions** **1 tablespoon minced jalapeño** **1/4 teaspoon salt** **1/4 teaspoon black pepper** **1 tablespoon chopped fresh parsley** **1 tablespoon olive oil** **2 teaspoons cider vinegar** If the peel on the cucumber is nice, leave it. Cut it into matchsticks and toss it with the other vegetables and the salt and pepper. Add the rest of the ingredients and chill for at least 20 minutes.
Use as a sauce with crab cakes or other fried seafood. ## Drinks Indulge me here. When I first considered writing this book, almost the first thing that came to mind was a michelada with raw oysters that I had one glorious afternoon in Mexico City. A michelada is a drink. Trips to Mexico often have glorious moments, and this particular day began on top of the Pyramid of the Sun in Teotihuacan and ended in an evening of restaurant hopping in the neighborhood of Tlaxapana. As luck would have it, I came across the odd oyster juice tonic later in a pamphlet from before World War I, and thus a chapter on drinks was born. ### Oyster Juice _I found this odd recipe for a tonic in_ Milady's Own Book, _a homemaker's pamphlet from the early nineteenth century. I didn't actually try to make this, but I couldn't resist passing it along_. "Oyster Juice is an excellent stimulant, but in itself contains little real nourishment. In preparing the oysters, care must be taken to scrub the shells absolutely clean. Place them in an enamel or earthenware dish, never in a tin one, with half a cup of cold water and put it in a hot oven. As soon as the shells open wide, take from the oven, drain all the juice from the shells and dish and strain into a cup that has been heated. Serve with long narrow slices of toast." ### Two Micheladas with Oysters _Cervezas preparadas_ , or beer cocktails, are an acquired taste. Then all of a sudden you love them. One summer when I was in college I worked in Montana. People there drank beer mixed with tomato juice. Couldn't stand it. In Britain later on I was introduced to the shandy. Yuck. Then came a cold michelada in a hot, dusty outdoor market with a bunch of amigos in the town of Celaya in central Mexico. The michelada is a Mexican drink that's made by mixing beer with all kinds of things and served in a big, salt-rimmed goblet over ice. Recipes vary. They all call for lime juice and hot sauce. 
I've seen them with soy sauce and Worcestershire sauce, and some with that vegetable extract called Maggi. A version with soy sauce capped off a pretty much perfect day in Mexico City. It was full of raw oysters. Of course, after I had had a few of these came the second thoughts about eating raw oysters of unknown origins in places with spotty public health systems. This often happens to me when I'm traveling, though, when I get sort of swept up in the moment. I once polished off a very rare steak in a tiny town in the Ecuadorian Andes and I survived that too. My friend Shannon Healy has a bar in downtown Durham, North Carolina, called Alley Twenty-Six. He kindly gave me an assist with this. The first recipe is for a standard version, the second is a fancier variation. They are both splendid with or without the oysters, but with them they make a sort of excellent liquid lunch. Modelo is the brand of Mexican beer much favored for michelada. Putting the Tabasco on top of this drink gives it a great nose, and the first sip's combination of salty rim and spicy burn makes the drink go down fast. Tajín seasoning is a Mexican spice powder with dehydrated lime juice available at most tiendas; the mildest spice level is best here. If you can't find the Tajín brand, there are other comparable ones. If handy, some good mescal makes the fancy version of this drink delightfully smoky. ### Michelada Tlaxapana MAKES 1 SERVING **Half a lime, cut in quarters** **Kosher salt** **Tajín seasoning for rim of glass** **1 (12-ounce) bottle Modelo Especial** **3/4 ounce tomato juice** **1/2 ounce (by volume, use jigger) Tajín powder** **1/2 ounce soy sauce** **1/2 ounce fresh lime juice (from the lime quarters)** **4 freshly shucked oysters, drained** **Tabasco sauce** Moisten the lip of a chilled pint glass with a lime quarter. Roll the lip of the glass through a mixture of half kosher salt and half Tajín powder to coat. Fill the glass two-thirds of the way full with ice. 
Add 2 ounces of the beer. Add the tomato juice, Tajín powder, soy sauce, and lime juice. Stir in the oysters. Top with the remaining beer. Stir again. Float 3 dashes of Tabasco (or more, to taste) on top. ### Michelada Tlaxapana Obscura MAKES 1 SERVING **Half a lime, cut in quarters** **Kosher salt** **1/2 ounce (by volume, use jigger) Tajín seasoning, plus extra for the rim of the glass** **Several dashes of cinnamon** **1 (12-ounce) bottle Negra Modelo or other dark beer** **3/4 ounce tomato juice** **1/2 ounce Worcestershire sauce** **1/2 ounce fresh lime juice (from the lime quarters)** **1/2 ounce mescal (optional)** **4 freshly shucked oysters, drained** **Tabasco sauce** Moisten the lip of a chilled pint glass with a lime quarter. Roll the lip of the glass through a mixture of half kosher salt and half Tajín powder and a dash of cinnamon to coat. Fill the glass two-thirds of the way full with ice. Add 2 ounces of the beer. Add the tomato juice, Tajín powder, a dash or two of cinnamon, the Worcestershire sauce, the lime juice, and the mescal, if using. Stir in the oysters. Top with the remaining beer. Stir again. Float 3 dashes of Tabasco (or more, to taste) on top. ## Acknowledgments I always say that the best thing about the food business is the collection of good friends and colleagues that comes along with it. I owe thanks to several of these folks, including Louis Osteen, Jean Anderson, Shannon Healy, and Ed and Frances Mayes, for allowing me to use their recipes. Thanks also to my cousin Linda Morris for her delicious Baked Crab Sandwiches recipe, to the membership of the Junior Auxiliary of Brookhaven, Mississippi, for several recipes and a sort of general inspiration, and to the Episcopal Diocese of East Carolina, which allowed me to quote from _Pass the Plate_, their splendid collection from the Episcopal Churchwomen of Christ Episcopal Church in New Bern, N.C.
I'll be forever grateful to all of those people who contributed to church and community cookbooks all around the South. Thanks are due as well to Elaine Maisner, Alison Shay, and Mary Caviness of the University of North Carolina Press for all of their help, and I might add, patience, with this endeavor. Thanks also to Katharine Walton, my friend and agent, for constant encouragement. Finally, thanks to all of the cooks of my childhood who trained me to expect good food every time I sit down at the table.

## Bibliography

Anderson, Jean. _The Food of Portugal_. New York: William Morrow, 1986.

Brookhaven Junior Auxiliary. _Cooks from Old Brook_. Brookhaven, Miss.: Brookhaven Junior Auxiliary, Inc., 1982.

Elie, Lolis Eric. _Treme: Stories and Recipes from the Heart of New Orleans_. San Francisco: Chronicle Books, 2013.

The Episcopal Churchwomen and Friends. _Pass the Plate_. Memphis: Wimmer Bros., 1981.

Mayes, Frances, and Edward Mayes. _The Tuscan Sun Cookbook: Recipes from Our Italian Kitchen_. New York: Clarkson Potter, 2012.

National Restaurant Association. _ServSafe CourseBook_. 5th ed. Upper Saddle River, N.J.: Prentice Hall, 2008.

Osteen, Louis. _Louis Osteen's Charleston Cuisine: Recipes from a Lowcountry Chef_. Chapel Hill, N.C.: Algonquin Books, 1999.

Root, Waverly. _The Encyclopedia of Food_. New York: Konecky & Konecky, 1980.

## Index

* Appetizers
  * Crab and Artichoke Dip
  * Crab Claws St. Charles
  * Crab Claws with Basquaise Sauce
  * Crabmeat Salsa
  * Crab-Stuffed Eggs
  * Deviled Crab Dip
  * Dot's Crab Dip
  * Fried Oysters
  * Oyster Fritters
  * Pickled Oysters
  * Roasted Beets
  * Traditional Oyster Stew
* Baked Crab Sandwiches
* Basic Cocktail Sauce
* Casseroles and Cobblers
  * Baked Crab Sandwiches
  * Crab Cobbler
  * Crab Soufflé
  * Oyster Dressing
  * Oyster Loaf, or Bread Box
  * Oyster Shortcake
* Cocktel
* Corinne Dunbar's Artichoke and Oyster Cocktail
* Corn and Crab Chowder
* Cornmeal Dumplings
* Crab and Artichoke Dip
* Crab and Oyster Gumbo
* Crab and Shrimp Calas with a Riff on Tartar Sauce
* Crab Aspic
* Crab Bisque
* Crab Claws St. Charles
* Crab Claws with Basquaise Sauce
* Crabes Farcis
* Crabmeat Cobbler
* Crabmeat Ravigotte
* Crabmeat Remoulade
* Crabmeat Salsa
* Crab Soufflé
* Crab-Stuffed Eggs
* Cucumber Relish
* Deviled Crab Dip
* Deviled Crabs
* Dot's Crab Dip
* Drinks
  * Michelada Tlaxapana
  * Michelada Tlaxapana Obscura
  * Oyster Juice
* First courses
  * Corinne Dunbar's Artichoke and Oyster Cocktail
  * Crab and Shrimp Calas with a Riff on Tartar Sauce
  * Crab Aspic
  * Crabmeat Ravigotte
  * Crabmeat Remoulade
  * Oyster Fritters
  * Oysters in Champagne
  * Roasted Oysters
* Frances and Ed Mayes's Spaghetti with Lemon and Crab
* Fried Oysters
* Green Cabbage Slaw
* Hard-Crab Stew
* Indochinese Crab Cakes
* Jean Anderson's Stuffed Crab au Gratin (Santola Recheada e Gratinada)
* Louis Osteen's Brown Oyster Stew
* Main dishes
  * Baked Crab Sandwiches
  * Crab and Oyster Gumbo
  * Crabes Farcis
  * Crabmeat Cobbler
  * Crab Soufflé
  * Deviled Crabs
  * Frances and Ed Mayes's Spaghetti with Lemon and Crab
  * Fried Oysters
  * Indochinese Crab Cakes
  * Jean Anderson's Stuffed Crab au Gratin (Santola Recheada e Gratinada)
  * My Grandmother's Crab Pilaf
  * More Traditional Crab Cakes
  * Oyster Loaf, or Bread Box
  * Oyster Rarebit
  * Oyster Shortcake
  * Roasted Oysters
  * Soft-Shell Crabs
  * Stuffed Crabs
* Michelada Tlaxapana
* Michelada Tlaxapana Obscura
* More Traditional Crab Cakes
* My Grandmother's Crab Pilaf
* Oyster Dressing
* Oyster Fritters
* Oyster Juice
* Oyster Loaf, or Bread Box
* Oyster Rarebit
* Oyster Shortcake
* Oysters in Champagne
* Pasta
  * Frances and Ed Mayes's Spaghetti with Lemon and Crab
* Pickled Oysters
* Roasted Beets
* Roasted Oysters
* Salads, Relishes, and Slaw
  * Cucumber Relish
  * Crabmeat Ravigotte
  * Crabmeat Remoulade
  * Roasted Beets
  * Green Cabbage Slaw
* Salsas and Dips
  * Crab and Artichoke Dip
  * Crabmeat Salsa
  * Deviled Crab Dip
  * Dot's Crab Dip
* Sauces
  * Basic Cocktail Sauce
  * Basquaise Sauce
  * A Riff on Tartar Sauce
  * Tartar Sauce
* Side dishes
  * Cornmeal Dumplings
  * Cucumber Relish
  * Green Cabbage Slaw
  * Oyster Dressing
  * Soft-Shell Crabs
* Soupe Hendaye
* Soups and Stews
  * Cocktel
  * Corn and Crab Chowder
  * Crab and Oyster Gumbo
  * Crab Bisque
  * Hard-Crab Stew
  * Louis Osteen's Brown Oyster Stew
  * Soupe Hendaye
  * Traditional Oyster Stew
* Stuffed Crabs
  * Crabes Farcis
  * Deviled Crabs
  * Jean Anderson's Stuffed Crab au Gratin (Santola Recheada e Gratinada)
  * Stuffed Crabs
* Tartar Sauce
* Traditional Oyster Stew
* Two Micheladas with Oysters

## Contents

1. Introduction
2. Hors d'Oeuvres
3. Deviled Crab Dip
4. Crab and Artichoke Dip
5. Dot's Crab Dip
6. Pickled Oysters
7. Roasted Beets
8. Crab-Stuffed Eggs
9. Two Crab-Claw Cocktails
10. Crab Claws with Basquaise Sauce
11. Crab Claws St. Charles
12. Soups and Stews
13. Crab Bisque
14. Soupe Hendaye
15. Corn and Crab Chowder
16. Louis Osteen's Brown Oyster Stew
17. Cocktel
18. Sit-Down First Courses
19. Corinne Dunbar's Artichoke and Oyster Cocktail
20. Oysters in Champagne
21. Crab Aspic
22. Oyster Fritters
23. Tartar Sauce
24. Crabmeat Salsa
25. Crab and Shrimp Calas with a Riff on Tartar Sauce
26. Crabmeat Remoulade
27. Crabmeat Ravigotte
28. Either/Or
29. Recipes that can be an Appetizer or a Main Course
30. Traditional Oyster Stew
31. Fried Oysters
32. Out in the Yard
33. Roasted Oysters
34. Basic Cocktail Sauce
35. Hard-Crab Stew
36. Cornmeal Dumplings
37. Dinnertime
38. Oyster Dressing
39. Deviled Crabs
40. Jean Anderson's Stuffed Crab au Gratin (Santola Recheada e Gratinada)
41. Crab and Oyster Gumbo
42. Stuffed Crabs
43. Crabes Farcis
44. Soft-Shell Crabs
45. Crabmeat Cobbler
46. Oyster Shortcake
47. My Grandmother's Crab Pilaf
48. Oyster Loaf, or Bread Box
49. Crab Soufflé
50. Baked Crab Sandwiches
51. Oyster Rarebit
52. Frances and Ed Mayes's Spaghetti with Lemon and Crab
53. Green Cabbage Slaw
54. Two Kinds of Crab Cakes
55. Indochinese Crab Cakes
56. More Traditional Crab Cakes
57. Cucumber Relish
58. Drinks
59. Oyster Juice
60. Two Micheladas with Oysters
61. Michelada Tlaxapana
62. Michelada Tlaxapana Obscura
63. Acknowledgments
64. Bibliography
65. Index
\section{INTRODUCTION}

Networks describe the relationships between entities, such as social relationships within a population, interactions between biological proteins, and co-occurrence relationships within words. Dynamic networks are very common in many application domains, with entities and connections changing over time, as in email networks and instant messaging networks. Analyzing these networks can provide insight into evolution patterns in these domains. Recently, dynamic network analysis has received much attention, and many embedding-based approaches have been proposed for temporal link prediction~\cite{CTDNE}, node prediction~\cite{triad}, and multi-label node classification~\cite{DepthLGP}. Thus, utilizing embeddings to investigate network evolution is a promising research direction.

\begin{figure}
\centering
\includegraphics [width=8.4cm]{instroduction.pdf}
\caption{An illustration of a social network with the addition and deletion of nodes and edges over time. For example, node 7 disappears at $t = T$, and node 8 appears at $t = T$. The edge between node 1 and node 2 is deleted at $t = 2$, and a new connection between node 1 and node 5 is generated.}
\label{fig:example}
\end{figure}

As illustrated in Figure~\ref{fig:example}, a social network evolves with the addition and deletion of nodes and edges over time. Each node represents a person, and each edge indicates one kind of event between two persons, such as friendship, contacts, and emails. The weight of an edge denotes how strong this relationship is or the frequency of communication (e.g., the number of emails or calls). The two most important factors in dynamic networks are the proximity between nodes and the temporal continuity of stable nodes over time, both of which should be preserved in network embeddings for effective network evolution analysis.
For instance, node 1 and node 3 share the same neighbor nodes at $t = 1$ in Figure~\ref{fig:example}, but this relation weakens at $t = 2$ (i.e., proximity similarity). The neighbor nodes of node 4 at $t = 1$ and $t = 2$ are similar; thus, the embedded vectors of node 4 at the two timesteps should be nearly the same (i.e., temporal similarity).

Previous methods for network embeddings, such as matrix factorization~\cite{GraRep}, random walks~\cite{deepwalk,node2vec,struc2vec}, deep neural networks~\cite{DNNF,SDNE,CRDM,ANE}, and many others~\cite{LINE,RaRE}, generally focus on static networks and preserve the proximity between nodes. These methods fail to capture the temporal information in dynamic networks. Recently, continuous-time dynamic network embeddings~\cite{CTDNE} aim to learn continuous changes in temporal networks via temporal walks, a walking strategy that traverses edges in time-ascending order. However, this method compresses all network information into a single embedding and cannot effectively capture the temporal changes of nodes over time, which is needed for tasks such as evolving node detection. DynamicTriad~\cite{triad} seeks to train all graphs in a dynamic network jointly into a sequence of embeddings by imposing a \textit{triad} constraint. However, the triad is a local structure that describes the relation among at most three nodes and can hardly capture high-order proximities between nodes.

In this paper, we attempt to learn discrete-time network embeddings for dynamic networks, i.e., each node has one low-dimensional vector per timestep, and both the proximity and temporal continuity of each node are preserved in the dynamic network embeddings. A direct method to solve this problem is to learn each graph separately by static network embedding methods, and then align all network embeddings into the same vector space by alignment methods~\cite{DWER}.
However, it is challenging to align these embeddings due to nonlinear movements of evolving nodes, and such alignment errors would reduce the performance of downstream tasks. Thus, we propose a novel dynamic network embedding method to learn a sequence of graphs in a dynamic network and generate continuous embedded vectors for stable nodes without embedding alignments. Dynamic network embeddings can capture neighbor node changes over time for evolution analysis. In summary, this paper makes the following contributions: \begin{itemize} \item We propose a general dynamic network embedding method that incorporates random walk on dynamic networks into Bernoulli embeddings. \item Our method is more effective in link prediction when compared with other state-of-the-art techniques. Besides, we generate artificial dynamic networks to verify that our method captures the temporal evolution of nodes, and it achieves the best performance. \item Our method can be used to analyze and visualize the trajectories of evolving nodes while preserving the temporal continuity of stable nodes over time. \end{itemize} \section{RELATED WORK} With the rise of social networks (e.g., Facebook and Twitter) and big data (e.g., millions or billions of interaction records), network embedding methods have received considerable attention from both industry and academia. The key point of network embeddings is to learn a low-dimensional vector representation for each node to preserve the proximity, which can be easily used for several application tasks, for instance, node classification~\cite{deepwalk}, link prediction~\cite{ATPG}, node clustering~\cite{CPNE}, anomaly detection~\cite{AEAT}, and collaboration prediction~\cite{TGAP}. Some network embedding methods are based on matrix factorization~\cite{GraRep}, which constructs a $k$-step transition probability matrix to measure the node similarity at different scales.
Inspired by the strong performance of word2vec in natural language processing, some researchers incorporated random walk into the skip-gram model~\cite{EEOW} to learn network embeddings, such as DeepWalk~\cite{deepwalk} and node2vec~\cite{node2vec}. These methods use random walk to produce a series of node sequences and apply the skip-gram model to learn network representations. Struc2vec~\cite{struc2vec} focuses on the structural identities of nodes and constructs a weighted multilayer graph for random walk to capture the hierarchical structural similarity. Recently, some embedding methods based on deep neural networks have received considerable attention~\cite{SDNE,DNNF,CRDM} for learning nonlinear mapping functions. To enhance the robustness of representations, Dai et al. employed a generative adversarial network to capture latent features in network embeddings~\cite{ANE}. Most previous network embedding methods focus only on static networks, but dynamic network embedding has recently become an active research topic. It is related to dynamic latent space models, such as a dynamic model accounting for friendships drifting over time~\cite{DSNA} and a case-control approximate likelihood~\cite{FIFT}. CTDNE~\cite{CTDNE} incorporates temporal information into existing network embedding methods based on random walk by introducing a time-series order. TNE~\cite{TNE} is a discrete-time dynamic network embedding method based on matrix factorization. DynGEM~\cite{DynGEM} is based on deep autoencoders combined with a layer expansion to generate embeddings of a growing network. DynamicTriad~\cite{triad} focuses on the local structure called triad to learn the proximity information and evolution patterns. TIMERS~\cite{TIMERS} uses an SVD model to learn dynamic network embeddings incrementally based on the initialization of the previous graph. DepthLGP~\cite{DepthLGP} tackles the issue of updating out-of-sample nodes into network embeddings by combining a probabilistic model with deep learning.
Previous methods evaluate network embeddings only on static tasks, and the trajectories of evolving nodes have not received much attention. Therefore, one of our evaluations is evolving node detection, and we further visualize the trajectories of evolving nodes in the context of stable nodes for evolution pattern analysis. \section{PROBLEM DEFINITION} In this paper, we seek to solve the proximity and temporal representation problem of dynamic networks, i.e., each node has one embedded vector at each timestep. Recently, the exploration of word meaning evolution in natural language processing has received much attention, and the key is to understand how words change their meanings over time and mine the latent cultural evolution~\cite{semanticsurvey}. Kulkarni et al. proposed an insightful approach of aligning all word embeddings at different timesteps into one vector space before semantic shift analysis~\cite{kulkarni}. Instead of a linear transformation for the alignment, Eger and Mehler presented second-order embeddings to compare the difference of word meanings~\cite{eger}. Moreover, it was shown in \cite{bamler} and \cite{yao} that we can learn diachronic word embeddings in the same vector space jointly. Thus the alignment of embeddings across timesteps is simultaneous and accurate. Inspired by these diachronic word embeddings, we propose a novel method to generate embeddings for dynamic networks. The definition of dynamic networks is given as follows: \begin{theorem}[Dynamic Networks] A dynamic network is a series of graphs $\Gamma=\{G_1,...,G_T\}$ and $G_t=(V_t,E_t)$, where $T$ is the number of graphs, $V_t$ is a node set and $E_t$ includes all temporal edges within the timespan $[S_t,S_{t+1}]$. Each $e_i=(u,v,s_i)\in E_t$ is a temporal edge between the node $u\in V_t$ and the node $v \in V_t$ at the timestamp $s_i\in[S_t,S_{t+1}]$.
\end{theorem} A dynamic network can be constructed from temporal events, and different construction methods may have different applications, as discussed in Section~\ref{sec:construction}. Our goal is to learn dynamic network embeddings $M_1,...,M_T$ and $M_t\in \mathbb{R}^{|V_t|\times D}$, where $|V_t|$ is the number of nodes at timestep $t$ and $D$ is the dimension of embeddings. Thus, the concept of dynamic network embeddings is defined formally as follows: \begin{theorem}[Dynamic Network Embeddings] Given a dynamic network $\Gamma=\{G_1,...,G_T\}$, dynamic network embeddings aim to project a node $g\in V_t$ into a low-dimensional vector space by a mapping function $\textit{f}:g\mapsto y_g^{(t)}\in \mathbb{R}^D, D\ll\max|V_t|,t\in[1,T]$. \end{theorem} Thus, there is an embedding matrix $M_t\in \mathbb{R}^{|V_t|\times D}$ to represent the proximity and temporal properties for each graph $G_t$. The dynamic network embeddings generally require the following characteristics: \begin{itemize} \item \textbf{Proximity Preservation}. The embeddings should preserve the proximity between nodes, i.e., if a node $u$ and a node $v$ have similar neighbor nodes at timestep $t$, $y_u^{(t)}$ and $y_v^{(t)}$ should be located nearby in the vector space. \item \textbf{Temporal Continuity}. The embeddings should keep the temporal similarity of stable nodes, i.e., if a node $u$ has similar neighbors at timestep $t$ and $t+1$, $y_u^{(t)}$ and $y_u^{(t+1)}$ should be located nearby in the vector space. \item \textbf{Dimension Reduction}. Although dynamic networks can be complex with thousands of nodes and temporal edges, the embeddings should be low-dimensional, i.e., $D \ll \max|V_t|$. Therefore, the embeddings can be effectively applied to downstream machine learning tasks. \end{itemize} \section{PROPOSED METHOD} Our method has three main steps. Firstly, we construct a dynamic network from temporal events.
Then we apply random walk to generate a node sequence matrix for each graph to keep the local proximity of each node at each timestep. Finally, we learn node representations from all node sequences of all timesteps jointly based on Bernoulli embeddings, as shown in Algorithm~\ref{alg1}. \subsection{Dynamic Network Construction} \label{sec:construction} The relationships between entities are generally described by timestamped events in real-world datasets, such as emails, calls, and interaction records. We first need to transform these temporal events into a dynamic network by constructing a sequence of graphs before learning its dynamic network embeddings. One strategy is the fixed time interval $\omega$ (e.g., hours, days, and weeks) for each timestep. Thus, each graph $G_t$ has a time window $[S_t, S_t + \omega)$ and $S_t$ is the earliest timestamp of timestep $t$. The temporal edge set of timestep $t$ is \begin{equation} E_t=\{e_i|s_i\in[S_t,S_t+\omega)\}. \end{equation} Since the events may not be evenly distributed over time, the graphs would contain different numbers of events. For example, one graph may contain one thousand temporal edges, while another graph may only have one hundred temporal edges. The dynamic network embeddings learned from these graphs with non-uniform events may have a negative impact on downstream tasks, especially for sparse graphs. Thus, it would be better to choose different time intervals for different timesteps, and construct a dynamic network with a fixed number of events $\varepsilon$ (the second strategy). The events can be generally described as follows (the number of events is $N$): \begin{equation} A=(e_1,e_2,...e_N). \end{equation} Each graph $G_t$ has an event window $(\varepsilon\cdot(t-1),\varepsilon\cdot t]\subset [1,N]$, and the edge set of timestep $t$ is defined as follows: \begin{equation} E_t=\{e_i|i\in(\varepsilon\cdot(t-1),\varepsilon\cdot t]\}.
\end{equation} In this way, each graph has nearly the same number of events. However, the drawback is that it breaks the uniform time interval, which may be important for some applications. We can select the fixed time interval or the fixed number of events depending on the task when constructing dynamic networks. \begin{figure} \centering \includegraphics [width=8cm]{generating_2.pdf} \caption{Dynamic network construction by the fixed time interval or the fixed number of events.} \label{fig:construction} \end{figure} The hard boundary may generate discontinuous network embeddings over time and lead to misleading evolution patterns. To address this issue, the window of each graph has an overlap $\gamma\in[0,1]$ with the previous graph, as shown in Figure~\ref{fig:construction}. For the fixed time interval, the time window $\omega>\Delta t$ (the non-overlapping time interval) and the overlap $\gamma$ is defined as $\gamma=(\omega-\Delta t)/\omega$. For the fixed number of events, the event window $\varepsilon>\Delta e$ (the non-overlapping events) and the overlap $\gamma$ is defined as $\gamma=(\varepsilon-\Delta e)/\varepsilon$. In our experiments, we choose to adjust $\varepsilon$ for the link prediction and evolving node detection tasks, and adjust $\omega$ for analyzing and visualizing the trajectories of evolving nodes.
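To make the fixed-event-count construction with overlap concrete, a minimal Python sketch is given below. The event-tuple format $(u, v, s_i)$ follows the definition above; the function and variable names are our own illustrative choices, not part of any released implementation:

```python
from collections import defaultdict

def build_snapshots(events, eps, gamma=0.5):
    """Split timestamped edges (u, v, s) into overlapping graph snapshots.

    Each snapshot holds `eps` events; consecutive windows share a fraction
    `gamma` of events, i.e. the window advances by delta_e = eps * (1 - gamma)
    events per timestep (the non-overlapping events).
    """
    events = sorted(events, key=lambda e: e[2])    # time-ascending order
    delta_e = max(1, int(eps * (1.0 - gamma)))     # non-overlapping events
    snapshots = []
    start = 0
    while start + eps <= len(events):
        window = events[start:start + eps]
        graph = defaultdict(set)                   # adjacency list
        for u, v, _ in window:
            graph[u].add(v)
            graph[v].add(u)                        # undirected graph
        snapshots.append(dict(graph))
        start += delta_e
    return snapshots
```

The fixed-time-interval strategy is analogous: the window would advance by $\Delta t$ time units instead of $\Delta e$ events.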
\begin{algorithm}[htb] \caption{Dynamic Network Embeddings} \label{alg1} \begin{algorithmic} \REQUIRE A dynamic network $\Gamma=\{G_1,...,G_T\}, G_t=(V_t, E_t)$\\ number of walks on each node $r$\\ length of each walk $L$\\ embedding size $D$\\ context size $cs$\\ negative sample size $ns$\\ \ENSURE Network embeddings $M_1,...,M_T$\\ Initialize embedding matrix $M_1,...,M_T$\\ Initialize context matrix $\alpha$\\ \FOR{$t=1$ to $T$} \FOR{each $g\in V_t$} \STATE $W_g^{(t)}=RandomWalk(G_t,g,r,L)$ \STATE DBE($y_g^{(t)}$,$\alpha_g$,$W_g^{(t)}$,$V_t$,$cs$,$ns$) \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Sequential Random Walks} We use random walk to capture the proximity information of each graph and generate sequences of walks as the input for network embedding learning. For each graph, we first choose a node as the root of a random walk; then the walk selects uniformly from the neighbors of the last visited node until the maximum length $L$ of a node sequence is reached. The walking process is repeated $r$ times for each node of each graph. Instead of repeating the walking process on each graph iteratively, our approach runs random walk on graphs in parallel to generate one node sequence matrix $W_t$ for each graph $G_t$. In natural language processing, the \textit{context} is composed of the words appearing to both the right and left of the given word. For a network, the \textit{context} means the neighbors of the node, and the nodes before and after the node in a node sequence are the context of the node. \subsection{Dynamic Bernoulli Embeddings} We introduce the dynamic Bernoulli embedding method for latent representation learning of a dynamic network (Algorithm~\ref{alg2}). A node sequence $w_j^{(t)}=\{v_1^{(t)},...,v_L^{(t)}\}$ is generated from the node $j$ at timestep $t$ by random walk.
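The RandomWalk step of Algorithm~\ref{alg1} is a standard uniform walk; a minimal sketch, assuming the graph is stored as an adjacency-list dict (the function name and early-stop behavior at isolated nodes are our assumptions):

```python
import random

def random_walk(graph, root, r, L, seed=None):
    """Run r uniform random walks of length L starting from `root`.

    `graph` maps each node to a list/set of neighbor nodes; a walk stops
    early if it reaches a node without neighbors.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(r):
        walk = [root]
        while len(walk) < L:
            neighbors = list(graph.get(walk[-1], ()))
            if not neighbors:
                break                      # dead end: no neighbors to visit
            walk.append(rng.choice(neighbors))
        walks.append(walk)
    return walks
```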
We define the context of $v_i^{(t)}$ in the node sequence as $c_i^{(t)}=\{v_{i-cs/2}^{(t)},...,v_{i+cs/2}^{(t)}\}\setminus v_i^{(t)}$, where the context size is $cs$. $x_{i}^{(t)}\in \mathbb{R}^{|V_t|}$ is an indicator vector in which each entry $x_{ig}^{(t)}$ is zero or one, and $x_{ig}^{(t)}=1$ means $v_i^{(t)}$ is the node $g$. We assign $x_{ig}^{(t)}$ a conditional distribution $P(x_{ig}^{(t)} | c_i^{(t)})$ based on a Bernoulli probability. Likewise, we define $\alpha_g\in \mathbb{R}^D$ as the context vector for the node $g$, and we update it in the process of dynamic embedding learning at each timestep. In contrast, we only update the embedding vector $y_g^{(t)}$ of $M_t$ when we train the node $v_i^{(t)}$ at timestep $t$. Finally, we define the representation function of $g$ and $c_i^{(t)}$ at timestep $t$ as follows: \begin{equation} \eta_{ig} = y_{g}^{(t)T}(\sum_{\substack{k\neq i \\ k\in[i-cs/2,i+cs/2]}}\sum_{h\in V_t}\alpha_{h} x_{kh}^{(t)}), \label{equ:etaig} \end{equation} where the sum $\sum_{h}\alpha_{h} x_{kh}^{(t)}$ is a filter to select the context vector of the node at position $k$. With the shared context vector $\alpha_g$ for all timesteps, we can train all embeddings in the same vector space and capture the evolution information across all timesteps. To initialize embeddings $M_1,...,M_T$ and $\alpha$, we use Gaussian priors with a diagonal covariance, and set parameters $\lambda_1$ and $\lambda$ according to \cite{DBE}. \begin{align} \alpha_g,y_g^{(1)}&\sim N(0,\lambda_1^{-1}I)\\ y_g^{(t)}&\sim N(y_g^{(t-1)},\lambda^{-1}I). \end{align} To achieve better performance, we can use embeddings pretrained by static methods as the initial value of $M_t$ to speed up the convergence of our model. To train dynamic network embeddings, we regularize the \textit{pseudo log likelihood} with the log priors, and then maximize it to obtain a pseudo MAP estimation.
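The score $\eta_{ig}$ of Equation~\ref{equ:etaig} reduces to a dot product between the node embedding $y_g^{(t)}$ and the sum of the context vectors around position $i$; a NumPy sketch (the array layout and indexing conventions are our assumptions):

```python
import numpy as np

def eta(y_t, alpha, walk, i, g, cs):
    """Score eta_ig: the embedding of node g at timestep t dotted with the
    summed context vectors of the cs/2 nodes on each side of position i.

    y_t:   (|V_t|, D) embedding matrix M_t
    alpha: (|V_t|, D) shared context matrix
    walk:  list of node ids forming one node sequence
    """
    lo, hi = max(0, i - cs // 2), min(len(walk), i + cs // 2 + 1)
    context = [walk[k] for k in range(lo, hi) if k != i]
    ctx_sum = alpha[context].sum(axis=0)   # sum of alpha_h over the context
    return float(y_t[g] @ ctx_sum)
```

In training, $\sigma(\eta_{ig})$ then gives the Bernoulli probability that position $i$ holds node $g$.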
Furthermore, we consider the contributions of positive context nodes and negative context nodes separately, and we define the two likelihoods $\mathcal{L}_{pos}$ and $\mathcal{L}_{neg}$ as follows: \begin{align} \label{equ:Lpos}\mathcal{L}_{pos} & =\sum_{i=1}^{L}\sum_{g\in{V_t}}x_{ig}^{(t)}\log \sigma(\eta_{ig}),\\ \label{equ:Lneg1} \ \mathcal{L}_{neg} & =\sum_{i=1}^{L}\sum_{g\in{V_t}}(1-x_{ig}^{(t)})\log(1-\sigma(\eta_{ig})), \end{align} where $\sigma(\cdot)$ is the sigmoid function to generate the probability. Since negative context nodes far outnumber positive context nodes, we use negative sampling to randomly select nodes other than node $g$ as negative context nodes to reduce the computation of $\mathcal{L}_{neg}$. We define the sampling distribution of negative context nodes as $\phi$, and Equation~\ref{equ:Lneg1} is redefined as: \begin{equation} \label{equ:Lneg2} \ \mathcal{L}_{neg} =\sum_{i=1}^{L}\sum_{g\sim\phi}\log(1-\sigma(\eta_{ig})). \end{equation} In this paper, we set the unigram distribution~\cite{unigram} raised to the power of 0.75 as $\phi$. We notice that $y$ and $\alpha$ are the terms of Equation~\ref{equ:etaig}, which is included in Equations~\ref{equ:Lpos} and~\ref{equ:Lneg2}. Thus, we set a prior on $\alpha$: \begin{equation} \mathcal{L}_{\alpha}=-\frac{\lambda_1}{2}\sum_{g\in{V_t}}\|\alpha_g\|^2. \end{equation} To penalize consecutive embedding vectors $y_g^{(t-1)}$ and $y_g^{(t)}$ for drifting too far apart from each other, we set the prior of $y$ as: \begin{equation} \label{equ:Ly}\mathcal{L}_{y}=-\frac{\lambda_1}{2}\sum_{g\in {V_t}}\|y_g^{(1)}\|^2-\frac{\lambda}{2}\sum_{\substack{g\in{V_t} \\ t\in[2,T]}}\|y_g^{(t)}-y_g^{(t-1)}\|^2. \end{equation} Finally, we group all likelihoods as the optimization objective: \begin{equation} \label{equ:Lya}\mathcal{L}(y,\alpha)=\mathcal{L}_{pos}+\mathcal{L}_{neg}+\mathcal{L}_{\alpha}+\mathcal{L}_{y}.
\end{equation} Overall, we use Stochastic Gradient Descent (SGD)~\cite{ASAM} to fit Equation~\ref{equ:Lya} with a proper learning rate. \begin{algorithm}[htb] \caption{DBE($y_g^{(t)}$,$\alpha_g$,$W_g^{(t)}$,$V_t$,$cs$,$ns$)} \label{alg2} \begin{algorithmic} \FOR{each $w_k^{(t)}\in W_g^{(t)}$} \FOR{each $v_i\in w_k^{(t)}$} \STATE $V_{v_i}=NegativeSampling(V_t,v_i,ns)$ \STATE Minimize loss $\mathcal{L}(y,\alpha)$ by SGD($y_g^{(t)}$,$\alpha_g$,$V_t$,$cs$) \STATE Update $y_g^{(t)}$ and $\alpha_g$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \section{EXPERIMENTS} Our method is evaluated on the link prediction task and the evolving node detection task. The former is a classical way to assess the effectiveness in capturing dynamic changes of the proximity in adjacent timesteps. The latter focuses on detecting evolving nodes, which are unstable and likely to change over time. For the first task, we use eight datasets collected from Network Repository~\cite{TNDR}; all of them are real-world temporal networks. Table~\ref{tab:datasets} shows the statistics of these datasets. For the second task, we generate several artificial dynamic networks. Each dynamic network has different numbers of nodes and edges. For the density of networks, we impose a \textit{power-law distribution} on the node degree with different parameters. \begin{equation} degree\propto CX^{-\alpha}. \label{equ:power} \end{equation} There are 4 communities initially, and there are more edges within communities than between them. Furthermore, to simulate the evolving trend of dynamic networks, we randomly choose 10\% of the nodes in the network as evolving nodes and design the evolution strategies of nodes as follows: \begin{itemize} \item Evolving nodes can change more than two edges at each timestep, while the limit is two for stable nodes.
\item Evolving nodes are gradually moved to another community by decreasing the number of edges within the community and increasing the number of edges with another community. The edges of stable nodes can be changed within the community. \item The number of edges is generally stable for evolving nodes, i.e., the number of additions is nearly the same as the number of deletions. \end{itemize} \begin{table} \footnotesize \caption{The statistics of dynamic networks. $|V|$ = number of nodes; $|E_T|$ = number of temporal edges; \={d} = average node degree in all timesteps; $d_{max}$ = max node degree in all timesteps; $S_t$ = the whole timespan (days); $T$ = number of timesteps in the training data.} \label{tab:datasets} \begin{tabular}{rrrrrrr} \toprule Dataset&$|V|$&$|E_T|$&\={d}&$d_{max}$&$S_t$&$T$\\ \midrule IA-CONTACT &274 &28.2K &16 &113 &3.97 &10\\ IA-HYPER &113 &20.8K &7.6 &77 &2.46 &14\\ IA-ENRON &151 &50.5K &4.2 &61 &1137.55 &36\\ IA-RADOSLAW-EMAIL &167 &82.9K &20.8 &239 &271.19 &19\\ IA-EMAIL-EU &987 &332.3K &16.1 &232 &803.93 &44\\ FB-FORUM &899 &33.7K &10.8 &109 &164.49 &10\\ SOC-SIGN-BITCOINA &3783 &24.1K &6.4 &597 &1901.00 &11\\ SOC-WIKI-ELEC &6271 &107K &13.1 &602 &1378.34 &20\\ \bottomrule \end{tabular} \end{table} \begin{table*} \caption{AUC scores of the link prediction task.} \label{tab:link} \begin{tabular}{rccccccc} \toprule Dataset&\textbf{DeepWalk}&\textbf{Node2vec}&\textbf{CTDNE}&\textbf{TNE}&\textbf{DynGEM}&\textbf{DynamicTriad}&\textbf{Our method}\\ \midrule IA-CONTACT &0.845 &0.874 &0.913&0.880 &0.907&0.939 &\textbf{0.951}\\ IA-HYPER &0.620 &0.641 &0.671&0.710 &0.736&0.792 &\textbf{0.816}\\ IA-ENRON &0.719 &0.759 &0.877&0.822 &0.845&0.902 &\textbf{0.935}\\ IA-RADOSLAW-EMAIL &0.734 &0.741 &0.811&0.831 &0.788&0.764 &\textbf{0.845}\\ IA-EMAIL-EU &0.820 &0.860 &0.890&0.850 &0.864&\textbf{0.907} &0.878\\ FB-FORUM &0.670 &0.790 &0.826&0.810 &0.856&0.825 &\textbf{0.920}\\ SOC-SIGN-BITCOINA &0.840 &0.870 &0.891&0.877 &0.879&0.881 &\textbf{0.895}\\
SOC-WIKI-ELEC &0.820 &0.840 &0.857&0.837 &0.822&0.849 &\textbf{0.859}\\ \bottomrule \end{tabular} \end{table*} \begin{table*} \caption{MAP, MRR and TOP-K scores of the evolving node detection task. $|V_{hot}|$ = number of evolving nodes at each timestep.} \label{tab:detection} \begin{tabular}{r|c|cccccccc} \toprule Parameters setting&Metric&\textbf{DeepWalk}&\textbf{Node2vec}&\textbf{TNE}&\textbf{DynGEM}&\textbf{DynamicTriad}&\textbf{Our method}&$|V_{hot}|$&$|E_T|$\\ \midrule $\alpha=2$ &MAP &0.451 &0.453 &0.764&0.820&0.869&\textbf{0.900}&&\\ $C=10^2$ &MRR &0.193 &0.203 &0.225&0.269&0.259&\textbf{0.281} &50 &5,010\\ $N$=500 &TOP-K &0.389 &0.389 &0.733&0.766&0.789&\textbf{0.822} &&\\ \hline $\alpha=2$ &MAP &0.551 &0.552 &0.689&0.754&0.779&\textbf{0.844}&&\\ $C=10^3$ &MRR &0.225 &0.224 &0.240&0.260&0.254&\textbf{0.273} &50 &10,650\\ $N$=500 &TOP-K &0.511 &0.489 &0.666&0.664&0.711&\textbf{0.789} &&\\ \hline $\alpha=3$ &MAP &0.515 &0.498 &0.656&0.756&0.733&\textbf{0.795}&&\\ $C=10^2$ &MRR &0.220 &0.215 &0.230&0.251&0.244&\textbf{0.267} &50 &20,838\\ $N$=500 &TOP-K &0.444 &0.411 &0.611&0.635&0.655&\textbf{0.700} &&\\ \hline $\alpha=3$ &MAP &0.515 &0.477 &0.674&0.689&0.713&\textbf{0.775}&&\\ $C=10^3$ &MRR &0.201 &0.196 &0.212&0.234&0.224&\textbf{0.247} &50 &50,930\\ $N$=500 &TOP-K &0.414 &0.401 &0.655&0.606&0.636&\textbf{0.711} &&\\ \hline $\alpha=2$ &MAP &0.431 &0.518 &0.697&0.752&0.762&\textbf{0.810}&&\\ $C=10^3$ &MRR &0.031 &0.036 &0.033 &0.045&0.039&\textbf{0.048} &500 &17,670\\ $N$=5000 &TOP-K &0.401 &0.499 &0.663&0.702&0.711&\textbf{0.740} &&\\ \hline $\alpha=2$ &MAP &0.455 &0.558 &0.722&0.777&0.797&\textbf{0.894}&&\\ $C=10^4$ &MRR &0.038 &0.043 &0.045&0.050&0.047&\textbf{0.051} &500 &41,679\\ $N$=5000 &TOP-K &0.423 &0.489 &0.678&0.711&0.737&\textbf{0.822} &&\\ \hline $\alpha=3$ &MAP &0.471 &0.484 &0.708&0.769&0.787&\textbf{0.901}&&\\ $C=10^3$ &MRR &0.039 &0.039 &0.046&0.048&0.046&\textbf{0.051} &500 &63,703\\ $N$=5000 &TOP-K &0.442 &0.438 &0.682 
&0.721&0.744&\textbf{0.841} &&\\ \hline $\alpha=3$ &MAP &0.477 &0.498 &0.736&0.801&0.812&\textbf{0.907}&&\\ $C=10^4$ &MRR &0.041 &0.042 &0.046&0.051&0.048&\textbf{0.052} &500 &94,612\\ $N$=5000 &TOP-K &0.466 &0.467 &0.711&0.763&0.776&\textbf{0.837} &&\\ \bottomrule \end{tabular} \end{table*} \subsection{Setup} Our approach is based on random walk and dynamic Bernoulli embeddings, and there are several hyperparameters for the construction of dynamic networks, random walk, and embedding learning. We fix some hyperparameters (i.e., $D$=128, $L$=80, $r$=10, $cs$=4, $ns$=10) as suggested in \cite{CTDNE} and use the fixed event number strategy to construct dynamic networks for discrete-time embedding methods (i.e., TNE, DynGEM, DynamicTriad, and our method) in this section. \subsection{Baseline Methods} Our method is a discrete-time network embedding method, and we select baseline methods from different categories for comparison. DeepWalk and node2vec are two representative static methods. For the evolving node detection task, we first apply them to learn each graph separately and use alignment methods~\cite{DWER} to align these embeddings into the same vector space. For dynamic network embedding methods, we select one continuous-time network embedding method (CTDNE) and three discrete-time network embedding methods (TNE, DynGEM, and DynamicTriad). \begin{itemize} \item \textit{DeepWalk}~\cite{deepwalk}. This static method is based on random walk and the skip-gram model. Three hyperparameters are set to their defaults ($D=128$, $r=10$, $ns=10$) and the other two hyperparameters are selected from several values, $L\in\{40,60,80\}$, $cs\in\{6,8,10\}$. \item \textit{node2vec}~\cite{node2vec}. node2vec captures the diversity of connectivity patterns in a network and preserves high-order proximities between nodes. node2vec introduces two new hyperparameters for grid search compared with DeepWalk, and we set $p,q\in \{0.25,0.50,1,2\}$ as mentioned in \cite{node2vec}.
\item \textit{CTDNE}~\cite{CTDNE}. This is a continuous-time network embedding method based on DeepWalk to capture temporal information by random walk in chronological order. We set $F_s$ as a linear distribution and $F_t$ as an unbiased distribution as suggested in \cite{CTDNE}. \item \textit{TNE}~\cite{TNE}. This dynamic embedding method is based on matrix factorization to learn discrete-time network embeddings. We choose the parameter $\lambda\in\{0.001,0.01,0.1,1,10\}$. \item \textit{DynGEM}~\cite{DynGEM}. This model is based on deep auto-encoders to incrementally generate the embedding of the current graph with an initialization from the previous graph. We fix the parameters as suggested in~\cite{DynGEM}. \item \textit{DynamicTriad}~\cite{triad}. This model utilizes the \textit{triad} to capture the dynamic changes in networks. To report the best performance of this method, we set hyperparameters $\beta_1\in\{0.1,1,10\}$ and $\beta_2\in\{0.1,1,10\}$ alternatively. \end{itemize} We repeat all experiments 10 times and report the average performance of each method. \subsection{Link Prediction} \label{sec:link} Link prediction is a common application to evaluate the performance of network embeddings. To generate the training data and testing data, we sort all temporal edges in time-ascending order as suggested in \cite{CTDNE}. Then we use the first 75\% as the training data and the remaining 25\% as the testing data (one graph). Static methods (DeepWalk and node2vec) use the whole training data as one graph to learn one network embedding, while the training data is further partitioned into a sequence of graphs for TNE, DynGEM, DynamicTriad, and our method. The number of timesteps for each dataset is listed in Table~\ref{tab:datasets}. We compute the similarity between two nodes by the L2 distance in the current timestep to predict whether an edge exists between the two nodes in the next timestep.
To evaluate the performance by AUC, we use a \textit{logistic regression model} as the classifier with 5-fold cross-validation. Table~\ref{tab:link} demonstrates that our method outperforms the other methods in most cases. \subsection{Evolving Node Detection} The structure of a real-world network changes over time. However, the network structure generally does not change sharply between adjacent timesteps, and most nodes are stable over many timesteps. Dynamic network embeddings can be used to detect evolving nodes whose neighbors have changed significantly. Thus, we use the evolving node detection task to evaluate the proximity preservation and temporal continuity of network embedding methods in capturing evolution patterns. For each timestep $t$, we calculate the distance between the two embedded vectors $y_g^{(t)}$ and $y_g^{(t+1)}$ in adjacent timesteps for each node $g$, and sort these distances in descending order. The first 10\% of nodes with large distances are called \textbf{active nodes} in the timespan $[S_t,S_{t+1}]$, and this timestep is an active timestep for these active nodes. We further sort active nodes in all timesteps by the number of active timesteps, and choose the top 10\% of nodes with the largest numbers of active timesteps as \textbf{evolving nodes}. Table~\ref{tab:detection} shows the performance of six methods by three metrics: MAP, MRR and TOP-K. CTDNE generates one final embedding for a continuous-time network, and cannot be used for this task. Our method achieves the highest overall gains against other state-of-the-art methods. \subsection{Parameter Analysis} Many hyperparameters of our method have been evaluated by previous work~\cite{deepwalk,DBE}, and we select two hyperparameters (i.e., $\varepsilon$ and $\gamma$) from dynamic network construction and the other two hyperparameters (i.e., $r$ and $cs$) from network embedding learning for parameter analysis, evaluating the performance of our method on the link prediction task.
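Since both tasks and the parameter analyses rely on the link prediction protocol above (L2-distance features at timestep $t$, a logistic regression classifier, cross-validated AUC), it might be sketched as follows; the scikit-learn settings shown are illustrative assumptions, not necessarily those used in the experiments:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def link_prediction_auc(emb_t, pos_pairs, neg_pairs):
    """AUC of predicting edges at t+1 from L2 distances at timestep t.

    emb_t:     {node: vector} embeddings of the current timestep
    pos_pairs: node pairs connected in the next timestep (label 1)
    neg_pairs: node pairs not connected in the next timestep (label 0)
    """
    pairs = pos_pairs + neg_pairs
    # one feature per pair: the L2 distance between the embedded vectors
    X = np.array([[np.linalg.norm(emb_t[u] - emb_t[v])] for u, v in pairs])
    y = np.array([1] * len(pos_pairs) + [0] * len(neg_pairs))
    clf = LogisticRegression()
    # 5-fold cross-validated AUC, as in the evaluation setup
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```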
\textbf{Dynamic network construction}. To construct dynamic networks, we need to specify the event window size $\varepsilon$ and the overlap ratio $\gamma$. Initially, we fix the other parameters (i.e., $D$=128, $L$=80, $r$=10, $cs$=4, $ns$=10), and then we evaluate the performance with different values of the hyperparameters $\varepsilon$ and $\gamma$ on the link prediction task in two datasets, IA-ENRON and FB-FORUM. Table~\ref{tab:ana1} demonstrates that the performance of our method in the link prediction task is stable with different values of $\varepsilon$ and $\gamma$. \textbf{Network embedding learning}. We select two hyperparameters, $r$ (i.e., the number of walks of each node) from random walk and $cs$ (i.e., the context size) from Bernoulli embedding learning. We fix the other hyperparameters (i.e., $D$=128, $\varepsilon$=8000, $\gamma$=0.5, $L$=80) and change the values of $r$ and $cs$. Table~\ref{tab:ana2} shows that the performance of our method is stable across values of $cs$ when $r=10$. However, if we fix $cs=4$, the best performance with $r=10$ achieves an average gain of 6.4\% over the other values of $r$ for FB-FORUM. Thus, the hyperparameter $r$ is somewhat sensitive for the link prediction task, and we can tune the value of $r$ to achieve better performance.
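For completeness, the active/evolving-node scoring used in the detection task can be sketched as follows (the 10\% thresholds follow the description of the evolving node detection task above; the function and argument names are ours):

```python
import numpy as np

def evolving_nodes(embeddings, active_frac=0.1, evolving_frac=0.1):
    """Rank nodes by how often they are 'active' across timesteps.

    `embeddings` is a list of {node: vector} dicts, one per timestep.
    A node is active in [S_t, S_{t+1}] if its displacement
    ||y_g(t+1) - y_g(t)|| is among the top `active_frac` of distances.
    """
    active_count = {}
    for m_t, m_next in zip(embeddings, embeddings[1:]):
        common = sorted(set(m_t) & set(m_next))
        dists = {g: np.linalg.norm(m_next[g] - m_t[g]) for g in common}
        k = max(1, int(len(dists) * active_frac))
        # the k nodes with the largest displacement are active at this step
        for g in sorted(dists, key=dists.get, reverse=True)[:k]:
            active_count[g] = active_count.get(g, 0) + 1
    # evolving nodes: top fraction by number of active timesteps
    ranked = sorted(active_count, key=active_count.get, reverse=True)
    n_nodes = len(set().union(*embeddings))
    return ranked[:max(1, int(n_nodes * evolving_frac))]
```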
\begin{table} \footnotesize \caption{AUC scores of link prediction for hyperparameter analysis of $\varepsilon$ and $\gamma$.} \label{tab:ana1} \begin{tabular}{rcccc} \toprule $\varepsilon$&2000&4000&8000&16000\\ \midrule IA-ENRON ($\gamma$=0.5) &0.900 &0.919 &\textbf{0.930} &0.924\\ FB-FORUM ($\gamma$=0.5) &0.908 &0.910 &0.898 &\textbf{0.915}\\ \bottomrule \\ \toprule $\gamma$&0&0.25&0.50&0.75\\ \midrule IA-ENRON ($\varepsilon$=8000) &0.933 &0.930 &\textbf{0.935} &0.920\\ FB-FORUM ($\varepsilon$=8000) &0.901 &0.910 &\textbf{0.920} &0.895\\ \bottomrule \end{tabular} \end{table} \begin{table} \footnotesize \caption{AUC scores of link prediction for hyperparameter analysis of $cs$ and $r$.} \label{tab:ana2} \begin{tabular}{rcccc} \toprule $cs$&2&4&8&16\\ \midrule IA-ENRON ($r$=10) &0.920 &\textbf{0.934} &0.926 &0.905\\ FB-FORUM ($r$=10) &0.920 &\textbf{0.921} &0.892 &0.884\\ \bottomrule \\ \toprule $r$&1&5&10&15\\ \midrule IA-ENRON ($cs$=4) &0.925 &0.926 &\textbf{0.935} &0.934\\ FB-FORUM ($cs$=4) &0.857 &0.870 &\textbf{0.920} &0.866\\ \bottomrule \end{tabular} \end{table} \section{APPLICATIONS} To analyze the evolution patterns of real-world dynamic networks, we visualize the trajectories of evolving nodes in 2D space by t-SNE~\cite{t-SNE}. A path colored from light to dark (such as light blue to dark blue) shows the trajectory of an evolving node over a timespan, and we draw the node every three timesteps. Note that the two dynamic networks in this section are treated as undirected, and we construct the dynamic networks with the fixed time interval strategy for evolution analysis.
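The 2D projections in this section use off-the-shelf t-SNE; a minimal sketch of the projection step is given below (the perplexity and seed are illustrative choices, not the settings used for the figures):

```python
import numpy as np
from sklearn.manifold import TSNE

def project_2d(vectors, seed=0):
    """Project an (n, D) array of embedded vectors into 2D with t-SNE.

    To plot a trajectory, stack one node's vectors across timesteps
    (together with its context nodes) before projecting.
    """
    vectors = np.asarray(vectors)
    tsne = TSNE(n_components=2,
                perplexity=min(5, len(vectors) - 1),  # must be < n samples
                init="random", random_state=seed)
    return tsne.fit_transform(vectors)                # shape (n, 2)
```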
\subsection{Primary school dynamic network} \begin{figure} \subfigure[2D projection of network embeddings of students and teachers.]{ \label{fig:4:a} \includegraphics[width=8cm]{show_primary.pdf} } \subfigure[Number of evolving nodes in each timestep.]{ \label{fig:4:b} \includegraphics[width=8cm]{analysis_primary.pdf} } \subfigure[Evolving nodes grouped by the number of active timesteps.]{ \label{fig:4:c} \includegraphics[width=8cm]{drift_primary.pdf} } \caption{Evolution analysis of the primary school dynamic network.} \end{figure} \begin{figure} \centering \subfigure[]{ \label{fig:3:a} \includegraphics[width=5cm]{201_6.pdf} } \subfigure[]{ \label{fig:3:b} \includegraphics[width=5cm]{74_6.pdf} } \caption{The trajectories of Student 201 (a) and 74 (b).} \end{figure} The primary school data is collected from face-to-face contacts between students and teachers in a school located in France during two school days in October 2009~\cite{HMOF}. This data contains 232 students from 10 classes, composed of 5 grades with each grade divided into two classes. Ten teachers are assigned to the 10 classes. We choose the first day to analyze its evolution patterns due to the similarity between the two days. We create a dynamic network with 92 timesteps by a fixed time interval $\Delta t=6$ minutes and a window size $\omega=60$ minutes with an overlap ratio $\gamma=0.9$. Figure~\ref{fig:4:a} shows the projection of students and teachers at 8:45 am (the first graph). We encode each class and the teachers with different colors, eleven colors in total. Figure~\ref{fig:4:b} depicts the number of evolving nodes (as defined in the evolving node detection task) from 8:45 am to 17:05 pm. The evolution pattern is in line with the school schedule reported in~\cite{HMOF}. The two peaks at 11:40 am and 14:45 pm correspond to two breaks, and the dip reflects the lunchtime from 12:30 pm to 14:00 pm. Furthermore, only a few students are extremely active, as shown in Figure~\ref{fig:4:c}.
To analyze the trajectories of active students, we select Student 201 and Student 74, which have high active scores from 8:45 am to 5:05 pm, in Figure 4. Student 201 and Student 74 move from Class 2B to Class 5B and then back to Class 2B. Student 201 contacts students of Class 5B (e.g., Students 148, 170, and 218) during the lunch break (i.e., from 11:30 am to 1:30 pm), so we infer that they may be friends. We may also conclude that Class 2B and Class 5B share the lunch break, which can be verified from the school schedule. Moreover, we notice that Student 201 contacts Student 72 after lunchtime, while Student 74 contacts Student 72 before lunchtime. This suggests that Student 201 and Student 74 do not have a direct relationship; however, they may both be friends of Student 72. Based on the theory of triadic closure, they may become friends via Student 72.
\begin{figure}
\subfigure[2D projection of network embeddings colored according to cluster membership.]{
\label{fig:6:a}
\includegraphics[width=8cm]{show_eu.pdf}
}
\subfigure[Number of evolving nodes in each timestep.]{
\label{fig:6:b}
\includegraphics[width=8cm]{analysis_eu.pdf}
}
\subfigure[Evolving nodes grouped by the number of active timesteps.]{
\label{fig:6:c}
\includegraphics[width=8cm]{drift_eu.pdf}
}
\caption{Evolution analysis of the email communication dynamic network.}
\end{figure}
\begin{figure}
\centering
\subfigure[]{
\label{fig:5:a}
\includegraphics[width=5cm]{90_1.pdf}
}
\subfigure[]{
\label{fig:5:b}
\includegraphics[width=5cm]{328_1.pdf}
}
\caption{The trajectories of Researcher 90 (a) and 328 (b).}
\end{figure}
\subsection{Email communication dynamic network}
The email communication network data is collected from a large European research institution composed of 42 departments~\cite{MITN}. The data contains 986 nodes and 332,334 temporal edges across 803 days.
We create a dynamic network with 74 timesteps using a fixed time interval $\Delta t=7$ days and a window size $\omega=14$ days with an overlap ratio $\gamma=0.5$. The data provider hides all department labels out of consideration for personal privacy. To show the proximity structure of this network, we use DBSCAN~\cite{ADAF} to cluster nodes based on the L2 distance, generating 42 clusters. Figure~\ref{fig:6:a} shows the clustering result at week 1 via t-SNE. Clusters are shown in different colors. As can be seen from Figure~\ref{fig:6:a}, this network has a clear structure, and researchers may exchange more emails with others in the same department. The number of evolving nodes at each timestep is provided in Figure~\ref{fig:6:b}; the number in each week is relatively stable except for a few weeks, such as week 10. Most nodes are stable (inactive), as most nodes have zero active weeks (timesteps) in Figure~\ref{fig:6:c}. We focus on the two evolution trajectories of Researcher 90 and Researcher 328 for detailed analysis in Figure 6. Due to the lack of labels, neighbor nodes in the embedded space are colored gray. The neighbors of Researcher 90 change over time, and we notice that Researcher 90 moves between two groups. In the first few weeks, Researcher 90 has few neighbors; the reason may be that Researcher 90 joined the institution as a new researcher. After week 51, Researcher 90 leaves one department for another. Researcher 328 also has few neighbors in the first few months, so Researcher 90 and Researcher 328 have a similar evolution pattern at the beginning. After that, Researcher 328 moves from the outskirts of the department to a core position, while Researcher 328's contacts with others increase significantly. Judging from this evolution trajectory, Researcher 328 may have become a leader or secretary in this department.
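The clustering step just described can be sketched as follows, assuming scikit-learn; the placeholder embeddings and the `eps`/`min_samples` values here are illustrative, not the paper's. DBSCAN is run directly on the embedding vectors with Euclidean (L2) distance, and the resulting labels drive the coloring:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Placeholder embeddings: three well-separated "departments" in 16-D.
centers = rng.normal(scale=10.0, size=(3, 16))
emb = np.vstack([c + rng.normal(scale=0.5, size=(30, 16)) for c in centers])

# Cluster on L2 distance, as in the analysis described above.
labels = DBSCAN(eps=4.0, min_samples=5, metric="euclidean").fit_predict(emb)

n_clusters = len(set(labels) - {-1})   # DBSCAN marks noise points as -1
print(n_clusters)
```

In the real pipeline, `emb` would be the learned network embeddings at a given timestep, and `eps` would be tuned to recover department-scale groups.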
\section{CONCLUSION}
In this paper, we have proposed a novel approach to capturing the changes of dynamic networks while preserving proximity and temporal properties. To evaluate our method, we compare it with several state-of-the-art static and dynamic methods on two tasks: link prediction and evolving node detection. The experiments demonstrate that our method achieves substantial gains and performs effectively in proximity and evolution analysis of dynamic networks. For future work, it is desirable to incrementally embed nodes that have not previously appeared in the network, instead of retraining. Moreover, most existing network embedding methods focus on only one noticeable facet of the network, while real-world networks include diverse facets. Thus, we would like to design a method that incorporates multi-facet properties into network embeddings.
\bibliographystyle{ACM-Reference-Format}
# gamma function identity

Hi my brilliant friends. This identity of the gamma function is well known; I need help proving it using Stirling's approximation. The identity is

$\Gamma(x) = \lim_{n\to\infty} \frac{n!\, n^{x-1}}{x(x+1)(x+2)\cdots(x+n-1)}$

How can we prove it using Stirling's approximation? Please post a clear proof. Thanks.

Note by Refaat M. Sayed, 4 years, 1 month ago
(brilliant.org discussion thread: https://brilliant.org/discussions/thread/gamma-function-identity/)
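A sketch of one standard route to the identity asked about above, assuming Stirling in the form $\Gamma(z)\sim\sqrt{2\pi}\,z^{z-1/2}e^{-z}$ (so $n!=\Gamma(n+1)\sim\sqrt{2\pi}\,n^{n+1/2}e^{-n}$) and the functional-equation identity $x(x+1)\cdots(x+n-1)=\Gamma(x+n)/\Gamma(x)$:

```latex
\begin{align*}
\frac{n!\,n^{x-1}}{x(x+1)\cdots(x+n-1)}
  &= \Gamma(x)\,\frac{n!\,n^{x-1}}{\Gamma(x+n)}
  && \text{since } x(x+1)\cdots(x+n-1)=\frac{\Gamma(x+n)}{\Gamma(x)} \\
  &\sim \Gamma(x)\,
    \frac{\sqrt{2\pi}\,n^{\,n+1/2}\,e^{-n}\,n^{x-1}}
         {\sqrt{2\pi}\,(n+x)^{\,n+x-1/2}\,e^{-(n+x)}}
  && \text{Stirling for } n! \text{ and } \Gamma(x+n) \\
  &= \Gamma(x)\,e^{x}\left(1+\frac{x}{n}\right)^{-(n+x-1/2)}
  \;\xrightarrow[n\to\infty]{}\; \Gamma(x)\,e^{x}\,e^{-x} \;=\; \Gamma(x).
\end{align*}
```

Here $x>0$ is fixed; in the last step $(1+x/n)^{n}\to e^{x}$ while the remaining factor $(1+x/n)^{x-1/2}\to 1$.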
Q: I want to run teambuild's unit tests (more than once) against multiple different databases
I want to run teambuild's unit tests (more than once) against different databases, e.g. I want to test that my build compiles, then run the same suite of tests against a SQL Server database, then Oracle, etc. I'm pretty sure I could do something clumsy like build/test against the 1st config file, then build/test against the 2nd config file, etc., but I'm looking for something more elegant (preferably without the pointless second recompilation).

A: You'll want separate unit tests (these are really considered system tests, since they involve a database) for each database type. If the same unit test can fail for one DB platform and pass for another, then it is not telling you anything specific when you look at the pass/fail status and the history of the test results for that test over time.
Otherwise, consider decoupling or abstracting the DB connection in such a way that you can programmatically change it while setting up the test (see the [ClassInitialize()] and [TestInitialize()] attributes in the MSTEST unit test framework).
The elegant solution is to not depend on the databases for unit tests; create separate tests for the databases that check the data returned by the queries and procedures you are calling. Visual Studio 2010 Premium has the capability of running unit tests to verify the data and behaviour of SQL Server 2005 (and later) DBs. I would be surprised if you could not also find a tool to test Oracle DBs, or roll your own system for testing what is returned (something like ndbUnit might help).
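The decoupling idea generalizes beyond MSTEST. As an illustration only (Python, with in-memory SQLite standing in for both database backends; every name here is hypothetical, not part of teambuild or MSTEST), one shared test body can be run against each connection factory chosen at setup time — the same role [ClassInitialize()]/[TestInitialize()] would play:

```python
import sqlite3

# Hypothetical stand-ins for "SQL Server" and "Oracle" connection factories.
# In a real MSTEST setup, the equivalent backend choice would happen in
# [ClassInitialize()] / [TestInitialize()] based on a run setting.
BACKENDS = {
    "backend_a": lambda: sqlite3.connect(":memory:"),
    "backend_b": lambda: sqlite3.connect(":memory:"),
}

def check_roundtrip(conn):
    """The shared test body: schema + insert + query must agree."""
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO t (name) VALUES (?)", ("widget",))
    (name,) = conn.execute("SELECT name FROM t WHERE id = 1").fetchone()
    return name == "widget"

def run_suite():
    """Run the same test once per backend; report per-backend results."""
    results = {}
    for label, factory in BACKENDS.items():
        conn = factory()          # the only backend-specific step
        try:
            results[label] = check_roundtrip(conn)
        finally:
            conn.close()
    return results

print(run_suite())
```

Because the test body never constructs its own connection, adding a third backend is one more entry in the factory table, with no recompilation of the tests themselves.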
Oliver Vagner leads The Vagner Group, LLC, an independent consulting firm specializing in helping companies in the restaurant, retail, and consumer products and goods industries develop and implement strategies to leverage data to make better decisions. In addition to the practical elements of planning and implementing analytics strategy, Oliver has been a thought leader in the industry use of artificial intelligence, AI ethics, and data privacy. Throughout his career, Oliver has built a comprehensive background in data analytics, with more than 20 years of consulting experience spanning multiple industry verticals. Before founding The Vagner Group, Oliver was general manager for consumer engagement at NCR where he led efforts to use data to drive better outcomes in consumer engagement technologies, such as loyalty and marketing. Oliver has held other data analytics roles such as senior director of consumer insights and analytics at TGI Fridays, director of enterprise data services at Cox Automotive, and chief architect for cloud solutions at Revolution Analytics. Oliver was involved in the early efforts to use data to drive in-store digital content during his time at Solution Forge, LLC, where he oversaw digital merchandising and analytics for food service, retail companies. Previously, Oliver served as a solution architect for Sun Microsystems, Inc., where he developed high-performance database solutions for OLTP and OLAP and was on the forefront of implementing MPP and NoSQL data analytics solutions. Prior to that, Oliver was a senior manager at Manugistics, Inc., where he led a team to deliver advanced forecasting, leveraging econometric models in the retail and CPG space.
\section{Introduction} Stellar evolutionary tracks and isochrones calculate physical radius, luminosity, and effective temperature. In order to compare with observable quantities, almost always magnitudes and colors, a transformation is essential. The need for such transformations is also felt when integrated light models (population synthesis models) are constructed for comparisons to colors from galaxies and star clusters. Color-temperature transformations are also used in spectral abundance analysis, design of observing strategies, computation of selection effects, and a host of incidental astronomical problems. One way to approach this problem is to calculate line-blanketed synthetic spectra and integrate under filter transmission functions to get fluxes, which are then zeroed by comparison with Vega or another standard \citep{buser92,bg89}. This is a convenient approach, but vulnerable to errors in all of the steps of the process: incorrect atmosphere structures, incorrect or incomplete line lists, incorrect treatment of convection, turbulence, non-LTE effects, or line broadening, incorrect filter transmissions, inaccurate spectrophotometry of the comparison star or stars, and finally, the photometry of the comparison star itself. Examples of suspicious circumstances include the fact that the blue edge of the $U$ filter is set by the earth's atmosphere and will inevitably change with time and place, the fact that half the lines in the solar spectrum have yet to be identified \citep{k92a}, and the fact that the absolute flux calibration for stars is uncertain by about 5\% \citep[e.g.][]{berri}. Recent examples of synthetic calibrations are \citet{vc03}, \citet{vaz96}, \citet{lej98}, and \citet{houd00}. There is a need for empirical alternatives in the literature, and the present paper attempts to fill in that gap somewhat. The inspiration for this work comes from \citet{green88}.
Green describes a global color-$T_{eff}$ calibration generated for attachment to the Revised Yale Isochrones \citep{ryi} that provides colors tabulated for a (long) list of temperatures, surface gravities, and [Fe/H] values. The strategy used by Green was to begin with empirical color-color relations for solar-metallicity stars, and adopt the ridge line as the starting place. Then one attaches a color-$T_{eff}$ relation and adds [Fe/H] and gravity dependence by working differentially within synthetic color tables. The approach here is similar, but stays in the empirical regime longer in that the gravity and abundance dependences are fit to the stars themselves rather than via synthetic photometry. In a second phase, $T_{eff}$ and the bolometric corrections are attached to the fitted multidimensional space of $V-K$ color, gravity, and abundance. Synthetic colors are used at very low weight to guide the fits where there are few or no stars in the sample, but seemed to be superfluous except for metal-poor M giants, which do not exist in nature. Only oxygen-rich stars are considered here. Color-temperature relations for carbon-rich giants are given in \citet{bergeat01}. This paper is divided into sections on procedure, literature comparisons, and a concluding section. Supporting material (color-temperature table and interpolation program) is available at http://astro.wsu.edu/models/. \section{Procedure} \subsection{Stellar Data} The nucleus of the photometry catalog is the compilation of \citet{mm78}, which is firmly Johnson-system. Many other photometry sources were included. These include \citet{veed74}, \citet{bessell91}, \citet{stet81}, \citet{daC90}, \citet{wbf83}, \citet{wb83}, \citet{braun98}, \citet{c83}, \citet{cohen78}, \citet{elias82}, \citet{elias85}, \citet{f78}, \citet{f79}, \citet{p80}, \citet{cohen80}, \citet{f81}, \citet{dac81}, \citet{cf82}, \citet{f83}, \citet{fmb90}, \citet{leggett}, \citet{dahn}, and the 2MASS point source catalog. 
As a first try at assigning abundance measurements to the list of 4496 stars, the \citet{cay01}, \citet{mcw}, and \citet{edv93} abundance catalogs were consulted. M giants with good photometry were artificially assigned an [Fe/H] of zero, except for those belonging to clusters, in which case the cluster metallicity was adopted. M dwarfs were assigned an [Fe/H] based on their kinematics, most of which came from \citet{veed74}. Young disk objects were assigned [Fe/H] $= -0.1$, old disk $-0.5$, and halo $-1.5$. Cluster stars naturally inherited the cluster metallicity. Cluster abundances came from mostly secondhand compilations \citep{wor94a, braun98, f78}. An abundance of +0.3 was adopted for NGC 6791 \citep{wj03}. LMC field stars were assigned $-0.3$ and stars in the SMC $-0.6$. Some supergiants and very hot stars were artificially assigned [Fe/H] $=0$ when no abundance was available, but many had literature abundances. Unfortunately, a complete citation of the abundance sources cannot be given, as notes on some of the (perhaps 5\%) abundance assignments have been lost. A total of 2090 usable stars had abundances, although the number is considerably smaller for any given photometric color. Odd holes appear in the final data set. For instance, a primary source for M dwarf colors is \citet{veed74}, from which $J$-band data is missing. $U$-band data is hard to find for cool stars. Available $R$-band data has gaps as well. To try to fill in the ``K dwarf desert'' (see below), we also scoured \citet{nstars1,nstars2,casagrande} for photometry and abundance information. Solar metallicity mean relations for all spectral types from \citet{johnson66} and \citet{bb88} were included in the list, with [Fe/H] $= -0.1$ assumed for these ``stars''.
\subsection{Photometric Systems} All, we think, would agree that the various photometric systems are, collectively, an admirable effort but also a bit of a mess, due to the fact that one telescope/site/detector combination is a unique thing, not transferable to other telescopes in other places with different equipment. This is mostly overcome by observing standard stars that have been observed many times by one setup and should therefore be internally homogeneous: a photometric ``system.'' The \citet{mm78} catalog is on the ``Johnson'' photometric system. For colors involving $RI$, the target system was ``Cousins,'' and we applied the transformation equations of \citet{bessell79} and \citet{bessell83} to transform the Johnson data, except for $R-I$, for which we used a tracing of Figure 3 from \citet{bessell83} rather than the formula given in the paper. Additional optical data that was already on the Cousins system was left there. Infrared data was imported from 5 different systems (and this is mild compared to the number of systems that have proliferated over the years). As a target system, we chose the homogenized system of \citet{bb88}. Transformations from the Johnson, CIT, and AAO systems were used as provided in \citet{bb88}. Some 2MASS data, mostly attached to NGC 6791 stars in the present stellar catalog, were transformed via \citet{carp01} formulae to the \citet{bb88} system. Corrections for interstellar extinction were done using the \citet{car89} extinction curve. Note that corrections are applied differently for Johnson $RI$ than for Cousins $RI$, since the filters are at substantially different wavelengths. Such wavelength differences cause negligible correction differences at infrared wavelengths. \subsection{Color-color Fitting} The photometrically-homogenized, dereddened stellar data were then presented to a series of additional processing steps.
A multivariate polynomial fitting program (a modification of the one used in \citet{wor94a} to fit spectral indices as a function of stellar atmosphere parameters) was applied to the data. The dependent variable chosen was $V-K$ because it is monotonically increasing with temperature and insensitive to abundance. $V-K$ is a fabulous temperature indicator in stars cooler than the sun, and, because of its monotonicity, can still serve as a temperature-like variable for hotter stars. Terms of up to order $(V-K)^6$ could be included, and up to quadratic terms of log $g$, [Fe/H], and cross-terms. Chemically peculiar stars such as carbon stars were excluded from the fit. The color range was divided into 5 widely overlapping sections and each range was independently fit. For example, the second-hottest temperature section, for the color $V-I$, is displayed in Figure \ref{fig:6panel}. The specific polynomial terms allowed in the fit could be different for each temperature section. This allowed, for instance, [Fe/H] sensitivity to be manually phased out if desired. The fits were done many times. Outlier data points were rejected manually with the aid of a graphical interface that allowed the name and parameters of each star to be scrutinized before rejection. Before one (of seven) color fits in one (of five) temperature regimes passed inspection, it was examined, both raw and as residuals from the fit, against all three variables of color, gravity, and abundance. Data rejection and polynomial term additions and subtractions were done iteratively with the aid of f-test statistics. In an approximation of what appears during the fitting process, Figure \ref{fig:6panel} shows both raw data and residuals after the fit as a function of $V-K$ color, [Fe/H], and log $g$, with symbol types varying as a function of abundance. 
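As a schematic of this fitting step (an illustrative sketch with synthetic data, not the authors' fitting program — the per-section term selection, interactive outlier rejection, and F-test pruning are omitted), a single color can be fit by linear least squares in powers of $V-K$ plus gravity, abundance, and cross terms:

```python
import numpy as np

rng = np.random.default_rng(2)

def design(vk, logg, feh, order=3):
    """Basis: 1, (V-K)^1..order, log g, [Fe/H], and two cross terms."""
    cols = [np.ones_like(vk)]
    cols += [vk**i for i in range(1, order + 1)]
    cols += [logg, feh, vk * logg, vk * feh]
    return np.column_stack(cols)

# Fake "stars": a known polynomial surface plus photometric noise.
vk   = rng.uniform(0.5, 3.0, size=400)
logg = rng.uniform(0.0, 5.0, size=400)
feh  = rng.uniform(-2.5, 0.5, size=400)
true = 0.1 + 0.45*vk - 0.03*vk**2 + 0.01*vk**3 + 0.02*logg + 0.05*feh
color = true + rng.normal(scale=0.01, size=400)   # e.g. a V-I analogue

X = design(vk, logg, feh)
coef, *_ = np.linalg.lstsq(X, color, rcond=None)
resid = color - X @ coef
print(round(float(resid.std()), 3))
```

In the real procedure, the allowed terms differ in each of the five overlapping temperature sections, and the residuals are inspected against all three of color, gravity, and abundance before a fit is accepted.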
Synthetic color points are also shown, for purposes of illustration, although we emphasize that the synthetic colors did not influence the fits except for stars that do not exist in nature. \begin{figure} \plotone{6panel.eps} \caption{An illustration of the fitting process in the warm temperature range, using the $V-I$ color. Smaller-size 3-vertex symbols are synthetic photometric points that were not allowed to affect the fit if real stars were present, open for metal-rich ``stars'' and skeletal for metal-poor ``stars.'' Open pentagons are metal-rich stars, open squares are between [Fe/H] = $-1$ and 0, and skeletal squares are metal-poor. These choices can be directly seen in the two, middle [Fe/H] panels. The top row of panels is $V-I$ versus $V-K$, [Fe/H], and log g, and the bottom row of panels is the data minus fit residuals versus the same three variables. These plots vaguely mimic what the fitting program shows as it operates, although the fitting program can better isolate and display arbitrarily defined stellar groups, and also shows fits. \label{fig:6panel} } \end{figure} The final polynomials were combined in tabular form, using a weighted-mean scheme wherein the middle of each $V-K$ section was weighted strongly compared to the edges of each section. [Fe/H] and log $g$ were tabulated in 0.5 dex intervals, $-2.5 \leq$ [Fe/H] $\leq 0.5$, and $-0.5 \leq$ log $g \leq 5.5$. The resultant color-color relations are illustrated in Figures \ref{fig:ub1}, \ref{fig:bv1}, \ref{fig:vr1}, \ref{fig:vi1}, \ref{fig:jk1}, and \ref{fig:hk1}. All stars, even if they were rejected during the fitting process, are included in the figures. Carbon stars are included, but only for illustrative purposes; fits were not attempted and one is again referred to the work of \citet{bergeat01}. \begin{figure} \plotone{f1.eps} \caption{The $U-B$, $V-K$ color-color diagram for unculled stars. 
Stars have different symbol types for metal rich ([Fe/H] $> -0.2$), metal-poor ([Fe/H] $< -1.2$), and intermediate abundance ranges. Calibrations for typical giant and typical dwarf gravities are drawn in solid for [Fe/H] $= 0$, dashed for [Fe/H] $= -1$, and dotted for [Fe/H] $= -2$. Most carbon stars (asterisks) are not plotted as they stretch beyond the plot limits along a line from the plotted ones up to $(V-K,U-B)\approx(6,6)$. \label{fig:ub1} } \end{figure} \begin{figure} \plotone{f2.eps} \caption{The $B-V$, $V-K$ color-color diagram for unculled stars. Stars have different symbol types for metal rich ([Fe/H] $> -0.2$), metal-poor ([Fe/H] $< -1.2$), and intermediate abundance ranges. Calibrations for typical giant and typical dwarf gravities are drawn in solid for [Fe/H] $= 0$, dashed for [Fe/H] $= -1$, and dotted for [Fe/H] $= -2$. Carbon stars are shown as asterisks. \label{fig:bv1} } \end{figure} \begin{figure} \plotone{f3.eps} \caption{The $V-R$, $V-K$ color-color diagram for unculled stars. Symbols and line styles are as in Figure \ref{fig:bv1}. \label{fig:vr1} } \end{figure} \begin{figure} \plotone{f4.eps} \caption{The $V-I$, $V-K$ color-color diagram for unculled stars. Symbols and line styles are as in Figure \ref{fig:bv1}. \label{fig:vi1} } \end{figure} \begin{figure} \plotone{f5.eps} \caption{The $J-K$, $V-K$ color-color diagram for unculled stars. Symbols and line styles are as in Figure \ref{fig:bv1}. \label{fig:jk1} } \end{figure} \begin{figure} \plotone{f6.eps} \caption{The $H-K$, $V-K$ color-color diagram for unculled stars. Symbols and line styles are as in Figure \ref{fig:bv1}. \label{fig:hk1} } \end{figure} For very hot stars of O and B spectral type an additional color-color table that was crafted by hand from either summary color-spectral type relations or from our own color-color relations was employed to refine the color-color relations for the hottest stars. 
The sources consulted were \citet{k82,toku,vacca}, and our own color-color plots. There is basically no abundance leverage for very hot stars, so we assumed a zero metallicity dependence. These tabulated average values were included in the polynomial fits as if they were individual stars. In the hot star regime, two uncertain areas came to light that deserve mention as regards dwarf vs. supergiant colors. First, in $U-B$, \citet{k82} data imply a large and distinct color separation between dwarfs and supergiants, but the (few) supergiants available in our list did not follow the literature trend. Nor were the polynomials flexible enough to track these changes, mostly because, for O stars, the difference in surface gravity is very minor [4.15 (dwarfs) vs. 4.09 (supergiants) according to \citet{vacca}]. In the end, we performed a weighted average between the polynomial fits and the tabulated values. There is probably considerable uncertainty left in the supergiant $U-B$ colors, perhaps several tenths of a magnitude. This is one area that could be vastly improved with more photometry, with the caveats that reddening is often a huge factor for these intrinsically bright, usually distant stars and the fact that fast rotation introduces an inclination angle dependence in the colors. Users wishing to avoid this entirely may want to feed our interpolation program artificially high gravities for stars hotter than about 9000 K. The second area of debate was that the tabulated $H-K$ colors of \citet{toku} for O supergiants were about 0.09 mag redder than for O dwarfs. In this case, we saw no trace of such a trend in our data: a few stars were that red, but they were all dwarfs. We allowed the polynomial fit (which, in that regime, was a function of temperature alone) to determine the final color-color relation. In the middle of the temperature range, a small gravity dependence was indicated, but no dependence on [Fe/H] was ever statistically significant. 
In the regime of cool giants, there is a strong evolutionary effect such that metal-poor stellar populations do not generate M-type giants. The rich globular cluster 47 Tucanae is on the cusp, containing 4 long-period variable stars at the tip of its giant branch at [Fe/H] $\approx -0.8$. The SMC, at present-day [Fe/H] $\approx -0.6$, generates some M and carbon stars, but mostly because of intermediate-age populations that grow very bright (and cool) asymptotic giant branches. Thus, there is a sharp transition from excellent metallicity coverage for K giants to very limited metallicity leverage for M giants, exacerbated by the fact that M giant abundances are hard to measure. In M dwarfs, where stars of all metallicities exist in our list, there is an interesting, strong convergence of color-color sequences as a function of metallicity, so that G dwarfs have a very strong [M/H] dependence, there is a transition in K dwarfs, and M dwarf colors have no detectable [M/H] dependence. In fitting, therefore, the [M/H] dependence was gradually removed for cooler and cooler stars: for the giants because cool, metal-poor stars do not exist, and for the dwarfs because the [M/H] dependence removes itself empirically. \subsection{Temperatures} Due to our approach of fitting color-color relations internally as a function of gravity and abundance, attachment of temperature scales could, in principle, be done for any color-temperature relation in any part of the parameter space. The first iteration of this process was to layer color-temperature relations on top of each other until the whole parameter range was covered, and to take the median in regions where more than one relation applied. This is illustrated in Figs. \ref{fig:mash2gt} and \ref{fig:mash2dw}. For FGK giants, \citet{alonso99a} and \citet{alonso99b} were used. These works include a specific [Fe/H] dependence, and the average of both the $V-I$ and $V-K$ relations was used.
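The layer-and-take-the-median scheme just described can be sketched as follows; this illustration uses made-up placeholder relations and validity ranges (assuming NumPy), not the calibrations actually combined:

```python
import numpy as np

# Placeholder color-temperature relations, each valid on its own V-K range.
# Outside its range a relation contributes NaN, which nanmedian ignores.
RELATIONS = [
    (0.5, 2.5, lambda vk: 9500.0 - 2000.0 * vk),   # hot/warm segment
    (1.5, 4.5, lambda vk: 8800.0 - 1700.0 * vk),   # overlapping mid segment
    (3.5, 8.0, lambda vk: 6200.0 -  450.0 * vk),   # cool segment
]

def layered_teff(vk_grid):
    """Evaluate every relation where valid; take the median of the layers."""
    layers = np.full((len(RELATIONS), len(vk_grid)), np.nan)
    for i, (lo, hi, f) in enumerate(RELATIONS):
        mask = (vk_grid >= lo) & (vk_grid <= hi)
        layers[i, mask] = f(vk_grid[mask])
    return np.nanmedian(layers, axis=0)

vk = np.linspace(0.5, 8.0, 16)
teff = layered_teff(vk)
print(teff[0], teff[-1])
```

Where only one relation covers a color, it is adopted directly; where two or more overlap, the median damps any single outlying calibration.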
For appropriate runs of temperatures and gravities, \citet{vc03} $V-I$ was translated to $V-K$ via our color-color relations. In a similar manner, the synthetic fluxes of \citet{k92} and \citet{bbsw,bbsw2} were combined and translated to colors as in \citet{wor94b}. In this case, both $V-R$ and $V-I$ were translated to $V-K$ via the empirical color-color relations and plotted along with the untweaked $V-K$--$T_{eff}$ relations. \citet{toku} developed average color-temperature relations for sequences of supergiants, giants, and dwarfs using literature temperature scales. \citet{bessell98} gives color-temperature sequences for solar-abundance dwarfs and giants based on different model atmospheres, and we also referred to the empirical cool giant sequences of \citet{rid80,dyck96}. The color-temperature sequences of \citet{johnson66} are also included. For the coolest dwarfs, analysis of the data of \citet{basri00} yielded a relation as a function of $I-K$ color, specifically $T_{eff} = -460.25\times (I-K) + 4323$, valid for $I-K > 2.9$. Adopting this relation meant that the final temperature assignments for the coolest dwarfs needed to wait for the final color relations to be fixed. Given the disparate ingredients, the final adopted temperatures were hand-guided a fair amount. For example, M dwarfs with known angular diameters, but not separately summarized in existing color-$T_{eff}$ calibrations, were also included in the mix. The eclipsing binaries YY Geminorum \citep{torres} and CM Draconis \citep{viti} were supplemented with interferometrically derived temperatures from \citet{berger06} and, in the case of Barnard's star, from \citet{dawson04}. $VK$ photometry came from either our own catalog or that of \citet{leggett}. The temperature estimates of \citet{berri} for eleven dwarfs are also plotted.
These are more indirect temperature estimates from the ratio of the bolometric flux to the flux at an infrared wavelength, the total to infrared flux ratio method (TIRFM). The positions of the cooler stars, especially, influenced us to adopt a temperature scale near 3000 K that is somewhat cooler than the bulk of the published calibrations. The fits are good to a limit of $V-K=10.2$. Since cool dwarfs and giants have different temperature scales, this corresponds to approximately $T_{eff}=2700$ K for solar-metallicity giants and $T_{eff}=1914$ K for solar-metallicity dwarfs. Not that it proves or illustrates anything significant, but the sun's $B-V$ comes out to be 0.66 mag in the final calibration, which compares well with literature estimates \citep{taylor, gray1, gray2}. \begin{figure} \plotone{mash2gt.eps} \caption{Temperature-$V-K$ calibrations for cool, solar abundance giants. Lines are color-coded for the calibrations of \citet{alonso99a,alonso99b,vc03,toku,johnson66,wor94b,bessell98}. Our adopted relation is shown as diamonds. \label{fig:mash2gt} } \end{figure} \begin{figure} \plotone{mash2dw.eps} \caption{Temperature-$V-K$ calibrations for cool, solar abundance dwarfs. Lines are color-coded for the calibrations of \citet{vc03,toku,johnson66,wor94b,bessell98} and \citet{basri00}. Red dots with error bars are M dwarfs from \citet{berger06} and magenta open circles are TIRFM temperatures and photometry from \citet{berri}. YY Geminorum's temperature is from \citet{torres}, Barnard's star from \citet{dawson04}, and CM Draconis's from \citet{viti}. Our adopted relation is shown as diamonds. \label{fig:mash2dw} } \end{figure} \subsection{Bolometric Corrections} The last item to be added was the $V$-band bolometric correction (BC). Since they were the last item in the chain, BCs could be inserted as a function of either color or temperature, and for any passband. As for temperature scales, a variety of empirical and theoretical options were intercompared.
The \citet{vc03} BCs were adopted for the middle of the temperature range, supplemented by the \citet{vacca} formula for $4.40 < {\rm log}\ T < 4.75$ for the hottest dwarfs and supergiants. \citet{vc03} have a solar BC$_V = -0.09$ mag, and other scales were zero point adjusted to match. The \citet{wor94a} BCs needed a 0.03 mag shift to match that, for example. At the cool end, for both giants and dwarfs, the \citet{vc03} BCs drift slightly from most calibrations, as seen in Figure \ref{fig:bc}. For giants, we adopted the average, empirical-plus-theoretical $K$-band BC from \citet{bessell98}, read from their Figure 20. For cool dwarfs, we adopt the $K$-band (UKIRT IRCAM3 system) BCs of \citet{leggett}. We extended their polynomial slightly to reach our $V-K=10.2$ cool limit. One subtlety regarding the \citet{leggett} calibration should be mentioned. They give two polynomial fits to the $K$-band BC, one as a function of $I-K$ and one as a function of $J-K$. We adopt the $I-K$ version, as the $J-K$ version drifts significantly from the $I-K$ version at warmer temperatures. The cause of this drift is increased scatter in the $J-K$ diagram, or, more fundamentally, the fact that both $J$- and $K$-bands are on the red tail of the blackbody curve for the warm half of the temperature range covered, so that $J-K$ as a temperature indicator has a small temperature range per unit error. \begin{figure} \plotone{f9.eps} \caption{$V$-band bolometric corrections for dwarfs and giants. The top panel is a sequence of giants, and the bottom panel is a sequence of dwarfs, both for near-solar metallicity. 
The symbol key is marked on the plot itself with sources \citet{vacca} for hot stars, \citet{houd00} for cool stars, \citet{vc03,bessell98,wor94b,plez92,brett95}, and \citet{leggett}, plus ``PHOENIX,'' which refers to fluxes produced from the Phoenix code \citep{allard95} with colors generated as in \citet{wor94b}, and ``Plez 1997,'' which refers to a private communication that was subsequently published in \citet{bessell98}. \label{fig:bc} } \end{figure} The main calibrations employed are plotted in Figure \ref{fig:bc}, along with citations. For clarity, the fitted result and also the BCs of \citet{buzzoni} are omitted. Note that, plotted as a function of color, and as predicted by synthetic fluxes, the bolometric corrections are a very weak function of abundance and gravity. This is a degeneracy. That is, increasing a cool giant's abundance (for example) will make it redder and give it a larger (absolute value of the) $V$-band BC. Such vectors lie closely along the trend caused by temperature, so BCs are strongly covariant with $T$, log $g$, and [M/H] when plotted versus $V-K$. We exploit this for cool stars by adopting BCs that vary as a function of color alone. Gravity and abundance dependence is then inherited from the gravity and abundance variations of the color-color diagrams. The various relations were combined via temperature-dependent weighted means, where the weights were chosen to de-emphasize outliers. Table 3, the full-length version of which is given in the electronic version of this journal, gives the final calibration in grid form. An ASCII version, interpolation program, and other supporting material are available at http://astro.wsu.edu/models/. \section{Comparisons and Discussion} The wealth of comparison data that we could be checking against is too vast to illustrate completely in the pages of this journal, so we limit ourselves to a few key examples.
\subsection{Cool Regime} One region of parameter space that is of keen interest is that of low stellar temperatures. We check our results for cool stars against \citet{vc03}, \citet{lej98}, an update of the \citet{green88} color table used in the Yonsei-Yale isochrones \citep{yi01}, and, for good measure, the synthetic colors of \citet{wor94b} in Figure \ref{fig:bvvi.cool}. The coolest giants are very important for integrated-light studies of spectral features such as TiO that become strong only in these stars, and for surface brightness fluctuation (SBF) magnitudes, especially at red colors, that depend on these stars because of the $L^2$ dependence of an SBF magnitude. In Figure \ref{fig:bvvi.cool} our fits are shown as black lines. They were fitted to the $B-V$ - $V-K$ and $V-I$ - $V-K$ diagrams, so it is no surprise that they still fit in this color-color plane. The updated-Green calibration follows an extrapolation of the giant sequence off into regions not occupied by stars, while the dwarf sequence for solar abundance follows the stars very well. There is considerable metallicity dependence in the Green calibration that the stars do not appear to share. The \citet{vc03} sequences follow both dwarfs and giants fairly well, with a fairly good (small) abundance dependence. The oscillations in the solar metallicity giant track are a reflection of actual values in their data tables. The coolest temperature reached by \citet{vc03} is 3000 K. The \citet{lej98} calibration is based on corrected synthetic fluxes. In this case, the dwarfs and giants track together with little or no gravity separation until, at a temperature well within the tabulated range of applicability, the values become wild. \begin{figure} \plotone{f10.eps} \caption{The $B-V$, $V-I$ color-color diagram for M stars. Red dots are stars with [Fe/H] $> -0.2$ and blue dots are stars with [Fe/H] $< -1.2$, with intermediate stars cyan. Giants are open symbols, while dwarfs are filled.
The data are unculled. Calibrations for giant and dwarf color-color sequences are drawn in solid for [Fe/H] $= 0$, dashed for [Fe/H] $= -1$, and dotted for [Fe/H] $= -2$. The color codes for different authors are noted in the figure (``Lejeune'' is \citet{lej98}, ``Green'' is the updated \citet{green88} table, ``V\&C'' is \citet{vc03}), and ``Empirical'' refers to this work. \label{fig:bvvi.cool} } \end{figure} \subsection{Colors not Explicitly Fit} Besides author comparisons, another way to check our results is to plot colors that were not fitted explicitly to see if the implicit color dependences are correctly modeled. $R-I$ is one such, and is illustrated in Figure \ref{fig:rivk}. For this color, the fits were versus $V-R$ and $V-I$, for slightly different samples of stars. The $R-I$ fitted tracks fall among the stars fairly well, except for a hard-to-see reversal around $V-K=1.5$ where the giants become $\approx 0.02$ mag redder than the dwarfs. This 0.02 mag shift is probably incorrect, but it gives a valuable indication of the reliability of the color-color fits. \begin{figure} \plotone{f11.eps} \caption{$R-I$ is plotted as a function of $V-K$. Stars of different giant/dwarf status and abundance are plotted with different symbols according to the key. Stars with [Fe/H] $> -0.2$ are considered metal rich, stars with [Fe/H] $< -1.2$ are considered metal poor, and stars between these values are considered intermediate in metallicity. Lines are coded as in Figure \ref{fig:bvvi.cool}. This color was not fit during the calibration process. \label{fig:rivk} } \end{figure} \subsection{The K Dwarf Desert} Reliability must be a function of temperature regime. One particularly troublesome area is that of K-type dwarfs and the damping of the abundance sensitivity going toward cool stars. In Figure \ref{fig:kdesert}, a $T_{eff}$ of 5000 K corresponds to $V-K \approx 2.2$ and $T_{eff} = 4000$ K corresponds to $V-K=3.4$.
It is clear that the magic combination of full photometry plus a good abundance estimate is lacking from our data set for dwarfs in general and metal-poor dwarfs in particular. Note the general lack of open symbols (dwarfs) compared to filled (giants). This lack of data means that the metallicity dependence of the dwarfs is inherited from the plentiful giants in this temperature regime, an undesirable feature. At redder colors, as their surface gravities diverge, the dwarfs and giants separate in color. Simultaneously, the metallicity dependence appears to reverse, at least in the giants. It is not completely clear from the present data what should be happening with the dwarfs, although they do seem to mirror the giants. The polynomials do their best to smoothly flow through all of this, but we judge it unlikely that they have truly captured the essence of the color behavior in this regime, as it is not clear to our eyes exactly what should be happening (it seems likely that some of the photometry is bad). The color reversal with [Fe/H] is an issue only for $U-B$ and $B-V$ colors, although the paucity of K dwarf data is of some concern for all colors, as the metallicity dependence is relatively unconstrained. \begin{figure} \plotone{apjkdesert2.eps} \caption{The figure shows a small section of the $B-V$, $V-K$ color-color diagram. Dwarfs are drawn with larger symbols than giants for emphasis according to the key in the figure. Stars with [Fe/H] $> -0.2$ are considered metal rich, stars with [Fe/H] $< -1.2$ are considered metal poor, and stars between these values are considered intermediate in metallicity. Calibrations for giant and dwarf color-color sequences are drawn in solid for [Fe/H] $= 0$, dashed for [Fe/H] $= -1$, and dotted for [Fe/H] $= -2$. At red color, the dwarfs follow the bluer $B-V$ tracks. This is a region of uncertainty, as discussed in the text.
\label{fig:kdesert} } \end{figure} \subsection{Error Propagation} The principal source of error in the color-color fits is finding a suitable polynomial to follow the various twists and turns that the colors take. We fit the colors in five segments, with multiply-redundant overlap in color, and used the overlap regions to estimate the error from polynomial fitting. With typically hundreds of stars available for each fit, random photometric uncertainty is not a concern (though, of course, systematic uncertainty is). The median fit uncertainty over all temperatures, gravities, and abundances is listed in Table \ref{tab1}. We also thought it useful to propagate errors in the final subroutine so that uncertainties in the effective temperature scale could be translated to uncertainties in color. For this we used the various $T_{eff}$ relations plotted in Figures \ref{fig:mash2gt} and \ref{fig:mash2dw} and a couple of others to roughly estimate a percentage error as a function of temperature. This is given in Table \ref{tab2}. Note that the errors in Table \ref{tab2} for cool stars are more applicable to giants than dwarfs; dwarf temperatures seem more uncertain than those of giants, but we didn't have enough dwarf calibrations to estimate this very well, so we left it alone. For color $I$ with color error $\sigma_I$ and a temperature error $\sigma_T$, errors propagate in the elementary way: \begin{equation} \sigma^2 = \sigma_I^2 + \bigl( {{{\rm d} I}\over{{\rm d} T}}\sigma_T \bigr)^2 .
\end{equation} \begin{deluxetable}{lr} \tablecaption{Median Polynomial Fit Uncertainty \label{tab1}} \tablewidth{0pt} \tablehead{\colhead{Color} & \colhead{$\sigma$ (mag)} } \startdata $U-B$ & 0.071 \\ $B-V$ & 0.017 \\ $V-R$ & 0.010 \\ $V-I$ & 0.011 \\ $J-K$ & 0.004 \\ $H-K$ & 0.002 \\ \enddata \end{deluxetable} \begin{deluxetable}{rr} \tablecaption{Temperature Uncertainty Assumed \label{tab2}} \tablewidth{0pt} \tablehead{\colhead{$T_{eff}$ (K)} & \colhead{$\sigma$ (\%)} } \startdata 50000 & 4.0 \\ 20000 & 2.5 \\ 10000 & 1.0 \\ 6000 & 0.5 \\ 4000 & 0.5 \\ 3500 & 1.0 \\ 3000 & 1.5 \\ 2000 & 4.0 \\ \enddata \end{deluxetable} \subsection{Reddening Estimation Using M Dwarfs} Color-color diagrams have been used to derive a ``color excess'' from which can be inferred a value for the dust extinction \citep{m53}. The metallicity-dependent color-color fits of this paper offer a general, if not overly precise, method of generating a color-color plot for any color combination as a function of abundance and gravity. The classic $U-B$, $B-V$ diagram is shown in Figure \ref{fig:reds} for dwarfs only. The double inflection redward of zero color represents the rise and fall of the Balmer break in B-type through A- and F-type stars. A defect of this method is that it only works on clusters that have A-type stars, that is, ones younger than about 1 Gyr that still have dwarfs that hot. Interestingly, there is an additional color inflection in the M dwarfs (cf. \citet{lej98}), roughly between 4000 K and 3000 K, that may allow independent reddening estimates for old clusters that have deep photometry. This inflection exists in almost every color, although the $U$ band presents the most dramatic manifestation of it. \begin{figure} \plotone{f13.eps} \caption{The final color-color calibration for dwarfs only is shown in the $U-B$, $B-V$ plane as dots with error bars attached, where the error bars include fitting uncertainties and $T_{eff}$ uncertainties.
An additional line is shown that represents the color shift due to dust screening of $A_V = 0.1$ mag. For illustrative purposes, a vector for $A_V = 1.0$ mag is also sketched. The approximate blue limits of the bluest dwarfs at the main sequence turnoffs of isochrones of various ages are marked with the corresponding ages. Near the red bump feature, stellar effective temperatures (degrees K) are indicated. \label{fig:reds} } \end{figure} The wiggle has been seen before in various colors and with variable fidelity \citep{caldwell, bessell91, tapia, bryja} but with modern telescopes and instrumentation, it may turn into an astrophysical tool. It is caused by the onset of molecular absorption (TiO being the number one culprit) across the M temperature range that radically changes the underlying spectral shape (cf. \citet{bessell91}). If the $U$ band is utilized, the coolest stars involved have $U-I = 5.65$ mag according to our colors and $M_I=9.0$ according to diagrams in \citet{legg92}. This leads to an absolute $M_U = 14.65$ mag. If a modest telescope can reach $U=23$ as the KPNO 2.1-meter did in \citet{kr95}, then the $U$ flux is readily detectable to about 500 pc distance. For reference, the nearest ancient open cluster is M67 at about 800 pc, so one would need a slightly larger telescope or better $U$ sensitivity for $U$ to be useful. However, redder colors can also be made to work at about the same confidence level relative to the fitting errors. The fitting errors are shown as error bars on points in Fig. \ref{fig:reds}, as is another color sequence shifted by $A_V = 0.1$, shown as a line. We judge that this extinction is the smallest that can be detected at all simply using the color-color fits we present, and so is not particularly competitive with other methods as it stands.
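The distance quoted above follows from the distance modulus, $m - M = 5\,{\rm log}_{10}(d/10\ {\rm pc})$. A short numerical check (Python; illustrative only), using the $M_U = 14.65$ mag absolute magnitude and $U = 23$ mag limit given in the text:

```python
def distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus
    m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Coolest useful dwarfs: U - I = 5.65 mag and M_I = 9.0 mag give
# M_U = 14.65 mag; a modest telescope reaches U = 23 mag.
m_u = 9.0 + 5.65
limit = distance_pc(23.0, m_u)  # roughly 468 pc
```

The result, just under 500 pc, matches the estimate quoted above.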
Interestingly, at redder passbands, the M-type deflection becomes less pronounced but the errors also decrease so that any $A_\lambda$ extinction vector stays at about the same statistical significance in most color-color planes. This does not solve the problem, however, because observational measurement error becomes larger than the fit error at $JHK$ wavelengths. Future refinements to this reddening estimation method are possible and should be encouraged. Giants also show such an inflection. However, no Galactic cluster has enough cool giants to populate the inflection region, globular cluster giant branches being too warm, and open clusters being too low mass to have many such giants. There may be limited application for local group galaxy fields with resolved photometry; derivation of reddening maps across the surface, for example. However, the compositeness of the stellar populations of local galaxies may introduce too much error in the scheme for it to be useful. \subsection{Gravity Dependence Comparison} The dimension of temperature is a downstream add-on component using the method of this paper, but the dimensions of [Fe/H] and log $g$ are inherited from the stellar catalog and can therefore be compared to the predictions from previous calibrations in a fairly clean way. In and near the M star temperature regime, we explicitly damped the [Fe/H] dependence away, but the gravity dependence was freely fit. The character of the data changes in that the dwarfs fork away from the giants the cooler one goes, making any dependence more complicated than linear rather suspicious. (No gravity dependence more than linear was used in this work, in this regime.) By way of illustration, we plot some color-derivatives for one color, $B-V$, with abundance held fixed and gravity varied, in Figs. \ref{fig:diffcgbv} and \ref{fig:difgbv}. \begin{figure} \plotone{f14.eps} \caption{ The change in $B-V$ color caused by a shift in log $g$ from 2 to 4, plotted against $V-K$ color. 
Green lines or symbols indicate [Fe/H] $=-2$ and blue lines or symbols indicate [Fe/H] $=0$. Lines are the present work, and both colors are present, but the lines coincide because there was no crosstalk between [Fe/H] and log $g$ in the color-color fitting process. Symbols are \citet{wor94b}. \label{fig:diffcgbv} } \end{figure} \begin{figure} \plotone{f15.eps} \caption{ The change in $B-V$ color caused by a shift in log $g$ from 2 to 4, plotted against $T_{eff}$. Green lines or symbols indicate [Fe/H] $=-2$ and blue lines or symbols indicate [Fe/H] $=0$. Lines are the present work, small symbols are \citet{wor94b}, and large symbols are the updated \citet{green88} table. \label{fig:difgbv} } \end{figure} Fig. \ref{fig:diffcgbv} and Fig. \ref{fig:difgbv} show the same thing, except for the $X$ axis choice. The $V-K$ color (Fig. \ref{fig:diffcgbv}) was what was fit against, and only calibrations that include both $B-V$ and $V-K$ can be included. Fig. \ref{fig:difgbv} is plotted against $T_{eff}$ and can be compared to more calibrations. In the latter figure, also, the temperature scale differences cause the empirical trends to split. The conclusions from examining these and similar figures for many colors are that the present work (1) broadly resembles other calibrations, (2) tends to show the smallest, mildest gravity dependence, and (3) shows similar gravity dependence even at vastly different metallicity regimes. All three of these conclusions appear to be fairly robust, which should be a rather large concern, since the delta-colors are quite substantial for most calibrations. An alarming example of this, not illustrated, is $U-B$ for stars hotter than the sun, for which the empirical (this work) gravity dependence is essentially zero, but most other calibrations put it at $\Delta (U-B) / \Delta ({\rm log}\ g) \approx 0.15$ mag dex$^{-1}$.
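Since delta-colors of this size must be judged against the total error budget, it is worth making equation (1) concrete with numbers from Tables \ref{tab1} and \ref{tab2}. In the Python sketch below the slope ${\rm d}(B-V)/{\rm d}T$ is a rough, purely illustrative value, not taken from our calibration:

```python
import math

def color_uncertainty(sigma_fit, dcolor_dtemp, teff, sigma_temp_frac):
    """Total color uncertainty from equation (1): the polynomial fit
    error and the temperature term (dI/dT) * sigma_T, added in
    quadrature."""
    sigma_temp = sigma_temp_frac * teff
    return math.sqrt(sigma_fit**2 + (dcolor_dtemp * sigma_temp) ** 2)

# B-V near 6000 K: 0.017 mag fit error (Table 1), 0.5% temperature
# error (Table 2), and an assumed slope of -2e-4 mag per Kelvin.
sigma = color_uncertainty(0.017, -2.0e-4, 6000.0, 0.005)
```

With these numbers the temperature term adds only about a millimagnitude on top of the fit error, so polynomial fitting dominates the budget near solar temperature.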
\subsection{Future Temperature Scale Adjustments} A topic beyond the scope of this paper deserves a comment, and that is attachment of this calibration to existing theoretical stellar evolutionary isochrones for purposes of comparing to star clusters and for purposes of integrated light studies. As a test case, which we intend to publish, multi-band photometry for two open clusters, M67 and NGC 6791, was collected from many sources and assembled into a $UBVRIJHK$ data set. The color-color relations from these data sets agree well, within expected errors, with the color-color fits presented here. \begin{figure} \plotone{cmdVI.eps} \caption{ Color magnitude diagram and isochrones for open cluster M67. The \citet{yi01} isochrone at solar metallicity and age 5 Gyr, with the present color calibration, is shown as ellipses that represent the propagated uncertainties. Distance modulus $(m-M)_V = 9.4$ and reddening $E(V-I) = 0.02$ are assumed. The data are those of \citet{mont93}. \label{fig:m67} } \end{figure} However, the color-magnitude diagrams generated from \citet{yi01} isochrones and this work, when compared to the real clusters, are not so rosy. Figure \ref{fig:m67} shows a color magnitude diagram for open cluster M67, along with ellipses that represent one-sigma errors on our color calibration, and there are drifts between isochrone and data that are substantially more than one sigma. Parenthetically, and with an emphatic lack of surprise, one of the places of mismatch is the late K dwarf region, among the temperatures where the empirical calibrations are competing with the \citet{vc03} semiempirical calibration. In that particular case, it is almost certainly the attachment of the temperatures in our calibration that is causing the wonkiness in the fit to the data. In addition, for the finite set of data and models tried so far, a fit is often satisfactory only in one color.
When $B-V$ and $V$ are fit, for example, $V-K$ and $K$ do not fit for the same age and reddening. Going into the realm of theoretical stellar models introduces another layer of complexity that we are unable to cope with in this paper, but it seems clear that the temperature scale attached to our color-color relations is not, initially, going to mesh easily with existing isochrone sets. We conjecture that the blame will be shared between the temperature scale attached in this work and the temperature scales established in theory via mixing length theory or other convection prescriptions. \section{Summary and Conclusion} Johnson/Cousins photometry was combined with literature [Fe/H] estimates to fit color-color diagrams as a function of gravity and abundance. Literature-average temperature and bolometric correction scales are attached to provide a global color-temperature relation for stars with $-1.06 < V-K < 10.2 $. The $RI$ magnitudes are in the Cousins system, and $JHK$ magnitudes are in the Bessell homogenized system. The complete color-temperature table and a Fortran interpolation program are available at http://astro.wsu.edu/models/. Several areas of improvement were noted in the main body of the paper, including filling photometry gaps, obtaining more accurate and on-system photometry, determining better log $g$ and [Fe/H] values, improving the statistics for data-impoverished groups of stars such as K dwarfs, applying small tweaks in the processing pipeline, and obtaining better empirical temperature and bolometric correction relations, especially for supergiants and M stars. A way to estimate dust extinction from M dwarf colors arises from an inflection that exists in most colors relative to $V-K$. Unlike the classic $UBV$ method, it can be used in old star clusters, but it does not seem to promise much, if any, increase in accuracy for clusters where both methods apply.
The most sensitive band relative to photometric error for the new extinction measure is the $U$ band, but if the $U$ band is employed then clusters must be within a few hundred parsecs for ground-based observatories to be able to measure adequate $U$ fluxes. \acknowledgements Major funding was provided by the National Science Foundation grants AST-0307487, the New Standard Stellar Population Models project, and AST-0346347. The SIMBAD data base, NASA's Astronomical Data Center, and NASA's Astrophysics Data System were indispensable for this project. GW would like to thank the undergraduates who have typed in data pertaining to stars over the more than 14 years this project has stretched: Brent Fisher \citep{wf96} and Joey Wroten at the University of Michigan, and Jared Lohr and Ben Norman at Washington State University.
Bob Burnquist (born October 10, 1976 in São Paulo) is a professional Brazilian skateboarder. His biggest success to date was his win at the 2001 X-Games. He specializes in riding in the switch stance, and his signature trick is the "one-footed smith grind". He is also an active skydiver and snowboarder and runs campaigns promoting healthy eating. He is so far the only skater to have done a switch frontside air on a full loop. He also performed a 50-50 grind into the Grand Canyon and landed by parachute.

Competition results history
1st in 2007 X-Games Big Air
3rd in 2006 X-Games Big Air
1st in 2006 The Coolio Games
1st in 2005 X-Games Vert Best Trick
1st in 2003 X-Games: vert doubles (with Bucky Lasek)
2nd in 2002 X-Games: vert doubles (with Bucky Lasek)
1st in 2001 X-Games: vert
1st in 2001 Slam City Jam: vert
1st in 2000 Slam City Jam: vert
1st in 1995 Slam City Jam: vert

Categories: Brazilian skateboarders, People born in São Paulo, Born in 1976
declare global {
  namespace LHCI {
    export interface LighthouseCiConfig {
      extends?: string;
      assert?: Partial<AssertCommand.Options>;
      collect?: Partial<CollectCommand.Options>;
      upload?: Partial<UploadCommand.Options>;
      server?: Partial<ServerCommand.Options>;
      wizard?: Partial<WizardCommand.Options>;
    }

    export interface LighthouseRc {
      ci?: LighthouseCiConfig;
      lhci?: LighthouseCiConfig;
      'ci:client'?: LighthouseCiConfig;
      'ci:server'?: LighthouseCiConfig;
    }
  }
}

// empty export to keep file a module
export {};
Rahul Gandhi in Uttar Pradesh LIVE updates: Congress V-P to visit Amethi today.
Oct 4, 2017 | Pallavi Kumar
Congress vice-president Rahul Gandhi's three-day visit to Uttar Pradesh begins today. During his Uttar Pradesh visit, Rahul Gandhi is scheduled to hold 'chaupals' with farmers, meetings with locals as well as party workers. Rahul Gandhi will also visit his parliamentary constituency Amethi, for the first time since the Congress Party suffered a major defeat in the Uttar Pradesh state assembly elections around six months ago. The Amethi district administration, on September 30, had requested Rahul Gandhi to change the date of his visit. Stating that police personnel were already employed to maintain law and order during Durga Puja and Muharram, the administration had expressed difficulty in providing security for him. However, the dates of the visit were not changed as the authorities gave a go-ahead on Tuesday. Rahul Gandhi's Uttar Pradesh visit LIVE updates: 10.45 am: The Congress vice-president concluded his three-day roadshow to poll-bound Gujarat last week. During the visit, Gandhi held rallies and interacted with people. His visit to temples in the state became a talking point. UP CM Yogi Adityanath took a dig at Gandhi's temple visits, saying, "His (Gandhi's) visits to temples will make no difference. He is a bad omen for the Congress." 10.15 am: Rahul Gandhi is soon expected to assume the position of the party president, replacing his mother, Sonia Gandhi. Elections for the party president's post are scheduled to take place by the end of this month, where Rahul is expected to be elected to the post. According to reports, the Central Election Authority of Congress has decided to begin the process for the election by October 10.
9.40 am: On October 5, the Congress leader will interact with the public at the Munshiganj guesthouse and later meet party workers at Rajiv Gandhi Degree College in Amethi. He will also visit several areas in Salon Vidhan Sabha constituency, after which he will travel to Rae Bareli. There, he will stay the night at the Bhuvemau guesthouse. 9.15 am: Later on, the district administration said that their "intention was misunderstood". Yogesh Kumar, District Magistrate, Amethi, told The Indian Express, "We have been informed about his (Rahul's) three-day visit and will provide him security. First of all, security was never denied to him. The district administration can never deny security to a Member of Parliament. We had just requested him to shift his visit because of ongoing festive season and deployment of police force elsewhere. Police force has also come from adjoining districts for Durga visarjan and other arrangements." 9.00 am: In response to the letter, senior UP Congress leader Akhilesh Singh accused the Yogi Adityanath-led BJP government in the state of "using tactics" to stop Gandhi from visiting his Lok Sabha constituency. "The Uttar Pradesh government is worried and does not want Rahul to visit Amethi, fearing that he might raise issues directly related to the public. This scares the BJP," he said. Singh said the Yogi government was perhaps worried that Gandhi's schedule might "eclipse" the proposed visit by BJP chief Amit Shah and Union ministers Smriti Irani and Nitin Gadkari to Amethi on October 10. 8.45 am: Ahead of Rahul Gandhi's visit, the Amethi district authorities requested him to change his tour dates. "In order to maintain law-and-order, a majority of the district police force will be on duty. Hence, there will be great inconvenience in maintaining peace. Therefore, it is requested that the tour be re-scheduled on any date after October 5," a letter written by the Amethi administration to the district Congress chief said. 
The letter was signed by District Magistrate Yogesh Kumar and Superintendent of Police Poonam. 8.30 am: Informing about Gandhi's schedule, his representative Chandrakant Dubey said, "On October 4, he will hold chaupal in Kathora village of Jagdishpur area in Amethi and will also visit different villages of his constituency." Gandhi is expected to listen to farmers' grievances at the 'chaupal'. Kathora is the same village for which the Congress vice-president had visited the office of the National Highway Authority of India in Lucknow, two months ago. Farmers of the village, who also accompanied him, alleged discrepancies in compensation given to them in exchange for the land being acquired for a road project.
\section{Disentangling Precision and Recall} \label{app:dis_score} Let $l \in \{1, \dots, N_\alpha\}$ be a fixed latent dimension. In phase $1$, we perform $D \in \mathbb{N}$ interventions $\alpha_l = I_d$ on the latent dimension $l$, where $d = 1, \dots, D$. Interventions are chosen from the set of equidistant points on the interval $[-a, a]$ such that $I_d = -a + 2a \cdot \frac{d-1}{D - 1}$. The value $a$ is chosen such that $[-a, a]$ is in the support of the prior distribution $p(\alpha)$. Since $p(\alpha)$ is a standard normal distribution in the case of VAEs and a uniform distribution on the interval $[-1, 1]$ in the case of GANs, we set $a$ to be $1.5$ and $1$ in the case of the VAEs and GANs, respectively. Each intervention $d = 1, \dots, D$ is performed on $n$ samples from the prior distribution $p(\alpha)$ and yields a set of $n$ end states denoted by $\boldsymbol{S}_g^{l-I_d}$. For each intervention we additionally randomly sample a set $\boldsymbol{S}_r$ of $n$ end states corresponding to the training motor data. Note that elements of both $\boldsymbol{S}_g^{l-I_d}$ and $\boldsymbol{S}_r$ are $N_s$-dimensional, where $N_s$ is the dimension of the end state space; the end states are obtained by executing the generated trajectories on the robotic platform. In phase $2$, we calculate the $\MMD(\proj_j \boldsymbol{S}_g^{l-I_d}, \proj_j \boldsymbol{S}_r)$ for a fixed intervention $d = 1, \dots, D$ and every end state component $j = 1, \dots, N_s$. We first determine if the difference between the sets $\proj_j \boldsymbol{S}_g^{l-I_d}$ and $\proj_j \boldsymbol{S}_r$ is large enough to reject the null hypothesis that samples from $\proj_j \boldsymbol{S}_g^{l-I_d}$ and $\proj_j \boldsymbol{S}_r$ are drawn from the same distribution.
We achieve this by performing a permutation test where we pool all the samples from $\proj_j \boldsymbol{S}_g^{l-I_d}$ and $\proj_j \boldsymbol{S}_r$, randomly divide the pooled set into two sets of $n$ elements and calculate the $\MMD$ between the obtained sets. The random split into two sets is performed $100$ times such that we obtain a distribution over the resulting $\MMD$ values. For a predetermined significance level $\eta$, we define the critical value $c_\eta$ to be $(1 - \eta)$-quantile of the obtained distribution over $\MMD$ values. We then say that the intervention $I_d$ was significant for an end state component $j$ if the observed $\MMD(\proj_j \boldsymbol{S}_g^{l-I_d}, \proj_j \boldsymbol{S}_r) > c_\eta$. The calculations in phase $2$ were repeated $p$ times with a resampled set of $n$ training end states $\boldsymbol{S}_r$. In all our experiments we set $p = 10$ and $\eta = 0.001$. Therefore, phases $1$ and $2$ yield functions $c_g: \{1, \dots, N_\alpha\} \longrightarrow \{1, \dots, N_s\}$ and $d_g: \{1, \dots, N_\alpha\} \longrightarrow \mathbb{R}$ defined by: \begin{align*} c_g(l) = \argmax_{j = 1, \dots, N_s} \overline{\MMD}\left(\proj_j \boldsymbol{S}_g^{l-I_d}, \proj_j \boldsymbol{S}_r\right) \quad \textrm{and} \quad d_g(l) = \overline{\MMD}\left(\proj_{c_g(l)} \boldsymbol{S}_g^{l-I_d}, \proj_{c_g(l)} \boldsymbol{S}_r\right) \end{align*} where $\overline{\MMD}$ denotes the average $\MMD$ score calculated on a subset of $p \cdot D$ performed interventions that were significant. For a given latent dimension $l \in \{1, \dots, N_\alpha\}$, $c_g(l)$ represents the dimension of the end state space $\mathbb{R}^{N_s}$ that was most affected by the latent interventions. This is because a high $\MMD$ value indicates a low similarity between $\proj_j \boldsymbol{S}_g^{l-I_d}$ and $\proj_j \boldsymbol{S}_r$, and thus a high effect of the intervention. 
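The phase-2 test is straightforward to sketch. Below is an illustrative Python version using a Gaussian (RBF) kernel estimate of the squared $\MMD$; the kernel choice, bandwidth, and function names are our own assumptions here, not necessarily those of our experiments:

```python
import numpy as np

def mmd2_rbf(x, y, bandwidth=1.0):
    """Biased estimate of the squared MMD between 1-D samples x and y,
    using a Gaussian kernel with the given bandwidth."""
    def gram(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * bandwidth**2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

def intervention_significant(s_g, s_r, n_splits=100, eta=0.001, seed=0):
    """Permutation test of phase 2: pool the two end-state samples,
    split the pool randomly n_splits times, and compare the observed
    MMD against the (1 - eta)-quantile c_eta of the permuted MMDs."""
    rng = np.random.default_rng(seed)
    observed = mmd2_rbf(s_g, s_r)
    pooled = np.concatenate([s_g, s_r])
    n = len(s_g)
    null = np.empty(n_splits)
    for i in range(n_splits):
        perm = rng.permutation(pooled)
        null[i] = mmd2_rbf(perm[:n], perm[n:])
    c_eta = np.quantile(null, 1.0 - eta)
    return observed > c_eta
```

A strongly shifted sample of end states is flagged as significant, while the $\MMD$ of a sample with itself is zero by construction.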
Moreover, $d_g(l)$ is the average $\MMD$ value obtained on the most affected end state space dimension identified by $c_g(l)$. In phase $3$ we define the final disentanglement score for the generative model $g$ using the functions $c_g$ and $d_g$. Let $\mathcal{P}$ be a subset of $\{ d_g(l): l = 1, \dots, N_\alpha\}$ containing its three largest elements, i.e., the three largest $\overline{\MMD}$ values obtained in phase 2, and let $\mathcal{R} = \{c_g(l): d_g(l) \in \mathcal{P}\}$ be the set of the corresponding end state components. We define the \textit{Disentangling Precision and Recall} $\Dis$ as a pair \begin{align} \Dis(g) = (\Dip(g), \Dir(g)) := \left(\sum_{d \in \mathcal{P}} d, \frac{|\mathcal{R}^{\neq}|}{N_s} \right) \end{align} where $\mathcal{R}^{\neq}$ denotes the subset of unique elements of the set $\mathcal{R}$. The \textit{disentangling recall} $\Dir$ is the fraction of end state dimensions described by the three most significant latent dimensions, i.e., by the three latent dimensions on which interventions yielded the largest changes in the end state space. The larger the $\Dir$ value, the more end state space dimensions are captured in the latent space, and thus the latent disentanglement has a higher recall. Similarly, the \textit{disentangling precision} $\Dip$ is the total effect that the latent interventions on the three most significant latent dimensions have on the affected end state dimensions. The larger the $\Dip$ value, the stronger the effect of the latent interventions, and thus the more precise the latent disentanglement. \section{Generative models} \label{app:gen_models} \subsection{Variational Autoencoder} The architecture of the decoder neural network is visualised in Table \ref{tab:gen_arc}. The encoder neural network is symmetric to the decoder with two output linear layers of size $N_\alpha$ representing the mean and the log standard deviation of the approximate posterior distribution.
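For concreteness, the decoder/generator stack from Table \ref{tab:gen_arc} can be sketched as follows. This is a minimal numpy forward pass with randomly initialized weights and training-mode batch normalization, intended only to illustrate the layer sizes; the actual models are trained networks with learned batch-norm parameters.

```python
import numpy as np

def batchnorm(x, eps=1e-5):
    # Training-mode batch normalization (learned scale/shift omitted for brevity).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def relu(x):
    return np.maximum(x, 0.0)

def init_layer(rng, n_in, n_out):
    # He-style random initialization; weights are illustrative, not trained.
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in), np.zeros(n_out)

def generator_forward(alpha, params):
    # Linear + BatchNorm + ReLU stack; the final linear layer outputs 7 * 79 values.
    h = alpha
    for W, b in params[:-1]:
        h = relu(batchnorm(h @ W + b))
    W, b = params[-1]
    return h @ W + b

rng = np.random.default_rng(0)
N_alpha = 3
sizes = [N_alpha, 128, 256, 512, 7 * 79]
params = [init_layer(rng, sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]

alpha = rng.standard_normal((16, N_alpha))  # a batch of latent action samples
tau = generator_forward(alpha, params)      # batch of flattened motor trajectories
print(tau.shape)                            # (16, 553)
```

Each output row is then reshaped into a $79 \times 7$ trajectory of per-motor velocity commands.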
All the models were trained for $10000$ epochs with the learning rate fixed to $10^{-4}$. \subsection{InfoGAN} The architectures of the generator, discriminator and Q neural networks parametrizing $Q_\phi(\alpha|\tau)$ are summarised in Tables \ref{tab:gen_arc} and \ref{tab:dis_arc}. All the models were trained for $1000$ epochs with the learning rates of the optimizers for the generator and discriminator networks fixed to $2 \cdot 10^{-4}$. \begin{table}[!htb] \begin{minipage}{.4\linewidth} \caption{Architecture of the generator neural network.} \label{tab:gen_arc} \centering \begin{tabular}{l} \hline \hline Linear($N_\alpha$, $128$) + BatchNorm + ReLU \\ \hline Linear($128$, $256$) + BatchNorm + ReLU \\ \hline Linear($256$, $512$) + BatchNorm + ReLU \\ \hline Linear($512$, $7 \cdot 79$) \end{tabular} \end{minipage}% \hfill \begin{minipage}{.4\linewidth} \centering \caption{Architecture of the discriminator and Qnet neural networks.} \label{tab:dis_arc} \begin{tabular}{l|l} \hline \hline \multirow{2}{*}{Shared layers} & Linear($7 \cdot 79$, $256$) + ReLU \\ \cline{2-2} & Linear($256$, $128$) + ReLU \\ \hline discriminator & Linear($128$, $1$) + Sigmoid \\ \hline \multirow{2}{*}{Qnet} & Linear($128$, $64$) \\ \cline{2-2} & Linear($64$, $N_\alpha$) \end{tabular} \end{minipage} \end{table} \end{appendices} \section{Preliminaries} \label{sec:preliminaries} \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figures/policy_network_img_cropped.pdf} \caption{The architecture of the deep action-selection policy $\pi_\Theta$ based on latent-variable generative models. The policy consists of two models, the sub-policy $\pi_\theta(\alpha|s')$ that assigns a distribution over the action latent variable $\alpha$ conditioned on a given state $s'$, and a generative model $\tau' = g_\vartheta(\alpha')$ that maps a latent action sample $\alpha' \sim \pi_\theta(\alpha|s')$ into a trajectory of motor actions $\tau'$.
Given the initial state $s'$ and the action trajectory $\tau'$ that is executed on the robot, the environment returns a terminal reward $r'$ according to the reward probability $r' \sim p(r|s',\tau')$.}\label{fig:deep_policy_network} \end{figure} We consider a finite-horizon Markov decision process defined by a tuple $(\mathbf{S}, \mathbf{U}, P, p(s), p(r|s,\tau))$ where $\mathbf{S} = \mathbb{R}^{N_s}$ is a set of $N_s$-dimensional states, $\mathbf{U}$ is a set of motor actions $u_{m}$ for the robot motor $m$, $P: \mathbf{S} \times \mathbf{U} \times \mathbf{S} \rightarrow \mathbb{R}$ is the state transition probability, and $p(s)$ is the initial state distribution. The probability $p(r|s',\tau')$ of the reward $r$, conditioned on a state $s'$ and a fixed-length sequence $\tau' = (u_{t, m})$ of actions $u_{t, m}$ of motor $m$ at time step $t$, is unknown to the learning agent. We wish to find a policy $\pi_\Theta(\tau|s')$ based on which a sequence of motor actions $\tau'$ can be sampled given a state of the environment $s'$. The state contains information about the current configuration of the environment as well as the goal to reach in case a goal-conditioned policy has to be obtained. Let $r^*$ be the minimum reward required to successfully complete the task. The policy $\pi_\Theta$ is represented by a neural network with parameters $\Theta$ that are trained to maximize the probability of receiving high rewards $p(r \ge r^*|s', \tau')$ \begin{equation} \begin{split} \Theta^* & = \argmax_{\Theta} \iint p(s)\, \pi_{\Theta}(\tau | s)\, p(r|s,\tau)\, ds \, d\tau \\ & = \argmax_{\Theta} \mathbb{E}_{s' \sim p(s), \tau' \sim \pi_{\Theta}(\tau|s')} [\, p(r | s', \tau')\,], \label{eq:margin_tau} \end{split} \end{equation} where we omitted the reward threshold $r^*$ for simplicity.
Our approach is based on training a generative model $g_\vartheta$, parametrized by $\vartheta$, that maps a low-dimensional latent action sample $\alpha' \in \mathbb{R}^{N_\alpha}$ into a motion trajectory $\tau' \in \mathbb{R}^{T\times M}$, where $N_\alpha \ll T\times M$. In other words, we assume that the search space is limited to the trajectories spanned by the generative model. In this case, the feed-forward policy search problem splits into two sub-problems: (i) finding the mapping $g_\vartheta(\alpha')$ and (ii) finding the sub-policy $\pi_\theta (\alpha | s')$, where $\Theta = [\theta, \vartheta]$. Instead of marginalizing over trajectories as in~\eqref{eq:margin_tau}, we marginalize over the latent variable by exploiting the generative model \begin{equation} \theta^* = \argmax_{\theta} \mathbb{E}_{s' \sim p(s), \alpha' \sim \pi_{\theta}(\alpha|s')} [\, p(r|s', g_\vartheta(\alpha')\,)\,]. \label{eq:margin_alpha} \end{equation} An overview of our approach is shown in Figure~\ref{fig:deep_policy_network}. Once the models are trained, the output of the policy $\pi_\Theta$ is found by first sampling from the sub-policy $\alpha' \sim \pi_\theta(\alpha | s')$ given a state $s'$ and then using the mapping $g_\vartheta$ to get the sequence of motor actions $\tau' = g_\vartheta(\alpha')$. The state $s'$ and the generated trajectory $\tau'$ are then given to the environment, which outputs a reward $r'$. In the rest of the text we refer to the sub-policy as the \textit{policy} and omit the parameters $\vartheta$ from the notation $g_\vartheta$ when they are not needed. By abuse of notation, we will drop the explicit distinction between a random variable, e.g., $\alpha$, and a concrete instance of it, $\alpha'$, in the rest of the paper and only write $\alpha$ when no confusion can arise.
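The two-stage action selection can be sketched as follows. In this minimal numpy sketch, the state-conditioned Gaussian sub-policy and the linear generator are illustrative stand-ins for the trained networks $\pi_\theta$ and $g_\vartheta$; the dimensions are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_s, N_alpha, T, M = 6, 3, 79, 7  # illustrative dimensions

def sub_policy(s):
    # Stand-in for pi_theta(alpha | s): a state-conditioned Gaussian
    # over the low-dimensional latent action.
    mean = np.tanh(s[:N_alpha])
    return mean + 0.1 * rng.standard_normal(N_alpha)

G = rng.standard_normal((N_alpha, T * M))  # stand-in generator weights

def generative_model(alpha):
    # Stand-in for g_vartheta(alpha): maps a latent action
    # to a (T, M) trajectory of motor actions.
    return (alpha @ G).reshape(T, M)

# Policy output pi_Theta(tau | s): sample a latent action, then decode it.
s = rng.standard_normal(N_s)      # current state of the environment
alpha = sub_policy(s)             # alpha ~ pi_theta(alpha | s)
tau = generative_model(alpha)     # tau = g(alpha), executed on the robot
print(tau.shape)                  # (79, 7)
```

The environment then consumes $(s, \tau)$ and returns the terminal reward.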
In the following section, we introduce the expectation-maximization (EM) algorithm for training an action-selection policy using a generative model based on which different motor trajectories suitable for solving a given task can be generated. \section{Conclusion} \label{sec:conclusion} We presented an RL framework that, combined with generative models, trains deep visuomotor policies in a data-efficient manner. The generative models are integrated with the RL optimization by introducing a latent variable $\alpha$ that is a low-dimensional representation of motor actions. Using the latent action variable $\alpha$, we divided the optimization of the parameters $\Theta$ of the deep visuomotor policy $\pi_\Theta(\tau|s)$ into two parts: optimizing the parameters $\vartheta$ of a generative model $p_\vartheta(\tau|\alpha)$ that generates valid sequences of motor actions, and optimizing the parameters $\theta$ of a sub-policy $\pi_\theta(\alpha | s)$, where $\Theta = [\theta, \vartheta]$. The sub-policy parameters $\theta$ are found using the EM algorithm, while the generative model parameters $\vartheta$ are trained in an unsupervised manner to optimize the objective corresponding to the chosen generative model. In summary, the complete framework consists of three data-efficient downstream tasks: (a) training the generative model $p_\vartheta$, (b) training the sub-policy $\pi_\theta$, and (c) supervised end-to-end training of the deep visuomotor policy $\pi_\Theta$. Moreover, we provided a set of measures for evaluating the quality of the generative models regulated by the RL policy search algorithms such that we can predict the performance of training the deep policy $\pi_\Theta$ prior to the actual training.
In particular, we defined two new measures, disentangling precision and recall ($\Dip$ and $\Dir$) and latent local linearity (L3), that evaluate the quality of the latent space of the generative model $p_\vartheta$, and complemented them with the precision and recall measure \cite{kynkaanniemi2019improved}, which evaluates the quality of the generated samples. We experimentally demonstrated the predictive power of these measures on a picking task using a set of different VAE and GAN generative models. Regardless of the model type, we observe recall to be the most influential property, followed by precision in the case of VAEs and by disentangling recall in the case of GANs. \section{Experiments} \label{sec:experiments} In this section, we experimentally determine the characteristics of generative models that contribute to a more data-efficient policy training for a picking task performed on a real robotic platform. We first trained several $\beta$-VAE and InfoGAN models with various hyperparameters, and evaluated them by measuring the disentanglement, local linearity as well as precision and recall introduced in Section~\ref{sec:eval_generative_model}. Using these models and the proposed EM algorithm presented in Section~\ref{sec:em_policy_training}, we then trained several RL policies and investigated the relation between the properties of the generative models and the performance of the policy. Note that the experimental section of this work focuses solely on the evaluation of the generative models. Readers interested in the investigation of the data-efficiency of the proposed approach in training complex visuomotor skills are referred to \cite{ghadirzadeh2017deep}. Moreover, we emphasize that it is not meaningful to directly compare our approach to either PPO or GPS. Training a policy using PPO requires vast amounts of data, while GPS requires a reward at every time step instead of the terminal reward as in our case.
\subsection{Experimental setup} \label{sec:exp:setup} We applied our framework to a picking task in which a 7-degree-of-freedom robotic arm (ABB YuMi) must move its end-effector to different positions and orientations on a tabletop to pick a randomly placed object. In this case, raw image pixel values are given as the input to the visuomotor policy. This task is a suitable benchmark to answer our research questions. First of all, the task requires feed-forward control of the arm over 79 time-steps to precisely reach a target position without any position feedback during the execution. Therefore, precision is an important factor for this problem setup. Secondly, reaching a wide range of positions and orientations on the tabletop requires the generative model $g$ to generate all possible combinations of motor commands that bring the end-effector to every possible target position and orientation. This means that $g$ needs to have a high recall. Thirdly, it is straightforward to evaluate the disentanglement of the latent representations as well as the local linearity of the dynamical system that is formed by $g$ and the robot kinematic model. Finally, this is a suitable task for end-to-end training, especially by exploiting the adversarial domain adaptation technique in \cite{chen2019adversarial} to obtain generality for the policy training task. Note that the applicability of our framework (up to minor differences) to a wide range of robotic problems has already been addressed by our prior work. In particular, we successfully evaluated it in several robotic task domains, e.g., ball throwing to visual targets \cite{ghadirzadeh2017deep}, shooting hockey pucks using a hockey stick \cite{arndt2019meta}, pouring into different mugs \cite{hamalainen2019affordance}, picking objects \cite{chen2019adversarial}, and imitating human greeting gestures \cite{butepage2019imitating}.
We constructed a dataset containing sequences of motor actions using MoveIt planners \cite{coleman2014reducing}. We collected $15750$ joint velocity trajectories to move the end-effector of the robot from a home position to different positions and orientations on a tabletop. The trajectories were sampled at $10$Hz and trimmed to 79 time-steps ($7.8$ seconds duration), with zeros appended to joint velocity trajectories shorter than 79 time-steps. The target positions together with the orientations, represented as Euler angles, form an $N_s = 6$ dimensional end state space, and were sampled uniformly to cover an area of $750$ cm$^2 \times 3.1$ rad. \subsection{Generative model training} \label{sec:exp:generative_model} The generative models are represented by neural networks that map a low-dimensional action latent variable $\alpha$ into a $7\times79$ dimensional vector representing $7$ motor actions and $79$ time-steps. In total, we trained $9$ $\beta$-VAE models and $9$ InfoGAN models with latent dimension chosen from $N_\alpha \in \{2, 3, 6\}$. We refer the reader to Appendix \ref{app:gen_models} for the exact architecture of the models as well as all the training details. The prior distribution $p(\alpha)$ is the standard normal distribution $\text{N}(0, 1)$ in the case of $\beta$-VAEs, and the uniform distribution $\text{U}(-1, 1)$ in the case of InfoGANs. Table~\ref{tabel:exp:vae} summarizes the parameters of the $\beta$-VAE models together with the values of both the KL divergence and reconstruction term (right and left term in~\eqref{eq:vae}, respectively) obtained at the last epoch. At the beginning of training, we set $\beta = 0$ and gradually increase its value until the value of the KL divergence drops below a predetermined threshold set to $1.5, 2.5$ and $3.5$. The resulting $\beta$ value is reported in Table \ref{tabel:exp:vae} and kept fixed until the end of the training.
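This $\beta$ warm-up can be sketched as follows. The increment size and the per-epoch KL trace below are illustrative assumptions; the text only specifies that $\beta$ starts at zero and grows until the KL term falls below the threshold, after which it is kept fixed.

```python
def beta_schedule(kl_per_epoch, kl_threshold, beta_step=1e-4):
    """Grow beta from 0 until the observed KL drops below the threshold,
    then keep it fixed for the remaining epochs (step size is illustrative)."""
    beta, frozen, betas = 0.0, False, []
    for kl in kl_per_epoch:
        betas.append(beta)
        if not frozen and kl <= kl_threshold:
            frozen = True        # keep the current beta until the end
        if not frozen:
            beta += beta_step
    return betas

# Toy KL trace: the KL term shrinks as beta grows and crosses the threshold.
kls = [5.0 - 0.5 * e for e in range(10)]
betas = beta_schedule(kls, kl_threshold=2.5)
print(betas)
```

In practice the KL value would be measured on the validation set at the end of each epoch.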
Table \ref{tabel:exp:gan} summarizes the training parameters and loss function values of the InfoGAN models. We report the total model loss~\eqref{eq:gan} (M loss), the generator loss (G loss, middle term in~\eqref{eq:gan}) and the mutual information loss (I loss, right term in~\eqref{eq:gan}) obtained at the last epoch. The hyperparameter $\lambda$ (right term in~\eqref{eq:gan}) was chosen from $\lambda \in \{0.1, 1.5, 3.5\}$. \begin{table}[H] \centering \caption{Training parameters of the VAE models together with the values of the loss function \eqref{eq:vae} obtained at the last epoch.} \begin{tabular}{c|c|c|c|c} \textbf{index} & \textbf{latent size} $N_\alpha$ & $\beta$ & \textbf{KL loss} & \textbf{reconstruction loss} \\ \hline VAE1 & 2 & $\scnum{1.6e-02}$ & $1.5$ & $\scnum{2.1e-02}$ \\ VAE2 & 2 & $\scnum{6.4e-03}$ & $2.5$ & $\scnum{1.1e-02}$ \\ VAE3 & 2 & $\scnum{3.2e-03}$ & $3.4$ & $\scnum{6.7e-03}$ \\ VAE4 & 3 & $\scnum{1.6e-02}$ & $1.5$ & $\scnum{2.1e-02}$ \\ VAE5 & 3 & $\scnum{7.2e-03}$ & $2.4$ & $\scnum{1.1e-02}$ \\ VAE6 & 3 & $\scnum{3.2e-03}$ & $3.5$ & $\scnum{5.5e-03}$ \\ VAE7 & 6 & $\scnum{1.6e-02}$ & $1.5$ & $\scnum{2.1e-02}$ \\ VAE8 & 6 & $\scnum{7.2e-03}$ & $2.4$ & $\scnum{1.1e-02}$ \\ VAE9 & 6 & $\scnum{3.6e-03}$ & $3.3$ & $\scnum{6.0e-03}$ \\ \end{tabular} \label{tabel:exp:vae} \end{table} \begin{table}[H] \centering \caption{Training parameters of the InfoGAN models together with the values of the loss function \eqref{eq:gan} obtained at the last epoch.
We report the generator loss (G loss), the information loss (I loss) and the total model loss (M loss).} \begin{tabular}{c|c|c|c|c|c} \textbf{Model} & \textbf{latent size} $N_\alpha$ & $\lambda$ & \textbf{G loss} & \textbf{I loss} & \textbf{M loss} \\ \hline GAN1 & 2 & $0.1$ & $2.81$ & $0.10$ & $0.54$ \\ GAN2 & 2 & $1.5$ & $2.28$ & $0.59$ & $0.78$ \\ GAN3 & 2 & $3.5$ & $2.43$ & $1.07$ & $0.63$ \\ GAN4 & 3 & $0.1$ & $2.52$ & $0.16$ & $0.61$ \\ GAN5 & 3 & $1.5$ & $2.27$ & $1.45$ & $0.75$ \\ GAN6 & 3 & $3.5$ & $2.23$ & $3.16$ & $0.68$ \\ GAN7 & 6 & $0.1$ & $2.50$ & $0.44$ & $0.63$ \\ GAN8 & 6 & $1.5$ & $2.23$ & $4.78$ & $0.73$ \\ GAN9 & 6 & $3.5$ & $2.27$ & $10.32$ & $0.67$\\ \end{tabular} \label{tabel:exp:gan} \end{table} \subsection{Evaluating the generative models} \label{sec:exp:eval_gen_models} We evaluated all the generative models using the disentanglement, local linearity, precision and recall measures introduced in Section \ref{sec:eval_generative_model} as well as using the training parameters described in Tables~\ref{tabel:exp:vae} and~\ref{tabel:exp:gan}. We first studied the correlation between each individual evaluation measure and the performance of the policy training, and then the combined effect of all of the measures together. Our analysis empirically shows that any generative model having high recall improves the data-efficiency of the policy training task. Moreover, we observe that $\Dir$ is especially relevant in the case of GANs, and precision in the case of VAEs, while we do not find local linearity essential for successful policy training. The details of our evaluation are presented and discussed below. \textbf{EM policy training:} \label{sec:exp:em_training} For each generative model, introduced in Tables~\ref{tabel:exp:vae} and~\ref{tabel:exp:gan}, we trained a policy with three different random seeds. The obtained average training performances together with the standard deviations are shown in Figure~\ref{fig:exp:policy_training_performance}.
Using these results, we labeled each generative model with the maximum reward achieved during EM policy training across all three random seeds. As can be seen from Figure~\ref{fig:exp:policy_training_performance}, the best performance is achieved with VAE6 and VAE9, which have latent size $N_\alpha$ equal to 3 and 6, respectively, and low $\beta$ values. \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{figures/policy_training_results.pdf} \caption{A visualization of the performance of the policy training. We report the average reward together with the standard deviation ($y$-axis) obtained during EM policy training ($x$-axis) across three different random seeds.} \label{fig:exp:policy_training_performance} \end{figure} We investigated the correlation between the characteristics of the generative models and the performance of the policy training in two different ways: (1) we calculated the Pearson's correlation between each \textit{individual} model characteristic and the corresponding model label introduced above, and (2) we studied the \textit{combined} effect that all of the properties have on the policy training using automatic relevance determination (ARD). We fit separate ARD models for VAEs and GANs using the evaluation results discussed in the following and the training parameters from Tables \ref{tabel:exp:vae} and \ref{tabel:exp:gan} as inputs, and the policy performance as target values. Since ARD is sensitive to outliers, we additionally preprocessed the input values to the ARD using robust scaling with the median and interquartile range. The results including Pearson's correlation coefficients (the higher, the better $\uparrow$) together with the corresponding p-values (the lower, the better $\downarrow$) as well as the ARD estimated precision of weights (the lower, the better $\downarrow$) are shown in Tables~\ref{tab:vae_pear_coef} and~\ref{tab:gan_pear_coef}.
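The robust scaling and the individual-correlation part of this analysis can be sketched as follows. This is a numpy-only sketch on toy data; the ARD regression itself is omitted, and the variable names and toy values are illustrative rather than the measured model properties.

```python
import numpy as np

def robust_scale(x):
    # Scale with the median and interquartile range, as used before fitting ARD.
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return (x - med) / (q3 - q1)

def pearson_r(x, y):
    # Pearson's correlation coefficient between a model property and its label.
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(0)
recall = rng.uniform(0.2, 1.0, size=9)                  # toy per-model recall scores
reward = 2.0 * recall + 0.05 * rng.standard_normal(9)   # toy policy-performance labels

r_raw = pearson_r(recall, reward)
r_scaled = pearson_r(robust_scale(recall), reward)
print(round(r_raw, 3), round(r_scaled, 3))
```

Note that Pearson's correlation is invariant under the affine robust scaling, so the scaling only matters for the ARD fit, not for the individual correlations.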
We first present the evaluation results and Pearson's correlations of each of the measures separately, and then discuss the differences with the ARD results. \begin{table}[h] \centering \caption{Results for the correlation analysis of the VAE models to the policy performance. We report (1) Pearson correlation coefficient R (the higher, the better $\uparrow$) and (2) ARD estimated precision of the weights (the lower, the better $\downarrow$) between the evaluation metrics from Section \ref{sec:eval_generative_model} together with the training parameters from Table~\ref{tabel:exp:vae}, and the policy performance for VAE models visualised in Figure~\ref{fig:exp:policy_training_performance}.} \vspace{0.2cm} \label{tab:vae_pear_coef} \begin{tabular}{r|c|c|c|c|c|c|c} & \textbf{DiP} & \textbf{DiR} & \textbf{L3} & \textbf{Precision} & \textbf{Recall} & $\boldsymbol{N_\alpha}$ & $\boldsymbol{\beta}$ \\ \hline Pearson's R & $0.600$ & \cellcolor[gray]{0.94}{$0.668$} & $-0.100$ & \cellcolor[gray]{0.82}{$0.776$} & \cellcolor[gray]{0.6}$0.969$ & $0.317$ & \cellcolor[gray]{0.80}{$-0.791$} \\ p-value & $0.088$ & \cellcolor[gray]{0.94}$0.049$ & $0.797$ & \cellcolor[gray]{0.82}$0.014$ & \cellcolor[gray]{0.6}$0.000$ & $0.406$ & \cellcolor[gray]{0.8}$0.011$ \\ \hline ARD & $\scnum{8.07e+04}$ & $\scnum{1.81e+04}$ & \cellcolor[gray]{0.8}$\scnum{1.09e+04}$ & \cellcolor[gray]{0.7}$\scnum{4.45e+03}$ & \cellcolor[gray]{0.60}$\scnum{1.93e+01}$ & \cellcolor[gray]{0.9}$\scnum{1.70e+04}$ & $\scnum{1.90e+04}$ \end{tabular} \end{table} \begin{table}[h] \centering \caption{Results for the correlation analysis of the GAN models to the policy performance. 
We report (1) Pearson correlation coefficient R (the higher, the better $\uparrow$) and (2) ARD estimated precision of the weights (the lower, the better $\downarrow$) between the evaluation metrics from Section \ref{sec:eval_generative_model} together with the training parameters from Table~\ref{tabel:exp:gan}, and the policy performance for GAN models visualised in Figure~\ref{fig:exp:policy_training_performance}.} \vspace{0.2cm} \label{tab:gan_pear_coef} \begin{tabular}{r|c|c|c|c|c|c|c} & \textbf{DiP} & \textbf{DiR} & \textbf{L3} & \textbf{Precision} & \textbf{Recall} & $\boldsymbol{N_\alpha}$ & $\boldsymbol{\lambda}$ \\ \hline Pearson's R & $0.395$ & \cellcolor[gray]{0.82}$0.781$ & $-0.639$ & \cellcolor[gray]{0.80}$-0.801$ & \cellcolor[gray]{0.65}$0.948$ & \cellcolor[gray]{0.78}$0.823$ & $0.368$ \\ p-value & $0.293$ & \cellcolor[gray]{0.82}$0.013$ & $0.064$ & \cellcolor[gray]{0.80}$0.009$ & \cellcolor[gray]{0.65}$0.000$ & \cellcolor[gray]{0.78}$0.006$ & $0.330$ \\ \hline ARD & \cellcolor[gray]{0.80}$\scnum{2.37e+02}$ & \cellcolor[gray]{0.60}$\scnum{5.71e+01}$ & $\scnum{3.20e+02}$ & $\scnum{7.74e+02}$ & \cellcolor[gray]{0.70}$\scnum{1.15e+02}$ & \cellcolor[gray]{0.85}$\scnum{2.67e+02}$ & $\scnum{5.50e+04}$ \\ \multicolumn{6}{c}{} \\ & \textbf{G loss} & \textbf{I loss}& \multicolumn{1}{c}{\textbf{M loss}} \\ \cline{1-4} Pearson's R & $-0.672$ & $0.698$ & \multicolumn{1}{c}{$0.381$} \\ p-value & $0.048$ & $0.036$ & \multicolumn{1}{c}{$0.312$} \\ \cline{1-4} ARD & $\scnum{4.66e+03}$ & $\scnum{1.19e+05}$ & \multicolumn{1}{c}{$\scnum{3.56e+02}$} \end{tabular} \end{table} \begin{comment} \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{figures/AggIPR_nhood3_s5_n_samples15000.png} \caption{Aggregated precision and recall scores.}\label{fig:ipr_agg} \end{figure} \end{comment} \textbf{Disentanglement:} We measured the disentangling precision and recall using $\MMD$ with kernel parameter $\gamma = 15$.
For a given model $g$, we performed $D = 5$ interventions on every latent dimension $l \in \{1, \dots, N_\alpha\}$. We restricted the dimension of the end state space to $N_s = 3$, including only the position of the object and the picking angle, since these are the most relevant factors for performing the picking task. The size of the sets $\boldsymbol{S}_g^{l-I_d}$ and $\boldsymbol{S}_r$ containing the end states was set to $n = 200$. For complete evaluation details we refer the reader to Appendix \ref{app:dis_score}. The obtained disentanglement scores are visualised in Figure \ref{fig:dis_all}. We observe that models with higher latent space dimension $N_\alpha$ (Figure \ref{fig:dis_all}, right) on average achieve higher disentangling recall $\Dir$. In Tables~\ref{tab:vae_pear_coef} and~\ref{tab:gan_pear_coef} we see that $\Dir$ is positively correlated with the performance of the policy, with a stronger correlation in the case of GANs, while the positive correlation of $\Dip$ is not significant (p-value $> 0.05$). Therefore, when choosing a generative model based only on disentanglement, it is beneficial that the interventions on the latent dimensions affect all three dimensions of the end state space. In other words, the latent action representations should capture all aspects of the robotic task, while it is less important to capture them precisely.
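The underlying $\MMD$ permutation test (detailed in Appendix \ref{app:dis_score}) can be sketched as follows. This is a minimal numpy sketch using the biased $\MMD$ estimate on one-dimensional projected end states; the sample count ($n = 200$), the number of permutations ($100$) and the significance level follow the evaluation, while the toy data distributions are purely illustrative.

```python
import numpy as np

def mmd(x, y, gamma=15.0):
    # Biased MMD estimate with RBF kernel k(a, b) = exp(-gamma * (a - b)^2),
    # here for one-dimensional (projected) end state components.
    def k(a, b):
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def is_significant(s_g, s_r, eta=0.001, n_perm=100, seed=0):
    # Permutation test: pool the samples, re-split them randomly, and compare
    # the observed MMD to the (1 - eta)-quantile of the permuted MMD values.
    rng = np.random.default_rng(seed)
    observed = mmd(s_g, s_r)
    pooled, n = np.concatenate([s_g, s_r]), len(s_g)
    perm_mmds = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_mmds.append(mmd(pooled[:n], pooled[n:]))
    c_eta = np.quantile(perm_mmds, 1.0 - eta)
    return observed > c_eta

rng = np.random.default_rng(1)
same = rng.normal(0.0, 1.0, 200)      # end states under the training data
shifted = rng.normal(1.0, 1.0, 200)   # an intervention with a clear effect
print(is_significant(shifted, same))  # a shifted distribution is detected
```

An intervention is counted towards $\overline{\MMD}$ only when this test rejects the null hypothesis.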
\begin{comment} \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{figures/MMDalpha15_AggDisentanglementScore_p0p001_paper.png} \caption{Aggregated disentanglement scores.} \label{fig:dis_agg} \end{figure} \end{comment} \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{figures/MMDalpha15_disentanglementScore_p0p001_papernew.png} \caption{Disentangling precision and recall scores for VAE models (top row) and GAN models (bottom row) with training parameters reported in Tables~\ref{tabel:exp:vae} and \ref{tabel:exp:gan}, respectively.} \label{fig:dis_all} \end{figure} \textbf{Local linearity:} Given a generative model $g$, we randomly sampled $50$ latent actions from the prior $p(\alpha)$. For each such latent action $\alpha$, we set $\varepsilon = 0.2$ and sampled $500$ points from its $\varepsilon$-neighbourhood $N_\varepsilon(\alpha)$. We then fit an affine transformation $f_\alpha$ defined in~\eqref{eq:aff_trans} on $350$ randomly selected neighbourhood points and calculated L3 as the test MSE on the remaining $150$ points. We report the average test MSE obtained across all $50$ points in Table~\ref{tabel:exp:loclin}. We observe that all VAEs except for VAE3 achieve a lower test MSE than GAN models. By comparing Figure~\ref{fig:exp:policy_training_performance} and Table~\ref{tabel:exp:loclin} it appears that local linearity is not related to the performance of the policy training, which is in line with the results obtained in Tables~\ref{tab:vae_pear_coef} and~\ref{tab:gan_pear_coef} where we see that the correlation is significant for neither VAEs nor GANs. Despite being insignificant, the correlation is negative for all the models, which is consistent with our hypothesis that a more locally linear model (i.e., a model with a lower MSE) performs better in the policy training.
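The L3 computation for a single latent action can be sketched as follows. The split sizes ($500$ neighbourhood points, $350$ for fitting, $150$ for the test MSE) follow the text, while the smooth toy map standing in for the generator-plus-kinematics mapping is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_alpha = 3

def g_hat(a):
    # Toy stand-in for the latent-to-end-state map (generator + robot kinematics).
    return np.concatenate([np.sin(a), np.cos(a)], axis=-1)

def l3_at(alpha, eps=0.2, n_points=500, n_train=350):
    # Sample the eps-neighbourhood, fit an affine map f(a) = a @ A + b by least
    # squares on the training split, and return the test MSE of the fit.
    pts = alpha + rng.uniform(-eps, eps, size=(n_points, N_alpha))
    ends = g_hat(pts)
    X = np.hstack([pts, np.ones((n_points, 1))])   # append a bias column
    coef, *_ = np.linalg.lstsq(X[:n_train], ends[:n_train], rcond=None)
    residual = X[n_train:] @ coef - ends[n_train:]
    return float((residual ** 2).mean())

# Average test MSE over latent actions sampled from the prior.
alphas = rng.standard_normal((50, N_alpha))
l3 = float(np.mean([l3_at(a) for a in alphas]))
print(l3)
```

A small average MSE indicates that the map is well approximated by an affine transformation in the sampled neighbourhoods.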
\begin{table}[H] \centering \caption{The Latent Local Linearity (L3) results measured as the mean squared error (MSE) of the affine transformations defined in~\eqref{eq:aff_trans} in the neighbourhoods of 50 latent action representations for all VAE and GAN models.} \vspace{0.2cm} \begin{tabular}{c c c c c c c c c c} \textbf{Model} & VAE1 & VAE2 & VAE3 & VAE4 & VAE5 & VAE6 & VAE7 & VAE8 & VAE9 \\ \hline \textbf{MSE} & 1.6 & 3.2 & 14.3 & 1.5 & 1.6 & 2.0 & 1.2 & 1.2 & 1.3 \\ & & & & & & & & & \\ \textbf{Model} & GAN1 & GAN2 & GAN3 & GAN4 & GAN5 & GAN6 & GAN7 & GAN8 & GAN9 \\ \hline \textbf{MSE} & 73.1 & 27.8 & 41.1 & 16.7 & 36.6 & 43.0 & 14.2 & 24.7 & 5.2 \\ \end{tabular} \label{tabel:exp:loclin} \end{table} \textbf{Precision and recall:} For each generative model $g$, we drew $15000$ samples from the latent prior distribution $p(\alpha)$. The corresponding set of the generated trajectories $\boldsymbol{T_g}$ was compared to a set $\boldsymbol{T_r}$ of $15000$ randomly chosen training trajectories which were sampled only once and fixed for all the models. The neighbourhood size $k$ was set to $3$ as suggested in \cite{kynkaanniemi2019improved}. The resulting precision and recall scores are shown in Figure \ref{fig:ipr_all}. Firstly, we observe that all the models have relatively high precision except for GAN8-9, which are shown in the bottom right part of Figure~\ref{fig:ipr_all}. Secondly, on average GANs have worse recall than VAEs, which is a consequence of the training procedure (see Appendix \ref{app:gen_models}) and can possibly be improved with a more thorough fine-tuning of the models. This is consistent with the results reported in Tables~\ref{tab:vae_pear_coef} and~\ref{tab:gan_pear_coef}, where precision is negatively correlated with the policy performance in the case of GANs but positively in the case of VAEs. For both VAE and GAN models, we observe that recall is highly correlated with the performance of the policy training.
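The $k$-NN based precision and recall of \cite{kynkaanniemi2019improved} can be re-implemented in a few lines. The sketch below uses toy two-dimensional data and far fewer samples than the $15000$ trajectories used in our evaluation; the toy distributions are chosen only to illustrate a precise, low-recall generator.

```python
import numpy as np

def knn_radii(X, k=3):
    # Distance from each point in X to its k-th nearest neighbour within X.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]   # column 0 is the point itself

def coverage(A, B, k=3):
    # Fraction of points in A that fall inside the k-NN manifold estimate of B,
    # i.e. within the k-NN radius of at least one point of B.
    radii = knn_radii(B, k)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float((d <= radii[None, :]).any(axis=1).mean())

rng = np.random.default_rng(0)
T_r = rng.standard_normal((300, 2))        # training trajectories (toy)
T_g = 0.5 * rng.standard_normal((300, 2))  # generated: precise but low recall

precision = coverage(T_g, T_r, k=3)  # generated samples on the real manifold
recall = coverage(T_r, T_g, k=3)     # real samples on the generated manifold
print(precision, recall)
```

Since the toy generator only covers the centre of the real distribution, it scores high precision but noticeably lower recall.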
\begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{figures/IPR_nhood3_s5_n_samples15000_new.png} \caption{Precision and recall scores for VAE (top row) and GAN models (bottom row).}\label{fig:ipr_all} \end{figure} \textbf{Training parameters:} In Tables~\ref{tab:vae_pear_coef} and~\ref{tab:gan_pear_coef} we additionally calculated the Pearson's correlation between the training parameters, shown in Tables~\ref{tabel:exp:vae} and~\ref{tabel:exp:gan}, and the policy performance shown in Figure~\ref{fig:exp:policy_training_performance}. For VAEs, we used only the $\beta$ coefficient as it is clearly correlated with the KL divergence and the reconstruction loss (see Table~\ref{tabel:exp:vae}). Interestingly, we observe a negative correlation between $\beta$ and the policy training performance, which indicates that disentangled representations are less beneficial as these are obtained with a higher $\beta$ value. This observation is also supported by a low Pearson's coefficient obtained for $\Dir$ and an insignificant correlation of $\Dip$ (see Table~\ref{tab:vae_pear_coef}). In fact, increasing $\beta$ positively affects the disentanglement as shown in~\cite{higgins2017beta} on image data, but negatively affects the precision, because the data is in this case reconstructed from a wider approximate posterior distribution $q_\varphi$. Since precision is more important for an efficient policy performance than disentanglement in the case of VAEs, the correlation of $\beta$ is negative. Next, a positive correlation of the latent dimension $N_\alpha$ is observed for both VAE and GAN models, but for VAEs it is insignificant. Finally, the correlation coefficients for GAN losses imply that the lower the generator loss (G loss) and the higher the information loss (I loss), the better the policy performance. This result is meaningful as a higher I loss, obtained with a higher $\lambda$, encourages more disentangled representations, which we also see with a high $\Dir$ correlation coefficient.
\textbf{Correlation between evaluation metrics and EM policy training:} Possibly due to the sensitivity of the ARD framework to outliers, the obtained ARD results shown in Tables~\ref{tab:vae_pear_coef} and \ref{tab:gan_pear_coef} differ slightly from the Pearson's R correlation coefficients. In the case of VAEs, local linearity seems to be more important due to the large outlying value for the VAE3 model (Table~\ref{tabel:exp:loclin}). Similarly, $\Dip$ appears to be influential in the case of GANs due to the large range of values (Figure~\ref{fig:dis_all}, left). For both model types, ARD analysis shows $N_\alpha$ to be important, which can also be a consequence of the chosen values $N_\alpha \in \{2, 3, 6\}$. However, ARD scores precision and recall as influential in the case of VAEs, as well as $\Dir$ and recall in the case of GANs, both of which match the Pearson's correlation results. \begin{comment} \begin{table}[h] \centering \caption{ARD estimated precision (the lower, the better $\downarrow$) of the weights corresponding to the evaluation metrics from Section \ref{sec:eval_generative_model} and training parameters for VAE models.} \vspace{0.2cm} \label{tab:vae_ard_coef} \begin{tabular}{c|c|c|c|c|c|c} \textbf{DiP} & \textbf{DiR} & \textbf{L3} &\textbf{Precision} & \textbf{Recall} & $\boldsymbol{N_\alpha}$ & $\boldsymbol{\beta}$ \\ \hline $\scnum{8.07e+04}$ & $\scnum{1.81e+04}$ & \cellcolor[gray]{0.85}$\scnum{1.09e+04}$ & \cellcolor[gray]{0.75}$\scnum{4.45e+03}$ & \cellcolor[gray]{0.65}$\scnum{1.93e+01}$ & $\scnum{1.70e+04}$ & $\scnum{1.90e+04}$ \end{tabular} \end{table} For GANs (Table~\ref{tab:gan_ard_coef}), we observe that several more properties are important for forecasting the performance of the policy than in case of VAEs, namely $\Dir$, recall, $\Dip$, $N_\alpha$, local linearity, the total model loss (M loss) and lastly local linearity (listed in the order of importance).
Similarly as for VAEs, we see that the ARD results are well aligned with the individual correlations shown in Table~\ref{tab:gan_pear_coef}, however, we observe slightly larger deviations than with VAEs. For example, while we did not observe a significant correlation of $\Dip$, local linearity and model loss to the policy performance, these properties gain importance in the ARD model. On contrary, the observed significant correlation of the generator loss (G loss) as well as the information loss (I loss) appear to be too weak for the ARD model. \begin{table}[h] \centering \caption{ARD estimated precision (the lower, the better $\downarrow$) of the weights corresponding to the evaluation metrics from Section \ref{sec:eval_generative_model} and training parameters for GAN models.} \vspace{0.2cm} \label{tab:gan_ard_coef} \begin{tabular}{c|c|c|c|c|c|c} \textbf{DiP} & \textbf{DiR} & \textbf{L3} & \textbf{Precision} & \textbf{Recall} & $\boldsymbol{N_\alpha}$ & $\boldsymbol{\lambda}$ \\ \hline \cellcolor[gray]{0.85}$\scnum{2.37e+02}$ & \cellcolor[gray]{0.65}$\scnum{5.71e+01}$ & $\scnum{3.20e+02}$ & $\scnum{7.74e+02}$ & \cellcolor[gray]{0.75}$\scnum{1.15e+02}$ & $\scnum{2.67e+02}$ & $\scnum{5.50e+04}$ \\ \multicolumn{7}{c}{} \\ \textbf{G loss} & \textbf{I loss}& \multicolumn{1}{c}{\textbf{M loss} }\\\cline{1-3}$\scnum{4.66e+03}$ & $\scnum{1.19e+05}$ & \multicolumn{1}{c}{$\scnum{3.56e+02}$} \end{tabular} \end{table} \end{comment} Lastly, we wished to compare all the generative models only based on the properties that can be evaluated for both model types. Therefore, we fit three ARD models, one for VAEs, one for GANs and one for all the models combined using only $\Dip$, $\Dir$, local linearity, precision and recall. The obtained estimated precision of the weights are reported in Table~\ref{tab:ard_coef}. 
While we observe minor differences between the two model types, we see that the most important property of a generative model combined with EM policy training is recall, which is consistent with the results obtained in Tables~\ref{tab:vae_pear_coef} and~\ref{tab:gan_pear_coef}. As before, disentangling precision $\Dip$ gains importance due to the large range of values (Figure~\ref{fig:dis_all}). Therefore, for successful policy training in case of the picking task, it is crucial that the generative model is able to reproduce samples from the training dataset even if the generated samples are less precise. Based on the observations from Tables~\ref{tab:vae_pear_coef} and~\ref{tab:gan_pear_coef} we conclude that GAN models additionally benefit from a higher disentangling recall $\Dir$, while VAEs benefit from a higher precision. Finally, we note that disentangling precision as well as local linearity can become more important when performing tasks represented by more complex data where structured latent representations would be beneficial. 
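The ARD fits reported in these tables can be reproduced with any Bayesian linear regression that learns a per-weight precision. A hedged sketch using scikit-learn's `ARDRegression` (assuming scikit-learn is available; the synthetic data and feature ordering are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Illustrative stand-in for the real data: each row collects the evaluation
# metrics of one trained generative model, and the target is its final
# policy performance. Only the first feature actually matters here.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))                   # e.g. [recall, DiP, L3]
y = 2.0 * X[:, 0] + 0.05 * rng.normal(size=40)

reg = ARDRegression().fit(X, y)
# reg.lambda_ holds the estimated precision of each weight: the lower the
# precision, the more relevant the corresponding property (cf. the tables,
# where lower values are shaded as more important).
relevance_order = np.argsort(reg.lambda_)
```

Here the first feature is correctly ranked as the most relevant one, mirroring how recall dominates the fits above.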
\begin{table}[h] \centering \caption{ARD estimated precision of the weights (the lower, the better $\downarrow$) corresponding to the evaluation metrics from Section \ref{sec:eval_generative_model} for all the VAE and GAN models.} \vspace{0.2cm} \label{tab:ard_coef} \begin{tabular}{r|c|c|c|c|c} \textbf{models} & \textbf{DiP} & \textbf{DiR} & \textbf{L3} & \textbf{Precision} & \textbf{Recall} \\ \hline VAEs & \cellcolor[gray]{0.75}$\scnum{2.78e+02}$ & $\scnum{1.83e+04}$ & \cellcolor[gray]{0.85}$\scnum{9.42e+03}$ & $\scnum{2.24e+04}$ & \cellcolor[gray]{0.65}$\scnum{3.56e+01}$ \\ \hline GANs & \cellcolor[gray]{0.75}$\scnum{8.17e+02}$ & \cellcolor[gray]{0.85}$\scnum{1.09e+04}$ & $\scnum{1.25e+04}$ & $\scnum{3.97e+04}$ & \cellcolor[gray]{0.65}$\scnum{2.66e+01}$ \\ \hline ALL & \cellcolor[gray]{0.75}$\scnum{3.62e+03}$ & $\scnum{1.60e+04}$ & $\scnum{1.51e+04}$ & \cellcolor[gray]{0.85}$\scnum{6.27e+03}$ & \cellcolor[gray]{0.65}$\scnum{3.45e+01}$ \\ \end{tabular} \end{table} \begin{comment} \textbf{EM policy training with multiple variational policies:} Figure~\ref{fig:exp:multi_agent} demonstrates the performance of the EM policy training algorithm for different numbers of variational policies in the E-step. We see that using more than one variational policy in the E-step improves the performance in the beginning of the training since the RL agent learns faster with more than 4 variational policies for the first 100 iterations. However, the best performance is still achieved by the EM policy training with one variational policy. This observation requires more investigations which is a part of our future work. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{figures/multi_agent_res.pdf} \caption{The performance of the EM policy training for 1, 2, 4, and 8 variational policies. 
The results are obtained using VAE9 whose parameters are described in Table~\ref{tabel:exp:vae}.} \label{fig:exp:multi_agent} \end{figure} \end{comment} \section{Generative Model Training} \label{sec:generative_model_training} So far we discussed how to train an action-selection policy based on the EM algorithm to regulate the action latent variable which is the input to a generative model. In this section, we review two prominent approaches to train a generative model, Variational Autoencoder (VAE) and Generative Adversarial Network (GAN), which we use to generate sequences of actions required to solve the sequential decision-making problem. We then introduce a set of measures used to predict which properties of a generative model will influence the performance of the policy training. \subsection{Training generative models} We aim to model the distribution $p(\tau)$ of the motor actions that are suitable to complete a given task. To this end, we introduce a low-dimensional random variable $\alpha$ with a probability density function $p(\alpha)$ representing the latent actions which are mapped into unique trajectories $\tau$ by a generative model $g$. The model $g$ is trained to maximize the likelihood $\mathbb{E}_{\tau \sim \mathcal{D}, \alpha' \sim p(\alpha)}[p_\vartheta(\tau|\alpha')]$ of the training trajectories $\tau \in \mathcal{D}$ under the entire latent variable space. \subsubsection{Variational autoencoders} A VAE \cite{kingma2014auto, rezende2014stochasticvae2} consists of encoder and decoder neural networks representing the parameters of the approximate posterior distribution $q_\varphi(\alpha | \tau)$ and the likelihood function $p_\vartheta(\tau|\alpha)$, respectively. 
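For a diagonal-Gaussian posterior and a fixed-variance Gaussian decoder, the per-sample objective optimized below reduces to a simple closed form. A minimal pure-Python sketch (negated, so lower is better; all names are illustrative):

```python
import math

def beta_vae_loss(recon, target, mu, log_var, beta=1.0):
    """Negative per-sample beta-VAE lower bound: squared reconstruction
    error (Gaussian decoder with fixed unit variance assumed) plus the
    beta-weighted closed-form KL(N(mu, diag(sigma^2)) || N(0, I))."""
    recon_err = sum((r - t) ** 2 for r, t in zip(recon, target))
    kl = 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                   for m, lv in zip(mu, log_var))
    return recon_err + beta * kl
```

Raising `beta` penalizes posteriors that deviate from the prior more strongly, trading reconstruction fidelity for latent structure, which is exactly the role of $\beta$ in the objective.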
The encoder and decoder neural networks, parametrized by $\varphi$ and $\vartheta$, respectively, are jointly trained to optimize the variational lower bound \begin{equation} \max_{\varphi, \vartheta} \mathbb{E}_{\alpha' \sim q_\varphi(\alpha|\tau)}[\log p_\vartheta(\tau|\alpha')] - \beta D_{KL}(q_\varphi (\alpha | \tau) || p(\alpha)), \label{eq:vae} \end{equation} where the prior $p(\alpha)$ is the standard normal distribution and the parameter $\beta$ \cite{higgins2017beta} is a variable controlling the trade-off between the reconstruction fidelity and the structure of the latent space regulated by the KL divergence. A $\beta > 1$ encourages the model to learn more disentangled latent representations \cite{higgins2017beta}. \subsubsection{Generative adversarial networks} A GAN model \cite{goodfellow2014generative} consists of generator and discriminator neural networks that are trained by playing a min-max game. The generative model $g_\vartheta$, parametrized by $\vartheta$, transforms a latent sample $\alpha'$ sampled from the prior noise distribution $p(\alpha)$ into a trajectory $\tau = g_\vartheta(\alpha')$. The model needs to produce realistic samples resembling those obtained from the training data distribution $p(\tau)$. It is trained by playing an adversarial game against the discriminator network $D_\varphi$, parametrized by $\varphi$, which needs to distinguish a generated sample from a real one. The competition between the two networks is expressed as the following min-max objective \begin{align} \min_\vartheta \max_\varphi \mathbb{E}_{\tau' \sim p(\tau)} [\log D_\varphi(\tau')] + \mathbb{E}_{\alpha' \sim p(\alpha)} [\log(1 - D_\varphi(g_\vartheta(\alpha')))]. \label{eq:gan_original} \end{align} However, the original GAN formulation~\eqref{eq:gan_original} does not impose any restrictions on the latent variable $\alpha$, and therefore the generator $g_\vartheta$ can use $\alpha$ in an arbitrary way. 
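The generator and discriminator losses reported for the GAN models in the result tables correspond to the two sides of~\eqref{eq:gan_original}. A minimal pure-Python sketch of their batch values given the discriminator outputs; note that we use the common non-saturating generator loss rather than minimizing $\log(1 - D(\cdot))$ directly:

```python
import math

def gan_losses(d_real, d_fake):
    """Batch value of the adversarial objective given discriminator outputs
    d_real = D(tau') on real trajectories and d_fake = D(g(alpha')) on
    generated ones (probabilities in (0, 1)). The generator uses the
    non-saturating form -log D(g(alpha'))."""
    d_loss = -(sum(math.log(p) for p in d_real) / len(d_real)
               + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))
    g_loss = -sum(math.log(p) for p in d_fake) / len(d_fake)
    return d_loss, g_loss
```

A confident discriminator yields a low discriminator loss, while a generator that fools the discriminator yields a low generator loss.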
To learn disentangled latent representations we instead use InfoGAN \cite{chen2016infogan} which is a version of GAN with an information-theoretic regularization added to the original objective. The regularization is based on the idea to maximize the mutual information $I(\alpha', g_\vartheta(\alpha'))$ between the latent code $\alpha'$ and the corresponding generated sample $g_\vartheta(\alpha')$. An InfoGAN model is trained using the following information min-max objective \cite{chen2016infogan} \begin{equation} \min_{\vartheta, \psi} \max_\varphi \mathbb{E}_{\tau' \sim p(\tau)} [\log D_\varphi(\tau')] + \mathbb{E}_{\alpha' \sim p(\alpha)} [\log(1 - D_\varphi(g_\vartheta(\alpha')))] - \lambda \mathbb{E}_{\alpha' \sim p(\alpha), \tau' \sim g_\vartheta(\alpha')}[\log Q_\psi(\alpha' | \tau')], \label{eq:gan} \end{equation} where $Q_\psi(\alpha | \tau')$ is an approximation of the true unknown posterior distribution $p(\alpha | \tau')$ and $\lambda$ is a hyperparameter. In practice, $Q_\psi$ is a neural network that models the parameters of a Gaussian distribution and shares all the convolutional layers with the discriminator network $D_\varphi$ except for the last few output layers. \subsection{Evaluation of the generative models} \label{sec:eval_generative_model} We review the characteristics of generative models that can potentially improve the policy training by measuring precision and recall, disentanglement and local linearity. Our goal is to be able to judge the quality of the policy training by evaluating the generative models prior to the RL training. We relate the measures to the performance of the policy training in Section~\ref{sec:exp:generative_model}. 
\begin{comment} The authors in \cite{sajjadi2018assessing} define a learned distribution $Q$ to have precision $\alpha$ at recall $\beta$ with respect to the reference distribution $P$ if there exist distributions $\mu, \nu_P$ and $\nu_Q$ such that $P$ and $Q$ can be decomposed as follows: \begin{align} P = \beta \mu + (1 - \beta) \nu_P \quad \text{and} \quad Q = \alpha \mu + (1 - \alpha) \nu_Q. \label{def:prd_1} \end{align} The distribution $\mu$ models the part common to both the reference distribution $P$ as well as the learned distribution $Q$. The distribution $\nu_P$ represents the part of $P$ that $Q$ does not cover, and $\nu_Q$ represent the part of $Q$ that is irrelevant for $P$, in the sense that its support does not intersect the support of the reference distribution. The precision-recall curve can then be obtained by visualising the set $\PRD(P, Q) = \{(\alpha, \beta): \alpha, \beta \text{ satisfying Eq. \eqref{def:prd_1}}\}$. However, this definition is difficult to work with in practice as it requires an estimation of distributions $\mu, \nu_P, \nu_Q$ for every possible pair $(\alpha, \beta)$. Therefore, the authors provide an approximation that is based only on evaluation of $P$ and $Q$ on finite number of samples: \begin{align*} \alpha(\lambda) = \sum_{\omega \in \Omega} \min(\lambda P(\omega), Q(\omega)) \quad \text{and} \quad \beta(\lambda) = \sum_{\omega \in \Omega} \min \left( P(\omega), \frac{Q(\omega)}{\lambda} \right) \end{align*} where $\lambda > 0$ and $\Omega$ is the state space where $P$ and $Q$ are defined. Intuitively, the function $\alpha(\lambda)$ will be non-negative only on samples $\omega \in \Omega$ where $Q$ is non-negative. On such samples, it then measures how much we need to change the reference distribution $P$ in order to be close to $Q$. This is exactly the precision of $Q$: it will be low if $Q$ deviates from $P$ on the support of $Q$. 
On the other hand, the function $\beta(\lambda)$ will be non-negative only on samples $\omega \in \Omega$ where $P$ is non-negative. By measuring the difference between $P$ and $Q$ on such samples, we obtain an approximation of the recall: it will be low if $Q$ is zero on the support of $P$. The final approximation of the $\PRD$ set can then be computed by evaluating the functions $\alpha$ and $\beta$ on an equiangular grid of values $\lambda$: \begin{align*} \widehat{\PRD}(Q, P) = \left\{ (\alpha(\lambda), \beta(\lambda)): \lambda \in \left\{\tan\left(\frac{i \pi}{2(m+1)}\right)\right\}_{i=1}^m \right\}. \end{align*} In our experiments, we use the original implementation of this algorithm provided by the authors of \cite{sajjadi2018assessing}. \end{comment} \subsubsection{Disentangling precision and recall} In this section, we define our measure, called \textit{disentangling precision and recall,} for evaluating the disentanglement of latent action representations. A disentangled representation of the motor data obtained from the latent space of a generative model can be defined as the one in which every end state of the system is controllable by one latent dimension determined by the vector basis of the latent space. For example, consider the picking task where the goal is to pick an object on a table top (Section~\ref{sec:exp:setup}). We say that a latent representation given by a generative model is well-disentangled if there exists a basis of the latent space with respect to which each dimension controls one axis of the position of the end-effector. Our hypothesis is that the more disentangled the representation is, the more efficient is the policy training. 
While disentangled representations have been successfully applied to a variety of downstream machine learning tasks \cite{disentanglement_video, creager2019flexibly, lee2018diverse}, their usefulness has been questioned by \cite{challenging_common} and \cite{van2019disentangled} who have observed disagreements among the existing disentanglement metrics. We experimentally evaluate the effect of disentangled representations on the performance of the policy training in Section~\ref{sec:exp:eval_gen_models}. Our disentanglement measure is based on statistical testing performed on the end state space of the system. Let $\boldsymbol{S}_r$ be the set of end states obtained by executing the training motor trajectories on a robotic platform. If representations given by $g$ are well disentangled, then setting one latent dimension to a fixed value should result in limited variation in the corresponding generated end states $\boldsymbol{S}_g$. For example, if the $1$st latent dimension controls the $x$-axis position of the end-effector in the picking task then setting it to a fixed value should limit the set of possible $x$ positions. In other words, we wish to quantify how dissimilar the set of end states $\boldsymbol{S}_g$, obtained by holding one latent dimension constant, is from the set $\boldsymbol{S}_r$. To compute such dissimilarity we use maximum mean discrepancy (MMD) \cite{JMLR:v13:gretton12a} which is a statistical test for determining if two sets of samples were produced by the same distribution. Using kernels, MMD maps both sets into a feature space called reproducing kernel Hilbert space, and computes the distance between mean values of the samples in each group. 
In our implementations, we compute the unbiased estimator of the squared MMD (Lemma 6 in \cite{JMLR:v13:gretton12a}) given by \begin{align*} \text{MMD}^2(\boldsymbol{S}_r, \boldsymbol{S}_g) = \frac{1}{m (m -1)} \sum_{i \neq j}^m k(s_r^i, s_r^j) + \frac{1}{n (n -1)} \sum_{i \neq j}^n k(s_g^i, s_g^j) - \frac{2}{mn} \sum_{i = 1}^m \sum_{j = 1}^n k(s_r^i, s_g^j), \end{align*} where $\boldsymbol{S}_r = \{s_r^1, \dots, s_r^m\}$, $\boldsymbol{S}_g = \{s_g^1, \dots, s_g^n\}$ are the two sets of samples and $k(x, y) = \exp(- \gamma ||x - y||^2)$ is the exponential kernel with hyperparameter $\gamma$ determining the smoothness. Due to the nature of the exponential kernel, the higher the MMD score, the lower the similarity between the two sets. Our measure can be described in three phases. We provide an intuitive summary of each phase but refer the reader to Appendix \ref{app:dis_score} for a rigorous description of the algorithm. In phase $1$ (Figure \ref{fig:disentanglement_graphics} left) we generate the two sets of end states on which we run the statistical tests. For a fixed latent dimension $l \in \{1, \dots, N_\alpha\}$ we perform a series of $D \in \mathbb{N}$ \textit{latent interventions} where we set the $l$th component of a latent code $\alpha_l$ to a fixed value $I_d$, $\alpha_l = I_d$ for $d = 1, \dots, D$. Each intervention $\alpha_l = I_d$ is performed on a set of $n$ samples sampled from the prior distribution $p(\alpha)$. We denote by $\boldsymbol{S}_g^{l-I_d}$ the set of $n$ end states obtained by executing the trajectories generated from the latent samples on which we performed the intervention $\alpha_l = I_d$. For example, $D = 5$ latent interventions on the $1$st latent dimension yield the sets $\boldsymbol{S}_g^{1-I_1}, \dots, \boldsymbol{S}_g^{1-I_5}$. Moreover, we denote by $\boldsymbol{S}_r$ the set of $n$ randomly subsampled end states that correspond to the training motor data. 
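The unbiased estimator above is straightforward to implement. A brute-force pure-Python sketch suitable for small sample sets:

```python
import math

def mmd2_unbiased(S_r, S_g, gamma=1.0):
    """Unbiased estimator of squared MMD (Lemma 6 in Gretton et al., 2012)
    with the exponential kernel k(x, y) = exp(-gamma * ||x - y||^2).
    S_r, S_g are lists of equal-length state vectors. Being unbiased, the
    estimate can be slightly negative for very similar sets."""
    def k(x, y):
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
    m, n = len(S_r), len(S_g)
    xx = sum(k(S_r[i], S_r[j]) for i in range(m) for j in range(m) if i != j)
    yy = sum(k(S_g[i], S_g[j]) for i in range(n) for j in range(n) if i != j)
    xy = sum(k(x, y) for x in S_r for y in S_g)
    return xx / (m * (m - 1)) + yy / (n * (n - 1)) - 2.0 * xy / (m * n)
```

Two well-separated sets of end states yield a large value, while two samples from the same distribution yield a value close to zero, which is exactly the behaviour the latent interventions probe for.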
\begin{figure}[h] \centering \includegraphics[width=0.99\linewidth]{figures/disentanglement_graphics.png} \caption{Visualisation of phase 1 and 2 of our disentangling precision and recall metric $\Dis$. }\label{fig:disentanglement_graphics} \end{figure} In phase $2$ (Figure \ref{fig:disentanglement_graphics} right), we perform the $\MMD$ tests on each pair of sets $\boldsymbol{S}_g^{l-I_d}$ and $\boldsymbol{S}_r$ obtained in phase $1$. In particular, we wish to determine if an intervention on a given dimension $l$ induced a change in any of the components $j = 1, \dots, N_s$ of the end state space. If such a change exists, we consider the latent dimension $l$ to be well disentangled. Moreover, if we can find a set of latent dimensions that induce changes on different components of the end states, we consider the generative model $g$ to be well disentangled. Therefore, for a fixed latent dimension $l$ and a fixed intervention $\alpha_l = I_d$, the objective is to find the component $j$ of the end state space for which $\boldsymbol{S}_g^{l-I_d}$ and $\boldsymbol{S}_r$ are most dissimilar. This translates to finding the component $j$ yielding the largest value $\MMD(\proj_j \boldsymbol{S}_g^{l-I_d}, \proj_j \boldsymbol{S}_r)$ where $\proj_j \boldsymbol{S} = \{ \proj_j s \,|\,s \in \boldsymbol{S} \}$ denotes the set of the $j$th components of the states from the set $\boldsymbol{S}$. Note that if the dimension $l$ is entangled, such a component $j$ does not exist (see Appendix \ref{app:dis_score} for details). In phase $3$ we aggregate the values of the performed $\MMD$ tests and define the final disentanglement score for the generative model $g$. In phase $2$ we linked each latent dimension $l$ with zero or one end state component $j$, and computed the corresponding value of the $\MMD$ test. Here, we first select $\min(N_s, N_\alpha)$ such pairs of latent space and end state space dimensions that yield the largest $\MMD$ values. 
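This selection, together with the aggregation into the final scores defined in the next paragraph, can be sketched in a few lines. The mapping from latent dimensions to their best end-state component and MMD value is assumed to come from phase 2; the numeric values used in the usage example are illustrative:

```python
def dpr_score(best_pairs, n_s):
    """Phase-3 aggregation of disentangling precision and recall.
    best_pairs maps each latent dimension l to None (entangled dimension)
    or a pair (j, mmd) of the most affected end-state component and the
    corresponding MMD test value from phase 2; n_s is the end-state
    dimensionality N_s."""
    scored = [p for p in best_pairs.values() if p is not None]
    scored.sort(key=lambda p: p[1], reverse=True)   # largest MMD first
    top = scored[:min(n_s, len(best_pairs))]        # min(N_s, N_alpha) pairs
    dip = sum(mmd for _, mmd in top)                # disentangling precision
    dir_ = len({j for j, _ in top}) / n_s           # disentangling recall
    return dip, dir_

# Illustrative usage: three latent dimensions, two end-state components.
dip, dir_ = dpr_score({0: (0, 2.0), 1: (1, 1.5), 2: (0, 1.0)}, n_s=2)
```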
In other words, we select the latent dimensions for which the latent interventions resulted in the largest changes in the end state space. We define $\Dip(g)$ as the sum of the selected $\MMD$ values, and $\Dir(g)$ as the number of unique end state space components present in the selected pairs, normalised by the total number of components $N_s$. Finally, we define the \textit{Disentangling Precision and Recall} $\Dis$ as a pair $$\Dis(g) = (\Dip(g), \Dir(g)).$$ Intuitively, $\Dip(g)$ quantifies the sum effect of the latent interventions with the most impact on the end states, and can therefore be thought of as \textit{disentangling precision}. A high $\Dip$ value indicates that the intervened latent samples resulted in end states with significantly limited variation, which in turn means that the latent disentanglement has a high precision. On the other hand, $\Dir(g)$ measures how many different aspects of the end states are captured in the latent space, and can thus be thought of as \textit{disentangling recall}. A high $\Dir$ value indicates that more end state components are captured in the latent space, such that the latent disentanglement has a high recall. The defined score is a novel fully unsupervised approximate measure of disentanglement for generative models combined with the RL policy training. Its absolute values can however vary depending on the kernel parameter $\gamma$ determining its smoothness. Moreover, this measure is not to be confused with the precision and recall from Section \ref{sec:prec_and_recall} where the aim is to evaluate the quality of the generated samples as opposed to the quality of the latent representations. \subsubsection{Latent local linearity} The linearity of system dynamics plays a vital role in control theory but has not been studied in the context of generative model training. The system dynamics govern the evolution of the states as the result of applying a sequence of motor actions to the robot. 
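In the simplest scalar case, the measure defined below reduces, for one latent neighbourhood, to the mean squared residual of a least-squares affine fit. A minimal pure-Python sketch (the general case fits a full affine map $A\alpha' + b$ between the latent and end state spaces instead):

```python
def l3_neighbourhood(alphas, end_states):
    """MSE of the best affine fit f(alpha) = a*alpha + b over one latent
    epsilon-neighbourhood, for scalar latent actions and scalar end states.
    A value near zero means the latent-to-end-state map is close to affine
    around that latent point."""
    n = len(alphas)
    ma, ms = sum(alphas) / n, sum(end_states) / n
    cov = sum((a - ma) * (s - ms) for a, s in zip(alphas, end_states))
    var = sum((a - ma) ** 2 for a in alphas)
    slope = cov / var                     # closed-form least squares
    intercept = ms - slope * ma
    return sum((s - (slope * a + intercept)) ** 2
               for a, s in zip(alphas, end_states)) / n
```

Averaging this residual over a subset of latent neighbourhoods yields the overall score.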
Our hypothesis is that a generative model integrated with the environment performs better in the policy training if it satisfies the local linearity property defined below. Let the mapping $\text{Exe}: \mathbb{R}^{T \times M} \rightarrow \mathbb{R}^{N_s}$ correspond to the execution of motor actions on a robot and let $s = \text{Exe}(\tau) \in \mathbb{R}^{N_s}$ denote the end state obtained by executing actions $\tau \in \mathbb{R}^{T \times M}$. Let $N_\varepsilon(\alpha) = \{\alpha': ||\alpha' - \alpha||_2 < \varepsilon\}$ be the Euclidean $\varepsilon$-neighbourhood of a latent action $\alpha$. Then the composition of the maps $\text{Exe} \circ g: \mathbb{R}^{N_\alpha} \rightarrow \mathbb{R}^{N_s}$ mapping from the action latent space to the end state of the system is considered \textit{locally linear} in the neighbourhood of $\alpha$ if there exists an affine transformation \begin{align} f_\alpha: N_\varepsilon(\alpha) \subset \mathbb{R}^{N_\alpha} \label{eq:aff_trans} &\longrightarrow \mathbb{R}^{N_s} \\ \alpha' &\longmapsto A\alpha' + b \nonumber \end{align} such that $\text{Exe}(g(\alpha')) = f_\alpha(\alpha')$ for every $\alpha' \in N_\varepsilon(\alpha)$. We define the \textit{latent local linearity (L3)} of a generative model $g$ to be the mean square error (MSE) of $f_{\alpha_i}$ obtained on $N_\varepsilon(\alpha_i)$ calculated on a subset of latent actions $\{\alpha_i\}$. \subsubsection{Precision and recall} \label{sec:prec_and_recall} Precision and recall for distributions is a measure, first introduced by \cite{sajjadi2018assessing} and further improved by \cite{kynkaanniemi2019improved}, for evaluating the quality of a distribution learned by a generative model $g$. It is based on the comparison of samples obtained from $g$ with the samples from the ground truth reference distribution. In our case, the reference distribution is that of the training motor trajectories. 
Intuitively, \textit{precision} measures the quality of the generated sequences of motor actions by quantifying how similar they are to the training trajectories. It determines the fraction of the generated samples that are realistic. On the other hand, \textit{recall} evaluates how well the learned distribution covers the reference distribution and it determines the fraction of the training trajectories that can be generated by the generative model. In the context of the policy training, we would like the output of $\pi_\Theta$ to be as similar as possible to the demonstrated motor trajectories. It is also important that $\pi_\Theta$ covers the entire state space as it must be able to reach different goal states from different task configurations. Therefore, the generative model needs to have both high precision and high recall scores. The improved measure introduced by \cite{kynkaanniemi2019improved} is based on an approximation of manifolds of both training and generated data. In particular, given a set $\boldsymbol{T} \in \{\boldsymbol{T_r}, \boldsymbol{T_g}\}$ of either real training trajectories $\boldsymbol{T_r}$ or generated trajectories $\boldsymbol{T_g}$, the corresponding manifold is estimated by forming hyperspheres around each trajectory $\tau \in \boldsymbol{T}$ with radius equal to its $k$th nearest neighbour $\NN_k(\tau, \boldsymbol{T})$. 
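The manifold estimate just described, and the resulting counts formalized in the next paragraph, can be sketched in a brute-force way (pure Python, suitable only for small sets; the original implementation uses efficient nearest-neighbour search):

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def _in_manifold(tau_new, T, radii):
    # Is tau_new inside at least one hypersphere of the estimated manifold?
    return any(_dist(tau_new, t) <= r for t, r in zip(T, radii))

def precision_recall(T_r, T_g, k=3):
    """k-NN precision/recall: each manifold is the union of hyperspheres
    whose radius is the distance to the k-th nearest neighbour within the
    same set. T_r are real trajectories, T_g generated ones."""
    def radii(T):
        return [sorted(_dist(t, u) for u in T if u is not t)[k - 1]
                for t in T]
    r_r, r_g = radii(T_r), radii(T_g)
    precision = sum(_in_manifold(t, T_r, r_r) for t in T_g) / len(T_g)
    recall = sum(_in_manifold(t, T_g, r_g) for t in T_r) / len(T_r)
    return precision, recall
```

Generated trajectories far from every training trajectory drive precision towards zero, while training trajectories the model cannot reproduce drive recall towards zero.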
To determine whether or not a given novel trajectory $\tau'$ lies within the volume of the approximated manifold we define a binary function \[ f(\tau', \boldsymbol{T}) = \begin{cases} 1 & \text{if $||\tau' - \tau||_2 \le ||\tau - \NN_k(\tau, \boldsymbol{T})||_2$ for at least one $\tau \in \boldsymbol{T}$} \\ 0 & \text{otherwise.} \end{cases} \] By counting the number of generated trajectories $\tau_g \in \boldsymbol{T}_g$ that lie on the manifold of the real data $\boldsymbol{T}_r$ we obtain the \textit{precision}, and similarly the \textit{recall} by counting the number of real trajectories $\tau_r \in \boldsymbol{T}_r$ that lie on the manifold of the generated data $\boldsymbol{T}_g$ \begin{align*} \text{precision}(\boldsymbol{T}_r, \boldsymbol{T}_g) = \frac{1}{|\boldsymbol{T}_g|} \sum_{\tau_g \in \boldsymbol{T}_g} f(\tau_g, \boldsymbol{T}_r) \quad \text{and} \quad \text{recall}(\boldsymbol{T}_r, \boldsymbol{T}_g) = \frac{1}{|\boldsymbol{T}_r|} \sum_{\tau_r \in \boldsymbol{T}_r} f(\tau_r, \boldsymbol{T}_g). \end{align*} In our experiments, we use the original implementation provided by \cite{kynkaanniemi2019improved} directly on the trajectories as opposed to their representations as suggested in the paper. \section{Introduction} \label{sec:introduction} Reinforcement learning (RL) can leverage the modeling capability of generative models to solve complex sequential decision making problems more efficiently \cite{ghadirzadeh2017deep, arndt2019meta}. RL has been applied to end-to-end training of deep visuomotor robotic policies \cite{levine2016end,levine2018learning} but it is typically too data-inefficient, especially when applied to tasks that provide only a terminal reward at the end of an episode. One way to alleviate the data-inefficiency problem in RL is by leveraging prior knowledge to reduce the complexity of the optimization problem. 
One prior that significantly reduces the data requirement is an approximation of the distribution from which valid action sequences can be sampled. Such distributions can be efficiently approximated by training generative models given a sufficient amount of valid action sequences. How can we then combine powerful RL optimization algorithms with the modeling capability of generative models to improve the efficiency of the policy training? Moreover, which characteristics of the generative models are important for efficient policy training? A suitable generative model must capture the entire distribution of the training data to generate as many distinct motion trajectories as possible, while avoiding the generation of invalid trajectories outside the training dataset. The diversity of the generated data enables the policy to complete a given task for the entire set of goal states when training a goal-conditioned policy. On the other hand, adhering to the distribution of the training data ensures the safety of the generated trajectories that are executed on a real robotic platform. In this paper, we (i) propose a learning framework that exploits RL and generative models to solve sequential decision making problems and considerably improves the data-efficiency of deep visuomotor policy training to control a robotic arm given raw image pixels as the input, and (ii) provide a set of measures to evaluate the quality of the latent space of different generative models regulated by the RL policy search algorithms, and use them as a guideline for training the generative models such that the data-efficiency of the policy training can be further improved prior to actual training on a physical robot. 
Regarding (i), the proposed learning framework divides the deep visuomotor sequential decision-making problem into the following sub-problems that can be solved more efficiently: (a) an unsupervised generative model training problem that approximates the distribution of motor actions, (b) a trust-region policy optimization problem that solves a contextual multi-armed bandit without the temporal credit assignment issue which exists in typical sequential decision-making problems, and (c) a supervised learning problem in which we train the deep visuomotor policy in an end-to-end fashion. Regarding (ii), we evaluate generative models based on (a) the quality and coverage of the samples they generate using the precision and recall metric \cite{kynkaanniemi2019improved}, and (b) the quality of their latent representations using two novel measures called \textit{disentangling precision and recall (DPR)} and \textit{latent local linearity (L3)}. Both these measures leverage the end states obtained after execution of the generated trajectories on a robotic platform. Disentanglement measures to what extent individual dimensions in the latent space control different aspects of the task, while local linearity measures the complexity of the generative process and system dynamics in the neighbourhood of each point in the latent space. Our hypothesis is that a generative model that is well disentangled, locally linear and able to generate realistic samples that closely follow the training data (i.e. has high precision and high recall) leads to a more sample-efficient neural network policy training. We experimentally investigate this hypothesis on several generative models, namely $\beta$-VAEs \cite{higgins2017beta} and InfoGANs \cite{chen2016infogan}, by calculating Pearson's R as well as automatic relevance determination regression (ARD) to quantify the importance of (a) and (b) for a superior RL policy training performance. 
This evaluation provides a guideline for training latent-variable generative models in a way that enables data-efficient policy training. In summary, the advantages of the proposed framework are: \begin{itemize} \item It improves data-efficiency of the policy training algorithm by at least an order of magnitude by incorporating prior knowledge in terms of a distribution over valid sequences of actions, therefore, reducing the search space. \item It helps to acquire complex visuomotor policies given sparse terminal rewards provided at the end of successful episodes. The proposed formulation converts the sequential decision-making problem into a contextual multi-armed bandit. Therefore, it alleviates the temporal credit assignment problem that is inherent in sequential decision-making tasks and enables efficient policy training with only terminal rewards. \item It enables safe exploration in RL by sampling actions only from the approximated distribution. This is in stark contrast to the typical RL algorithms in which random actions are taken during the exploration phase. \item It provides a set of measures for evaluation of the generative model based on which it is possible to predict the performance of the RL policy training prior to the actual training. \end{itemize} This paper provides a comprehensive overview of our earlier work for RL policy training based on generative models \cite{ghadirzadeh2017deep, arndt2019meta, chen2019adversarial,hamalainen2019affordance, butepage2019imitating} and is organized as follows: in Section \ref{sec:related_work}, we provide an overview of the related work. We formally introduce the problem of policy training with generative models in Section \ref{sec:preliminaries}, and describe how the framework is trained in Section \ref{sec:em_policy_training}. 
In Section \ref{sec:generative_model_training} we first briefly overview VAEs and GANs, and then define all of the evaluation measures used to predict the final policy training performance. We present the experimental results in Section \ref{sec:experiments} and discuss the conclusion and future work in Section~\ref{sec:conclusion}. Moreover, for the sake of completeness, we describe the end-to-end training of the perception and control modules in Appendix \ref{sec:perception} by giving a summary of the prior work \cite{levine2016end, chen2019adversarial}. Note that this work provides a complete overview of the proposed framework and focuses on the evaluation of the generative model. We refer the reader to \cite{ghadirzadeh2017deep} for investigation of the data-efficiency of the proposed approach in training complex visuomotor skills. \section{End-to-end Training of Perception and Control} \label{sec:perception} The EM policy training algorithm presented in Section~\ref{sec:em_policy_training} updates the deep policy using the supervised learning objective function introduced in~\eqref{eq:M_loss} (the M-step objective). Similar to GPS \cite{levine2016end}, the EM policy training formulation enables simultaneous training of the perception and control parts of the deep policy in an end-to-end fashion. In this section, we describe two techniques that can improve the efficiency of the end-to-end training. \textbf{Input remapping trick} The input remapping trick \cite{levine2016end} can be applied to condition the variational policy $q$ on a low-dimensional compact state representation, $z$, instead of the high-dimensional states $s$ given by the sensory observations, e.g., camera images. The policy training phase can be done in a controlled environment such that extra measures other than the sensory observation of the system can be provided. These extra measures can be for example the position of a target object on a tabletop. 
Therefore, the image observations $s$ can be paired with a compact task-specific state representation $z$ such that $z$ is used in the E-step for updating the variational policy $q_\phi(\alpha|z)$, and $s$ in the M-step for updating the policy $\pi_\theta(\alpha|s)$.

\textbf{Domain adaptation for perception training} Domain adaptation techniques, e.g., adversarial methods \cite{chen2019adversarial}, can improve the end-to-end training of visuomotor policies with limited robot data samples. The unlabeled task-specific images, captured without involving the robot, can be exploited in the M-step to improve the generality of the visuomotor policy to manipulate novel task objects in cluttered backgrounds. The M-step is updated to include an extra loss function to adapt data from the two different domains: (i) unlabeled images and (ii) robot visuomotor data. The images must contain only one task object in a cluttered background, possibly different from the task object used by the robot during the policy training. Given images from the two domains, the basic idea is to extract visual features such that it is not possible to detect the source of the features. More details of the method can be found in our recent work \cite{chen2019adversarial}.

\section{Expectation-Maximization Policy Training}
\label{sec:em_policy_training}
The EM algorithm is a well-suited approach to find the maximum likelihood solution to the intractable marginalization over the latent variable introduced in~\eqref{eq:margin_tau} and~\eqref{eq:margin_alpha}. We use the EM algorithm to find an optimal policy $\pi_{\theta^*}(\alpha|s)$ by first introducing a variational policy $q(\alpha|s)$, a simpler auxiliary distribution used to improve the training of $\pi_{\theta}$.
As the goal is to find an action trajectory $\tau$ that maximizes the reward probability $p(r|s,\tau)$, we start by expressing its logarithm as $\log p(r | s) = \int q(\alpha|s) \log p(r|s) \,d\alpha$, where we used the identity $\int q(\alpha|s) \,d\alpha = 1$ and omitted the conditioning on $\tau$ in the reward probability for simplicity. Following the EM derivation introduced in \cite{neumann2011variational} and using the identity $p(r|s) = p(r,\alpha|s)/p(\alpha|r,s)$, the expression can be further decomposed into
\begin{align} \log p(r|s) & = \underbrace{\int q(\alpha|s) \log \frac{p(r, \alpha|s)}{q(\alpha|s)} \,d\alpha}_{\text{I}} + \underbrace{\int q(\alpha|s) \log \frac{q(\alpha|s)}{p(\alpha | r, s)}\,d\alpha}_{\text{II}}. \label{eq:marginal_decomposed} \end{align}
The second term (II) is the Kullback-Leibler (KL) divergence $D_{KL}( q(\alpha|s) \,||\, p(\alpha | r,s) )$ between the distributions $q(\alpha|s)$ and $p(\alpha | r,s)$, which is a non-negative quantity. Therefore, the first term (I) provides a lower bound for $\log p(r|s)$. To maximize this lower bound, we use the EM algorithm, an iterative procedure consisting of two steps known as the expectation (E-) and maximization (M-) steps, introduced in the following sections.
\subsection{Expectation step}
\label{sec:EM}
The E-step yields a trust-region policy optimization objective which solves a contextual multi-armed bandit without temporal complexities. The objective of the E-step is to minimize the KL divergence term (II) in~\eqref{eq:marginal_decomposed} by optimizing $q(\alpha | s)$, which in turn indirectly maximizes the lower bound (I). Since $\log p(r|s)$ does not depend on $q(\alpha|s)$, the sum of the KL divergence term (II) and the lower bound term (I) is constant for different $q$. Therefore, reducing (II) by optimizing $q$ increases the lower bound (I).
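Before deriving the objective, the decomposition in~\eqref{eq:marginal_decomposed} can be checked numerically in a toy discrete setting. The distributions below are arbitrary stand-ins (the state $s$ is omitted), not quantities from the paper:

```python
import numpy as np

# Hypothetical discrete example: 4 latent actions, state s omitted.
rng = np.random.default_rng(0)
p_alpha = rng.dirichlet(np.ones(4))     # policy pi(alpha|s)
p_r_given = rng.uniform(0.1, 0.9, 4)    # reward likelihood p(r|s, alpha)
q = rng.dirichlet(np.ones(4))           # arbitrary variational policy q(alpha|s)

p_r = np.sum(p_r_given * p_alpha)                   # marginal p(r|s)
posterior = p_r_given * p_alpha / p_r               # p(alpha|r, s)

elbo = np.sum(q * np.log(p_r_given * p_alpha / q))  # term (I), the lower bound
kl = np.sum(q * np.log(q / posterior))              # term (II), the KL divergence

assert np.isclose(elbo + kl, np.log(p_r))           # decomposition holds exactly
assert kl >= 0.0                                    # so (I) lower-bounds log p(r|s)
```

Since the left-hand side is fixed, any increase of the lower bound (I) obtained by changing $q$ must come from shrinking the KL term (II), which is what the E-step exploits.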
Assuming that $q$ is parametrized by $\phi$, the E-step objective function is given by
\begin{align} \phi^* &=\argmin_{\phi} D_{KL}(\,q_\phi(\alpha|s)\,||\,p(\alpha|r,s)\,) \nonumber \\ &= \argmax_{\phi} \mathbb{E}_{\alpha' \sim q_\phi(\alpha|s)}[\log p(r|s, \alpha')] - D_{KL}(\,q_\phi(\alpha|s)\,||\,\pi_\theta(\alpha|s)\,), \label{eq:E_loss} \end{align}
where we used Bayes' rule $p(\alpha|r,s) = p(r|\alpha,s) p(\alpha|s)/p(r|s)$ and substituted $p(\alpha|s)$ by $\pi_\theta (\alpha|s)$. In typical RL applications, we maximize the reward value given by a stochastic reward function $r(s,\tau)$. In this case, $\mathbb{E}_{q_\phi(\alpha|s)}[\log p(r| s, \alpha)]$ can be maximized indirectly by maximizing the expected reward value $\mathbb{E}_{q_{\phi}(\alpha|s)}[r(s, \alpha)]$, to which we can apply the policy gradient theorem. Note that by $r(s, \alpha)$ we refer to $r(s, g_\vartheta(\alpha))$. Moreover, $D_{KL}(\,q_\phi(\alpha|s)\,||\,\pi_\theta(\alpha|s)\,)$ acts as a trust-region term forcing $q_\phi$ not to deviate too much from the policy distribution $\pi_\theta$. Therefore, we can apply policy search algorithms with trust-region terms to optimize the objective given in~\eqref{eq:E_loss}. Following the derivations introduced in \cite{schulman2015trust}, we adopt the TRPO objective for the E-step optimization
\begin{equation} \phi^* = \argmax_{\phi} \mathbb{E}_{s' \sim p(s), \alpha' \sim \pi_\theta(\alpha|s')} \left[\frac{q_\phi(\alpha'|s')}{\pi_\theta(\alpha'|s')}\,A(s', \alpha') - D_{KL}(q_\phi(\alpha|s') \, || \, \pi_\theta(\alpha | s'))\right ], \label{eq:trpo} \end{equation}
where $A(s', \alpha') = r(s', \alpha') - V_\pi(s')$ is the advantage function, $V_\pi(s') = \mathbb{E}_{ \alpha' \sim \pi_\theta(\alpha|s')} [r(s', \alpha')]$ is the value function, and $\phi^*$ denotes the optimal solution for the given iteration.
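A minimal one-dimensional illustration of the surrogate in~\eqref{eq:trpo}: a unit-variance Gaussian variational policy $q$ is scored against a fixed Gaussian policy $\pi$ using importance-weighted advantages minus a closed-form KL penalty. The quadratic reward and all constants are hypothetical stand-ins:

```python
import numpy as np

# Sketch of the E-step surrogate: alpha ~ pi = N(0, 1), q = N(mu_q, 1).
# The reward r(alpha) = -(alpha - 2)^2 is a made-up stand-in.
rng = np.random.default_rng(0)
alpha = rng.normal(0.0, 1.0, size=4000)      # samples from the current policy pi
reward = -(alpha - 2.0) ** 2
advantage = reward - reward.mean()           # A(s, alpha) with a value baseline

def surrogate(mu_q):
    # importance ratio q(alpha)/pi(alpha) for unit-variance Gaussians
    ratio = np.exp(mu_q * alpha - 0.5 * mu_q ** 2)
    kl = 0.5 * mu_q ** 2                     # KL(N(mu_q, 1) || N(0, 1)) in closed form
    return np.mean(ratio * advantage) - kl

# At mu_q = 0 the ratio is 1 and the surrogate vanishes with the centered advantages;
# shifting q toward the high-reward region improves the E-step objective.
assert abs(surrogate(0.0)) < 1e-9
assert surrogate(0.5) > surrogate(0.0)
```

The KL penalty grows quadratically with the shift, so the surrogate rewards moving $q$ toward high-advantage actions only as long as it stays inside the trust region around $\pi$.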
Note that the action latent variable $\alpha$ is always sampled from the policy $\pi_\theta(\alpha|s)$ and not from the variational policy $q_\phi(\alpha|s)$.
\subsection{Maximization step}
The M-step yields a supervised learning objective with which we train the deep policy in an end-to-end fashion. It directly maximizes the lower bound (I) in~\eqref{eq:marginal_decomposed} by optimizing the policy parameters $\theta$ while holding the variational policy $q_\phi$ constant. Following \cite{deisenroth2013survey} and noting that the dynamics of the system $p(r|\alpha, s)$ are not affected by the choice of the policy parameters $\theta$, we maximize (I) by minimizing the following KL divergence
\begin{equation} \theta^* = \argmin_{\theta} D_{KL}(\,q_\phi(\alpha|s)\, || \,\pi_{\theta}(\alpha|s)\,). \label{eq:M_loss} \end{equation}
In other words, the M-step updates the policy $\pi_{\theta}$ to match the distribution of the variational policy $q_\phi$ which was updated in the E-step. As in the E-step, $\theta^*$ denotes the optimal solution for the given iteration. The M-step can be combined with end-to-end training of the perception and control modules; we refer the reader to Appendix \ref{sec:perception} for the details. A summary of the EM policy training is given in Algorithm \ref{alg:training}. In each iteration, a set of states $\{s_i\}$ is sampled from the initial state distribution $p(s)$. For each state $s_i$, a latent action sample $\alpha_i$ is drawn from the distribution given by the policy $\pi_\theta(\alpha|s_i)$. A generative model $g$ then maps every latent action variable $\alpha_i$ into a full motor trajectory $\tau_i$, which is deployed on the robot to obtain the corresponding reward value $r_i$.
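This loop can be sketched in a minimal scalar setting. The generative model, the terminal reward, the single-gradient-step E-step, and all constants below are hypothetical stand-ins; with equal-variance Gaussian policies the M-step KL minimization reduces to matching means:

```python
import numpy as np

rng = np.random.default_rng(0)
goal = 3.0

def g(a):                                    # stand-in pretrained generative model:
    return np.linspace(0.0, 1.0, 10) * a    # decodes a scalar latent into a 10-step trajectory

def terminal_reward(tau):                   # sparse reward given only at the end state
    return -abs(tau[-1] - goal)

mu_pi = 0.0                                 # mean of pi_theta(alpha|s); state dependence dropped
for _ in range(50):
    alpha = mu_pi + rng.normal(0.0, 1.0, 256)             # sample alpha_i ~ pi_theta
    r = np.array([terminal_reward(g(a)) for a in alpha])  # roll out tau_i = g(alpha_i)
    adv = r - r.mean()                                    # advantage with a value baseline
    # E-step: one score-function gradient step on the surrogate; at mu_q = mu_pi the
    # importance ratio is 1 and the KL penalty gradient vanishes.
    mu_q = mu_pi + 0.1 * np.mean(adv * (alpha - mu_pi))
    # M-step: argmin_theta KL(q || pi); for equal-variance Gaussians, match the mean.
    mu_pi = mu_q

assert abs(g(mu_pi)[-1] - goal) < 0.5       # the decoded trajectory now ends near the goal
```

In practice both policies are neural networks updated over several gradient steps per iteration, but the alternation between sampling, E-step, and M-step is the same.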
In the inner loop, the variational policy $q_\phi$ and the main policy $\pi_\theta$ are updated iteratively by gradient descent on batches of data, using the E- and M-step objective functions of the policy optimization method.
\input{inputs/algorithm}
\section{Related work}
\label{sec:related_work}
Our work addresses two problems: (a) visuomotor policy training based on unsupervised generative model training and trust-region policy optimization, and (b) evaluation of generative models to forecast the efficiency of the final policy training task. We introduce the related work for each of these problems in the following sections.

\textbf{Data-efficient end-to-end policy training:} In recent years, end-to-end training of visuomotor policies using deep RL has gained in popularity in robotics research \cite{ghadirzadeh2017deep, levine2016end, finn2016deep, kalashnikov2018qt, quillen2018deep, singh2017gplac, devin2018deep, pinto2017asymmetric}. However, deep RL algorithms are typically data-hungry, and learning a general policy, i.e., a policy that performs well also for previously unseen inputs, requires a farm of robots continuously collecting data for several days \cite{levine2018learning, finn2017deep, gu2017deep, dasari2019robonet}. This reliance on large-scale data collection has hindered the applicability of RL solutions to many practical robotics tasks. Recent studies have tried to improve data-efficiency by training the policy in simulation and transferring the acquired visuomotor skills to the real setup \cite{quillen2018deep, pinto2017asymmetric, abdolmaleki2020distributional, peng2018sim}, a paradigm known as sim-to-real transfer learning.
Sim-to-real approaches are utilized for two tasks in deep policy training: (i) training the perception model via randomization of the texture and shape of visual objects in simulation and using the trained model directly in the real-world setup (zero-shot transfer) \cite{hamalainen2019affordance, tobin2017domain}, and (ii) training the policy in simulation by randomizing the dynamics of the task and transferring the policy to the real setup by fine-tuning it with real data (few-shot transfer learning) \cite{arndt2019meta, peng2018sim}. However, challenges in the design of the simulation environment can cause large differences between the real and the simulated environments, which hinder efficient knowledge transfer between the two domains. In such cases, transfer learning from other domains, e.g., human demonstrations \cite{butepage2019imitating, yu2018one} or simpler task setups \cite{chen2019adversarial, chen2018deep}, can help the agent learn a policy more efficiently. In this work, we exploit human demonstrations to shape the robot motion trajectories by training generative models that reproduce the demonstrated trajectories. Following our earlier work \cite{chen2019adversarial}, we exploit adversarial domain adaptation techniques \cite{tzeng2017adversarial, tzeng2020adapting} to improve the generality of the acquired policy when it is trained in a simple task environment with a small amount of training data. In the rest of this section, we review related studies that improve the data-efficiency and generality of RL algorithms by utilizing trust-region terms, by converting the RL problem into a supervised learning problem, and by using trajectory-centric approaches that shape motion trajectories prior to policy training. Improving the policy while avoiding abrupt changes in the policy distribution after each update is known as the trust-region approach in policy optimization.
Trust-region policy optimization (TRPO) \cite{schulman2015trust} and proximal policy optimization (PPO) \cite{schulman2017proximal} are two variants of trust-region policy gradient methods that scale well to non-linear policies such as neural networks. The key component of TRPO and PPO is a surrogate objective function with a trust-region term based on which the policy can be updated and monotonically improved. In TRPO, the changes in the distributions of the policies before and after each update are penalized by a KL divergence term. Therefore, the policy is forced to stay in a trust region given by the action distribution of the current policy. Our EM formulation yields a similar trust-region term, with the difference that it penalizes the deviation between the distributions of the deep policy and a so-called variational policy, which will be introduced as part of our proposed optimization algorithm. Since our formulation allows the use of any policy gradient solution, we use the same RL objective function as in TRPO. The EM algorithm has been used for policy training in a number of prior works \cite{neumann2011variational, deisenroth2013survey, levine2013variational}. The key idea is to introduce variational policies to decompose the policy training into two downstream tasks that are trained iteratively until no further policy improvement can be observed \cite{ghadirzadeh2018sensorimotor}. In \cite{levine2016end}, the authors introduced the guided policy search (GPS) algorithm, which divides the visuomotor policy training task into a trajectory optimization and a supervised learning problem. GPS alternates between two steps: (i) optimizing a set of trajectories by exploiting a trust-region term to stay close to the action distribution given by the deep policy, and (ii) updating the deep policy to reproduce the motion trajectories.
Our EM solution differs from the GPS framework and earlier approaches in that we optimize the trajectories by regulating a generative model that is trained prior to the policy training. Training generative models enables the learning framework to exploit human expert knowledge as well as to optimize the policy given only terminal rewards, as explained earlier. Trajectory-centric approaches, such as dynamic movement primitives (DMPs), have been popular because of the ease of integrating expert knowledge into the policy training process via physical demonstration \cite{peters2006policy, peters2008reinforcement, ijspeert2003learning, ijspeert2013dynamical, hazara2019transferring}. However, such models are less expressive than deep neural networks and are particularly limited when it comes to end-to-end training of the perception and control elements of the model. Moreover, these approaches cannot be used to train reactive policies in which the action is adjusted at every time step based on the observed sensory input \cite{haarnoja2018composable}. On the other hand, deep generative models can model complex dependencies within the data by learning the underlying data distribution, from which realistic samples can be obtained. Furthermore, they can be easily accommodated in larger neural networks without affecting the data integrity. Our framework based on generative models enables training both feedback (reactive) and feedforward policies by adjusting the policy network architecture. The use of generative models in robot learning has become popular in recent years \cite{ghadirzadeh2017deep, butepage2019imitating, hamalainen2019affordance, chen2019adversarial, arndt2019meta, lippi2020latent, gothoskar2020learning, igl2018deep, buesing2018learning, mishra2017prediction, ke2018modeling, hafner2018learning, rhinehart2018deep, krupnik2019multi} because of their low-dimensional and regularized latent spaces.
However, latent-variable generative models have mainly been studied for training long-term state prediction models used in the context of trajectory optimization and model-based reinforcement learning \cite{buesing2018learning, mishra2017prediction, ke2018modeling, hafner2018learning, rhinehart2018deep, krupnik2019multi}. Regulating generative models based on reinforcement learning to produce sequences of actions according to the visual state first appeared in our prior work \cite{ghadirzadeh2017deep}. Since then, we have applied the framework to different robotic tasks, e.g., throwing balls \cite{ghadirzadeh2017deep}, shooting hockey pucks \cite{arndt2019meta}, and pouring into mugs \cite{chen2019adversarial, hamalainen2019affordance}, and in a variety of problem domains, e.g., sim-to-real transfer learning \cite{hamalainen2019affordance, arndt2019meta} and domain adaptation to acquire general policies \cite{chen2019adversarial}.

\textbf{Evaluation of generative models:} Although generative models have proved successful in many domains \cite{lippi2020latent, brock2018large, wang2018high, vae_anom, vae_text8672806}, assessing their quality remains a challenging problem \cite{challenging_common}. It involves analysing the quality of both the latent representations and the generated samples. Regarding the latter, generated samples and their variation should resemble those obtained from the training data distribution. Early metrics such as IS \cite{IS_NIPS2016_6125}, FID \cite{FID_NIPS2017_7240} and KID \cite{binkowski2018demystifying} provided a promising start but were shown to be unable to distinguish between failure cases, such as mode collapse or unrealistic generated samples \cite{sajjadi2018assessing, kynkaanniemi2019improved}. Instead of using a one-dimensional score, \cite{sajjadi2018assessing} proposed to evaluate the learned distribution by comparing samples from it with the ground truth training samples using the notion of precision and recall.
Intuitively, precision measures the similarity between the generated and real samples, while recall determines the fraction of the true distribution that is covered by the distribution learned by the model. The measure was further improved both theoretically and practically by \cite{revisiting_pr}, while \cite{kynkaanniemi2019improved} provides an explicit non-parametric variant of the original probabilistic approach. We complement our measures for assessing disentanglement and local linearity of the latent representations with the precision and recall measure provided by \cite{kynkaanniemi2019improved}. Regarding the assessment of the quality of the latent representation, a widely adopted approach is the measure of disentanglement \cite{higgins2018towards, repr_learning_survey, tschannen2018recent}. A representation is said to be disentangled if each latent component encodes exactly one ground truth generative factor present in the data \cite{kim2018disentangling}. Existing frameworks for both learning and evaluating disentangled representations \cite{higgins2017beta, kim2018disentangling, eastwood2018framework, chen2018isolating, kumar2017variational} rely on the assumption that the ground truth factors of variation are known a priori and are independent. The core idea is to measure how changes in the generative factors affect the latent representations and vice versa. When an encoder network is available, this is typically achieved with a classifier trained to predict which generative factor was held constant given a latent representation \cite{higgins2017beta, kim2018disentangling, eastwood2018framework, kumar2017variational, chen2018isolating}. In generative models without an encoder network, such as GANs, disentanglement is measured by visually inspecting the latent traversals, provided that the input data are images \cite{chen2016infogan, jeon2019ibgan, lee2020high, liu2019oogan}.
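The non-parametric precision/recall measure of \cite{kynkaanniemi2019improved} mentioned above can be sketched with $k$-NN radii. The 2-D point sets below are hypothetical stand-ins for feature embeddings of real and generated samples:

```python
import numpy as np

def knn_radii(X, k=3):
    # distance from each point to its k-th nearest neighbour (index 0 is the point itself)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]

def coverage(A, B, k=3):
    # fraction of points in A falling inside the k-NN ball of at least one point of B
    r = knn_radii(B, k)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(np.mean((d <= r[None, :]).any(axis=1)))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (500, 2))        # stand-in embeddings of real samples
good = rng.normal(0.0, 1.0, (500, 2))        # generator matching the real distribution
collapsed = rng.normal(0.0, 0.05, (500, 2))  # mode-collapsed generator

def precision(fake):                         # realism of the generated samples
    return coverage(fake, real)

def recall(fake):                            # coverage of the true distribution
    return coverage(real, fake)

# Mode collapse: samples individually look real, but recall exposes the missing support.
assert precision(collapsed) > 0.5
assert recall(collapsed) < 0.2 < recall(good)
```

This is why a two-number summary is preferred over one-dimensional scores: the collapsed generator scores well on precision alone, and only the recall axis reveals the failure.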
However, these disentanglement measures are difficult to apply when the generative factors of variation are unknown or when manual visual inspection is not possible, both of which are the case with sequences of motor commands for controlling a robotic arm. We therefore define a measure of disentanglement that does not rely on any of these requirements and instead leverages the end states of the downstream robotics task corresponding to a given set of latent action representations. In contrast to existing measures, it captures how changes in the latent space affect the obtained end states in a fully unsupervised way. Moreover, since the generative model in our case is combined with the system dynamics, we complement the evaluation of the latent representations with a measure of local linearity, which quantifies the complexity of the system dynamics in a neighbourhood of a given latent sample.

\section*{Acknowledgments}
This work was supported by the Knut and Alice Wallenberg Foundation, the EU through the project EnTimeMent, the Swedish Foundation for Strategic Research through the COIN project, and the Academy of Finland through the DEEPEN project.

\bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,492
{"url":"https:\/\/im.kendallhunt.com\/MS_ACC\/teachers\/2\/5\/24\/index.html","text":"# Lesson 24\n\nUsing Data Displays to Find Associations\n\n## 24.1: Sports and Musical Instruments (5 minutes)\n\n### Warm-up\n\nThe purpose of this warm-up is for students to answer questions about relative frequency of items after finding missing information in a two-way table.\n\nMonitor for\u00a0students who find\u00a0the percentages for the final two questions using different strategies to share during the whole-class discussion.\n\n### Launch\n\nGive students 2 minutes of quiet work time followed by a whole-class discussion.\n\n### Student Facing\n\nFor a survey, students in a\u00a0class answered these questions:\n\n\u2022 Do you play a sport?\n\u2022 Do you play a musical instrument?\n1. Here is a two-way table that gives some results from the survey. Complete the table, assuming that all students answered both questions.\n\nplays instrument does not play instrument total\nplays sport 5 16\ndoes not play sport\ntotal 15 25\n2. To the nearest percentage point, what percentage of students who play a sport don\u2019t play a musical instrument?\n\n3. To the nearest percentage point, what percentage of students who don\u2019t play a sport also don\u2019t play a musical instrument?\n\n### Activity Synthesis\n\nAsk students to share the missing information they found for the table. Record and display their responses for all to see.\n\nSelect\u00a0students previously identified to explain how they found the percentages for the final two questions and what that percentage represents.\n\n1. Students who find a percentage using the values given (for example 31% since\u00a0$$\\frac{5}{16} \\approx 0.31$$), then subtract from 100% (for example 69% since $$100 - 31 = 69$$)\u00a0to answer the question.\n2. 
Students who find the actual values first by subtracting (for example $$16 - 5 = 11$$)\u00a0then compute the percentage (for example 69%\u00a0because $$\\frac{11}{16}=0.6875$$).\n\nAsk the rest of the class if they agree or disagree with the strategies and give time for any questions they have.\n\n## 24.2: Sports and Music Association (20 minutes)\n\n### Activity\n\nNow that students are more familiar with two-way tables showing relative frequency, they are ready to create their own segmented bar graphs. In this activity, students create two segmented bar graphs based on the same two-way table by considering percentages of the rows and columns separately. After creating the segmented bar graphs, they are analyzed to determine if there is an association present in the data.\n\n### Launch\n\nArrange students in groups of 2. After a brief introduction, give 5\u201310\u00a0minutes of quiet work time. Ask\u00a0students to compare their answers\u00a0with their partner and try to resolve any differences. Finish with a whole-class discussion.\n\nDisplay the two-way table from the previous lesson's cool-down activity containing the data collected about the class's playing sports and musical instruments. If the data is unavailable, the data from this lesson's warm-up can be used.\n\nTell students they should work with their partners to each work on one of the graphs. One student should work on problems 1 and 2 while their partner should work on 3 and 4. After they have completed their graphs, they should work together to understand their partners graphs and complete the last problem together.\n\nAction and Expression: Internalize Executive Functions. Chunk this task into more manageable parts to support students who benefit from support with organization and problem solving. For example, present one question at a time. 
Some students may benefit from a checklist on how to create a segmented bar graph.\nSupports accessibility for: Organization; Attention\n\n### Student Facing\n\nYour teacher will give you a two-way table with information about the number of people in your class who play sports or musical instruments.\n\n1. Complete this table to make a two-way table for the data from earlier. The table will show relative frequencies by row.\n\nplays instrument does not play instrument row total\nplays sport 100%\ndoes not play sport 100%\n\n2. Make a segmented bar graph for the table. Use one bar of the graph for each row of the table.\n\n3. Complete the table to make a two-way table for the data from earlier. The table will show relative frequencies by column.\n\nplays instrument does not play instrument\nplays sport\ndoes not play sport\ncolumn total 100% 100%\n\n4. Using the values in the table, make a segmented bar graph. Use one bar of the graph for each column of the table.\n\n5. Based on the two-way tables and segmented bar graphs, do you think there is an association between playing a sport and playing a musical instrument? Explain how you know.\n\n### Anticipated Misconceptions\n\nStudents may draw the segmented bar graph incorrectly. Most likely, they will accidentally graph frequency instead of relative frequency. They may also graph relative frequencies, but without stacking them. Both segmented bars should go from 0 to 100.\n\n### Activity Synthesis\n\nTo clarify how to create and interpret segmented bar graphs, ask:\n\n\u2022 \u201cWhat different information can be seen by the two segmented bar graphs?\u201d\n\u2022 \u201cWhy are the numbers in the top left box in the two tables different? What do they mean?\u201d (In the first table\u00a0it represents\u00a0the percentage who also play musical instruments out of all the\u00a0people who play sports. 
In the second table\u00a0it represents the percentage of people who also play sports out of all the people who play musical instruments.)\n\u2022 \u201cIs there an association between the two variables? Explain or show your reasoning.\u201d (The answer will depend on class data, but the reasoning should include an analysis of the relative frequencies within categories. There is an association if the percentages within one category are very different from the percentages in another category.)\n\nIf there is an association, ask what the segmented bar graphs would look like if there was no association. If there is not an association, ask what the segmented bar graphs would look like if there was one.\n\nWriting, Speaking: MLR1 Stronger and Clearer Each Time. Use this routine to give students a structured opportunity to revise and refine their response to the last question. Ask each student to meet with 2\u20133 other partners in a row for feedback. Provide students with prompts for feedback that will help them strengthen their ideas and clarify their language (e.g., \u201cWhy do you think there is a (positive\/negative) association?\u201d, \u201cHow do the relative frequencies help to answer this question?\u201d, \u201cHow could you say that another way?\u201d, etc.). Students can borrow ideas and language from each partner to strengthen the final product. They can return to the first partner and revise and refine their initial response.\nDesign Principle(s): Optimize output (for explanation)\n\n## 24.3: Colored Erasers (15 minutes)\n\n### Activity\n\nThis activity provides students less structure for their work in creating segmented bar graphs to determine an association (MP4). In addition, the data in this activity is split into more than two options. Students work individually to create a segmented bar graph based on either columns or rows and then share their information with a partner who has created the other segmented bar graph. 
Together, partners discuss the segmented bar graphs to determine if there is an association between the variables (MP3). In particular, students should notice that there is evidence of an association is the relative frequencies within a category are very different from the relative frequencies in another category.\n\nAs students work, identify groups that use the different segmented bar graphs to explain why there is an association between the color of the eraser and flaws.\n\n### Launch\n\nKeep students in groups of 2. Give 5 minutes quiet work time followed by 5 minutes of partner discussion and then a\u00a0whole-class discussion.\n\nProvide students access to colored pencils. Either assign or have partners choose which will make a graph for each row and which will make a graph for each column.\n\nRepresentation: Access for Perception. Read the directions aloud. Students who both listen to and read the information will benefit from extra processing time. Check for understanding by inviting students to rephrase directions in their own words.\nSupports accessibility for: Language\n\n### Student Facing\n\nAn eraser factory has five machines. One machine makes the eraser shapes. Then each shape goes through the red machine, blue machine, yellow machine, or green machine to have a side colored.\n\nThe manager notices that an uncolored side of some erasers is flawed at the end of the process and wants to know which machine needs to be fixed: the shape machine or some of the color machines. The manager collected data on the number of flawed and unflawed erasers of each color.\n\nunflawed flawed total\nred 285 15 300\nblue 223 17 240\nyellow 120 80 200\ngreen 195 65 260\ntotal 823 177 1000\n1. Work with a partner. Each of you should make one segmented bar graph for the data in the table. One segmented bar graph should have a bar for each row of the table. The other segmented bar graph should have one bar for each column of the table.\n\n2. 
Are the flawed erasers associated with certain colors? If so, which colors? Explain your reasoning.

### Student Facing

#### Are you ready for more?

Based on the federal budgets for 2009, the table shows where some of the federal money was expected to go. The values are in billions of U.S. Dollars.

|           | United States | Japan | United Kingdom |
|-----------|---------------|-------|----------------|
| defense   | 718.4         | 42.8  | 49.2           |
| education | 44.9          | 47.5  | 113.9          |

1. Why would a segmented bar graph be more useful than the table of data to see any associations between the country and where the money is spent?
2. Create a segmented bar graph that represents the data from the table.
3. Is there an association between the country's budget and their spending in these areas? Explain your reasoning.

### Activity Synthesis

The purpose of this discussion is to identify strategies for creating segmented bar graphs and for analyzing them to determine if there is an association among variables.

Ask, "What strategies did you use to create the segmented bar graphs?" (First, we created a new table of the relative frequencies. Then we approximated the heights of the segments based on the percentages from the table.)

Select previously identified groups to share their explanation for noticing an association.

1. Groups that use the segmented bar graph based on rows.
2. Groups that use the segmented bar graph based on columns.

After both explanations are shared, ask students, "Do you think that noticing the association was easier with one of the graphs?" (Likely the segmented bar graph based on rows is easier since there are only 2 segments and it is easier to see that the yellow and green erasers are more flawed.)

Finally, ask students, "If there was not an association between color and flaws, what might the segmented bar graph based on the rows look like?
What might the segmented bar graph based on the columns look like?" (The segmented bar graph based on the rows would have each segmented bar look about the same. That is, the line dividing the two segments would be at about the same height in each bar. The segmented bar graph based on the columns would have segments that are all approximately equal. That is, each segment should represent about 25% of the entire bar.)

## Lesson Synthesis

### Lesson Synthesis

Remind students that we have been looking for associations in categorical data, and that there is evidence of an association if the relative frequencies of some characteristic are very different from each other in the different groups. Ask:

• "Is it easier to see evidence of an association in a frequency table or a relative frequency table?" (It depends on the data. If the two groups are approximately the same size, it doesn't matter very much, but when they are different sizes, it is usually easier to compare using relative frequencies.)
• "How can we see evidence of an association in a two-way table of either kind?" (By numerically comparing the proportions between the two groups.)
• "How can we see evidence of an association in a bar graph or segmented bar graph?" (By visually comparing the proportions between the two groups.)

## Student Lesson Summary

### Student Facing

In an earlier lesson, we looked at data on meditation and state of mind in athletes.

Is there an association between meditation and state of mind?

The bar graph shows that more athletes were calm than agitated among the group that meditated, and more athletes were agitated than calm among the group that did not.
We can see the proportions of calm meditators and calm non-meditators from the segmented bar graph, which shows that about 66% of athletes who meditated were calm, whereas only about 27% of those who did not meditate were calm.

This does not necessarily mean that meditation causes calm; it could be the other way around, that calm athletes are more inclined to meditate. But it does suggest that there is an association between meditating and calmness.
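As a quick numerical check on the Colored Erasers activity above, a short script (an illustrative sketch, not part of the curriculum materials) can compute the row-wise relative frequencies of flawed erasers that the segmented bar graphs display:

```python
# Counts (unflawed, flawed) per color, taken from the activity's table.
data = {
    "red":    (285, 15),
    "blue":   (223, 17),
    "yellow": (120, 80),
    "green":  (195, 65),
}

# Share of flawed erasers within each color: the "row" relative frequencies.
flawed_share = {
    color: flawed / (unflawed + flawed)
    for color, (unflawed, flawed) in data.items()
}

for color, share in flawed_share.items():
    print(f"{color}: {share:.1%} flawed")
# red: 5.0%, blue: 7.1%, yellow: 40.0%, green: 25.0%. The large gap between
# colors is the evidence of an association between color and flaws.
```

The yellow and green percentages are far above red and blue, matching the reasoning students are expected to give in the activity synthesis.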
The Canon TS-E f/3.5L II is a professional tilt-shift, fixed-focal-length wide-angle lens offering the equivalent of the movements of a bellows view camera for Canon EOS bodies. Although it uses an EF mount, this lens has no autofocus; focusing is exclusively manual. Although it is referred to as a tilt-shift lens, there are in fact two possible kinds of movement: The shift of ± allows adjusting the position of the subject in the image without moving the camera body. It also avoids the convergence of parallel lines that appears in photos of buildings such as high-rises. Shift can also be used to make two-shot panoramas without moving the body. The tilt of ±8.5° exploits the Scheimpflug principle to make the near and far planes of sharpness converge. Without tilt, these two planes are parallel and define the depth of field. This control over depth of field can be used either to reduce the zone of sharpness or to extend it. One can thus create a miniature effect by placing the subject close to the intersection zone of the planes of sharpness, hence close to the camera and below it. Or, on the contrary, obtain a very extended depth of field beginning very close up, by including the whole subject in the wedge of sharpness, which always begins in the same plane as the image. The first version of this lens dates from 1991 and does not allow movements of as great an amplitude as the version II described here, sold since 2009. One of the new features of version II is the ability to combine each movement with a rotation of ±90°. The combination of shift, with or without rotation, and tilt, with or without rotation, gives practically as much freedom as a bellows view camera.
Beyond these specific capabilities, reviewers say it is quite possibly the best lens Canon has built. This image quality is explained by the fact that the shift function requires producing a quality image over a surface much larger than the sensor, so the center of the image is excellent. References External links The Tilt Shift group on flickr gives an idea of the capabilities of this type of lens.
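As an illustrative aside (not from the original article), the Scheimpflug geometry behind the tilt movement can be summarized by Merklinger's "hinge rule": the plane of sharp focus rotates about a line located a distance $J$ below the lens, where $f$ is the focal length and $\varphi$ the tilt angle. The numeric value below assumes a hypothetical focal length of 24 mm together with the lens's maximum tilt of 8.5°:

```latex
% Hinge rule (Merklinger). The plane of sharp focus pivots about a line a
% distance J below the lens center. Assumed values: f = 24 mm, tilt = 8.5°.
\[
J = \frac{f}{\sin\varphi},
\qquad
J = \frac{24\,\text{mm}}{\sin 8.5^\circ} \approx 162\,\text{mm}.
\]
```

The smaller the tilt, the farther below the lens the pivot line sits, which is why even small tilt angles move the plane of focus dramatically.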
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.IO;
using System.Drawing;

using Aga.Controls.Tree;

namespace Duality.Editor.Plugins.ProjectView.TreeModels
{
	public abstract class NodeBase : Node
	{
		public static string GetNodePathId(string nodePath)
		{
			if (Resource.IsDefaultContentPath(nodePath))
				return nodePath.ToUpper();
			else
				return Path.GetFullPath(nodePath).ToUpper();
		}

		private string nodePath = null;
		private bool readOnly = false;

		public IEnumerable<NodeBase> NodesDeep
		{
			get
			{
				foreach (NodeBase n in this.Nodes.OfType<NodeBase>())
				{
					yield return n;
					foreach (NodeBase c in n.NodesDeep)
						yield return c;
				}
			}
		}
		public string NodePath
		{
			get { return this.nodePath; }
			set
			{
				if (this.nodePath != value)
				{
					string oldPath = this.nodePath;
					this.nodePath = value;
					this.OnNodePathChanged(oldPath);
				}
			}
		}
		public string NodePathId
		{
			get { return GetNodePathId(this.NodePath); }
		}
		public bool ReadOnly
		{
			get { return this.readOnly; }
		}
		public virtual string TypeName
		{
			get { return null; }
		}

		protected NodeBase(string path, string name, bool readOnly = false) : base(name)
		{
			this.nodePath = path;
			this.readOnly = readOnly;
		}

		public void NotifyVisible()
		{
			this.OnFirstVisible();
		}
		public void ApplyPathToName()
		{
			this.Text = this.GetNameFromPath(this.nodePath);
		}
		public bool ApplyNameToPath()
		{
			string outVar;
			return this.ApplyNameToPath(out outVar);
		}
		public virtual bool ApplyNameToPath(out string conflictingPath)
		{
			conflictingPath = null;
			return false;
		}
		public virtual string GetNameFromPath(string path)
		{
			if (Resource.IsDefaultContentPath(path))
			{
				string[] pathSplit = path.Split(new[] { ':' }, StringSplitOptions.RemoveEmptyEntries);
				return pathSplit[pathSplit.Length - 1];
			}
			else
			{
				return path;
			}
		}

		protected virtual void OnNodePathChanged(string oldPath) {}
		protected virtual void OnFirstVisible() {}
	}

	public class DirectoryNode : NodeBase
	{
		public DirectoryNode(string path, bool readOnly = false) : base(path, null, readOnly)
		{
			this.ApplyPathToName();
		}

		public override bool ApplyNameToPath(out string conflictingPath)
		{
			conflictingPath = null;
			if (this.ReadOnly) return false;

			string oldPath = this.NodePath;
			string oldDirName = Path.GetFileName(oldPath);
			string newPathBase = oldPath.Remove(oldPath.Length - oldDirName.Length, oldDirName.Length);
			string newPath = newPathBase + this.Text;
			bool equalsCaseInsensitive = newPath.ToUpper() == oldPath.ToUpper();

			if (Directory.Exists(newPath) && !equalsCaseInsensitive)
			{
				conflictingPath = newPath;
				return false;
			}

			try
			{
				if (equalsCaseInsensitive)
				{
					// As Windows doesn't properly apply renames that change character casing
					// and nothing else, we'll do a two-step rename using a temp path.
					string tempPath = newPath + "_sSJencn83rhfSHhfn3ns456omvmvs28fndDN84ns";
					Directory.Move(oldPath, tempPath);
					Directory.Move(tempPath, newPath);
				}
				else
				{
					Directory.Move(oldPath, newPath);
				}
			}
			catch (Exception e)
			{
				Logs.Editor.WriteError("Error moving directory from '{0}' to '{1}': {2}", oldPath, newPath, e);
				return false;
			}

			// Between performing the move event and it being received by the FileEventManager there will be a
			// short window of inconsistency where the existing Resource is still registered under its old name
			// but the file is already renamed to the new name. To prevent loading the Resource twice, we'll pre-register
			// it under its new name.
			foreach (ResourceNode resNode in this.NodesDeep.OfType<ResourceNode>())
			{
				if (resNode.ResLink.ResWeak != null)
					ContentProvider.AddContent(resNode.NodePath.Replace(oldPath, newPath), resNode.ResLink.ResWeak);
			}

			this.NodePath = newPath;
			return true;
		}
		public override string GetNameFromPath(string path)
		{
			if (!Resource.IsDefaultContentPath(path))
				return Path.GetFileName(path);
			else
				return base.GetNameFromPath(path);
		}
		protected override void OnNodePathChanged(string oldPath)
		{
			base.OnNodePathChanged(oldPath);
			foreach (NodeBase node in this.Nodes)
			{
				node.NodePath = this.NodePath + node.NodePath.Remove(0, oldPath.Length);
			}
		}
	}

	public class ResourceNode : NodeBase
	{
		private IContentRef res = null;
		private Type resType = null;

		public IContentRef ResLink
		{
			get { return this.res; }
		}
		public Type ResType
		{
			get { return this.resType; }
		}
		public override string TypeName
		{
			get { return this.resType != null ? this.resType.Name : null; }
		}

		public ResourceNode(string path) : base(path, null, false)
		{
			this.res = new ContentRef<Resource>(null, path);
			this.resType = Resource.GetTypeByFileName(path);
			this.ApplyPathToName();
		}
		public ResourceNode(IContentRef res) : base(res.Path, null, res.IsDefaultContent)
		{
			this.res = res;
			this.resType = res.ResType;
			this.ApplyPathToName();
		}

		public void UpdateImage()
		{
			this.Image = GetTypeImage(this.resType, this.res);
		}
		public override bool ApplyNameToPath(out string conflictingPath)
		{
			conflictingPath = null;
			if (this.ReadOnly) return false;

			string oldPath = this.NodePath;
			string oldFileName = Path.GetFileName(oldPath);
			string newPathBase = oldPath.Remove(oldPath.Length - oldFileName.Length, oldFileName.Length);
			string newPath = newPathBase + this.Text + Resource.GetFileExtByType(this.resType);
			bool equalsCaseInsensitive = newPath.ToUpper() == oldPath.ToUpper();

			if (File.Exists(newPath) && !equalsCaseInsensitive)
			{
				conflictingPath = newPath;
				return false;
			}

			try
			{
				if (equalsCaseInsensitive)
				{
					// As Windows doesn't properly apply renames that change character casing
					// and nothing else, we'll do a two-step rename using a temp path.
					string tempPath = newPath + "_sSJencn83rhfSHhfn3ns456omvmvs28fndDN84ns";
					File.Move(oldPath, tempPath);
					File.Move(tempPath, newPath);
				}
				else
				{
					File.Move(oldPath, newPath);
				}
			}
			catch (Exception e)
			{
				Logs.Editor.WriteError("Error moving Resource from '{0}' to '{1}': {2}", oldPath, newPath, e);
				return false;
			}

			// Between performing the move event and it being received by the FileEventManager there will be a
			// short window of inconsistency where the existing Resource is still registered under its old name
			// but the file is already renamed to the new name. To prevent loading the Resource twice, we'll pre-register
			// it under its new name.
			if (this.res.ResWeak != null)
				ContentProvider.AddContent(newPath, this.res.ResWeak);

			this.NodePath = newPath;
			return true;
		}
		public override string GetNameFromPath(string path)
		{
			if (!Resource.IsDefaultContentPath(path))
				return Resource.GetNameFromPath(path);
			else
				return base.GetNameFromPath(path);
		}
		protected override void OnNodePathChanged(string oldPath)
		{
			base.OnNodePathChanged(oldPath);
			this.res.Path = this.NodePath;
		}
		protected override void OnFirstVisible()
		{
			base.OnFirstVisible();
			this.UpdateImage();
		}

		public static Image GetTypeImage(Type type, IContentRef resLink = null)
		{
			return (type ?? typeof(Resource)).GetEditorImage();
		}
	}
}
```
Have you ever heard someone speak about the primary, secondary, and tertiary of something—only to wonder what they would have called the next element? Well, this crash course on Latin numerals will help clear things up!

Posted by Zαck West

Ever wonder what comes after primary, secondary, or tertiary in counting sequences? These terms come from Latin numerals which are expressed in Cardinal, Ordinal, and several other forms. These terms are used to describe the rank order in which an element exists relative to other elements.

Contents: 1 Naming the Numerals, 2 Primary, 3 Secondary, 4 Tertiary, 5 Quaternary, 6 Quinary, 7 Senary, 8 Septenary, 9 Octonary, 10 Nonary, 11 Denary, 12 Beyond Ten

For example—the Latin ordinal primus, meaning first, gives us primary, which is used to describe the first-order rank of an element. Primary elections, primary contact, primary residence—these all indicate the first of something where there may be others.

Naming the Numerals

These terms are used in variation throughout the fields of Science, Engineering, and Mathematics as ways to count and signify order as well as sometimes to simply assign arbitrary taxonomic value. Below is a collection of the first 10 Latin numerals, though they continue far beyond.

Primary

Ordinal: Primus
Cardinal: Unus

The first element of a counting sequence, also used to indicate the principal matter of importance. Examples include primary elections, primary schools, and primary contact numbers.

The word Primary finds its root in the Latin words primus and arius, used to indicate a primary rank in military contexts or, similar to its modern use, the principal matter of concern.

Secondary

Ordinal: Secundus
Cardinal: Duo

Secondary describes the term directly following the primary term or element in a counting sequence or the primary matter of concern. Examples of secondary include secondary elections, secondary schools, or a secondary coil in certain electrical circuits.
The word secondary comes from the Middle English secundarie, which is descended from the Latin word secundus, meaning "to follow," and can be compared with the term sequor, which bears the same meaning.

Tertiary

Ordinal: Tertius
Cardinal: Tres

Tertiary is used to describe the third level within a sequence of elements, events, or otherwise meaningful distinctions.

Example: In Geology, Tertiary refers to the first period of the Cenozoic era.

Tertiary comes from the Latin word tertius, which means third. Examples include tertiary compounds in chemistry, describing those that have been subjected to the substitution of 3 atoms.

Quaternary

Ordinal: Quartus
Cardinal: Quattuor

Quaternary is used to describe an element as being of fourth-order rank in a sequence or collection. In Geology, the term Quaternary is used to describe the most recent period in the Cenozoic era, directly following the Tertiary Period.

Quaternary comes from the Latin quaternarius, meaning "to contain or be made of four," which is a combination of the words quaterni and arius. In chemistry, the term quaternary describes certain types of compounds including amines and ammonium salts.

Quinary

Ordinal: Quintus
Cardinal: Quinque

Quinary is used to describe an element of fifth-order rank or, more generally, to describe something made of five sub-units.

The term quinary comes from the Latin word quinarius, meaning "containing five each," which represents a combination of the terms quini and arius.

In Zoology, quinary refers to an outdated system of classification by which the Animal kingdom was divided into 5 subkingdoms and each subkingdom was divided into five subclasses. Quinary also refers to a numeral system that uses 5 as the base value—possibly having come about in connection to there being five fingers on the human hand.

Senary

Ordinal: Sextus
Cardinal: Sex

Senary is used to describe an element of sixth-order rank within a larger sequence or collection.
It comes from the Latin term senarius, meaning "consisting of six each," which is a combination of the words seni and arius.

In mathematics, a senary numeral system is also known as a base-6 or heximal number system. It is thought that senary numeral systems evolved from the use of five fingers and the closed fist to represent the terms 0-5 (six total terms).

Septenary

Ordinal: Septimus
Cardinal: Septem

Septenary is used to describe an element of seventh-order rank within a sequence or collection of elements. It comes from the Latin term septenarius, meaning "consisting of seven each," which is made of the terms septeni and arius.

Septenary is used in Theosophy to describe the seven principles of man, and also describes the system of counting 7-day weeks in some contexts.

Octonary

Ordinal: Octavus
Cardinal: Octo

Octonary is used to describe an element of eighth-order rank within a sequence or collection of elements. It comes from the Latin term octonarius, meaning "containing eight," which is made of the terms octoni and arius.

Octonary can be used to refer to the style of poetry using eight lines. While similar in concept, Octonary is not used to describe octal counting systems—a means of numerical representation in computer science.

Nonary

Ordinal: Nonus
Cardinal: Novem

Nonary is used to describe an element of ninth-order rank within a collection or sequence of elements. It comes from the Latin term nonarius, meaning "containing nine," which is a combination of the terms noni and arius.

Nonary is used to describe less common base-9 counting systems, where the digit string 10 represents the value 9. A nonary counting system uses only the digits 0-8, similar in concept to how binary systems use only the digits 0-1.

Denary

Ordinal: Decimus
Cardinal: Decem

Denary is used to describe an element of tenth-order rank within a system, collection, or sequence of elements.
It comes from the Middle English term denarie, which evolved from the Latin term denarius, meaning "containing ten," which was sometimes used as denarius nummus to describe a coin of ten subunits.

Denary is often used to describe the decimal counting system, which uses a base of 10.

Beyond Ten

The numeral system presented here is the Latin version (latinate) of the English ordinal counting system. There are a great many more terms in this sequence but, other than a few such as hexadecimal, they become increasingly rare in their usage. For a full listing of these terms, just check out the Wikipedia entry for numeral systems, which includes a more exhaustive listing—albeit with a sparser presentation of definition.
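The senary, nonary, and denary systems mentioned above differ only in their base. A small illustrative snippet (mine, not from the article) makes the relationship concrete by writing the same values in each base:

```python
def to_base(n, base):
    """Write a non-negative integer n in the given base (digits 0-9 only)."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)      # peel off the least significant digit
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_base(9, 9))    # "10": in a nonary (base-9) system, 10 means nine
print(to_base(42, 6))   # "110": the senary (base-6) form of forty-two
print(to_base(42, 10))  # "42": the familiar denary (base-10) form
```

The same routine works for any base up to ten, so it also covers the quinary (base-5) system discussed earlier.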
Q&A with CeCe Osgood — and a giveaway!

When I finally got my act together and joined a chick lit writer's group on Facebook, CeCe Osgood was the first of many lovely ladies with whom I connected. She is a former HBO script analyst turned novelist, and the title of her debut novel is one you simply can't ignore! Exploring life after divorce, The Divorced Not Dead Workshop is a romantic comedy about dating after divorce. After the interview, leave a comment for the chance to win a copy!

Hi CeCe and welcome to Skipping Midnight! Your debut novel, The Divorced Not Dead Workshop, is described as a romantic comedy about dating after divorce. What else can you tell us about the story?

It's an opening-your-heart-again story. Going through a divorce can make you wary about getting back into dating or even considering the possibility of looking for a partner again. I believe finding out who you are and what you want is the first step: I call it your "me skills" in the novel's workshop. I wanted to deal with the serious subject of divorce in a funny and, I hope, helpful way. Another aspect of the story is the friendship with Dorsey and her BFFs, Pilar and Mimi. They're the kind of friends we all want to have: true blue, forgiving and yet they can kick your butt when you need it. Especially Pilar!

What inspired you to write it?

Frankly, it came like a bolt out of the blue. I woke up one morning with the notion of a dating workshop on a cruise ship and Dorsey Bing, the main character, came ambling toward me in her kitten heels. I liked her so much I couldn't help but fashion a plot for her. It took a few drafts before I worked out the nuances of the romance and how it threaded into the dating workshop.

What do you think people will love most about this book?

I hope they'll love the characters. I've tried to make them funny yet realistic and very human.

What do you love about it?
I get a kick out of the character interactions; some of them still make me laugh. And I love hearing from readers. One told me she almost fell off her couch laughing so hard, and several mentioned using tips from the "workshop" in their dating lives. Yay!

In the spirit of Desperately Ever After, I must ask: If you had to choose, which fairy tale character (doesn't have to be a princess and doesn't have to be Disney) would you be and why?

I guess it would be Beauty and the Beast, although I don't remember it as a fairytale from my childhood. Seems to me my first memory of it was in film school when I saw Jean Cocteau's movie, made in the '40s with a lot of incredible special effects for the time. And I adore the Disney animated version. I can still hear Angela Lansbury singing "tale as old as time, song as old as rhyme." The fairy tale's theme of substance over appearance resonates deeply with me. Belle chooses the Beast unaware he is the handsome man in her dreams. She chooses him because of their friendship and her growing awareness of loving him, no matter what he looks like. So, I guess my fairy tale character would be Belle … or, on my bloated and grumpy days, the Beast!

Spreading the word is one of the most daunting aspects of being an indie author. How do you get your titles out there?

I'm just learning my way around cyber-space and it is daunting. Sometimes I feel like I'm spinning my wheels. But what's been fun is visiting a lot of blogs and connecting with bloggers and reviewers who are really insightful, knowledgeable and so passionate about books. Overall though, I've realized I have to take the long view in regard to getting visible in the self-publishing world.

A writer's life is certainly not an easy one—from rejections to dwindling sales, to tough reviews, and so on. How did you get through the bad days?

Ice cream!
(laughs) Seriously though, I felt a book about divorce had to deal with rejection, disappointment and thwarted expectations, so I did a lot of research on how to get through the down times and the negative thinking that seems to prevail when love and career and life in general appears bleak. I put some of what I learned into the book's workshop and that process has helped me on my bad days.

What is the best piece of advice you've received that you can share with aspiring writers?

Get a gallon or four of "butt glue" to keep your butt in the chair or wherever you write. And think in terms of many drafts, not one or two, but many.

What can fans expect from you next?

I'm puzzling out a mystery, although it'll most likely be a comedy mystery since humor keeps popping up.

In addition to her website, ceceosgood.com, CeCe is active on Twitter (@CeCeOsgood), Facebook and Pinterest. Her book is available at both Amazon.com (US) and Amazon.co.uk (UK). Comment below to win an e-book copy of The Divorced Not Dead Workshop. The giveaway will remain open until 11:59 p.m. (PST) Tuesday, July 15, at which point a winner will be chosen at random. Best of luck to you all!

FEED YOUR BEACH BAG! Desperately Ever After imagines what happened to our favorite fairy tale princesses in a Desperate Housewives/Sex and the City sort of way. It's a 2014 NIEA Chick-Lit Finalist and an Amazon Top 100 seller for both Women's Fiction and Humor. It's on a 99-cent Kindle Countdown sale for the next few days, so snag one before the sequel comes out next month! Click here for Amazon US or here for Amazon UK

Next Wednesday, July 16, Hilary Grossman will stop by to talk about her debut novel, Dangled Carat. Don't miss it! Sign up here for exclusive updates and advanced notice of giveaways

Filed under Author Interviews, Contests

4 responses to "Q&A with CeCe Osgood — and a giveaway!"

cathylykens: Laughter is so important to coping with life, especially when you are struggling.
I love a book that makes me laugh out loud. I am looking forward to reading The Divorced Not Dead Workshop.

CeCe Osgood: Thank you. I so agree with you. Mark Twain said "Humor is mankind's greatest blessing." Right on, Mr. Twain. Having laughter in my life keeps me sane. Hope you get some laughs from my novel!

Interesting title

I wondered about the title quite a bit. A few people didn't care for it at all, others started to laugh. I went with the laughs!
Wilson Elser Awards 2019 Scholarship to Peyton Mosman

The Thomas W. Wilson Sr. Scholarship Named for a Founder of the Firm

DALLAS (June 5, 2019) National law firm Wilson Elser is proud to announce that it has presented its Thomas W. Wilson Sr. Scholarship for academic year 2019−2020 to Peyton Mosman, the daughter of Valerie Mosman, an associate in the firm's Dallas, Texas, office. Mosman will enroll as a freshman in the Honors College at the University of Arkansas with plans to major in speech pathology, in pursuit of helping children and adolescents to overcome developmental speech disorders.

"Peyton personifies the criteria established for this award," shared Daniel J. McMahon, Wilson Elser chair. "She is graduating from high school near the top of her class with a cumulative 3.9 grade point average, and is a member of the National Honor Society, Senior Executive Board and swim team. Wilson Elser is proud to contribute to Peyton's ongoing success."

In addition to her scholastic achievements, Mosman has worked with underprivileged children through her work with the National Charity League for the past six years and has served as a Sunday school teacher and camp counselor.

The Thomas W. Wilson Sr. Scholarship, named for one of the firm's founders, was established to support the college education of selected award recipients who are children of current, full-time Wilson Elser employees. This is the 10th scholarship the program has awarded since its inception in 2010. The scholarship provides its recipients with $10,000 per year to attend an accredited college or university. Applicants are evaluated on academic performance, demonstrated leadership and participation in school and community activities, work experience, career and educational goals and objectives, and personal or family circumstances.
About Wilson Elser

Wilson Elser (www.wilsonelser.com) helps individuals and organizations transcend challenges and realize goals by offering an optimal balance of legal excellence and bottom-line value. More than 800 attorneys strong, Wilson Elser serves clients of all sizes, across multiple industries and around the world. Wilson Elser has 37 strategically located offices in the United States and another in London. It is a founding member of Legalign Global, a close alliance of four of the world's leading insurance law firms, created to assist companies doing business internationally. This depth and scale has made Wilson Elser one of the nation's most influential law firms, ranked in the Am Law 200 and 53rd in The National Law Journal's NLJ 500.

Wayne Travers, Jr.
wayne.traversjr@wilsonelser.com
Valerie A. Mosman
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
761
{"url":"https:\/\/astronomy.stackexchange.com\/questions\/21849\/why-is-gravity-only-an-attractive-force","text":"# Why is gravity only an attractive force?\n\nAs per the universal law of attraction, any two bodies (having some mass) experience a force of 'attraction' which is proportionate to ...and ...inverse proportionate ....\n\nThen comes my question: Why it should be force should be of type 'attraction' only ? Why it should not be repulsion \/ any other kind of force ?\n\n\u2022 There was an article in New Scientist on this a while ago. It was describing research into how antimatter (presumed to have a negative mass) reacts under Earth's gravity. It is thought that antimatter (specifically anti-hydrogen, in this case) may rise instead of fall. \u2013\u00a0Beta Decay Jul 28 '17 at 12:41\n\u2022 The aforementioned article \u2013\u00a0Beta Decay Jul 28 '17 at 12:42\n\u2022 @BetaDecay I'm not sure what that article is talking about. No real predictions in physics suggest antiparticles have negative mass. From wikipedia: \"A particle and its antiparticle have the same mass as one another, but opposite electric charge and other quantum numbers.\" \u2013\u00a0zephyr Jul 28 '17 at 15:24\n\u2022 I think the question is stated in a manner that limits generalizatioon. I think the larger question is if gravity is a manifestation of some larger theory under certain boundary conditions. Newton's theory of gravity is based on \"ordinary observations\" and works very well for most human considerations. Calculations based on Newton's theory got men to the moon and back. However for the orbit of Mercury and timing of GPS then relativistic considerations need to be taken into account. So back to what should be the question? Given that \"dark energy\" is causing the universe to expand faster and fas \u2013\u00a0MaxW Jul 29 '17 at 2:15\n\u2022 Same question on the physics board. 
physics.stackexchange.com\/questions\/11542\/\u2026 You can look up Spin 1 and Spin 2 particles for some explanations, but until gravity is actually understood, all the answers are pretty much hypothesis. Some related answers here as well: quora.com\/\u2026 \u2013\u00a0userLTK Jan 27 '18 at 1:11\n\n### Because mass is positive\n\nTo expand your quote concerning the gravitational force into an equation:\n\n$$F_G = -\\frac{Gm_1m_2}{r^2}$$\n\nThe force of gravity, $F_G$ is proportional to the product of the masses and inversely proportional to the distance, $r$, squared. Let's break this down and see what might cause $F_G$ to be positive.\n\nIn this equation, $r$ cannot be negative because it's a distance between two locations. Two locations cannot be a negative distance apart. And even if they somehow were, the squared would take care of that anyway.\n\n$G$ is the universal constant and always positive. You might argue that it could possibly be negative, but that's not possible. $G$ actually doesn't really exist. It doesn't describe anything fundamental to the physics of the universe. $G$ is simply a bookkeeping constant that allows us to get the right answer for the force based on any choice of units for mass and distance. Technically, if one uses the \"correct\" units for mass and distance (e.g., the Planck units), then $G=1$ and effectively doesn't exist. Since $G$ is just a scaling factor that depends on the choice of units, it will only be a positive number.\n\nThat leaves us with the masses. These are the only things which could possibly be negative. Of course, to get a positive, repulsive force, one mass would have to be positive and the other negative. But what exactly is a negative mass? Mass is the metric which describes \"how much\" of something there is. 
How can you have less than nothing of something?\n\n### Why can mass not be negative?\n\nIf you want to look at this another way, you can show that if mass could be negative, you'd get nonsensical results! Assuming of course, all other aspects of physics were the same. Recall from Newton's second law that\n\n$$F = ma$$\n\nLet's say there are two blocks sitting on a table. One block has a mass $m_1>0$ which is positive and the other has a mass $m_2<0$ which is negative. Ignore all other forces on these two blocks for the moment.\n\nI go up to $m_1$ and I apply a force to push this mass forward. The acceleration that is induced is: $a = F\/m_1$. Necessarily, the direction in which $m_1$ moves is the same direction in which I'm pushing. That's all well and good.\n\nNow I go over to $m_2$ and I apply the same force, attempting to push it forward on the table. The acceleration induced on $m_2$ will be: $a = -F\/|m_2|$. Note I made $m_2$ positive and pulled out the negative sign. You can see that if my force is forward, the direction the mass moves will be backwards! But here's the problem, my hand is in the way because it's trying to push to mass. As the mass tries to move backwards into my hand, it will be applying a force back on my hand, which by Newton's third law, necessarily mean's my hand is applying more force on the block, which then applies more force on my hand, ... and suddenly infinite forces are being applied or equivalently, these objects are infinitely accelerating. This is described by the concept of Runaway Motion.\n\nIf this seems strange to you, that's because it is. If negative masses existed, we'd live in a very weird universe. Fortunately, we live in a universe where physics makes sense, mass is positive, and by extension gravity is always attractive.\n\n\u2022 As convincing as this explanation seems, electrical charge follows the inverse square law and charge can be positive or negative. 
I see no reason why mass couldn't theoretically behave the same way. I believe it's actually a \"fundamental mystery\" as to why gravity is the only one of the four known forces that acts only to attract and never to repel. The other 3 fundamental forces can do either. \u2013\u00a0user21 Jul 28 '17 at 15:33\n\u2022 @barrycarter I thought about addressing this in my answer. I guess I should have. The catch here is that Newton's second law is not $F=ea$, it's $F=ma$. You can't apply the argument above to negative electric charges for that reason. The reason mass can't behave in the same way is for the reason I outlined above. It isn't a mystery. If Newton's second law instead was $F=ea$, then electric charge could not be negative. \u2013\u00a0zephyr Jul 28 '17 at 15:48\n\u2022 The second law describes inertial mass, not (necessarily) gravitational mass though. \u2013\u00a0adrianmcmenamin Jul 28 '17 at 16:24\n\u2022 @adrianmcmenamin But all evidence suggests the two are equivalent. In fact, their equivalence is a major component of GR and there has been no evidence so far showing this part of GR is wrong. I described the answer for the universe we appear to live in (aside from the potentiality for negative mass). If you want to throw in all sorts of other complications, that's outside the scope of my answer. \u2013\u00a0zephyr Jul 28 '17 at 16:32\n\u2022 Interestingly, the force of gravity would be negative if the distance was imaginary! So just imagine some mass at a particular distance from you and it will repel you. \u2013\u00a0zephyr Jul 28 '17 at 16:55\n\nWhy is gravity only an attractive force?\n\nTL;DR\nBecause mass is always positive.\n\nThere are different notions of mass, but they're equivalent.\nThere are two distinct notions of mass: gravitational and inertial. The masses in Newton's law of gravitation, $F = \\frac{Gm_1m_2}{r^2}$, are gravitational masses. The mass in Newton's second law of motion, $F=ma$, is inertial mass. 
Gravitational and inertial mass are implicitly assumed to be the same in Newtonian mechanics. General relativity makes this assumption explicit in the equivalence principle.\n\nBut what if they're not equivalent?\n\nUnlike mathematics, where one can simply make an assumption and see where it leads, assumptions in physics need to be validated. This assumption has been tested with many kinds of materials, both on the ground and in space. Variations on the Cavendish experiment using different kinds of materials have been made. Within the limits of the rather lousy accuracy of the gravitational constant (one part per ten thousand, at best), every one of these is consistent with the null hypothesis (gravitational and inertial mass are the same) and inconsistent with the hypothesis that different materials have measurably different gravitational and inertial masses.\n\nThe Earth's Moon, with its very different near-side and far-side, provides an even better mechanism for testing this equivalence. Rather than the one part per ten thousand (at best) accuracy available to Cavendish-style experiments, the Moon shows that gravitational and inertial mass for sodium and iron are equivalent to within about one part per ten trillion.\n\nSo much for ordinary matter, but what about antimatter?\n\nThat an ordinary matter particle and its antimatter equivalent have the same (positive) inertial mass has been tested over and over in particle colliders around the world. Whether the equivalence principle also applies to antimatter remains a somewhat open question. While there are many reasons to think that the equivalence principle applies to antimatter as well as normal matter, testing that this is the case is very hard. The best results to date are from the ALPHA experiment, which tests whether neutral antihydrogen (a antiproton and an positron) falls up or down. The results are that antihydrogen's gravitational mass lies somewhere between -65 and 120 times its inertial mass. 
This is not anywhere close to conclusive, but it does lean towards antimatter having a positive gravitational mass, consistent with the equivalence principle.\n\nAlong the same lines with previous answers suggesting \"mass cannot be negative,\" I'd like to add an insight for why that might probably be the case. If Higgs field and particles' varying degrees of interaction with the field is what gives rise to what we call mass, then the theory suggests that photons don't have mass (and constitute the velocity limit through space) because they don't interact with the field at all. I don't think the framework allows for negative interaction with the field or an \"anti-Higgs\" field.\n\nTheoretically, gravity can be \"attractive\" in the sense that objects move towards you when pushed. This can occur from negative mass (doesn't seem to make sense, but theoretically possible). Peter Engels and others have written a paper about it here and it's an interesting idea.\n\nThe idea is that by cooling the atoms to almost absolute zero, they create a Bose-Einstein condensate and act likes waves in the realm of quantum dynamics.\n\n\u2022 That paper in no way suggests that gravity can be reversed. The paper says that atoms within a Bose-Einstein condensate can, under certain conditions involving 1-D expansion of the BEC, \"accelerate against the applied force, realizing a negative effective mass related to a negative curvature of the underlying dispersion relation.\" In other words, the positive-mass rubidium-87 atoms briefly behave as if they had negative mass. The equivalence of inertial and gravitational forces remains uncertain at quantum level, so you can't use this result to argue for \"negative\" gravity. 
\u2013\u00a0Chappo Hasn't Forgotten Monica Aug 12 '17 at 7:23","date":"2020-10-28 00:38:38","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6802727580070496, \"perplexity\": 393.09927823666976}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-45\/segments\/1603107894890.32\/warc\/CC-MAIN-20201027225224-20201028015224-00491.warc.gz\"}"}
null
null
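The inverse-square law discussed in the record above is easy to sanity-check numerically. A small sketch follows; the values for G, Earth's mass and Earth's radius are standard SI figures, not taken from the record:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_force(m1, m2, r):
    """Magnitude of the Newtonian attraction between two masses (newtons)."""
    return G * m1 * m2 / r**2

# A 1 kg test mass at Earth's surface should feel roughly 9.8 N,
# i.e. the familiar surface gravity g.
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m
print(gravity_force(M_EARTH, 1.0, R_EARTH))
```

Since mass and distance enter only as positive quantities here, the computed force is always positive (attractive), which is the point the answers above make.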
{"url":"https:\/\/www.bartleby.com\/solution-answer\/chapter-97-problem-15e-calculus-of-a-single-variable-11th-edition\/9781337275361\/conjecture-consider-the-function-fxcosx-and-its-maclaurin-polynomials-p2p4-and-p6-sec-example\/c429fac0-80fd-11e9-8385-02ee952b546e","text":"Chapter 9.7, Problem 15E\n\n### Calculus of a Single Variable\n\n11th Edition\nRon Larson + 1 other\nISBN: 9781337275361\n\nChapter\nSection\n\n### Calculus of a Single Variable\n\n11th Edition\nRon Larson + 1 other\nISBN: 9781337275361\nTextbook Problem\n\n# Conjecture Consider the function f ( x ) = cos x and its Maclaurin polynomials P 2 , P 4 . and P 6 (sec Example 5).(a) Use a graphing utility to graph f and the indicated polynomial approximations.(b) Evaluate and compare the values of f ( n ) ( 0 ) and P n ( n ) ( 0 ) for n = 2 , 4 , and 6.(c) Use the results in part (b) to make a conjecture about f ( n ) ( 0 ) and P n ( n ) ( 0 ) .\n\nTo determine\n\nTo graph: The function, f(x)=cosx and the polynomial approximations, P2,P4,P6 which are the Maclaurin Polynomials of the function.\n\nExplanation\n\nGiven:\n\nThe function, f(x)=cosx and the polynomial approximations, P2,P4,P6 which are the Maclaurin Polynomials of the function.\n\nGraph:\n\nConsider the function,\n\nf(x)=cosx\n\nDifferentiate both sides of the above equation with respect to x.\n\nf(x)=ddx(cosx)\n\nApply the formula, ddx(cosx)=sinx in the derivative,\n\nf(x)=sinx\n\nSubstitute 0 for x in the derivative,\n\nf(0)=sinx=0\n\nSubstitute 0 for x in the function f(x)=cosx,\n\nf(0)=cos0=1\n\nDifferentiate both sides of the function, f(x)=sinx with respect to x.\n\nf(x)=ddx(sinx)\n\nApply the formula, ddx(sinx)=cosx to evaluate the derivative,\n\nf(x)=ddx(sinx)=cosx\n\nSubstitute 0 for x in the derivative,\n\nf(0)=cos0=1\n\nDifferentiate both sides of the function, f(x)=cosx with respect to x.\n\nf(x)=ddx(cosx )\n\nApply the formula, ddx(cosx)=sinx in the above equation,\n\nf(x)=ddx(cosx )=(sinx)=sinx\n\nSubstitute 0 for x in the 
derivative,\n\nf(0)=sin0=0\n\nDifferentiate both sides of the equation, f(x)=sinx with respect to x.\n\nf(4)(x)=ddx(sinx)\n\nApply the formula, ddx(sinx)=cosx to evaluate the derivative,\n\nf(4)(x)=ddx(sinx)=cosx\n\nSubstitute 0 for x in the derivative,\n\nf(4)(0)=cos0=1\n\nDifferentiate both sides of the equation, f(4)(x)=cosx with respect to x.\n\nf(5)(x)=ddx(cosx )\n\nApply the formula, ddx(cosx)=sinx in the above equation,\n\nf(5)(x)=ddx(cosx )=(sinx)\n\nSubstitute 0 for x in the derivative,\n\nf(5)(0)=sin0=0\n\nDifferentiate both sides of the equation, f(5)(x)=(sinx) with respect to x,\n\nf(6)(x)=ddx(sinx)\n\nApply the formula, ddx(sinx)=cosx to evaluate the above derivative,\n\nf(6)(x)=ddx(sinx)=cosx\n\nSubstitute 0 for x in the derivative,\n\nf(6)(0)=cos0=1\n\nThe definition of Taylor series for a nth degree polynomial is, if f has n derivatives at c, then the polynomial is known as the nth Taylor series for f at c. Here if c=0 then the polynomial is known as the Maclaurin series as shown below.\n\nPn(x)=f(0)+f(c)(x)+f(0)2(x)2++f(n)(0)n(x)n\n\nThe Maclaurin series for the function f(x)=cosx of degree two will be the series for n=2,\n\nP2(x)=f(0)+f(0)(x)+f(0)2(x)2\n\nSubstitute the values for the functions and derivatives, f(0)=1 and f(0)=0,f(0)=1 in the above equation,\n\nP2(x)=1+0(x)+12(x)2=112x2\n\nThe Maclaurin series for the function f(x)=cosx of degree four will be the series for n=4\n\n(b)\n\nTo determine\n\nTo calculate: The value for fnn(0) and Pn(n)(0) for n=2,4,6, where f(x)=cosx and P2,P4,P6 Maclaurin series for function.\n\n(c)\n\nTo determine\n\nThe conjecture about fnn(0) and Pn(n)(0) from the result determined in part (b) for n=2,4,6.\n\n### Still sussing out bartleby?\n\nCheck out a sample textbook solution.\n\nSee a sample solution\n\n#### The Solution to Your Study Problems\n\nBartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees!\n\nGet Started","date":"2019-10-18 
03:56:02","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8757941126823425, \"perplexity\": 1729.8733951278296}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570986677884.28\/warc\/CC-MAIN-20191018032611-20191018060111-00199.warc.gz\"}"}
null
null
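The conjecture the problem above drives at, that f^(n)(0) = P_n^(n)(0) for the Maclaurin polynomials of cos x, can be checked directly from the coefficients. A short sketch using exact rational arithmetic; the helper names are mine, not the textbook's:

```python
from fractions import Fraction
import math

def maclaurin_cos_coeffs(n):
    """Coefficients c_0..c_n of the degree-n Maclaurin polynomial of cos x:
    cos x ~ 1 - x^2/2! + x^4/4! - ... (exact rationals)."""
    return [Fraction((-1) ** (k // 2), math.factorial(k)) if k % 2 == 0
            else Fraction(0) for k in range(n + 1)]

def poly_nth_deriv_at_0(coeffs, n):
    """n-th derivative at 0 of sum c_k x^k: equals n! * c_n, since
    lower-order terms vanish and higher-order terms keep a factor of x."""
    return math.factorial(n) * coeffs[n]

def cos_nth_deriv_at_0(n):
    """Exact n-th derivative of cos at 0, cycling 1, 0, -1, 0."""
    return [1, 0, -1, 0][n % 4]

for n in (2, 4, 6):
    print(n, cos_nth_deriv_at_0(n),
          poly_nth_deriv_at_0(maclaurin_cos_coeffs(n), n))
```

For each even n the two values agree, which is exactly the conjecture asked for in part (c).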
Klondyke Coke Ovens are heritage-listed beehive ovens at Parker Lane, Brassall, City of Ipswich, Queensland, Australia. It is also known as Klondyke Beehive Coke Ovens and Klondyke Coking Ovens. It was added to the Queensland Heritage Register on 3 December 2007. History The Klondyke Coke Ovens were part of the Klondyke Colliery located at Brassall and were built in the 1940s. Coke production at Klondyke originally commenced in the 1880s. The site had two widely separated periods of coke production and several changes of ownership and names. Its last change of ownership and name occurred in 1948 when the pit had only a small area left to be worked and production ceased in the early 1950s. The site has been lying unused for many years and the coke ovens are overgrown with vegetation. The discovery of coal in Queensland dates from 1825 when outcrops were observed by Major Edmund Lockyer on the banks of the upper Brisbane River. Two years later, when Ipswich was settled as a convict outstation, known as Limestone, the presence of coal was noted between the convict settlement and Brisbane by Captain Patrick Logan. The following year, explorer Allan Cunningham also marked several outcrops on the Bremer River on his survey map for Governor Ralph Darling. Coal was one of the first minerals in Queensland to be commercially mined. Mining originally commenced at Goodna in 1843 with the West Moreton Coalfield at Ipswich following in the early 1850s. Development of the coal mining industry in Queensland was slow and its quality could not match that of the coal produced at Newcastle for use by coastal steamers. However, as coal was required for transport and industrial use, there was remarkable growth of mining to the north and east of the town from the mid-1870s onwards. A symbiotic relationship developed between Queensland Government Railways and the coal industry. 
Queensland Railways was the coal industry's largest customer since coal supplies were essential to the functioning of the rail network. At the same time rail transport was essential to the viability of coalmines as a coal mine could not survive commercially unless it was directly linked to the rail network. Therefore, in Queensland, development of the coal industry was closely linked to the growth of the rail network. A by-product of coal mining was the production of coke. The production of coke was usually regarded as a somewhat unimportant side industry to the actual mining of coal. Unlike coal mining, it was not regulated by legislation and as such no systematic records were kept of the early coke production industry in Queensland. Coke is used as both a fuel and as a reducing agent in smelting iron ore and is produced from baking bituminous coal in ovens at temperatures as high as 2,000 degrees Fahrenheit. The coke ovens erected on the West Moreton Coalfield were exclusively of the beehive type, so called because of their domed appearance. Beehive coke ovens consisted of a brick dome with a small circular opening (an exit flue) at the apex, and a larger arched opening at one side to permit charging and drawing. They were usually constructed in double rows known as batteries. The space between ovens was usually filled with rubble and earth to provide insulation and the whole battery surrounded by a stone retaining wall to resist the outward thrust of the brick domes. The Klondyke ovens had a capacity of four to five tons. Coke ovens were charged and emptied in a set pattern. Coal was shovelled into an empty oven to a height of approximately and ignited. The door was bricked up or closed with an iron plate and plastered with a small hole left for the temporary admission of air. The upper layer of the coal burnt and initiated the distillation of volatile material from the ovens as they met the air supply drawn in through the top of the charging door. 
The dome was heated to a high temperature and assisted in carbonizing the charge by radiating its heat to the coal. The operation of the distillation and immediate combustion proceeded until the whole of the volatile matter in the coal had been evolved, which took about three days. The coke was then cooled by inserting a water sprinkler through the air hole in the door and was withdrawn manually with rakes. Almost two tonnes of coal was required to produce one tonne of coke. Using beehive coke ovens for coke production was a common practice for many years. However, advances in coke production technology meant beehive coke oven technology was outdated by the early 1900s. Despite this, beehive coke ovens continued to be built in the Ipswich coal fields well into the mid twentieth century. There are a number of possible reasons as to why this may have been the case. The more advanced technology enabled the use of coke production by-products but was a more expensive production option. It is possible that the simpler and more economical technology of the beehive style oven was sufficient for production needs on the Ipswich fields. The material and labour shortages brought about by the Second World War may also have been another factor in choosing to remain with beehive technology: they were cheap to build, bricks could be obtained to construct them and they did not require a large labour force to work them. A third possibility is that the coal pit did not have a long enough supply of coal left in it to warrant investing in expensive coke production methods. The Klondyke Colliery was one of a number of collieries within the West Moreton Coalfield at Ipswich and was one of the longest running. Originally part of the Chuwar field, the first mine on the site, in 1871, was known as Eastwood mine after its owner, John Eastwood. 
The Eastwood mine was worked until 1877 and then, due to the owner's interest in other mines, laid idle for several years until it was sold to Brydon, Jones & Company in 1884, who renamed the site Mihi mine after the nearby Mihi Creek. It was at this time that the first battery of coke ovens were constructed on what would become the Klondyke Colliery site. The battery was built on the hillside above Mihi Creek and remained in use until 1891 when attention was diverted a new coal development. It was 1908 before the site was mined again, this time by Paul Francis. In 1923 Francis sold the site to a partnership of miners and a barrister who created and floated the company, Klondyke Collieries Limited. As Klondyke Collieries Limited, coke production at the site flourished. Although it is not known exactly when coke production recommenced at the site, records state in 1942 several new coke ovens were constructed at Klondyke to meet the increased demand for coke for smelting purposes. Records show in 1945 and 1946, Klondyke and Bowen in northern Queensland were the only coke producers in Queensland with Klondyke turning out approximately a sixth of Queensland's coke production. In 1948 the site changed hands again and was renamed Moreton colliery and worked until the early 1950s which was when the pit's supply was exhausted. After the closure of the site, coke production was taken over by the nearby Haighmoor site which remained in production for a further 15 years as Queensland's only cokeworks other than Bowen. The bricks used to construct the ovens are imprinted with a trademark "R" and appear to have come from the local brickworks, Rylance Colliery. Rylance Colliery was established in the 1880s and in 1931 expanded to include a brickworks. The company was sold in 1985 and is now known as Claypave. The land the ovens stand on is now owned by the Department of Natural Resources and Mines. 
Since production at Klondyke ceased in the early 1950s, the ovens have fallen into a state of disrepair, becoming overgrown with vegetation and subject to vandalism. Description The Klondyke Coke Ovens are located at Brassall. Access is most easily gained via the corner of W.M. Hughes and Musgrave Streets, walking down the hill and then following the pathway around to the right for approximately . The path splits in two, with the left (upper) fork going to the ovens and the right (lower) fork going past the brick retaining wall. The ovens are built into the hillside above Mihi Creek and north of the nearby North Ipswich Railway Workshop area. The ovens form a large earthen covered mound of approximately in length, in width and in height. The mound is heavily covered with vegetation and leaf litter. It is difficult to identify from a distance and the ovens are not visible until in close proximity. There are twelve ovens arranged in two rows, six ovens in each row back to back. They have a dome-shaped appearance. They are in diameter and in height. The ovens are constructed of brick and have individual flues. They exhibit various degrees of collapse with two ovens being completely caved in. The ovens have been subject to vandalism such as removal of bricks to form fireplaces and graffiti is evident. The rubble used to fill between the ovens is visible in some sections and consists of a combination of bricks and various sized rocks. On the downslope side, towards Mihi Creek, a brick wall of approximately in length and between in height, marks a drop from the mound containing the ovens down to a levelled area. The wall has been subjected to heavy graffiti. Brick debris, coal and coke are strewn around the site as are metal remnants of machinery such as skips, railway lines, crushing machinery, trammel screens, conveyor buckets and trolley parts. Bricks used to construct the ovens are imprinted with a large "R" in the centre of the otherwise flat face of the brick. 
Heritage listing Klondyke Coke Ovens was listed on the Queensland Heritage Register on 3 December 2007 having satisfied the following criteria. The place is important in demonstrating the evolution or pattern of Queensland's history. The Klondyke Coke Ovens are a good representation of the coke production industry in Queensland. They were one of only two collieries producing coke in Queensland during the 1940s. They are important in demonstrating the development of Queensland's coal mining industry and its role in the evolution of Queensland's history. The place demonstrates rare, uncommon or endangered aspects of Queensland's cultural heritage. The relatively intact Klondyke Coke Ovens represent a process of coke manufacture in Queensland that was once common but has since been superseded by new technology and is now uncommon. The place has potential to yield information that will contribute to an understanding of Queensland's history. As an industrial archaeological site and a relatively intact example of its type, the Klondyke Coke Ovens have the potential to yield information that will contribute to a greater understanding of Queensland's industrial history and will aid in comparative analysis of similar places. The place is important in demonstrating the principal characteristics of a particular class of cultural places. The Klondyke Coke Ovens are typical, in scale and type, of coking ovens that were common in the 19th century to the mid 20th century. They exhibit the principal characteristics of beehive coke ovens being dome shaped with individual flues and built in a row in a back-to-back pattern.
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,219
Q: Create multidimensional array in php and mysql Currently I am trying to create a multidimensional array in PHP using two queries (Items and Categories). I searched the site but did not find anything similar to what I am looking for. I would appreciate it if someone could help me using my code. Please see below what I am looking for, together with my code. Tables: TABLE Items; +-----------------------------------+ | id | type | name | +----------+------------+-----------+ | 1 | 4 | item_1 | | 2 | 3 | item_2 | | 3 | 2 | item_3 | +-----------------------------------+ TABLE Categories; +-----------------------------------+ | id | Item_id | name | +----------+------------+-----------+ | 1 | 2 | Cat_a | | 2 | 2 | Cat_b | | 3 | 3 | Cat_x | | 4 | 3 | Cat_z | | 5 | 3 | Cat_b | | 6 | 1 | Cat_b | | 7 | 3 | Cat_y | +-----------------------------------+ Result that I am looking for: Array ( [0] => Array ( id => 1 name => Item_1 Type => 4 cats => Array ( [6] => Cat_b ) ) [1] => Array ( id => 2 name => Item_2 Type => 3 cats => Array ( [1] => Cat_a [2] => Cat_b ) ) [2] => Array ( id => 3 name => Item_3 Type => 2 cats => Array ( [3] => Cat_x [4] => Cat_z [5] => Cat_b [7] => Cat_y ) ) ) My code: $result = mysqli_query($link, "SELECT * FROM Categories WHERE Item_id = '233'"); foreach ($result as $key => $value) { $v[] = $value["id"]; } foreach ($v as $key => $res) { $query = mysqli_query($link, "SELECT * FROM Items WHERE category_id = '".$res."'"); foreach ($query as $k =>$att){ $var[$res][] = $att["name"]; } } echo '<pre>' . print_r($var,1) . '</pre>'; A: You can gather the data needed in one SQL statement. Then you basically do a break-sort to group the categories with the items. 
$qstr = "SELECT a.`id` as `item_id`, a.`type` as `item_type`, a.`name` as `item_name`, b.`id` as `cat_id`, b.`name` as `cat_name` FROM `Items` a JOIN `Categories` b ON a.`id` = b.`item_id` WHERE Item_id = '233'"; $result = mysqli_query($link, $qstr); $lastItem = ''; $rslt = array(); $rowno = -1; while($row = mysqli_fetch_assoc($result)){ if($lastItem != $row['item_id']) { $rowno++; $rslt[$rowno] = array( 'id' => $row['item_id'], 'name' => $row['item_name'], 'Type' => $row['item_type'], 'cats' => array($row['cat_id'] => $row['cat_name']) ); $lastItem = $row['item_id']; } else { $rslt[$rowno]['cats'][$row['cat_id']] = $row['cat_name']; } } A: Do not serialize the queries. One query is enough. $result = mysqli_query($link, "select i.id, i.type, i.name, c.id as `cat_id`, c.name as `cat_name` from Items i join Categories c on c.Item_id=i.id where i.id = '233' order by i.id, c.id "); $data = []; $id = null; $row_id = -1; while($row = mysqli_fetch_assoc($result)){ if (is_null($id) || $id != $row['id']) { $row_id++; $id = $row['id']; $data[] = [ 'id' => $row['id'], 'name' => $row['name'], 'Type' => $row['type'], 'cats' => [] ]; } $data[$row_id]['cats'][$row['cat_id']] = $row['cat_name']; } A: Use the following; tested and working $sql = "SELECT i.id,i.name,i.type,c.id AS CatKey,c.name As catName FROM `Items` AS i JOIN Categories AS c ON i.id=c.Item_id ORDER BY i.id ASC "; $result = $conn->query($sql); $res = array(); $i = 0; if ($result->num_rows > 0) { // output data of each row while($row = $result->fetch_assoc()) { $ids = array_column($res, 'id'); if(in_array($row['id'], $ids)){ $res[$i-1]['cats'][$row['CatKey']] = $row['catName']; }else{ $res[$i]['id'] = $row['id']; $res[$i]['type'] = $row['type']; $res[$i]['name'] = $row['name']; $res[$i]['cats'] = array($row['CatKey'] => $row['catName']); $i++; } } } echo '<pre>'; print_r($res); Result:- Array ( [0] => Array ( [id] => 1 [type] => 4 [name] => item_1 [cats] => Array ( [6] => Cat_b ) ) [1] => Array ( [id] => 2 [type]
=> 3 [name] => item_2 [cats] => Array ( [1] => Cat_a [2] => Cat_b ) ) [2] => Array ( [id] => 3 [type] => 2 [name] => item_3 [cats] => Array ( [3] => Cat_x [4] => Cat_z [5] => Cat_b [7] => Cat_y ) ) )
class CreateAccessPermissions < ActiveRecord::Migration
  def change
    create_table :rglossa_access_permissions do |t|
      t.belongs_to :user, index: true, null: false
      t.belongs_to :corpus, index: true, null: false
    end
  end
end
\section{Introduction}
We are interested in the following one-dimensional pressureless Euler-alignment system
\begin{align}
\partial_t \rho + \partial_x ( \rho u) =&~0, \label{EA-rho}\\
\partial_t u + u\,\partial_x u =&\int_\R\phi(x-y)(u(y)-u(x))\rho(y)\dd y,\label{EA-u}
\end{align}
with initial data
\begin{equation}\label{EA-init}
\big.(\rho,u)\big|_{t=0}(x)=(\rho_0,u_0)(x).
\end{equation}
This system, first derived in \cite{HT08}, can be viewed as the macroscopic representation of the agent-based Cucker-Smale model \cite{CS07}, describing the emergent phenomenon of animal flocks. Here, $\rho$ and $u$ represent the density and velocity, respectively. The right-hand side of \eqref{EA-u} is the nonlocal alignment force, where $\phi$ is called the \emph{influence function}. When $\phi>0$, the velocity $u(x)$ tends to align with $u(y)$ as time evolves.

Although the global well-posedness theory for the Euler-alignment system in multi-dimensions is still incomplete (one can see \cite{DMPW,HeT17,Shv19,TT14} for interesting partial results), the theory for the 1D Euler-alignment system \eqref{EA-rho}-\eqref{EA-u} has been well-established in the last decade, under the assumption that the influence function $\phi$ is non-negative, symmetric, and decreasing in $\R^+$. The behavior of $\phi$ near the origin plays an important role in the global regularity of the system. If $\phi$ is bounded, whether the solution is globally regular depends on the choice of initial data. In \cite{CCTT}, a sharp critical threshold on the initial data is derived, which distinguishes between global smooth solutions and finite time singularity formation. If $\phi$ is weakly singular, namely unbounded but integrable at the origin, a different critical threshold has been obtained in \cite{T20}.
If $\phi$ is strongly singular, namely non-integrable at the origin, the strong short-range alignment is known to bring dissipation which prevents finite time singularity formation, for all smooth periodic initial data that stay away from vacuum ($\rho_0>0$). Global regularity is shown in \cite{DKRT}, and independently in \cite{ST17, ST18}.

The non-negativity assumption on $\phi$ is also crucial for the stability, as well as the long time behavior of the system. Indeed, one can calculate the dynamics of the energy fluctuation
\begin{equation}\label{EnergyFluc}
\frac{d}{dt}\iint_{\R^2}|u(x)-u(y)|^2\rho(x)\rho(y)\dd x\dd y
=-\iint_{\R^2}\phi(x-y)|u(x)-u(y)|^2\rho(x)\rho(y)\dd x\dd y.
\end{equation}
If $\phi$ has a positive lower bound, it is easy to see that the energy fluctuation decays exponentially in time. This leads to velocity alignment as time approaches infinity. In \cite{TT14}, such fast alignment with an exponential decay rate has been shown for any $\phi$ which decays sufficiently slowly at infinity, such that $\int_0^\infty\phi(r)dr=+\infty$. Finally, if $\phi\geq0$ is degenerate (namely compactly supported), velocity alignment can be shown only for periodic initial data away from vacuum \cite{DS19}, with a sub-exponential rate of convergence.

\smallskip
In this paper, we focus on a different type of influence function, which is not necessarily non-negative. When $\phi(x-y)<0$, the velocity $u(x)$ tends to misalign with $u(y)$. Such \emph{misalignment} behavior could bring instability to the system. Indeed, it is easy to see from \eqref{EnergyFluc} that the energy fluctuation no longer decays in time. A natural question is how the misalignment affects the global well-posedness and long time behavior of the system.
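The decay mechanism encoded in \eqref{EnergyFluc} is already visible at the level of the underlying agent-based Cucker-Smale dynamics: for a positive symmetric influence function, the discrete velocity fluctuation $\sum_{i,j}|v_i-v_j|^2$ is non-increasing, while this monotonicity is lost once $\phi$ changes sign. A minimal numerical sketch of the aligned case (the Gaussian influence function, time step, and random data below are illustrative choices, not taken from the text):

```python
import math
import random

def cucker_smale_step(x, v, phi, dt):
    """One forward-Euler step of the Cucker-Smale system
    dx_i/dt = v_i,  dv_i/dt = (1/N) sum_j phi(x_i - x_j)(v_j - v_i)."""
    n = len(x)
    acc = [sum(phi(x[i] - x[j]) * (v[j] - v[i]) for j in range(n)) / n
           for i in range(n)]
    return ([xi + dt * vi for xi, vi in zip(x, v)],
            [vi + dt * ai for vi, ai in zip(v, acc)])

def velocity_fluctuation(v):
    """Discrete analogue of the energy fluctuation in (EnergyFluc)."""
    return sum((vi - vj) ** 2 for vi in v for vj in v)

random.seed(0)
n = 20
x = [random.uniform(-1.0, 1.0) for _ in range(n)]
v = [random.uniform(-1.0, 1.0) for _ in range(n)]
phi = lambda r: math.exp(-r * r)  # positive, symmetric, decreasing in |r|

fluct = [velocity_fluctuation(v)]
for _ in range(200):
    x, v = cucker_smale_step(x, v, phi, dt=0.05)
    fluct.append(velocity_fluctuation(v))

# with phi > 0 the fluctuation decays step by step (up to rounding)
print(fluct[0], fluct[-1])
```

Replacing the Gaussian by a kernel that turns negative at large distances destroys this monotone decay, which is exactly the instability discussed above.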
A typical choice of the influence function of our concern is
\begin{equation}\label{phi-albe}
\phi_{\alpha,\beta}(x)=\frac{c_\alpha}{|x|^{1+\alpha}} - \mu \frac{c_\beta}{|x|^{1+\beta}},
\end{equation}
where the parameters satisfy $0<\beta<\alpha<2$, the coefficient $\mu>0$, and $c_\alpha$, $c_\beta$ are positive constants defined in \eqref{Lam-alp}. This influence function has two main features:
\begin{itemize}
\item Strong alignment in the short range: $\phi_{\alpha,\beta}(x)$ behaves like $|x|^{-1-\alpha}$ near the origin. More precisely,
\[\frac{c_\alpha}{2|x|^{1+\alpha}}<\phi_{\alpha,\beta}(x)<\frac{c_\alpha}{|x|^{1+\alpha}}, \quad\forall~0<|x|<\left(\frac{c_\alpha}{2\mu c_\beta}\right)^{\frac{1}{\alpha-\beta}}.\]
\item Misalignment in the long range: $\phi_{\alpha,\beta}(x)$ becomes negative if $|x|$ is large enough. More precisely,
\[\phi_{\alpha,\beta}(x)<0,\quad\forall~ |x|>\left(\frac{c_\alpha}{\mu c_\beta}\right)^{\frac{1}{\alpha-\beta}}.\]
\end{itemize}
The system \eqref{EA-rho}-\eqref{EA-u} with influence function \eqref{phi-albe} is closely related to the following Burgers type equation
\begin{equation}\label{nonlKS}
\partial_t u + u\,\partial_x u = -\Lambda^\alpha u + \mu \Lambda^\beta u,\quad u|_{t=0}=u_0,
\end{equation}
where the fractional differential operator $\Lambda^\alpha=(-\partial_x^2)^{\frac{\alpha}{2}}$ has the expression formula
\begin{equation}\label{Lam-alp}
\Lambda^\alpha f(x) = c_\alpha\, \mathrm{p.v.} \int_{\R} \frac{f(x) - f(y)}{|x-y|^{1+\alpha}} \dd y,\quad c_\alpha=\frac{2^\alpha\Gamma(\frac{1+\alpha}{2})}{\sqrt{\pi}|\Gamma(-\frac{\alpha}{2})|}.
\end{equation}
Equation \eqref{nonlKS} can be obtained by formally enforcing $\rho(x,t)\equiv 1$ in the velocity dynamics \eqref{EA-u} associated with $\phi(x)=\phi_{\alpha,\beta}(x)$. When $\mu=0$, \eqref{nonlKS} is known as the fractal Burgers equation. It was studied in \cite{KNS}, where global regularity was obtained if and only if $\alpha\geq1$.
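The two regimes above are elementary to check: writing $\phi_{\alpha,\beta}(x)=|x|^{-1-\beta}\big(c_\alpha|x|^{\beta-\alpha}-\mu c_\beta\big)$, the influence function vanishes exactly at $|x|=(c_\alpha/(\mu c_\beta))^{1/(\alpha-\beta)}$, is positive inside this radius and negative outside. A short numerical sketch (the values of $\alpha$, $\beta$, $\mu$ below are arbitrary illustrative choices):

```python
import math

def c(alpha):
    """Normalizing constant of Lambda^alpha from (Lam-alp)."""
    return (2 ** alpha * math.gamma((1 + alpha) / 2)
            / (math.sqrt(math.pi) * abs(math.gamma(-alpha / 2))))

def phi(x, alpha, beta, mu):
    """Influence function phi_{alpha,beta} from (phi-albe)."""
    return c(alpha) / abs(x) ** (1 + alpha) - mu * c(beta) / abs(x) ** (1 + beta)

alpha, beta, mu = 1.5, 0.5, 1.0
r_star = (c(alpha) / (mu * c(beta))) ** (1 / (alpha - beta))  # sign-change radius

print(r_star)
print(phi(r_star / 2, alpha, beta, mu),   # > 0: alignment in the short range
      phi(2 * r_star, alpha, beta, mu))   # < 0: misalignment in the long range
```

The first bullet's lower bound can be verified the same way: for $|x|<(c_\alpha/(2\mu c_\beta))^{1/(\alpha-\beta)}$ one has $\phi_{\alpha,\beta}(x)>\tfrac12 c_\alpha|x|^{-1-\alpha}$, since that inequality is equivalent to $|x|^{\alpha-\beta}<c_\alpha/(2\mu c_\beta)$.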
When $\mu>0$, the equation \eqref{nonlKS} can be viewed as a nonlocal analog of the notable Kuramoto-Sivashinsky equation (which corresponds to $\alpha=4,\beta=2$ in \eqref{nonlKS}). The linear pseudo-differential term $ \Lambda^\alpha u - \mu \Lambda^\beta u$ gives long-wave instability and short-wave stability. The case where $\alpha>1$ and $\beta<\alpha$ was first introduced and studied by Granero-Belinch\'on and Hunter in \cite{GBH}. They proved the global existence, uniqueness and instant analyticity of solutions, as well as the existence of a compact attractor for the equation \eqref{nonlKS}. We remark that by applying the same process as in \cite{MX19}, one can show the global well-posedness for the critical case $\alpha=1$ with $\beta<1$. Also, finite time blowup can be shown in the case $0<\alpha,\beta<1$.

For our system \eqref{EA-rho}-\eqref{EA-u}, the constant density profile $\rho(x,t)\equiv 1$ is not preserved in time. For $\mu=0$, a remarkable discovery in \cite{DKRT} is that, with a density-dependent fractional dissipation, the global behavior of the solution differs from the fractal Burgers equation. In particular, global regularity can be obtained for $\alpha\in(0,1)$.

The main goal of this paper is the global well-posedness of the Eulerian system \eqref{EA-rho}-\eqref{EA-u}, with the influence function $\phi$ containing misalignment. We will focus on periodic initial data $(\rho_0, u_0)$ where $x\in\T$, and $\rho_0(x)>0$ away from vacuum. Without loss of generality, we can set the period to be 1, and let $\T=[-\frac{1}{2}, \frac{1}{2}]$. As a suitable generalization of example \eqref{phi-albe}, we will consider influence functions $\phi(x)=\phi(-x)$ belonging to $C^4(\R\setminus \{0\})$ which satisfy the following assumptions.
\begin{itemize}
\item[(A1)] \emph{Strong alignment in the short range:} there exist constants $\alpha\in (0,2)$, $a_0>0$ and $c_1\geq 1$ such that
\begin{equation}\label{phi-assum1}
\frac{1}{c_1} \frac{1}{|x|^{1+\alpha}}\leq \phi(x) \leq \frac{c_1}{|x|^{1+\alpha}},\quad \forall~ 0< |x| \leq a_0,
\end{equation}
\begin{equation}\label{phi-assum1.2}
\Big|\frac{\dd^j\phi(x)}{\dd x^j}\Big| \leq \frac{c_1}{|x|^{1+j+\alpha}},\;\;j=1,2,3,4,\quad \forall~ 0< |x| \leq a_0,
\end{equation}
\begin{equation}\label{phi-assum1.3}
\textrm{the mapping $r\mapsto \phi(r)$ is non-increasing in $r$ on $(0,a_0]$.}
\end{equation}
\item[(A2)] \emph{Possible misalignment in the long range:} there exists a constant $c_2>0$ such that
\begin{equation}\label{phi-assum2}
\int_{|x|\geq a_0} |x|^j\Big|\frac{\dd^j \phi(x)}{\dd x^j}\Big| \dd x \leq c_2,\;\;j=0,1,2,3,4.
\end{equation}
\end{itemize}
Such a function is indeed the kernel function of the following L\'evy operator
\begin{equation}\label{Lop-exp}
\mathcal{L}f(x) = \mathrm{p.v.} \int_\R \phi(x-y) \big(f(x) - f(y) \big) \dd y,
\end{equation}
which corresponds to the infinitesimal generator of a stable L\'evy process (see \cite{Jacob}). Under the periodic setup, the alignment term can be expressed as
\[\int_\T\phi^S(x-y)(u(y)-u(x))\rho(y)\dd y\]
with the \emph{periodic influence function}
\begin{equation}\label{phi-S}
\phi^S(x):=\sum_{k\in\Z}\phi(x+k),\quad\forall\,x\in\T.
\end{equation}
When $\phi$ satisfies assumptions (A1) and (A2), we may assume $a_0 \leq \frac{1}{2}$ with no loss of generality. Noting that $\sum_{k\neq 0} |\phi(x+k)| \leq 3 c_2$ for every $x\in \T$ and that $\sum_{k\in \Z} |\phi(x+k)| \leq c_2(1+ a_0^{-1})$ for every $|x|\in [a_0,\frac{1}{2}]$, we see that $\phi^S$ has the following similar properties.
\begin{itemize} \item[(A1$^S$)] \emph{Strong alignment in the short range:} \begin{equation}\label{phi-s-assum1} \frac{1}{2c_1} \frac{1}{|x|^{1+\alpha}} \leq \phi^S(x) \leq \frac{2 c_1}{|x|^{1+\alpha}}, \quad \forall~|x|\in (0, \rz], \quad \rz =\min\left\{a_0, ~\Big(\frac{1}{6 c_1 c_2}\Big)^{\frac{1}{1+\alpha}}\right\}. \end{equation} \item[(A2$^S$)] \emph{Possible misalignment in the long range:} \begin{equation}\label{phi-s-assum2} |\phi^S(x)| \leq c_3, \quad \forall~ |x| \in [\rz, 1/2],\quad c_3=c_1r_0^{-(1+\alpha)} +c_2\big(1+a_0^{-1}\big). \end{equation} \end{itemize} Condition \eqref{phi-s-assum2} allows $\phi^S$ to be negative in the long range. This corresponds to the misalignment effect. Figure~\ref{fig:phi} illustrates a typical periodic influence function satisfying (A1$^S$) and (A2$^S$) with misalignment. \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=1.5] \coordinate (y) at (0,2.1); \coordinate (x) at (6.3,0); \draw[<->] (y) node[above] {$\phi^S$} -- (0, 0) -- (x) node[right] {$x$}; \draw (0,-1) -- (0, 0) node[left] {$0$}; \draw (3,0.1) -- (3,0) node[below right] {$\frac{1}{2}$}; \draw (6,0.1) -- (6,0) node[below right] {$1$}; \draw (1.5,0.1) -- (1.5,0) node[below] {$\Rz$}; \draw (.6,0.1) -- (.6,0) node[below] {$\rz$}; \draw (4.5,0.1) -- (4.5,0); \draw (5.4,0.1) -- (5.4,0); \draw[dashed] (.6,2.1) -- (.6,0); \draw[dashed] (3,2.1) -- (3,-1); \draw[dashed] (6,2.1) -- (6,-1); \draw[thick, domain=0.75:2,smooth,variable=\x,blue] plot ({\x-.5},{4*((2/\x)^.5-1)}); \draw[thick, domain=0.75:2,smooth,variable=\x,blue] plot ({6.5-\x},{4*((2/\x)^.5-1)}); \draw[thick, domain=1.5:4.5,smooth,variable=\x,blue] plot ({\x},{(\x-3)^2/3-3/4}); \draw[domain=0.15:.6,smooth,variable=\x,red] plot ({\x},{\x^(-1)/3}); \draw[dashed, domain=.6:2,smooth,variable=\x,red] plot ({\x},{\x^(-1)/3}); \draw[<->] (0,-1.2) -- (.6, -1.2); \node at (-.3,-1.5) {Strong alignment}; \draw[<->] (1.5,-1.2) -- (4.5, -1.2); \node at (3,-1.5) {Misalignment}; \draw[<->] (5.4,-1.2) -- (6, 
-1.2); \node at (6.3,-1.5) {Strong alignment};
\end{tikzpicture}
\end{center}
\caption{The illustration of the periodic influence function}\label{fig:phi}
\end{figure}

Now, let us state our main result.
\begin{theorem}[Global regularity]\label{thm:GR-EAS}
Let the symmetric influence function $\phi\in C^4(\R\setminus\{0\})$ satisfy assumptions (A1) and (A2) with $0<\alpha<2$. Let $s>\frac{3}{2}$ if $\alpha\in (0,1]$ and $s>\frac{5}{2}$ if $\alpha\in (1,2)$. Assume that the initial data satisfy
\[\rho_0 \in H^s(\T), \quad \min_\T \rho_0>0,\quad u_0 \in H^{s+1 -\alpha}(\T), \quad \text{and}\quad G_0 := \partial_x u_0 - \LL \rho_0 \in H^{s-\frac{\alpha}{2}}(\T).\]
Then for any $T>0$, the Euler-alignment system \eqref{EA-rho}-\eqref{EA-u} with associated periodic initial data $(\rho_0,u_0)$ generates a unique global smooth solution $(\rho,u)$ on the time interval $[0,T]$.
\end{theorem}

As a direct corollary, the theorem says that with $\phi(x)=\phi_{\alpha,\beta}(x)$ given by \eqref{phi-albe}, global regularity of the Euler-alignment system \eqref{EA-rho}-\eqref{EA-u} can be obtained for the full range $0<\beta<\alpha<2$, $\mu>0$. In particular, the behavior differs from equation \eqref{nonlKS} when $\alpha\in(0,1)$, where blowup can occur. This is the same phenomenon as in the $\mu=0$ case. We shall emphasize, however, that the presence of misalignment makes a big difference in the regularity estimates, as well as in the long time behavior of the solutions.

When the misalignment effect is relatively weak (e.g. $\mu$ is small in \eqref{phi-albe}), then $\phi^S(x)>0$ for any $x\in\T$. In this case, there is overall no misalignment. Global regularity and fast alignment then follow. See related discussions in \cite{KT18}. In particular, two important bounds can be derived.
First, the density has a uniform-in-time lower bound (see Remark \ref{rmk:lowbdd}), namely, there exists a positive constant $\rho_m>0$, such that
\[\rho(x,t)\geq \rho_m,\quad \forall~x\in\T~\text{and}~t\geq0.\]
Second, the density oscillation $\|\partial_x\rho(\cdot,t)\|_{L^\infty}$ is bounded uniformly in time.

When the misalignment effect is strong enough (e.g. $\mu$ is large in \eqref{phi-albe}), then $\phi^S$ is not necessarily positive everywhere; the typical case is illustrated in Figure \ref{fig:phi}. With the long-range misalignment, the density no longer has a uniform-in-time positive lower bound. Indeed, as verified by numerical experiments, the lower bound on the density can decay to zero as time approaches infinity. The presence of vacuum is known to lead to destabilization of the system, and to singularity formation \cite{T19}. The lack of a uniform lower bound on the density creates additional difficulties towards the global well-posedness theory.

To prove Theorem \ref{thm:GR-EAS}, we first obtain lower/upper bound estimates on the density $\rho$, stated in Lemmas \ref{lem:lowbdd} and \ref{lem:uppbdd}. These guarantee that the density is uniformly-in-time bounded and stays positive in any finite time, although it could go to zero as time approaches infinity, with an exponential decay rate. Next, with the lower/upper bound estimates, we establish the local well-posedness theory, using energy and commutator estimates. Since we consider a large class of general influence functions $\phi$, the crucial commutator estimates need to be extended to general L\'evy operators $\LL$ that are related to $\phi$. Moreover, a sufficient condition that ensures the global regularity is shown, which extends the result in \cite{DKRT, KT18} to a more general setting.
The sufficient condition, described in \eqref{eq:bc}, is related to the boundedness of the density oscillation $\|\partial_x\rho(\cdot,t)\|_{L^\infty}$ for the case $\alpha\in (0,1]$ and of $\|\partial^2_x\rho(\cdot,t)\|_{L^\infty}$ for the case $\alpha\in(1,2)$. Finally, we prove that these density oscillations can be bounded in any finite time, using the method of modulus of continuity invented in \cite{KNV} and applied to the Euler-alignment system in \cite{DKRT}. We adapt it to the Euler-alignment system \eqref{EA-rho}-\eqref{EA-u} with general influence function $\phi$. There are two major difficulties. First, the case $\alpha\in[1,2)$ does not simply follow the same procedures as the $\alpha\in(0,1)$ case. See Remark~\ref{rmk:Omega} as well as Section~\ref{subsec:rho-lip} for related discussions. Second, with the presence of the misalignment, there is a lack of a uniform lower bound on the density, and thus $\|\partial_x\rho(\cdot,t)\|_{L^\infty}$ and $\|\partial_x^2\rho(\cdot,t)\|_{L^\infty}$ can grow in time. We manage to get a bound on $\|\partial_x\rho(\cdot,t)\|_{L^\infty}$ with double exponential growth in time and a bound on $\|\partial_x^2\rho(\cdot,t)\|_{L^\infty}$ with triple exponential growth in time. These bounds nevertheless suffice for global regularity. However, the solutions could be very unstable as time approaches infinity.

The rest of the paper is organized as follows. In Section \ref{sec:lem}, we state and show some important lemmas, including the critical lower/upper bound estimates on the density and some properties of the L\'evy operator $\LL$. In Section \ref{sec:lwp}, we establish the local well-posedness theory, as well as the blowup criteria. In Section \ref{sec:gwp}, we show global regularity of the considered system, and finish the proof of Theorem \ref{thm:GR-EAS}. In Section \ref{sec:MOCes}, we present the detailed proof of auxiliary lemmas related to the modulus of continuity, which play crucial roles in the global regularity part.
Section \ref{sec:append} is an appendix section, which deals with the commutator estimates that are useful in the local well-posedness.

\section{Auxiliary lemmas}\label{sec:lem}

\subsection{Reformulation of the Euler-alignment system}
The alignment force in \eqref{EA-u} is known to have a commutator structure. By using the expression formula \eqref{Lop-exp} of the L\'evy operator $\LL$, it can be written as
\begin{equation*}
\int_\R\phi(x-y)(u(y)-u(x))\rho(y)\dd y = - \big(\LL (\rho u) - u \LL(\rho) \big) = - [ \LL, u ]\rho.
\end{equation*}
Note that in the case $\phi=\phi_{\alpha,\beta}$ given by \eqref{phi-albe}, the corresponding operator is $\LL=\Lambda^\alpha - \mu\Lambda^\beta$.

To capture the commutator structure, we follow the idea of \cite{CCTT}. Apply the operator $\LL$ to the $\rho$-equation \eqref{EA-rho} and get
\begin{equation*}
\partial_t \LL \rho = -\partial_x \LL (\rho\, u) = - \partial_x ([\LL,u ]\rho) - \partial_x (u\,(\LL\rho)).
\end{equation*}
Apply $\partial_x$ to the $u$-equation \eqref{EA-u} and get
\begin{equation*}
\partial_t (\partial_x u) + \partial_x(u\,\partial_x u) = - \partial_x ([\LL,u]\rho).
\end{equation*}
Combining these two equations yields a nice cancelation of the term $\partial_x ([\LL,u]\rho)$. Define
\begin{equation}\label{defG}
G=\partial_xu - \mathcal{L}\rho.
\end{equation}
We get
\begin{equation}\label{EA-G}
\partial_t G + \partial_x (G\,u) =0.
\end{equation}
The Euler-alignment system \eqref{EA-rho}-\eqref{EA-u} can be reformulated as the following system for $\rho$ and $G$:
\begin{equation}\label{EAS-ref}
\begin{cases}
\partial_t \rho + \partial_x (\rho\, u)=0, \\
\partial_t G + \partial_x (G\, u)=0, \\
\partial_x u = G + \LL\rho.
\end{cases}
\end{equation}
For smooth solutions $(\rho, G)$, we can reconstruct the velocity $u$ from \eqref{EAS-ref} as follows.
First, by integrating equation \eqref{EA-rho} in $x$, we get the conservation of mass
\begin{equation}\label{b-rho0}
\int_\T \rho(x,t)\dd x = \int_\T \rho_0(x) \dd x=:\bar{\rho}_0,
\end{equation}
where we denote by $\bar{\rho}_0$ the average density in $\T$. Since $G$ also satisfies the continuity equation \eqref{EA-G}, we have
\[ \int_\T G(x,t) \dd x = \int_\T G_0(x) \dd x = \int_\T\partial_xu_0(x)\dd x+\int_\T\int_\T\phi^S(x-y)(\rho_0(x)-\rho_0(y))\dd x\dd y=0.\]
We also set
\begin{equation}\label{theta}
\theta(x,t)= \rho(x,t) -\bar{\rho}_0,
\end{equation}
so that $\int_\T \theta(x,t) \dd x =0$. Thus we deduce that the primitive functions of $\theta(x,t)$ and $G(x,t)$ are periodic. Denote by $(\varphi,\psi)$ the mean-free primitive functions of $(\theta,G)$:
\begin{equation}\label{the-varph}
\theta(x,t)= \partial_x \varphi(x,t),\quad \int_\T \varphi(x,t)\dd x =0,
\end{equation}
and
\begin{equation}\label{G-varpsi}
G(x,t) = \partial_x \psi(x,t),\quad \int_\T \psi(x,t) \dd x =0.
\end{equation}
Hence, from the relation \eqref{defG}, we see that
\begin{equation}\label{u-exp}
u(x,t) = \psi(x,t) + \LL \varphi(x,t) + I_0(t).
\end{equation}
In order to determine $I_0(t)$, we make use of the conservation of momentum. Indeed, from the system \eqref{EA-rho}-\eqref{EA-u}, we have the dynamics of the momentum
\begin{equation}\label{eq:momentum}
\partial_t(\rho u) + \partial_x(\rho u^2) = \rho(x)\int_\T \phi^S(x-y) (u(y)-u(x)) \rho(y) \dd y.
\end{equation}
Integrating \eqref{eq:momentum} over $\T$ and using the fact that $\phi^S$ is an even function on $\T$ yields
\begin{equation*}
\frac{\dd}{\dd t}\int_\T \rho u\,\dd x = \int_\T \int_\T \phi^S(x-y) (u(y)-u(x)) \rho(x) \rho(y) \dd x \dd y =0,
\end{equation*}
thus we get
\begin{equation*}
\int_\T \rho(x,t) u(x,t) \dd x = \int_\T \rho_0(x) u_0(x) \dd x.
\end{equation*}
Such conservation can be used to determine $I_0(t)$ in \eqref{u-exp}:
\begin{equation*}
I_0(t) = \frac{1}{\bar{\rho}_0 } \left(\int_\T \rho_0(x) u_0(x) \dd x - \int_\T \rho(x,t) \psi(x,t) \dd x - \int_\T \rho(x,t) \LL\varphi(x,t)\dd x\right).
\end{equation*}
From \eqref{theta}-\eqref{the-varph} and the properties of the L\'evy operator $\LL$ (see e.g. \eqref{LKf}), we infer that
\begin{equation*}
\int_\T \rho(x,t) \LL \varphi(x,t) \dd x = \bar{\rho}_0\int_\T \LL\varphi(x,t)\dd x + \int_\T \partial_x\varphi(x,t) \LL\varphi(x,t) \dd x =0,
\end{equation*}
thus
\begin{equation}\label{I0t}
I_0(t) = \frac{1}{\bar{\rho}_0 } \left(\int_\T \rho_0(x) u_0(x) \dd x - \int_\T \rho(x,t) \psi(x,t) \dd x \right).
\end{equation}
In particular, if $G(x,t)\equiv 0$ then we have $\psi(x,t) \equiv 0$ and $I_0(t)$ is just a time-independent constant.

\subsection{Bounds on the density}\label{subsec:bdd-rho}
We first derive the crucial lower bound on $\rho$, which guarantees that no vacuum is created in finite time.
\begin{lemma}\label{lem:lowbdd}
Assume the influence function $\phi(x)=\phi(-x)\in C^4(\R\setminus\{0\})$ satisfies assumptions (A1) and (A2) with $\alpha\in (0,2)$. Let $(\rho,u)$ be a smooth solution to the Euler-alignment system \eqref{EA-rho}-\eqref{EA-init} for $0\leq t\leq T$, with smooth periodic initial data $(\rho_0,u_0)$ satisfying $\min_\T\rho_0(x) >0 $. Then, there exists a positive constant $M_0>0$, depending only on $c_3$ and the initial data, such that
\begin{equation}\label{eq:lowbdd}
\rho(x,t) \geq M_0 e^{-c_3\bar{\rho}_0 t},\quad \forall~x\in \T, ~0\leq t\leq T.
\end{equation}
\end{lemma}
\begin{proof}
We first observe that the quantity $F=G/\rho$ satisfies the following transport equation
\begin{equation}\label{Feq}
\partial_t F + u\,\partial_x F=0,
\end{equation}
which yields
\begin{equation}\label{Flinf-es}
\|F(t)\|_{L^\infty(\T)} \leq \|F_0\|_{L^\infty(\T)} = \Big\|\frac{\partial_x u_0 -\LL \rho_0 }{\rho_0}\Big\|_{L^\infty(\T)} <\infty.
\end{equation}
Note also that $\rho$ satisfies
\begin{equation}\label{rho-eq2}
\partial_t \rho + u\,\partial_x \rho = -\rho\partial_x u = - \rho \LL\rho - \rho^2 F.
\end{equation}
Let $T_*\leq T$ be the maximal time such that $\min_{x\in \T}\rho(x,t)$ remains strictly positive. The positivity of $T_*$ is ensured by $\min_{x\in \T}\rho_0>0$ and the smoothness of $\rho$. For every $0\leq t\leq T_*$, let $\underline{x}\in \T$ be a point at which $\theta(\cdot,t)$ attains its minimum ($\underline{x}$ may depend on $t$ and is not necessarily unique). By virtue of formulas \eqref{Lop-exp} and \eqref{phi-S}, we see that
\begin{equation*}
- \mathcal{L}\rho(\underline{x},t) = \mathrm{p.v.}\int_\R \phi(\underline{x}-y) (\rho(y,t) - \rho(\underline{x},t)) \dd y = \mathrm{p.v.}\int_\T\phi^S(y) (\rho(y + \underline{x},t) - \rho(\underline{x},t)) \dd y,
\end{equation*}
where $\phi^S$ satisfies estimates \eqref{phi-s-assum1}-\eqref{phi-s-assum2}. Since $-c_3<0$ is a lower bound of $\phi^S$ on $\T$ and $\rho(y+\underline{x},t) - \rho(\underline{x},t)\geq 0$, we have
\begin{equation}\label{Lrho-es1}
\begin{split}
- \mathcal{L}\rho(\underline{x},t) \geq -c_3 \int_\T \big( \rho(y+ \underline{x},t) - \rho(\underline{x},t) \big)\dd y = - c_3 \big( \bar{\rho}_0 - \rho(\underline{x},t)\big).
\end{split}
\end{equation}
Combining \eqref{rho-eq2} with \eqref{Flinf-es} and \eqref{Lrho-es1}, we obtain
\begin{equation*}
\partial_t \rho(\underline{x},t) \geq -c_3 \bar{\rho}_0 \,\rho(\underline{x},t) - \|F_0\|_{L^\infty} \rho(\underline{x},t)^2.
\end{equation*}
Direct calculation then yields
\[\min_{x\in \T}\rho(x,t)\geq\frac{c_3\bar{\rho}_0}{(c_3\bar{\rho}_0 (\min_\T \rho_0)^{-1}+\|F_0\|_{L^\infty})e^{c_3\bar{\rho}_0t}-\|F_0\|_{L^\infty}} \geq\frac{c_3\bar{\rho}_0}{c_3\bar{\rho}_0 (\min_\T \rho_0)^{-1}+\|F_0\|_{L^\infty}}e^{-c_3\bar{\rho}_0t},\]
for any $0\leq t\leq T_*$. Moreover, the above formula implies that $T_*=T$. So \eqref{eq:lowbdd} holds as long as the solution stays smooth.
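The ``direct calculation'' here is the classical Riccati substitution: setting $y(t)=\rho(\underline{x},t)$, $a=c_3\bar{\rho}_0$, $b=\|F_0\|_{L^\infty}$ and $z=1/y$, the comparison equation $y'=-ay-by^2$ becomes the linear equation $z'=az+b$, whose solution gives $y(t)=a\big((a/y_0+b)e^{at}-b\big)^{-1}\geq \frac{a}{a/y_0+b}\,e^{-at}$. A quick numerical sanity check of this closed form against a direct integration (the values of $a$, $b$, $y_0$ below are arbitrary):

```python
import math

def closed_form(t, y0, a, b):
    """Exact solution of y' = -a*y - b*y**2, y(0) = y0."""
    return a / ((a / y0 + b) * math.exp(a * t) - b)

def rk4(y0, a, b, t_end, n_steps):
    """Classical RK4 integration of the same Riccati equation."""
    f = lambda y: -a * y - b * y * y
    y, h = y0, t_end / n_steps
    for _ in range(n_steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

a, b, y0, T = 0.7, 1.3, 0.5, 5.0
y_exact = closed_form(T, y0, a, b)
y_num = rk4(y0, a, b, T, 2000)
print(y_exact, y_num)   # the two values agree closely
```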
\end{proof}

\begin{remark}\label{rmk:lowbdd}
If the periodic influence function $\phi^S$ has a non-negative lower bound on $\T$, that is,
\[ \phi^S(x) \geq \phi_m,\quad \forall~ x\in \T,\quad \textrm{with some constant $\phi_m\geq0$}, \]
a similar estimate as \eqref{Lrho-es1} implies
\[- \mathcal{L}\rho(\underline{x},t) \geq \phi_m \big( \bar{\rho}_0 - \rho(\underline{x},t)\big).\]
Consequently, we have
\[\partial_t\rho(\underline{x},t)\geq \phi_m\bar{\rho}_0\,\rho(\underline{x},t) - \big(\|F_0\|_{L^\infty} + \phi_m \bar{\rho}_0 \big) \rho(\underline{x},t)^2,\]
where the right-hand side stays positive if $\rho(\underline{x},t)<\frac{\phi_m\bar{\rho}_0}{\phi_m\bar{\rho}_0+\|F_0\|_{L^\infty}}$. This leads to a uniform-in-time lower bound on $\rho$:
\begin{equation*}
\min_{\T\times [0,T^*]}\rho(x,t) \geq \min\Big\{\min_\T \rho_0, ~\frac{\phi_m \bar{\rho}_0}{\|F_0\|_{L^\infty} + \phi_m \bar{\rho}_0} \Big\}.
\end{equation*}
Compared with Lemma~\ref{lem:lowbdd}, we observe a major difference between systems with or without misalignment. The lack of a uniform-in-time lower bound on the density brings additional difficulties to the local and global well-posedness theory.
\end{remark}

Next we show a uniform upper bound on the density $\rho$.
\begin{lemma}\label{lem:uppbdd}
Let the assumptions of Lemma \ref{lem:lowbdd} be satisfied. Then, there exists a positive constant $M_1>0$, depending on $\alpha$, $\rz$, $c_1$, $c_3$, and $(\rho_0,u_0)$ but independent of $T$, such that
\begin{equation}\label{eq:uppbdd}
\rho(x,t) \leq M_1, \quad \forall\, x\in \T,~ 0\leq t\leq T.
\end{equation}
\end{lemma}
\begin{proof}
For every $0\leq t \leq T$, let $\overline{x}\in\T$ be a point at which the smooth solution $\theta(\cdot,t)$ attains its maximum ($\overline{x}$ may depend on $t$ and is not necessarily unique).
We also have \eqref{rho-eq2} as the equation of $\rho$, and we first intend to derive a lower bound of $\LL \rho(\overline{x},t)$, which has the following formula:
\begin{equation*}
\mathcal{L}\rho(\overline{x},t) = \mathrm{p.v.}\int_\T\phi^S(z) (\rho(\overline{x},t) - \rho(\overline{x}+z,t) ) \dd z.
\end{equation*}
The estimates \eqref{phi-s-assum1}-\eqref{phi-s-assum2} of $\phi^S$ ensure that
\begin{align}\label{eq:Lx-es}
\,\mathcal{L}\rho(\overline{x},t) \geq & \,\mathrm{p.v.} \int_{|z|\leq r_0} \frac{c_1}{2|z|^{1+\alpha}}(\rho(\overline{x},t) - \rho(\overline{x}+z,t) ) \dd z + \int_{r_0\leq |z|\leq \frac{1}{2}} (-c_3) (\rho(\overline{x},t) - \rho(\overline{x}+z,t) )\dd z \nonumber \\
\geq & \,\mathrm{p.v.} \int_{|z|\leq r_0} \frac{c_1}{2|z|^{1+\alpha}}(\theta(\overline{x},t) - \theta(\overline{x}+z,t) ) \dd z - c_3 (1-2r_0) \rho(\overline{x},t) .
\end{align}
In order to estimate the integral on the right-hand side of \eqref{eq:Lx-es}, we use the idea of the nonlinear maximum principle originating in \cite{ConV}. Let $\varpi\in C^\infty(\R)$ be a test function such that
\begin{equation}\label{chi-prop}
\textrm{$0\leq \varpi\leq 1$, \quad $\varpi\equiv 0$ on $[-1/2,1/2]$,\quad $\varpi\equiv 1$ on $\R\setminus [-1,1]$, \quad and \quad $\|\varpi'\|_{L^\infty(\R)}\leq4$.}
\end{equation}
Denote $\varpi_r(x)= \varpi(\frac{x}{r})$ for every $r>0$. Let $r\in (0,\frac{r_0}{2})$ be a constant to be chosen later.
In view of \eqref{the-varph} and the fact that
\begin{equation}\label{vphi-Linf-es}
\|\varphi(t)\|_{L^\infty(\R)}=\|\varphi(t)\|_{L^\infty(\T)}\leq \|\theta(t)\|_{L^1(\T)} \leq \|\rho(t)\|_{L^1(\T)} + \bar{\rho}_0=2\bar{\rho}_0,
\end{equation}
we integrate by parts to infer that
\begin{align*}
\mathcal{L}\rho(\overline{x},t) \geq & \, \mathrm{p.v.} \int_\R \frac{c_1}{2|z|^{1+\alpha}} \varpi_r(z)(1-\varpi_{r_0}(z)) \big(\theta(\overline{x},t) - \theta(\overline{x}+z,t) \big) \dd z - c_3 \rho(\overline{x},t) \\
\geq & \,\theta(\overline{x},t) \int_{r\leq |z|\leq \frac{r_0}{2} } \frac{c_1}{2|z|^{1+\alpha}} \dd z - \int_\R \frac{c_1}{2 |z|^{1+\alpha}} \varpi_r(z) (1-\varpi_{r_0}(z)) \partial_z \varphi(\overline{x}+z,t) \dd z - c_3 \rho(\overline{x},t) \\
\geq &\, \frac{c_1}{\alpha}(\rho(\overline{x},t) -\bar{\rho}_0) \Big(r^{-\alpha} - \Big(\frac{r_0}{2}\Big)^{-\alpha} \Big)- \frac{c_1}{2}\|\varphi(t)\|_{L^\infty(\R)} \int_\R \Big|\partial_z\Big(\frac{\varpi_r(z)(1-\varpi_{r_0}(z))}{|z|^{1+\alpha}}\Big)\Big|\dd z -c_3\rho(\overline{x},t) \\
\geq & \,\frac{c_1}{2\alpha} \Big(r^{-\alpha} - \Big(\frac{r_0}{2}\Big)^{-\alpha} \Big) \rho(\overline{x},t) -\frac{80c_1}{\alpha}\bar{\rho}_0r^{-(1+\alpha)}-c_3 \rho(\overline{x},t),
\end{align*}
where in the last inequality we assume $\rho(\overline{x},t)\geq2\bar{\rho}_0$ so that $\rho(\overline{x},t)-\bar{\rho}_0\geq\frac12\rho(\overline{x},t)$, and also
\[ \int_\R\Big|\partial_z\Big(\frac{\varpi_r(z)(1-\varpi_{r_0}(z))}{|z|^{1+\alpha}}\Big)\Big|\dd z \leq 2\left[\left(\frac4r+\frac4{r_0}\right)\cdot\frac{1}{\alpha}\left(\frac{r}{2}\right)^{-\alpha}+\left(\frac{r}{2}\right)^{-(1+\alpha)}\right]\leq \frac{80}{\alpha}r^{-(1+\alpha)}.\]
Now, let us pick $r$ satisfying $\frac{c_1}{4\alpha} \rho(\overline{x},t) r^{-\alpha} = \frac{80c_1}{\alpha}\bar{\rho}_0r^{-(1+\alpha)}$, that is,
\[r = \frac{320\bar{\rho}_0}{\rho(\overline{x},t)},\]
and we may also assume that $\rho(\overline{x},t)> \frac{640 \bar{\rho}_0}{r_0}$ so that
$r\in (0, \frac{r_0}{2})$; it then follows that
\begin{equation}\label{Lrho-lbd}
\mathcal{L}\rho(\overline{x},t) \geq \frac{ c_1}{5\cdot 10^5 \alpha} \bar{\rho}_0^{-\alpha} \rho(\overline{x},t)^{1+\alpha} - \Big( c_3 + \frac{2 c_1}{\alpha} r_0^{-\alpha}\Big)\rho(\overline{x},t).
\end{equation}
Now from the equation \eqref{rho-eq2}, by using \eqref{Flinf-es} and \eqref{Lrho-lbd}, we directly have
\[ \partial_t \rho(\overline{x},t) \leq - \rho(\overline{x},t) \LL \rho(\overline{x},t) + \|F(t)\|_{L^\infty} \rho(\overline{x},t)^2 \leq - \frac{c_1}{5\cdot 10^5 \alpha} \bar{\rho}_0^{-\alpha} \rho(\overline{x},t)^{2+\alpha} + \Big(c_3 + \frac{2 c_1}{\alpha} r_0^{-\alpha}+ \|F_0\|_{L^\infty}\Big) \rho(\overline{x},t)^2. \]
If we additionally assume that $\rho(\overline{x},t)$ is large enough so that
\begin{equation}\label{rho-barx-cd}
\rho(\overline{x},t) \geq \Big(10^6c_1^{-1}\big(c_3\alpha + 2 c_1 r_0^{-\alpha} +\|F_0\|_{L^\infty}\alpha \big)\Big)^{\frac1\alpha}\bar{\rho}_0,
\end{equation}
we get
\begin{align*}
\partial_t \rho(\overline{x},t)\leq - \frac{c_1}{10^6 \,\alpha} \bar{\rho}_0^{-\alpha} \rho(\overline{x},t)^{2+\alpha} <0.
\end{align*}
Therefore, noting that the condition \eqref{rho-barx-cd} implies $\rho(\overline{x},t)\geq \max\{2, 1000\,r_0^{-1}\} \bar{\rho}_0$, we conclude the desired uniform-in-time upper bound
\begin{equation*}
\rho(\overline{x},t) \leq \max\left\{\max_\T \rho_0,\,\, \bar{\rho}_0\cdot \Big(10^6c_1^{-1}\big(c_3\alpha + 2 c_1 r_0^{-\alpha} +\|F_0\|_{L^\infty}\alpha \big)\Big)^{\frac1\alpha}\right\}.
\end{equation*}
\end{proof}

As a direct consequence of Lemmas \ref{lem:lowbdd} and \ref{lem:uppbdd}, we see that
\begin{equation}\label{I0t-bdd}
|I_0(t)|\leq C,\quad \forall t\in[0,T],
\end{equation}
with $C$ depending only on the influence function $\phi$ and the initial data $(\rho_0,u_0)$.
Indeed, in light of relation \eqref{G-varpsi} and estimates \eqref{Flinf-es}, \eqref{eq:uppbdd}, we get \begin{equation}\label{psi-Linf-es} \|\psi(t)\|_{L^\infty(\T)}\leq \|G(t)\|_{L^\infty} \leq \|F(t)\|_{L^\infty} \|\rho(t)\|_{L^\infty} \leq M_1 \|F_0\|_{L^\infty} , \end{equation} thus from \eqref{I0t} and \eqref{b-rho0} it yields \begin{equation*} |I_0(t)| \leq \frac{1}{\bar{\rho}_0} \Big(\|u_0\|_{L^\infty} \int_\T\rho_0(x)\dd x + \|\psi(t)\|_{L^\infty} \int_\T \rho(x,t)\dd x\Big)\leq \|u_0\|_{L^\infty} + M_1 \|F_0\|_{L^\infty}. \end{equation*} \subsection{Some properties of L\'evy operator $\LL$} Throughout this subsection, we assume that $\LL$ is the L\'evy operator defined by \eqref{Lop-exp} with kernel function $\phi(x)=\phi(-x)\in C^4(\R\setminus\{0\})$ satisfying assumptions (A1)(A2) with $\alpha\in (0,2)$. By taking the Fourier transform on $\mathcal{L}$, we get \begin{equation}\label{Lsymb} \widehat{\LL \, f} (\zeta) = A(\zeta) \widehat{f}(\zeta),\quad \forall \zeta\in\R, \end{equation} where the symbol $A(\zeta)$ is given by the L\'evy-Khintchine formula (see \cite[Eq. 3.217]{Jacob}) \begin{equation}\label{LKf} A(\zeta) := \int_{\R\setminus \{0\}}\left( 1- \cos(\zeta\, x)\right) \phi(x)\dd x. \end{equation} The next lemma concerns the pointwise lower/upper bound estimates of the symbol. \begin{lemma}\label{lem:symb} The symbol $A(\zeta)$ given by \eqref{LKf} of the considered L\'evy operator $\LL$ satisfies that \begin{equation}\label{A-est} A(\zeta)\geq C'^{-1} |\zeta|^{\alpha} - C'/2,\quad \forall \zeta\in\R, \end{equation} and \begin{equation}\label{A-est2} A(\zeta)\leq C |\zeta|^{\alpha} + C,\quad \forall \zeta\in\R, \end{equation} where $\alpha\in (0,2)$ and $C$, $C'$ are positive constants depending only on $\alpha$ and $a_0,c_1,c_2$. \end{lemma} \begin{remark}\label{rmk:symb} From estimate \eqref{A-est}, it is clear that $C'+ A(\zeta) $ is strictly positive. 
We thus can define the operator $\sqrt{C'\mathrm{Id} + \LL}$ as the following multiplier operator \begin{equation}\label{def:sqrtL} \mathcal{F}\big(\sqrt{C'\mathrm{Id} + \LL\,} f\big)(\zeta) = \sqrt{C' + A(\zeta)} \widehat{f}(\zeta),\quad \forall\zeta\in\R. \end{equation} \end{remark} \begin{proof}[Proof of Lemma \ref{lem:symb}] Recalling that for every $\alpha\in (0,2)$ we have (e.g., see \cite[Eq. (3.219)]{Jacob}) \begin{equation}\label{eq:fact1} |\zeta|^\alpha = c_{\alpha}\; \int_{\R\setminus\{0\}}\left( 1-\cos(x\, \zeta)\right) \frac{1}{|x|^{1+\alpha}} \dd x,\quad \forall \zeta\in\R, \end{equation} and by virtue of the conditions \eqref{phi-assum1} and \eqref{phi-assum2}, we obtain \begin{align*} A(\zeta) & \geq c_1^{-1}\; \int_{0<|x|\leq a_0} \left( 1-\cos(x\, \zeta)\right) \frac{1}{|x|^{1+\alpha}} \dd x - \int_{|x|\geq a_0} \big(1-\cos(x\,\zeta)\big) |\phi(x)| \dd x \\ & \geq c_1^{-1}\; \int_{|x|>0} \left( 1-\cos(x\, \zeta)\right) \frac{1}{|x|^{1+\alpha}} \dd x - c_1^{-1} \int_{|x|\geq a_0} \big(1-\cos(x\,\zeta)\big) \frac{1}{|x|^{1+\alpha}} \dd x - \int_{|x|\geq a_0} |\phi(x)|\dd x \\ & \geq c_1^{-1} c_\alpha^{-1} |\zeta|^\alpha - \frac{2 }{\alpha} c_1^{-1} a_0^{-\alpha} - c_2 , \end{align*} and \begin{align*} A(\zeta) & \leq c_1\; \int_{0<|x|\leq a_0} \left( 1-\cos(x\, \zeta)\right) \frac{1}{|x|^{1+\alpha}} \dd x + \int_{|x|\geq a_0} \big(1-\cos(x\,\zeta)\big) |\phi(x)| \dd x \\ & \leq c_1\; \int_{|x|>0} \left( 1-\cos(x\, \zeta)\right) \frac{1}{|x|^{1+\alpha}} \dd x + 2 \int_{|x|\geq a_0} |\phi(x)| \dd x \leq c_1 c_\alpha^{-1} |\zeta|^\alpha + 2 c_2 , \end{align*} as desired. \end{proof} The differentiability property of $\phi(x)$ in assumptions (A1) and (A2) is mainly used to show the following property of the symbol $A(\zeta)$.
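\begin{remark}
For illustration, in the model case $\phi(x)= |x|^{-1-\alpha}$, which satisfies assumptions (A1) and (A2) with $c_1=1$ for any fixed $a_0>0$, the identity \eqref{eq:fact1} shows that the symbol reduces exactly to
\begin{equation*}
A(\zeta) = \int_{\R\setminus\{0\}} \big(1-\cos(\zeta\, x)\big)\, \frac{1}{|x|^{1+\alpha}}\,\dd x = c_\alpha^{-1}\, |\zeta|^\alpha,
\end{equation*}
that is, $\LL$ coincides (up to the constant $c_\alpha^{-1}$) with the fractional Laplacian $\Lambda^\alpha=(-\Delta)^{\frac{\alpha}{2}}$, and the bounds \eqref{A-est}-\eqref{A-est2} are attained up to constants in this case.
\end{remark}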
\begin{lemma}\label{lem:A-diff} The symbol $A(\zeta)$ given by \eqref{LKf} of the considered L\'evy operator $\LL$ satisfies that for $n=1,2,3,4$, \begin{equation}\label{ndAzeta-es0} \Big|\frac{\dd^n A(\zeta)}{\dd \zeta^n} \Big| \leq \begin{cases} C |\zeta|^{\alpha-n},\quad & \textrm{for }\;|\zeta|\geq \max\{a_0^{-1},1\}, \\ C |\zeta|^{-n},\quad & \textrm{for }\;|\zeta|\leq \max\{a_0^{-1},1\}, \end{cases} \end{equation} where $C>0$ is a constant depending only on coefficients $\alpha,a_0 ,c_1,c_2$ in $\LL$. \end{lemma} \begin{remark}\label{rmk-lem:Adif} Based on Lemmas \ref{lem:symb} and \ref{lem:A-diff}, we find that for $n=1,2,3,4$, \begin{equation}\label{nd-sqrAz-es} \Big|\frac{\dd^n\sqrt{C' + A(\zeta)}}{\dd \zeta^n}\Big| \leq \begin{cases} C |\zeta|^{\frac{\alpha}{2}-n},\quad & \textrm{for }\;|\zeta| \geq \max\{a_0^{-1}, 1\}, \\ C |\zeta|^{-n},\quad & \textrm{for }\;|\zeta| \leq \max\{a_0^{-1}, 1\}, \end{cases} \end{equation} where $C>0$ is a constant depending only on coefficients $\alpha,C',a_0 ,c_1,c_2$. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:A-diff}] Let $\varpi(x)\in C^\infty$ be a test function satisfying \eqref{chi-prop}, and set $\varpi_r(x)= \varpi(\frac{x}{r})$ with $r > 0$. 
From \eqref{LKf} and integration by parts, we infer that \begin{align}\label{A'zeta-es2} |A'(\zeta)| & = \Big|\int_{\R\setminus \{0\}} \partial_\zeta(1-\cos(x\, \zeta)) \,\varpi_r(x) \phi(x) \dd x + \int_{\R\setminus \{0\}} \partial_\zeta(1-\cos(x\,\zeta)) \,\big(1-\varpi_r(x)\big) \phi(x) \dd x \Big| \nonumber \\ & \lesssim \frac{1}{|\zeta|} \int_{\R\setminus\{0\}}(1-\cos(x\, \zeta)) \big|\partial_x \big(x\,\varpi_r(x) \phi(x)\big)\big| \dd x + \int_{0<|x|\leq r} |x| |\sin (x\,\zeta)| \big(1-\varpi_r(x)\big) |\phi(x)| \,\dd x \nonumber \\ & \lesssim \frac{1}{|\zeta|} \int_{\R\setminus \{0\}} \Big( \varpi_r(x) |\phi(x)| + \frac{1}{r} \Big|\varpi'(\frac{x}{r})\Big| \, |x|\,|\phi(x)| + \varpi_r(x) |x|\, |\phi'(x)|\Big)\dd x \nonumber \\ & \mbox{}\quad +\int_{0<|x|\leq r, |x\,\zeta|\leq 1} |x|^2 |\zeta| |\phi(x)|\dd x + \int_{0<|x|\leq r, |x\,\zeta|\geq 1} |x| |\phi(x)|\dd x. \end{align} If the frequency $|\zeta|$ is large enough, that is, $|\zeta|\geq \max\{a_0^{-1},1\}$, we let $r\leq \min\{a_0, |\zeta|^{-1}, 1\} = |\zeta|^{-1}$ and thus \begin{align*} |A'(\zeta)| \lesssim &\frac{1}{|\zeta| r} \int_{\frac{r}{2}\leq |x| \leq r} \frac{c_1}{|x|^\alpha}\dd x + \frac{1}{|\zeta|} \int_{\frac{r}{2}\leq |x|\leq a_0} \frac{c_1}{|x|^{1+\alpha}} \dd x + \frac{1}{|\zeta|} \int_{|x|\geq a_0} \big(|\phi(x)| + |x| |\phi'(x)| \big) \dd x \\ & + |\zeta| \int_{0<|x|\leq r, |x\,\zeta|\leq 1} |x|^2|\phi(x)|\dd x \\ \lesssim & \frac{1}{|\zeta| r^\alpha} + \frac{1}{|\zeta|} + |\zeta| r^{2-\alpha} \lesssim \frac{1}{|\zeta| r^\alpha} + |\zeta| r^{2-\alpha}. \end{align*} By choosing $r$ to be $ \frac{1}{2 |\zeta|}$, we conclude that $|A'(\zeta)| \leq C |\zeta|^{\alpha-1}$.
If $|\zeta|$ is such that $|\zeta|\leq \max\{a_0^{-1},1\}$, we set $r= \min\{a_0, 1\}$ (which satisfies $r\leq |\zeta|^{-1}$), and from \eqref{A'zeta-es2} we directly have \begin{equation}\label{A'zeta-es3} |A'(\zeta)| \lesssim \frac{1}{|\zeta|}\int_{\frac{r}{2}\leq |x| \leq a_0} \frac{c_1}{|x|^{1+\alpha}} \dd x + \frac{1}{|\zeta|}\int_{|x|\geq a_0} \big(|\phi(x)| + |x| |\phi'(x)| \big) \dd x + \int_{0<|x|\leq r} \frac{c_1}{|x|^{\alpha -1}} \dd x \lesssim |\zeta|^{-1}. \end{equation} Hence \eqref{ndAzeta-es0} with $n=1$ follows. Concerning higher-order derivatives $A^{(n)}(\zeta)$, $n=2,3,4$, by using conditions \eqref{phi-assum1}-\eqref{phi-assum1.2} and \eqref{phi-assum2}, we obtain \begin{align*} & |A^{(n)}(\zeta)| = \Big|\int_{\R\setminus \{0\}} \partial_\zeta^n(1-\cos(x\, \zeta)) \,\varpi_r(x) \phi(x) \dd x + \int_{\R\setminus \{0\}} \partial_\zeta^n(1-\cos(x\,\zeta)) \,\big(1-\varpi_r(x)\big) \phi(x) \dd x \Big| \\ & \lesssim \frac{1}{|\zeta|^n} \int_{\R\setminus\{0\}} (1-\cos(x \,\zeta)) \big|\partial_x^n \big( x^n\varpi_r(x) \phi(x)\big) \big| \dd x + \int_{0<|x|\leq r} |x|^n \big(1-\varpi_r(x)\big) |\cos (x\,\zeta + \frac{n\pi}{2})| |\phi(x)| \,\dd x \\ & \lesssim \frac{1}{|\zeta|^n} \int_{|x|\geq \frac{r}{2}} \Big( |\phi(x)| + |x| |\phi'(x)| +\cdots + |x|^n |\phi^{(n)}(x)| \Big)\dd x + \int_{0<|x| \leq r} |x|^n |\phi(x)| \dd x . \end{align*} If $\zeta$ is such that $|\zeta|\geq \max\{a_0^{-1},1\}$, we set $r = 2|\zeta|^{-1}$ (which satisfies $r \leq \min\{a_0,1\}$), and then \begin{align*} |A^{(n)}(\zeta)| & \lesssim \frac{1}{|\zeta|^n} \int_{\frac{r}{2}\leq|x|\leq a_0} \frac{c_1}{|x|^{1+\alpha}} \dd x + \frac{1}{|\zeta|^n} \sum_{j=0}^n \int_{|x|\geq a_0} |x|^j |\phi^{(j)}(x)| \dd x + \int_{0<|x|\leq r} c_1|x|^{n-1-\alpha} \dd x \\ & \lesssim \frac{1}{|\zeta|^n r^\alpha} + \frac{1}{|\zeta|^n} + r^{n-\alpha} \lesssim |\zeta|^{\alpha-n} . 
\end{align*} If $|\zeta|\leq \max\{a_0^{-1},1\}$, we also let $r= \min\{a_0,1\}$, and we obtain $|A^{(n)}(\zeta)| \lesssim |\zeta|^{-n}$ arguing as in the derivation of \eqref{A'zeta-es3}. Hence the desired estimate \eqref{ndAzeta-es0} follows by combining the above two estimates. \end{proof} As an application of Lemma \ref{lem:A-diff}, we have the $L^\infty$-boundedness property of the L\'evy operator $\LL$. \begin{lemma}\label{lem:Lop-Linf} There exists a constant $C>0$ depending only on the coefficients $\alpha,a_0,c_1,c_2$ such that the considered L\'evy operator $\LL$ satisfies \begin{equation}\label{eq:Lop-Linf} \|\LL f\|_{L^\infty} \leq C \|f\|_{B^\alpha_{\infty,1}}, \end{equation} where $B^\alpha_{\infty,1}$ denotes the Besov space (see \eqref{Besov-spr} below for definition). \end{lemma} \begin{proof} We here adopt the notations of Littlewood-Paley theory introduced in the appendix. Setting $k_0 := [a_0^{-1}]+1$ and using estimate \eqref{ndAzeta-es0}, the result of \cite[Lemma 2.2]{BCD11} directly implies that for every $k\geq k_0$ and for every $p\in [1,\infty]$, \begin{align*} \|\Delta_k \LL f\|_{L^p} \leq C 2^{k \alpha} \|\Delta_k f\|_{L^p}, \end{align*} with $C>0$ a constant depending on the coefficients in $\LL$. For the operator $\chi(2^{-k_0}D) \LL$, its kernel function $\widetilde{h}_{k_0}(x) = C_0 \int_\R e^{i x\,\zeta} \chi(2^{-k_0}\zeta) A(\zeta) \dd \zeta$ indeed satisfies $\|\widetilde{h}_{k_0}\|_{L^1} \leq C$ (due to Lemma \ref{lem:A-diff} and an easy computation as in the proof of \eqref{eq:claim}), so that we have for every $p\in [1,\infty]$, \begin{align*} \| \chi(2^{-k_0}D)\LL f \|_{L^p} \leq C \| f\|_{L^p} . \end{align*} Thus the desired estimate \eqref{eq:Lop-Linf} follows from the high-low frequency decomposition: \begin{align*} \|\LL f\|_{L^\infty} & \leq \|\chi(2^{-k_0}D)\LL f\|_{L^\infty} + \sum_{k\geq k_0} \|\Delta_k \LL f\|_{L^\infty} \\ & \leq C \|f\|_{L^\infty} + C \sum_{k\geq k_0} 2^{k\alpha} \|\Delta_k f\|_{L^\infty} \leq C \|f\|_{B^\alpha_{\infty,1}}.
\end{align*} \end{proof} \section{Local well-posedness}\label{sec:lwp} In this section, we establish the local well-posedness result for the smooth solution to the Euler-alignment system \eqref{EA-rho}-\eqref{EA-init}. \begin{theorem}\label{thm:lwp} Assume the influence function $\phi(x)=\phi(-x)\in C^4(\R\setminus\{0\})$ satisfies assumptions (A1) and (A2) with $\alpha\in (0,2)$. Let $s>\frac{3}{2}$ if $\alpha\in (0,1]$ and let $s>\frac{5}{2}$ if $\alpha\in (1,2)$. Suppose that the initial data $(\rho_0,u_0)$ satisfies \begin{equation*} \rho_0\in H^s(\T),\quad \min_\T \rho_0>0,\quad G_0:= \partial_x u_0 -\LL \rho_0 \in H^{s-\frac{\alpha}{2}}(\T). \end{equation*} Then there exists a time $T_0>0$ depending only on $\phi$ and $(\rho_0,u_0)$ such that the system \eqref{EAS-ref} admits a unique strong solution $(\rho(x,t),u(x,t))$ on $[0,T_0]$, which satisfies \begin{equation*} \rho\in C([0,T_0]; H^s(\T))\cap L^2([0,T_0]; H^{s+\frac{\alpha}{2}}(\T)),\quad u\in C([0,T_0]; H^{s+1-\alpha}(\T)). \end{equation*} Moreover, let $T^*>0$ be the maximal existence time of the above strong solution $(\rho,u)$; then if $T^*<\infty$, we necessarily have \begin{equation}\label{eq:bc} \begin{cases} \int_0^{T^*}\|\partial_x \rho(t)\|_{L^\infty(\T)}^2 \dd t =\infty,& \quad \textrm{for }\;\alpha\in (0,1],\\ \int_0^{T^*} \|\partial_x^2 \rho(t)\|_{L^\infty(\T)}^2 \dd t =\infty,&\quad \textrm{for }\;\alpha\in (1,2). \end{cases} \end{equation} \end{theorem} \begin{proof} The proof of Theorem \ref{thm:lwp} uses the same procedure as that of \cite[Theorem 3.1]{DKRT}, taking into account the misalignment effect. We deal with a general class of L\'evy operators $\LL$ with the larger range of $\alpha$ belonging to $(0,2)$, which adds difficulties to the analysis. We here mainly sketch the proof of the a priori estimates and the blowup criteria \eqref{eq:bc}.
We begin with the equivalent system \eqref{EAS-ref}, and we intend to obtain an a priori estimate of the following quantity \begin{equation}\label{Yt} Y(t):= \|\rho(t)\|_{H^s(\T)}^2 + \|G(t)\|_{H^{s-\frac{\alpha}{2}}}^2 \end{equation} on the small time interval $[0,T_0]$ with $T_0>0$ a constant depending only on the influence function and the initial data. By applying the differential operator $\Lambda^s$ to the equation of $\rho$ in \eqref{EAS-ref}, multiplying both sides by $\Lambda^s \rho$ and then integrating in $x$, it follows that \begin{align}\label{rho-Hs-spl} \frac{1}{2} \frac{\dd}{\dd t}\|\rho(t)\|_{\dot H^s}^2 = & - \int_\T \Lambda^s \rho\cdot \Lambda^s \partial_x(\rho\,u) \dd x \nonumber \\ = & - \int_\T \Lambda^s \rho\cdot (\Lambda^s \partial_x u \, \rho ) \,\dd x - \int_\T \Lambda^s \rho \cdot (u\,\partial_x\Lambda^s \rho) \dd x - \int_\T \Lambda^s \rho \cdot [\Lambda^s\partial_x,u,\rho] \dd x \nonumber \\ =: & I + II +III, \end{align} where in term $III$ we used the notation $[L,f,g]= L(f\,g)- f (L g) - g (L f)$ for some operator $L$.
For term $I$, by using the relation $\partial_x u= G + \LL \rho$, we have the following splitting \begin{align}\label{I-decom} I = & - \int_\T (\Lambda^s \LL \rho)\cdot (\rho\,\Lambda^s\rho) \dd x - \int_\T (\Lambda^{s-\frac{\alpha}{2}} G)\cdot \Lambda^{\frac{\alpha}{2}}(\rho\,\Lambda^s\rho) \dd x \nonumber \\ = & - \int_\T (\Lambda^s \sqrt{C'\mathrm{Id} +\LL\,} \rho)\cdot \sqrt{C' \mathrm{Id} +\LL\,}(\rho\,\Lambda^s\rho) \dd x + C' \int_\T |\Lambda^s \rho|^2 \,\rho \,\dd x - \int_\T (\Lambda^{s-\frac{\alpha}{2}} G)\cdot \Lambda^{\frac{\alpha}{2}}(\rho\,\Lambda^s\rho) \dd x \nonumber \\ = & - \int_\T |\sqrt{C'\mathrm{Id} +\LL\,}\Lambda^s \rho|^2 \rho\, \dd x - \int_\T \sqrt{C'\mathrm{Id}+\LL\,}\Lambda^s\rho\cdot \big([\sqrt{C'\mathrm{Id} +\LL\,},\rho]\Lambda^s \rho\big)\, \dd x + C' \int_\T |\Lambda^s \rho|^2 \,\rho \,\dd x \nonumber \\ & -\int_\T (\Lambda^{s-\frac{\alpha}{2}}G )\cdot \big((\Lambda^{s+\frac{\alpha}{2}}\rho)\, \rho\big)\,\dd x - \int_\T (\Lambda^{s-\frac{\alpha}{2}}G )\cdot \big( [\Lambda^{\frac{\alpha}{2}},\rho]\Lambda^s\rho \big)\,\dd x \nonumber \\ := &\, I_1 + I_2 + I_3 + I_4 + I_5 , \end{align} where the operator $\sqrt{C' \mathrm{Id} + \LL}$ is defined via formula \eqref{def:sqrtL}. For $I_1$, denoting by $\rho_{\min,t}$ the quantity $\min\limits_{\T\times [0,t]}\rho(x,s)$ (which satisfies $\rho_{\min,t}\geq M_0 e^{-c_3\bar{\rho}_0 t}$ from Lemma \ref{lem:lowbdd}), and using Plancherel's theorem and estimate \eqref{A-est}, we find \begin{align}\label{I1-es1} I_1 \leq - \rho_{\min,t} \int_\T |\sqrt{C'\mathrm{Id} + \LL\,}\Lambda^s \rho|^2 \dd x \leq - C'^{-1} \rho_{\min,t} \int_\T |\Lambda^{s + \frac{\alpha}{2}} \rho|^2 \dd x.
\end{align} By virtue of the symbol upper bound \eqref{A-est2}, the commutator estimate \eqref{eq:comm-es0} (with $\epsilon = \frac{2-\alpha}{4}>0$) and Young's inequality, the second term can be estimated as follows: \begin{align}\label{I2-es1} |I_2| & \leq \|\sqrt{C'\mathrm{Id} + \LL\,}\Lambda^s \rho\|_{L^2} \|[\sqrt{C'\mathrm{Id} + \LL\,},\rho]\Lambda^s \rho\|_{L^2} \nonumber \\ & \leq C \big(\|\rho\|_{\dot H^{s+\frac{\alpha}{2}}} + \|\rho\|_{\dot H^s} \big) \|\rho \|_{\dot H^s} \|\rho\|_{C^{\frac{2+\alpha}{4}}} \nonumber \\ & \leq \frac{\rho_{\min,t}}{8C'} \|\rho\|_{\dot H^{s+\frac{\alpha}{2}}}^2 + C (1+ \rho_{\min,t}^{-1}) \big(1+ \|\rho\|_{C^{\frac{2+\alpha}{4}}}^2 \big) \|\rho\|_{\dot H^s}^2. \end{align} The estimate of $I_3$ takes advantage of Lemma \ref{lem:uppbdd}: \begin{align}\label{I3-es} |I_3| \leq C' \|\rho(t)\|_{L^\infty} \|\rho\|_{\dot H^s}^2 \leq C' M_1 \|\rho\|_{\dot H^s}^2. \end{align} By using H\"older's inequality and the commutator estimate \eqref{eq:comm-es2}, we similarly get that \begin{align}\label{I4-I5-es} |I_4| + |I_5| & \leq \|G\|_{\dot H^{s-\frac{\alpha}{2}}} \|\rho\|_{\dot H^{s+\frac{\alpha}{2}}} \|\rho\|_{L^\infty} + C \|G\|_{\dot H^{s-\frac{\alpha}{2}}} \|\rho\|_{\dot H^s} \|\rho\|_{C^{\frac{2+\alpha}{4}}} \nonumber \\ & \leq \frac{\rho_{\min,t}}{8C'} \|\rho\|_{\dot H^{s+\frac{\alpha}{2}}}^2 + C (1+ \rho_{\min,t}^{-1}) \big(1+ \|\rho\|_{C^{\frac{2+\alpha}{4}}}^2 \big) \big( \|\rho\|_{\dot H^s}^2 + \|G\|_{\dot H^{s-\frac{\alpha}{2}}}^2\big) . \end{align} Next, the term $II$ can be estimated via integration by parts: \begin{equation*} |II| = \frac{1}{2} \left|\int_\T (\Lambda^s\rho)^2\cdot \partial_x u\,\dd x \right| \leq \frac{1}{2} \|\partial_x u\|_{L^\infty} \|\rho\|_{\dot H^s}^2.
\end{equation*} In view of estimates \eqref{Flinf-es}, \eqref{eq:uppbdd} and relation $\partial_x u= \LL \rho + G$, we see that \begin{equation}\label{G-Linf-es1} \|G(t)\|_{L^\infty} \leq \|F(t)\|_{L^\infty} \|\rho(t)\|_{L^\infty} \leq \|F_0\|_{L^\infty} M_1 \leq C , \end{equation} and \begin{equation}\label{u-Lip-es} \begin{split} \|\partial_x u(t)\|_{L^\infty} \leq \|\LL \rho(t)\|_{L^\infty} + \|G(t)\|_{L^\infty} \leq C (1 + \|\rho(t)\|_{B^\alpha_{\infty,1}}), \end{split} \end{equation} thus we also get \begin{equation}\label{II-es} |II| \leq C (1 + \|\rho(t)\|_{B^\alpha_{\infty,1}}) \|\rho\|_{\dot H^s}^2. \end{equation} Taking advantage of the commutator estimate \eqref{eq:comm-es} below, the term $III$ can be estimated as \begin{equation}\label{III-es1} |III| \leq \|\rho\|_{\dot H^s} \|[\Lambda^s\partial_x,u,\rho]\|_{L^2} \leq C \|\rho\|_{\dot H^s} \big(\|\partial_x u\|_{L^\infty} \|\rho\|_{\dot H^s} + \|\partial_x \rho\|_{L^\infty} \|u\|_{\dot H^s} \big). \end{equation} We need to bound the term $\|u\|_{\dot H^s(\T)}$: from \eqref{A-est2}, \begin{align}\label{uHs-es} \|u(t)\|_{\dot H^s(\T)} \leq \|\partial_x u(t)\|_{\dot H^{s-1}(\T)} & \leq C\big(\|\rho(t)\|_{\dot H^{s+\alpha-1}} + \|\rho(t)\|_{\dot H^{s-1}} + \|G(t)\|_{\dot H^{s-1}}\big) \nonumber\\ & \leq C \big(\|\rho(t)\|_{H^s} + \|\rho(t)\|_{\dot H^{s+ \frac{\alpha}{2}}} \big) + C \|G(t)\|_{H^{s-\frac{\alpha}{2}}}, \end{align} where $C>0$ depends on $\alpha,a_0,c_1,c_2$. 
Thus inserting estimates \eqref{u-Lip-es}, \eqref{uHs-es} into \eqref{III-es1} and using Young's inequality lead to \begin{align}\label{III-es} |III| & \leq C \|\partial_x u\|_{L^\infty} \|\rho\|_{H^s}^2 + C \|\rho\|_{\dot H^s} \|\partial_x \rho\|_{L^\infty}\Big( \|\rho\|_{H^s} + \|\rho\|_{\dot H^{s+ \frac{\alpha}{2}}} + \|G\|_{H^{s-\frac{\alpha}{2}}}\Big) \nonumber \\ & \leq \frac{\rho_{\min,t}}{8C'} \|\rho\|_{\dot H^{s+\frac{\alpha}{2}}}^2 + C (1+\rho_{\min,t}^{-1}) \big(1 + \|\partial_x \rho\|_{L^\infty}^2 + \|\rho\|_{B^\alpha_{\infty,1}} \big) \big( \|\rho\|_{H^s}^2 + \|G\|_{H^{s-\frac{\alpha}{2}}}^2\big). \end{align} By taking the scalar product of $\rho$-equation with $\rho$ itself, we infer that \begin{align}\label{rho-L2-es} \frac{1}{2}\frac{\dd}{\dd t}\|\rho(t)\|_{L^2}^2 = \int_\T \partial_x (u\,\rho)\cdot\rho \dd x = \frac{1}{2} \int_\T |\rho|^2\,\partial_x u\, \dd x \leq \frac{1}{2}\|\partial_x u\|_{L^\infty} \|\rho\|_{L^2}^2. \end{align} Since \begin{equation}\label{Lam-sig-rho-Linf} \|\rho\|_{C^\sigma} \leq C_0 \|\rho\|_{B^\sigma_{\infty,1}} \leq C_\sigma (\|\rho\|_{L^\infty} + \|\partial_x \rho\|_{L^\infty}) \leq C(1 + \|\partial_x \rho\|_{L^\infty}),\quad \forall \sigma\in (0,1), \end{equation} we gather the above estimates on $I$, $II$, $III$ and \eqref{rho-L2-es} to deduce that \begin{align}\label{rho-Hs-es} & \frac{1}{2}\frac{\dd}{\dd t}\|\rho(t)\|_{H^s}^2 + \frac{\rho_{\min,t}}{2C'} \|\rho\|_{\dot H^{s+\frac{\alpha}{2}}}^2 \nonumber \\ \leq & \, C (1+\rho_{\min,t}^{-1}) \big(1 + \|\partial_x \rho\|_{L^\infty}^2 + \|\rho\|_{B^\alpha_{\infty,1}} \big)\big( \|\rho\|_{H^s}^2 +\|G\|_{H^{s-\frac{\alpha}{2}}}^2\big), \end{align} with $C>0$ depending on $\alpha,a_0,c_1,c_2$ and initial data $(\rho_0,u_0)$. 
Now we consider the estimation of $G$, and from the equation of $G$ in system \eqref{EAS-ref}, we get \begin{align} & \frac{1}{2}\frac{\dd}{\dd t} \|G(t)\|_{\dot H^{s-\frac{\alpha}{2}}}^2 = - \int_\T (\Lambda^{s-\frac{\alpha}{2}} G) \cdot\big(\Lambda^{s-\frac{\alpha}{2}}\partial_x (u\,G)\big) \dd x \nonumber \\ &= - \int_\T (\Lambda^{s-\frac{\alpha}{2}} G)\cdot \big(u\, (\Lambda^{s-\frac{\alpha}{2}}\partial_x G)\big) \dd x - \int_\T (\Lambda^{s-\frac{\alpha}{2}}G)\cdot \big([\Lambda^{s-\frac{\alpha}{2}}\partial_x, u]G\big)\dd x := IV + V, \end{align} where in the second line we have used the notation $[L,f]g=L(f\,g)- f L(g)$ for some operator $L$. The term $IV$ can be treated as $II$ through integration by parts: \begin{equation} |IV| = \frac{1}{2}\left| \int_\T |\Lambda^{s-\frac{\alpha}{2}}G|^2 \cdot\partial_x u \dd x\right| \leq \frac{1}{2} \|\partial_x u\|_{L^\infty} \|G\|_{\dot H^{s-\frac{\alpha}{2}}}^2. \end{equation} By applying estimates \eqref{G-Linf-es1}, \eqref{u-Lip-es}, \eqref{u-L2-es} and commutator estimate \eqref{eq:comm-es3}, we deduce that \begin{align} |V| & \leq C \|G\|_{\dot H^{s-\frac{\alpha}{2}}} \big(\|\partial_x u\|_{L^\infty} \|G\|_{\dot H^{s-\frac{\alpha}{2}}} + \|G\|_{L^\infty} \|\partial_x u\|_{\dot H^{s-\frac{\alpha}{2}}} \big) \nonumber \\ & \leq C \|\partial_x u\|_{L^\infty} \|G\|_{H^{s-\frac{\alpha}{2}}}^2 + C \|G\|_{\dot H^{s-\frac{\alpha}{2}}} \|G\|_{L^\infty} \big(\|\rho\|_{\dot H^{s+\frac{\alpha}{2}}} + \|\rho\|_{\dot H^{s-\frac{\alpha}{2}}} + \|G\|_{\dot H^{s-\frac{\alpha}{2}}} \big) \nonumber \\ & \leq \frac{\rho_{\min,t}}{8C'} \|\rho\|_{\dot H^{s+\frac{\alpha}{2}}}^2 + C(1 + \rho_{\min,t}^{-1}) \big(1 + \|\rho\|_{B^\alpha_{\infty,1}}\big)\big( \|G\|_{H^{s-\frac{\alpha}{2}}}^2 + \|\rho\|_{H^s}^2\big). 
\end{align} Arguing in a similar way as for \eqref{rho-L2-es}, we also get \begin{equation}\label{G-L2-es} \frac{1}{2}\frac{\dd}{\dd t}\|G(t)\|_{L^2}^2 = -\int_\T \partial_x (u\,G)\cdot G \dd x = -\frac{1}{2} \int_\T |G|^2\,\partial_x u\, \dd x \leq \frac{1}{2}\|\partial_x u\|_{L^\infty} \|G\|_{L^2}^2. \end{equation} Then collecting \eqref{rho-Hs-es} and the above estimates on $G$ yields \begin{equation}\label{Yt-es} \begin{split} \frac{1}{2}\frac{\dd}{\dd t}Y(t) + \frac{\rho_{\min,t}}{4C'} \|\rho\|_{\dot H^{s+\frac{\alpha}{2}}}^2 \leq \, C (1+\rho_{\min,t}^{-1}) \big(1 + \|\partial_x \rho\|_{L^\infty}^2 + \|\rho\|_{B^\alpha_{\infty,1}} \big) Y(t), \end{split} \end{equation} where $Y(t)$ is defined in \eqref{Yt}. The Sobolev embedding $H^s(\T)\hookrightarrow B^1_{\infty,1}(\T)\hookrightarrow W^{1,\infty}(\T)$ for every $s>\frac{3}{2}$ as well as $H^s(\T)\hookrightarrow W^{2,\infty}(\T)$ for every $s>\frac{5}{2}$, and the following estimate \begin{equation}\label{Lam-alp-rho-Linf} \|\rho(t)\|_{B^\alpha_{\infty,1}} \leq C_\alpha (\|\rho(t)\|_{L^\infty} + \|\partial_x^2 \rho(t)\|_{L^\infty}) \leq C (1 + \|\partial_x^2 \rho(t)\|_{L^\infty}),\quad \forall \alpha\in (1,2), \end{equation} yield \begin{equation} \frac{\dd}{\dd t} Y(t) + \frac{\rho_{\min,t}}{2C'} \|\rho\|_{\dot H^{s+\frac{\alpha}{2}}}^2 \leq C (1+\rho_{\min,t}^{-1}) Y(t)^2, \end{equation} which implies that there exists a time $T_0>0$ depending only on $\alpha$, the coefficients in $\LL$, $\min \rho_0$ and $Y(0)=\|\rho_0\|_{H^s}^2 + \|G_0\|_{H^{s-\frac{\alpha}{2}}}^2$ such that $Y(t)$ is uniformly bounded on the time interval $[0,T_0]$.
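For instance, the choice of $T_0$ can be quantified explicitly. Since Lemma \ref{lem:lowbdd} yields $\rho_{\min,t}\geq M_0 e^{-c_3\bar{\rho}_0}$ for every $t\in [0,1]$, the above differential inequality reduces on $[0,1]$ to the Riccati-type inequality
\begin{equation*}
\frac{\dd}{\dd t} Y(t) \leq \widetilde{C}\, Y(t)^2,\quad \textrm{with}\quad \widetilde{C}:= C \big(1+ M_0^{-1} e^{c_3 \bar{\rho}_0}\big),
\end{equation*}
whose integration gives $Y(t)\leq \frac{Y(0)}{1- \widetilde{C}\, Y(0)\, t}$ as long as $\widetilde{C}\, Y(0)\, t<1$; hence, for example, the choice $T_0:= \min\big\{1, \frac{1}{2 \widetilde{C}\, Y(0)}\big\}$ guarantees $Y(t)\leq 2 Y(0)$ on $[0,T_0]$.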
By a standard process, one can show that \begin{equation} \rho\in C([0,T_0]; H^s(\T))\cap L^2([0,T_0]; \dot H^{s+\frac{\alpha}{2}}),\quad G\in C([0,T_0]; H^{s-\frac{\alpha}{2}}(\T)), \end{equation} and in combination with the following $L^2$-estimate of $u$ (from formulas \eqref{the-varph}, \eqref{u-exp} and estimates \eqref{I0t-bdd}, \eqref{psi-Linf-es}) \begin{align}\label{u-L2-es} \|u(t)\|_{L^2(\T)} & \leq \|\psi(t)\|_{L^2(\T)} + \|\LL \varphi(t)\|_{L^2(\T)} + |I_0(t)| \nonumber \\ & \leq C_0 \|\psi(t)\|_{L^\infty(\T)} + \|\LL\Lambda^{-2}\partial_x \theta(t)\|_{L^2(\T)} + C \nonumber\\ & \leq C + C \|\theta(t)\|_{H^{\max\{0, \alpha-1\}}(\T)} \leq C + C \|\rho(t)\|_{ H^{\max\{0, \alpha-1\}}(\T)}, \end{align} it also ensures $u\in C([0,T_0]; H^{s+1-\alpha}(\T))$ with \begin{equation} \begin{split} \|u\|_{L^\infty([0,T_0]; H^{s+1-\alpha})}^2 & \leq C_0 \|u\|_{L^\infty([0,T_0]; L^2)}^2 + C_0\|\partial_x u\|_{L^\infty([0,T_0]; \dot H^{s-\alpha})}^2 \\ & \leq C (1+ \|\rho\|_{L^\infty([0,T_0]; H^s)}^2 + \|G\|_{L^\infty([0,T_0]; H^{s-\alpha})}^2 ) <\infty. \end{split} \end{equation} Next we prove the blowup criteria \eqref{eq:bc}. Let $T^*>0$ be the maximal existence time of the smooth solution $(\rho,u)$ constructed above. The local well-posedness result first guarantees the natural blowup criterion: if $T^*<\infty$, then necessarily \begin{equation} \sup_{t\in [0,T^*[}\big(\|\rho(t)\|_{H^s(\T)} + \|G(t)\|_{H^{s-\frac{\alpha}{2}}(\T)}\big)=\infty .
\end{equation} On the other hand, taking advantage of Gr\"onwall's inequality, estimate \eqref{Yt-es} together with inequalities \eqref{Lam-sig-rho-Linf}, \eqref{Lam-alp-rho-Linf} implies that for every $0<T<T^*$, \begin{equation*} \sup_{t\in [0,T]}Y(t) \leq \begin{cases} Y(0) \exp\Big\{ C (1+\rho_{\min,T}^{-1}) \int_0^T\big(1 + \|\partial_x \rho\|_{L^\infty}^2 \big) \dd t \Big\}, &\;\; \textrm{for}\;\; \alpha \in (0,1),\\ Y(0) \exp\Big\{ C (1+\rho_{\min,T}^{-1}) \int_0^T\big(1 + \|\partial_x^2 \rho\|_{L^\infty}^2 \big) \dd t \Big\}, &\;\; \textrm{for}\;\; \alpha \in (1,2), \end{cases} \end{equation*} which ensures the blowup criteria \eqref{eq:bc} for the cases $\alpha\in (0,1)\cup (1,2)$. For the case $\alpha=1$, we use the Beale-Kato-Majda refinement: arguing as in \cite[Eq. (15)]{BKM84}, one can show that \begin{equation}\label{Lam1-rho-Linf-es} \begin{split} \| \rho(t)\|_{B^1_{\infty,1}} & \leq C_0 \|\rho(t)\|_{L^\infty} + C \|\partial_x \rho(t)\|_{L^\infty} \log(e + \|\rho(t)\|_{H^s}) + C \\ & \leq C + C \|\partial_x \rho(t)\|_{L^\infty} \log (e +\|\rho(t)\|_{H^s}^2 ) ; \end{split} \end{equation} so that inserting \eqref{Lam1-rho-Linf-es} into estimate \eqref{Yt-es} leads to \begin{equation} \frac{\dd}{\dd t}Y(t) \leq \, C (1+\rho_{\min,t}^{-1}) \big(1 + \|\partial_x \rho\|_{L^\infty}^2 \big) \log(e + Y(t)) Y(t), \end{equation} and also \begin{equation} \sup_{t\in [0,T]}Y(t) \leq (e +Y(0))^{\exp\left\{C (1+\rho_{\min,T}^{-1}) \int_0^T\big(1 + \|\partial_x \rho\|_{L^\infty}^2 \big)\dd t \right\}}, \end{equation} which proves the blowup criteria \eqref{eq:bc} in the case $\alpha=1$. \end{proof} \section{Global well-posedness}\label{sec:gwp} In this section, we show our main global well-posedness result, Theorem~\ref{thm:GR-EAS}.
According to the blowup criterion \eqref{eq:bc} in Theorem \ref{thm:lwp}, we intend to show the boundedness of $\|\partial_x\rho\|_{L^\infty(\T\times [0,T])}$ and $\|\partial_x^2 \rho\|_{L^\infty(\T\times [0,T])}$, for the cases $\alpha\in(0,1]$ and $\alpha\in(1,2)$ respectively, for any given finite time $T>0$. Let us fix a time $T$ for the rest of the section. To control $\partial_x\rho$ (and $\partial_x^2\rho$), we adopt the method of modulus of continuity, which originated in \cite{KNV,KNS} (see also \cite{Kis} for further development). The general strategy is to prove that the evolution of the considered equation preserves a stationary (or time-dependent) modulus of continuity, which furthermore implies the desired Lipschitz regularity. \subsection{The moduli of continuity} A function $\omega:(0,\infty) \rightarrow (0,\infty)$ is called a \emph{modulus of continuity} (abbr. MOC) if $\omega$ is continuous on $(0,\infty)$, nondecreasing, concave, and piecewise $C^2$ with one-sided derivatives defined at every point in $(0,\infty)$. We say a function $f$ obeys the modulus of continuity $\omega$ if \[|f(x)-f(y)| < \omega(|x-y|) \quad\forall~x\neq y\in \T.\] We start with the following function \begin{equation}\label{MOC0} \omega_1(\xi) = \begin{cases} \delta \left(\xi - \frac{1}{4}\xi^{1+\frac{\alpha}{2}}\right), & \quad \textrm{for }\;0<\xi \leq 1, \\ \frac{3\delta}{4} + \gamma \log\xi, & \quad \textrm{for }\; \xi>1, \end{cases} \end{equation} where $\delta$ and $\gamma$ are small parameters to be chosen later. It is easy to check that $\omega_1$ is a MOC. In particular, concavity can be guaranteed if we pick $\gamma<\frac{\delta}{2}$. In order to make sure that the initial data $\rho_0$ obeys a MOC $\omega$, we shall construct $\omega$ by the scaling $\omega(\xi)=\omega_1(\xi/\lambda)$, where $\lambda$ is a small parameter called the scaling factor. \begin{lemma}\label{lem:scaling} Let $\omega_1$ be defined in \eqref{MOC0} with $\delta$ and $\gamma$ given.
Then, for any function $f\in W^{1,\infty}(\R)$, there exists a small scaling factor $\lambda>0$ such that $f$ obeys the MOC $\omega(\xi)=\omega_1(\xi/\lambda)$. \end{lemma} \begin{proof} Owing to $|f(x)-f(y)|\leq 2\|f\|_{L^\infty}$ and $|f(x)-f(y)|\leq \|f'\|_{L^\infty} |x-y|$, it suffices to show that $\min\{2\|f\|_{L^\infty},\|f'\|_{L^\infty} |x-y|\}< \omega(|x-y|)$. Then, from the concavity property of $\omega$, and by denoting $a_1: = \frac{2\|f\|_{L^\infty}}{\|f'\|_{L^\infty}}$, it reduces to showing that \[ 2 \|f\|_{L^\infty} <\omega(a_1) =\omega_1(a_1/\lambda).\] Pick a small $\lambda<a_1$. We see that $\omega_1(a_1/\lambda)> \gamma \log( a_1/\lambda)$. Thus by further choosing $\lambda$ small enough, that is, \begin{equation}\label{dkg-cd0} \lambda\leq a_1 e^{-2\gamma^{-1}\|f\|_{L^\infty}} = \frac{2\|f\|_{L^\infty}}{\|f'\|_{L^\infty}} e^{-2\gamma^{-1}\|f\|_{L^\infty}}, \end{equation} we conclude that such an MOC $\omega(\xi)$ is obeyed by the function $f$. \end{proof} We summarize our choice of the MOC \begin{equation}\label{MOC1} \omega(\xi) = \begin{cases} \delta \lambda^{-1} \xi - \frac{1}{4}\delta \lambda^{-1-\frac{\alpha}{2}} \xi^{1+\frac{\alpha}{2}}, & \quad \textrm{for }\;0<\xi \leq \lambda, \\ \frac{3\delta}{4} + \gamma \log \frac{\xi}{\lambda}, & \quad \textrm{for }\; \xi>\lambda, \end{cases} \end{equation} with three small parameters $\delta, \gamma$ and $\lambda$ to be determined later. Both $\delta$ and $\gamma$ are independent of the scaling parameter $\lambda$. We would like to emphasize the behavior of $\omega$ near the origin: \begin{equation}\label{ome-cond} \omega(0+)=0,\quad \omega'(0+)= \delta \lambda^{-1}<\infty, \quad \omega''(0+)=-\infty.
\end{equation} Since $\|f'\|_{L^\infty}\leq\omega'(0+)$ for any $f$ that obeys $\omega$, \eqref{ome-cond} implies the Lipschitz continuity of $f$, with \[\|f'\|_{L^\infty}\leq\delta\lambda^{-1}<\infty.\] Moreover, the last part of \eqref{ome-cond} will be used in Lemmas~\ref{lem:bd-scena} and \ref{lem:bd-scena2}. \subsection{Uniform Lipschitz regularity of $\rho(t)$ on $[0,T]$}\label{subsec:rho-lip} It suffices to show that $\rho(t)$ obeys the MOC $\omega$ in \eqref{MOC1} for all $t\in[0,T]$, as \begin{equation}\label{eq:rho-lip} \sup_{t\in [0,T]}\|\partial_x\rho(\cdot,t)\|_{L^\infty(\T)} \leq \omega'(0+) = \delta \lambda^{-1}. \end{equation} From \eqref{dkg-cd0}, we can ensure that $\rho_0$ obeys $\omega$ by picking a sufficiently small $\lambda$, \begin{equation}\label{dkg-cd2} \lambda\leq\frac{2\|\rho_0\|_{L^\infty}}{\|\rho_0'\|_{L^\infty}}e^{-2\gamma^{-1}\|\rho_0\|_{L^\infty}}. \end{equation} We need to prove the preservation of the MOC $\omega$ in time. Let us argue by contradiction. Assume that $t_1\in (0,T]$ is the first time that the modulus of continuity $\omega(\xi)$ is violated by $\rho(x,t)$. We state the following lemma describing the only possible breakthrough scenario. The proof is identical to that of \cite{KNV}, provided that $\omega$ satisfies \eqref{ome-cond}. \begin{lemma}\label{lem:bd-scena} Assume that $\rho(x,t)$ is a smooth function on $\T\times [0,T]$ and $\rho_0(x)$ obeys the MOC $\omega(\xi)$ given by \eqref{MOC1}. Suppose that $t_1\in (0,T]$ is the first time that such an $\omega(\xi)$ is lost by $\rho$; then \begin{equation}\label{eq:scena0} |\rho(\tilde{x},t_1)-\rho(\tilde{y},t_1)|\leq \omega(|\tilde{x}-\tilde{y}|),\quad\forall~ \tilde{x},\tilde{y}\in \T, \end{equation} and there exist two distinct points $x\neq y\in \T$ satisfying \begin{equation}\label{eq:scena} \begin{split} \rho(x,t_1) - \rho(y,t_1) = \omega(\xi), \quad \textrm{with }\,\xi =|x-y|.
\end{split} \end{equation} \end{lemma} Moreover, since the range of $\rho$ lies in $[0,M_1]$ by Lemma~\ref{lem:uppbdd}, the equality \eqref{eq:scena} and the positivity of $\rho$ imply $\omega(\xi)\leq M_1$. Therefore, the breakthrough could only happen in the region \begin{equation}\label{xi-scope} 0<\xi\leq\Xi:=\omega^{-1}(M_1) = \lambda e^{\gamma^{-1}(M_1-\frac{3}{4}\delta)}. \end{equation} We can pick a small enough $\lambda$, \begin{equation}\label{del-cond1} \lambda \leq \frac{r_0}{4} e^{- M_1 \gamma^{-1}}, \end{equation} to guarantee that the breakthrough can only happen in the short range, with \[\Xi\leq \frac{r_0}{4},\] where $r_0$ is defined in \eqref{phi-s-assum1}. Next, we intend to prove that for the points $x\neq y\in\T$ satisfying equality \eqref{eq:scena} with $\xi=|x-y|$ in the range \eqref{xi-scope}, it holds \begin{equation}\label{eq:targ} \partial_t (\rho(x,t)-\rho(y,t))|_{t=t_1} <0. \end{equation} If so, the breakthrough scenario cannot happen, which yields a contradiction and in turn guarantees the preservation of the MOC. For simplicity, we drop the $t_1$-dependence in the sequel. To verify \eqref{eq:targ}, we use the equation of $\rho$ given by \eqref{EA-rho} and get \begin{align}\label{rho-moc-decom} \partial_t\rho(x)- \partial_t\rho(y) & = -\partial_x(u\, \rho)(x) + \partial_x(u\, \rho)(y) \nonumber \\ & = -\big( u\, \partial_x\rho (x) - u\,\partial_x\rho(y)\big) - \big(\rho(x) -\rho(y)\big)\partial_x u(x) - \rho(y)\big(\partial_x u(x) -\partial_x u(y)\big) \nonumber \\ & =:\, \mathrm{I} + \mathrm{II} + \mathrm{III}. \end{align} We first consider $\mathrm{III}$, since it is the term containing the negative contribution which is crucial in achieving \eqref{eq:targ}.
From the relation $\partial_x u = \LL \rho + G = \LL \rho + F \rho$ (recalling that $F=\frac{G}{\rho}$ satisfies equation \eqref{Feq}), we further get \begin{equation}\label{IIIdecom} \begin{split} \mathrm{III} &=\, - \rho(y) \big(\LL \rho(x) -\LL \rho(y)\big) - \rho(y) \big(\rho(x) -\rho(y) \big) F(x) - \rho(y)^2 \big(F(x) -F(y)\big) \\ & =:\,\mathrm{III}_1 + \mathrm{III}_2 + \mathrm{III}_3 . \end{split} \end{equation} In order to estimate $\mathrm{III}_1$, we state the following lemma, and postpone its proof to Section~\ref{sec:MOCes}. \begin{lemma}\label{lem:MOCes1} Assume $\rho$ obeys the MOC $\omega(\xi)$, and $x, y$ satisfy the breakthrough scenario described in Lemma~\ref{lem:bd-scena}. Define $D (x,y):= \LL \rho (x) - \LL \rho (y)$. Then $D(x,y)$ can be estimated as \begin{equation}\label{Dest0} D(x,y) \geq D_1(x,y) - 2c_2 \omega(\xi), \end{equation} where \begin{equation}\label{Dexp} D_1(x,y) := \, \mathrm{p.v.}\int_{|z|\leq a_0} \phi(z)\,\left( \omega(\xi)- \rho(x+z)+ \rho (y+ z)\right)\dd z, \end{equation} and it satisfies that for any $\xi=|x-y|\in (0,\frac{a_0}{2}]$ (with $a_0>0$ the constant appearing in (A1)), \begin{equation}\label{Dest} \begin{split} D_1 (x,y)\geq & \, \frac{1}{c_1}\int_{0}^{\frac{\xi}{2}} \frac{2\omega (\xi)-\omega(\xi+2\eta)-\omega (\xi-2\eta)}{\eta^{1+\alpha}}\mathrm{d} \eta \\ & +\frac{1}{c_1}\int_{\frac{\xi}{2}}^{a_0} \frac{2\omega (\xi)-\omega (2\eta+\xi)+\omega (2\eta-\xi) }{\eta^{1+\alpha}}\mathrm{d}\eta . \end{split} \end{equation} Moreover, if we use the MOC defined in \eqref{MOC1}, we have that for any $\xi=|x-y|\in (0,\frac{a_0}{2}]$, \begin{equation}\label{Dest2} D_1(x,y) \geq \begin{cases} \frac{\alpha}{32 c_1}\delta \lambda^{-1-\frac{\alpha}{2}} \xi^{1- \frac{\alpha}{2}}, &\quad \mathrm{for}\;\; 0<\xi \leq \lambda, \\ \frac{2^\alpha-1}{2\alpha c_1} \omega(\xi) \xi^{-\alpha}, &\quad \mathrm{for}\;\; \lambda< \xi \leq \frac{a_0}{2}.
\end{cases} \end{equation} \end{lemma} \begin{remark} The $D_1$ term represents the dissipation phenomenon due to strong alignment in the short range. The extra term appearing on the right-hand side of \eqref{Dest0} takes care of the misalignment effect. We will verify that it can be controlled by the dissipation. \end{remark} Denote by $\rho_{\min,T}$ the minimum of $\rho(x,t)$ on the domain $\T\times[0,T]$. Owing to Lemma \ref{lem:lowbdd}, we have \begin{equation}\label{eq:rho-min} \min_{(x,t)\in\T\times [0,T]} \rho(x,t) = \rho_{\min,T} \geq M_0e^{-c_3\bar{\rho}_0 T}. \end{equation} Then, \begin{equation}\label{III1es} \mathrm{III}_1 \leq -\rho_{\min,T} D_1(x,y) + 2c_2 M_1 \omega(\xi) , \end{equation} with $D_1(x,y)$ satisfying the estimate \eqref{Dest2} and $M_1$ the upper bound of $\|\rho(t_1)\|_{L^\infty}$ appearing in Lemma \ref{lem:uppbdd}. For $\mathrm{III}_2$, recalling that $F=\frac{G}{\rho}$ has the $L^\infty$-estimate \eqref{Flinf-es}, we immediately get \begin{equation}\label{III3es} \mathrm{III}_2 \leq M_1 \|F_0\|_{L^\infty} \omega(\xi). \end{equation} Also, $\partial_x F$ and $H :=\frac{\partial_x F}{\rho}$ satisfy the following equations \begin{equation}\label{eq:H} \partial_t (\partial_x F) + \partial_x ( u\, (\partial_x F))=0, \quad\textrm{and}\quad \partial_t H + u\,\partial_x H=0. \end{equation} We directly deduce that \begin{equation}\label{Hlinf-es} \sup_{t\in [0,T]}\|H(t)\|_{L^\infty(\T)} \leq \|H_0\|_{L^\infty(\T)} = \Big\|\frac{\partial_x F_0}{\rho_0}\Big\|_{L^\infty(\T)}. \end{equation} Thus by virtue of \eqref{eq:uppbdd} and \eqref{Hlinf-es}, we have \begin{equation}\label{f-diff} |F(x)-F(y)|\leq\|\partial_xF\|_{L^\infty}\xi\leq\|H\|_{L^\infty}\|\rho\|_{L^\infty}\xi\leq \|H_0\|_{L^\infty}M_1\xi. \end{equation} Hence, the term $\mathrm{III}_3$ can be estimated as \begin{equation}\label{III4es} \mathrm{III}_3 \leq \|H_0\|_{L^\infty} M_1^3 \xi.
\end{equation} Gathering estimates \eqref{III1es}, \eqref{III3es} and \eqref{III4es} leads to \begin{equation}\label{rIIIes} \mathrm{III} \leq -\rho_{\min,T} D_1(x,y) + M_1\big(2 c_2 + \|F_0\|_{L^\infty} \big) \omega(\xi) + \|H_0\|_{L^\infty} M_1^3 \xi. \end{equation} Now, we turn to the estimate on $\mathrm{II}$. We state the following lemma on the one-sided bounds of $\LL\rho(x)$ and $\LL\rho(y)$. The idea follows from \cite[Section~4.2.2]{DKRT}, with an additional treatment on the misalignment. The proof is placed in Section \ref{sec:MOCes}. We only need to use the lower bound on $\LL\rho(x)$ here, but will use both bounds later. \begin{lemma}\label{lem:MOCes2} Assume $\rho$ obeys the MOC $\omega(\xi)$, and $x,y$ satisfy the breakthrough scenario described in Lemma~\ref{lem:bd-scena}. Then, we have the following one-sided bounds for every $\xi\in(0,\frac{r_0}{2}]$ \begin{equation}\label{MOCes-LL} -\LL\rho(x),\,\, \LL\rho(y)\leq 4c_1\int_\xi^{r_0}\frac{\omega(\xi+\eta)-\omega(\xi)}{\eta^{1+\alpha}} \dd \eta + 2c_3M_1. \end{equation} Moreover, if we use the MOC defined in \eqref{MOC1}, we have that \begin{equation}\label{MOCes-Lam-alp} -\LL\rho(x),\,\, \LL\rho(y)\leq\,2c_3M_1+ \begin{cases} 4 c_1\overline{C}_\alpha \delta \lambda^{-\frac\alpha2}\xi^{-\frac\alpha2}, & \quad \textrm{for }\;0<\xi\leq \lambda, \\ \frac{12c_1}{\alpha^2} \gamma\xi^{-\alpha}, & \quad \textrm{for }\;\lambda <\xi \leq \frac{r_0}{2}, \end{cases} \end{equation} where $\overline{C}_\alpha$ is a positive constant that only depends on $\alpha$ (see \eqref{barC-alp} for the explicit expression). 
\end{lemma} Thus by virtue of the relation $\partial_x u= \LL \rho + F \rho$, scenario \eqref{eq:scena}, and using Lemma \ref{lem:MOCes2} and estimates \eqref{Flinf-es}, \eqref{eq:uppbdd}, we obtain \begin{equation}\label{rII-es} \begin{split} \mathrm{II} = &\, \omega(\xi) \big(- \LL \rho(x) - F(x) \rho(x)\big) \\ \leq &\, \omega(\xi) M_1 (2c_3+\|F_0\|_{L^\infty}) + \begin{cases} 4 c_1 \overline{C}_\alpha \delta^2 \lambda^{-1-\frac{\alpha}{2}} \xi^{1-\frac{\alpha}{2}} , & \; \textrm{for }\;0<\xi\leq \lambda, \\ \frac{12c_1}{\alpha^2} \gamma \omega(\xi) \xi^{-\alpha}, & \; \textrm{for }\;\lambda<\xi\leq \frac{r_0}{2}. \end{cases} \end{split} \end{equation} Next, we consider the contribution from the drift term $\mathrm{I}$. The following lemma shows an estimate on the MOC of the velocity $u$. The proof is postponed to Section \ref{sec:MOCes}. \begin{lemma}\label{lem:MOCes-u} Assume $\rho$ obeys the MOC $\omega(\xi)$. Then, $u$ obeys the following MOC \begin{equation}\label{MOCu} \Omega(\xi)=\frac{52c_1}{\alpha}\int_0^\xi\frac{\omega(\eta)}{\eta^\alpha}\dd\eta + 8c_1\xi\int_\xi^{r_0+\xi}\frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta + M_1(8c_3+\|F_0\|_{L^\infty})\xi, \end{equation} for any $\xi\in(0,\frac{r_0}{4}]$. Namely, for any $\tilde{x}, \tilde{y}\in\T$, with $\xi=|\tilde{x}-\tilde{y}|\leq\frac{r_0}{4}$, \begin{equation}\label{u-MOC-es} |u(\tilde{x})-u(\tilde{y})| \leq \Omega(\xi). \end{equation} Moreover, let $x, y$ satisfy the breakthrough scenario described in Lemma~\ref{lem:bd-scena}.
If we use the MOC defined in \eqref{MOC1}, with \begin{equation}\label{gamma-cd} \gamma < \frac{3\alpha}{4}\delta, \end{equation} then we have an enhanced estimate \begin{equation}\label{u-enhanced} |u(x)-u(y)|\leq 4c_1^2D_1(x,y)\xi +M_1(8c_3+\|F_0\|_{L^\infty})\xi + \begin{cases} 18 c_1\overline{C}_\alpha\delta\lambda^{-\frac\alpha2}\xi^{1-\frac\alpha2}, & \;\textrm{for }\; 0<\xi \leq \lambda, \\ \frac{26 c_1}{\alpha} \omega(\xi) \xi^{1-\alpha}, & \;\textrm{for }\; \lambda <\xi \leq \frac{r_0}{4}. \end{cases} \end{equation} \end{lemma} \begin{remark}\label{rmk:Omega} Estimate \eqref{MOCu} was first introduced in \cite[Lemma]{KNV} on the critical quasi-geostrophic equation. It was extended to the Euler-alignment system with $\alpha\in(0,1)$ in \cite{DKRT}. Here, we further generalize the estimate to $\alpha\in(0,2)$, and consider misalignment as well. The misalignment effect contributes to the last term in \eqref{MOCu}. When $\alpha\geq1$, the first term in \eqref{MOCu} cannot be controlled by the dissipation. A modified MOC was introduced in \cite{KNV} for the case $\alpha=1$. Here, we propose an enhanced estimate \eqref{u-enhanced} on $|u(x)-u(y)|$, using $D_1(x,y)$ to replace the problematic first term in \eqref{MOCu}. This novel idea allows us to extend the result to the full range of $\alpha\in(0,2)$, without changing the MOC $\omega(\xi)$. \end{remark} By virtue of the relation \[u(x)\partial_x \rho(x)= \lim_{h\rightarrow 0+} \frac{\rho(x + h u(x)) -\rho(x)}{h}\] and using scenario \eqref{eq:scena}, we can obtain (see e.g.
\cite{KNV}) \begin{equation}\label{rI-es0} |\mathrm{I}| = |u(x)\partial_x\rho(x)- u(y)\partial_x\rho(y)| \leq |u(x)-u(y)|\omega'(\xi), \end{equation} which combined with Lemma \ref{lem:MOCes-u} and formula \eqref{MOC1} yields \begin{align}\label{rI-es} |\mathrm{I}| \leq &\, 4c_1^2D_1(x,y)\omega'(\xi)\xi +M_1(8c_3+\|F_0\|_{L^\infty})\omega'(\xi)\xi \nonumber \\ & + \begin{cases} 18c_1\overline{C}_\alpha\delta^2\lambda^{-1-\frac\alpha2}\xi^{1-\frac\alpha2}, & \;\textrm{for }\; 0<\xi \leq \lambda, \\ \frac{26 c_1}{\alpha} \gamma\omega(\xi) \xi^{-\alpha}, & \;\textrm{for }\; \lambda <\xi \leq \frac{r_0}{4}. \end{cases} \end{align} Hence, gathering \eqref{rho-moc-decom} and estimates \eqref{rIIIes}, \eqref{rII-es}, \eqref{rI-es}, and in light of \eqref{xi-scope}, we find that for every $0<\xi \leq \Xi$, \begin{align} \partial_t\rho(x)- \partial_t\rho(y) \leq & -\Big(\rho_{\min,T} - 4c_1^2\omega'(\xi)\xi\Big) D_1(x,y) \label{Targ-es1}\\ &+ \begin{cases} 22c_1\overline{C}_\alpha\delta^2\lambda^{-1-\frac\alpha2}\xi^{1-\frac\alpha2}, & \;\textrm{for }\; 0<\xi \leq \lambda \\ \frac{64 c_1}{\alpha^2} \gamma\omega(\xi) \xi^{-\alpha}, & \;\textrm{for }\; \lambda <\xi \leq \frac{r_0}{4} \end{cases}\nonumber\\ & + 2 M_1 \Big(c_2 + c_3 + \|F_0\|_{L^\infty} \Big) \omega(\xi) +M_1\Big( 8c_3+\|F_0\|_{L^\infty}\Big) \omega'(\xi) \xi + \|H_0\|_{L^\infty} M_1^3 \xi. \nonumber \end{align} Our goal now is to show the right hand side of estimate \eqref{Targ-es1} is negative, by appropriately choosing the constants $\delta,\gamma$ and $\lambda$ in MOC $\omega(\xi)$ defined by \eqref{MOC1}. We divide the proof into two cases. Case 1: $0<\xi \leq \lambda$. In this case $\omega(\xi)\leq \delta \lambda^{-1}\xi $, and $\omega'(\xi)\xi \leq \delta\lambda^{-1}\xi$ as well. We first set $4c_1^2\omega'(\xi)\xi\leq 4c_1^2\delta \leq \frac{1}{4} \rho_{\min,T}$, that is, \begin{equation}\label{dkg-cd3} \delta \leq \frac{1}{16c_1^2}\rho_{\min,T}. 
\end{equation} So the first term in \eqref{Targ-es1} is bounded by \[-\frac{3\rho_{\min,T}}{4}D_1(x,y),\quad\text{where}\,\,D_1(x,y)\geq\frac{\alpha}{32 c_1}\delta \lambda^{-1-\frac{\alpha}{2}} \xi^{1- \frac{\alpha}{2}}.\] The second term in \eqref{Targ-es1} has the same scaling as $D_1$. It can be made smaller than $\frac{1}{4}\rho_{\min,T}D_1$ by choosing $\delta$ small: \begin{equation}\label{dkg-cd5} 22 c_1\overline{C}_\alpha\delta \leq \frac{ \alpha}{128c_1}\rho_{\min,T}. \end{equation} The third term is subcritical in scaling, and hence can be controlled by $\frac{1}{4}\rho_{\min,T}D_1$ by choosing the scaling factor $\lambda$ small. Indeed, \[\delta\lambda^{-1}\xi\leq\delta\lambda^{-1+\frac\alpha2}\xi^{1-\frac\alpha2} =\lambda^\alpha\left(\delta\lambda^{-1-\frac\alpha2}\xi^{1-\frac\alpha2}\right).\] Therefore, we choose $\lambda$ as follows \begin{equation}\label{dkg-cd4} \lambda \leq \delta \quad \textrm{and}\quad M_1\Big( 2 c_2 + 10 c_3+3\|F_0\|_{L^\infty}+M_1^2\|H_0\|_{L^\infty}\Big) \lambda^\alpha < \frac{ \alpha}{128c_1}\rho_{\min,T}. \end{equation} With the choices of $\delta$ and $\lambda$, we conclude \begin{equation}\label{nega} \partial_t\rho(x)- \partial_t\rho(y)\leq\Big(-\frac{3\rho_{\min,T}}{4} +\frac{\rho_{\min,T}}{4}+\frac{\rho_{\min,T}}{4}\Big)D_1(x,y) =-\frac{\rho_{\min,T}}{4}D_1(x,y)<0. \end{equation} Case 2: $\lambda< \xi \leq \Xi$. In this case $\omega'(\xi)\xi =\gamma$. We bound the first term in \eqref{Targ-es1} with \[-\frac{3\rho_{\min,T}}{4}D_1(x,y),\quad\text{where}\,\,D_1(x,y)\geq \frac{2^\alpha-1}{2\alpha c_1} \omega(\xi) \xi^{-\alpha},\] by simply setting $\gamma$ small enough so that $4c_1^2\omega'(\xi)\xi = 4c_1^2\gamma \leq\frac14\rho_{\min,T}$. Note that we have already assumed $\gamma<\frac\delta2$. So, the inequality follows from assumption \eqref{dkg-cd3}.
The second term in \eqref{Targ-es1} is scaling critical, and can be easily made smaller than $\frac14\rho_{\min,T}D_1$ by choosing $\gamma$ small: \begin{equation}\label{dkg-cd8} \frac{64c_1}{\alpha^2}\gamma < \frac{2^\alpha-1}{8\alpha c_1}\rho_{\min,T}. \end{equation} The third term in \eqref{Targ-es1} is subcritical in scaling, and can be controlled by choosing the scaling factor $\lambda$ small. To see this, observe $\omega'(\xi)\xi=\gamma<\frac34\delta=\omega(\lambda)\leq\omega(\xi)$, and $\xi^{-\alpha}\geq\Xi^{-\alpha} \geq \lambda^{-\alpha} e^{-\alpha \gamma^{-1} M_1}$ (from \eqref{xi-scope}). Hence, we only need \begin{equation}\label{dkg-cd7} \lambda \leq \gamma e^{-\gamma^{-1} M_1},\quad \textrm{and}\quad M_1\Big(2 c_2 + 10c_3 + 3\|F_0\|_{L^\infty}+M_1^2\|H_0\|_{L^\infty}\Big) e^{\alpha \gamma^{-1}M_1} \lambda^\alpha \leq \frac{2^\alpha-1}{8\alpha c_1}\rho_{\min,T}. \end{equation} We end up with \eqref{nega} as well, finishing the whole proof. We summarize our choice of the stationary MOC $\omega(\xi)$. Define $\omega$ by \eqref{MOC1}. Pick the parameters in the following order: (i) $\delta\in(0,1)$ satisfying \eqref{dkg-cd3} and \eqref{dkg-cd5}; (ii) $\gamma\in(0,\frac{\delta}{2})$ satisfying \eqref{gamma-cd} and \eqref{dkg-cd8}; (iii) $\lambda$ satisfying \eqref{dkg-cd2}, \eqref{del-cond1}, \eqref{dkg-cd4} and \eqref{dkg-cd7}. \begin{remark}\label{rmk:growth} Observe from \eqref{eq:rho-min} that $\rho_{\min, T}$ can decay exponentially in $T$. Then, our choices of parameters $\delta$ and $\gamma$ also decay exponentially in $T$. Consequently, from \eqref{del-cond1} and \eqref{dkg-cd7}, the bound on $\lambda$ decays double exponentially in $T$. Thus, in view of \eqref{eq:rho-lip}, $\|\partial_x\rho(\cdot,T)\|_{L^\infty}$ can grow double exponentially in $T$. Note that without the misalignment effect, it is known that $\|\partial_x\rho(\cdot,T)\|_{L^\infty}$ is bounded uniformly in time.
Our result indicates that the misalignment could destabilize the solution as time becomes large. \end{remark} \subsection{Uniform Lipschitz regularity of $\partial_x \rho(t)$ on $[0,T]$}\label{subsec:der-rho-lip} When $1<\alpha<2$, the boundedness of $\|\partial_x^2\rho\|_{L^\infty(\T\times[0,T])}$ is required to ensure global regularity. It suffices to show $\partial_x\rho(t)$ obeys the MOC $\omega$ in \eqref{MOC1} for all $t\in[0,T]$. Note that the parameters used in the MOC for $\partial_x\rho(t)$ can be different from the MOC for $\rho(t)$. For instance, to ensure that $\rho_0'$ obeys $\omega$, we need to pick $\lambda$ such that \begin{equation}\label{dkg-asum1} \lambda \leq \frac{2\|\rho_0'\|_{L^\infty}}{\|\rho_0''\|_{L^\infty}} e^{-2\gamma^{-1}\|\rho_0'\|_{L^\infty}}. \end{equation} We shall continue to use the notation $\omega(\xi)$ to denote the MOC. But in this part, $\omega(\xi)$ is obeyed by $\partial_x\rho(t)$ rather than $\rho(t)$. Let us denote $\rho'(x,t)=\partial_x\rho(x,t)$. The construction of the MOC for $\rho'(t)$ is partly similar to the argument for $\rho(t)$, with additional subtleties that need to be taken care of. The proof of the preservation of the MOC in time will directly imply the desired bound on $\partial_x^2\rho$ \begin{equation}\label{eq:rho'-lip} \sup_{t\in[0,T]}\|\partial_x^2\rho(\cdot,t)\|_{L^\infty(\T)}\leq\omega'(0+)=\delta\lambda^{-1}. \end{equation} First, we state the only possible breakthrough scenario for the MOC on $\rho'(t)$. The statement is similar to Lemma~\ref{lem:bd-scena}. \begin{lemma}\label{lem:bd-scena2} Assume that $\rho(x,t)$ is a smooth function on $\T\times [0,T]$ and $\rho_0'(x)$ obeys the MOC $\omega(\xi)$ given by \eqref{MOC1}.
Suppose that $t_1\in (0,T]$ is the first time that such an $\omega(\xi)$ is lost by $\rho'$, then we have \begin{equation}\label{eq:scena2} |\rho'(\tilde{x},t_1)-\rho'(\tilde{y},t_1)|\leq \omega(|\tilde{x}-\tilde{y}|),\quad\forall~ \tilde{x},\tilde{y}\in \T, \end{equation} and there exist two points $x\neq y\in \T$ satisfying \begin{equation}\label{eq:scena3} \begin{split} \rho'(x,t_1) - \rho'(y,t_1) = \omega(\xi), \quad \textrm{with }\,\xi =|x-y|. \end{split} \end{equation} \end{lemma} Denote by $M_{2,T}$ the bound of $\rho'(t)$ on $[0,T]$ appearing in estimate \eqref{eq:rho-lip}, so that we write it as \begin{equation}\label{eq:rho-lip2} \sup_{t\in [0,T]} \|\rho'(t)\|_{L^\infty(\T)} \leq M_{2,T}. \end{equation} Since $\rho'$ lies in $[-M_{2,T},M_{2,T}]$, the equality \eqref{eq:scena3} implies $\omega(\xi)\leq 2M_{2,T}$. Therefore, breakthrough could only happen in the region \begin{equation}\label{eq:Xi1} 0 < \xi \leq \Xi_1: = \omega^{-1}(2M_{2,T}) =\lambda e^{\gamma^{-1}(2M_{2,T}-\frac34\delta)}. \end{equation} We can pick a small enough $\lambda$ \begin{equation}\label{del-cond2} \lambda \leq \frac{r_0}{4} e^{- 2 \gamma^{-1} M_{2,T}}, \end{equation} to guarantee that the breakthrough only happens in the short range, with $\Xi_1\leq\frac{r_0}{4}$. Next, we intend to prove that for the points $x\neq y\in\T$ satisfying \eqref{eq:scena3} with $\xi=|x-y|$ in the range \eqref{eq:Xi1}, it holds \begin{equation}\label{eq:targ2} \partial_t (\rho'(x,t)-\rho'(y,t))|_{t=t_1} <0. \end{equation} From the system \eqref{EAS-ref}, we get the dynamics of $\rho'(x,t)$ as \begin{equation}\label{eq:rho'} \partial_t \rho' + u\, \partial_x \rho' + 2\rho'\, \partial_x u +\rho\, \partial_x^2 u = 0, \end{equation} with \begin{equation}\label{eq:u-rho-rela} \partial_x u= \LL \rho + G,\quad\textrm{and}\quad \partial_x^2 u= \LL \rho' + \partial_x G. 
\end{equation} Then, we have \begin{align}\label{rho'-moc-dec} \partial_t\rho'(x)- \partial_t\rho'(y) = & -\big( u(x)\, \partial_x\rho' (x) - u(y)\,\partial_x\rho'(y)\big) - 2\big(\rho'(x) -\rho'(y)\big)\partial_x u(x) \nonumber \\ & - 2 \rho'(y)\big(\partial_x u(x) -\partial_x u(y)\big) - \big(\rho(x)\, \partial_x^2 u(x) -\rho(y)\, \partial_x^2 u(y)\big) \nonumber \\ =: &\, \mathcal{I} + \mathcal{II} + \mathcal{III} +\mathcal{IV} . \end{align} Again, we suppress the $t_1$-dependence from now on for simplicity. We start with the estimation of the term $\mathcal{IV}$, through a treatment similar to the one for the terms $\mathrm{II}+\mathrm{III}$ in the MOC estimates for $\rho(t)$. A main difference is that $\rho(x)-\rho(y)$ does not necessarily have a sign, in contrast to the case of the MOC of $\rho(t)$, where the quantity is positive due to \eqref{eq:scena}. Instead, we will perform different decompositions depending on the sign of $\rho(x)-\rho(y)$ as follows. \begin{align*} \mathcal{IV} =& \begin{cases}- \big(\rho(x) -\rho(y)\big)\, \partial_x^2 u(x) -\rho(y)\,\big(\partial_x^2 u(x)- \partial_x^2 u(y)\big)&\text{if}~\rho(x)-\rho(y)\geq0\\ - \big(\rho(x) -\rho(y)\big)\, \partial_x^2 u(y) -\rho(x)\,\big(\partial_x^2 u(x)- \partial_x^2 u(y)\big)&\text{if}~\rho(x)-\rho(y)<0 \end{cases}\\ =:&\,\mathcal{IV}_1+\mathcal{IV}_2. \end{align*} The term $\mathcal{IV}_1$ can be estimated similarly to $\mathrm{II}$. We have \[\mathcal{IV}_1\leq |\rho(x)-\rho(y)|\cdot\big(\max\{-\LL\rho'(x),\LL\rho'(y)\}+ \|\partial_xG(\cdot)\|_{L^\infty}\big).\] In particular, using \eqref{eq:rho-lip2}, $|\rho(x)-\rho(y)|$ can be estimated by \begin{equation}\label{rhoLip} |\rho(x)-\rho(y)|\leq M_{2,T}\xi.
\end{equation} Applying Lemma~\ref{lem:MOCes2} to $\rho'$ (instead of $\rho$), we get \begin{equation}\label{MOCes-Lam-alp2} \max\{-\LL\rho'(x), \LL\rho'(y)\}\leq~4c_3M_{2,T}+ \begin{cases} \frac{12c_1}{\alpha-1} \delta \lambda^{-\frac\alpha2} \xi^{-\frac\alpha2}, & \quad \textrm{for }\;0<\xi\leq \lambda, \\ 12c_1 \gamma \xi^{-\alpha}, & \quad \textrm{for }\;\lambda <\xi \leq \frac{r_0}{2}. \end{cases} \end{equation} Here, we make use of the estimate $\omega(\xi) \leq 2 M_{2,T}$ (from \eqref{eq:Xi1}). To estimate $\|\partial_xG(\cdot,t)\|_{L^\infty}$, we use the relation \begin{equation}\label{par-G-eq} \partial_xG=\partial_x(\rho F)=\rho'F+\rho^2 H. \end{equation} Applying \eqref{Flinf-es}, \eqref{eq:uppbdd}, \eqref{Hlinf-es} and \eqref{eq:rho-lip2}, we get \begin{equation}\label{par-G-es} \|\partial_x G\|_{L^\infty(\T\times [0,T])} \leq M_{2,T} \|F_0\|_{L^\infty} + M_1^2\|H_0\|_{L^\infty} . \end{equation} Putting together \eqref{rhoLip}, \eqref{MOCes-Lam-alp2} and \eqref{par-G-es}, we end up with an estimate on $\mathcal{IV}_1$: \begin{equation}\label{cIV-2} \mathcal{IV}_1\leq M_{2,T}^2(4c_3 + \|F_0\|_{L^\infty}) \xi + M_{2,T} M_1^2\|H_0\|_{L^\infty}\xi+ \begin{cases} \frac{12 c_1}{\alpha-1} M_{2,T} \,\delta\lambda^{-\frac{\alpha}{2}} \xi^{1-\frac{\alpha}{2}}, & \; \textrm{for }\;0<\xi\leq \lambda, \\ 12c_1 M_{2,T}\, \omega(\xi) \xi^{1-\alpha}, & \; \textrm{for }\;\lambda<\xi\leq \frac{r_0}{2}, \end{cases} \end{equation} which has a structure similar to the estimate on $\mathrm{II}$ in \eqref{rII-es}. Note that in the last part, we use the fact $\gamma\leq\frac{\delta}{2}\leq\omega(\lambda)\leq\omega(\xi)$ for every $\xi> \lambda$. Next, we estimate the term $\mathcal{IV}_2$, similarly to $\mathrm{III}$.
In particular, \[\partial_x^2u(x)-\partial_x^2u(y)=\left(\LL\rho'(x)-\LL\rho'(y)\right) +\left(\partial_xG(x)-\partial_xG(y)\right).\] For the first term (corresponding to $\mathrm{III}_1$), applying Lemma~\ref{lem:MOCes1} to $\rho'$, we obtain \[\LL \rho' (x) - \LL \rho' (y)\geq D_1'(x,y)-2c_2\omega(\xi),\] where $D_1'(x,y)$ is defined in \eqref{Dexp} with $\rho$ replaced by $\rho'$, satisfying \begin{equation}\label{D'est2} D'_1(x,y) \geq \begin{cases} \frac{1}{32 c_1}\delta \lambda^{-1-\frac{\alpha}{2}} \xi^{1- \frac{\alpha}{2}}, &\quad \mathrm{for}\;\; 0<\xi \leq \lambda, \\ \frac{1}{4c_1} \omega(\xi) \xi^{-\alpha}, &\quad \mathrm{for}\;\; \lambda< \xi \leq \frac{a_0}{2}. \end{cases} \end{equation} For the second term, we use the relation \eqref{par-G-eq} and get \begin{equation}\label{GLipest} \partial_xG(x)-\partial_xG(y)=\left(\rho'(x)F(x)-\rho'(y)F(y)\right) +\left(\rho^2(x)H(x)-\rho^2(y)H(y)\right). \end{equation} The two parts can be estimated similarly to the terms $\mathrm{III}_2$ and $\mathrm{III}_3$ as follows. For the first part, we apply \eqref{Flinf-es}, \eqref{f-diff} and \eqref{eq:rho-lip2}: \begin{align*} |\rho'(x)F(x)-\rho'(y)F(y)|=&\,|(\rho'(x)-\rho'(y))F(y)+\rho'(x)(F(x)-F(y))|\\ \leq&\, \|F_0\|_{L^\infty}\omega(\xi)+M_{2,T}M_1\|H_0\|_{L^\infty}\xi. \end{align*} For the second part, observe that $\partial_x H$ and $\frac{\partial_x H}{\rho}$ satisfy \begin{equation}\label{eq:par-H} \partial_t (\partial_x H) + \partial_x ( u\, (\partial_x H))=0,\quad\textrm{and}\quad \partial_t\Big( \frac{\partial_x H}{\rho}\Big) + u\,\partial_x \Big( \frac{\partial_x H}{\rho} \Big)=0, \end{equation} which directly implies that \begin{equation}\label{eq:par-H-es} \|\partial_x H\|_{L^\infty(\T\times [0,T])} \leq \|\rho\|_{L^\infty(\T\times [0,T])} \Big\|\frac{\partial_x H}{\rho}\Big\|_{L^\infty(\T\times [0,T])} \leq M_1 \Big\| \frac{\partial_x H_0}{\rho_0}\Big\|_{L^\infty}.
\end{equation} Therefore, \begin{align*} |\rho^2(x)H(x)-\rho^2(y)H(y)|=&\, |(\rho^2(x)-\rho^2(y))H(y)+\rho^2(x)(H(x)-H(y))|\\ \leq&\,2M_1M_{2,T}\|H_0\|_{L^\infty}\xi+M_1^3 \Big\| \frac{\partial_x H_0}{\rho_0}\Big\|_{L^\infty}\xi. \end{align*} We summarize the estimate on $\mathcal{IV}_2$ as \begin{equation}\label{cIV-3} \begin{split} \mathcal{IV}_2 \leq -\rho_{\min,T} D'_1(x,y) + M_1(2c_2+\|F_0\|_{L^\infty})\omega(\xi) + M_1^2\Big(3 \|H_0\|_{L^\infty} M_{2,T} + M_1^2 \Big\|\frac{\partial_x H_0}{\rho_0} \Big\|_{L^\infty} \Big) \xi . \end{split} \end{equation} Now, we consider the contribution from terms $\mathcal{II}$ and $\mathcal{III}$ given by \eqref{rho'-moc-dec}. These two terms do not appear in the estimates on the MOC of $\rho(t)$. Yet, they play a crucial role in the estimate on the MOC of $\rho'(t)$. The following key lemma describes the bounds on $\partial_xu(x)$ and $\partial_xu(x)-\partial_xu(y)$, which can be used to estimate $\mathcal{II}$ and $\mathcal{III}$ respectively. The proof is placed in Section \ref{sec:MOCes}. \begin{lemma}\label{lem:MOCes5} Let $\alpha\in(1,2)$. Assume $\rho'$ obeys the MOC defined in \eqref{MOC1}. Then, for any $\tilde{x}\in\T$, we have \begin{equation}\label{par-u-Linf-es} |\partial_x u(\tilde{x})| \leq \frac{4c_1}{(\alpha-1)^2(2-\alpha)}\delta\lambda^{-(\alpha-1)}+\frac{c_3}{2}M_{2,T}+\|F_0\|_{L^\infty}M_1.
\end{equation} Moreover, if $x, y$ satisfy the breakthrough scenario described in Lemma~\ref{lem:bd-scena2}, with $\xi=|x-y|\in(0,\frac{r_0}{4}]$, \begin{align} |\partial_x u(x) - \partial_x u(y)|\leq&\, 4c_1^2D_1'(x,y)\xi\,+\,\begin{cases} \frac{54c_1}{\alpha-1} \delta\lambda^{-\frac\alpha2}\xi^{1-\frac\alpha2}, & \;\textrm{for }\; 0<\xi \leq \lambda \\ 26 c_1 \omega(\xi) \xi^{1-\alpha}, & \;\textrm{for }\; \lambda <\xi \leq \frac{r_0}{4} \end{cases}\nonumber\\ & + \Big( (16c_3+\|F_0\|_{L^\infty})M_{2,T}+M_1^2\|H_0\|_{L^\infty} \Big) \xi.\label{par-u-diff-es} \end{align} \end{lemma} Applying scenario \eqref{eq:scena3} and estimate \eqref{eq:rho-lip2} to Lemma~\ref{lem:MOCes5}, we get \begin{equation}\label{cII-es} |\mathcal{II}| \leq \Big( \frac{8c_1}{(\alpha-1)^2(2-\alpha)}\delta\lambda^{-(\alpha-1)}+ c_3 M_{2,T} + 2\|F_0\|_{L^\infty}M_1\Big) \omega(\xi), \end{equation} and \begin{align} |\mathcal{III}|\leq&\, 8c_1^2M_{2,T}D_1'(x,y)\xi\,+\,\begin{cases} \frac{108c_1}{\alpha-1} M_{2,T}\,\delta\lambda^{-\frac\alpha2}\xi^{1-\frac\alpha2}, & \;\textrm{for }\; 0<\xi \leq \lambda \\ 52 c_1 M_{2,T} \omega(\xi) \xi^{1-\alpha}, & \;\textrm{for }\; \lambda <\xi \leq \frac{r_0}{4} \end{cases}\nonumber\\ &+2M_{2,T}\Big( (16c_3+\|F_0\|_{L^\infty})M_{2,T}+M_1^2\|H_0\|_{L^\infty} \Big) \xi.\label{cIII-es} \end{align} Finally, for the drift term $\mathcal{I}$, thanks to the estimate \eqref{par-u-Linf-es}, we argue similarly to \eqref{rI-es0} and directly calculate \begin{equation}\label{cI-es} |\mathcal{I}| \leq \|\partial_x u\|_{L^\infty} \xi \omega'(\xi) \leq \left( \frac{4c_1}{(\alpha-1)^2(2-\alpha)}\delta\lambda^{-(\alpha-1)}+\frac{c_3}{2}M_{2,T}+\|F_0\|_{L^\infty}M_1\right) \omega'(\xi)\xi.
\end{equation} Hence, gathering the splitting \eqref{rho'-moc-dec} and estimates \eqref{cIV-2}, \eqref{cIV-3}, \eqref{cII-es}, \eqref{cIII-es}, \eqref{cI-es}, we find that for every $0< \xi \leq \Xi_1$, \begin{align}\label{Targ-es4} & \partial_t \rho'(x) -\partial_t \rho'(y)\, \leq \, -\Big( \rho_{\min,T} - 8c_1^2 M_{2,T} \xi \Big) D'_1(x,y) \\ & \,\,\,+ \left(\frac{4c_1}{(\alpha-1)^2(2-\alpha)}\delta\lambda^{-(\alpha-1)}+\frac{c_3}{2}M_{2,T}+\|F_0\|_{L^\infty}M_1\right) \big( \omega(\xi) + \omega'(\xi)\xi\big)\nonumber\\ &\,\,\, + M_1(2c_2+\|F_0\|_{L^\infty})\omega(\xi)+\tilde{C}_0\xi\,+ \,\begin{cases} \frac{120c_1}{\alpha-1} M_{2,T}\delta\lambda^{-\frac\alpha2}\xi^{1-\frac\alpha2}, & \;\textrm{for }\; 0<\xi \leq \lambda, \\ 64 c_1 M_{2,T} \omega(\xi) \xi^{1-\alpha}, & \;\textrm{for }\; \lambda <\xi \leq \frac{r_0}{4}, \end{cases}\nonumber \end{align} where $\tilde{C}_0=\tilde{C}_0(\rho_0,u_0,T)$ is given by \[\tilde{C}_0=M_{2,T}\Big(36 c_3 M_{2,T} + 3\|F_0\|_{L^\infty} M_{2,T} + 6M_1^2\|H_0\|_{L^\infty}\Big)+ M_1^4\Big\|\frac{\partial_x H_0}{\rho_0}\Big\|_{L^\infty}. \] In order to show the right hand side of \eqref{Targ-es4} is negative, we first set $8c_1^2 M_{2,T} \xi\leq\frac14\rho_{\min,T}$. Since $\xi\leq\Xi_1 =\lambda e^{\gamma^{-1}(2M_{2,T}-\frac34\delta)}$ (see \eqref{eq:Xi1}), the bound can be guaranteed by choosing $\lambda$ sufficiently small \begin{equation}\label{dkg'-cd3} \lambda\leq \frac{\rho_{\min,T}}{32c_1^2M_{2,T}}e^{-2\gamma^{-1}M_{2,T}}. \end{equation} It remains to show that the rest of the terms in the second and third lines of \eqref{Targ-es4} are bounded by $\frac{1}{2}\rho_{\min,T}D_1'(x,y)$, or sufficiently, from \eqref{D'est2}, bounded by \begin{equation}\label{D1'bd} \begin{cases} \frac{\rho_{\min,T}}{64 c_1}\delta \lambda^{-1-\frac{\alpha}{2}} \xi^{1- \frac{\alpha}{2}}, &\quad \mathrm{for}\;\; 0<\xi \leq \lambda, \\ \frac{\rho_{\min,T}}{8c_1} \omega(\xi) \xi^{-\alpha}, &\quad \mathrm{for}\;\; \lambda< \xi \leq \Xi_1. 
\end{cases} \end{equation} Then, we conclude that $\partial_t\rho'(x)-\partial_t\rho'(y)<0$ as in \eqref{nega}, which finishes the proof. The bounds can be achieved by choosing $\lambda$ sufficiently small, given $\delta$ and $\gamma$. To see this, we consider two cases. Case 1: $0<\xi\leq \lambda$. In this case $\omega(\xi) \leq \delta \lambda^{-1}\xi$, and $\omega'(\xi)\xi \leq \delta\lambda^{-1}\xi$ as well. Comparing the parameters in \eqref{Targ-es4} and \eqref{D1'bd}: \begin{align*} \delta\lambda^{-(\alpha-1)}\big(\omega(\xi)+\omega'(\xi)\xi\big) \leq&\, 2\delta^2\lambda^{-\alpha}\xi\leq 2\delta^2\lambda^{-\frac\alpha2}\xi^{1-\frac\alpha2} \leq (2\delta\lambda) \cdot\delta \lambda^{-1-\frac{\alpha}{2}}\xi^{1- \frac{\alpha}{2}},\\ \big(\omega(\xi)+\omega'(\xi)\xi\big) \leq&\, 2\delta\lambda^{-1}\xi\leq 2\delta\lambda^{-1+\frac\alpha2}\xi^{1-\frac\alpha2} \leq (2\lambda^\alpha) \cdot\delta \lambda^{-1-\frac{\alpha}{2}}\xi^{1- \frac{\alpha}{2}},\\ \xi\leq&\, \lambda^{\frac\alpha2}\xi^{1-\frac\alpha2}\leq \lambda^\alpha \cdot\delta \lambda^{-1-\frac{\alpha}{2}}\xi^{1-\frac{\alpha}{2}},\quad \text{for}\,\lambda<\delta,\\ \delta\lambda^{-\frac\alpha2}\xi^{1-\frac\alpha2}\leq&\, \lambda\cdot\delta\lambda^{-1-\frac\alpha2}\xi^{1-\frac\alpha2}, \end{align*} we have $\max\{2\delta\lambda, 2\lambda^\alpha, \lambda^\alpha,\lambda\}\leq2\lambda$. Therefore, setting $\lambda$ small enough will indeed bring the terms under control. Case 2: $\lambda< \xi \leq \Xi_1$. In this case we have $\omega'(\xi)\xi =\gamma < \frac{3\delta}{4} = \omega(\lambda) \leq \omega(\xi)$. Also, we recall $\xi\leq\Xi_1\leq C_\gamma \lambda$ with the constant $C_\gamma=e^{2\gamma^{-1}M_{2,T}}$ (from \eqref{eq:Xi1}).
Comparing the parameters in \eqref{Targ-es4} and \eqref{D1'bd}: \begin{align*} \delta\lambda^{-(\alpha-1)}\big(\omega(\xi)+\omega'(\xi)\xi\big) \leq&\, 2\delta^2\lambda^{-(\alpha-1)}\omega(\xi)\leq 2\delta^2\lambda^{-(\alpha-1)}\big(C_\gamma\lambda\big)^{\alpha}\omega(\xi)\xi^{-\alpha} \leq (2\delta^2C_\gamma^\alpha\lambda) \cdot\omega(\xi)\xi^{-\alpha},\\ \big(\omega(\xi)+\omega'(\xi)\xi\big) \leq&\, 2\omega(\xi)\leq 2\big(C_\gamma\lambda\big)^{\alpha}\omega(\xi)\xi^{-\alpha} \leq (2C_\gamma^\alpha\lambda^\alpha) \cdot \omega(\xi)\xi^{-\alpha},\\ \xi\leq&\, \big(C_\gamma\lambda\big)^{1+\alpha}\frac{4}{3\delta}\omega(\xi)\xi^{-\alpha}\leq \big(2C_\gamma^{1+\alpha}\lambda^\alpha\big) \cdot \omega(\xi)\xi^{-\alpha},\quad \text{for}\,\lambda<\delta,\\ \omega(\xi)\xi^{1-\alpha}\leq&\, \big(C_\gamma\lambda\big)\cdot\omega(\xi)\xi^{-\alpha}, \end{align*} we have $\max\{2\delta^2C_\gamma^\alpha\lambda, 2C_\gamma^\alpha\lambda^\alpha, 2C_\gamma^{1+\alpha}\lambda^\alpha, C_\gamma\lambda\}\leq 2C_\gamma^{1+\alpha}\lambda$. Therefore, setting $\lambda$ small enough will bring the terms under the desired control. \begin{remark} As $T$ becomes large, the parameter $\lambda$ could shrink very fast. Indeed, from Remark~\ref{rmk:growth}, we know $M_{2,T}$ can grow double exponentially in $T$. With the smallness assumptions (e.g. \eqref{del-cond2} and \eqref{dkg'-cd3}) on $\lambda$, we see $\lambda^{-1}$ could grow triple exponentially in time. Thus, the bound on $\|\partial_x^2\rho(\cdot,t)\|_{L^\infty}$ in \eqref{eq:rho'-lip} is also triple exponential in time. Such possible fast growth does not happen without the presence of the misalignment. \end{remark} \section{Estimates concerning the modulus of continuity}\label{sec:MOCes} In this section, we give the detailed proofs of Lemmas \ref{lem:MOCes1}, \ref{lem:MOCes2}, \ref{lem:MOCes-u}, \ref{lem:MOCes5}, in that order. All estimates are scaling critical. The idea of the proofs follows from \cite{DKRT}.
The main contribution is the inclusion of the misalignment, and the generalization of the influence function $\phi$. \begin{proof}[Proof of Lemma \ref{lem:MOCes1}] First, we decompose $D(x,y)$ into two parts \begin{align*} D(x,y) & = \mathrm{p.v.}\, \int_{\R} \phi(z) \big( \omega(\xi) - \rho(x+z)+ \rho (y+ z)\big) \dd z \\ & = D_1(x,y) + \int_{|z|\geq a_0} \phi(z)\,\big( \omega(\xi) - \rho(x+z)+ \rho (y+ z) \big) \dd z \\ & \geq D_1(x,y) - 2\omega(\xi) \int_{|z|\geq a_0} |\phi(z)| \dd z \geq D_1(x,y) - 2 c_2 \omega(\xi) . \end{align*} Here, $D_1$ is defined in \eqref{Dexp}, which characterizes the dissipation phenomenon in the short range. The second term represents the long range misalignment, and can be bounded by condition \eqref{phi-assum2}. This yields the estimate \eqref{Dest0}. \def\xih{{\frac\xi2}} The dissipation $D_1(x,y)$ has a lower bound similar to the one in \cite[Lemma~4.5]{DKRT}, where $\phi(r)=r^{-(1+\alpha)}$. To work with general influence functions, we adapt the argument in \cite[Lemma~2.3]{DKSV}, with a small variation to treat influence functions that are compactly supported. Due to translation invariance and symmetry, we can let $x=\xih$ and $y=-\xih$ without loss of generality. In the following calculation, the integrals are understood in the sense of principal values, and the second equality below follows from changes of variables, using the evenness of $\phi$. \begin{align*} D_1 = &\, \mathrm{p.v.}\int_{-a_0}^{a_0}\phi(z)\, \big( \omega(\xi) - \rho(x+z)+ \rho (y+ z)\big) \dd z\\ = &\, \int_0^{a_0-\xih} \phi\Big(\eta+\xih\Big) \big(\omega(\xi)+\rho(\eta)-\rho(-\eta)\big)\dd \eta +\int_0^{a_0+\xih} \phi\Big(\eta-\xih\Big) \big(\omega(\xi)-\rho(\eta)+\rho(-\eta)\big)\dd \eta\\ = &\,\int_0^{a_0-\xih} \left(\left[\phi\Big(\eta-\xih\Big)+ \phi\Big(\eta+\xih\Big)\right]\omega(\xi)+ \left[\phi\Big(\eta-\xih\Big)-\phi\Big(\eta+\xih\Big)\right] \big(-\rho(\eta)+\rho(-\eta)\big)\right)\,\dd\eta\\ &+\int_{a_0-\xih}^{a_0+\xih}\phi\Big(\eta-\xih\Big) \big(\omega(\xi)-\rho(\eta)+\rho(-\eta)\big)\dd \eta.
\end{align*} Due to the monotonicity assumption \eqref{phi-assum1.3} on $\phi$, it is easy to check \[\phi\Big(\eta-\xih\Big)-\phi\Big(\eta+\xih\Big)\geq0,\quad\forall~\eta\in\Big[0,a_0-\xih\Big].\] Moreover, the breakthrough scenario \eqref{eq:scena0} implies $|\rho(\eta)-\rho(-\eta)|\leq\omega(2\eta)$. We can obtain a lower bound on $D_1$: \begin{align*} D_1\geq& \,\int_0^{a_0-\xih} \left(\left[\phi\Big(\eta-\xih\Big)+ \phi\Big(\eta+\xih\Big)\right]\omega(\xi)- \left[\phi\Big(\eta-\xih\Big)-\phi\Big(\eta+\xih\Big)\right] \omega(2\eta)\right)\,\dd\eta\\ &+\int_{a_0-\xih}^{a_0+\xih}\phi\Big(\eta-\xih\Big) \big(\omega(\xi)-\omega(2\eta)\big)\dd \eta\\ = & \,\int_{-\xih}^{a_0-\xi}\phi(\eta)\big(\omega(\xi)-\omega(2\eta+\xi)\big)\dd\eta \,+\int_{\xih}^{a_0}\phi(\eta)\big(\omega(\xi)+\omega(2\eta-\xi)\big)\dd\eta\\ &\,+\int_{a_0-\xi}^{a_0}\phi(\eta)\big(\omega(\xi)-\omega(2\eta+\xi)\big)\dd\eta\\ = &\, \int_0^{\xih}\phi(\eta)\big(2\omega(\xi)-\omega(2\eta+\xi)-\omega(\xi-2\eta)\big)\dd\eta \,+\int_{\xih}^{a_0}\phi(\eta)\big(2\omega(\xi)+\omega(2\eta-\xi)-\omega(2\eta+\xi)\big)\dd\eta. \end{align*} Due to the concavity of $\omega(\xi)$, both terms $2\omega(\xi)-\omega(2\eta+\xi)-\omega(\xi-2\eta)$ and $2\omega(\xi)+\omega(2\eta-\xi)-\omega(2\eta+\xi)$ are nonnegative. Thus assumption \eqref{phi-assum1} implies the desired inequality \eqref{Dest}. Next, we prove estimate \eqref{Dest2}, which follows from a direct calculation. Case 1: $0<\xi\leq \lambda$. We only keep the first term. By concavity of $\omega(\xi)$ (together with the fact that $\omega''$ is nondecreasing on $(0,\lambda)$ for the MOC \eqref{MOC1}), \begin{equation}\label{omega-fact} \begin{split} \omega(\xi+2\eta)+\omega(\xi-2\eta)-2\omega(\xi) & = 4\eta^2 \int_0^1 \int_{-1}^1 s \omega''(\xi + 2s \tau\,\eta )\,\dd \tau \dd s \\ & \leq 4\eta^2 \int_0^1 \int_{-1}^0 s \omega''(\xi)\,\dd\tau \dd s \leq 2\omega''(\xi) \eta^2.
\end{split} \end{equation}
Then, we have
\begin{align}\label{D1est1} D_1(x,y) & \geq \frac{1}{c_1} \int_0^{\frac{\xi}{2}} \frac{-2\omega''(\xi) \eta^2}{\eta^{1+\alpha}} \dd \eta \geq \frac{\alpha(2+\alpha)}{8 c_1} \delta \lambda^{-1-\frac{\alpha}{2}} \xi^{\frac{\alpha}{2}-1} \int_0^{\frac{\xi}{2}} \eta^{1-\alpha}\dd \eta \nonumber \\ & \geq \frac{ \alpha(2+\alpha) }{ 2^{2-\alpha}(2-\alpha)8c_1} \delta \lambda^{-1-\frac{\alpha}{2}} \xi^{1-\frac{\alpha}{2}} \geq \frac{ \alpha}{16(2-\alpha) c_1}\delta \lambda^{-1-\frac{\alpha}{2}} \xi^{1- \frac{\alpha}{2} } . \end{align}

Case 2: $\lambda\leq \xi\leq \frac{a_0}{2}$. We only keep the second term. Due to the concavity of $\omega$, we have for every $\eta\geq\frac\xi2$,
\[\omega(2\eta+\xi)-\omega(2\eta-\xi)\leq \omega(2\xi)=\omega(\xi)+\gamma\log2 \leq\frac{3}{2}\omega(\xi),\]
where the last inequality holds since $\gamma<\frac\delta2$, and so
\[\gamma\log2<\frac38\delta=\frac12\omega(\lambda)\leq\frac12\omega(\xi).\]
Thus, we find
\begin{equation}\label{D1est2} \begin{split} D_1(x,y) \geq \frac{1}{2 c_1}\omega(\xi) \int_{\frac{\xi}{2}}^{a_0} \frac{1}{\eta^{1+\alpha}} \dd \eta \geq \frac{1}{2 c_1\alpha} \omega(\xi) \left[\Big(\frac\xi2\Big)^{-\alpha} - (2\xi)^{-\alpha}\right] \geq \frac{2^\alpha -1}{2 c_1 \alpha} \frac{\omega(\xi)}{\xi^\alpha} . \end{split} \end{equation}
Combining \eqref{D1est1} with \eqref{D1est2} leads to \eqref{Dest2}, as desired.
\end{proof}

\begin{proof}[Proof of Lemma \ref{lem:MOCes2}]
The proof is similar to \cite[Lemma 4.5]{DKRT}, with suitable modifications that address the misalignment effect. We will only prove the lower bound on $\LL\rho(x)$; the upper bound on $\LL\rho(y)$ can be obtained by the same argument. Without loss of generality, we assume that $\xi = x-y >0$.
By using the periodicity property of $\rho$ and the scenario \eqref{eq:scena}, we see that \begin{align}\label{lam-alp-esp2} \LL \rho(x) &= \,\mathrm{p.v.} \int_\R \phi(x-z)\big(\rho(x)-\rho(z) \big) \dd z =\, \mathrm{p.v.} \int_\T \phi^S(x-z)\, \big( \rho(x) - \rho(z)\big) \dd z \nonumber \\ & = \,\mathrm{p.v.}\int_\T \phi^S(\eta)\,\big(\rho(x)-\rho(y) + \rho(y)- \rho(x-\eta) \big) \dd \eta \nonumber \\ & = \, \mathrm{p.v.}\int_\T \phi^S(\eta)\,\big(\omega(\xi) + \rho(y)- \rho(y+\xi-\eta)\big) \dd \eta. \end{align} We have the following decomposition \begin{align}\label{lam-alp-dec} \LL\rho(x) & = \left(\int_{-\frac{1}{2}}^{-\xi} + \mathrm{p.v.}\int_{-\xi}^\xi + \int_\xi^{2\xi} + \int_{2\xi}^{\frac{1}{2}}\right) \Big(\phi^S(\eta)\big(\omega(\xi) + \rho(y)- \rho(y+\xi-\eta)\big) \dd \eta\Big) \nonumber \\ & = A_{1,\phi} + A_{2,\phi} + A_{3,\phi} + A_{4,\phi} . \end{align} The terms $A_{2,\phi}$ and $A_{3,\phi}$ are nonnegative, which can be seen from scenario \eqref{eq:scena}, estimate \eqref{phi-s-assum1} (with $2\xi \leq r_0$) and properties of $\omega$ (concavity and monotonicity): \begin{equation}\label{A2alp-es} \begin{split} A_{2,\phi} & = \,\mathrm{p.v.}\int_0^\xi \phi^S(\eta) \,\big(2\omega(\xi) + 2\rho(y) - \rho(y+\xi-\eta) - \rho(y+\xi +\eta)\big) \dd \eta \\ & \geq \,\mathrm{p.v.}\int_0^\xi \phi^S(\eta)\,\big(2\omega(\xi) -\omega(\xi-\eta) - \omega(\xi +\eta) \big) \dd \eta \geq 0, \end{split} \end{equation} and \begin{equation}\label{A3alp-es} A_{3,\phi} = \int_\xi^{2\xi}\phi^S(\eta)\,\big(\omega(\xi) + \rho(y)- \rho(y+\xi-\eta) \big) \dd \eta \geq \int_\xi^{2\xi} \phi^S(\eta)\,\big(\omega(\xi) - \omega(\eta-\xi)\big) \dd \eta\geq 0. \end{equation} Next, we obtain the upper bounds of $-A_{1,\phi}$ and $-A_{4,\phi}$. 
\begin{align*} - A_{1,\phi} = &~\int_{-\frac{1}{2}}^{-\xi} \phi^S(\eta) \big( \rho(y+\xi-\eta) - \rho(y) -\omega(\xi) \big) \dd \eta = \int_\xi^{\frac{1}{2}} \phi^S(\eta) \big(\rho(y+\xi +\eta) -\rho(y) -\omega(\xi) \big) \dd \eta \\ \leq &~ \int_\xi^{r_0} |\phi^S(\eta)|\, \big(\omega(\xi + \eta) -\omega(\xi) \big) \dd \eta +\int_{r_0}^{\frac12} |\phi^S(\eta)| (2M_1)\dd\eta\\ \leq&~ 2c_1\int_\xi^{r_0}\frac{\omega(\xi+\eta)-\omega(\xi)}{\eta^{1+\alpha}} \dd \eta +c_3M_1, \end{align*}
where we make use of scenario \eqref{eq:scena0}, together with $\omega(\xi)\leq M_1$ due to \eqref{xi-scope}. The term $-A_{4,\phi}$ can be estimated in the same way, with the same upper bound as $-A_{1,\phi}$:
\[ - A_{4,\phi} = \int_{2\xi}^{\frac12} \phi^S(\eta) \big( \rho(y+\xi-\eta) - \rho(y) -\omega(\xi) \big) \dd \eta \leq~ 2c_1\int_{2\xi}^{r_0}\frac{\omega(\xi+\eta)-\omega(\xi)}{\eta^{1+\alpha}} \dd \eta +c_3M_1. \]
Therefore, we conclude with \eqref{MOCes-LL}:
\[-\LL\rho(x)\leq 4c_1\int_\xi^{r_0}\frac{\omega(\xi+\eta)-\omega(\xi)}{\eta^{1+\alpha}} \dd \eta + 2c_3M_1.\]

Next, we prove the estimate \eqref{MOCes-Lam-alp}.

Case 1: $0<\xi \leq \lambda$.
The concavity of $\omega$ indicates $\omega(\xi+\eta)-\omega(\xi)\leq\omega(\eta)$, and so \begin{align} \int_\xi^{r_0}&\frac{\omega(\xi+\eta)-\omega(\xi)}{\eta^{1+\alpha}} \dd \eta \leq \int_\xi^{r_0}\frac{\omega(\eta)}{\eta^{1+\alpha}} \dd \eta \leq \delta\lambda^{-1}\int_\xi^\lambda\frac{1}{\eta^{\alpha}} \dd \eta +\int_\lambda^{r_0}\frac{\frac{3}{4}\delta+\gamma\log\frac{\eta}{\lambda}}{\eta^{1+\alpha}} \dd \eta\nonumber\\ &\leq\,\left(\frac{3}{4\alpha}+\frac{1}{2\alpha^2}\right)\delta\lambda^{-\alpha}+\begin{cases} \frac{1}{1-\alpha} \delta\lambda^{-\alpha},\quad &\textrm{for }\; 0<\alpha <1, \\ \delta \lambda^{-1} \log\frac{\lambda}{\xi},\quad &\textrm{for }\; \alpha=1, \\ \frac{1}{\alpha-1} \delta\lambda^{-1}\xi^{-(\alpha-1)},\quad & \textrm{for }\; 1<\alpha<2, \end{cases}\nonumber\\ &\leq\,\delta \overline{M}_\alpha(\xi,\lambda),\quad\text{with } \overline{M}_\alpha(\xi,\lambda):=\begin{cases} \frac{1}{\alpha^2(1-\alpha)} \lambda^{-\alpha},\quad &\textrm{for }\; 0<\alpha <1, \\ \lambda^{-1} \left(\log\frac{\lambda}{\xi}+\frac54\right),\quad &\textrm{for }\; \alpha=1, \\ \left(\frac{1}{\alpha-1}+\frac54\right)\lambda^{-1}\xi^{-(\alpha-1)},\quad & \textrm{for }\; 1<\alpha<2, \end{cases}\label{omegaovereta} \end{align} where in the third inequality, we have used $\gamma<\frac\delta2$ and then \[\int_\lambda^{r_0}\frac{\gamma\log\frac\eta\lambda}{\eta^{1+\alpha}}d\eta=\gamma\lambda^{-\alpha}\int_1^{r_0/\lambda}\frac{\log\zeta}{\zeta^{1+\alpha}}\dd\zeta\leq\frac{\gamma}{\alpha^2}\lambda^{-\alpha}\leq\frac{\delta}{2\alpha^2}\lambda^{-\alpha}.\] The term $\overline{M}_\alpha(\xi,\lambda)$ is scaling critical. 
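As a side computation (not affecting the argument), the logarithmic integral used in the last bound can be evaluated exactly by integration by parts, which shows that the constant $\frac{1}{\alpha^2}$ is sharp as $r_0/\lambda\to\infty$:

```latex
\int_1^{\infty}\frac{\log\zeta}{\zeta^{1+\alpha}}\,\dd\zeta
= \Big[-\frac{\zeta^{-\alpha}}{\alpha}\,\log\zeta\Big]_1^{\infty}
  + \frac{1}{\alpha}\int_1^{\infty}\zeta^{-1-\alpha}\,\dd\zeta
= 0 + \frac{1}{\alpha^{2}},
```

so in particular $\int_1^{r_0/\lambda}\frac{\log\zeta}{\zeta^{1+\alpha}}\,\dd\zeta\leq\frac{1}{\alpha^2}$, which is the bound used above.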
In order to compare $\overline{M}_\alpha(\xi,\lambda)$ with the dissipation, we record the following inequality, which only uses the fact that $\frac\xi\lambda\in(0,1]$:
\begin{equation}\label{barC-alp} \overline{M}_\alpha(\xi,\lambda)\leq \overline{C}_\alpha \lambda^{-\frac{\alpha}{2}} \xi^{-\frac{\alpha}{2}},\quad \textrm{with}\quad \overline{C}_\alpha = \begin{cases} \frac{1}{\alpha^2(1-\alpha)} ,\quad &\textrm{for }\; 0<\alpha <1, \\ 2,\quad &\textrm{for }\; \alpha=1, \\ \frac{1}{\alpha-1}+\frac54,\quad & \textrm{for }\; 1<\alpha<2. \end{cases} \end{equation}

Case 2: $\lambda<\xi\leq\frac{r_0}{2}$. We use the explicit formula for $\omega$ and get
\[ \int_\xi^{r_0}\frac{\omega(\xi+\eta)-\omega(\xi)}{\eta^{1+\alpha}}\dd\eta= \gamma\int_\xi^{r_0}\frac{\log(\xi+\eta)-\log(\xi)}{\eta^{1+\alpha}}\dd\eta \leq \gamma\xi^{-\alpha}\int_1^\infty\frac{\log(1+\zeta)}{\zeta^{1+\alpha}}\dd\zeta \leq \frac{\gamma (1+\alpha)}{\alpha^2}\xi^{-\alpha}. \]
Collecting the above estimates yields the desired estimate \eqref{MOCes-Lam-alp}.
\end{proof}

\def\phiSt{\widetilde{\phi}^S}
\begin{proof}[Proof of Lemma \ref{lem:MOCes-u}]
Let $\tilde{x}, \tilde{y}\in\T$ be arbitrary points with distance $\xi=\tilde{x}-\tilde{y}\in(0,\frac{r_0}{4}]$. Recalling that $u$ has the representation formula \eqref{u-exp} and that $I_0(t)$ is uniformly bounded (see estimate \eqref{I0t-bdd}), we have
\begin{align}\label{u-es-decom} |u(\tilde{x})-u(\tilde{y})| \leq |\psi(\tilde{x})-\psi(\tilde{y})| + |\LL \varphi(\tilde{x}) -\LL \varphi(\tilde{y})| := U_1 + U_2 , \end{align}
where $\psi$ and $\varphi$ are mean-free periodic functions satisfying $G=\partial_x\psi$ and $\theta=\rho-\bar{\rho}_0=\partial_x \varphi$. By virtue of the mean value theorem and estimates \eqref{Flinf-es}, \eqref{eq:uppbdd}, it is easy to see that
\begin{equation}\label{U1-es} U_1 \leq \|G(t_1)\|_{L^\infty} \xi \leq \|F(t_1)\|_{L^\infty} \|\rho(t_1)\|_{L^\infty} \xi \leq M_1 \|F_0\|_{L^\infty} \xi .
\end{equation}
Before estimating $U_2$, we first establish the following representation formula for $\LL \varphi$ (see \cite[Eq. (4.47)]{DKRT} for the case $\LL =\Lambda^\alpha$ with $\alpha\in (0,1)$; the formula holds for the whole range $\alpha\in (0,2)$):
\begin{align}\label{Lam-alp-vphi} \LL \varphi(\tilde{x}) & = \lim_{\epsilon\rightarrow 0}\int_{|z|\geq \epsilon} \phi(z) \big(\varphi(\tilde{x})-\varphi(\tilde{x}+z)\big) \dd z = \lim_{\epsilon\rightarrow 0}\int_{\epsilon \leq |z|\leq \frac{1}{2}} \phi^S(z) \big(\varphi(\tilde{x})-\varphi(\tilde{x}+z)\big) \dd z \nonumber \\ & = - \lim_{\epsilon\rightarrow 0}\int_{\epsilon \leq |z|\leq \frac{1}{2}} \phiSt(z)\, \theta(\tilde{x}+z) \dd z =-\mathrm{p.v.}\int_\T \phiSt(z)\theta(\tilde{x}+z) \dd z, \end{align}
with
\begin{align}\label{tild-phi-df} \phiSt(z) = \mathrm{sgn}(z)\int_{|z|}^{\frac{1}{2}} \phi^S(r) \dd r,\quad\forall~z\in\T\backslash\{0\}, \end{align}
where the second equality follows from integration by parts, together with the facts that $-\partial_z \phiSt(z)=\phi^S(z)$ for every $z\neq0$, that $\widetilde{\phi}^S(\pm\frac12)=0$, and that for any $\alpha\in(0,2)$
\begin{align*} \lim_{\epsilon\rightarrow 0} |\phiSt(\epsilon)(2\varphi(\tilde{x})-\varphi(\tilde{x}+\epsilon)-\varphi(\tilde{x}-\epsilon))| & \leq \|\partial_x^2\varphi\|_{L^\infty} \lim_{\epsilon\rightarrow 0} \epsilon^2 \int_\epsilon^{\frac{1}{2}} |\phi^S(r)|\dd r \\ & \leq \omega'(0+) \lim_{\epsilon\rightarrow 0} \epsilon^2 \Big( \int_\epsilon^{r_0 } \frac{2c_1}{r^{1+\alpha}} \dd r + \int_{r_0}^{\frac{1}{2}} c_3 \dd r\Big) =0. \end{align*}
Here, we use $\partial_x^2\varphi=\partial_x\rho$, which at time $t_1$ is bounded by $\omega'(0+)$, a quantity that is finite due to \eqref{ome-cond}.
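For completeness, the vanishing of the boundary term in the last display is quantitative, and this is precisely where the restriction $\alpha\in(0,2)$ enters:

```latex
\epsilon^{2}\int_\epsilon^{r_0}\frac{2c_1}{r^{1+\alpha}}\,\dd r
\;\leq\; \frac{2c_1}{\alpha}\,\epsilon^{2-\alpha}
\;\xrightarrow[\epsilon\to0]{}\;0
\qquad (0<\alpha<2),
\qquad\text{while}\qquad
\epsilon^{2}\int_{r_0}^{\frac12} c_3\,\dd r \;\leq\; \frac{c_3}{2}\,\epsilon^{2}\;\xrightarrow[\epsilon\to0]{}\;0 .
```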
From \eqref{Lam-alp-vphi} and the oddness of the kernel $\widetilde{\phi}^S(z)$, we can rewrite
\begin{equation}\label{Lam-alp-vphi2} \LL \varphi(\tilde{x}) = -\mathrm{p.v.}\int_\T \phiSt(z)\rho(\tilde{x}+z)\dd z = \mathrm{p.v.}\int_\T \phiSt(z) \big( \rho(\tilde{x})-\rho(\tilde{x}+z)\big)\dd z. \end{equation}
Now, we begin to estimate $U_2$. The idea follows \cite[Appendix]{KNV}, with modifications to accommodate the periodic influence function $\phi^S$ and the misalignment.
\def\xm{x_*}
Denote $\xm=\frac{\tilde{x}+\tilde{y}}{2}$. We decompose $\LL\varphi(\tilde{x})-\LL\varphi(\tilde{y})$ as follows:
\begin{align*} \LL \varphi(\tilde{x})-\LL\varphi(\tilde{y}) =&\left(\mathrm{p.v.}\int_{|z|\leq 2\xi} \phiSt(z)\big(\rho(\tilde{x})-\rho(\tilde{x}+z)\big) \dd z - \mathrm{p.v.}\int_{|z|\leq 2\xi} \phiSt(z)\big(\rho(\tilde{y})-\rho(\tilde{y}+z)\big) \dd z \right) \\ & + \left(\int_{2\xi\leq |z|\leq \frac{1}{2}} \phiSt(z)\big(\rho(\xm)-\rho(\tilde{x}+z)\big)\dd z - \int_{2\xi\leq |z|\leq \frac{1}{2}} \phiSt(z)\big(\rho(\xm)-\rho(\tilde{y}+z)\big) \dd z \right) \\ := & \,U_{21} + U_{22}.
\end{align*} For $U_{21}$, we apply \eqref{eq:scena2} and get \begin{align} |U_{21}|\leq&\, 4\int_0^{2\xi}|\phiSt(\eta)|\omega(\eta)\dd \eta \leq\frac{8c_1}{\alpha}\int_0^{2\xi}\frac{\omega(\eta)}{\eta^\alpha}\dd \eta +2c_3\int_0^{2\xi}\omega(\eta)\dd \eta\nonumber\\ \leq&\, \frac{32c_1}{\alpha}\int_0^\xi\frac{\omega(\eta)}{\eta^\alpha}\dd\eta+4c_3M_1\xi,\label{est:21} \end{align} where in the second inequality, we estimate $\phiSt$ using \eqref{tild-phi-df} and conditions \eqref{phi-s-assum1} and \eqref{phi-s-assum2}: \begin{equation}\label{phiSt-es} |\phiSt(z)|\leq \int_{|z|}^{r_0}\frac{2c_1}{r^{1+\alpha}}\dd r+\int_{r_0}^{\frac12}c_3\dd r\leq \frac{2c_1}{\alpha}\frac{1}{|z|^\alpha}+\frac{c_3}{2},\quad \forall 0<|z|\leq r_0, \end{equation} and in the last inequality, we change variable and use $\omega(2\eta)\leq 2\omega(\eta)$ due to the concavity of $\omega$ \begin{equation}\label{omegashift} \int_0^{2\xi}\frac{\omega(z)}{z^\alpha}\dd z=2^{1-\alpha}\int_0^\xi\frac{\omega(2\eta)}{\eta^\alpha}\dd\eta\leq 2^{2-\alpha}\int_0^\xi\frac{\omega(\eta)}{\eta^\alpha}d\eta. \end{equation} For $U_{22}$, we need to make use of the cancelation. Decompose the term as follows \begin{align*} U_{22}& =\,\int_{\frac52\xi\leq|z-\xm|\leq\frac12}\big(\phiSt(z-\tilde{x})-\phiSt(z-\tilde{y})\big) \big(\rho(\xm)-\rho(z)\big)\dd z\\ &\mbox{}\quad\; +\int_{-3\xi}^{-2\xi}\phiSt(z) \big(\rho(\xm)-\rho(\tilde{x}+z)\big)\dd z - \int_{2\xi}^{3\xi}\phiSt(z) \big(\rho(\xm)-\rho(\tilde{y}+z)\big)\dd z \\ & =:\,U_{22a}+U_{22b}+U_{22c}. \end{align*} In the first part, change variable and use the Newton-Leibniz formula \begin{align*} U_{22a} &= \int_{\frac52\xi\leq| z |\leq\frac12}\left(\phiSt\Big( z -\frac\xi2\Big)-\phiSt\Big( z +\frac\xi2\Big)\right) \big(\rho(\xm)-\rho(x_*+ z )\big)\dd z \\ & = -\xi \int_0^1 \int_{\frac{5}{2}\xi \leq | z |\leq \frac{1}{2}} \phi^S\Big( z -\frac{\xi}{2} + \tau \xi\Big) \big(\rho(\xm)-\rho(x_*+ z )\big)\dd z \dd \tau. 
\end{align*}
From conditions \eqref{phi-s-assum1} and \eqref{phi-s-assum2}, it follows that
\begin{align}\label{est:22a} & |U_{22a}| \leq \xi \int_0^1 \int_{\frac{5}{2}\xi \leq | z |\leq \frac{1}{2}} \Big|\phi^S\Big( z -\frac{\xi}{2} + \tau \xi\Big)\Big| \omega(| z |)\, \dd z \dd \tau \nonumber \\ & \leq \xi \int_0^1 \int_{\frac{5}{2}\xi \leq | z |\leq \frac{1}{2}, | z -\frac{\xi}{2} +\tau\xi|\leq r_0} \frac{2 c_1 \omega(| z |)}{| z -\frac{\xi}{2} +\tau\xi|^{1+\alpha}} \dd z \dd \tau + c_3 \xi \int_0^1 \int_{\frac{5}{2}\xi \leq | z |\leq \frac{1}{2}, | z -\frac{\xi}{2} +\tau\xi|\geq r_0} \omega(| z |) \dd z \dd \tau \nonumber \\ & \leq 4 c_1 \xi \int_{\frac{5}{2}\xi \leq | z | \leq r_0 +\xi} \frac{\omega(| z |)}{| z |^{1+\alpha}} \dd z + c_3 M_1 \xi \leq 8c_1 \xi\int_{\frac52\xi}^{r_0 + \xi } \frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta + c_3M_1 \xi , \end{align}
where in the last line we have used $(|z|-\frac{\xi}{2})^{-(1+\alpha)} \leq (\frac{4}{5}|z|)^{-(1+\alpha)}\leq 2 |z|^{-(1+\alpha)}$ for every $|z|\geq \frac{5}{2}\xi$. For the second part, we change variables:
\begin{align*} |U_{22b}| = \bigg|\int_{\frac32\xi}^{\frac52\xi}\phiSt(\eta+\frac\xi2) \big(\rho(\xm)-\rho(\xm-\eta)\big)\dd\eta\bigg| \leq \int_{\frac32\xi}^{\frac52\xi}\Big|\phiSt(\eta+\frac\xi2)\Big|\omega(\eta)\dd\eta \leq \omega\big(\frac{5}{2}\xi\big) \int_{2\xi}^{3\xi} |\phiSt(\eta)| \dd\eta, \end{align*}
and then it can be treated by using \eqref{phiSt-es} and the concavity of $\omega$:
\begin{equation}\label{est:22b} |U_{22b}|\leq \frac{5}{2} \omega(\xi) \Big( \frac{2c_1}{\alpha(2\xi)^\alpha} + \frac{c_3}{2} \Big)\xi \leq \frac{5c_1}{\alpha}\omega(\xi) \xi^{1-\alpha} + \frac{5c_3}{4}M_1\xi. \end{equation}
The third part $U_{22c}$ satisfies the same bound as $U_{22b}$.
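For completeness, the kernel inequality invoked in the last line of \eqref{est:22a} only uses $|z|\geq\frac{5}{2}\xi$, $\tau\in[0,1]$ and $\alpha<2$:

```latex
\Big|z-\tfrac{\xi}{2}+\tau\xi\Big| \;\geq\; |z|-\tfrac{\xi}{2}
\;\geq\; |z|-\tfrac{|z|}{5} \;=\; \tfrac{4}{5}|z|,
\qquad\text{and}\qquad
\Big(\tfrac{5}{4}\Big)^{1+\alpha} \;\leq\; \Big(\tfrac{5}{4}\Big)^{3} \;=\; \tfrac{125}{64} \;<\; 2,
```

which together give $|z-\frac{\xi}{2}+\tau\xi|^{-(1+\alpha)}\leq 2|z|^{-(1+\alpha)}$.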
Collecting the estimates \eqref{U1-es}, \eqref{est:21}, \eqref{est:22a} and \eqref{est:22b}, we obtain the bound
\[|u(\tilde{x})-u(\tilde{y})|\leq\frac{32c_1}{\alpha}\int_0^\xi\frac{\omega(\eta)}{\eta^\alpha}\dd\eta + \frac{10 c_1}{\alpha}\omega(\xi) \xi^{1-\alpha} +8c_1\xi\int_\xi^{r_0+\xi}\frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta +M_1(\|F_0\|_{L^\infty}+8c_3)\xi,\]
which, combined with the estimate $\int_0^\xi \frac{\omega(\eta)}{\eta^\alpha}\dd \eta \geq \frac{\omega(\xi)}{\xi} \int_0^\xi \frac{1}{\eta^{\alpha-1}} \dd \eta = \frac{1}{2-\alpha} \omega(\xi)\xi^{1-\alpha}$ (valid since the concavity of $\omega$ together with $\omega(0)=0$ implies that $\eta\mapsto\frac{\omega(\eta)}{\eta}$ is nonincreasing), concludes the proof of \eqref{u-MOC-es}.

Next, we provide an explicit estimate of $\Omega(\xi)$ when $\omega(\xi)$ is chosen as in \eqref{MOC1}. For $0<\alpha<1$, one can follow a procedure similar to \cite[Lemma 4.4]{DKRT}. However, this does not work for $\alpha\geq1$: the first term in \eqref{u-MOC-es} cannot be controlled by the dissipation term in the case $\xi>\lambda$. To overcome this difficulty, we introduce an enhanced estimate on $U_2$ for the pair $(\tilde{x},\tilde{y})=(x,y)$ satisfying the breakthrough scenario \eqref{eq:scena}.
For $U_{21}$, we make use of the cancelation and bound the term by the dissipation $D_1(x,y)$ as follows:
\begin{align*} |U_{21}| =&\, \Big|\int_{|z|\leq2\xi}\phiSt(z)\big(\omega(\xi)-\rho(x+z)+\rho(y+z)\big)\dd z\Big| \\ \leq&\,\int_{|z|\leq2\xi}\left(\frac{2c_1}{|z|^\alpha}+\frac{c_3}{2}\right) \big(\omega(\xi)-\rho(x+z)+\rho(y+z)\big)\dd z\\ \leq&\, 4c_1^2\xi\int_{|z|\leq2\xi}\phi(z) \big(\omega(\xi)-\rho(x+z)+\rho(y+z)\big)\dd z+\int_{|z|\leq2\xi}\frac{c_3}{2}\cdot(2M_1)\dd z\\ \leq&\,4c_1^2D_1(x,y)\xi+4c_3M_1\xi, \end{align*}
where in the first inequality we use \eqref{phiSt-es} and the fact that $\omega(\xi)-\rho(x+z)+\rho(y+z)\geq0$; in the second inequality we use \eqref{phi-assum1} and then
\[\frac{1}{|z|^\alpha}\leq\frac{2\xi}{|z|^{1+\alpha}}\leq2c_1\xi\phi(z), \quad \forall~ |z|\leq2\xi;\]
and in the third inequality we use the definition \eqref{Dexp} of $D_1(x,y)$. The estimation of $U_1$ and $U_{22}$ is the same as above. Then, we end up with a better estimate on $u(x)-u(y)$:
\begin{align*} |u(x)-u(y)|\leq&\, 4c_1^2D_1(x,y)\xi + 8c_1 \xi\int_\xi^{r_0 + \xi } \frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta + \frac{10c_1}{\alpha}\omega(\xi) \xi^{1-\alpha} + M_1(8c_3+\|F_0\|_{L^\infty})\xi. \end{align*}
Compared with \eqref{u-MOC-es}, the problematic term is replaced by a new term involving $D_1(x,y)$, which can be absorbed by the dissipation.

Finally, let us compute explicit bounds on the terms $\xi\int_\xi^{r_0 + \xi} \frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta$ and $\omega(\xi) \xi^{1-\alpha}$ when we choose the MOC as in \eqref{MOC1}.

Case 1: $0<\xi\leq\lambda$.
As a direct consequence of \eqref{omegaovereta} and \eqref{barC-alp}, we have
\[\xi\int_\xi^{r_0 + \xi}\frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta \leq\overline{C}_\alpha \delta\lambda^{-\frac\alpha2}\xi^{1-\frac\alpha2}.\]
From formula \eqref{MOC1} and the fact that $\overline{C}_\alpha \geq \frac{1}{\alpha}$, it follows that
\begin{equation*} \frac{1}{\alpha}\omega(\xi) \xi^{1-\alpha} \leq \frac{1}{\alpha}\delta \lambda^{-1} \xi^{2-\alpha} \leq \overline{C}_\alpha \delta \lambda^{-\frac{\alpha}{2}} \xi^{1-\frac{\alpha}{2}}. \end{equation*}

Case 2: $\lambda<\xi\leq\frac{r_0}{4}$. A direct calculation leads to
\begin{align*} \xi\int_\xi^{r_0 + \xi}\frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta =\xi\int_\xi^{r_0 + \xi}\frac{\frac34\delta+\gamma\log\frac\eta\lambda}{\eta^{1+\alpha}}\dd\eta \leq&\,\frac{3 \delta}{4\alpha}\xi^{1-\alpha}+ \frac{\gamma}{\alpha^2}\xi^{1-\alpha}\left(\alpha\log\frac\xi\lambda+1\right)\\ =&\,\frac{1}{\alpha}\xi^{1-\alpha}\omega(\xi)+ \frac{\gamma}{\alpha^2} \xi^{1-\alpha} \leq\frac{2}{\alpha} \omega(\xi) \xi^{1-\alpha}, \end{align*}
where in the last inequality we apply \eqref{gamma-cd} and $\frac\gamma\alpha\leq\frac{3}{4}\delta=\omega(\lambda)<\omega(\xi)$. Collecting all the estimates above, we conclude with \eqref{u-enhanced}, as desired.
\end{proof}

\begin{proof}[Proof of Lemma \ref{lem:MOCes5}]
We first consider estimate \eqref{par-u-Linf-es}. From the relation $\partial_x u =\LL \rho + G$ and the estimate $\|G\|_{L^\infty}\leq \|F_0\|_{L^\infty} M_1$ (see \eqref{G-Linf-es1}), it suffices to bound $\LL \rho$. Let $\tilde{x}\in\T$. By an argument similar to the one leading to \eqref{Lam-alp-vphi}, we can verify
\begin{equation}\label{LLrho2} \LL\rho(\tilde{x}) = -\mathrm{p.v.}\int_\T \phiSt(z)\rho'(\tilde{x}+z) \dd z, \end{equation}
where $\phiSt$ is defined in \eqref{tild-phi-df} and satisfies estimate \eqref{phiSt-es}.
We compute \begin{align*} |\LL\rho(\tilde{x})|=&\,\bigg|\int_0^{r_0}\phiSt(\eta)(\rho'(\tilde{x}+\eta)-\rho'(\tilde{x}-\eta))\dd \eta +\int_{r_0}^{\frac12}\phiSt(\eta)(\rho'(\tilde{x}+\eta)-\rho'(\tilde{x}-\eta))\dd \eta\bigg|\\ \leq&\,\int_0^{r_0}\omega(2\eta)\left[\frac{2c_1}{\alpha \eta^\alpha}+\frac{c_3}{2}\right]\dd \eta +\int_{r_0}^{\frac12}\frac{c_3}{2}\cdot(2M_{2,T})\dd \eta\\ \leq&\,\frac{2^\alpha c_1}{\alpha}\left[\int_0^{\lambda} \frac{\delta\lambda^{-1}}{\eta^{\alpha-1}}\dd\eta +\int_\lambda^{2r_0}\frac{\frac{3}{4}\delta+\gamma\log\frac{\eta}{\lambda}}{\eta^\alpha}\dd \eta\right]+\frac{c_3}{2}M_{2,T}\\ \leq&\,\frac{2^\alpha c_1}{\alpha(2-\alpha)}\delta\lambda^{-(\alpha-1)}+\frac{2^\alpha c_1}{\alpha(\alpha-1)}\cdot\frac{3}{4}\delta\lambda^{-(\alpha-1)}+\frac{2^\alpha c_1}{\alpha(\alpha-1)^2}\gamma\lambda^{-(\alpha-1)}+\frac{c_3}{2}M_{2,T}\\ \leq&\,\frac{4 c_1}{(\alpha-1)^2(2-\alpha)}\delta\lambda^{-(\alpha-1)}+\frac{c_3}{2}M_{2,T}, \end{align*} which leads to the desired estimate \eqref{par-u-Linf-es}. Next, we consider estimate \eqref{par-u-diff-es}. Let $x,y\in\T$ be the points that satisfy the breakthrough scenario \eqref{eq:scena3}. Then, \begin{equation}\label{par-u-diff-dec} \partial_x u(x) -\partial_x u(y) =\big(\LL \rho(x) -\LL \rho(y)\big) + \big( G(x) - G(y) \big) =: \,\Pi_1 + \Pi_2 . \end{equation} For the term $\Pi_1$, since $\LL \rho(x)$ can be written as \eqref{LLrho2}, we can directly apply the result in Lemma~\ref{lem:MOCes-u}, and obtain \[|\LL \rho(x) -\LL \rho(y)|\leq 4c_1^2D_1'(x,y) + 8c_1 \xi\int_\xi^{r_0 + \xi } \frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta + \frac{10c_1}{\alpha}\omega(\xi) \xi^{1-\alpha} + 16c_3M_{2,T}\xi, \] by repeating the enhanced estimate on $U_2$, directly replacing $(\rho, \LL\varphi, D_1,M_1)$ with $(\rho', \LL\rho, D_1', 2 M_{2,T})$ respectively. 
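As a sanity check on the coefficient $\frac{2^\alpha c_1}{\alpha(\alpha-1)^2}\gamma\lambda^{-(\alpha-1)}$ appearing in the estimate of $\LL\rho$ above: for $1<\alpha<2$, integration by parts evaluates the underlying logarithmic integral exactly:

```latex
\int_1^{\infty}\frac{\log\zeta}{\zeta^{\alpha}}\,\dd\zeta
= \Big[-\frac{\zeta^{1-\alpha}}{\alpha-1}\,\log\zeta\Big]_1^{\infty}
  + \frac{1}{\alpha-1}\int_1^{\infty}\zeta^{-\alpha}\,\dd\zeta
= 0 + \frac{1}{(\alpha-1)^{2}},
```

so that $\gamma\int_\lambda^{2r_0}\frac{\log\frac{\eta}{\lambda}}{\eta^{\alpha}}\,\dd\eta\leq\gamma\,\lambda^{1-\alpha}\int_1^{\infty}\frac{\log\zeta}{\zeta^{\alpha}}\,\dd\zeta=\frac{\gamma}{(\alpha-1)^2}\,\lambda^{-(\alpha-1)}$.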
For $\Pi_2$, thanks to estimate \eqref{par-G-es} and the mean value theorem, we immediately find
\begin{equation*} |\Pi_2| \leq \|\partial_x G(t_1)\|_{L^\infty} \xi \leq \Big( \|F_0\|_{L^\infty}M_{2,T}+M_1^2\|H_0\|_{L^\infty} \Big) \xi. \end{equation*}
Hence, based on the above analysis, and using the explicit estimates of $\xi\int_\xi^{r_0 + \xi} \frac{\omega(\eta)}{\eta^{1+\alpha}}\dd\eta$ and $\omega(\xi) \xi^{1-\alpha}$ as in Lemma \ref{lem:MOCes-u}, we can conclude estimate \eqref{par-u-diff-es}.
\end{proof}

\section{Appendix: commutator estimates}\label{sec:append}

We first present two Kato-Ponce type commutator estimates.
\begin{lemma}\label{lem:comm}
Let $x\in \R^d$ or $\T^d$, and $s\geq 0$. Then there exists a constant $C=C(s,d)>0$ so that
\begin{equation}\label{eq:comm-es} \|[\Lambda^s \nabla, f,g]\|_{L^2} \leq C \big( \|\nabla f\|_{L^\infty} \|g\|_{\dot H^s} + \|f\|_{\dot H^s} \|\nabla g\|_{L^\infty} \big), \end{equation}
and
\begin{equation}\label{eq:comm-es3} \|[\Lambda^s \nabla,f]g\|_{L^2} \leq C \big(\|\nabla_x f\|_{L^\infty} \|g\|_{\dot H^s} + \|f\|_{\dot H^{s+1}} \|g\|_{L^\infty} \big). \end{equation}
\end{lemma}
\begin{proof}
We only consider $x\in \R^d$; the case of $\T^d$ can be treated similarly.
We first recall the following Kato-Ponce type commutator estimate proved in \cite[Corollary 1.4]{Li19}: for $s>-1$ suppose $A^s$ is a differential operator such that its symbol $\widehat{A^s}(\zeta)$ is a homogeneous function of degree $s+1$ and $\widehat{A^s}(\zeta)\in C^\infty(\mathbb{S}^{d-1})$, then for $1<p<\infty$ and for any $s_1,s_2\geq 0$ with $s_1+s_2 =s$, we have \begin{align}\label{comm-es-Li} \Big\|A^s(f\,g) - \sum_{|\gamma|\leq s_1} \frac{1}{\gamma !} \partial^\gamma f A^{s,\gamma}g - \sum_{|\sigma|< s_2} \frac{1}{\sigma !} \partial^\sigma g\, A^{s,\sigma}f \Big \|_{L^p} \leq C \|\Lambda^{s_1} f\|_{\mathrm{BMO}} \|\Lambda^{s_2} g\|_{L^p}, \end{align} where $C= C(s,s_1,s_2, p, d)$, $\gamma = (\gamma_1,\cdots,\gamma_d)\in \N^d$, $\partial^\gamma=\partial^\gamma_x = \partial_{x_1}^{\gamma_1}\cdots \partial_{x_d}^{\gamma_d}$, $|\gamma| = \sum_{j=1}^d \gamma_j$, $\gamma ! =\gamma_1!\cdots \gamma_d!$, and the operators $A^{s,\gamma}$, $\Lambda^s$ are defined via the Fourier transform as \begin{align*} \widehat{A^{s,\gamma} f}(\zeta) := i^{-|\gamma|} \partial^\gamma_\zeta\big(\widehat{A^s}(\zeta)\big)\, \hat{f}(\zeta), \quad \textrm{and}\quad \widehat{\Lambda^s f}(\zeta) := |\zeta|^s \hat{f}(\zeta). 
\end{align*}
In order to prove \eqref{eq:comm-es}, we let $A^s = \Lambda^s \partial_{x_j}$ ($j=1,\cdots,d$), $s_1=1$, $s_2= s$, $p=2$, and it follows that
\begin{align*} \|[\Lambda^s\partial_{x_j}, f,g]\|_{L^2 } & = \|\Lambda^s\partial_{x_j} (f\,g) - f\, (\Lambda^s\partial_{x_j}g) - g\, (\Lambda^s\partial_{x_j} f)\|_{L^2} \\ & \lesssim \sum_{|\gamma|=1} \|\partial^\gamma f\, A^{s,\gamma}g\|_{L^2} + \sum_{1\leq |\sigma| <s} \|\partial^\sigma g \,A^{s,\sigma} f\|_{L^2} + \|\Lambda f\|_{\mathrm{BMO}} \|\Lambda^s g\|_{L^2} \\ & \lesssim \|\nabla f\|_{L^\infty} \|\Lambda^s g\|_{L^2} + \|\nabla g\|_{L^\infty} \|\Lambda^s f\|_{L^2} + \sum_{2\leq |\sigma| <s} \|\partial^\sigma g\|_{L^{\frac{2(s-1)}{|\sigma|-1}}} \|A^{s,\sigma} f\|_{L^{\frac{2(s-1)}{s-|\sigma|}}}, \end{align*}
where in the last line we also used the Calder\'on-Zygmund theorem. Since $A^{s,\sigma}$ is a multiplier operator whose symbol $\widehat{A^{s,\sigma}}(\zeta)$ is a homogeneous function of degree $s+1-|\sigma|$, the Calder\'on-Zygmund theory also implies that, for every $2\leq |\sigma|<s$,
\begin{align*} \|A^{s,\sigma} f\|_{L^{\frac{2(s-1)}{s-|\sigma|}}} \lesssim \|\Lambda^{s+1-|\sigma|} f\|_{L^{\frac{2(s-1)}{s-|\sigma|}}} \lesssim \|\Lambda^{s-|\sigma|} \nabla f\|_{L^{\frac{2(s-1)}{s-|\sigma|}}}. \end{align*}
Thus, by using the following interpolation inequalities (see e.g. \cite[Pg.~28 and Lemma 2.10]{Li19}), valid for every $2\leq |\sigma|< s$:
\begin{align*} \|\partial^\sigma g\|_{L^{\frac{2(s-1)}{|\sigma|-1}}} \lesssim \|\nabla g\|_{L^\infty}^{\frac{s-|\sigma|}{s-1}} \|\Lambda^s g\|_{L^2}^{\frac{|\sigma|-1}{s-1}}, \quad \textrm{and}\quad \|\Lambda^{s-|\sigma|}\nabla f\|_{L^{\frac{2(s-1)}{s-|\sigma|}}} \lesssim \|\Lambda^{s-1} \nabla f\|_{L^2}^{\frac{s-|\sigma|}{s-1}} \|\nabla f\|_{L^\infty}^{\frac{|\sigma|-1}{s-1}}, \end{align*}
we infer that
\begin{align*} \sum_{2\leq |\sigma| <s} \|\partial^\sigma g\|_{L^{\frac{2(s-1)}{|\sigma|-1}}} \|A^{s,\sigma} f\|_{L^{\frac{2(s-1)}{s-|\sigma|}}} & \lesssim \big(\|\nabla g\|_{L^\infty} \|\Lambda^s f\|_{L^2} \big)^{\frac{s-|\sigma|}{s-1}} \big(\|\nabla f\|_{L^\infty} \|\Lambda^s g\|_{L^2} \big)^{\frac{|\sigma|-1}{s-1}} \\ & \lesssim \|\nabla f\|_{L^\infty} \|\Lambda^s g\|_{L^2} + \|\nabla g\|_{L^\infty} \|\Lambda^s f\|_{L^2} . \end{align*}
Hence, gathering the above estimates leads to \eqref{eq:comm-es}, as desired.

Estimate \eqref{eq:comm-es3} is more or less classical; it can be proved by the same argument as above, so we omit the details.
\end{proof}

The following commutator estimate involving the L\'evy operator $\LL$ plays an important role in our local well-posedness result.
\begin{lemma}\label{lem:com-es}
Let $x\in \R$ or $\T$. Let $\LL$ be the L\'evy operator given by \eqref{Lop-exp} with kernel function $\phi(x)=\phi(-x)\in C^4(\R\setminus\{0\})$ satisfying assumptions (A1)(A2) with $\alpha\in (0,2)$, and let the operator $\sqrt{C'\mathrm{Id} +\LL}$ be given via Fourier transform as \eqref{def:sqrtL}. Then we have
\begin{equation}\label{eq:comm-es0} \|[\sqrt{C' \mathrm{Id} + \LL\,}, g]f\|_{L^2} \leq C \|f\|_{L^2}\|g\|_{C^{\frac{\alpha}{2}+\epsilon}},\quad \textrm{with\; $\epsilon>0$}, \end{equation}
with $C>0$ a constant depending on $\LL$ and $\epsilon$.
\end{lemma}

\begin{remark}\label{rmk:comm-es}
Note that estimate \eqref{eq:comm-es0} is a suitable generalization of the following commutator estimate (see \cite[Pg. 32]{DKRT})
\begin{equation}\label{eq:comm-es2} \|[\Lambda^{\frac{\alpha}{2}}, g]f\|_{L^2} \leq C \|f\|_{L^2}\|g\|_{C^{\frac{\alpha}{2}+\epsilon}},\quad \textrm{with $\epsilon>0$}. \end{equation}
\end{remark}

We first recall some basic facts from paradifferential calculus. One can choose two nonnegative radial functions $\chi, \varphi\in C^\infty_c(\mathbb{R} )$, supported respectively in the ball $\{\zeta\in \mathbb{R} :|\zeta|\leq \frac{4}{3} \}$ and the annulus $\{\zeta\in \mathbb{R} : \frac{3}{4}\leq |\zeta|\leq \frac{8}{3} \}$, such that (see \cite{BCD11})
\begin{equation*} \chi(\zeta)+\sum_{k\in \mathbb{N}}\varphi(2^{-k}\zeta)=1, \quad \forall~ \zeta\in \mathbb{R} . \end{equation*}
For every $ f\in S'(\R)$, we define the non-homogeneous Littlewood-Paley operators as follows:
\begin{equation}\label{LPop} \Delta_{-1}f:=\chi(D)f; \quad \, \quad\Delta_{k}f:=\varphi(2^{-k}D)f,\;\;\;S_k f:=\sum_{-1\leq l\leq k-1} \Delta_l f,\;\;\;\forall~ k\in \mathbb{N}. \end{equation}
Now for $s\in \mathbb{R}, (p,r)\in[1,+\infty]^2$, the inhomogeneous Besov space is defined as
\begin{equation}\label{Besov-spr} B^s_{p,r}:=\Big\{f\in\mathcal{S}'(\mathbb{R} );\|f\|_{B^s_{p,r}}:=\|\{2^{ks}\|\Delta _k f\|_{L^p}\}_{k\geq -1}\|_{\ell^r }<\infty \Big\}. \end{equation}
In particular, $H^s = B^s_{2,2}$ for every $s\geq 0$. Moreover, Bony's decomposition yields
\begin{equation*} f\,g = T_f g + T_g f + R(f,g), \end{equation*}
with
\begin{equation*} T_f g:= \sum_{k\in \N} S_{k-1}f \Delta_k g,\quad R(f,g)=\sum_{k\geq -1}\Delta_k f \widetilde{\Delta}_k g,\quad \widetilde{\Delta}_k := \Delta_{k-1} + \Delta_k + \Delta_{k+1}. \end{equation*}

\begin{proof}[Proof of Lemma \ref{lem:com-es}]
We prove estimate \eqref{eq:comm-es0} for $x\in \R$; the periodic case can be easily adapted.
By using Bony's decomposition, we have the following splitting:
\begin{equation}\label{sqL-decom} \begin{split} \sqrt{C'\mathrm{Id} +\LL\,} (f\, g) & = \sqrt{C'\mathrm{Id} +\LL\,} T_f g + \sqrt{C'\mathrm{Id} +\LL\,} T_g f + \sqrt{C'\mathrm{Id} +\LL\,} R(f,g) : = J_1 + J_2 + J_3, \\ (\sqrt{C'\mathrm{Id} +\LL\,} f)\, g & = T_{\sqrt{C'\mathrm{Id} +\LL} f} g + T_g (\sqrt{C'\mathrm{Id} +\LL\,} f) + R(\sqrt{C'\mathrm{Id} +\LL\,} f,g): = J_4 + J_5 + J_6. \end{split} \end{equation}
Through standard paraproduct calculus and Lemma \ref{lem:symb}, the terms $J_1$, $J_3$, $J_4$, $J_6$ can be treated as follows:
\begin{align*} \|J_1\|_{L^2}^2 & = \sum_{q\geq -1}\|\Delta_q \sqrt{C'\mathrm{Id} + \LL\,}T_f g\|_{L^2}^2 \lesssim \sum_{|k-q|\leq 4,k\in \N} (C + C 2^{q\alpha}) \|\Delta_q\big(S_{k-1}f \, \Delta_k g\big)\|_{L^2}^2 \\ & \lesssim \sum_{k\in \N} (C + C 2^{k \alpha}) \|S_{k-1} f\|_{L^2}^2 \|\Delta_k g\|_{L^\infty}^2 \lesssim \|f\|_{L^2}^2 \|g\|_{B^{\alpha/2}_{\infty,2}}^2 \lesssim \|f\|_{L^2}^2 \|g\|_{C^{\frac{\alpha}{2}+\epsilon}}^2, \\ \|J_3\|_{L^2}^2 & = \sum_{q\geq -1} \|\Delta_q \sqrt{C'\mathrm{Id}+ \LL\,} R(f,g)\|_{L^2}^2 \lesssim \sum_{q\geq -1} \sum_{k\geq q-2} (C + C 2^{q\alpha}) \| \Delta_q \big( \Delta_k f\, \widetilde{\Delta}_k g\big)\|_{L^2}^2 \\ & \lesssim \|f\|_{L^2}^2 \sum_{q\geq -1} \sum_{k\geq q-2} 2^{(q-k)\alpha} 2^{k \alpha} \|\widetilde{\Delta}_k g\|_{L^\infty}^2 \lesssim \|f\|_{L^2}^2 \|g\|_{C^{\frac{\alpha}{2}+\epsilon}}^2, \\ \|J_4\|_{L^2}^2 & = \sum_{q\geq -1}\|\Delta_q T_{\sqrt{C'\mathrm{Id} + \LL}f} g\|_{L^2}^2 \lesssim \sum_{|k-q|\leq 4, k\in \N} (C + C 2^{k\alpha}) \|S_{k-1}f\|_{L^2}^2 \, \|\Delta_k g\|_{L^\infty}^2 \lesssim \|f\|_{L^2}^2 \|g\|_{C^{\frac{\alpha}{2}+\epsilon}}^2, \\ \|J_6\|_{L^2}^2 & = \sum_{q\geq -1} \|\Delta_q R(\sqrt{C'\mathrm{Id}+ \LL\,}f,g)\|_{L^2}^2 \lesssim \sum_{q\geq -1} \sum_{k\geq q-2} (C + C 2^{k\alpha}) \| \Delta_k f\|_{L^2}^2 \| \widetilde{\Delta}_k g \|_{L^\infty}^2 \\ & \lesssim \|f\|_{L^2}^2 \sum_{k\geq -1} (k+2) 2^{k
\alpha} \|\widetilde{\Delta}_k g\|_{L^\infty}^2 \lesssim \|f\|_{L^2}^2 \|g\|_{C^{\frac{\alpha}{2}+\epsilon}}^2. \end{align*} Next we are devoted to the estimation of $J_2 - J_5$. For every $q\geq -1$, observe that \begin{align}\label{J2-J5-dec} \Delta_q J_2 -\Delta_q J_5 & = \Delta_q \sqrt{C'\mathrm{Id} +\LL\,} T_g f - \Delta_q T_g(\sqrt{C'\mathrm{Id} +\LL\,}f) \nonumber \\ & = \sum_{|k-q|\leq 4, k\in \N} \Delta_q \Big( \sqrt{C'\mathrm{Id} + \LL\,} \big(S_{k-1}g \Delta_k f\big) - S_{k-1} g \big(\sqrt{C' \mathrm{Id} +\LL\,} \Delta_k f \big)\Big) \nonumber \\ & = : \sum_{|k-q|\leq 4, k\in \N} \Pi_{k,q}. \end{align} We first consider the case that $q\geq -1$ is large enough. Following the idea of \cite{KPV93}, and recalling that $A(\zeta)$ defined by \eqref{LKf} is the symbol of operator $\LL$, we use the Fourier transform to write $\Pi_{k,q}(x)$ as follows \begin{align} \Pi_{k,q}(x) = & \iint \Big(\sqrt{C' + A(\zeta + \eta)} - \sqrt{C' + A(\zeta)} \Big) \varphi_{2^q}(\zeta + \eta) \chi_{2^{k-2}}(\eta) \varphi_{2^k}(\zeta) \widehat{f}(\zeta) \widehat{g}(\eta) e^{i(\zeta+\eta)x} \dd \zeta \dd \eta \nonumber \\ = & \iint m_{k,q}(\zeta, \eta)\, \varphi_{2^k}(\zeta) \widehat{f}(\zeta)\, \chi_{2^{k-2}}(\eta) |\eta| \widehat{g}(\eta)\, e^{i(\zeta+\eta)x}\, \dd \zeta \dd \eta, \label{Pi-kq-exp2} \end{align} where $(\varphi_r,\chi_r,\widetilde\varphi_r,\widetilde\chi_r)(\cdot) := (\varphi,\chi,\widetilde\varphi,\widetilde\chi)(\frac{\cdot}{r})$ for $r>0$, \begin{align}\label{m-kq} m_{k,q}(\zeta,\eta) := \frac{\sqrt{C' + A(\zeta + \eta)} - \sqrt{C' + A(\zeta)} }{ |\eta|} \varphi_{2^q}(\zeta + \eta) \widetilde{\chi}_{2^{k-2}}(\eta) \widetilde{\varphi}_{2^k}( \zeta), \end{align} and $\widetilde{\varphi},\widetilde{\chi}\in C^\infty_c(\R)$ such that $0\leq \widetilde{\varphi},\widetilde{\chi}\leq 1$ and \begin{align*} \widetilde{\varphi}\equiv 1\; \textrm{on}\; \big\{\frac{3}{4}\leq |\zeta|\leq \frac{8}{3}\big\},\;\; \mathrm{supp}\,\widetilde{\varphi}\subset \{\frac{2}{3}\leq 
|\zeta|\leq 3\},\;\; \widetilde{\chi}\equiv 1\; \textrm{on}\; \big\{|\zeta|\leq \frac{4}{3}\big\},\;\; \mathrm{supp}\,\widetilde{\chi}\subset \{|\zeta|\leq \frac{3}{2}\}. \end{align*} We also have \begin{align*} \Pi_{k,q}(x) = \iint h_{k,q}(y,z)\, \Delta_k f(x-y)\, S_{k-1} \Lambda g(x-z) \dd y \dd z, \end{align*} with \begin{align}\label{h-kq} h_{k,q}(y,z) = C_0 \iint m_{k,q}(\zeta,\eta) e^{i( y\zeta + z\eta)}\,\dd \zeta \dd \eta. \end{align} Note that the assumption that $q$ is sufficiently large is mainly used to ensure that the frequencies $\zeta+\eta$ and $\zeta$ in $m_{k,q}(\zeta,\eta)$ satisfy $|\zeta+\eta|,|\zeta|\geq \max\{a_0^{-1},1\}$; thus we may assume that $q\geq q_0$ with $q_0 :=7 + [\log_2 \max\{a_0^{-1},1\}]$. Concerning $h_{k,q}$ in this case, we have the following key property (whose proof is postponed for the moment). \begin{lemma}\label{lem:m-h-prop} Let $q\in\N$ be large enough so that $q\geq q_0$, and let $k\in \N$ satisfy $|k-q|\leq 4$. Then $h_{k,q}(y,z)$ given by \eqref{h-kq} satisfies \begin{equation}\label{eq:claim} \iint_{\R^2} |h_{k,q}(y,z)|\,\dd y\dd z \leq C 2^{k(\frac{\alpha}{2} -1)}, \end{equation} with $C>0$ a constant independent of $k,q$. \end{lemma} With Lemma \ref{lem:m-h-prop} at our disposal, we derive that \begin{align}\label{J2-J5-hi-es} \sum_{q\geq q_0}\|\Delta_q J_2 -\Delta_q J_5\|_{L^2}^2 & \leq \sum_{q\geq q_0} \sum_{|k-q|\leq 4,k\in\N} \|\Pi_{k,q}\|_{L^2}^2 \nonumber \\ & \leq \sum_{q\geq q_0} \sum_{|k-q|\leq 4,k\in\N} \|h_{k,q}\|_{L^1(\R^2)}^2 \|\Delta_k f\|_{L^2}^2 \|S_{k-1}\Lambda g\|_{L^\infty}^2 \nonumber \\ & \leq \sum_{q\geq q_0} \sum_{|k-q|\leq 4,k\in\N} 2^{2k(\frac{\alpha}{2}-1)} \|\Delta_k f\|_{L^2}^2 \|S_{k-1}\Lambda g\|_{L^\infty}^2 \nonumber \\ & \leq C \Big(\sum_{k\in\N}\|\Delta_k f\|_{L^2}^2\Big) \| g \|_{B^{\frac{\alpha}{2}}_{\infty,1}}^2 \leq C \|f\|_{L^2}^2 \|g\|_{C^{\frac{\alpha}{2}+\epsilon}}^2. \end{align} Next we consider the remaining case $q\leq q_0= 7 + [\log_2 \max\{a_0^{-1},1\}]$. 
By using \eqref{A-est2} and Plancherel's theorem, we directly obtain \begin{align}\label{J2-J5-lo-es} & \sum_{-1\leq q\leq q_0} \|\Delta_q J_2 - \Delta_q J_5\|_{L^2}^2 \nonumber \\ \leq & C \sum_{-1\leq q\leq q_0} \sum_{|k-q|\leq 4,k\in\N} \Big(\|\Delta_q \sqrt{C'\mathrm{Id} + \LL\,} \big( S_{k-1}g \Delta_k f\big) \|_{L^2}^2 + \|S_{k-1} g \big(\Delta_k\sqrt{C' \mathrm{Id} +\LL\,} f \big)\|_{L^2}^2\Big) \nonumber \\ \leq & C \sum_{-1\leq q \leq q_0} \sum_{|k-q|\leq 4,k\in\N} \|\Delta_k f\|_{L^2}^2 \|S_{k-1}g\|_{L^\infty}^2 \leq C \|f\|_{L^2}^2 \|g\|_{L^\infty}^2. \end{align} Hence estimates \eqref{J2-J5-hi-es} and \eqref{J2-J5-lo-es} lead to \begin{equation}\label{J2-J5-L2es} \|J_2 -J_5\|_{L^2}^2 \leq \sum_{q\geq q_0}\|\Delta_q J_2 -\Delta_q J_5\|_{L^2}^2 + \sum_{-1\leq q \leq q_0} \|\Delta_q J_2 - \Delta_q J_5\|_{L^2}^2 \leq C \|f\|_{L^2}^2 \|g\|_{C^{\frac{\alpha}{2}+\epsilon}}^2. \end{equation} Gathering \eqref{J2-J5-L2es} and the above estimates on $J_i$ ($i=1,3,4,6$) with decomposition \eqref{sqL-decom} yields the desired estimate \eqref{eq:comm-es0}. \end{proof} It remains to prove Lemma \ref{lem:m-h-prop}. \begin{proof}[Proof of Lemma \ref{lem:m-h-prop}] We first study the differentiability property of $m_{k,q}$. Notice that \begin{align}\label{m-kq2} m_{k,q}(\zeta,\eta) = \int_0^1 \Big(\frac{\partial}{\partial \zeta}\sqrt{C' + A(\zeta + \tau \eta)}\Big) \dd \tau \, \mathrm{sgn}(\eta) \varphi_{2^q}(\zeta + \eta) \widetilde{\chi}_{2^{k-2}}(\eta) \widetilde{\varphi}_{2^k}( \zeta), \end{align} with $\mathrm{sgn}(\eta)$ the usual sign function. 
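For the reader's convenience we record the (routine) step behind \eqref{m-kq2}: it is the fundamental theorem of calculus applied in the first variable of the difference quotient appearing in \eqref{m-kq}, combined with $\eta/|\eta| = \mathrm{sgn}(\eta)$. This remark is not in the original argument and is added only to make the computation explicit:

```latex
% Derivation of \eqref{m-kq2} from \eqref{m-kq}:
\begin{align*}
\sqrt{C' + A(\zeta+\eta)} - \sqrt{C' + A(\zeta)}
  &= \int_0^1 \frac{\dd}{\dd \tau} \sqrt{C' + A(\zeta + \tau\eta)} \,\dd \tau \\
  &= \eta \int_0^1 \Big(\frac{\partial}{\partial \zeta}\sqrt{C' + A}\Big)(\zeta + \tau\eta) \,\dd \tau .
\end{align*}
```

Dividing by $|\eta|$ and using $\eta/|\eta| = \mathrm{sgn}(\eta)$ then gives the integral representation \eqref{m-kq2} after multiplication by the cut-offs $\varphi_{2^q}(\zeta+\eta)\,\widetilde{\chi}_{2^{k-2}}(\eta)\,\widetilde{\varphi}_{2^k}(\zeta)$.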
Thanks to estimate \eqref{nd-sqrAz-es} and the support property, the multiplier $m_{k,q}(\zeta,\eta)$ given by \eqref{m-kq2} satisfies \begin{align}\label{m-kq-bdd} |m_{k,q}(\zeta,\eta)| \leq C \int_0^1 |\zeta + \tau \eta|^{\frac{\alpha }{2}-1} \dd \tau \, \varphi_{2^q}(\zeta + \eta) \widetilde{\chi}_{2^{k-2}}(\eta) \widetilde{\varphi}_{2^k}( \zeta) \leq C 2^{k(\frac{\alpha}{2}-1)}\widetilde{\chi}_{2^{k-2}}(\eta) \widetilde{\varphi}_{2^k}( \zeta) , \end{align} \begin{align}\label{pa-m-zeta-es} \big|\nabla_{\zeta,\eta} m_{k,q}(\zeta,\eta) \big| \lesssim & \int_0^1 \Big| \frac{\partial^2}{\partial \zeta^2}\sqrt{C' + A(\zeta + \tau \eta)}\Big| \dd \tau \varphi_{2^q}(\zeta + \eta) \widetilde{\chi}_{2^{k-2}}(\eta) \widetilde{\varphi}_{2^k}( \zeta) \nonumber \\ & + 2^{-k} \int_0^1 |\zeta + \tau \eta|^{\frac{\alpha }{2}-1} \dd \tau \,\Big(\widetilde{\chi}_{2^{k-2}}(\eta) + |\widetilde{\chi}'(2^{-(k-2)}\eta)| \Big) \Big(\widetilde{\varphi}_{2^k}( \zeta) + |\widetilde{\varphi}'(2^{-k}\zeta)| \Big) \nonumber \\ \leq\,& C 2^{(\frac{\alpha}{2}-2)k} \Big(\widetilde{\chi}_{2^{k-2}}(\eta) + |\widetilde{\chi}'(2^{-(k-2)}\eta)| \Big) \Big(\widetilde{\varphi}_{2^k}( \zeta) + |\widetilde{\varphi}'(2^{-k}\zeta)| \Big), \end{align} and for $l=2,3$, \begin{align}\label{pa-m-et-es} \big|\nabla^l_{\zeta,\eta} m_{k,q}(\zeta,\eta)\big| \leq\, C 2^{(\frac{\alpha}{2}- l-1)k} \bigg(\sum_{j=0}^l \Big|\frac{\dd^j \widetilde{\chi}}{\dd \eta^j}(2^{-(k-2)}\eta)\Big| \bigg) \bigg(\sum_{j=0}^l \Big|\frac{\dd^j \widetilde{\varphi}}{\dd \eta^j}(2^{-k}\zeta)\Big| \bigg). \end{align} where $\nabla_{\zeta,\eta}=(\partial_\zeta,\partial_\eta)$ is the vector-valued differential operator, and $C>0$ is a constant independent of $k,q$. From estimate \eqref{m-kq-bdd}, it directly follows that \begin{equation}\label{h-kq-bdd} |h_{k,q}(y,z)| \leq C 2^{k(\frac{\alpha}{2}-1)}\iint_{\R^2} \widetilde{\chi}_{2^{k-2}}(\eta) \widetilde{\varphi}_{2^k}( \zeta) \dd \zeta \dd \eta \leq C 2^{k(\frac{\alpha}{2} +1)}. 
\end{equation} Based on estimate \eqref{pa-m-et-es}, we can also derive the crucial pointwise decay estimate of $h_{k,q}(y,z)$. Noting that \begin{align*} -i(y\partial_\zeta + z \partial_\eta) e^{i(y\zeta + z \eta)} = (y^2 + z^2) e^{i (y\zeta + z \eta)}, \end{align*} we find that for every $(y,z)\neq (0,0)$, \begin{align*} h_{k,q}(y,z) & = C_0 \iint_{\R^2} m_{k,q}(\zeta,\eta) \bigg(\Big(-\frac{iy}{y^2+z^2}\partial_\zeta -\frac{iz}{y^2 + z^2 }\partial_\eta\Big)^3 e^{i(\zeta y + \eta z)}\bigg)\,\dd \zeta \dd \eta \\ & = C_0 \iint_{\R^2} \bigg(\Big(\frac{iy}{y^2+z^2}\partial_\zeta + \frac{iz}{y^2 + z^2 }\partial_\eta\Big)^3 m_{k,q}(\zeta,\eta)\bigg)\,e^{i(\zeta y + \eta z)}\,\dd \zeta \dd \eta, \end{align*} which implies that for all $(y,z)\neq (0,0)$, \begin{align}\label{h-kq-dec1} |h_{k,q}(y,z)| \leq & \frac{C }{(y^2 + z^2)^{\frac{3}{2} }} \iint_{\R^2} \big|\nabla_{\zeta,\eta}^3 m_{k,q}(\zeta,\eta) \big| \dd \zeta \dd \eta \leq \frac{C }{(y^2 + z^2)^{\frac{3}{2}}} 2^{k (\frac{\alpha}{2} -2)}. \end{align} Now we prove the desired estimate \eqref{eq:claim} relying on estimates \eqref{h-kq-bdd} and \eqref{h-kq-dec1}. Let $r>0$ be a number to be chosen later; splitting the integration domain and using polar coordinates, we have \begin{align*} \iint_{\R^2} |h_{k,q}(y,z)| \dd y\dd z & \leq \iint_{\sqrt{y^2 +z^2}\leq r} |h_{k,q}(y,z)| \dd y \dd z + \iint_{\sqrt{y^2 + z^2}\geq r} |h_{k,q}(y,z)| \dd y \dd z \\ & \leq \iint_{\sqrt{y^2 +z^2}\leq r}C 2^{k(\frac{\alpha}{2} +1)} \dd y\dd z + \iint_{\sqrt{y^2 + z^2}\geq r} \frac{C }{(y^2 + z^2)^{\frac{3}{2}}} 2^{k (\frac{\alpha}{2} -2)} \dd y \dd z \\ & \leq C 2^{k (\frac{\alpha}{2} + 1)} r^2 + C2^{k(\frac{\alpha}{2}- 2)} r^{-1}. \end{align*} Hence estimate \eqref{eq:claim} follows by choosing $r = 2^{-k}$, which balances the two terms. \end{proof} \textbf{Acknowledgements.} QM is supported by Beijing Institute of Technology Research Fund Program for Young Scholars. CT is partially supported by NSF grant DMS 1853001. 
LX is partially supported by NSFC grants (Nos. 11671039 and 11771043).
BattleView 360 'See-through' Armored Vehicle System Makes DSEI Debut

18 Sep 2015

BAE Systems has used advanced fighter jet technology to create a situational awareness system that allows armored vehicle crews to 'see through' their vehicles in real time, and gives commanders a complete view of the battlespace. The system, called BattleView 360, will be on display on the CV90 tracked vehicle at the DSEI exhibition in London this week.

BattleView 360 is highly adaptable and is being designed to seamlessly integrate with multiple existing vehicle types, systems, and radios. At its core, BattleView 360 is a digital mapping system that collates, displays, and tracks the positions of all surrounding features of interest in two- or three-dimensional modes. This allows a vehicle commander to make rapid and informed decisions and communicate plans and instructions to other vehicles. The displayed imagery helps crews identify friendly and enemy forces, and can be used to generate safer routes out of the view of the enemy.

"Knowing what is going on around you has always been a challenge for armored vehicle crews inside noisy machines with limited visibility," said Peder Sjölund, technology manager at BAE Systems Hägglunds, a subsidiary of BAE Systems, Inc. in the United States. "BattleView 360 builds on years of work across BAE Systems to improve situational awareness and integrate information so that crew workload is reduced and they can make fast, yet effective, decisions. The result is increased battlefield effectiveness and survivability."

The head-worn part of the system can be synced to vehicle cameras to provide a 'see-through' capability in both visual and infrared. It can also be used by dismounted soldiers to relay information back to the vehicle. In a complete battlespace picture environment, the display can be integrated onto other vehicles or even unmanned aerial systems. 
BattleView 360 employs a head-down touch-screen display to allow commanders to quickly assess information and make quick and efficient decisions for targeting or other purposes. It also allows the commander to view the display of other crew members, such as a gunner.

Investment in advanced technologies for the land domain is a key focus for BAE Systems. Similar investment in an intelligent turret technology is being carried out by BAE Systems Combat Vehicles (U.K.) business. Both the U.K. and Swedish programs are vehicle agnostic and the technology can be integrated with new or existing vehicles via an electronic architecture.

BattleView 360's head-down system features include:

- Displaying Blue Force positions
- UAV route planning
- Route progress monitoring
- Polygon sketching
- Line sketching
- Dead ground display
- Red ground display (ground that can be seen by hostile forces)
- Best route for self-calculation
- Best route for hostile-calculation
- Area of uncertainty from last hostile sighting

For multimedia, please visit: https://resources.baesystems.com/?c=5828&k=f17ac7a0b2

Ref. 120/2015

Ola Thorén
BAE Systems Hägglunds
Office: +46 660 80506
Mobile: +46 708 335000
#!/usr/bin/env bash
# Publishes snapshot builds from Travis CI; every guard below must pass.

SLUG="6thsolution/EasyMVP"
BRANCH="master"
JDK="oraclejdk8"

set -e

if [ "$TRAVIS_REPO_SLUG" != "$SLUG" ]; then
  echo "Skipping deployment: wrong repository. Expected '$SLUG' but was '$TRAVIS_REPO_SLUG'."
elif [ "$TRAVIS_JDK_VERSION" != "$JDK" ]; then
  echo "Skipping deployment: wrong JDK. Expected '$JDK' but was '$TRAVIS_JDK_VERSION'."
elif [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
  echo "Skipping deployment: was pull request."
elif [ "$TRAVIS_BRANCH" != "$BRANCH" ]; then
  echo "Skipping deployment: wrong branch. Expected '$BRANCH' but was '$TRAVIS_BRANCH'."
else
  echo "Deploying snapshots..."
  ./gradlew publishFromCI
  echo "Snapshots deployed!"
fi
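The guard chain can be sanity-checked locally by stubbing the variables Travis would export. The `should_deploy` helper below is purely illustrative (it is not part of the repository's script); it mirrors the four conditions under which the script reaches the deploy branch:

```shell
# Illustrative helper mirroring the script's guard chain; succeeds only
# when every Travis condition for a snapshot deployment is met.
should_deploy() {
  [ "$TRAVIS_REPO_SLUG" = "6thsolution/EasyMVP" ] &&
  [ "$TRAVIS_JDK_VERSION" = "oraclejdk8" ] &&
  [ "$TRAVIS_PULL_REQUEST" = "false" ] &&
  [ "$TRAVIS_BRANCH" = "master" ]
}

# Stub the variables a Travis push build on master would export.
TRAVIS_REPO_SLUG="6thsolution/EasyMVP"
TRAVIS_JDK_VERSION="oraclejdk8"
TRAVIS_PULL_REQUEST="false"
TRAVIS_BRANCH="master"
if should_deploy; then echo "would deploy"; else echo "would skip"; fi

# A pull-request build must be skipped.
TRAVIS_PULL_REQUEST="42"
if should_deploy; then echo "would deploy"; else echo "would skip"; fi
```

This prints `would deploy` followed by `would skip`, matching the script's behavior for a push build versus a pull-request build.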
# Building a MIDI keytar

Source: https://www.electro-tech-online.com/threads/building-a-midi-keytar.155769/page-10

##### Active Member
I seem to have blown up the PIC.
MJ , In my experience Its pretty difficult to 'Blow up' a PIC .. you sure .. what are its symptoms ?

#### MichaelaJoy

##### Active Member
It gets very hot and the ICD3 doesn't recognize it.

Also, it drags the power supply down.

##### Active Member
Uhoooo .. sounds like you blew it up ... only PIC24 I had get HOT was a FV series , with the Vcap wrong way round ! but you not using that one ..

#### MichaelaJoy

##### Active Member
LOL no. When I plugged it into the socket, I accidentally plugged it in "one-off".

Problem is, I have a 48 pin dip socket and the adapter is 44 pins.

Blew it up in a millisecond.

"The best lessons are the most costly".

##### Active Member
Ohh , I did count the socket in your picture . unusual 48 pin job.. Currently I have 44 pin breakout board mating with female headers , once got PIC round the wrong way , but noticed in time !!! bit late but , make a little blank plug with the 4 pins on ... OK you done that sorry ...

##### Active Member
MJ Seems a solution.. I had to lighten the picture to see it properly on my screen , presumably it presses the chip down on the copper .. how will you connect , sockets wires, I guess male pin headers , and socket wire links would help with swapping stuff around.. I prefer to solder to the TQFP break out boards.. but for a prototype guess be fine , ish ... Have you tried to replace the PIC chip on your board ..

#### MichaelaJoy

##### Active Member
I can't do it myself, as I've never worked with SMD at all.

I spoke to someone over at proto-advantage, and he said it's not worth having the IC changed by them.
Cheaper to just get a new one, which I've done.

It should be here by Monday, barring no delays because of snow.

Here's a better picture.

I'll more than likely get this one.

#### wkrug

##### Active Member
I can't do it myself, as I've never worked with SMD at all.
I would use TQFP 44 to DIP Adapters.
Like this:

To solder in a TQFP 44 Chip is no rocket science.
Look here:
The secret is to have enough flux - as shown in the video.
And the Pads have to have a good surface.
The solder wire should be very thin about 0.2 to 0.5mm will work.
When the pcb is older, make new solder at the pads and suck it up with solder suck wick, until the Pads are flat again.
Dont make the PCB to hot!
Then make a little solder at one pin in one edge.
Make flux at all pads of the PCB.
Positioning the IC, with a pincette, that the pins laying in the middle of the pads all around the chip.
Then heat up the pre soldered pin to fix the IC.
When all fits solder one opposite pin onto the PCB.
Check if all fits.
Then make a little solder at the solder iron and go along all pins at one side - as shown in the video.
Go along in the edge between the IC pins and the pads.
Solder in so all the pins.
When there are connection between the pins get them out with solder suck wick until all is fine.
You can use a magnifying glass to do this.

With a little practice it works like shown in the video.
The solder tip has not to be too small.
I guess 1.5 up to 2mm is a good size.
With too small tip's You dont get enough heat to the pins and the result is an cold solder joint.

Last edited:

##### Active Member
W ... Agree the key words are "With a little practice", and a flux pen but you need some old un-populated boards .. I found after tagging the TQFP corners and soldering one side , to use solder wick across the all the pins ( one side ) then with iron and heat and drag the braid out and off , leaves the 'legs' perfect.. clean up with isopropyl alcohol (IPA 170 ). I succeeded with a 100 pin .4 mm chip this way. (my previous post )
[Ed] the video... person solders the pads first ... No... the chip will be uneven and will require excess solder to bridge any high pins ... I place the chip clamp it and tag the corners. then solder the pins .. remove any excess / bridges with braid.

The next video was much better and no oribl "music"

Last edited:

#### MichaelaJoy

##### Active Member
Once I'm sure that everything is working, I'll try my hand at SMD soldering.

Here's a pic of the rebuilt keytar. As you can see, I tightened everything up.

#### MichaelaJoy

##### Active Member
The new PIC arrived today. After two missed delivery attempts, they got it over to a place where I could pick it up.

So I started to check out the board, and it seems I took out the LDO regulator as well as the pic.
It's a good thing I had sense enough to order a couple of spares.

So I checked each component out (caps and resistors on a RCL bridge) and rebuilt the power supply.

So far, everything is as good as new.

I added a clock pin output. That should serve two purposes:

A) If there's a clock, the CPU is running.

and

B) I want to try state analysis on the logic analyzer. I can program the speed of this clock, so I should be able to set up some nice tests.

Let's see if I can get some nice pics of the keybed in action.

#### MichaelaJoy

##### Active Member
I've updated the schematic to reflect the new hardware changes.

See: Post #143 for the new schematic.

Over the next week or so, I will be taking a stab at SPI programming. The display subsystem is built and awaiting testing.

A good friend is starting on the actual case design this weekend, so I've been busy building the 'production' hardware.

When everything is working, I'll post some pics.

#### wkrug

##### Active Member
I've take a look to Your new schematic. That looks very fine.
I've calculated around with my own project and I figured out, that 1K EEPROM of my controller is too small for all the parameters that have to be stored.
In my case about 130 parameter are to store that allow only 7 setup spaces.
But I want to have about 100.
My Idea is to use an external IIC EEPROM to increase the space.
I'll suggest You to calculate a little bit too, to figure out if a external EEPROM is needed in Your case.

#### MichaelaJoy

##### Active Member
wkrug: Thanks.

That design has the 22pf caps on the signal lines coming from the keybed. (P1-P7 and S1-S7)
I don't think they'd be needed on a PCB because of the stray capacitance between the signal lines and the ground plane.

The signals are incredibly clean. You can actually see the delay between the primary and secondary switch lines..

I think understand what you mean. You're considering what would be stored in a preset,
perhaps making room for 128 presets?

Each of my presets look like this:

I have two layers. Each layer has the following fields:

0: Flags: (1 byte)
Bit #0: Use Fixed Velocity (0: Variable, 1: Fixed)
Bit #1: Split / Layer (0: Split, 1: Layer)

(I have room for 6 more bit parameters...)

1: MIDI Channel (0..15)
2: Transpose (By note) (-11..11)
3: Octave (-8..8)
4: Keyboard Split Start (Byte)
5: Keyboard Split End (Byte)
6: Velocity fixed value (1..127)
7: Pitch Bend scale (Upper 4 bits: Left end value, Lower 4 bits: Right end value)

So, that's 8 bytes per split or 16 bytes for the entire Preset.

For 128 presets, I would need 2048 bytes. (16 * 128)

So, I'll probably go for 16 presets (256 bytes), which should be more than I need.

#### wkrug

##### Active Member
So, I'll probably go for 16 presets (256 bytes), which should be more than I need.
Ok, then the internal EEPROM is huge enough.
I've planned in my Master Keyboard to insert 10Pots and 16Switches that all should be free programmable.
The Storage Name I would store too with 10 characters, and so I'll get about 130 Byte per one setting.

With that thing I would steer an Roland JV 880 complete per MiDi Remote.

##### Active Member
MJ . I expect your well versed with this PIC's data sheet The EEPROM / NV memory have a few 'hurdles' to code, there is an unlock sequence. also , interrupt issues and lengthy cycle time to read / write.. it is 256 words .. (It is actually top end of flash memory) , I did try and use it some time ago . but found an external EEprom easier !

#### MichaelaJoy

##### Active Member
I've been having a difficult time with SPI and getting this display to work.

So, I wanted to run some ideas here.
I want to run SPI2 as a SPI master with no interrupts at all.

Shutting off the SPI2 Interrupt should be easy. Just write a 0 to IEC2

To get SPI working, the directions of the ports need to be set. In my case, SPI2 is on TRISC.

RC3 36 SDI2
RC4 37 SDO2
RC5 38 SCK2

So, I'm thinking that RC3 has to be an input, and RC4 + RC5 have to be outputs.

Here's my port init code
Code:
 mov.w #0x000f,W0
mov W0,TRISC
nop

clr W0
mov W0,ODCC
nop

mov #7,W0
mov W0,ANSC
nop

mov #0x0100,w0
mov w0,LATC
nop

_SPI2Initialize:
mov.w #0x0320,w0
mov.w w0,SPI2CON1
nop

clr.w w0
mov.w w0,SPI2CON2
return

Here's the display I have

https://www.digikey.com/catalog/en/partgroup/lcd-module-4x20/48932

I have the EA DIP203-4NLW. I soldered a jumper across the SPI pads (in the back of the display)

My backlight resistor is 680 ohms. I use a 5K trimpot to set contrast.

The logic analyzer shows me that I have SPI data on the connector.

I still haven't been able to get anything out of the display.

Will keep trying. Any ideas are definitely welcome.

##### Active Member
MJ Not sure if this correct module . did you solder the jumper .
is in pdf note ...
SERIAL MODE Factory setting for interface is parallel with 4 bit or 8 bit data bus. Alternatively the module can be used with serial data stream. For that, solder link SPI has to be closed. Specification for serial operation mode is described in user manual for
is this the serial data sheet ?
Also 680 ohms seems 6 x too high , is their a resistor on the display to limit back lite ? some do some dont ...
Not done SPI but don't you need a 'select / enable 'on the slave.

#### MichaelaJoy

##### Active Member
That's the one. I soldered the jumper for SPI.

Yes. You need the resistor or you can fry the backlight. I chose 680 ohms, but do you think it needs to be less?
Maybe changing it to 470 ohms? The backlight looks pretty bright. You can see it leaking out of the bottom of the housing around
the LCD.

Yes, it does. I soldered the CS to ground, which I believe enables it all the time.
That's something else I have to check. Does CS need to be grounded to enable the device? or does it need to be at Vdd?
Our method can be successfully applied in discovering of populations districts in biological systems modeled by a random walk procedure (see Examples \ref{biloEx1}, \ref{biloEx2}). The method is easy to implement and has the same numerical complexity as the k-means version adapted to non Euclidean spaces \cite{batagelj1988generalized}. Moreover, our algorithm automatically finds the resultant number of groups by reducing unnecessary clusters on-line. Voronoi diagrams for {\sWards}, k-means and their kernelized versions for a mouse-like set with non Euclidean distance function are presented in Figure~\ref{fig:mouse}. \begin{figure}[t] \centering \begin{tikzpicture} \node(title) at (4,4.7){Data set with dissimilarity measure}; \node(subtitle) at (2.1,3.9){Euclidean space}; \draw (0,0) rectangle (8,5); \draw (0.2,0.2) rectangle (4,4.3); \node(km) at (2.1,2.9){k-means}; \node(GMM) at (2.1,1.3){GMM, CEC}; \node(GMM) at (2.1,0.9){\footnotesize (spherical CEC)}; \draw (3.6,2.8) arc (0:360:1.5cm and 0.7cm); \draw (3.6,1.1) arc (0:360:1.5cm and 0.7cm); \node(wards) at (6.1,3.3){k-means Wards }; \node(swards) at (6.1,1.1){spherical Wards}; \draw (7.6,3.2) arc (0:360:1.5cm and 0.7cm); \draw (7.6,1.1) arc (0:360:1.5cm and 0.7cm); \draw[ -triangle 90, line width=0.2mm, postaction={draw, line width=0.05cm, shorten >=0.1cm, -} ] (3.6,2.8) -- (4.6,3.1); \draw[ -triangle 90, line width=0.2mm, postaction={draw, line width=0.05cm, shorten >=0.1cm, -} ] (3.6,1.1) -- (4.6,1.1); \draw[ -triangle 90, line width=0.2mm, postaction={draw, line width=0.05cm, shorten >=0.1cm, -} ] (6.1,2.5) -- (6.1,1.8); \end{tikzpicture} \caption{{\bf Spherical Wards clustering.} The relationship of our method with GMM, k-means and Wards clustering.} \label{fig:diag} \end{figure} \begin{figure*}[t] \centering \subfigure[Wards k-means clustering with $k=4$.]{\includegraphics[width=1.65in]{done/dsaa/bar/kmeans0}} \quad \subfigure[{\sWards} clustering started with 10 initial 
clusters.]{\includegraphics[width=1.65in]{done/dsaa/bar/spher0}} \quad \subfigure[Wards k-means with RBF dissimilarity function and $k=4$.]{\includegraphics[width=1.65in]{done/dsaa/bar/kkmeans0}} \quad \subfigure[{\sWards} clustering with RBF dissimilarity function started with 10 initial clusters.]{\includegraphics[width=1.65in]{done/dsaa/bar/kspher0}} \quad \caption{{\bf Voronoi diagrams.} The Voronoi diagrams of introduced {\sWards} compared with Wards k-means on mouse-like set with barriers with two types of similarity measures: Euclidean and RBF similarity. The barrier changes the distance between elements. The distance between elements located on the opposite sides of barrier is calculated as a length of the shortest path which does not cross the barrier. Observe that despite the barrier {\sWards} method discovered ``mouse ears'' as a spherical clusters while ``mouse head'' was divided into two smaller groups. The Wards k-means results do not have so intuitive explanation. Kernelized versions of both algorithms gave the satisfactory effects, but the main difficulty lies in finding the appropriate values of RBF parameter. An important thing is that the introduced {\sWards} technique produces comparable partition without the need of parameters tuning.} \label{fig:mouse} \end{figure*} Proposed {\sWards} method is a combination of spherical variant of Cross-Entropy Clustering ({\CEC}) \cite{tabor2013} with the generalized Wards approach \cite{batagelj1988generalized, spath1975cluster}. Generally, spherical {\CEC} describes clusters by optimally fitted spherical Gaussian distributions while Wards method allows for its adaptation to non Euclidean case. Spherical {\CEC} performs a clustering by optimizing a cross-entropy criterion function \eqref{spher1}. Its form is very flexible since it is based on the within clusters sums of squares, the cardinalities of clusters and the dimension of space. 
\IEEEpubidadjcol Applied Wards approach allows for a generalization of the notion of within cluster sum of squares for the case of any dissimilarity measure \cite{batagelj1988generalized, spath1975cluster}. The key lies in the observation that this quantity can be rewritten in Euclidean space without the use of a mean $\m_Y$ of a cluster $Y$ in the form: $$ \sum_{y \in Y}d^2(y,\m_Y) =\frac{1}{2|Y|}\sum_{y, z \in Y}d^2(y,z). $$ On the other hand, note that a dimension in arbitrary space does not have to be defined. Therefore, to adapt spherical CEC criterion function to general case we recommend to estimate its value from data with use of Maximum Likelihood Estimator of intrinsic dimension \cite{maxdim, comments}. To graphically represent and interpret the results of clustering the notion of Voronoi diagram is widely applied. Its construction requires the answer for the question: to which cluster we should associate an arbitrary unclustered point? In the case of classical k-means the answer is simple: we assign the point to the cluster with the nearest center. In the Wards method we replace it by a generalization of distance of point $x$ from the center of cluster $Y$ given by \cite{batagelj1988generalized} \begin{equation} \label{voronoi} d^2(x;Y):=\frac{1}{|Y|}\sum_{y \in Y}d^2(x,y)-\frac{1}{|Y|}\ss(Y). \end{equation} In our work we calculate the analogue of above formula \eqref{voronoi} for the case of {\sWards} criterion function \eqref{sWards11} (see \eqref{vorInt} for precise formula and Figure \ref{fig:mouse} for sample effects). The practical properties of proposed method are illustrated and examined on synthetic data sets and examples retrieved form the UCI repository \cite{asuncion2007uci}. We compare {\sWards} with similar methods which can be applied for non Euclidean data as k-means, Spectral Clustering and their kernelized versions. Our tests demonstrate that introduced method can be applied for populations detection in simple biological systems. 
The paper is organized as follows. Next section gives a brief description of related clustering methods. In section 3 we recall Wards approach to k-means and present its application for spherical {\CEC} criterion function. Section 4 demonstrates the generalization of Voronoi diagrams to the case of arbitrary criterion functions in non Euclidean data paying particular attention on {\sWards} method. The results of experiments and potential applications are given in section 5 while section 6 contains the conclusion. \section{Related works} The hierarchical clustering is probably one of the most popular methods to partition data based on any kind of (dis)similarity measure \cite{johnson1967hierarchical}. The well-known k-means algorithm \cite{hartigan1979algorithm} can also be adapted to non Euclidean data by defining a medoid \cite{park2009simple} which plays a role of a generalized notion of mean or by using the Wards method \cite{batagelj1988generalized, spath1975cluster} which reformulates the within cluster sum of squares without the notion of the cluster mean. Despite the wide use of these methods, they are sometimes unable to discover groups with complex structures and different sizes. A lot of modifications were also considered to describe clusters with arbitrary shapes \cite{dhillon2004kernel, wagstaff2001constrained}. Spectral Clustering uses eigenvectors of similarity matrix to divide elements into groups \cite{zhou2007spectral}. Another issue of clustering non Euclidean data sets is the appropriate selection of dissimilarity measure. Examples showed that interesting effects can be obtained by applying Gaussian radial basis function (RBF) \cite{chen1993clustering}. The difficulty is that there is no unified methodology how to choose the radius of this function for particular situation \cite{orr1995regularization, rippa1999algorithm}. In order to perform a distribution-based clustering a GMM is widely used in Euclidean space \cite{mclachlan2007algorithm}. 
Nevertheless, it cannot be directly generalized to arbitrary data sets with dissimilarity measures. On the other hand, the family of density based clustering methods such as DBSCAN \cite{kriegel2011density} can be applied to non Euclidean data. Although the method is capable of discovering clusters of arbitrary shapes and does not require the specification of the number of groups, it does not adapt well to clusters with large differences in densities. The proposed {\sWards} method joins the simplicity and flexibility of k-means with the effects of GMM. It can be applied in non Euclidean spaces and is based on Gaussian-like probability distributions. \section{Clustering method} The proposed {\sWards} clustering is a combination of spherical Cross-Entropy Clustering ({\sCEC}) \cite{tabor2013} with the generalized Wards approach \cite{batagelj1988generalized, spath1975cluster}. In this section we first introduce basic notation and recall the Wards version of k-means. Then, we show how {\sCEC} can be generalized to non Euclidean data sets via the Wards method. \subsection{Wards method} Generally, the k-means method aims at producing a splitting of a data set which optimizes a squared error criterion function. For a group $Y \subset \R^N$ the within cluster sum of squares is defined as: $$ \ss(Y) = \sum_{y \in Y}\|y-\m_{Y}\|^2, $$ where $\m_Y$ is the mean of $Y$. The k-means looks for a partition of $X \subset \R^N$ into $k$ pairwise disjoint sets $Y_1,\ldots,Y_k$ such that the function $$ \sum_{j=1}^k \ss(Y_j) $$ is minimal. Note that the above formulas cannot be used directly for non vector data since the mean is not well-defined for general data sets. There are several alternatives \cite{jain1999,steinley2006k,gan2007data,jain2010,xu2009clustering} which allow one to partially overcome this difficulty, such as k-medoids \cite{kaufman1987clustering} or k-clustering \cite{indyk1999sublinear}.
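For reference, the medoid alternative mentioned above replaces the mean by the data element minimizing the total dissimilarity to the rest of the cluster. A minimal sketch (Python with NumPy; the function name is ours):

```python
import numpy as np

def medoid(D):
    """Return the index of the medoid given a full dissimilarity
    matrix D of a cluster: the element minimizing the summed
    dissimilarity to all other elements."""
    return int(np.argmin(D.sum(axis=1)))

# Four points on a line; a middle point is the medoid
# (np.argmin breaks the tie in favor of the lower index).
pts = np.array([0.0, 1.0, 2.0, 10.0])
D = np.abs(pts[:, None] - pts[None, :])
assert medoid(D) == 1
```

Unlike the mean, the medoid is always an element of the data set, so it is well-defined for any dissimilarity measure.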
The technique related to k-clustering and k-means is the generalized Wards method \cite{batagelj1988generalized, spath1975cluster}, which plays the basic role in our investigations. The key idea is the observation that the within cluster sum of squares in Euclidean space can be formulated equivalently without the notion of the center of a cluster: \begin{proposition} \label{th:1} \cite{spath1975cluster} If $Y \subset \R^N$, then $$ \begin{array}{c} \sum \limits_{y \in Y} \! \|y-\m_Y\|^2\, =\frac{1}{2|Y|} \sum \limits_{y \in Y} \sum \limits_{z \in Y} \! \|y-z\|^2\, , \end{array} $$ where $|Y|$ is the cardinality of $Y$. \end{proposition} This allows us to reasonably generalize the within cluster sum of squares to a general non Euclidean data set. For this purpose let $X$ be an arbitrary data set and let $d: X \times X \to [0,+\infty)$ be a symmetric dissimilarity measure on $X$, i.e., \begin{itemize} \item $d(y,y) = 0$, \item $d(y,z) = d(z,y)$, \end{itemize} for $y,z \in X$. Given two subsets $Y, Z$ of $X$ we define a function \cite{indyk1999sublinear} connected with the average linkage function (also called the average neighbor function) \cite{sokal1958, gan2007data} as: $$ \begin{array}{c} \D{Y,Z}:=\sum \limits_{y \in Y} \sum \limits_{z \in Z} \! d^2(y,z) \, . \end{array} $$ As the generalized within cluster sum of squares of $Y \subset X$ we put \cite{batagelj1988generalized}: \begin{equation} \label{defWard} \begin{array}{c} \ss(Y):=\frac{1}{2|Y|}\D{Y,Y}=\frac{1}{2|Y|} \sum \limits_{y \in Y} \sum \limits_{z \in Y} \! d^2(y,z)\, . \end{array} \end{equation} Then, the goal of the Wards method is formulated as follows: \medskip {\bf Wards Optimization Problem \cite{batagelj1988generalized}.} Let $X$ be a data set with a dissimilarity measure $d$ and let $k \in \N$.
Find a splitting of $X$ into $k$ pairwise disjoint sets $Y_1,\ldots,Y_k$ which minimizes the generalized squared error function: \begin{equation} \label{wardsA} \EW(Y_1,\ldots,Y_k):=\sum \limits_{i=1}^k \ss(Y_i), \end{equation} where $\ss(\cdot)$ is defined by \eqref{defWard}. \medskip \subsection{Spherical Wards criterion function} Cross-Entropy Clustering ({\CEC}) is a kind of distribution-based clustering which divides a Euclidean data set into groups such that each group is described by an optimally fitted Gaussian probability distribution \cite{tabor2013}. The effects of the clustering are similar to those obtained by GMM, but the optimized criterion function is different. Its value determines the statistical code length of memorizing an arbitrary element of a data set in the case when each cluster uses its own coding algorithm. In particular, introducing one more cluster (coding algorithm) requires the additional cost of its identification (an increase of the entropy). In consequence, maintaining too many clusters is not optimal, which allows for the automatic reduction of unnecessary groups. Another advantage of {\CEC} is that the clustering is performed in a time comparable to the computationally efficient k-means method. For more details the reader is referred to \cite{tabor2013, smieja2013image, tabor2013detection}. Spherical Cross-Entropy Clustering ({\sCEC}) is a variant of {\CEC} which takes into account the family of spherical Gaussian distributions. Since for every group the optimal spherical Gaussian distribution is matched, the data set is partitioned into spherically-shaped clusters.
For a splitting $Y_1,\ldots,Y_k$ of $X$ the associated criterion function is defined by \cite{tabor2013} \begin{equation} \label{spher1} \begin{array}{l} \frac{N}{2} \ln(\frac{2 \pi e}{N}) + \sum\limits_{i=1}^k \frac{|Y_i|}{|X|} \cdot \left[ -\ln \frac{|Y_i|}{|X|} + \frac{N}{2} \ln\left( \frac{|X|}{|Y_i|} \tr(\Sigma_{Y_i}) \right) \right], \end{array} \end{equation} where $\Sigma_Y$ is the covariance matrix of a group $Y$ and $\tr(\Sigma_Y)$ is the trace of $\Sigma_Y$. Let us first observe that the notion of covariance matrix can be easily removed from the expression \eqref{spher1}. \begin{proposition} \label{prop:tr} If $Y \subset \R^N$ then \cite{tabor2013}: $$ \tr(\Sigma_Y) = \ss(Y). $$ \end{proposition} In consequence, the application of the Wards approach \eqref{defWard} facilitates its interpretation in the non Euclidean case for a fixed $N > 0$. For a full explanation of the formula \eqref{spher1} in the context of a non Euclidean space, the value of the dimension $N$ has to be specified. As the most reasonable way to set this value we recommend using an estimate of the dimension of $X$. In the present study we apply the Maximum Likelihood Estimation (MLE) of the intrinsic dimension of $X$ proposed in \cite{maxdim} and modified in \cite{comments}. More precisely, given $X = \{x_1,\ldots,x_n\}$ the maximum likelihood estimator of the dimension $N$ of $X$ calculated at $x \in X$ equals \cite{maxdim}: $$ \hat{N}_k(x) = \left[ \frac{1}{k-1} \sum_{j=1}^{k-1} \log \frac{d(x, x_k)}{d(x, x_j)} \right]^{-1}, $$ where $x_1,\ldots,x_k$ denote here the nearest neighbors of $x$ ordered by their distance from $x$, for $k \in \{1,\ldots,n\}$. Since the above value depends on the choice of $k$ and $x$, one should average the results over $x \in X$ and $\tilde{K} \subset \{1,\ldots,n\}$ to obtain the final estimator of $N$ \cite{comments}. Nevertheless, one can tune this value in the learning process as well as set it to any positive number.
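The estimator of \cite{maxdim} can be sketched as follows (Python with NumPy; a simplified illustration that fixes a single $k$ instead of averaging over a set $\tilde{K}$, and takes the reciprocal of the averaged log-ratios of neighbor distances, as in the Levina--Bickel formulation):

```python
import numpy as np

def mle_dimension(X, k=10):
    """Maximum likelihood estimate of the intrinsic dimension,
    averaged over all points. X is an (n, d) array; k is the
    number of nearest neighbours used per point."""
    # Pairwise Euclidean distances; any dissimilarity matrix would do.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D.sort(axis=1)                  # row i: distances from x_i, ascending
    T = D[:, 1:k + 1]               # skip the zero self-distance
    # Per-point estimate: reciprocal of the mean log distance ratio.
    logs = np.log(T[:, -1][:, None] / T[:, :-1])
    n_hat = (k - 1) / logs.sum(axis=1)
    return n_hat.mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))       # truly 2-dimensional data
est = mle_dimension(X, k=15)
assert 1.2 < est < 3.0              # recovers a dimension close to 2
```

The recovered value is then plugged in as the parameter $N$ of the criterion function.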
In the experimental section we show that for high values of $N$ more clusters are created, while for low values of $N$ the method prefers to reduce the number of groups. From now on, $N$ will be treated as a free parameter selected by the user, but we keep in mind that the easiest way to tune this value is to use the MLE procedure described above. All in all, the generalized Wards approach and the appropriate choice of the dimension parameter $N$ allow for the understanding of the spherical cross-entropy criterion function on an arbitrary data set with a dissimilarity measure. In consequence, an informal notion of a spherical Gaussian probability distribution based on any dissimilarity measure can be considered. We conclude this subsection with a formulation of the spherical Wards ({\sWards}) optimization problem: \medskip {\bf Spherical Wards Optimization Problem. } Let $X$ be a data set with a dissimilarity measure $d$, let $n \in \N$ be an initial number of clusters and let $N > 0$ be a free parameter. Find $k \leq n$ and a partition $Y_1,\ldots,Y_k$ of $X$ which minimizes the spherical Wards criterion function \begin{equation} \label{sWards11} \begin{array}{l} \ESW(Y_1,\ldots,Y_k; N):=\\ \frac{N}{2} \ln(\frac{2 \pi e}{N}) + \sum\limits_{i=1}^k \frac{|Y_i|}{|X|} \cdot \left[ \frac{N}{2} \ln(\ss(Y_i)) - \frac{N+2}{2} \ln\left(\frac{|Y_i|}{|X|}\right) \right], \end{array} \end{equation} where $\ss(\cdot)$ is defined by \eqref{defWard}. \medskip \subsection{Clustering algorithm} One can show that a natural modification of the Hartigan algorithm \cite{batagelj1988generalized, hartigan1979algorithm, tabor2013} can be used to minimize the {\sWards} criterion function \eqref{sWards11}. We will now discuss its technical aspects. The procedure can be divided into two parts: initialization and iteration. In the initialization phase $n \in \N$ groups are created randomly.
During the iteration the algorithm reassigns elements between clusters in order to minimize the {\sWards} criterion function \eqref{sWards11}. More precisely, in the iteration part we repeatedly go over all elements of $X$ applying the following steps: \begin{enumerate} \item Reassign $x \in X$ to the cluster for which the decrease of the energy \eqref{sWards11} is maximal, \item If the probability of some cluster is less than a fixed number $\e>0$, then remove this cluster and assign its elements to the groups for which the increase of the energy \eqref{sWards11} is minimal, \end{enumerate} until no group membership has been changed. The number $\e$ was introduced to speed up the reduction of redundant clusters. In our experiments we always use the value $\e=1\%$. Thus, a group is removed if it contains less than $1\%$ of all elements of $X$. Clearly, the procedure is not deterministic and leads to a local minimum of \eqref{sWards11} \cite{jain1999}. Therefore, to provide satisfactory results the algorithm should be run several times -- the final result is the one which gives the minimal value of the {\sWards} criterion function. The above algorithm can be seen as an online version of a standard partitional clustering procedure which is able to reduce unnecessary groups. Every time an element is processed the cluster parameters are recalculated. This implies that to efficiently apply this procedure we have to recompute $$ \ss(Y \cup \{x\}) \mbox{ and }\ss(Y \setminus \{x\}). $$ For this purpose the following formulas are useful: \begin{proposition} \label{HartCor} \cite{spath1975cluster} Let $Y \subset X$ and $x \in X$. a) If $x \not\in Y$, then $$ \ss(Y \cup \{x\}) = \frac{|Y|}{|Y|+1}\ss(Y) + \frac{1}{|Y|+1}\D{\{x\}, Y}. $$ b) If $x \in Y$, then $$ \ss(Y \setminus \{x\})=\frac{|Y|}{|Y|-1}\ss(Y) - \frac{1}{|Y|-1}\D{\{x\}, Y}.
$$ \end{proposition} Given $k$ clusters, one iteration of the standard Hartigan procedure requires about $k \cdot N \cdot |X|$ operations (for data sets contained in $\R^N$). When applying the Wards approach this complexity changes to $k \cdot |X|^2$ operations. Since the mean of a cluster is not defined in the general situation, one has to pay the additional cost of recalculating the within cluster sum of squares during every reassignment. However, we do not need to recalculate the distance between the reassigned element and the mean of a cluster, which decreases the computational cost $N$ times. \section{Generalized Voronoi diagram} A natural problem arises of how to graphically present the clustering results. Clearly, we can mark the elements of each cluster with a different label. However, in practice it is usually clearer to show the division of the whole space. In this section we show that we can naturally obtain an equivalent of the Voronoi diagram for any criterion function in a non Euclidean space. In particular, we apply these results to define the Voronoi diagram for {\sWards}. \subsection{Classical diagram} Let us recall that in the case of the classical version of the Voronoi diagram (the $k$-means method) a point $x$ is associated with the cluster whose center is the closest to $x$. More precisely, it is classified to the cluster $Y_i$ which minimizes $d(x; \m_i)$, where $\m_i$ is the mean of $Y_i$. We would like to mention that one can consider an alternative to the Voronoi diagrams as described in \cite{telgarsky2010hartigan}. It provides a partition of the data but does not induce a natural partition of the space (see \cite{telgarsky2010hartigan} for more details). To generalize the notion of the Voronoi diagram to a non Euclidean space (Wards k-means), we need to be able to compute the distance of a point from the center of the cluster (without using the center in the computations).
\begin{proposition} \label{th:2} \cite{spath1975cluster} Let $x \in \R^N$ be fixed and let $Y \subset \R^N$ be a subset of $\R^N$ with mean $\m_{Y}$. Then $$ \|x-\m_Y\|^2=\frac{1}{|Y|}\sum\limits_{y \in Y} \|x-y\|^2 - \frac{1}{2|Y|^2} \sum \limits_{y \in Y} \sum \limits_{z \in Y} \! \! \|y-z\|^2. $$ \end{proposition} The above allows the formulation of an analogue of the square of the ``classical'' distance of a point $x$ from the center of $Y$. Let $Y$ be a subset of a data space $X$ with a dissimilarity measure $d$ and let $x \in X$ be fixed. We define the mean square distance of $x$ from $Y$ by \begin{equation}\label{voro} d^2(x;Y):=\frac{1}{|Y|}(\D{\{x\},Y}-\ss(Y)). \end{equation} Applying the above formula one can draw the equivalent of the Voronoi diagram for Wards k-means, i.e. an element $x \in X$ is classified to the cluster which minimizes \eqref{voro}. \subsection{Diagram for arbitrary criterion function} We are now going to present a reasoning which allows one to create a kind of Voronoi diagram for an arbitrary criterion function. This will be useful for constructing a division of the space in the case of the {\sWards} method. The obtained results are consistent with the classical Voronoi diagram in the case of Wards $k$-means presented in the previous section. Let $X$ be a space with a dissimilarity measure $d$ and let $Y \subset X$ represent our data. We extend $X$ by introducing a weight function $$ w: X \ni x \to \left\{ \begin{array}{ll} w(x) \in [0,+\infty) &, x \in Y, \\ 0 &, x \in X \setminus Y, \end{array} \right. $$ which assigns a weight to every element of $X$. Then we consider an extended data set $$ Y^w = \{(y,w(y)): y \in Y\}. $$ We define the operations $\D{\cdot,\cdot}$ and $\ss(\cdot)$ adapted for $Y^w$.
Given $Z, Y_1, Y_2 \subset Y$ we put: \begin{enumerate} \item $|Z^w| := \sum\limits_{z \in Z} w(z)$, \item $\D{Y_1^w,Y_2^w} := \sum\limits_{y_1 \in Y_1} \sum\limits_{y_2 \in Y_2} d^2(y_1,y_2) w(y_1)w(y_2)$, \item $\ss(Z^w) := \frac{1}{2|Z^w|} \D{Z^w,Z^w}$. \end{enumerate} Then the analogue of the k-means criterion function equals: \begin{equation} \label{wardsW} \EW(Y^w_1,\ldots,Y^w_k) = \sum_{i=1}^k \ss(Y^w_i), \end{equation} where $Y_1,\ldots,Y_k$ is a splitting of $Y$. If $w_{| Y} \equiv 1$ then \eqref{wardsW} coincides with \eqref{wardsA}. In order to explain our technique, assume that $Y_1,\ldots,Y_k$ is a splitting of a data set $Y$ and $E$ is an arbitrary criterion function. For a fixed point $x \in X$ we consider a mapping $$ \begin{array}{l} E^i_{x, [Y^w_1,\ldots,Y^w_k]}:h \to \\[1.2ex] E(Y^w_1,\ldots,Y^w_{i-1},(Y_i \cup \{x\})^{w+h\delta_x},Y^w_{i+1},\ldots,Y^w_k), \end{array} $$ where $h \geq 0$ and $i \in \{1,\ldots,k\}$. It determines the value of the criterion function $E$ when $x \in X$ is associated with the $i$-th cluster with a weight increased by $h$. We define the functions (wherever they exist) $$ \partial_i E(x,[Y^w_1,\ldots,Y^w_k]):=(E^i_{x,[Y^w_1,\ldots,Y^w_k]})'(0), $$ for $i \in \{1,\ldots,k\}$. Observe that $\partial_iE$ coincides with the infinitesimal change in the energy when we add $x$ to the $i$-th cluster. Thus, in the Voronoi diagram a point $x \in X$ should be assigned to the cluster which minimizes $\partial_iE(x,[Y^w_1,\ldots,Y^w_k])$. Let us show that the above reasoning is consistent with the classical result \eqref{voro} for the Wards $k$-means criterion function \eqref{wardsW}: \begin{theorem} \label{latweTw} Let $Y$ be a subset of a space $X$ with a dissimilarity measure $d$ and let $w(y) = 1$, for all $y \in Y$, be a weight function. If $E$ denotes the squared error function \eqref{wardsW} and $Y_1,\ldots,Y_k$ is a fixed splitting of $Y$ then $$ \partial_iE(x,[Y^w_1,\ldots,Y^w_k])=d^2(x;Y_i), $$ for $x \in X$ and $i \in \{1,\ldots,k\}$.
\end{theorem} \begin{proof} Let $h > 0$. By Proposition \ref{HartCor}, we have $$ \begin{array}{l} \frac{1}{h}[E(Y^w_1,\ldots,Y^w_{i-1},Y^w_i\cup \{(x,h)\},Y^w_{i+1},\ldots,Y^w_k) \\[1.2ex] \,\,\,\,\,\,\, -E(Y^w_1,\ldots,Y^w_k)] \\[1.2ex] =\frac{1}{h}\left[\ss((Y_i\cup\{x\})^{w + h\delta_x}) -\ss(Y^w_i)\right] \\[1.2ex] =\frac{1}{h}\left[\frac{|Y_i^w| \ss(Y_i^w) + \D{\{(x,h)\}, Y_i^w}}{|Y_i^w|+h} - \ss(Y_i^w)\right] \\[1.2ex] =\frac{1}{h}\frac{|Y_i^w| \ss(Y_i^w) + h \D{\{(x,1)\}, Y_i^w} - (|Y_i^w| + h)\ss(Y_i^w)}{|Y_i^w|+h} \\[1.2ex] =\frac{\D{(x,1),Y_i^w}-\ss(Y_i^w)}{|Y_i^w|+h}. \end{array} $$ Since $w_{|Y} \equiv 1$ then $$ \begin{array}{l} \frac{\D{(x,1),Y_i^w}-\ss(Y_i^w)}{|Y_i^w|+h} =\frac{\D{x,Y_i}-\ss(Y_i)}{|Y_i^w|+h}\to \\[1.2ex] \frac{1}{|Y_i|} (\D{x,Y_i}-\ss(Y_i)) \text{ , as } h \to 0, \end{array} $$ which yields the assertion of the theorem. \end{proof} \subsection{Voronoi diagram for {\sWards}} The following theorem presents how to create the Voronoi diagram for the {\sWards} criterion function: \begin{theorem} Let $Y$ be a subset of a space $X$ with a dissimilarity measure $d$ and let $w(y) = 1$, for all $y \in Y$, be a weight function. If $E$ denotes the {\sWards} criterion function for a data set with weights and $Y_1,\ldots,Y_k$ is a fixed splitting of $Y$ then $$ \begin{array}{l} \partial_iE(x,[Y^w_1,\ldots,Y^w_k])\\[1.2ex] =\frac{1}{|X|}\left[\frac{N}{2}\left(\ln(\ss(Y_i))+|Y_i|\frac{d^2(x;Y_i)}{\ss(Y_i)}\right)- \frac{N+2}{2} (\ln|Y_i|+1)\right] . \end{array} $$ \end{theorem} \begin{proof} Roughly speaking, Theorem \ref{latweTw} says that $\partial_i \ss(Y^w_i)=d^2(x;Y_i)$. Moreover, $\partial_i|Y^w_i|=1$. Applying the operator $\partial_i$ and the above to \eqref{sWards11} we easily get the assertion of the theorem.
\end{proof} \begin{figure}[t] \centering \includegraphics[width=2.0in]{done/vor-ex-do} \caption{{\bf Construction of Voronoi diagram.} Given a partition of $Y \subset X$, the procedure iterates over all data space elements $x \in X$ (including also elements which did not participate in the clustering), calculates the values of the assignment function $E_{Y_i}(x)$ for each cluster $Y_i$ and attaches $x$ to the group $Y_j$ which minimizes $E_{Y_j}(x)$.} \label{fig:vorr} \end{figure} Consequently, given a partition $Y_1,\ldots,Y_k$ of $Y$, to associate a point $x \in X$ with a cluster it is sufficient to find $i \in \{1,\ldots,k\}$ which minimizes \begin{equation}\label{vorInt} E_{Y_i}(x) = \ln(\ss(Y_i))+|Y_i|\frac{d^2(x;Y_i)}{\ss(Y_i)}-(1+\frac{2}{N}) \ln|Y_i|. \end{equation} If $X$ is infinite, then one can apply its quantization into a finite number of regions before applying the Voronoi diagram. The reader is referred to Figure \ref{fig:vorr} for a more detailed explanation of the above described procedure. \section{Experiments} In this section we discuss some fundamental properties as well as the potential applications of the proposed clustering method and present a short evaluation study. The implementation of {\sWards} is available from http://www.ii.uj.edu.pl/{\textasciitilde}smieja/sWards-app.zip\footnote{Contact the first author for the explanations.}. \subsection{Synthetic data sets} In order to show the capabilities of {\sWards} we examined its robustness to the change of scale and its sensitivity to unbalanced data. We compared the clustering results with the ones obtained with the use of related methods which can be applied in non Euclidean spaces: Wards k-means and Spectral Clustering (the kernlab R package was used for the implementation of these algorithms \cite{karatzoglou2004kernlab}).
Since {\sWards} automatically detects the resultant number of groups, we ran it with 10 initial clusters while the other methods used the number of groups returned by {\sWards}\footnote{Such a technique for the detection of the clusters number was chosen in order to provide the correspondence between the clustering results for all methods.}. The value of the parameter $N$ (the dimension of the space) for {\sWards} was set automatically with the use of the MLE method \cite{maxdim, comments}. To provide more stable results, each algorithm was run 10 times and the result with the lowest value of the criterion function was chosen. \begin{example}\label{scaleInvariance}{\bf Scale invariance} In the first experiment we examined the invariance of the algorithms under the change of scale. A data set was generated from the mixture of two spherical Gaussian distributions, $$ \frac{1}{2} \G_1(r) + \frac{1}{2} \G_2(1-r) $$ with different covariance matrices $$ C_1 = \begin{pmatrix} r & 0\\ 0 & r \end{pmatrix} \text{ , } C_2 = \begin{pmatrix} 1-r & 0\\ 0 & 1-r \end{pmatrix} \text{, for } r \in (0,1), $$ centered at $$ \m_1 = (-1,0) \text{ , } \m_2 =(1,0). $$ The parameter $r$ controls the width of the Gaussians. Figure \ref{fig:proportion2} presents the ratios of the resulting cluster sizes. The {\sWards} method is robust to the change of scale -- the clusters remained almost equally-sized for all $r \in (0,1)$. The clustering result was the most dependent on the widths of the Gaussians in the case of k-means. \end{example} \begin{figure}[t] \centering \includegraphics[trim=0cm 0.6cm 0cm 2cm, clip=true, width=3.0in]{done/omega/omega2} \caption{{\bf Scale invariance.} The ratios of cluster sizes for a data set generated from the mixture of two spherical Gaussian distributions $\frac{1}{2}\G_1(r)+\frac{1}{2}\G_2(1-r)$ when changing the width $r$ of the Gaussians.
The optimal curve should be a constant function, $y=\frac{1}{2}$.} \label{fig:proportion2} \end{figure} \begin{figure}[t] \centering \includegraphics[trim=0cm 0.6cm 0cm 2cm, clip=true, width=3.0in]{done/omega/omega} \caption{{\bf Sensitivity to unbalanced data.} The ratios of cluster sizes for a data set generated from the mixture of two spherical Gaussian distributions $\omega\G_1+(1-\omega)\G_2$. The number $\omega \in (0,1)$ controls the number of elements produced by each Gaussian. The optimal curve should be a linear function, $y =\omega$.} \label{fig:proportion} \end{figure} \begin{example}\label{unbalancedData} {\bf Unbalanced data} We have also tested how the number of elements generated from the individual distributions affects the clustering results. For this purpose data was generated from the mixture of two Gaussians $$ \omega \cdot \G_1 + (1-\omega) \cdot \G_2 \text{ , for } \omega \in (0,1), $$ with identical covariance matrices $$ C_1 = C_2 = \begin{pmatrix} \frac{1}{2} & 0\\ 0 & \frac{1}{2} \end{pmatrix}, $$ but different centers $$ \m_1 = (-1,0) \text{ , } \m_2 = (1,0). $$ The number of elements generated from each Gaussian is determined by the value of the parameter $\omega$. The ratios of cluster sizes are shown in Figure \ref{fig:proportion}. One can observe that the proportions specified by $\omega$ were preserved by the {\sWards} method. For Spectral Clustering the results are less stable. On the other hand, Wards k-means has a tendency to build equally-sized clusters. \end{example} \subsection{Dimension estimation} To apply the {\sWards} criterion function in the case of an arbitrary non Euclidean space the value of the dimension parameter $N$ needs to be specified. In the previous subsection we showed that reasonable clustering results can be obtained by calculating this value using the MLE method \cite{maxdim, comments}. We will now experimentally show how the clustering effects differ when the value of $N$ changes.
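The dependence of the partition quality on $N$ can also be probed by evaluating the criterion \eqref{sWards11} directly on a fixed partition. A minimal sketch (Python with NumPy; function names are ours) computes the energy from a matrix of squared dissimilarities:

```python
import numpy as np

def ss(D2, idx):
    """Generalized within cluster sum of squares (cf. Eq. defWard);
    D2 holds squared dissimilarities, idx the member indices."""
    return D2[np.ix_(idx, idx)].sum() / (2 * len(idx))

def swards_energy(D2, labels, N):
    """Spherical Wards criterion (cf. Eq. sWards11) of a labeled partition."""
    labels = np.asarray(labels)
    n = len(labels)
    e = N / 2 * np.log(2 * np.pi * np.e / N)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        p = len(idx) / n
        e += p * (N / 2 * np.log(ss(D2, idx)) - (N + 2) / 2 * np.log(p))
    return e

# Two well separated blobs: the true split has lower energy than a random one.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
good = np.repeat([0, 1], 50)
bad = rng.permutation(good)
assert swards_energy(D2, good, N=2) < swards_energy(D2, bad, N=2)
```

Only pairwise dissimilarities enter the computation, so the same code applies verbatim to non Euclidean data.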
\begin{figure}[t] \centering \includegraphics[trim=0cm 0.6cm 0cm 1.8cm, clip=true, width=3.0in]{done/dim-incr/Rplot-1} \caption{{\bf Clusters detection.} The influence of the value of the parameter $N$ on the resulting number of clusters. The maximal number of clusters was set to 100.} \label{fig:dimCl} \end{figure} \begin{figure}[t] \centering \subfigure[Wards k-means clustering with $k=3$.]{\label{env2:a}\includegraphics[width=1.55in]{done/dsaa/rand/kmeans0}} \quad \subfigure[{\sWards} clustering started with 10 initial clusters.]{\label{env2:b}\includegraphics[width=1.55in]{done/dsaa/rand/spher0}} \quad \caption{{\bf Populations districts on the space with barriers.} Voronoi diagrams constructed by Wards k-means and {\sWards} on a data space with barriers containing three populations generated from random walk models.} \label{fig:env2} \end{figure} \begin{figure}[t] \centering \includegraphics[trim=0cm 0.6cm 0cm 1.8cm, clip=true, width=3.0in]{done/dim-incr/cost-1} \caption{{\bf Shape of criterion function.} The influence of the number of clusters on the shape of the {\sWards} criterion function for three exemplary values of $N$.} \label{fig:clCost} \end{figure} \begin{figure}[t] \centering \subfigure[Wards k-means clustering with $k=2$.]{\label{env1:a}\includegraphics[width=1.55in]{done/dsaa/env/kmeans0}} \quad \subfigure[{\sWards} clustering started with 10 initial clusters.]{\label{env1:b}\includegraphics[width=1.55in]{done/dsaa/env/spher0}} \quad \caption{{\bf Populations districts on the space with regions.} Voronoi diagrams constructed by Wards k-means and {\sWards} on a data space divided into two regions $X_1$ and $X_5$ containing three populations generated from random walk models. The speed of movements in $X_5$ is 5 times higher than in $X_1$.} \label{fig:env1} \end{figure} \begin{table*}[t] \caption{{\bf UCI evaluation.
}Comparison of the clustering results (measured by the Rand index) on UCI data sets between {\sWards}, Wards k-means and Spectral Clustering for Euclidean and RBF dissimilarities. The numbers of clusters estimated (Est. Cl.) by {\sWards} were used for the other algorithms. The MLE was applied for setting the parameter $N$.} \label{UCI} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcc|cccc|cccr} \hline & & & \multicolumn{4}{c|}{Euclidean dissimilarity} & \multicolumn{4}{c}{RBF dissimilarity}\\ Data & True Cl. & $N$ & Est. cl. & {\sWards} & k-means & Specc & Est. cl. & {\sWards} & k-means & Specc\\ \hline Cmc & 3 & 2.64 & 4 & {\bf 0.61} & 0.57 & 0.58 & 2 & {\bf 0.55} & 0.51 & 0.51 \\ Ecoli & 8 & 3.72 & 9 & {\bf0.88} & 0.83 & 0.77 & 10 & {\bf0.84} & 0.79 & 0.8 \\ Glass & 7 & 3.07 & 8 & {\bf0.71} & 0.7 & 0.68 & 7 & 0.7 & {\bf0.71} & {\bf0.71} \\ Hayes-r. & 3 & 1.85 & 5 & {\bf 0.62} & 0.58 & 0.59 & 5 & {\bf 0.61} & 0.5 & 0.6 \\ Ionosph. & 2 & 5.03 & 4 & 0.55 & 0.52 & {\bf 0.61} & 4 & 0.57 & {\bf 0.61} & 0.58 \\ Iris & 3 & 2.49 & 4 & {\bf0.85} & 0.81 & 0.83 & 5 & {\bf0.85} & 0.84 & 0.83 \\ Tae & 3 & 2.06 & 6 & {\bf 0.61} & {\bf 0.61} & 0.6 & 5 & {\bf0.62} & 0.58 & 0.6 \\ Wine & 3 & 1.64 & 4 & {\bf 0.75} & 0.63 & 0.68 & 5 & {\bf 0.58} & 0.55 & 0.55\\ Yeast & 10 & 4.81 & 11 & 0.64 & {\bf0.73} & {\bf0.73} & 10 & 0.63 & {\bf0.73} & {\bf0.73}\\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} \begin{example} {\bf Clusters number detection} Let us first examine the impact of the value of the parameter $N$ on the detection of the resultant number of groups. For this purpose a mouse-like set (see Figure \ref{fig:mouse}) was clustered with different values of $N$ starting from 100 initial groups. The resulting numbers of groups are illustrated in Figure \ref{fig:dimCl}. The immediate observation is that an increase of the value of $N$ results in an increase of the detected number of groups.
One can observe that for $N < 1$ the entire data set was recognized as one group. For $N \in (1,2)$ the mouse-like set was partitioned into three groups, which seems to be the most appropriate partitioning. For $N > 2$ the number of groups began to grow rapidly. \end{example} \begin{example} {\bf Shape of criterion function} To get more insight into the influence of the dimension parameter on the discovered number of clusters, we analyzed the shape of the {\sWards} criterion function for different values of $N$. Since {\sWards} automatically reduces unnecessary clusters, it is not possible to directly specify the number of groups. Therefore, a mouse-like data set (see Figure \ref{fig:mouse}) was first partitioned into the expected number of groups with the use of k-means. Then, the {\sWards} criterion function was calculated for each partition. It is clear from Figure \ref{fig:clCost} that the criterion function attains a global minimum for 3 clusters when $N=2$. Therefore, in most cases the algorithm ends with 3 groups. For $N=1$ the cost of maintaining clusters increases and the algorithm generally includes all elements in one group. The function is decreasing for $N=3$, which means that the method rarely reduces clusters. The last case can be a very useful variant of {\sWards} when the resulting number of groups should not be discovered by the algorithm but specified directly by the user. \end{example} \subsection{Applications} In this section we show that the proposed method is very useful in the analysis of biological models of populations. It is assumed that a population follows a random walk model $P(x,n,t)$ on a plane \cite{codling2008random}, where at each unit of time an individual moves randomly in one of four directions: left, right, up or down. More precisely, given a starting point (seed) $x \in X$, $n$ instances are generated from a random walk model assuming $t$ time units.
It is worth mentioning that the probability distribution of such a population converges to a spherical Gaussian one \cite{codling2008random}. Given a data set consisting of $k$ populations we would like to discover them during a clustering process. The constructed Voronoi diagram determines the corresponding population districts in the whole space. Let us observe that, in practice, the environment is not a Euclidean space. Indeed, a plane is usually crossed by rivers and barriers. Moreover, the environment can be divided into various regions, e.g. meadows, seas, forests etc., which change the speed of movement of individuals. These modifications change the classical Euclidean metric -- the distance between elements has to take into account all the aforementioned circumstances. In the experiments we analyze two cases of population environments. \begin{example} \label{biloEx1} {\bf Environment with barriers.} Let us consider three populations living in the environment shown in Figure \ref{fig:env2}, crossed by two barriers which modify the Euclidean distance function. Basically, the distance between elements located on the opposite sides of a barrier is calculated as the shortest path which does not cross the barrier. The regions occupied by the populations can be obtained with the use of the Voronoi diagram. It is clear from Figure \ref{env2:a} that Wards k-means discovered the population districts as horizontal stripes, which is not an appropriate model. A more accurate partition results from {\sWards} (see Figure \ref{env2:b}), where the detected regions form circular shapes. The partition agreement measured by the Rand index \cite{rand1971objective} equals $96\%$ for Wards k-means and $98\%$ for {\sWards}. \end{example} \begin{example} \label{biloEx2} {\bf Environment with regions.} In the second example let us assume that the data space $X$ is divided into two regions $X_1$ and $X_5$. In $X_5$ the individuals move 5 times faster than in $X_1$.
This induces a dissimilarity measure on $X$ by: $$ d(x,y):=\left\{ \begin{array}{l} d_E(x,y), \,\,\,\,\,\,\,\,\,\,\,\, x,y \in X_5, \\[0.6ex] 5 d_E(x,y), \,\,\,\,\,\,\,\,\, x,y \in X_1, \\[0.6ex] \inf\{5 d_E(x,z) + d_E(z,y): \text{border point } z\}, \\[0.4ex] \,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,x \in X_1, y \in X_5, \end{array} \right. $$ where $d_E(\cdot,\cdot)$ denotes the Euclidean distance. We consider two populations shown in Figure \ref{fig:env1} with starting points marked with white dots in $X_1$ and $X_5$ respectively. One can observe in Figure \ref{env1:b} that, despite the form of the above dissimilarity measure, {\sWards} detected the circular-like districts of the populations very well. This result can be compared with the k-means clustering (see Figure \ref{env1:a}), where the produced partition does not match the population distributions. The value of the Rand index for {\sWards} equals $92\%$ while for Wards k-means it is $61\%$. \end{example} \subsection{Evaluation} After establishing the properties as well as demonstrating the basic capabilities and potential applications of the introduced method we present a short evaluation. We carried out the experiments on selected UCI data sets \cite{asuncion2007uci}. In all experiments the initial number of clusters for {\sWards} was set to twice the actual number of groups. In order to provide the correspondence between the clustering results, the other examined methods assumed the number of groups returned by {\sWards} as the input clusters number. As a measure of agreement between partitions the Rand index (RI) was used \cite{rand1971objective}. It is defined as the ratio between the number of pairs of examples on which the two partitions agree (pairs of true positives and true negatives) and the number of all pairs of examples. Values close to $1$ indicate that two partitions are very similar. MLE was used to calculate the optimal value of the parameter $N$.
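The Rand index used throughout this section admits a direct pairwise implementation; a small sketch (Python, standard library only):

```python
from itertools import combinations

def rand_index(a, b):
    """Fraction of point pairs on which two labelings agree, i.e. pairs
    placed together in both partitions or separated in both partitions."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

# The index is invariant under relabeling of the clusters.
assert rand_index([0, 0, 1, 1], [1, 1, 0, 0]) == 1.0
assert abs(rand_index([0, 0, 1, 1], [0, 1, 0, 1]) - 1/3) < 1e-12
```

This quadratic-time form is sufficient for data sets of the sizes considered here; contingency-table formulations scale better for large $n$.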
Two kinds of dissimilarity measures were considered: the Euclidean distance and the dissimilarity determined by the Gaussian radial basis function (RBF). The value of sigma for the RBF was estimated as the median of the squared distances between all pairs of data set elements \cite{caputo2002appearance}. The results presented in Table \ref{UCI} show that {\sWards} determined the final number of groups reasonably well. The advantage of our method over k-means and Spectral Clustering is most evident in the case of the Ecoli data set with the Euclidean distance. The worst results were obtained for the Ionosphere data set. The use of the RBF similarity rarely improved the accuracy of clustering. This could be caused by the fact that it is very difficult to set the optimal value of the RBF sigma parameter in a particular situation. To extend the above evaluation, in Figure \ref{fig:dimUci} we present the clustering accuracies on the UCI data sets for a wide range of the dimension parameter $N \in (0,10)$. One can observe that in most cases the best results were obtained when $N$ was estimated as the dimension of the data. The exceptions are the Glass and Yeast data sets, where a slight improvement was achieved for higher values of $N$. \begin{figure}[t] \centering \includegraphics[trim=0cm 0.6cm 0cm 2cm, clip=true, width=3.0in]{done/uci-1} \caption{{\bf UCI evaluation.} The influence of the value of parameter $N$ on the clustering results of UCI data sets.} \label{fig:dimUci} \end{figure} \section{Conclusion} In this paper a generalization of spherical Cross-Entropy Clustering to non-Euclidean spaces was presented. The proposed method uses a Wards approach to modify the cross-entropy criterion function for the case of arbitrary data sets. In consequence, the obtained method allows for partitioning non-vector data into spherically shaped clusters of arbitrary sizes. It is a scale-invariant technique which detects the final number of groups automatically. 
Our method works in time comparable to the generalized Wards method, while the clustering effects are similar to those produced by GMM restricted to spherical Gaussian distributions in Euclidean spaces. Moreover, we generalized the notion of the Voronoi diagram to the case of an arbitrary criterion function based on the Wards approach. This leads to identical results in the case of classical methods such as k-means, while it allows for a formal division of the data space for non-Euclidean methods such as {\sWards}. \section*{Acknowledgment} This work was partially funded by the Polish Ministry of Science and Higher Education from the budget for science in the years 2013--2015, Grant no. IP2012 055972 and by the National Science Centre (Poland), Grant No. 2014/13/B/ST6/01792. \bibliographystyle{IEEEtran}
Copyright © 2010 by Gary Noesner All rights reserved. Published in the United States by Random House, an imprint of The Random House Publishing Group, a division of Random House, Inc., New York. RANDOM HOUSE and colophon are registered trademarks of Random House, Inc. Grateful acknowledgment is made to Cheryl Hart Frappier for permission to reprint the note on this page. Reprinted by permission of Cheryl Hart Frappier. Library of Congress Cataloging-in-Publication Data Noesner, Gary. Stalling for time : my life as an FBI hostage negotiator / by Gary Noesner. p. cm. eISBN: 978-0-679-60391-7 1. Noesner, Gary. 2. Hostage negotiations—United States. 3. United States. Federal Bureau of Investigation—Officials and employees—Biography. I. Title. HV6598.N64 2010 363.25092—dc22 [B] 2010005888 www.atrandom.com v3.1 _To Carol_ _For her love and support, particularly during the many times_ _I had to be away from home for the FBI_ # Contents _Cover_ _Title Page_ _Copyright_ _Dedication_ _Author's Note_ Preface Chapter One - It's Time to Die Chapter Two - My Start Chapter Three - My First Major Siege Chapter Four - Trouble Abroad Chapter Five - Crisis Intervention: Listen and Learn Chapter Six - From Success to Hubris Chapter Seven - Negotiating with the Sinful Messiah Chapter Eight - Picking Up the Pieces Chapter Nine - A Hell of a Siege Chapter Ten - Prepare the Missiles Chapter Eleven - No Shortage of Challenges Chapter Twelve - Being Our Best When Others are at Their Worst Epilogue _Acknowledgments_ About the Author # AUTHOR'S NOTE # The facts, dates, times, and direct quotations of dialogue are from official reports, personal notes, memos, and conversations as I recall them or as they were conveyed to me by those present. At all times, the recreation of events was done as accurately as possible. Hopefully, those depicted in this book will find their portrayals to be accurate and fair. 
The opinions, observations, and comments expressed in this book are those of the author only and do not necessarily reflect those of the Federal Bureau of Investigation. Furthermore, they may not reflect those of the editors, endorsers, publisher, FBI Special Agents, or other persons who are described or mentioned in this book. # PREFACE # My line of work tends to inspire curiosity. The minute I tell people that I'm a hostage negotiator, they want to know what it's like to talk to people who have put themselves in truly desperate situations, who might at any moment kill themselves, their hostages, or the law enforcement officers attempting to bring an end to the crisis. Over the last several years, friends and colleagues encouraged me to write a book about these experiences, urging me to share the lessons I learned over years of convincing people to put down their weapons and surrender peacefully. Because I entered the field of hostage/crisis negotiations when it was still a new and evolving discipline, I've observed the process of trial and error that has transformed a rudimentary bargaining approach developed on the fly into a highly effective and flexible method. I've watched colleagues with no background in psychology or negotiation evolve in their tradecraft, many becoming functional street psychologists and crisis counselors, saving many lives and drastically reducing the number of police officers harmed during hostage, barricade, and suicide situations. In the early years of the profession every negotiation seemed to involve two equally challenging components: managing the actual hostage situation, and managing leaders and colleagues captive to the entrenched law enforcement response to hostage events, which emphasized the use of force and viewed negotiators as do-gooder types who only got in the way of them doing their jobs. 
In those days, just when we had finally established a bond of trust with the perpetrator, moving closer to ending the crisis, we'd sometimes find that a fellow agent or police officer had thrown a rock through the window, ordered a military vehicle driven up on the lawn as a show of force, or turned off the power. This often produced violent resistance and injuries or deaths that might have been avoided. Of course there are times when you are forced to put down the phone and send in the SWAT team, but all too often in those early days, that decision was reached prematurely. I'm particularly proud of the degree to which we've been able to shift the balance toward the primary goal of any hostage negotiation, which is to resolve the crisis while avoiding loss of life. The results have been dramatic. Hostage negotiation is about managing yourself and the people around you. And while the most important relationship may appear to be with the person you have on the other end of the phone, in fact this is often not the case. In the midst of trying to talk someone into giving up, you have to manage the people supporting you, to make sure that you have the help you need at hand to make split-second decisions. And you have to "manage up"—to make sure your commanding officer is paying attention to what you're doing, supporting your decisions, and fending off attempts to take actions that would undermine them. Throughout my career I worked a great many crisis incidents, most of which you've never heard about because they received little or no media attention. Others, like the sieges in Waco, Texas, and Jordan, Montana, were covered feverishly by the national and even international media. Each of these experiences, whether success or failure, taught me valuable lessons about human behavior, interpersonal communication, and conflict resolution, and each helped me to understand how to influence people away from violent courses of action. 
The observations and lessons that I discuss in this book may be derived from specific hostage negotiations, but many of them apply equally to the kinds of negotiations we face in everyday life, from hammering out contracts to tense interpersonal conflicts with intransigent colleagues or hostile neighbors, not to mention with friends and family. I know my own life relationships have benefited from what I've learned along the way, and I believe that the skills discussed in this book can help anyone to become a better person, a more engaged spouse, a more attentive parent, a better friend, and a more effective leader. Before we can influence others we must first listen and understand. Listening is the cheapest concession we can ever make. # CHAPTER ONE # **IT'S TIME TO DIE** _Time cools, time clarifies; no mood can be maintained quite unaltered through the course of hours_. —MARK TWAIN There it was, hard and direct. "You going to shoot me when I come out?" Charlie said. "No," I responded. "That's not going to happen. You said you wouldn't hurt anyone. You said you'd drop off the pilot somewhere in the mountains. So there's no reason for anyone to get hurt." The logic of this formulation appeared to work for Charlie, perhaps because this was his only chance to go on living with Cheryl and their son, little Charlie. But what I knew that he didn't was that somewhere out in the fields surrounding us, FBI marksmen were poised, waiting to take his life. A large part of a negotiator's job is to establish trust, yet there are fundamental contradictions in that. In order to convince someone that despite all appearances to the contrary, everything will be okay, you have to project sincerity. You have to make him believe that what you are saying is honest and aboveboard. You have to address his primal need for safety and security by establishing a bond. And on rare occasions, you have to lie. "Have you ever been on a helicopter before?" I asked. "No," he said. "You'll enjoy it. 
The view over the mountains will be spectacular." Of course, I knew that he would never take that ride or experience that view. Once again, the contradiction: he was hearing what he wanted to hear. "Charlie," I said, "I need to ask you an important question." "What?" "The helicopter pilot is an old friend of mine. His name is Tom Kelly. I've known and worked with Tom for many years, so I need your absolute promise that you won't harm him in any way. If anything happens to Tom, I would never be able to live with myself." "I won't hurt him," Charlie said. About ten days before, Charlie Leaf had abducted his estranged former common-law wife, Cheryl Hart, and their young son from her parents' home in Connecticut. After a seven-year relationship, Charlie and Cheryl had separated two years ago. When Cheryl had finally left him, she said she saw him snap. She moved in with her parents, trying to get on with her life, but Charlie, like so many men in such situations, was not willing to let her go. The way he saw it, Cheryl and little Charlie were his possessions, and he wanted them back. Over the next two years he threatened her and physically abused her whenever he found her. He had once even abducted little Charlie for six months, and gave up the boy only when the police intervened. Cheryl had sought and obtained a restraining order a year ago. The next day, right before he had to go to court, Charlie came to kill her. It was on Friday, April 1, 1988, that Charlie cashed his paycheck and purchased a carbine rifle, sawing off the gunstock in order to conceal it. Then he drove to Cheryl's parents' house—they were away for the weekend—and pried open a door leading into the garage. He kicked in the door to Cheryl's bedroom with the rifle in his hand. He beat her and raped her before telling her to pack things for little Charlie. He told her that she could go or die. Fortunately, Cheryl had the instincts of a survivor. 
She remained calm and said she would come; she convinced Charlie that he didn't have to kill her. "We can go away," she said. "We can start a new life together with little Charlie." Cheryl had made it clear by now that she wanted no part of Charlie, yet he wanted so much to believe her that this gleam of hope obscured his judgment. He gave her a few moments to get the boy out of bed and to gather up some clothes. Then they took off in Charlie's car. Cheryl had no plan other than to try to stay alive. Charlie's plan, to the extent that he had one, was to avoid being caught. Both knew that Cheryl's parents would call the police the moment they discovered she was gone. Both were simply stalling for time. Charlie drove south through the night along the eastern seaboard, and somewhere near the Washington, D.C., area headed west into the mountains of Virginia. Charlie liked mountains. When little Charlie was still an infant, he started to build a log cabin, which remained unfinished when Cheryl left him. Cheryl had grown tired of him, of the idea of living in a remote cabin, and of their relationship, and so she left. On Saturday, April 2, about an hour and a half due west of Washington, D.C., Charlie's car ran out of gas. They abandoned it near Sperryville, Virginia, a scenic little town on the eastern slopes of the Blue Ridge Mountains. The Virginia authorities found Charlie's car on Sunday. By this time, Cheryl's sister had reported her missing when she didn't show up to a planned dinner, so when the police ran the plates, they quickly connected this vehicle with the story of the abduction in Connecticut, then launched an all-out search. Just outside Sperryville, a sleepy country village where tourists came in season to buy apples and view the fall colors, Charlie took his family once again into the woods. This time, he built a simple lean-to. They made their way to a nearby country store, where they purchased food and drinks and a few other supplies. 
Meanwhile, all around them, a search went on involving the local police, the Virginia State Police, and the FBI. Helicopters flew over the ridges and valleys, while teams on foot searched the woods with tracking dogs. This went on for almost a week, by which time the authorities were ready to give up. Then on Friday the eighth, Charlie waited until after dark, then broke in to the same country store he had visited before and stole additional supplies. This confirmed for the police that their fugitive was still in the area, and the next morning they renewed their search. Investigating the burglary, the authorities showed photographs of Charlie, Cheryl, and little Charlie to the store owner, who made a positive identification. The FBI's efforts in tracking down Charlie and his victims would be led by the Richmond, Virginia, SWAT team, with an assist from members of the SWAT team from the FBI Washington Field Office (WFO). Both groups are tactical operations specialists, that is, the ones who subdue the perpetrators if and when negotiations fail to bring an end to the crisis. In other words, their jobs do not involve establishing trust or empathy, or the contradictions attendant therein. They made a house-to-house search of the area, and late in the afternoon on April 9, Special Agent Barry Subelsky and his team from the WFO SWAT approached a two-story farmhouse, a weekend getaway place for a successful Washington couple, less than a mile off the main road. The sunlight was fading fast, so they wanted to get this done as quickly as possible. Barry conferred with Wayne Waddell, SWAT leader for the Richmond FBI office. These two experienced agents, both Vietnam combat veterans, decided that Barry's team would search the ground floor of the farmhouse and Wayne's team would then take the upstairs. Before they moved in, however, they saw something that made them cautious. 
The electric meter on the outside of the house was humming along at a brisk pace, more active than what one would expect in an unoccupied dwelling. They summoned an FBI helicopter for support, and it landed in a field some hundred yards away, just as a local sheriff arrived with keys to the house. Barry's team searched for signs of forced entry but found none. They came up on the rickety porch outside the kitchen and went in through the back door, then fanned out to secure the ground floor. Wayne and his team followed in single file up on the porch, through the kitchen and then the family room, turning the corner near the front entryway, then advancing, slowly and carefully, up the creaking main stairs to the second floor. When Wayne got upstairs he found Charlie on the floor of the bedroom holding Cheryl and little Charlie in front of him, a gun to her head. "Back off!" he yelled. "Back off or I'll kill her." Wayne Waddell had spent hours training for situations just like this, and he knew exactly what to do. "We're backing off," he said. "Nobody's going to get hurt." He and the agents moved back down and clustered at the foot of the stairs. Law enforcement often overreacts to threats of the kind that Charlie made, even though in most cases such threats are merely defensive, designed to keep the police at bay. Some law officers hear only the threatened action, "I'll kill this lady," while failing to hear the conditions under which that action will be taken: "if you try to come in here." That is one reason why the most critical skills of a negotiator are self-control and the ability to help those around you keep their cool. Wayne had a lot on his mind as law enforcement settled in for the long haul. Mere chance had made him the group's primary negotiator, and his immediate task was to deescalate the confrontation, and then to convince Charlie that he was here to help him. 
But he also had to lead the SWAT team and coordinate the actions of the roughly twenty FBI personnel on the scene, as well as communicate all of this to his superiors. Back in Sperryville, other agents and local police officials were setting up a command post at the local firehouse, from which all efforts would be coordinated. State police brought in an armored vehicle, one of those old Brink's trucks that had been converted to a forward command post, which they positioned about a hundred yards away from the farmhouse on the long drive leading to it. Sniper/observer teams took up positions in the nearby woods, and the men inside the house began to wait. As dusk settled in, Wayne and his team decided to turn on the lights. Charlie didn't like this. In response he fired several shots at the lightbulb in the ceiling above the second-floor landing, shattering the bulb and sending shards of glass in all directions. "Relax! Relax!" Wayne yelled. He kept his guys cool, avoiding what could have been a bloodbath right there and then. It was going to be a long night. Wayne now realized he would need a trained negotiator to talk to Charlie. He called in another agent from the Richmond FBI office, Gray Hill, who soon arrived, still in civilian clothes, and assumed the task of talking to Charlie from the bottom of the stairs. Their conversations over the next couple of hours were sporadic, and in the few exchanges that took place Charlie remained adamant: he was not going to give up without hurting Cheryl and the boy. Hill was a veteran agent and had taken the FBI's two-week hostage-negotiation training course, but this was his first actual hostage situation. His job at this point was to relieve Wayne and hold the fort until a resolution strategy was in place. An hour went by, maybe two. Then Charlie called down with his first demand. "We need our clothes out of the dryer. We need you to get the clothes and bring them up here." 
It actually had been Cheryl's idea to break in to the farmhouse, with clean clothes as the objective. There had been some wet weather in the mountains and she and the boy had been cold and miserable. She had convinced Charlie that they needed to take a warm bath and wash their clothes, which were still in the dryer when Wayne and his crew entered the house. Gray was nothing if not cautious, a negotiator who would play it by the book, and the book says that you do not give a hostage taker anything without getting something in return. His answer to the request for the clothing was no. He did not want to empower Charlie by making concessions to him without getting something in return. But at the same time, he continued to emphasize the themes that Wayne had established: No one had been hurt. The charges that might be brought against Charlie at this point were not that serious. The FBI didn't want to see anyone hurt. These are all standard tactics of hostage negotiation: to minimize the consequences the perpetrator will face once the siege is over, and to assure him that he won't be hurt if he surrenders. The other essential part of the message is that harming someone will only make matters worse. Even so, there are times when playing it by the book won't get the job done, and when a more experienced negotiator might be more willing to improvise. This would prove to be one of those times. It was after one in the morning when the phone rang on the nightstand next to my bed in my home in Fairfax, Virginia. I heard a voice telling me that it was FBI headquarters, calling to tell me about what was going on in Sperryville and asking me to report to the command post there as soon as possible to assist with negotiations and eventually take over direct communication with Charlie. As the negotiation coordinator for the Washington Field Office, I'd been involved with previous such incidents and I knew the drill. 
While I would have preferred this call at a more convenient hour, I felt the usual charge of excitement that comes with responding to cases of this kind. I quickly jumped out of bed, threw on my clothes, and told my wife, Carol, that I would call her when I could. This was my job, like it or not. I had just come back from an assignment overseas and my FBI car—my "G-ride"—was still parked at the office, which meant that I would have to take the family station wagon. As I got in the car and backed out of the driveway, the absolute calm of the quiet suburban street once again reminded me how different my life was from that of my neighbors. The trip would normally take about ninety minutes, driving out of Fairfax through Warrenton. My family and I had been to Sperryville the previous year to pick apples, so I knew the way. There was almost no traffic at this time of night, and, not having my light and siren, I edged over the speed limit cautiously and made it in about an hour. When I reached the command post at the fire station in town I was given directions to the farmhouse a little ways up the road. I was told to speak to the Assistant Special Agent in Charge (ASAC), Virgil Young. I showed my identification to the trooper on the scene, parked near the armored truck serving as the forward command post, and approached the small group of men standing out in the cold. It was 2:30 a.m. I shook hands with Special Agent Young and also met Wayne Waddell, whose welcome consisted of tossing me a bulletproof vest. The two agents quickly brought me up to speed. When you first come into a situation you want all the information you can get. The agents you are relieving mostly want to go home, so sometimes you have to push a bit to tease out the facts. As I listened to them, I could barely see their faces, but I could tell from his weary voice that Wayne was especially tired. 
He and his team had been out all day searching when their late-afternoon decision to check out one last house had hit the jackpot. He told me that he and his team had stood down a couple of hours earlier, and that Gray Hill and the group from the Washington Field Office SWAT team were now on shift inside the house. I had worked and trained with this team and knew them well. My job was going to be to relieve Gray right away, although he would stay on as my backup negotiator, or coach, until a replacement arrived. Before I left home I'd called another experienced negotiator, Steve Romano; he and Agent Bill Wang would join me shortly. Wayne Waddell led me toward the farmhouse, which was about a hundred yards off the road at the end of a long, straight driveway edged with tall junipers and shrubs, though in the dark of night I couldn't see it or much of anything else. I was feeling somewhat optimistic because the situation appeared to be stable: Charlie had abducted Cheryl and little Charlie more than a week ago, and he had not yet killed her. The standoff itself had gone on for hours now, and again, Charlie had not gone over the edge. My best guess was that if I could keep him calm until daylight, we might be able to get him to surrender. When we reached the farmhouse we went around to the back, climbed the two or three steps onto the porch, and entered the kitchen door. All the lights were turned off, so Wayne used his flashlight to guide us. I followed close behind as we made our way into the family room. In the flashlight's beam I caught glimpses of photographs on the walls, books on the shelves, and the carpets, drapes, and other furnishings that made this a nice, cozy place for a weekend getaway. But right now it wasn't cozy at all. It was actually colder inside than it had been out on the lawn, and I could see my breath condensing in the air as I spoke. I wished I had brought along a heavier coat and gloves. 
At least I had the Kevlar vest for a bit of insulation. The SWAT team was clustered near the door where the family room opened onto the entryway at the foot of the stairs. They looked particularly ghoulish in the flashlight beam, each wearing a dark uniform, bulletproof vest, and Kevlar helmet, and each armed with a 9 mm MP5 submachine gun. With them was Barry Subelsky, my squad partner and a guy with whom I'd spent all too many hours in scenes just like this, who greeted me with the respect to which I'd become accustomed: "Now look what the cat dragged in." I wondered, not for the first time, why these guys seemed to enjoy SWAT duty, which usually involved being cold, wet, and hungry, not to mention spending a good deal of time in the direct line of fire. Negotiators usually had a warmer, drier place to work from, a ready supply of coffee, and less likelihood of being shot. The telephone was our usual mode of contact with the perpetrator, and one of the truisms of the profession is that no negotiator has ever been killed over the telephone. Unfortunately, tonight I would not be on the phone. Charlie and his carbine were right upstairs, just a few feet away. Standing with the SWAT team, just beyond the door, was Gray Hill, the negotiator I would be relieving. Wayne took him by the arm and tugged him back into the family room to introduce us. Gray and I shook hands, and then he leaned forward as he gave me the update, speaking in low, confidential tones. "Very quiet for the last hour or so," he told me. "Charlie hasn't been saying very much, though I've tried to keep him engaged. Overall, tense but calm." "Seems like a good sign," I said. "The man's calmed down. No new threats and no one's been hurt. So maybe he'll come to his senses." "Then again," Gray said, "he did blow out the lights the minute we turned them on." We glanced at each other. "Charlie definitely has an edge to him." Gray told me about the clothes in the dryer that Charlie had requested. 
He told me that he had said no, and asked me what I thought. "Well," I said, "the usual deal is quid pro quo. We give the perpetrator something tangible only in exchange for releasing a hostage. But Charlie isn't holding Cheryl and little Charlie with a clear goal in mind. When a subject doesn't want or need anything from us, the only real tool we have is to show him some respect. So I think it would be a good idea to give him the clothes. If we can create a relationship of trust, we have a better chance of influencing his behavior." Among negotiators, this process of trust building is called the "behavioral change stairway." You listen to show interest, then respond empathetically, which leads to rapport building, which then leads to influence. But influence does not accrue automatically. We can suggest alternatives to violence, but we must first earn the right to be of influence. Would Charlie see us bringing him the clothes from the dryer as a sign of weakness on our part? I didn't think so. I believed that the gesture, without any preconditions, would make us appear less threatening. By appearing more willing to help, we would appear more worthy of respect in his eyes. Gray went down to the basement and brought back up the bag of clothes. "You come, too," he suggested. "I'll introduce you to Charlie." "I think it's better if I give him the clothes," I said. As the new negotiator, I needed to demonstrate that I was here to help. "It'll build trust," I said. Wayne nodded. It was almost 4:00 a.m. when Gray introduced me to Charlie. Standing at the bottom of the stairs, I began with a simple hello. Silence. I tried again. "Hello, my name is Gary and I'm here to help," I said, before adding, "Can you hear me?" "Yeah. I hear you." "Good. That's good, Charlie. You know, we really mean you and your family no harm." I used the word _family_ very deliberately, in an effort to remind Charlie what Cheryl and the boy meant to him. 
I repeated the standard lines that he had already heard from Wayne and Gray. It is not only what you say that counts, but how you say it. Being sincere and genuine are powerful tools to gain influence. "We don't want to see anyone get hurt, you know. This is just a domestic dispute. If you'll just put down your weapon and come downstairs, I guarantee you that no harm will come to you." I waited, and then in the darkness I heard a single word. "No." I didn't expect him to give up so easily. I just wanted to keep reinforcing the thought that a peaceful ending was still possible. It stood to reason that Charlie would be exhausted. Also hungry. "Charlie," I said, "you need anything? Anything I can do for you?" As I anticipated, he asked if I could get their clothes from downstairs. "Sure, Charlie," I said. "I can do that for you." I had the bag of clothes at my side, but I waited a bit, as if we were just then going down to the dryer. After a while I said, "Charlie, I've got the bag. You want me to throw it up to the top of the stairs?" He told me to go ahead. I threw the bag and it dropped onto the landing. A moment later Cheryl Hart darted into view, wrapped in a blanket, looking terrified. She glanced at me for a split second, then grabbed the bag and disappeared. "It's cold in here," I said to Charlie. "I didn't want you or your family to be uncomfortable." I waited again. "You need anything else?" "I'm good," he said. Then a moment later he added, "Little Charlie's sleeping... so if you guys could keep it down..." "Sure," I said. I wanted to comply with his request. On the other hand, keeping big Charlie awake might encourage him to surrender sooner rather than later. But exhaustion can also make people more impulsive and unpredictable. I waited about forty-five minutes before I reestablished contact. "Everything okay, Charlie?" "Yeah." He sounded half asleep. Most likely he hadn't had any real rest since this whole thing began. 
"Got some good news for you," I said. By this time we had an army field telephone wired up from the forward command post to the house. I had just heard back from Max Thiel, the FBI agent in New England who was interviewing people back in Trumbull, Connecticut, for us. "Your boss won't lose the bail money he posted for you." Charlie had been charged earlier with failure to appear in court. "Your boss sounds like a good guy. He says he's holding your job for you." Again the message was, _You have a future_. But there was no response from Charlie, so I left him alone with his thoughts. I rubbed my hands together and killed time by talking with the SWAT guys. The weather was always a good topic. It just wasn't supposed to be this cold in April. But I also took the time to ask various members of the team how they sized up the situation. It brought them more into the process, but I also genuinely wanted to know what they had to say. Everyone seemed to think that what we had was as good as it was going to get, at least for now. Early that morning the WFO SWAT team had been relieved by members of the FBI's elite Hostage Rescue Team (HRT). Each FBI office has a SWAT team, but HRT was the FBI's full-time domestic counterterrorism and tactical response unit, stationed at the FBI academy. Just before daybreak, I felt it was time to begin to ramp it up a bit. "Charlie! Good morning. Hope you got some rest." He muttered something unintelligible, so I went on. "I hope we've made it clear that we don't want to hurt you, Charlie." I then reminded him that our agents had not responded when he fired shots at the lightbulb. We had given him the clothes he had asked for, and, before I arrived, we had even provided some food. I later learned that Charlie told Cheryl he was going to kill her after that breakfast. I restated our position that this was simply a domestic dispute between him and Cheryl, once again downplaying the seriousness of the kidnapping. 
There was no reason for anyone to get hurt. No serious crime had been committed, I said, although that did not mean that we could just walk away. No response from Charlie. I knew that he could hear me, but I had no sense that I was having any effect. For the next two hours I continued this kind of running commentary. In the negotiation business we call this a "one-way dialogue," where the goal is to address concerns that may not have been articulated, and answer questions that haven't been asked. I suspected that Charlie could see the logic in what I was saying, but people who are cornered often fall into a kind of paralysis, so no decision becomes the de facto decision. Fully awake now, Charlie yelled out, "Just get the fuck out of here and leave us alone!" This sudden shift caused me concern. Again, exhaustion could be taking its toll, adding a wild card to his already erratic behavior. I asked him if he was considering harming himself. I knew that there was plenty of evidence that bringing up the issue of suicide was not going to plant the thought. And if he was really considering suicide, I needed to know so I could focus my efforts on suicide intervention. "I'm not going to kill myself," Charlie said. His voice became more intense. "You're going to have to kill me." "We're not going to do that," I said. "There's no reason to do that." His voice continued to rise in intensity. He was now angry. "I'll give you a reason. You're going to have to kill me after I kill Cheryl." My heart sank. I had hoped his prior threats against Cheryl were simply intended to keep the police at bay. But his increasingly angry tone, and the fact that it came after several hours in which we'd demonstrated that we didn't want to hurt him, gave me great concern. Charlie continued to work himself up into a frenzy. "Right now I'm sitting on a chair with Cheryl on the floor beside me," he said.
"I have the gun against her head and I am about to pull the fucking trigger!" His voice went up several decibels as he enunciated those last syllables, which were punctuated with what sounded like a choking noise from Cheryl. The two HRT agents closest to me moved in, actually nudging me out of the way as they readied themselves to make an emergency assault. If Charlie fired, they would charge up the stairs to try to save the little boy. I couldn't figure out what had gone wrong. Things had seemed stable just moments before, and now they were slipping out of control. I needed to buy some time. "Don't do it, Charlie," I blurted out. This wasn't inspired or subtle, but I had to say something. I couldn't just wait for the sound of the gun. "Don't kill Cheryl. Don't kill her in front of your son." Charlie started shouting again, calling Cheryl a no-good whore. His voice grew louder and angrier as he spewed out a litany of complaints. She had cheated on him. She had done this, she had done that. Each outburst and accusation seemed to make him more agitated. The HRT operators were locked and ready. It looked like this was going to be it for Cheryl. The only question remaining was whether we could get to the boy before Charlie killed him as well. But then Charlie stopped yelling. There was silence for a moment, and then I heard him whispering, "Charlie, come sit in Daddy's lap." It's hard to explain the experience of that moment. I was absolutely convinced that the next sound I would hear was the gun going off and Cheryl being killed. But for the moment I was out of ideas. I desperately tried to conjure something to stop him. "Charlie, if you really love your son, you won't do this. A boy should not see his mother being killed." "He'll be okay," he said. "I was raised without a mother. My son can get by without a mother." In truth, Charlie's mother had died only a few years earlier of stomach cancer. I took another tack. "You don't want to see your son hurt, do you?" 
"The only way he's gonna be hurt is if you come up here after I kill Cheryl." "Talk to me instead of hurting her," I said. "I'm going to shoot her in one minute." Then I heard him whisper to Cheryl, "I'm going to blow your fucking head off." "Charlie, is there anything, anything at all, that I can do to keep you from doing this?" "Can't you get us out of here?" Cheryl yelled. She was more desperate than I was—her life was on the line—and her fear had produced an inspired response. Charlie had narrowed his vision to the dynamic going on within this house. Cheryl's question suddenly widened it again. Charlie followed up immediately. "Yeah," Charlie said. "We want to get to that helicopter outside." Suddenly it appeared that we might have a chance after all. At least this was a demand that could be bargained for, another way of stalling for time. I signaled for the HRT operators to step back from the stairs to give me a little room. "Charlie," I said, "this is the first time you've mentioned wanting to leave the house. You know, this is something I could work on. I don't have the authority myself, but I could get on the phone and talk to my boss to get his approval." I waited, knowing that the next time Charlie spiraled out of control, it would probably mean the end for Cheryl. "I'm going to speak with my boss, Charlie. Can I have your promise that you won't hurt Cheryl while I'm talking to him?" "Hurry up." "Is that a promise?" "Just hurry up." When I had communicated with the command post about an hour before, the situation had been stable and the prognosis for an eventual peaceful outcome seemed good. How was I going to explain what we had just experienced? How could I convey how close we'd come to having the situation blow up in our faces? I rang them up on the mobile phone and, standing in the far corner of the family room, speaking in a hushed voice, I explained the situation to Virgil Young, the on-scene forward commander. 
I told him that the only reason Cheryl was alive was that she had blurted out a desire to get out of the house, and that Charlie had seen the helicopter. Now our subject had this notion that he could fly away. "If I give Charlie a negative response," I said, "there's no doubt in my mind that he'll carry out his threat." Agent Young listened carefully and thoughtfully, accepting nothing at face value. "What exactly did he say?" he asked. "How angry did he appear?" He challenged me to back up each and every one of my recommendations. I asked his express permission to engage in a dialogue with Charlie about going to the helicopter. Getting his approval was important, because if later he said we couldn't make the offer, I would be caught in a lie. Not only would that destroy my credibility with Charlie, but it would probably trigger the murder we all wanted to avoid. I then did something that is extremely rare in the negotiation business. I recommended that I be given permission to lure Charlie out of the house by negotiating with him for access to the helicopter, and that we prepare to have a sniper take him out as he left the house. I could see the look of surprise on some of the SWAT team members' faces, but I really had no other choice. I am not a self-questioning kind of guy. This was an explosive situation and mollification had just about run its course. I saw no other way to keep Cheryl alive. I also knew that commanders don't like deadlines, and that this was a serious request that would require some time to consider. Still, I added, "I need a response very quickly." Waiting for an answer, I went back and reengaged Charlie in discussion. I told him that I had made the request and that I was waiting for the answer from the boss. Again I asked him not to hurt Cheryl. Again, no reply. Yet I sensed I had purchased some time. There was nothing to do now but wait. I tried to maintain a calm appearance, but inside my mind was running at full speed. 
I was trying to pull a rabbit out of a hat, and the lives of a young woman and her son depended on making the trick work. A half hour later the field telephone rang; it was Virgil Young. He told me that his boss, Special Agent in Charge Terry O'Connor of the Richmond Field Office, had given us the green light. His wording was characteristically dispassionate. "The Special Agent in Charge has authorized you to proceed with the plan you have recommended," he said. He told me nothing about the specifics of the plan HRT had worked out for dealing with Charlie; I didn't need to know. My job was simply to get the subject out of the house and walking toward that helicopter. I walked back to the stairway and directed my voice upstairs. "Charlie," I said, "this is Gary again. I told my boss about you wanting the helicopter. He didn't understand why you don't just put your weapon down and come out. But I also told him you wouldn't take no for an answer. Based on that, my boss has agreed for you to take the helicopter. He didn't like it, but he'll go along if it will keep people from getting hurt." Once again I was trying to establish myself in Charlie's mind as his advocate. I was also trying to make our position seem credible. If I just blithely said okay, Charlie might think we had given in too easily and might become suspicious. This is when I asked Charlie if he had ever flown in a helicopter before. The question seemed to brighten his mood, so I followed up by asking where he wanted to go. He said he would tell the pilot to fly them over the mountains and they would land when he spotted a place that looked good to him. It was then that I asked him not to harm my friend Tom Kelly, the pilot. The tension seemed to be easing as time passed by, enough so that after an hour or so Charlie and I began to talk more casually, first about the farmhouse. Though relieved that we had stepped back from the brink, I was aware that any misstep could set him off. 
He told me that he had seen the beams down in the basement when they had gone downstairs to use the washer and dryer. He told me how much he admired that kind of solid construction, far better than what was built these days. I asked Charlie about the cabin he had built in Connecticut, and he seemed to take pride in talking about his craftsmanship. We then moved on to talking about camping and the outdoors. I told him that I had an old motor home and that I wanted to take my family camping in New England this coming summer. I asked him about some places we might go, and he gave me some recommendations. Our conversations were becoming more relaxed. For the first time I mentioned to Charlie that I also had a four-year-old son. We talked about how fascinating it is to watch kids grow. Again I tried to push Charlie toward thinking about the future in a positive way. I reminded him how important it was for a father to show his son the woods and outdoors, to help him grow to be a man. Suddenly he said, "I've got some stuff in the lean-to. Some favorite toys for the boy... and some other stuff. I want to take it on the helicopter with us." This was a complication we could do without, but I had to play along. "Can you tell us where the shelter is in relation to the farmhouse?" I said. "We'll go get your stuff." Charlie's directions were a bit vague, but we dispatched agents to recover the items. "You must be a pretty good woodsman," I said, "to have gone so long in the mountains without being found, especially with so many people looking for you." He seemed to relish the compliment. "Nobody's going to find me if I don't want them to," he said. When I got back on the phone with the command post, they described a four-phase plan that I would have to sell to Charlie. First, two FBI agents would carry the recovered personal items out to the helicopter. They would place black garbage bags filled with these items at the foot of the helicopter and walk away.
Phase two called for the helicopter pilot, Tom Kelly, to walk to the aircraft, load the bags onto the copilot's seat, get in, and start the engine. Phase three would have those of us on the ground floor exit the house. Phase four would have Charlie, Cheryl, and little Charlie exit, walk to the helicopter, board it, and fly away as previously agreed. At least that was phase four as we would describe it to Charlie. I went back to my position at the foot of the stairs, and for the rest of the morning, Charlie and I went back and forth over the plan. I wanted to make sure that he fully understood what to expect. I also wanted to reinforce his belief that he was really going to fly away. If he sensed betrayal, this whole thing could blow up in our faces. After a couple of hours our guys came back from their search for Charlie's shelter in the woods and said they still couldn't find it. Charlie tried to explain again, but even after a second try our agents still came up empty. I was growing concerned that Charlie might lose faith in this whole plan, so I asked him to really spell out the directions for us. He drew a crude map on a coloring book and threw it down the stairs. I gave the map to our agents and they went off to try again. "We're hungry," Charlie said. We sent a police cruiser to the closest McDonald's, and when the food arrived the HRT guys covered me as I placed it halfway up the steps. A moment later, Cheryl came down to pick up the food. This was the first time she and I had a chance to look clearly at each other. She was a pretty girl, but frail and terrified. Earlier, the command post and I had discussed whether or not we should ever try to grab her. We weren't sure if she would cooperate if it meant leaving her child, so we'd decided the most we could do was to hold up a sign asking if she was okay, then stand ready to sweep her out of the way if she appeared to want to flee down the stairs. 
I held up the sign— _Are you okay?_ —and she looked at it, but she made no response. She knew Charlie was watching her every move. An hour or so later our agents returned. They had located the shelter and brought back the Easter candy and other items Charlie had requested. Our four-phase plan was ready to begin. Then the command post called. FBI legal personnel wanted to clearly document that we had given Charlie every opportunity to surrender. They told me that I needed to ask Charlie one more time if he would come out. I explained that this might raise his suspicions and mess up our agreement. Still, I had to do it. I gathered Bill Wang, who had relieved Gray hours earlier, and the HRT operators close around me and quietly explained what I was about to do. I told them they needed to listen carefully to my request for Charlie to surrender, because they needed to be able to describe in court not only what I said but what Charlie said in reply. I wasn't sure how to raise the issue, so I just forged ahead. "Charlie, we're about ready to begin the process, but before we start, my boss wants me to ask you one more time if you'll just put down your weapon and come down." "No way," he said. Then I uttered the dumbest thing I have ever said as a negotiator. "Okay," I said, "I won't kick that dead horse anymore." I could almost see the words drifting up the stairway, and I wanted to reach up after them and pull them back. Charlie didn't miss a beat. "Dead horse? Is that what's going to happen?" "No, Charlie, it's just a figure of speech. What I mean is that if you've made up your mind, then that's the way it is, nothing more." The tension eased again, and I reported back to the command post this final attempt to get Charlie to surrender. We were now ready. The command post wanted to know from what door of the house they would exit. "Charlie," I said, "what door are you going to come out of?" "Why do you want to know? So you can shoot me?" "No. 
It's just that we're going to go out the back door and we'll be waiting behind the house. I don't want you to come out and trip over us." Charlie never answered me. On the phone, the command center asked where I thought he would exit. I said I was pretty sure he would come out the front. Phase one began with the two agents carrying the bags of personal items to the helicopter. A short time later the helicopter engine revved up and the big rotor began to whirl. "Charlie," I said, "we're leaving the house now. Remember, you said you wouldn't harm the pilot." Silence. "Good luck, Charlie. I hope you and Cheryl will be okay." There was another moment of silence. Then I heard him say, "Goodbye." The six or seven of us who had been on the ground floor now moved through the back door to the outside and hung close to the rear wall. We could not see the area in front where the helicopter was waiting. A tactical radio was within earshot, though, and I heard someone say that Charlie and his family had come out the front door. Minutes passed, then I heard an explosion. I edged to the corner of the house and looked cautiously around it. I saw Cheryl standing alone in the middle of the field, screaming. Lying at her feet were big Charlie and little Charlie. I feared the worst. Someone yelled for her to start running, and Wayne Waddell went after her. Another agent, Terry Neist, picked up little Charlie and cradled him in his arms. Before leaving the house, Charlie had tied the boy onto his back with the cloth belt from a bathrobe. Little Charlie's head had been only inches behind his father's—not much room for a marksman to find a target. Charlie had held Cheryl close in front of him, the carbine pushed into her back. The distance from the house to the helicopter had been about a hundred yards, perhaps a bit more. As Charlie moved forward, shielded by his captives, the snipers had called out over their radios, one after the other. They never had a clear shot. 
Then, when Charlie was about halfway to the chopper, the machine suddenly lifted off the ground. At that moment, agents tossed flash-bang diversion grenades at Charlie's feet. The noise and bright light must have disoriented him, because he fell to one knee. He said to Cheryl, "This is it, Kitten." But the fall had shifted little Charlie's weight, opening up a space between father and son, and in that split second an FBI marksman fired a shot that entered Charlie's right cheek and exited the rear of his head. I hurried over to Terry and little Charlie and put my hand on the boy's head. "How you doing, Charlie?" I hoped the boy would recognize my voice and find it reassuring. He was shaking and very scared. Cheryl was brought over and took him in her arms. She looked toward the FBI emergency medical technicians, who were frantically trying to revive Charlie. "My God, they're going to bring him back and he's going to do this to us again." But this would not turn out to be the case—the shot proved fatal. My wife later asked me if I'd formed any kind of bond with Charlie, and indeed I had. The moment of coming so close to the brink was a kind of shared event, and I think it helped set up the positive interaction that followed. I don't think Charlie ever would have walked out of that house if we hadn't established some sense of trust in that moment. But despite this I felt no remorse about my recommendation that we use deadly force. I was convinced it was the only way to save Cheryl and little Charlie. I realized I had left my jacket in the house and started walking back to get it; Steve Romano joined me and asked me if I was okay with what had happened. "I'm fine," I said. Then I added, "But I'm mad at that son of a bitch for making us do it." It felt like such a waste of life. By the time I reported to the main command post in Sperryville, Cheryl was sitting there calmly with little Charlie in her lap.
She rose to greet me, then, with tears streaming down her face, gave me a hug. I did not, could not, say a word. Everyone in the command post was watching. It was a long time before my voice worked. I don't remember what we said or what thoughts we shared. I remember only the incredible sense of relief I felt in seeing them both alive. When I drove home late that afternoon, our neighborhood was as calm and serene as it had been when I left, only this time Carol was standing on the front porch. I parked the car in our driveway and got out, heavy with fatigue. She had been watching the television news and knew how the incident had turned out. With a big smile on her face she said, "Welcome home, Batman. Now take out the garbage." ——— The following Christmas I received a card from Connecticut. It was from Cheryl, and this is what she wrote: _Thank you so much for all you did for little Charlie, big Charlie, and myself. Lil Charlie has grown so much since April. He seems to be doing very well with everything. He goes to counseling every six weeks just so they can keep an eye on him. I'd like to thank you for all you did in Virginia. When you kept on talking, even when Charlie wouldn't talk or let me talk, your voice was so soothing to hear for me and for big Charlie. At one time Charlie was a very nice person and I know he ended up liking you and he had wished you could have met under different terms. I know for myself I will never forget your voice or you, for all your caring and help we hope you and your family have a great holiday season and that God will be with you always_. _Thank you_ , _Cheryl Hart and LiL Charlie_. # CHAPTER TWO # **MY START** _People grow through experience if they meet life honestly and courageously. This is how character is built._ —ELEANOR ROOSEVELT I could not have had a more quintessentially American childhood. I was blessed with loving and supportive parents. 
I grew up three blocks from the ocean in Atlantic Beach, Florida, near Jacksonville. I spent my summers swimming in the ocean, rafting in a nearby lagoon, building forts in the adjacent woods with my buddies, and later taking up surfing as that craze moved east from California. I was a typical clean-cut child of the fifties and early sixties. At Fletcher High School I was captain of both the track and cross-country teams; I also worked afternoons and Saturdays at an office supply store and mowed lawns for extra money. At the age of twelve, I had an experience that would give me a sense of focus and a goal that I would pursue for the rest of my childhood. Believe it or not, it came while watching _The Mickey Mouse Club_. For those too young to remember, _The Mickey Mouse Club_ was a variety show with cartoons and skits involving a group of wholesome young boys and girls known as the Mouseketeers. I'd often watch it after school. One day not long after my twelfth birthday, the program went to Washington, D.C., to visit the headquarters of the FBI. Those who didn't live through the 1950s and early 1960s would have a hard time understanding the respect with which most Americans treated their government institutions at this time. This was well before campus protests and counterculture movements dominated the news, an era in which rock-and-roll stars such as Elvis Presley and the Everly Brothers were in uniform, Jimmy Stewart starred as an FBI agent fighting the Ku Klux Klan in _The FBI Story_ , and the Bureau was revered as our society's first line of defense against both crime and subversion. The Mickey Mouse Club's producers reflected this, approaching the FBI with palpable, almost worshipful respect. What I remember most from that show was a segment in which a Mouseketeer spoke with J. Edgar Hoover, the legendary director who had headed the Bureau since 1924. Seated on the steel-framed butterfly chair in our family room, I was utterly transfixed. 
Hoover looked the young host firmly in the eye and spoke about the FBI's mission; he talked about the high caliber of its agents and told stories of these agents chasing gangsters during the Roaring Twenties and tracking down German spies during World War II. It was like a boy's adventure novel come to life! But what really sealed the deal was when the host was taken to a firing range and allowed to shoot a Thompson submachine gun, the weapon of choice for both G-men and Al Capone. I was hooked. When my mom came home from work that day, I could speak of nothing else. Being a good mom, she went out and got me a kids' book about the FBI, which amplified and further dramatized all the stories of derring-do that Hoover had only hinted at. The book contained stories of agents tracking down dangerous fugitives, arresting bank robbers, and securing the release of kidnap victims. From that time forward I never wanted to do anything else. Of course, life was not as simple and sweet as it was portrayed on television, particularly if you lived in the segregated South, as I did. Throughout my childhood I would be reminded regularly that there were people who lived near me in Florida who had a very different kind of life. My first memory of this came on a shoe-buying expedition to Jacksonville, when I first noticed the omnipresent signs indicating separate water fountains, building entrances, and the like. I had never really appreciated the ugly face of discrimination before then, and I didn't like what I saw. I remember my parents sitting me down and telling me that segregation was not right, and emphasizing that we had a responsibility to look out for others less fortunate than us, regardless of their skin color. During my senior year at Florida Southern College, I took secondary education courses and did a teaching internship in history and sociology at Lakeland High School. This was 1972. 
School busing was causing protests as far north as Boston, and down in Florida, when Lakeland's all-black high school was closed and its students merged into two formerly all-white schools, it did not sit well with many people. During my internship at Lakeland High School, there were frequent altercations in the hallways between white and black students. Whenever the school siren rang at an unscheduled time, all the male teachers were expected to rush out to break up those fights. I had always been a kind of mediator and peacemaker among my friends, but this was my first exposure to crisis containment as an adult. These were kids, technically, but not long after I started on the job, as I stepped between two football players, one black and one white, to create a physical barrier, I realized that they were easily as big as I was, and half crazy with anger. I'm almost six foot two, but they were bigger and stronger than I was. I don't remember what I said, but I was able to use words to calm them down and keep them apart until some of their anger had subsided. I knew intuitively that once the fists started flying, it was all over. For some Americans during this period, the stark contrast between the inspiring goals of the civil rights movement and the reality of everyday life caused them to revolt to varying degrees against America's institutions, including the FBI. But I was raised more traditionally, and I never really embraced the counterculture movement. I continued to dream of being an agent; for me, the FBI still represented justice, something American society seemed to need more than ever. And so when I graduated in the spring of 1972, there was only one job I really wanted. I didn't want to run a business or be a banker. I wanted to be an FBI agent. Problem was, you needed to be twenty-three and have three years of other work experience. 
I had enjoyed teaching in spite of the time I spent breaking up fights, and thought this would be a great way to gain the required work experience, but full-time positions were scarce, and so I became a substitute. I also met with a recruiter at the local FBI office in Jacksonville. He suggested an idea I hadn't considered before, which was to start as a clerical employee at FBI headquarters, something I could do right away. So I filled out an application, sent it in, and was eventually accepted. A few months later I found myself loading up the 1954 Ford I had purchased from my grandfather and driving up to Washington, D.C. The FBI I joined in 1972 was in a kind of time warp. Even though J. Edgar Hoover had died a few months before I came on board, his presence was still felt, largely in the straitlaced conservatism of the Bureau. No matter how much the world had changed since the Beatles and Bob Dylan had shaken up American culture, agents at the FBI still wore white shirts only; some still even wore fedoras. Not long after I joined, one agent was given a special commendation for nabbing a Top 10 Most Wanted fugitive. But he was also reprimanded when a photograph taken during the collar showed him wearing a sports jacket rather than a dark suit. This conservative atmosphere didn't dull my wish to be an agent; the only trouble was that I wasn't one yet. I immediately discovered that, far from being thought of as agents in training, clerks were members of a different caste altogether, one whose purpose was to do the entirely unglamorous work of supporting the field agents. I found myself engaged in mundane tasks such as delivering mail and filing paperwork. There was a seemingly endless pile of documents. To say that I was demoralized would not do the experience justice. I hung in, though, and after a few months, I got to know an agent named Jim Sherman, who became a kind of mentor for me.
He knew how much I wanted to become an agent, and while he couldn't make that happen any sooner, he did arrange for me to get an interesting assignment assisting him on the Foreign Counterintelligence Squad, collating data on the movement of foreign spies in Washington. It sounds more exciting than it was—but it was certainly a huge improvement over pushing the mail cart. During my time working for Jim I had another stroke of good fortune. One night, about three months after I'd started working for the FBI, I went out with other people from the office and found myself seated across from an attractive young woman in our group. Her name was Carol Drolsbaugh, and I plucked up the courage to introduce myself. She had joined the FBI as a stenographer just a few months earlier, right out of high school. I was immediately attracted to her irreverent wit, which distinguished her from many of the more traditional, restrained southern girls I'd grown up with. I didn't have much money to date in those days (Carol made more as a stenographer than I did as a clerk), but we began to see each other regularly. In the fall of 1973 my dad began having serious back problems, so I requested a transfer from Washington to the FBI Field Office in Jacksonville, Florida, just a few miles from home. This meant being apart from Carol, and we missed each other so much that when she came down for a visit in December we got engaged. We were married in August 1974 and eventually moved in to a great little apartment near the ocean in Neptune Beach. In 1976, after three and a half years as a clerical employee, I received the formal letter appointing me to join the incoming class of special agents for training at the FBI academy in Quantico, Virginia. For the next seventeen weeks, I studied hard for each exam, concentrated on my shooting skills, and got in great physical condition. 
I scored near the top of my class in every category, and, thanks to my years running track in high school, I also won every distance-running challenge. After graduation in 1976, I was assigned to the fugitive squad of the FBI Field Office in Columbia, South Carolina. Many agents begin their careers doing tedious background investigations, but within two weeks I was apprehending criminals. A few weeks into the job, our office received an alert about a South Carolinian wanted for murder in California. He was on the run, and the thought was that the most likely place for him to be was back home in our area. Relying on the standard gumshoe work of contacting the fugitive's family members and every other known associate, eventually my partner and I tracked him down to a crumbling white frame apartment building surrounded by palm trees and azalea bushes. This was late August, and when the landlord let us in the front door, it was so hot inside I nearly fainted, but I rushed on through the apartment and found the man in bed, reaching for his gun in a holster on the nightstand. Fortunately, he stopped before I had to fire my weapon in self-defense. For a young guy who'd always wanted to be a G-man, this was very exciting indeed. One of my training agents in South Carolina was named Jimmy Calhoon. Jimmy looked the way I'd always thought an agent should look: with dark hair and a square jaw on a rugged face, he was a dead ringer for the cartoon police detective Dick Tracy. He was a tough guy who had played football at Florida State, and he exuded a confidence and authority that I'd never seen before. One night we were looking for a fugitive, a guy who had murdered someone in another state, and we walked into the toughest bar in town, a dark, smoke-filled place full of tough-looking men. Jimmy moved into the room and slowly stared down each person, to see if the guy we were looking for was there. Not one of them dared to make eye contact with him.
But Jimmy wasn't just a tough guy. Over the next two years he would teach me his own kind of street psychology: how to speak with witnesses, victims, and criminals and gain their cooperation. When we were out trying to develop leads, he could adapt his approach for a big-city lawyer or a farmer down at the feed store in a small town. He could tell jokes, and he seemed to be able to talk to anyone about anything, whether it was crops, fishing, dove hunting, or taxes. He was as tough as anyone in the Bureau, but what he showed me was that good law enforcement wasn't just about using a gun or a nightstick; it was also about communication. As much as Jimmy was a positive role model, there were others whose actions taught me what not to do. There were a couple of guys in the office who constantly took unnecessarily confrontational approaches, arrogantly asserting their authority as FBI agents. In one bank robbery case another agent and I were interviewing a guy who we were pretty certain could help us locate the robber. Practically before he had sat down, the agent was accusing him of lying and covering for his friend. I felt my frustration rise as the witness clammed up—I was certain I could have gotten what we needed from him with a more subtle approach. Carol and I enjoyed our two years in South Carolina and even bought our first house there, but "first office" agents get transferred, and in 1978, the FBI summoned me back to the Washington Field Office and assigned me to the Foreign Counterintelligence Squad. It was great to return as an agent, though I made sure to treat the clerks with the respect that they deserved. I began my new Washington life as an agent developing evidence in espionage cases, while also working contacts to recruit defectors from hostile nations as counterspies for us. But the previous years had seen the challenges facing the FBI evolve. 
A series of crises would awaken it to the threat of international terrorism and to the need for a more rigorous approach toward handling major incidents. Both of these things would become the focus of my work in the years to come. During the Munich Olympics in the summer of 1972, eight members of the Palestinian Black September terrorist organization seized and ultimately killed eleven Israeli athletes. Despite the tensions rising in the Middle East since the 1967 Six-Day War between Israel and its Arab neighbors—and the fact that at least one West German forensic psychologist had predicted this hostage event almost exactly as it played out—there was no armed security for the Munich games, no checkpoints. When the hostage taking began, there was no federal authority in place to deal with it, which left local and regional police to make do as best they could. They had no radios, woefully inadequate firepower, and too few snipers to be effective, and they relied on flawed tactics that put police forces in danger from their own cross fire. Once the action began, decision making was mostly ad hoc and cumbersome, with one tactician sharing responsibility with two politicians. For many law enforcement officials, the Munich siege of 1972 was a wake-up call. Before that time, when subjects took hostages, responding police would simply demand that the perpetrator come out and surrender. If the hostage taker refused, the police would then mount an assault. Sadly, that rigid and inflexible approach often resulted in loss of life. Even when it did not, the outcomes depended more on luck than on the application of a well-established set of procedures. In New York City, just one week before the events in Munich, police had bumbled through the botched bank robbery and hostage taking that was later depicted in the Al Pacino film _Dog Day Afternoon_.
The actual fourteen-hour siege became a spectacle on live television, and it drew a crowd of three thousand people to the street corner in Brooklyn where the bank was located. Fortunately, the hostages were eventually rescued, and loss of life was limited to one of the perpetrators. But to all observers, it was clear that the NYPD and assisting FBI lacked an effective response. A more egregious example of the state of police crisis procedures (demand compliance, go in if your demand is refused) had been the Attica prison riot, also in New York State, which had taken place only one year earlier. When negotiations failed to bring results, the State Police moved in with tear gas and shotguns, the net result of which was the death of twenty-nine inmates and ten correctional officers. A Special Commission of the State of New York later described it as "the bloodiest one-day encounter between Americans since the Civil War," excepting perhaps the Indian massacres of the late nineteenth century. The New York police would lead the way as law enforcement sought to respond to these kinds of crises. Shortly after the Munich and _Dog Day Afternoon_ events, New York police commissioner Patrick Murphy established a committee to explore ways to respond to crises in a more organized and effective fashion. The committee's conclusions led the NYPD to create a full-time unit—the Emergency Services Unit—that would be responsible for responding to crisis events. No longer would the response to and management of the incident be left in the hands of whoever showed up first. They also established protocols emphasizing proper containment of the situation as well as nonviolent approaches, in contrast to what had previously taken place. In January 1973, the Emergency Services Unit had its first opportunity to apply this new, more restrained approach when officers responded to a robbery in progress at John and Al's Sporting Goods Store in Brooklyn.
A group of perpetrators held nine hostages, and an immediate exchange of gunfire resulted in the death of one officer and the wounding of two others. Nonetheless, forty-seven hours later, the situation was resolved with all hostages released and all four perpetrators in custody. A post-incident review concluded that restraint had succeeded far better than earlier, more aggressive approaches. One flaw that the review commented on was that communication with the subjects inside had been uncontrolled and uncoordinated. This prompted the NYPD to assign Lt. Frank Bolz and Officer Harvey Schlossberg, a trained psychologist, to build the nation's first dedicated hostage negotiation team, selecting and training a group of officers specifically for this purpose. In 1974, the FBI recognized that the NYPD was on to something, and developed its own formal hostage-negotiation training program at its Quantico training academy. This course was designed for use by FBI agents as well as police officers. Those who volunteered for negotiation training, selected from each of the FBI field offices around the country, tended to be mature and experienced agents, known in their offices as solid, effective, and successful. Many had shown a knack for developing informants or gaining confessions from otherwise uncooperative criminals. The negotiation skills they learned during the course further enhanced their ability to communicate with citizens on the street and avoid verbal confrontations. After training, these agents would then work with FBI SWAT teams in regional field offices around the country to help resolve hostage and barricade situations. Then as now, FBI agents were assigned to a SWAT team or field negotiation unit on a part-time basis only. An agent might spend most of his time hunting down mobsters and get called in every now and again when a siege occurred.
The original concept developed by NYPD and adopted by the FBI focused primarily on bargaining skills, among them reciprocity; negotiators would in essence say, "If you cooperate with me and do this, I'll cooperate with you and do that." This gave rise to the principle we saw applied early on in Sperryville: never give a hostage taker anything unless he gives you something in return. During my initial training to become an FBI agent, I had made a mental note to try to become involved in this new specialty at the earliest opportunity. In 1978 I mentioned this interest to my partner Ken Schiffer, a very experienced senior agent who knew the WFO training coordinator, the person who decided who got to attend the negotiation training program at Quantico. With Ken's support, two years later, in 1980, I was given the opportunity to attend the FBI two-week negotiation course. During the course I learned the mechanics of the negotiation process, studied abnormal psychology, heard case studies, and participated in role-playing exercises. I was deeply impressed by the power of the simple communication techniques being taught. I was also impressed by the insight of the man teaching these new skills, Agent Fred Lanceley. Fred's great skill was his ability to break down incidents into their component parts and glean the dos and don'ts. He also had a unique ability to draw out stories from the agents and police officers he trained and use this information to build a base of knowledge. Fred taught us that the key to successful negotiation was to discern the subject's motivation, goals, and emotional needs and to make use of that knowledge strategically. Once we understood the hostage taker's real purpose, we had a better chance of convincing him that killing the hostages would not serve that purpose and would only make an already bad situation worse. 
One of our most effective tools for negotiation is to offer the hostage taker something he wants in exchange for something we want—ideally, the release of at least one hostage. (In the Charlie Leaf situation, as I've said, this didn't work—he didn't want anything from us—so we had to try a different strategy.) Often we'll say something like, "Why don't you help me help you? Give me something to work with, and let's see what we can accomplish working together." If the subject resists making a trade, the negotiator might say, "I'd like to help you, but my boss just won't let me send in what you want until you send someone out in return." By making the subject work hard for everything gained, we wear down his resolve. He realizes he doesn't have as much power or control over the situation as he thought he had. We can not-so-subtly reinforce that realization by showing a visible tactical force capability. This can also be a leverage point in moving negotiations along. For instance, we'll suggest to the hostage taker that we won't kill him as long as he behaves reasonably. Dr. Mike Webster, a Canadian psychologist who has worked with the Royal Canadian Mounted Police and FBI negotiation programs, describes this as the "parallel approach" to crisis resolution, in which we contrast the benefits of cooperation with the risks of resistance. Authorities negotiate in good faith, while simultaneously preparing for and showing their ability to undertake tactical action. Limited demonstration of tactical capability can help the negotiation process along by encouraging dialogue. Too little action can make the subject feel confident and secure, thus less likely to negotiate in earnest. Too much action might trigger a firefight, which is what Webster calls the "paradox of power"—the harder we push, the more likely we are to be met with resistance.
Law enforcement officials who have become angry and agitated owing to a lack of progress are more inclined to use force in a non-incremental way. Most hostage takers do not begin their day planning to kill someone and then die in a hail of bullets. They are usually focused on getting their demands met. In a small number of cases, a suicidal individual purposefully holds hostages and seeks a confrontation with the clear intention of dying at the hands of the police. These rare cases are classified as "suicides by cop." But most hostage takers want to live; even many who seem bent on self-destruction are, at most, ambivalent. That ambivalence serves as the access point to insert the wedge of negotiation. The duration of most such incidents is usually only a few hours or less, with surrender achieved well over 90 percent of the time when a proper negotiation approach is used. Very few activities in law enforcement yield success rates that high. Even so, it can be a complex and time-consuming process to move the hostage taker to a point where he realizes he won't achieve his goals, and that's why we think about much of our work as stalling for time. After a few hours, days, or even weeks, things may not look as bad as they did at first, both for the hostage taker and for the authorities. Alternatives to violence begin to emerge, and our goal is to move the hostage taker away from the tunnel vision that prevents him from seeing those alternatives. One example from these early days came during a domestic hijacking when a young man brandished a weapon and demanded to be flown to Cuba. The FBI and local authorities surrounded the plane and negotiations began. At some point during the dialogue, the hijacker forcefully demanded that the negotiator send him in a hot cup of coffee with cream and two sugars. The negotiator replied that it would be difficult to get coffee out to the plane, but that he would do his best. Several hours later the coffee showed up.
It was black with no cream, very cold, and it contained no sugar at all. A short time afterward the young hijacker surrendered. With the man in custody, the FBI agent asked what had made him decide to give up. He responded: "I figured if I couldn't even get a decent cup of coffee, I certainly wasn't going to be able to fly to Cuba." Few case studies so succinctly illustrate the value of the negotiation process: Contain. Open communications to deescalate tension. Stall for time. Lower expectations. Make him bargain for everything. I came away from the training course excited by what I had learned and anxious to spread the word. Even though I was a brand-new negotiator, I immediately began to run occasional negotiation training courses for officers in the D.C. area, and I became negotiation coordinator for the WFO—all of this in addition to my day job hunting down foreign spies. I really enjoyed those training courses, and they were hugely helpful in developing my own knowledge of the field. I'd travel to a local police station and put on a daylong seminar for fifteen or twenty guys. I'd say, "I don't have a lot of practical experience, but here are the things I've learned from the FBI academy, and here's what they've learned from around the world." In those days the FBI had a rep for taking and not giving, and so the police really appreciated getting this information. In return, they told me about the hostage incidents in which they'd been involved, and we talked about how the FBI methods could've been applied to them. In 1982, as global terrorism became an increasing threat, I transferred to the WFO Terrorism Squad, identifying and arresting suspected terrorists, developing informants to penetrate groups we were concerned about, and monitoring individuals in the United States who we believed were supporting terrorist organizations in the Middle East. 
In those days, there were probably only a dozen or so FBI agents who knew the difference between Iraq and Iran, or between Sunni and Shi'ite. But I made it a point to learn everything I could about the Muslim world and its troubled politics, as well as the threats presented by Islamic extremists. For the next eight years I would travel the globe working terrorism cases, often applying communications lessons from the FBI negotiation course to the task of recruiting informants and investigating terrorist incidents. One early case for the WFO Terrorism Squad occurred close to home. I was called to assist with the investigation of the kidnapping of Clelia Quinonez, the wife of a former Salvadoran ambassador to the United States. Mrs. Quinonez had been abducted from her home in Miami. FBI teams responded to the Quinonez residence and set up technical support and provided assistance to Roberto Quinonez, who was negotiating for the release of his wife. The first task was to figure out where the kidnappers were. As just about anyone knows who has ever seen a cop movie from this period, in those days law enforcement could trap and trace telephone calls, but it took a while. Digital technology, of course, now makes this process instantaneous. But at the time, the information available immediately was limited to the region of the country a call was coming from. The longer a caller stayed on the line, however, the more specific technicians could be in identifying the town, the neighborhood, and ultimately the precise telephone where a call originated. Sophisticated criminals knew to limit the length of their calls, but the men who had abducted Mrs. Quinonez were not sophisticated. One of them had done odd jobs at the Quinonez house, and he and his partner had seen this contact with a wealthy family as a chance to make some easy money. FBI agents coached Ambassador Quinonez on maintaining just the right degree of cooperation while drawing out the conversation with questions. 
Again, this was a stalling-for-time strategy to give the perpetrators an incentive to keep talking, while also stretching out the discussion in order to keep them on the phone longer each time. When the abductors made a demand, we instructed the ambassador to break it down and address it in tiny increments, which necessitated additional phone calls. With each phone call, the technical team was able to zero in more closely on where the calls were coming from. Eventually, they narrowed it down to a few square miles in the Washington, D.C., area. We were getting close. The ambassador agreed to a ransom of $1.5 million but insisted on receiving proof that his wife was still alive before making the payment. As we'd expected, the kidnappers agreed, and they told him they would have Mrs. Quinonez herself make the next call so that he could hear her voice. Armed with that information, as well as knowledge of an approximate location, we stood at the ready to respond, rescue, and arrest. We hoped they would bring her to a phone booth. I led a six-person team assigned to a rough part of northwest Washington, ready to respond the moment the technicians identified the specific location from which the call was being made. Additional agent teams from WFO were spread out to cover other parts of the city. Usually, in such a situation, the agents will locate the suspect making the call, then follow him back to the location where the victim is being held. In this case, the perpetrators saved us a step. Four days into the crisis, Agent John Heieck and I had set up shop in a crime-ridden part of town and were sitting in an unmarked FBI car parked on a dark street near the Pitts Hotel, a run-down place whose name seemed entirely apt. We were across the street from the entrance, about fifty feet down the block. Nearby was a phone booth that we suspected might be used by the kidnappers, since we thought earlier calls might have originated from this block. 
The two other teams I was responsible for were parked at similar locations not far away. Our attention was focused on the radio as we watched the comings and goings of those on the street. The idea was simply to be spread out in this neighborhood, ready to move in quickly when we determined the location the call was being made from. We sat there for the rest of the afternoon and into the evening. In this mostly black neighborhood, two white men sitting in a sedan was a fairly obvious giveaway that we didn't belong, but no one seemed to care. We exhausted all manner of small talk as we endured the tedium of a stakeout. At 9:40 p.m. the radio finally crackled to life with the news we'd been waiting for. Mrs. Quinonez was at that moment talking with her husband from a pay phone, which had been traced to a street location just outside the Pitts Hotel. John and I could see a white woman standing at a phone booth, with a young black man on either side of her. We had a photograph of the victim, and we had binoculars. A closer look confirmed that this was indeed Mrs. Quinonez. We radioed for backup, got out of the car, and drew our weapons as we approached them. When we got twenty yards away, we shouted, "FBI!," flashing ID and pointing our revolvers. "Hands on your heads! Get on the ground! On the ground now!" John moved to handcuff one of the subjects and I moved to handcuff the other. Meanwhile, the other members of our team pulled up, got out of their cars, and grabbed Mrs. Quinonez. She simply dropped the phone, which was now dangling at the end of its cord a few feet away from me. As I moved to handcuff one of the subjects now lying facedown on the ground, later identified as Craig Blas, I noticed his body rise slightly off the ground and his hand move toward his waistband. I saw the butt of a revolver. "He's got a gun," I yelled. Then I pounced on his back and jammed the barrel of my revolver directly into his ear. 
"Move another inch and I'm going to blow your fucking head off," I said. Then I reached down and confiscated his revolver. Mr. Hoover would not have approved of my language, but it certainly got Blas's attention. With the handcuffs on Blas, I raised him to his feet and quickly frisked him for additional weapons. I then turned him over to two other agents, who took him back to my vehicle for transport. That's when I noticed that the telephone receiver was still dangling, presumably with Mr. Quinonez in Miami still on the other end. I picked up the phone and said, "This is Gary Noesner up here at WFO. We got both subjects and the victim. She's safe." I could hear a loud cheer come over the line from Miami. As I continued to hold frequent training sessions for police officers in Washington, I have to admit that, like any other highly trained professional, I was curious to see how my expertise would hold up in a major hostage crisis. In those early days, I often worked closely with the Washington Metropolitan Police negotiation team and assisted in several of their hostage or barricade incidents. But the siege that would first really put my negotiation skills to the test occurred 240 miles to the south, in Raleigh, North Carolina, in 1982.

# CHAPTER THREE

# **MY FIRST MAJOR SIEGE**

_When the conduct of men is designed to be influenced, persuasion, kind, unassuming persuasion, should ever be adopted._

—ABRAHAM LINCOLN

It all began on October 7, 1982, when a passenger listed as W. Rodriquez boarded the 10:40 p.m. Amtrak Silver Star out of Jacksonville, Florida, bound for New York. Accompanied by his sister, Maria, and her two children, Julie, four, and Juan, nine months, he entered the sleeping car they'd reserved and handed the porter a three-dollar tip for carrying his luggage, which instead of clothes and other personal items held a Browning semiautomatic pistol and a fully automatic MAC-10 submachine gun. "W.
Rodriquez" was, in fact, one of many aliases used by twenty-nine-year-old Evangelista Navas Villabona. Nicknamed Mario, he was a native of Colombia who had entered the United States illegally and set up shop trafficking drugs into New York. All was quiet on the Silver Star until 5:45 the next morning, when passengers in the adjacent sleeping berth awoke to the sound of children crying and a man and woman arguing loudly in Spanish. The argument grew increasingly heated for the next hour, gaining intensity until, shortly before the train arrived in Raleigh, North Carolina, shots rang out. The conductor radioed ahead for help, and local police were waiting when the train arrived in Raleigh. In charge of managing the incident was Raleigh police chief Frederick K. Heineman, a retired NYPD official, easily identified by his crisp big-city accent amid the southern drawls. Chief Heineman was an experienced and thoughtful law enforcement leader and well aware of the dangers associated with this type of situation. The FBI offered our resources and deployed personnel to the scene. We also went about the task of fully identifying Mario Navas and learning all we could about his criminal and mental-health history, as well as his connections in both Florida and New York. We discovered that in 1976 Mario had been convicted of conspiracy and possession with intent to deliver a narcotic in New York. He had been sentenced to fifteen years and had served time at three prisons before being paroled in 1980 on condition that he return to his home in Colombia. His prison record also indicated that he had an explosive temper and was given to fits of rage. During the next couple of hours, Amtrak officials and police evacuated the other passengers and detached the train car holding Mario, its windows curtained, onto a side track about fifty yards to the right when viewed from the station. There was one empty car immediately adjacent on either side. 
Responding police officers, shielded by the steel girders of the station, attempted to communicate with the Colombian via loudspeaker, but their overtures were met with silence. Next, a tactical officer under heavy cover crept up to Mario's train car and attached a listening device as well as a speaker. As he worked to set up this means of communication, he noticed a hole where a bullet had exited the compartment door. At around 9:00 a.m., the Raleigh officers on the scene heard another shot ring out from inside the compartment. At this point they considered the option of storming the train, but they simply did not know enough about what was going on inside and who was at risk. At 10:20 a.m., the portion of the train not isolated by the police pulled out of the station to continue with the journey to New York. Based on the few facts available at the time, neither the FBI nor local police had any reason to assume that Mario had boarded a train in Jacksonville with the intention of shooting off his weapons just before arriving in Raleigh. The loud argument reported by witnesses suggested that a domestic dispute had triggered the violence. It made sense that the subject inside had acted spontaneously, was now scared, and probably had no clear plan on what to do next or how to extricate himself. About an hour after the other train cars left the station, several more gunshots rang out from inside the compartment. Once again, the police had done nothing to provoke Mario. Was he killing his captives one by one? Had he killed himself? The police simply didn't know. Chief Heineman knew he had three options. The first was to mount a rescue attempt. The second was to establish a dialogue with Mario to convince him to surrender. The third was to wait and do nothing, and see if he would come out on his own, what the NYPD's Harvey Schlossberg used to call "dynamic inactivity." 
Heineman questioned Amtrak officials to try to pick up any insights that might help him devise a strategy. He learned that railroad passenger cars are made with heavy-gauge steel in order to survive derailments, which makes them almost impenetrable. He also learned that the thick glass windows were built to withstand gunshots coming in from the outside, which meant that a rescue attempt was not a viable option. He knew it wouldn't be like trying to kick in a wood-framed door in a tenement building; Mario would have plenty of time to kill the children if he was so inclined. And Heineman couldn't simply wait, because the children were at risk. So he was going to have to establish a dialogue with Mario. Unfortunately, the Raleigh Police Department did not have a Spanish-speaking negotiator. Fortunately, one of the EMTs deployed to the scene was Jorge Oliva, a Cuban native. Heineman recruited him on the spot and installed him in another sleeping compartment about fifteen feet away. Jorge took over the effort via bullhorn to elicit a response from Mario. At around 12:30 p.m. officers heard four more shots fired from inside the compartment. So Mario had not killed himself earlier. But what was going on? Was he simply firing off rounds to keep the police at bay? Most of all, were his captives still alive? Throughout that afternoon and early evening, Jorge conveyed to Mario offers of food and drink, with special concern for the children. No response. Then the listening device attached to the compartment door picked up the sound of the children crying. Okay, the kids were still alive. But this only increased the urgency of establishing communication; they were clearly in distress and in danger. At 8:00 p.m. Mario fired another shot. Then silence returned. Four hours later, almost nineteen hours into the standoff, Mario suddenly and inexplicably yelled out to the police in Spanish, "Everything is okay." He told the police to leave him alone.
At least he was now communicating with words rather than gunfire. With some coaching by officers on the scene, Jorge stepped up efforts to open a dialogue, throwing out questions like "What's going on? How can we help?" But shortly after midnight, Mario stopped communicating just as abruptly as he had started. At 9:55 a.m. Saturday, Mario broke the silence once again, blurting out that he was holding a gun to the head of one of the children. Again, the police had done nothing to provoke this action or this announcement. Chief Heineman's frustration was growing, along with the pressure on him. He was in command of all of the personnel on hand, a job that included making sure that officers on the perimeter protected the scene from unwanted intrusion. He had to coordinate all of the assisting agency representatives, ensure that sniper/observers were relieved and allowed to get food and rest, speak to the press, and try to come up with a strategy to resolve this situation without loss of life. At 11:00 a.m., Heineman brought in a medical doctor to try to assess the condition of the children, based on the sounds they were or were not making. During the evening it had gotten very cold, but temperatures rose again during the day, which must have made the compartment hot, stuffy, and uncomfortable. Officers attached a better listening device, almost like a large-scale stethoscope, outside the compartment door so that the doctor could listen in. He heard Julie asking her mother to wake up, but nothing else that would indicate the child's own condition or that of her baby brother. At 11:37 a.m., two shots were fired in a rapid burst, the first indication that Mario had a machine gun—yet another fact weighing against a tactical assault. At 1:00 p.m., Mario shouted out again, threatening to kill himself and the children. He also demanded that orange juice and matches be sent in. Jorge offered water, but only if Mario would let the children go. 
Through the listening device, the officers on hand could hear Julie saying, "Agua, agua." Then they heard Mario telling her to be quiet. Jorge continued to offer food and water, but Mario's only response was to yell obscenities at the police. It was a Saturday and I was at home in Virginia when I received a call from Fred Lanceley, my primary negotiation instructor at Quantico, who had become a good friend. Fred told me he had been asked to help with an incident on an Amtrak train; some shots had been fired and they were trying to negotiate with the guy, but he spoke only Spanish. Did I have someone on my team at WFO who spoke Spanish? I thought immediately of Ray Arras, a thirty-nine-year-old El Paso native who had just recently completed the FBI hostage-negotiation training course. He had come to the FBI at a relatively late age after running the El Paso Zoo. I had been impressed by his confidence and easygoing manner; he would be great in a tense situation like this. Fred told me to have Ray come to Davidson Army Airfield, located at nearby Fort Belvoir, where he would be picked up by an FBI plane and flown directly to Raleigh. This sounded like the kind of challenging case I had been hoping to be involved in. I had handled other crisis situations—someone holed up and threatening suicide, domestic disturbances that turned into barricades—but this was a chance to work a major standoff. And so I asked Fred if he could use my help in Raleigh. He agreed and told me to meet him and Ray at Fort Belvoir. In a couple of hours a four-seat Cessna took the three of us from Virginia to the Raleigh airport, where an FBI sedan ferried us directly to the Amtrak station. Just as we were arriving, at around 6:00 p.m., Mario fired two more shots through the compartment door. Tactical officers had been attempting to deliver the matches that Mario had asked for earlier, and apparently this movement had spooked him. 
The station in Raleigh is about the size of a typical suburban home, ranch style, with a portico out front facing a small parking lot. Fred, Ray, and I found Chief Heineman inside, looking understandably tired and beleaguered. He was a tall man with salt-and-pepper hair and a mustache; he wore a tie and tweed jacket. I had talked to my share of southern sheriffs over the previous couple of years, and I'd noticed that when meeting an FBI agent, they usually spent time on the slow exchange of pleasantries before getting down to business. In all likelihood they were meeting the federal agent for the first time, and they wanted to get a sense of whom they were dealing with. Heineman, though, got right down to business. His accent told me that he was a New Yorker, and it was obvious that he had a big problem on his hands and needed our help. "Thanks for coming down," he said, directing us to the station manager's office behind a snack bar. He and a couple of other officers briefed us on the situation, focusing on Mario's actions up to this point. Heineman told us that he thought Mario's sister, Maria, was dead. The listening devices were picking up only the voices of Mario, Julie, and the crying baby, Juan. The implications of those three people trapped inside a small train compartment with a decaying corpse under the hot North Carolina sun were not pleasant to think about. He told us about their inability to get Mario to respond, and asked our advice. Fred and I described a strategy to get him to start talking with us. The key in situations of this kind is to vocalize the fears and concerns likely driving the perpetrator's refusal to talk. "I know you're afraid and concerned that we want to hurt you," the negotiator might say. "I want to assure you that no one out here wishes to harm you in any way." Or "I know you're confused about what to say or do. 
I want you to know that I'm here to help you get safely out of this situation, but I need to be able to speak with you in order to help." We told Chief Heineman that even if the communication is all one-way, the calm and controlled voice of the negotiator can lower tension and create a more comfortable environment that encourages the subject to speak. Even though Mario might not be talking, he was probably listening. Chief Heineman responded that he viewed us as the experts; he would follow our advice. We suggested that Ray be the primary contact with Mario and that we have him take over as soon as possible. I would assist him as a coach, using Jorge to translate what was being said to me so I could in turn provide Ray with suggestions in real time. Fred would be nearby to provide strategic guidance, concentrating most of his efforts on gathering more of a criminal and psychological history on Mario in hopes of uncovering important personality clues that would help us get him to communicate. Also assisting would be FBI agent Lathell Thomas, from the Charlotte FBI Field Office. He was fluent in Spanish—previously he had been assigned to the field office in San Juan, Puerto Rico—and he would be able to help Jorge provide me with instantaneous interpretation of the dialogue between Ray and Mario so I could coach. While we were still inside the station, we received confirmation of what we had dreaded all along: Mario shouted out that his sister Maria was dead. We knew that when one person dies in an incident, the chances of there being additional loss of life greatly increase. What had been the worst-case scenario all along was now more likely than ever: facing a homicide conviction, Mario might decide to kill the children and then kill himself rather than surrender. 
Ray, Fred, and I walked back out the station's front door, on the side opposite the platform, then circled around through the parking lot, coming back to the rail lines at a point just beyond where Mario's compartment was stranded. We took up a position behind a steel girder supporting the roof over the platform. This put us about a hundred feet from Mario's compartment, just alongside one of the other cars attached to it. The only problem with this location was that Mario stood between us and the command post, back inside the train station proper. This meant that anytime we needed to consult with the chief, or even use the restrooms, we had to follow the same circuitous route through the parking lot to stay outside the potential range of Mario's weapons. Fred in particular made many, many trips, serving as liaison and information source. Still, sniper/observer teams were there, hidden from view, both to protect us and to use force if Mario suddenly came out with guns blazing. As darkness fell, the warmer daytime temperatures dropped precipitously, and Ray and I appropriated blankets from the passenger car sitting on the track next to us. Even wrapped in a blanket, I was still standing on the cold cement train platform in loafers. Then it began to rain, a steady drizzle that would continue through the night. The SWAT team had set up transmitters with microphones and speakers that would allow us to hear Mario and him to hear us. We now put on headphones, and Ray took a deep breath and purposefully picked up the microphone. Ray's an incredibly affable and outgoing guy, one of the more upbeat people I've ever met. As he launched into his monologue he projected a sense of calm and kindness. "Este es Ray," he said. Then, continuing in Spanish, he said, "I'm here to help you. How are the children?" No answer. Ray continued along the lines we'd suggested to Heineman. No one wanted to hurt Mario; he should speak with us so that we could help him. 
Again, no response, but Ray kept up the patter, which is more difficult than you might imagine. It can be counterproductive to keep saying the same things over and over—as well as torturous for both speaker and listener alike—so I tried to help Ray come up with fresh ways to make the representations that we thought would be most effective. "Think about the children. You don't want them to suffer." "Let us get you some food and some drinks. Those kids need to eat." "Think about yourself. Life is still worth living." I was struck by Ray's ability to come off as entirely genuine, speaking to Mario as if they were brothers. He carefully avoided the stereotypical "voice of authority" so often associated with law enforcement personnel. The cold monotone of Sergeant Joe Friday from _Dragnet_ is not what you need when you're trying to convey empathy and establish rapport. It's also true that individuals likely to engage in a standoff usually have a negative view of the police already. They expect law enforcement to be autocratic, demanding, and stern, so when someone like Ray projects real understanding, it disarms the subject and can help win his cooperation. As my psychologist friend Dr. Mike Webster says, "People want to work with, cooperate with, and trust people that they like." It's hard to like someone who is threatening you or challenging you. Ray had attended the FBI negotiation course at Quantico fifteen months earlier, and now he was getting his initiation by fire—this was in fact his first negotiation. Knowing this, I tried to help as much as I could, first sorting through the translation of all that was said, then whispering or jotting down my suggestions on a yellow legal pad and holding them up for him to see: "Mario, we know you must be scared, but nobody wants to hurt you.... We're really concerned about the children and want to make sure that they have something to eat and drink.... Help us to help you." 
We soon fell into a comfortable pace that enabled us to keep the monologue going. After two solid hours, Mario finally responded to a comment about the children. He began to shout at Ray: "You no-good son of a bitch. Stop talking to me. You're a no-good bastard. You don't care about the children. You're lying." I looked over to Ray to see his reaction to this outburst, but his demeanor hadn't changed. We had achieved our initial goal of getting him to respond. But we were aware that time, normally on our side, was less available because of the children. We had to keep Mario talking and listen for clues that would help us quickly determine his psychological state and concerns. Ray responded quickly that he did in fact care about the children, and he spent the next twenty minutes explaining this and trying to keep Mario engaged. Again, Mario responded. "If you really want to help the children, I need some IV fluids." He explained that he wanted us to pass a tube through one of the many bullet holes in the compartment door. Even though officers had been shot at the previous day when trying to deliver matches, we desperately wanted to comply with Mario's request. We had to demonstrate good faith. Chief Heineman agreed, and a short time later, he dispatched the SWAT team to undertake the task. We were all incredibly tense as the men entered the train, moving defensively and covering each other as they went. This guy had already fired his gun several times in response to perceived noises. Would he do it again? Ray explained carefully to Mario what we were doing, and as the men drew near Mario's compartment, the chief and I made our way to the other side of the train. We drew our guns and watched through the open window facing the door to Mario's compartment as the SWAT team tried to push the tube through the bullet holes, but apparently the outgoing bullets had taken a zigzag course, and the tube would not pass through. 
Ray explained the problem to Mario and suggested that he try to gouge out the holes to make them larger. Mario didn't believe us and soon became impatient and agitated, shouting out that we were dishonorable. Then he went silent again. Twelve of the most bleak and tedious hours I've experienced on the job followed. It was freezing cold, and we stood the entire night on the exposed platform, stomping our feet occasionally to try to stay warm. Periodically we heard the kids crying out. Mario's request for an IV had left us more worried about them than ever. Sandwiches or perhaps baby food is what you ask for to deal with hunger. An IV is what you ask for when a life is at risk. We continued to try to engage him throughout the night, but to no avail. When dawn arrived, the rain stopped and the sun came out. It was at that point that we began to notice the stench. At first we thought it might be the toilets on the train, but gradually it became clear that this was something different and an indication of what we had feared all along—Mario's sister was indeed dead and her body was starting to decompose. As the morning progressed the smell got worse. One of the police officers brought us some Vicks VapoRub, which we rubbed under our noses to mask the odor. I could only imagine how bad it must have been inside the compartment. I thought about the young children trapped in that six-by-ten-foot space, their volatile uncle pacing back and forth and firing off weapons, their mother dead on the floor. I quickly pushed those thoughts aside. Finding a way to get them out was a much better use of my time and energy than thinking about what they were going through. Later that morning we summoned a pediatrician to the scene. He warned us that we were very close to the point where the children might die from dehydration. Baby Juan might last another twelve hours without water, the doctor said. The older sister, Julie, could last perhaps another day. 
Now we had our deadline, but that still didn't give us a plan. We continued to try to get Mario to talk with us, without success. At 2:40 p.m., a shirtless Mario suddenly pushed back the curtains and threw open the window of the compartment. After a fifty-hour siege, and with the North Carolina sun creating sweltering temperatures inside, that train compartment must have been intolerable. Was Mario finally weakening? He stuck his head out and waved, then quickly ducked back in. Moments later, he strung a bedsheet out of the still-open compartment window. He told Ray that he wanted containers of food and water tied to it so that he could pull them in. According to the protocols described earlier, a negotiator typically would demand the release of a hostage in exchange. In this case, I knew that we simply had to seize any opportunity we had to keep the children alive. We conveyed the request to Chief Heineman, and within an hour, police officers had brought doughnuts, sandwiches, and drinks to the station. With Mario's window now open, exposing them to fire, the SWAT team members crawled under the train from the far side and tied the drinks and food up in the sheet. We watched as Mario hauled the sheet up through the window and into the compartment. We had finally been able to demonstrate our desire to take care of him and the children, and Ray immediately emphasized this. "Eat. Drink. Feed the children." He continued with this theme, using what negotiators refer to as "positive police actions," in which we reiterate all the good things we've done. The list also includes all the threatening things we purposely haven't done. For example, he reminded Mario that we had not fired at him when he opened the window. We could hear some movement in the cabin, and after a while Mario spoke up again. "Gracias, Ray." Ray continued to do most of the talking, asking about Julie, the little girl, and about the baby, Juan. 
Mario now opened up slightly; he gave only brief, noncommittal responses, but he seemed less agitated. He also began to call Ray "señor," a sign of respect. This felt like a major breakthrough after the events of the previous day. Ray continued to develop his rapport with Mario, and early that evening convinced him to surrender one of his weapons in exchange for some cigarettes and soft drinks. Mario wrapped the handgun in the sheet that had been used to deliver the food and lowered it to the ground. It turned out to be a 9 mm automatic pistol that was jammed and unworkable. Still, it represented a step in the right direction. We decided that this was the moment to press Mario to surrender. Ray told Mario that it was time for him and the kids to come out. Mario responded, "Only if my _padrino_ is here." Ray glanced at me and translated, "Godfather." "Who is your godfather, Mario?" He gave us the name of Paul E. Warburgh, a New York attorney who had defended Mario in a prior drug-smuggling case. He wanted Warburgh on the scene to guarantee his safety. The FBI office in New York quickly located Warburgh and spoke with him, and he agreed to help. Even so, I knew it would take time to fly the lawyer down to North Carolina on an FBI plane. We continued to press Mario to release the children. "Señor, what about Julie and the baby? Let's get them out of there, yes?" No response. Ray continued. "Send the kids out now and you can come when your lawyer arrives." He continued along these lines periodically over the next hour. Suddenly I heard Mario speak again, his tone matter-of-fact. I listened for the translation. "The baby is dead. Don't worry about the baby, Ray." Nothing in the way he uttered this sentence could have prepared me for the translation. He spoke as if telling us to get over something, with no hint of remorse. I looked at Ray and saw pure anguish. Mario continued. "I woke up this morning and he was blue and stiff." 
He blamed us for not having delivered the IV through the holes in the door, as he had demanded earlier. I looked again at Ray and could tell he was devastated. He turned and walked farther down the train platform. Then he knelt down and prayed. Every negotiator handles the loss of a hostage differently. I tend to focus on what needs to be done now rather than what went wrong. But I was afraid Ray would feel responsible for the child's death—that somehow he would think he'd done something wrong. I gave him a minute and then made my way down the platform and placed my hand on his shoulder. "It's not your fault," I said. "You're not the one responsible for this death." Ray lifted his head, but he kept his eyes closed. "We've got to think about Julie now. We still have to get her out alive." This reminder of the mission still before us seemed to give him a new resolve. He stood up, walked back to our position, and picked up the microphone. "I have just made my peace with the Lord," Ray said. "I will not carry the responsibility of the death of that baby. The responsibility is yours." Ray sounded like a new man, a little angry and much more forceful. "I have just gotten up from my knees praying for the soul of that little boy. Also I'm praying for the girl because she's going to die." He was like a father talking sternly to his son. "Julie is fine," Mario protested. His tone also had changed; he seemed defensive and stung by Ray's reality check. "Are you absolutely sure?" Ray asked. "I do not want that little girl to die." "No, Señor Ray, she has eaten and drunk. She's all right." Mario sounded as if he was pleading, trying to convince Ray that he was not such a bad guy. The microphone attached to the train car was sensitive enough that we could hear Julie in the background, complaining to Mario about her stomach. Ray seized on this opportunity and immediately jumped in. "You see? The girl is getting sick. This means that she, too, is going to die from dehydration. 
Julie needs immediate medical attention." Lathell and I watched as Ray paused for a moment, the microphone to his lips. Then he became increasingly bold, saying to Mario in a manner that invoked a sense of honor, "Will you meet me at the window now and give me Julie? I will come unarmed." Lathell was still interpreting that last statement for me as I saw Ray take the blanket that had been around his shoulders and drape it over his extended arms. He was already moving toward Mario's compartment. My head was racing a million miles an hour, but all I could say was, "Wait a minute." I turned to tell Lathell to radio to command and SWAT and let them know what was going on. I didn't want someone to shoot at Ray by mistake. As I did this I saw Ray remove his revolver from his holster and put it in his hand, hidden from sight by the outstretched blanket. I unholstered my own revolver, the pitifully small five-shot .38 caliber Chiefs Special I had brought with me, and followed a few feet behind him. As he walked forward I followed a few feet behind, hugging as close as I could to the railroad car to stay out of Mario's sight. This situation was moving way too fast, and I knew Ray and I were engaging in a tactical role that had not been planned or coordinated, a big no-no. As negotiators, we shouldn't have been doing this at all. Still, Ray needed some backup in case something went wrong, and I wasn't going to let him go out there alone. Ray stood just below and directly facing the compartment window. His arms outstretched to receive Julie, he was completely vulnerable. Pressed against the train itself, I had some room to roll under if shooting started, but Ray didn't. He simply stood waiting for the child. Back then the FBI didn't give out medals for bravery as they do today, but if anyone ever deserved one, it was Ray. What he did was one of the most courageous things I have ever witnessed an agent do. 
It seemed like an eternity, waiting for the unknown, but in a few seconds the window opened and Mario reached out to shake Ray's hand. Luckily, Ray was left-handed, and it was in that left hand that he carried his revolver. From my position wedged between the platform and the train I could see Mario for the first time; he was tall and thin and sweating profusely. After shaking Ray's hand he disappeared again into the train and emerged a moment later with Julie's little body cradled in his arms. Ray wrapped her up in the blanket, thanked Mario, and headed toward the station. I turned and walked back the other way toward our protective girder, staying close to the train. Ray walked up to the surprised officers at the command post and handed Julie to an EMT, who then rushed her to the hospital. When Ray got back to our negotiation position he seemed oblivious to what he had just done. I gave him a big hug and said, "You stupid son of a bitch, don't ever surprise me like that again." We laughed together, but I could see the sadness on his face. I looked him in the eye and said, "You aren't God. All you can do is your best to save every life that you can. That's the measure of our success or failure as negotiators. You just saved that little girl." On Monday, very early, Paul Warburgh arrived, escorted by FBI agents. Fred Lanceley made it clear to the attorney that we did not want to turn him into a negotiator and move discussions into legal or other matters that might cause further delay in the surrender. "I get it," Warburgh said. "Let's just wrap this thing up peacefully." We took him down to the train and he said a few words to Mario. "I'm here, my friend. You're going to be safe." Now it was the moment of truth. Ray got back on the speaker and asked Mario to surrender any remaining weapons. Moments later, Mario lowered the sheet, which now contained his MAC-10 submachine gun. "Time to come out," Ray said. 
At 5:45 a.m., Mario slid back the door to his compartment, raised his hands, and surrendered to the SWAT team. As he emerged from the train, Ray stepped forward and offered him a cigarette. Leaning toward the flame from Ray's lighter, Mario looked him in the eye and said, "I didn't want to hurt anybody." Chief Heineman would later tell the press that his primary concern throughout the seventy-two-hour ordeal—the longest nonprison siege in U.S. history up to that time—was the safety of the children. A more aggressive approach might have led to Julie losing her life and would have placed his officers in clear jeopardy. In a press conference, Chief Heineman said, "I feel good that we didn't fire a single shot. We were all saddened by the loss of the baby, but I felt we got all we could possibly get out of this." He was right. The chief also graciously acknowledged the assistance of the FBI, saying that he had benefited immensely from our expertise. The entire Raleigh community had been closely watching this situation, and the hospital where Julie was in good condition received more than fifty calls from people who said they were willing to be her foster parents. Her relatives soon arrived from Colombia to take her home. Mario would eventually be convicted of first-degree murder and given a life sentence. A few days after the siege ended, _The Washington Post_ ran an editorial titled "Freeing Hostages Safely." It spoke about the Amtrak siege, as well as another hostage incident that had been handled successfully at the same time in New York City by NYPD lieutenant Robert Louden: "Impressive work was done by specially trained hostage negotiating teams, a relatively new phenomenon in law enforcement." After summarizing these two incidents, the editorial concluded: "Such person-to-person bridge-building, psychologists tell us, is just what's needed when dealing with a dangerous person who feels trapped. 
The objective is to set up voice communication—through a wall or window or over the phone—and keep talking until the gunman has established a trusting relationship with at least one lawman. It takes time, but in almost every case it's far more sensible than attempting to rescue the hostages with force." Our methods were bearing fruit, and we were thrilled that the press and public were beginning to take notice. Unfortunately, many of our law enforcement colleagues still viewed negotiation with skepticism. That, combined with the fact that we all had day jobs doing other things, meant that it would be a while before we could fully consolidate the respect we'd earned in Raleigh.

# CHAPTER FOUR

# **TROUBLE ABROAD**

_There is no den in the wide world to hide a rogue. Commit a crime and the earth is made of glass._

—RALPH WALDO EMERSON

Though I found myself increasingly drawn to negotiation work, my primary posting in the early 1980s was still with the Terrorism Squad, a job that suddenly was to become much more demanding. In 1985, a series of violent hijackings across the globe made nearly constant front-page headlines both in the United States and overseas. The first I was involved with directly was the June 14 hijacking of TWA Flight 847 out of Athens, Greece, by Lebanese terrorists. During a standoff that ultimately would last until June 30 and unfold over multiple locations from Algiers to Beirut, U.S. Navy diver Robert Dean Stethem was murdered and his body thrown onto the tarmac at the Beirut airport. Because of a recent U.S. law that had made it illegal to take an American citizen hostage anywhere in the world, I was assigned to lead the case, the first extraterritorial hostage situation investigated by the FBI. Over the next five years I would travel extensively in connection with this and other cases, developing evidence, interviewing U.S.
citizens who had been victims, and in the case of TWA 847, coordinating witness testimony for the trial of one of the hijackers, Mohammed Ali Hamadei, who was arrested in Germany, convicted in May 1989, and sentenced to life in prison (though he was paroled in late 2005). In October 1985, only a few months after the TWA hijacking, Palestinian terrorists hijacked an Italian cruise ship, the _Achille Lauro_ , in the Mediterranean Sea. Aboard were a number of Americans, including wheelchair-bound Leon Klinghoffer. The hijackers shot and killed him, then threw his body into the sea off the coast of Syria. Egyptian authorities intervened, working out a deal on their own for the passengers to be released and the four hijackers to be flown to Libya aboard an Egypt Air commercial plane. The U.S. government had its own ideas, and U.S. Navy F-14 jets intercepted the plane over the Mediterranean and forced it to land in Sigonella, Sicily, where the four hijackers were arrested. Based on my experience handling the TWA case, I was initially assigned as the FBI lead agent for this new investigation as well. I traveled to Italy with other agents to conduct interviews aboard the returned cruise ship and to interrogate the four hijackers, now in Italian custody. The Italian authorities at first resisted our efforts to question the hijackers, apparently because they hadn't secured confessions and didn't want us to get them first. Eventually, though, political pressure prevailed. We soon found ourselves escorted to a stark prison in Turin, where we would interrogate the four terrorists, including Majed al-Mulqi. A guard led me and my partner/interpreter down a corridor to a large interview room. We seated ourselves at a table, and Mulqi was brought in, wearing a khaki prison uniform and handcuffs. As he sat down, he gave us a look of such unadulterated hatred that I was momentarily concerned that he might try to jump across the table and kill us. 
I could feel myself tense in readiness and observed a similar involuntary reaction in my partner; we were now focused and alert. I thought, _There's no way we can get a meaningful statement from this guy_. We began by explaining who we were and telling him that we wanted to ask him about the hijacking. Taking a page from my hostage-negotiation training, I had planned to approach him in an open and unthreatening manner. I assumed the Italians had been more direct and confrontational with him, to say the least, and I thought this would allow us to assume the role of "good cop." I began asking simple questions about his background, which my colleague would frame in the Palestinian-inflected Arabic that Mulqi spoke. His command of the dialect—like Mulqi, he had been raised in Palestine—and his nonthreatening delivery of the questions really surprised Mulqi. Initially, Mulqi said little, but after an hour he became less tense and began to open up. My partner would pose my question and Mulqi would respond, often with a long stream of words, after which my partner would render it in English for me. I periodically glanced at the Italian policemen standing outside the door, and I could tell they were surprised that Mulqi was talking to us for so long. We appealed to his vanity, praising the efficiency of the operation and telling him it was among the boldest and most well-executed hijackings we'd ever seen. As we did this, we embedded questions that encouraged him to give us important details such as who had been in charge. After one exchange, my partner suddenly turned to me and translated a key admission: "I was the leader." Without missing a beat, I asked, "How were you able to keep control of the entire ship and your comrades?" Mulqi seemed to sit up straighter with this acknowledgment of his abilities as a terrorist team leader. A little later I sought to find out why they had targeted one disabled older American. "We're very interested to know what brought about Mr. 
Klinghoffer's death." Mulqi told us that at one point the ship had been surrounded by news helicopters. He didn't like them flying so close, and so he'd threatened to harm people if the copters didn't move farther away. The helicopters didn't withdraw, so to show that he was serious, Mulqi moved the passengers up on the deck below the bridge and surrounded them with cans of fuel as a warning. But he and his fellow hijackers couldn't move Klinghoffer because of his wheelchair. What followed was a confession that we hadn't expected. "So I wheeled him to the side of the ship and shot him, then threw him overboard for all to see." Few law enforcement officers had ever even talked to a terrorist at this point, and we were momentarily stunned by what had just happened. A hardened terrorist who had refused to reveal this information under prior relentless interrogation had just opened up. This was an important moment for me, when I began to think about the distinction between interrogation and interviewing. The former, at least at face value, seemed the appropriate way to handle someone who had committed the kind of atrocious crimes that Mulqi had. And yet if the goal was to find out useful information, there were at least times when it made more sense to use a nonthreatening and relaxed manner and try to project some sense that we were trying to understand him. Even a hardened terrorist, when handled the right way, might be encouraged to provide important information. One month after the _Achille Lauro_ tragedy the terrorists struck again, when operatives for Abu Nidal, a terrorist organization committed to the destruction of Israel, hijacked Egypt Air Flight 648, again out of Athens. When the three hijackers took over the plane it prompted a shoot-out with an Egyptian sky marshal, who managed to kill one of them. During the shoot-out, the plane's fuselage was pierced by a bullet, leading to cabin decompression and forcing the pilot to fly low. 
The decompression and a declining fuel supply eventually led the pilot to perform an emergency landing in Malta. Once on the ground, the two surviving hijackers demanded that the plane be refueled so that they could fly it to Libya, but the authorities refused. And so one by one, over several hours, they marched five of the passengers, two Israelis and three Americans, to the open doorway and shot each of them in the head. Amazingly, three of these victims would survive their wounds. The Maltese government had no structured crisis management apparatus or any trained hostage negotiators. The Maltese president and other officials assembled in the airport control tower, but had little idea of how to effectively communicate with the hijackers on the plane. In fact, a big part of their strategy seemed to be to avoid communicating with them altogether and instead await the arrival of Egyptian commandos. A skilled negotiation team might have been able to fully engage and occupy the hijackers, thereby preventing them from feeling compelled to execute hostages in order to have their demands addressed by the authorities. There was no serious attempt to negotiate a nonviolent resolution; instead, Egyptian commandos stormed the aircraft in what would prove to be an exceptionally ill-conceived and poorly executed rescue attempt. As a diversion for a tactical assault, they planted an excessively high-powered explosive charge in the luggage compartment near the rear of the aircraft. When it detonated it killed one of the two remaining hijackers and dozens of hostages. The resulting fire and indiscriminate shooting from the tactical teams resulted in more than sixty-five deaths. I would later assist in debriefing the two surviving Americans from Flight 648, since the FBI investigated this crime against American citizens. I could not help thinking that skillful negotiation might have delivered a better outcome. 
In addition to the TWA Flight 847 and _Achille Lauro_ incidents, the terrorism squad at WFO (which became known as the extra-territorial terrorism squad) worked an ongoing hostage ordeal in Lebanon involving several Americans who had been taken prisoner by Hezbollah terrorists over a long period of time, including journalist Terry Anderson, who would be held for seven years. I assisted case agent Tom Kelly (the helicopter pilot from Sperryville) in debriefing Reverend Benjamin Weir, the first American to be released from captivity. I also assisted case agent Tom Hansen during the investigation of Royal Jordanian Flight 402, which was hijacked from Beirut by the Amal militia on June 11, 1985. As we investigated the TWA Flight 847 hijacking that happened three days later, we learned that two American citizens had been aboard this other aircraft, giving the FBI investigative jurisdiction. We eventually hatched an operation to lure the hijack's leader, Fawaz Younis, to the Mediterranean Sea; he was arrested by undercover FBI agents, returned to the United States, and eventually convicted. His apprehension on the high seas and return to the United States to face justice was a historic first in the war against terrorism. These were indeed busy times for the very few of us who were working these matters. All the major investigations were intense and time-consuming, and we were up to our necks in work. I would spend five years on the TWA Flight 847 investigation alone, not only helping successfully prosecute Mohammed Ali Hamadei in Germany but pursuing the apprehension of the other two hijackers, Hasan Izz-al-Din and Ali Atwa, both of whom are sadly still at large. All this, and I was still teaching negotiation courses and responding to callouts such as the incident in Sperryville. 
This wave of international terrorism (which included attacks on the Rome and Vienna airports in December 1985) marked a turning point for the FBI and for the United States; it was the first time that any nation had aggressively gone after terrorists beyond its shores in order to bring them to justice, rather than just target them for assassination, as the Israelis had done to the Munich Olympics terrorists. We were blazing new ground, which was exciting, but because we had limited manpower (in 1985 the WFO terrorism squad, which handled all international hijackings for the FBI, had only six or seven agents; after the tragic events of September 11, 2001, literally thousands of FBI agents began to work terrorism cases), I was called on virtually every time a terrorist incident involving Americans took place. From 1985 to 1990 I was probably on the road at least six months out of the year, and when I was home I was working nights and weekends. After several years of this high-stress, globe-trotting routine, I began to get worn out. I had a ten-year-old, an eight-year-old, and a six-year-old; I missed them; and I felt I needed to be there coaching soccer games and going to piano recitals and swim meets. There were times when good friends had to shovel snow from my driveway because I was out of town. The FBI always said that families came first, but that was not true. The needs of the Bureau always took precedence over family. I recalled speaking to my kids on brief phone calls from overseas, hearing them ask, "Daddy, when are you coming home?" My work was exciting and stimulating, but my family was suffering as a consequence. I had barely returned from spending the summer and fall of 1988 in Germany for the Hamadei trial—where I had been off and on for eight months, testifying twenty times—when Pan Am Flight 103 was bombed over Lockerbie, Scotland, on December 21. 
The FBI was slow to realize just how much manpower and resources were required to do this kind of work effectively. As soon as we received word of the Lockerbie incident, I was called into my boss's office and told I would be taking the lead on this major terrorism case. But I had reached my breaking point and was in no mood to be brusquely informed that I was being deployed yet again, this time to Scotland. I was tired of being taken advantage of, and we argued. Not getting the understanding I felt I deserved, I left his office, walked down the hall to see his boss, and said, "Enough already." I felt righteously angry at being told I would once more have to work around the clock with limited support, all because the Bureau didn't have the foresight to adequately staff this crucial squad. I suppose that in a way I was being complimented; after all, Lockerbie was the largest single homicide in the history of the United States at the time, and my boss wanted me to head up the investigation. But I had made up my mind. I did later deploy to Lockerbie for a short time to provide an assessment of the investigation, but by then other agents had been brought in to manage the case. After I came back from Lockerbie, I met with my senior Assistant Special Agent in Charge, Nick Walsh, and told him I really needed to get away from terrorism for a while and wanted to transfer off the squad as soon as possible. Nick acknowledged that, as the longest-serving agent on the extra-territorial terrorism squad, I deserved a break, and he said that wherever I wanted to go within WFO, he would see to it that I got there. I had been working terrorism for eight years and a change of pace would be welcome. 
# CHAPTER FIVE

# **CRISIS INTERVENTION: LISTEN AND LEARN**

_To listen closely and reply well is the highest perfection we are able to obtain in the art of conversation._

—FRANÇOIS DE LA ROCHEFOUCAULD

For the next six months I held a quiet job investigating corrupt politicians from the Bureau's Tyson's Corner, Virginia, office, twenty minutes from my home. Life was good. Then an unexpected opportunity arose. One of three full-time positions on the negotiation staff at Quantico came open. I had been asked on previous occasions to transfer to Quantico to become a full-time hostage negotiator, but had declined the offers due to my demanding terrorism work. This time I agreed to apply for the position, and I was selected for the job. I would assume responsibility for the two-week training course and provide operational support during hostage crises. This meant a promotion, as well as a transfer down to Quantico. Finally I'd be able to devote myself to what by then felt like a calling. As one of the three agents assigned to the hostage negotiation program, I was part of the Special Operations and Research Unit (SOARU), which was configured to support tactical, hostage negotiation, and crisis management research, training, and operations for the entire FBI. Those of us on the staff were in a position to greatly influence the direction of the FBI's policy and operational guidelines for these programs. The SOARU was purposefully set up to better coordinate the often conflicting and sometimes contradictory approaches historically practiced by the FBI's SWAT teams and field hostage negotiators. Behind closed doors, our crew was not above jokingly referring to the SWAT guys as Neanderthals and knuckle-draggers. But experience had shown us time and again that hostage negotiators were less likely to achieve a desired surrender when there was no visible show of force and a lack of tactical containment.
Conversely, we had also learned that tactical entry was almost always safer and more successful _after_ negotiators had bought time for necessary planning, practice, and implementation. It wasn't that we didn't appreciate the SWAT teams—we knew that we depended on them just as much as they depended on us. I also had my own reasons to appreciate them: while I had been in Germany for the Hamadei trial, information surfaced that terrorists might be targeting me for reprisal. In response, the FBI had dispatched members of the Hostage Rescue Team to my house to guard my family, even accompanying my wife and kids on outings and daily errands. As distinguished from the fifty-six part-time SWAT teams in FBI field offices around the country, HRT was a dedicated national counterterrorism tactical response unit. HRT was located, like SOARU, at the FBI academy, and was staffed by over sixty-five full-time tactical operators who were always either engaged in training rotations or operationally deployed on missions anywhere in the United States that required their unique skill sets. Protecting my family became one of those missions. When I wasn't traveling to conduct law enforcement training programs or speaking at law enforcement conferences, my time was spent developing new negotiation instructional training blocks or researching and writing articles for the _FBI Law Enforcement Bulletin_. The objective was to gather information, assess its value, identify the key learning points, and then pass that information to negotiation practitioners. In addition to the intensive two-week hostage-negotiation training courses we conducted four to six times each year at the FBI academy, we also conducted regional training programs for local police departments in the field. The basic negotiation course in Quantico provided essential training for all new FBI negotiators, but we also kept open several slots for domestic and foreign officers.
Practically every significant law enforcement leader in the free world cycled through the FBI academy at some point or another. Many of these officials would take time to stop by our unit to learn about the negotiation program, and to tap into our experience and expertise. They collected copies of our training materials and often requested that we travel to their jurisdictions to conduct field negotiation schools for their personnel. During the six months I spent at the Tyson's Corner resident agency, I hardly left the zip code. I remember telling my wife, Carol, that this new assignment at Quantico would be less disruptive to our family life, and that I would not be traveling nearly as much or to such faraway destinations as I had when working overseas hijacking cases. Little did I know that by becoming a full-time negotiator, I was setting myself up not only for continuing worldwide travel but also for round-the-clock duty as a consultant to on-scene negotiators, taking those urgent phone calls seeking advice on nights, weekends, and holidays. The FBI has worked kidnap cases since 1932, when the abduction and murder of the two-year-old son of aviator Charles Lindbergh stirred public outrage. In response, Congress passed a law making it a federal crime to kidnap and transport a victim across state lines. From that point forward the FBI aggressively investigated kidnapping for ransom in the United States and did much to make this a fairly rare crime today. Through the years the FBI developed significant expertise and capabilities in working these cases. Sophisticated electronic, airborne, and ground surveillance and tracking make this a crime with small prospects for success. As a result, while kidnapping for ransom has become a scourge overseas, for the most part criminals in the United States have moved on to different crimes. 
(Of course, women and children continue to be abducted by sexual predators, not as hostages but as "homicides to be.") When I arrived at Quantico the FBI's negotiation training tended to focus largely on classic hostage situations, in which a perpetrator holds someone against their will in order to compel a third party, usually the police, to do something (or abstain from doing something). During a class Fred Lanceley and I led in Oakland, California, for local, state, and federal law enforcement officers, Fred asked our group of thirty-five experienced hostage negotiators how many had dealt with such a classic bargaining situation. Not one hand went up. Then he asked how many students had negotiated an incident in which a hostage taker was in emotional crisis and had no clear demands, and every hand went up. We were both surprised, though we had felt all along that such emotionally driven incidents, not bargaining interactions, constituted the bulk of what most police negotiators had to deal with. Right then, Fred and I realized that the need was not so much for training in quid pro quo bargaining but for the skills needed in crisis intervention situations, with a heavy dose of active listening. Our students needed to learn the slow and patient communication skills that could defuse the kinds of situations they were most likely to face. When we returned to Quantico, I pitched my boss on revamping our negotiation training curriculum. He agreed, and I set out to redo the program, putting more emphasis on how to deal with individuals under extreme emotional stress. The core of the new curriculum consisted of specific active listening skills first developed by the counseling profession. In brief, this entails creating positive relationships with people by demonstrating an understanding of what they're going through and how they feel about it. 
By applying this approach, the negotiator can demonstrate empathy and show a sincere desire to better understand what the individual is experiencing. We know that people want to be shown respect, and they want to be understood. Listening is the cheapest, yet most effective concession we can make. The positive relationship achieved through this interaction then sets the stage for the negotiator to exert a positive influence over others' behavior, steering them away from violence. The skills boil down to restatement of content and reflection of the captor's feelings. Increased use of these techniques would have dramatic results. I sent my new ideas to the fifty-six field offices for input, and incorporated their feedback into the final product, an extensive binder filled with hundreds of new and improved slides. This was probably the most impactful thing I did at the FBI. In those days your local police department depended on its local FBI office for training, which occurred on an entirely ad hoc basis. Now, for the first time, there was a precise, standardized, detailed approach to handling emotionally driven cases. The manual also covered every other aspect of the negotiation process. If you had a siege situation and the media became involved, you could find out what to do. It provided guidance for dealing with family members outside the crisis site. Most important, the manual identified specific active listening skills that could be easily learned and applied to most negotiation situations. The new training slides provided specific guidance, for example, on recognizing a "suicide by cop" situation, in which the subject purposefully engages with the police to bring about his own demise. It also contained a list of indications of progress and a similar list to help identify incidents that were becoming more dangerous.
Specific active listening skills were provided, with examples of how they could be incorporated into dialogue to create a relationship of trust with an individual in crisis. The response to these new training materials was enthusiastic and overwhelmingly positive. The number of field-training requests quadrupled; more and more police began to look to the FBI for guidance in this area. On one Fourth of July, I was sitting on a blanket on the Washington Mall, having a picnic with my family and looking forward to the start of the fireworks show, when my beeper sounded. I pulled out my cell phone, punched in the number on the display, and soon reached Mike Duke, an FBI negotiator assigned to South Carolina. He was calling to tell me that a gunman had taken over the USS _Yorktown_ , a decommissioned Navy aircraft carrier and museum in Charleston. According to the best information Mike had, the subject was a Vietnam vet with emotional problems. He had taken a high-powered rifle on board the ship and fired off some rounds, but he was not believed to be holding any hostages. It does not take Sigmund Freud to connect the dots that might link a symbolic U.S. Navy ship, the Fourth of July, and a Vietnam veteran suffering from post-traumatic stress disorder (PTSD). I told Mike that this was probably a classic crisis intervention situation where the gunman had no clear substantive demands, and advised him to suggest to the police our standard approach of establishing rapport with active listening skills and talking this man through his crisis. I then asked him to call me again once he arrived at the command post and had gathered additional information. He said he would, and I returned to my family picnic. About an hour later my beeper went off again, but this time it displayed a different number. When I dialed the number, my call was answered by a voice I didn't recognize. I asked to speak with Mike Duke, but the man on the phone said he didn't know a Mike Duke. 
I then asked if this was the command post; the man said it was not. I next asked if this was the negotiation team room, and again the man said no. Then he asked me who I was. I told him my name and said that I was a negotiator with the FBI, calling from Washington, D.C. In response, he said, "I guess I'm in bigger trouble than I thought."

Dumbfounded, I asked: "Are you by any chance the man with the gun?"

"Yes, I am," he said.

I learned later that the phone number Mike had sent me was for the souvenir shop where the command post and negotiation team had been set up. What Mike didn't know was that this same number also rang on the ship's bridge, where the gunman was. Now that I had been thrust into the dialogue, I didn't want to just hang up on him. I needed to do what I could to keep him calm, then extricate myself as diplomatically as possible.

"What's your name?" I asked him.

"Jim."

"You okay, Jim?"

"I'm okay."

"Well, you know that no one wishes you harm. We all want you to come off that ship safe and sound, with nobody getting hurt, either." I didn't want to cross wires with the local police's strategy, but with this basic civility I thought I was on pretty safe ground. "What happened today, Jim?"

He responded, "I'm a Vietnam vet and I'm not getting the help I need. I served my country, but nobody cares about me or wants to hire me. I've got nothing to look forward to."

He projected hopelessness and helplessness, the most important suicide warning signs. He seemed to be saying that life wasn't worth living, and I was worried that he might take his own life. As I started to acknowledge his feelings, he suddenly interrupted me.

"What the fuck was that?" he said.

I heard the sound of his phone receiver banging against something—he must have stepped away. Then he picked up the receiver again and said, "You tell those fuckers that nobody better try to come up here. I see anybody coming at me and I'm going to start firing this weapon."
I was worried now, and at a serious disadvantage, because for all I knew, whatever sound he'd heard was indeed the SWAT team moving in. But I had to try to contain him.

"Jim, no one wants to hurt you. They're there to try to help you."

I could hear him breathing heavily, but after a bit he seemed to calm down.

"Everything's going to be okay, Jim."

"Yeah. So long as nobody tries to come in here."

He was silent for a moment. Then he said, "I gotta go...."

I could hear the receiver banging around again, but he hadn't hung up. I stayed on the line and waited, hoping for the best. A moment later my beeper went off again and displayed yet another number. I grabbed my wife's cell phone and quickly punched it in, hoping that it was Mike and I'd be able to explain that I had Jim on the other line. My call was answered, but the voice that said hello wasn't Mike's.

"Hi," I said. "Is this... Jim?"

"Yeah."

"Oh my gosh, this is Gary again."

I couldn't believe this. This was one of the more ridiculous moments of my career. The command post had given me yet another number that Jim was able to intercept, so I was now speaking to him on one line and on hold with him on another. Meanwhile, I was five hundred miles away on the Washington Mall with my family, holding a plate of potato salad on my lap.

"Jim, tell you what. I'm going to hang up on this line, but let's keep talking on the other. Is that okay with you?"

The line went dead. I went back to my own cell phone again, and Jim picked up the other receiver at his end. But even before we could speak I heard the second line at his location starting to ring again. I had no idea what kind of circus this was going to turn into.

"Jim," I said, "if that's the news media, I'd like you to hang up on them and come back and talk some more with me. But if it's the police, the negotiator down where you are, just let me know and I'll hang up."

Jim took the call, then came back on the line with me.

"It's the cops," he said.

"Good. Listen, they want to help you, not hurt you. They're good guys. You just talk it through with them, and they're going to get you out of there safe and sound, okay?"

"Yeah. Sure," he said.

Whether that was a sincere response or sarcasm I couldn't tell. Still, I was somewhat optimistic he could be talked out. For the next thirty hours the local police negotiators stayed on the phone with Jim, employing the techniques set forth in our manual, listening to and acknowledging his problems and frustrations. They eventually convinced him that he shouldn't harm himself, and he surrendered. This case was emblematic of something I would see often in the years to come: a guy feels hopeless and acts out in a cry for help. While his depression may lead him to suicide, this attention-seeking behavior indicates that at least part of him wants to live. This creates an opening for the negotiator, who, by the act of listening to him and acknowledging his difficulties, can make him realize that there is hope after all. There are of course cases when the subjects are a great deal more desperate than Jim and have no intention of turning back. Maybe they've already committed a murder or some other serious criminal offense. In situations like this, all the signs point to disaster. But even at times like this it often proves possible to avoid further loss of life. In one instance in Houma, Louisiana, a uniformed police officer named Chad Roy Louviere, driving a marked cruiser, stopped a woman for a purported traffic violation, raped her, and then handed her his business card. Clearly, this was not a man working with any objective other than to act out and to sever his ties with humanity. Information we received later indicated that he was an obsessively controlling husband and that his wife had recoiled from his demands. She insisted on a separation, which had sent him over the edge.
At 11:00 a.m., immediately after the rape, Louviere went directly to the small-town bank where his wife worked. When he entered the building, his wife was there along with five other employees and two customers. Waving his gun and shouting orders, he forced the two customers to leave, then lined up his victims and went down the line. "I know you," he said to the first. Then he went on to the next person in line and then the next, saying, "I know you," until he reached a sobbing teller named Pamela Duplantis. "I don't know you," he said, and shot her in the head, killing her instantly. After this, it seemed more likely than ever that neither of the Louvieres would leave the bank alive. This looked to be a classic homicide followed by a suicide. By this time, the building was surrounded by squad cars from local, county, and state authorities. As the Houma police began the painful process of trying to negotiate with one of their own, the chief called on an untrained officer to be their primary negotiator because he was a friend of Chad Roy Louviere's. But what inexperienced crisis managers don't realize is that if it was that easy for troubled individuals to open up to friends, many of these situations would never happen in the first place. It is often easier for a well-trained stranger to develop the necessary relationship with an emotionally troubled subject. A store across the street became the de facto command center, and from there, this officer called the bank repeatedly to plead with Louviere to give up and come out. "No way," Louviere responded again and again. Other friends of Louviere's from the force were brought to the phone across the street, but every one of them focused on the same practical objective—getting the man to surrender. After several hours of his friends saying "Just come on out, Chad," Louviere became so frustrated that he simply refused to answer the phone. Fortunately, he let his hostages answer when the police continued to call. 
Louviere was known to be a man of few words, but amid his taciturn refusals, his friends never picked up on the one clear message he was trying to send. Several times, as he refused to surrender, he muttered, "I just want to talk to somebody." Shortly after Louviere broke off communication, the Houma police chief and local sheriff called me and asked for my advice. Right away I leveled with them: this situation did not hold out a great deal of promise. The initial rape and the subsequent murder of a random teller at the bank appeared to be the work of a man determined to push himself until there was no way of turning back. Our only hope was to get inside his head and begin to probe what had triggered his rage so that we could disarm it. But because the situation had reached a crisis point, with the hostages' lives as well as the gunman's a trigger pull away from being obliterated, we had to be extremely cautious. As he briefed me, at one point the chief almost offhandedly mentioned Louviere's desire to just talk. I seized on this as a glimmer of hope. Then the chief also mentioned that Gloria Newport, an experienced FBI agent assigned to the New Orleans field office, had just arrived at the scene. I knew Gloria to be a skilled negotiator—I had trained her myself at our course at Quantico. "Make Agent Newport your primary negotiator," I told the chief. "I think he'll open up more with a woman." The chief was taken aback, to say the least, by my suggestion. "This man just raped one woman and murdered another," he said. "What makes you think he wants to talk with a woman? Looks to me like he hates women. I think the last person he'd want to talk to is a woman." "Sometimes a man has an easier time talking with a woman about his emotional life," I said. "I think we need someone who can appear nonthreatening and nonjudgmental, someone who can project a sense of understanding. I think a soothing female voice is what we need to get Louviere back from the edge."
"We'll think about it," the chief told me. But he went back to the strategy of calling in more of Louviere's friends from the police force. Finally, when it became clear they were making no progress in the negotiations, the chief relented and put Gloria on the phone. At first, she was able to speak only with the hostages. Louviere's wife reiterated her husband's need to talk. "Do you think he'd talk to me?" Gloria said. "Let me see," she said. And a moment later Louviere picked up the phone. "Chad," Gloria said, "I've heard that you want to talk to somebody. I'm here to listen." Her voice was soft, soothing, and nonconfrontational. In the pause that followed, Gloria heard a loud exhalation. She told me later that it was like a dam bursting, after which Louviere began to talk about his issues. He was an extreme case of a controlling husband who couldn't accept the fact that his wife had a mind of her own. Gloria was the perfect listener, and her ability to deliver basic empathetic responses with absolute sincerity almost immediately calmed him down: "I'm worried about you. Tell me what happened. Tell me all about it." She validated his emotions, often just by naming them. "You sound so angry and frustrated," she said to him. "What do you think your wife would do if you just told her how you feel?" She gave him an attentive ear and, most important, hope. Once Gloria had established a relationship with Chad, she began to lay a foundation that would help her convince him to end the standoff with no further loss of life, suggesting to him that there might be opportunities to fix his relationship with his wife. Chad was a police officer and he knew full well the implications of what he had done. Still, the idea of reconciling with his wife was compelling. And when he was finally given an opportunity to express his hurt, anger, and frustration, this helped to relieve the pressure cooker of his emotions that was about to burst. His rage dissipated significantly. 
Gloria was finally able to convince him that the best course of action was for him to come out peacefully and not hurt anyone else. She gently encouraged and coaxed him to do the right thing. She had gained influence with him, and because of this, and this alone, he soon surrendered without incident. As of this writing he is still awaiting execution. A small-town police chief might face one situation like this in his career. Working on a national, even global scale, my colleagues and I saw these situations every week, and we'd learned that part of effective resolution is pulling back from the end objective and focusing on how to establish a relationship with this guy, right now, at this moment. I felt Gloria had the right communication skills to make her effective with Chad. Part of Louviere hated his wife, but some other part of him still loved her as well. He simply didn't have the capacity to express that love except as a wish to dominate and control. And it takes nothing away from Gloria's ability to say that she might also have entered the game at an opportune moment. Often, the first negotiator to work with someone gets nothing but incoherent rage. But then, after the subject has vented and calmed down with the passage of time, he can become more willing to engage in a more substantive dialogue. Sometimes it's the change in personnel that triggers the shift. I just knew they were getting nowhere with the approach being taken prior to Gloria's involvement. In the Louviere case, we averted a larger tragedy because police gave the perpetrator time to cool off. This was not the case on July 11, 1993, in Antioch, California, when a man named Joel Souza drove into a parking lot with his five-year-old daughter in his lap, holding a gun to her head. His eight-year-old son was sitting in the backseat. Souza pulled up beside his estranged wife's car. "Get in the car," he told Jennifer Souza. "Do it or I'll blow her head off." 
She got into Joel's car and they drove to the house they had once shared. There, Joel held her at gunpoint for an hour while he raged at her and peppered her with personal questions. Whom had she been out with? Why hadn't she returned his telephone calls? Like Chad Louviere, he was a controlling ex-husband who seemed to consider family members to be his personal property. After Souza's frightening diatribe was over, he let Jennifer go but kept the children. He warned her: "Tell anybody about this, and I'll shoot the kids." Terrified but not intimidated, she ran to a neighbor's house and called the police. During the phone call she told them that her husband owned at least five different guns. According to court testimony, Officer Michael Schneider was one of the first members of the Antioch Police Department to arrive on the scene. When he asked to speak to Joel and to see the children, Souza retreated with them to an upstairs bedroom and locked the door. Schneider, a trained hostage negotiator, took up a position at the head of the stairs. "Joel, come on out now. I know you don't want to hurt your kids." "It's none of your business," Souza yelled back. "Get out of my house or somebody's going to get hurt." Outside, the SWAT team arrived and established a perimeter downstairs. Their presence made it even more imperative that Schneider keep Souza calm. "Don't worry about those guys," Schneider said. "We're going to stay cool and everything's going to be fine. They are definitely not going to force their way in unless you do something really stupid. But you're not going to do anything stupid, are you, Joel? Because you love your kids, right? It's going to be easy does it." Schneider, who had been trained in one of our regional programs, had thirteen years of negotiation experience. Unfortunately, some members of the Antioch police were less sophisticated in their thinking about how to deal with subjects who chose to barricade themselves against the cops. 
Taking the old-school approach, they immediately began to eliminate creature comforts. They disconnected the phone, electricity, and water, then, with the house already warm on a hot day, turned up the heating system full blast. As the temperature in the bedroom began to rise, Souza became enraged and began to yell obscenities at Schneider. When he threatened to start shooting, the police turned the heat back down. The only quid pro quo was a promise Schneider extracted from Joel that something would be worked out. Schneider worked to establish and then maintain a dialogue with the subject, relying on the standard approaches of building empathy through active listening. He also tried to get Souza to try to think about what he wanted to happen. How could this situation be resolved so that nobody got hurt? After a while, Joel said that he wanted to exchange notes with his wife. Schneider agreed to deliver one note to Jennifer for every gun lowered by a rope out of the bedroom window. Over the next five hours, four rifles came out this way. Progress was being made, albeit slowly. It was at the five-hour mark in the standoff that an off-duty police captain arrived to take over command of the incident. Here was a clear case of hierarchical authority taking precedence over knowledge and experience—a classic law enforcement mistake when negotiation expertise is not given its due. To make matters worse, this captain immediately suggested setting a time limit. This, of course, violated a basic premise of negotiation, which is that time can be a tool that allows anger to dissipate and better options to enter into the mind of the subject. We never put a deadline on ourselves. Time limits force a decision, yes, but it may be the wrong decision. The whole point of skilled negotiation is to provide the time and encouragement for subjects to make the right decision. The difference of a few hours can be, literally, a matter of life or death. 
Schneider strongly resisted this imposition, and he continued to work on building rapport with Joel. At times it seemed as if the suspect was close to surrendering. He and Schneider began to discuss the process in detail.

"When you come out, Joel, I want you to take your shirt off, okay? That way the SWAT guys will know you're unarmed. Will you do that for me?"

This was all good stuff. There was no response.

"I can stand in front of you when you come out," Schneider said. "Would that be good? That way you know that nobody's going to take a shot at you."

Again, no response from Joel, but no resistance, either. Schneider called out to Joel's son. "Is your dad okay, Danny? Is he listening?"

"Yes, sir. He's listening," the boy said.

Schneider promised not to handcuff Joel in front of the children, and also to give them some time together. "You know, you haven't really hurt anybody. You haven't even fired any rounds. This whole thing can be worked out, Joel. It's really not so bad."

Unfortunately, Joel needed more time to make up his mind. He had not yet gotten over the hurdle of his ambivalence. About four hours after taking command, the captain ran out of patience. "I'm tired of this shit," he said. Then he told Schneider, "Give him ten minutes—then we're coming in."

Once again Schneider argued that this was totally inappropriate, but this time the "suggestion" was an order. Reluctantly, Schneider gave in.

"Joel, you really have to come out now. It's time to do the right thing. You've got ten minutes."

Nine minutes later, three shots rang out. The SWAT team charged into the bedroom to find Joel and the two children dead. As a tragic indicator of how the pendulum can swing either way, Joel Souza was shirtless, just the way Schneider had told him to be when he was ready to surrender. Jennifer Souza would later file a successful lawsuit claiming that the police had been responsible for the "negligent wrongful death" of her children.
In his testimony at the trial, the captain said that he'd intended the ten-minute warning as a "bluff," not as an ultimatum. He said that he'd expected the warning to prompt Souza to surrender or to at least participate more fully in the negotiation process. But experience teaches us never to bluff with an armed man forced into a desperate situation. The tragic error in handling Joel Souza was grounded in the captain's inexperience. He failed to appreciate how very different someone else's mental processes can be. Because the captain believed there was no way on earth that he would ever shoot his own children, he assumed the same was true for Joel Souza. Oddly enough, though, the captain and Joel Souza may have had more in common than the captain imagined. The psychological makeup of traditional law enforcement officers tends to include a fair amount of classic controlling behavior, though they may not be self-aware enough to realize it on any conscious level. That typical law enforcement profile can also include a fair amount of arrogance. In the years ahead, the FBI would confront an increasingly diverse array of citizens barricading themselves against the police. In addition to tortured, solitary individuals such as Joel Souza or Chad Louviere, there would be large groups of disaffected people linked together by political or religious conviction. In these cases, the dangers inherent in emotional instability would be compounded by weapons caches and the potential for quasi-military action by tightly bound groups hostile to the government. In the face of these challenges, the FBI was becoming increasingly sophisticated in its negotiation strategies as well as in its tactical operations. But a large problem remained: how could these two aspects of the FBI's role be brought together effectively? Starting in 1991, the FBI would face a series of cases that would expose a fundamental divide between proponents of force and proponents of negotiation.
Over this period, I would face the greatest challenge of my career, defending our role to skeptical colleagues increasingly convinced that they didn't need us. It all began with two seemingly separate events: a prison riot in Talladega, Alabama, and an incident involving a right-wing separatist who lived with his family on a ridge in Idaho.

# CHAPTER SIX

# **FROM SUCCESS TO HUBRIS**

_A man must be big enough to admit his mistakes, smart enough to profit from them, and strong enough to correct them._

—JOHN C. MAXWELL

When prisoners take hostages, there is great potential for things to spiral out of control. For one thing, prisons, while designed to keep people in, can also be effective in keeping them out. And your average prisoner has a fair amount of pent-up anger and rage over real and perceived mistreatment. Add to this the euphoria of suddenly having power when you've had none, and things can get out of hand quickly. On the morning of August 21, 1991, a group of detainees awaiting repatriation to Cuba rioted and took control of their unit at the Talladega Federal Correctional Institution in Alabama. They seized eight Bureau of Prisons (BOP) employees and three from the Immigration and Naturalization Service (INS). The FBI has jurisdiction over serious crimes at federal prisons, and so I was immediately sent to Alabama. This was not the first time that the FBI had confronted angry Cuban detainees. In November and December 1987, Cuban inmates had seized the Atlanta Penitentiary and the Federal Detention Center in Oakdale, Louisiana. The combined uprisings threatened more than a hundred hostages, lasted more than eleven days, and required protracted negotiations to resolve. Many of the Cubans involved in those prison riots had been arrested after the infamous 1980 Mariel boatlift in which Fidel Castro had emptied his prisons and mental health wards, dumping their residents on an unsuspecting United States.
Reports suggest that up to 16 percent of the 125,000 individuals entering the United States in that armada had spent time in Cuban prisons. The INS had detained about 2,500 of these, declaring them "excludable" or unfit to remain in the United States. These individuals were moved to over a dozen federal facilities around the country until their situations were resolved. Cuba initially refused to accept these inmates back, and American authorities could not simply release them. Those with violent criminal records were in perpetual limbo, and their frustrations led to the Atlanta and Oakdale riots. In both cases, peace was restored after a Cuban bishop from Miami was brought in as a mediator, but not before the riots cost the U.S. government more than $100 million and destroyed significant portions of the prison facilities. The U.S. government eventually persuaded Cuba to take back more than 2,500 of the 3,800 Mariel refugees, and Talladega was the last stop for those being deported after their appeals had been exhausted. The Talladega uprising began one day before thirty-four detainees were scheduled to be shipped back to Cuba. Some of these were resigned to going back home, where they awaited an uncertain future in the Cuban legal system. Some were adamantly opposed to their repatriation. Others, who had served their sentences for criminal offenses committed here in the United States, simply wanted to be set free. All felt betrayed by the "agreements" for immediate resolution they thought they had reached at Oakdale or Atlanta. At Talladega, our negotiation team consisted of local and regional FBI negotiators, including several Spanish-speakers who had helped resolve the earlier prison uprisings. We formed two teams staffed with FBI and BOP negotiators, with native Spanish-speakers assigned to each team, and each team on duty for twelve-hour stints. 
I was the negotiation team leader for the evening shift, with FBI negotiator Pedro Toledo on hand as one of those chosen to communicate with the inmates directly in Spanish. The FBI's Hostage Rescue Team was also on hand, preparing an emergency assault plan in case violence erupted. At a glance, Talladega might be mistaken for a large community college campus. A dozen or so modern, no-nonsense buildings in gray concrete were connected by walkways crossing grassy courtyards. The only indications of this institution's real purpose were the exceptionally small windows and the substantial perimeter fences. Those features clearly marked this as a prison, as did the fact that the BOP had correctional officers in full riot gear—helmets, body armor, weapons—surrounding Alpha Unit, the building the detainees had taken over. Our teams had set up shop in the prison administration building across the courtyard, about a hundred yards away. We had captured the phone lines in and out of Alpha, but when we called in, either the detainees would refuse to answer or they would answer and immediately hang up. They knew that the FBI and prison authorities had outsmarted them last time, so now they were in no mood to talk. The situations in Atlanta and Oakdale had taught them what every negotiator knows: protracted discussion usually works to the advantage of the authorities. Now they wanted their demands met, plain and simple, end of story. Some were willing to return to Cuba; some were dead set against it. But none of them wanted to linger indefinitely in an American jail with no prospect of release. It worked in our favor that inmates here had taken over only a single unit, rather than an entire prison, as had been the case in Atlanta and Oakdale. Modern prisons are highly modular so that problems like this can be contained. 
Alpha Unit consisted of rows of cells on two levels overlooking common areas, but the traditional bars beloved by the directors of old prison movies had been replaced by electronically operated steel doors with narrow slits for windows. Inside this prison within a prison, the detainees had created makeshift weapons and erected barricades behind which they held their eleven hostages. We had every reason to be concerned about the safety of these abductees. Already, as is usual in a prison uprising, some inmates had used the chaos and confusion to carry out vendettas, with several stabbings as the result. We knew that among the group were dangerous, desperate men who might now reasonably conclude that there was no turning back. It's much easier to gather information on hostage takers when the subjects in question are already prisoners. We had ready access to mental health records, criminal history records, and personal insights and observations from correctional officers who had daily contact with the subjects. We used these data to try to identify potential leaders as well as those most likely to carry out violent acts. In Atlanta, there had been one prisoner so crazy that the Cubans tied him up with duct tape and put him outside. He had been known to throw his own feces at guards, and at one point he actually chewed off one of his own fingers. There appeared to be no one quite that far gone at Talladega, but several inmates had very violent histories, including murder and rape. When an uprising is opportunistic, as this one seemed to have been—there was no evidence of significant advance planning, and inmates appeared to have spontaneously overpowered officers during prisoner recreation—it usually takes several days before a clear leadership structure emerges. This is not entirely surprising, given that few inmates have meaningful organizational skills or leadership abilities. 
Their normal behavior for working out differences is not consensus and cooperation but threats, intimidation, and violence. In our negotiations, we would try to encourage the reasonable people, but as of yet, no one would communicate except through banners hung from the rooftop asking the press to get involved and advocate on their behalf. The banners said, _We are not hungry for food but for freedom. Give it to us_. On the second day, using the information gleaned from correctional officers, I prepared an assessment for the on-scene commander, Special Agent in Charge Al Whitaker, a man who seemed new to siege management, but who at least had surrounded himself with good people. In my opinion, the inmates' refusal to engage in substantive negotiations reflected their lack of clear purpose and goals, as well as their conflicting agendas, which varied according to the status of each prisoner's case. The point of access I suggested was based on the fact that prisoners are like most people—they get used to creature comforts and a set routine, even if it's simply watching TV or working out in the gym. They don't like it when those simple pleasures are withdrawn, least of all food. At Talladega, inmate cooks prepared food in a central facility, after which meals were brought over to each unit. With no kitchen of their own, the Cuban detainees had gone for days now without food. In every brief conversation they demanded that we send in something for them to eat, but we had made it clear that they had to give us something in return. So far this had not led to anything positive, but I judged that hunger, properly manipulated, provided our best opportunity for leverage. What prisoners want on day one of an incident, or even on day two, is often much different from what they are willing to accept only a few days later, after they come to see that they are not in as much control as they'd initially thought. 
It's no surprise, then, that a number of significant prison incidents have lasted around ten days or less, with the inmates ultimately accepting on the final day the deal that they could have had on day one. Simply put, inmates are more likely to make concessions or act reasonably when they get hungry, bored, and tired. I suggested that prison employees begin frying bacon and brewing coffee, the smells of which would provide a powerful incentive for the inmates to come out from behind their barricades. This concept of "aromatic warfare," as I called it, had been used effectively by the NYPD in the 1970s, when Frank Bolz once fried bacon in the hallway of a house where he knew the barricaded subject was hungry. The next day, at lunchtime, correctional officers set up a large outdoor grill not far from the front of Alpha Unit. Ostensibly, the grill was there to cook hamburgers for the officers in their riot gear, standing as a visible containment line around the facility. They tried not to obviously flaunt the food, but clearly inmates would be able to see and smell the grill. That night, our Spanish-speaking negotiator, Pedro Toledo, continued to call in, and at last his persistence paid off. An inmate picked up the phone and said, "We want to talk. Outside. Right now." This was our first real breakthrough in days, so Pedro and I immediately left the administration building and started walking across the courtyard. Correctional officers had erected mobile units with floodlights that shone on the walls of Alpha Unit. We could see the inmates in their blue denim prison uniforms emerging from the steel inner door of the unit and gathering against the heavy bars of the outer door. With their tattoos, head scarves, and occasional missing teeth, they looked the part of hardened criminals. Many of them had improvised weapons in their hands, shanks fashioned from scraps of wood or metal. 
No data show that exposed face-to-face negotiations produce a better result—and all of the dozen or more U.S. negotiators who have been killed performing their duties over the years died in face-to-face situations. In this case, however, the potential payoff appeared to outweigh the risk. With our snipers on alert, Pedro approached the men and began a dialogue; I followed about ten feet behind. Because a negotiator can easily get caught up in the dialogue and inadvertently put himself at risk, part of my job as his coach was to ride herd on him. Sure enough, in the intensity of his conversation, Pedro kept inching forward, and I kept reminding him to step back. After a while I was actually hanging on to the back of his jacket, tugging on it now and then to remind him not to get too close. I kept him at least thirty feet away from the inmates behind the bars at all times. My Spanish is limited, but I could hear him saying the kinds of things we always say: "I hear you. We'll work on it. We need to get on the phone and talk." Their most immediate demand was for food, but there were also heated denunciations of U.S. policy and rambling diatribes about what were seen as the injustices of each man's specific case. Despite Pedro's best efforts, what we emerged with was a grab bag of complaints rather than a coherent list of demands. The next morning we handed off our shift to the other team, briefing them on the exchange we'd had the night before. I told the day shift about the weapons we'd seen, and reminded them to keep their distance if they decided to move forward, as we had. But when we returned twelve hours later to relieve them, I saw surveillance photographs lying on the desk showing my colleague Clint Van Zandt, the other negotiation team leader from SOARU, and one of the other Spanish-speaking negotiators leaning on the bars while speaking to some of the same Cubans. They were within inches of one another. 
In the photographs you could see the makeshift weapons the prisoners were brandishing. We had a meeting later to discuss this incident, and tempers flared. Van Zandt said that his proximity had been necessary to show the Cuban inmates that we were not afraid of them. He felt that this was important culturally. I didn't agree with his rationale then and I still don't. In my mind, while some risks are unavoidable, safety should always be the primary consideration when negotiating. Thankfully this unnecessary safety breach didn't happen again. Despite these forays and the limited rapport they created, our engagement with the inmates remained more of a running argument than a true negotiation. All in all, however, we were fortunate to be in a stalemate rather than an escalating crisis. The question of when we might need to take decisive action—an assault—was always on our minds, but so far things had remained sufficiently calm for us to pursue our measured course. On Wednesday, August 28, seven days into the standoff, the inmates suddenly asked to meet with Cynthia Corzo, a reporter from _El Nuevo Herald_ , the Spanish-language edition of the _Miami Herald_. Corzo proved willing, and we agreed to the dialogue, demanding in exchange that the inmates release one hostage. A few hours later, Kitty Suddeth, a twenty-four-year-old prison secretary, appeared at the gate looking like death warmed over. For seven days she had lived in the same clothes, without food, without the chance to wash, and in fear for her life. Correctional officers rushed to support her as she came out into the courtyard. Her ordeal had been so terrifying that she would never return to prison work. We told the inmates that they could send several representatives to meet with Corzo. Correctional officers put a table and chairs outside the main door to Alpha Unit and set up a canopy overhead to block the sun. 
Corzo met with them in three separate sessions, allowing the men to tell their side of the story. Meanwhile, Kitty Suddeth had provided important information that changed our view of the gravity of the situation. She warned us that whatever fragile leadership had once existed was now losing control. The inmates had begun to fight among themselves, and in her opinion, the more dangerous individuals were gaining influence. Our negotiators continued to hold out the promise of food in exchange for hostages being released. We did achieve one concession when the inmates next agreed to allow prison doctors to assist one of the hostages, a correctional officer with high blood pressure. He came to the front gate, and a medic—actually an FBI agent—checked his blood pressure and provided him with necessary medications. This agent was then allowed to converse briefly with and provide first aid to the rest of the hostages at the front entrance, one at a time. During this exchange, some managed to slip notes to our agent expressing their belief that they were about to die and that this might be their last opportunity to communicate with their loved ones. The grim evidence they cited was that they had been ordered to place their identification cards in a pillowcase. They were told that one card would be drawn and that person would be killed. Agent Whitaker took this as a sign that it was time to move in with a tactical assault, and I concurred. I still felt that the long-term prospects for surrender were good, but I also felt that it was highly probable that at least one hostage would die before we would be able to bring the inmates to the point of standing down. Given this very real threat, it was time to move. One of the advantages in dealing with a siege in a prison is the availability of detailed plans of the facility. 
HRT and BOP tactical officers had removed inmates from another unit within the prison that was identical to Alpha, and they had used it to practice each step of how they would enter, secure the hostages, and subdue the inmates. Then again, prisons are made to keep inmates from getting out, which means that they are none too easy for tactical officers to breach. A half dozen FBI and BOP officials met in the warden's conference room to go over the plan. HRT had already assembled all the equipment and personnel required, including several FBI field SWAT teams. They were under the command of Special Agent Dick Rogers, the new commander of the HRT. Tall, with ramrod-straight posture and red hair clipped in a military style, Rogers had served as a noncommissioned officer in the military. He was clearly type A; I noticed that his jaw seemed perpetually tense, as if he was ready to spring on someone at any moment. His nickname, I would come to find out later, was "Sergeant Severe." Though a tactical operation was in the works, I emphasized to the others that our job as negotiators was not over yet. Crisis management works most effectively when both elements, tactical and negotiation, work in close coordination. Given the challenges of breaking in to a prison, our job would be to soften up the inmates to minimize their ability to resist and maximize the chances that all the hostages would come out unharmed. During an assault, hostage takers' first instinct is to preserve their own lives rather than to harm hostages, but the longer an assault drags on, the greater the possibility of hostage execution. Thus the circumstances placed a premium on quick and decisive action. For our part, the negotiation team developed a plan to lull the inmates into complacency, a plan that I admit sounds like something out of an old folk tale. We recommended that we pretend to give in to their demand for food with no preconditions. 
We felt that such an apparent victory would make them lower their guard. The risk with this plan was that if for some reason the assault had to be postponed, our having provided them food without getting something in return would weaken our bargaining position. But that was a risk worth taking. At a more biochemical level, the rest of the plan called for the food to be as rich and plentiful as the prison kitchens could manage. We ordered up steaks and potatoes with gravy, as well as cakes and pies. We really went over the top, assuming that these famished men, accustomed in the best of times to a limited prison diet, would gorge themselves at the first sight of this high-calorie feast. That evening Pedro got on the phone and delivered what appeared to the inmates to be a major concession. "Okay, you get your food. We're going to feed you, so don't hurt anyone. Stay cool." In the bright glare of the mobile floodlights, correctional officers brought the food over on heavy aluminum carts. From our vantage point in the administration building, we watched through binoculars as the men came to the gate and immediately began to grab the food, even before the carts were rolled inside. We waited about an hour, and then Pedro called again. Apparently the food had begun to work as we had hoped. There were sounds of celebration in the background, and the inmates who spoke sounded arrogant and cocky. "We want more and better food tomorrow," they said. "You cocksuckers better deliver some real Cuban food this time." It appeared that the inmates had taken our bait and literally swallowed it, hook, line, and sinker. HRT waited until well after midnight, then began to move their men into ready position. Pedro stayed by the phone, but I stepped outside to see two lines of big black SUVs slowly and deliberately roll across the campus toward Alpha Unit. The trucks stopped, and two agents in full armor ran ahead to the front gate, bent down for a moment, then ran back. 
At precisely 3:43 a.m., ten days after the siege had begun, a series of explosions blew open the front entrance of Alpha Unit and lit up the sky. HRT and SWAT members piled out of the SUVs and stormed the building. They carried ladders, saws that could cut through steel, and additional explosives. Following the plan they had practiced, they fanned out through the building, quickly located the three separate rooms where hostages were being held, and secured their safety. Other teams of HRT and SWAT personnel then moved to their assigned sections and took control of all of the inmates. Despite the risks they faced, these tactical teams performed brilliantly and secured the prisoners and the prison without firing a single shot. All the evidence suggested that our two-part plan had worked like a charm and that the rich feast, combined with a sense of victory, had lulled the inmates into a complacent slumber. All hostages were freed unharmed, and the next day, thirty-one of the Talladega detainees were boarded onto an aircraft and flown back to Cuba. This was a great moment for the FBI, compelling validation—as if more were needed—of the wisdom of the Bureau's standard approach to crisis management, an approach that integrates negotiation and tactical operations as two parts of the same whole. From the beginning of the special operations units in the late seventies, authority for balancing the role to be played by each unit was in the hands of the on-scene commander, usually the Special Agent in Charge for that location. As I saw it, Talladega was a textbook case of how negotiation and tactical operations can work hand in glove to bring a dangerous situation to closure without bloodshed. Here negotiation had worked like an artillery barrage to soften up the opposition and gain critical intelligence information from a released hostage, enabling the tactical forces to move in with far greater chances of success. 
But as the after-action review took place I began to wonder whether the Bureau fully appreciated the negotiation team's role in the successful rescue. Our negotiation staff at Quantico consisted of three people; HRT had sixty-five and had a budget and training time to match. Negotiation wasn't even really thought of in the same light. Unfortunately, it would soon become clear that in fact negotiators were considered by some as subservient to the tactical team. After Talladega, the balance would shift more toward tactics, giving those in charge of tactical assaults far more power and influence. The limited lesson that some officials took away from the Talladega success would have escalating and tragic consequences. America in the early 1990s saw a series of antigovernment and cult-like groups, driven by extreme religious and political sentiments, retreating into psychological bunkers, as well as actual compounds where they chose to isolate themselves from mainstream society. These groups would provide one of the thorniest problems ever to confront the FBI. First, dealing with these groups would directly expose the unresolved tension within the Bureau that pitted negotiation advocates against those who favored hard-line tactics. Second, FBI missteps would quickly add fuel to the brushfire of separatist antigovernment movements. The flash point for that brushfire was a confrontation involving a former army soldier and Iowa factory worker named Randy Weaver. He and his wife, Vicki, had sought to escape what they saw as a corrupt world by squatting on twenty acres of land in Idaho. Vicki, especially, was deeply religious. She saw the "end times" approaching, the apocalyptical battle described in the book of Revelation between God's chosen few and the forces of evil. The Weavers hoped that they and their children could ride out the turmoil in their cabin on Ruby Ridge, near the town of Naples, in Boundary County, Idaho. 
In 1984, a series of disputes over trespassing onto a neighbor's property and frequent gunfire brought Randy Weaver to the attention of local authorities. In 1986, Weaver attended a meeting of the Aryan Nations, a right-wing separatist group, where he got to know a man serving as an informant for the Bureau of Alcohol, Tobacco, and Firearms (ATF). In 1989, the informant claimed that Weaver sold him two sawed-off shotguns, weapons that federal law prohibits. In 1990, a grand jury indicted Weaver for making and distributing illegal weapons. Weaver was arrested and released on bail, but he failed to show up for his trial date. At that point, relations between Weaver and the government went from bad to worse. To avoid further exposure to arrest, Weaver stayed holed up in his remote cabin on Ruby Ridge, never leaving the property. On August 21, 1992, a surveillance team of six U.S. marshals carrying M16 rifles and wearing night vision goggles climbed up Ruby Ridge to scout out areas where they might arrest Weaver away from his cabin. Their movement and scent alerted Weaver's dogs, which began to bark. Weaver, his fourteen-year-old son, Sammy, and Kevin Harris, a family friend, armed themselves, let the dogs go, and followed along to investigate. How exactly it began is not entirely clear, but an exchange of fire broke out on Ruby Ridge, and within a brief while, Deputy Marshal William Degan lay dead, as did Sammy Weaver and one of the dogs. Randy Weaver and Harris retreated into the cabin, along with Weaver's wife, Vicki, and the other Weaver children. The standoff that ensued would last for ten days. I was in Bermuda with Carol at the time, celebrating our eighteenth wedding anniversary and attempting to get away from it all in a small guesthouse with no telephone or television. It was one of the few times in my career that I've been fully isolated.
Nonetheless, I found a newspaper in a local store, and spread across the page above the fold was an article about the siege. I used the phone at a nearby hotel and immediately called my boss, Charlie Prouty, back at Quantico. Charlie briefed me on what he knew and told me to continue with my vacation for now, but to be prepared to respond when I returned. Already en route to Idaho was my partner, Fred Lanceley. With him on a military C-141 aircraft was Dick Rogers, who was in charge of HRT and had led the successful assault at Talladega. His HRT and my team were both at Quantico and we saw each other in the gym daily. He was cordial but a bit of a loner, and we hadn't really gotten to know each other after Talladega. After joining the FBI, Rogers had been a field agent in Arizona, and he had also worked in the bomb tech section at FBI headquarters in Washington. True to his "Sergeant Severe" moniker, he epitomized the tough-guy school of law enforcement. As a grade 15 Assistant Special Agent in Charge (ASAC), the equivalent of a colonel in the army, Dick was one rank higher than either Fred or me. While FBI protocol was that the negotiation and tactical programs were to be given equal weight during any incident, the reality was the HRT had more than sixty-five agents, millions of dollars of equipment, and a more senior manager who, especially after the success at Talladega, had greater access to and influence with key FBI decision makers. The FBI's historical desire to be tough on criminals naturally favored tactics over talk. When they arrived in Idaho, Dick briefed his team on the current situation and issued the rules of engagement. It became clear to Fred that Dick had already decided that this was a tactical situation only, and there would be no negotiation. Despite this, Fred said he would help develop information in the command post and be available if needed for negotiations. Perhaps the death of a U.S. 
marshal had pushed Rogers immediately into action mode. But those of us who dealt with him could not escape the feeling that he also never appreciated the important supportive role negotiators had played in softening up the Talladega inmates, thus clearing the way for a successful assault. As would become obvious as events unfolded, Rogers had no interest in dealing with Randy Weaver through any means other than force. And in the absence of any other experienced, countervailing FBI command leadership on the ground in Idaho, Dick Rogers would literally call the shots. Almost immediately upon arrival at Ruby Ridge on August 22, Rogers sent FBI HRT snipers and observers up the mountain to reconnoiter the Weaver cabin. He did so with rules of engagement that were substantially less restrictive than those customarily employed. The normal rules state that FBI agents may use their weapons only to protect their own lives or the lives of others, or if they feel they are in danger of serious bodily harm. But according to a Justice Department task force that subsequently investigated the incident, Dick Rogers's rules "instructed the snipers that before a surrender announcement was made they could and should shoot all armed adult males appearing outside the cabin." These rules not only contradicted long-standing FBI policy, they were later found to be unconstitutional. This was a self-fulfilling approach, and it led quickly to disaster. Hearing the noise of an FBI helicopter, Weaver, his sixteen-year-old daughter, and his friend Kevin Harris stepped out of the cabin. They were unaware of the FBI's presence. Without issuing a warning, an HRT sniper fired once, wounding Weaver. As the three retreated back toward the cabin door, the sniper fired again, thinking he had missed with his first shot. This second bullet went through a door, and hit Vicki Weaver in the head; it then passed through Harris's chest. 
Harris would survive, but Vicki Weaver, who had been standing just inside the door, out of sight, holding her ten-month-old daughter, Elisheba, died on the spot. With Weaver's son Sammy and Vicki now dead, and Randy Weaver and Harris wounded, what remained of the Weaver family stayed in the cabin for another ten days. When FBI headquarters instructed that negotiation efforts commence, Fred went up the hill in an armored personnel carrier and used a bullhorn just outside of the cabin to try to communicate, but they now refused all of his efforts. Based on what had transpired, they could only assume that the government was intent on killing them; not too surprisingly, they were reluctant to talk. Eventually, former Green Beret James "Bo" Gritz, a heavily decorated Vietnam vet who had made a second career as a survivalist, conspiracy theorist, and liaison to various right-wing groups, appeared on the scene, offering to serve as an intermediary in an effort to secure a peaceful surrender. He convinced the team that he had enough in common with Weaver that he would be able to talk him out. Fred coached Gritz on the approach he should take, which was to convince those inside that they would not be harmed if they came out peacefully. Over a period of several days, with Fred's coaching, Gritz and Jack McLamb, another right-wing figure, helped convince Weaver and his family to come out by acting as their escorts down the mountain. Harris surrendered first, followed the next day by Weaver and his three daughters. Despite all the violence in the first hours of this incident, once they established good communications, the situation was resolved without any further loss of life, a testament to the value of negotiations. Weaver was charged in federal court with a variety of crimes, including murder, conspiracy, failure to appear, and making and possessing unregistered weapons. 
But given countervailing charges of government misconduct—primarily the use of excessive force—he was eventually acquitted on all counts except failure to appear. Ultimately, the federal government awarded Weaver $100,000 in damages and $1 million to each of his daughters. Because of Hurricane Andrew striking Florida at the time, this infamous incident received only limited publicity at first, mostly in regional papers. However, news of the incident quickly spread to members of right-wing militias, becoming a rallying cry and recruiting tool for those opposed to government authority. It would be among the motivations for the 1995 Oklahoma City bombing that killed 168 people. Equally tragic, the FBI made no immediate attempt to learn from its mistakes. The same inclination to use force, or "action imperative," would prevail with even more horrific consequences the next time the FBI was summoned to a major siege incident. This would occur only six months later with another group of people driven to extremes, gathered at a place with the inauspicious name of Ranch Apocalypse.

# CHAPTER SEVEN

# **NEGOTIATING WITH THE SINFUL MESSIAH**

_There is nobody as enslaved as the fanatic, the person in whom one impulse, one value, has assumed ascendancy over all others._

—MILTON R. SAPIRSTEIN

On February 28, 1993, I was with my family, just leaving the parking lot of our local hardware store in Virginia, when my beeper sounded. I pulled in to a Burger King parking lot and called my boss, Rob Grace, at Quantico. An armed force of eighty ATF agents had just that morning converged on the isolated compound of a religious group living in Mount Carmel, Texas, near Waco. The plan had been to execute a search warrant on the compound and an arrest warrant on weapons charges against the group's leader, Vernon Wayne Howell, also known as David Koresh. There were also past allegations of child abuse, so the plan included securing the group's children, then conducting a thorough search.
But apparently the action had been carried out more like an assault than an investigation. As the lead ATF agent approached the entrance to Koresh's Ranch Apocalypse, all hell broke loose. Four ATF agents and several members of Koresh's group were killed. When I arrived at the small airport in northern Virginia, I saw two FBI planes, one large and one small. I stood on the tarmac and watched as Dick Rogers, along with other senior FBI and ATF officials, boarded the larger one, an executive jet. I boarded the much slower propeller plane to which I was assigned. Virginia to Texas is a long flight for a piston-driven aircraft, especially one that needs to stop in Little Rock on the way to refuel. As I flew west, it occurred to me that the FBI's travel priorities spoke volumes. The idea that the head of HRT needed to be rushed to the scene, while the head of the negotiation team could follow along later, was a clear indication of the mind-set after Talladega. The narrative that had emerged from that prison riot was that HRT had carried the day, to the exclusion of other components, and Dick Rogers's stock had never been higher. The disaster that followed from his preemptive actions at Ruby Ridge had done nothing to tarnish that image within the FBI, at least not yet. If anything, critical accounts of what had happened there created something of a bunker mentality among certain elements at FBI headquarters. For my own part, I was surprised that Rogers still had his job in spite of having overseen the debacle at Ruby Ridge. Then again, meting out punishment to the HRT commander would have been an admission of the gross errors of judgment that had taken place in Idaho. At 10:00 p.m. Central Time, our small plane came down on the runway of a former Air Force base a few miles outside Waco. This facility was now Texas State Technical College, and it would serve as our command post. 
I entered the hangar and made my way past a massive C-5 military aircraft there for repair, then up a set of concrete stairs along one side. As I reached the top, I observed a large office in which FBI technicians were setting up telephone lines and computers. I continued on toward a smaller office in the rear where I was told I would find Jeff Jamar, the Special Agent in Charge of the San Antonio FBI office. Jamar was the FBI on-scene commander. As I entered the room I saw a big man with broad shoulders, around six feet four inches tall, who had the tense and focused look of a pro football player on game day. Jeff Jamar had a reputation as a no-nonsense leader, and his demeanor was so intimidating that, as I would quickly learn, most of his subordinates tried to avoid him whenever possible. They also expended a great deal of energy speculating about, and trying to accommodate, his changing and often very angry moods. I introduced myself, and he gave me a cordial but perfunctory summary of events so far. Dick Rogers and some of his tactical team had set up a forward command post just outside the Koresh compound, about eight miles away. He also confirmed that while ATF was still nominally in charge, the murder of federal agents was now a matter for the Bureau, not ATF. We were simply waiting for word from Washington that the attorney general had transferred authority to the FBI. The currently operating negotiation team was set up in an old military barracks building a short distance away, and Jamar deputized one of his assistants to show me the way. As we walked across the base, the young agent briefed me on the overall mood of everyone involved. It was clear that the ATF personnel were in shock. He also shared some information about the group we were dealing with, who called themselves Branch Davidians. In sum, David Koresh, born Vernon Wayne Howell, sounded like a charismatic con artist—perhaps more accurately described as an antisocial personality or sociopath. 
The Branch Davidians were a breakaway sect of the Seventh-day Adventist Church. Koresh and more than a hundred followers had holed up at the ranch just outside town. Like Vicki Weaver, the Davidians believed in the book of Revelation's prophecies that the forces of evil will be unleashed during the "end times," and the righteous will have to do battle with them. In preparation, the Davidians had stockpiled automatic weapons and large amounts of ammunition, practiced defensive actions, grown their own food, and lived without modern amenities. Meanwhile, their unusual communal lifestyle also made them an object of curiosity and even suspicion among their neighbors. The Davidians were known to derive income from dealing in weapons. Koresh had a history of run-ins with the law, and there were persistent questions as to whether he was using his status as a religious leader to sexually exploit his followers, including young children. Koresh's charisma allowed him to gain control over people desperately seeking religious enlightenment. Despite having a learning disability, he had memorized large passages of the Bible at a young age and could string together seemingly unrelated verses of scripture to prove any point he wished. He told his followers that he was both the son of God and a sinner—the sinful messiah. He alone was able to drink alcohol, have sex with most of the women, have air-conditioning in his room, watch television, and avoid doing any physical labor at the compound. In essence, he told his followers to do as he said rather than as he did. Toward the end of 1992, a UPS driver noticed the outline of grenade casings in packages he was delivering to the compound. He alerted authorities, and shortly thereafter, an undercover agent working for ATF infiltrated the Davidian community.
He observed that the Davidians had modified certain weapons to make them fully automatic, which was not only a violation of federal law but also a clear sign of their belief that they had to prepare for Armageddon. Whatever had led ATF to proceed with the aggressive show of force they had launched that morning, their hope for success had been based in part on the expectation of surprise. Informants had told them that the Davidians locked away their guns on Sundays, the first day after their Sabbath, and would be focused on working outside on a large addition to the compound. But surprise was simply not in the cards. Early in the morning, television news teams from Waco had been on the road already, heading to the compound, known as Mount Carmel or Ranch Apocalypse. Who tipped them off to impending events? What is known is that one news crew asked directions to Mount Carmel from a rural mailman they encountered at a country crossroads not far away. What they didn't know was that this mailman was David Jones, the brother-in-law of David Koresh. Jones quickly drove back to the compound and relayed this information to Koresh, who at the time was meeting inside the compound with undercover ATF agent Robert Rodriquez, who had rented a home nearby posing as a student and feigning interest in learning about the Davidians' beliefs. Koresh broke off their religious counseling session and told Rodriquez, "They're coming to get us, Robert." Rodriquez hastily departed and immediately reported the comment to his superiors at ATF. Though they had lost the element of surprise, ATF leaders chose to move forward anyway, a fatal error. The exact details of what happened as the ATF tactical units approached the entrance to the compound are unclear. But a horrendous firefight broke out at 9:45 a.m. and continued for two and a half hours. By the time the shooting died down, four ATF agents lay dead and sixteen had been wounded. 
Five Branch Davidians were killed; many others had been wounded, including Koresh himself. For Koresh, this action only confirmed his view of federal authorities as reckless oppressors. But perhaps its most ill-conceived aspect was that it played into Koresh's interpretation of biblical prophecy. The book of Revelation uses the term _Babylon_ to refer to the earthly powers that oppress the righteous and with whom the righteous will have to do battle before the day of judgment. Here at the door of Ranch Apocalypse, in full tactical gear, were the "Babylonian" ATF agents. Rather than intimidating Koresh and his followers, the hostile display served merely to confirm for them that what the prophecies had foretold was at hand. Shortly after the shooting started, Lieutenant Larry Lynch of the McLennan County Sheriff's Department, working from a rear command post that had been set up at the Waco Police Department, received a call from Koresh seeking to broker a cease-fire. The cease-fire secured, the ATF agents were able to move forward and retrieve their casualties. In return, ATF agreed to call off the raid, withdraw, and stay off the Davidians' property. With live coverage on television, news of the incident quickly spread, and multiple law enforcement agencies, including the Texas Rangers and the Texas Department of Public Safety, rushed to the scene. The FBI negotiation team had set up in a long, narrow barracks that looked to be of World War II vintage. Inside was a large open space, no doubt once filled with military bunk beds. In the rear was a small room where officers had positioned themselves for telephone communication with the Davidians. ATF had no trained negotiators at this time. My first impression upon entering was that there were far too many men in this small space to carry out effective work. About a dozen ATF agents and others sat around in their blue tactical jumpsuits.
With heads in hands and ashen faces, many of them looked like soldiers who had just survived an ambush, but without the consolation of victory. They appeared so tired and downtrodden that I was surprised they had not yet been sent home. ATF supervisor Jim Cavanaugh was then functioning as the primary negotiator and was on the phone with Koresh. He introduced me to his ATF colleagues, as well as to some negotiators from the Austin Police Department who had also come up to help out. I also spoke on the phone with FBI Supervisory Special Agent Byron Sage, from the Austin FBI office. He was still at the police department with Lieutenant Lynch; they'd been working all day negotiating on a second phone line in the compound. Cavanaugh told me that tactical units had established an inner perimeter around the compound, with a motor home serving as a forward command post. In a slightly larger concentric circle, the sheriff's department and Texas Department of Public Safety had established an outer perimeter to control access. Beyond that second perimeter, the news media gathered in droves. Cavanaugh described conversations to date with Koresh, which, after the cease-fire, had been perfunctory. He explained to me that they were using two phone lines to communicate with the compound, the one being handled at the police department by Lynch and Sage, which connected to Wayne Martin, an attorney and Davidian who conducted business from inside the compound, and the second reaching Koresh himself. I made a mental note to consolidate those lines when the opportunity arose. To gain control of the situation, we needed to control and limit all communication in and out. In time, we would want to install a military-style field telephone of our own, to avoid any problems should standard phone lines be cut. The more immediate problem was that neither of these existing phone lines had been secured so that those inside could speak only with the authorities. 
Consequently, these lines were frequently tied up by news organizations attempting to land a big interview. Earlier in the day, the tabloid television show _A Current Affair_ had convinced an operator to break in to an ongoing negotiation call so that their on-camera personality could speak with Koresh. Koresh had also used his phone line to call his mother and give her his last goodbye, something I would not have wanted to happen. On the plus side, I learned that the negotiation process had already borne fruit. At 9:03 p.m., about an hour before I had landed in Waco, the negotiation team had delivered on a promise to have a local radio station recite a verse of scripture. In return, Koresh had allowed two children to leave the compound, and then another two, forty minutes later. Four down and perhaps a hundred left to go. Byron Sage and I linked up in the early morning hours to consult with SAC Jamar. He said that a decision was forthcoming on changing lead agency status to the FBI. Rogers was already at the forward command post, and Jamar wanted our team to be ready to take over negotiations as soon as possible. I immediately recommended that we set up a negotiation operations center, or NOC, inside the hangar, in a separate space immediately adjacent to the FBI command post. I requested that technical personnel act quickly to capture the two telephone lines leading into the compound to thwart further media interference and other outside calls. I also requested authorization from Jamar to bring additional FBI field negotiators to Waco. As I saw it, the negotiation process could become quite complex and protracted. "I think you're right about that," Jamar said. Then he nodded. "Bring in your boys." I then asked about how we would coordinate our negotiation efforts with the tactical command. Jamar said that communication with Rogers's group should go through him, since Rogers was up forward. I should consult with Jamar, and he would communicate with Rogers. 
Again, this was a shift that should have alerted me to what was to come, since standard FBI protocol called for a closer exchange between negotiators and the HRT. "To tell you the truth, sir, I'd much prefer that we all confer directly, which is the way—" "I think we'll do fine with the procedure I laid out," Jamar said. I looked at him, and his eyes made clear that our discussion was over. I went back to the barracks, where Jim Cavanaugh and most of his team had been working since around noon. They asked me to take on primary negotiation responsibilities through the early morning hours so that they could get some rest. Rick Shirley, an experienced negotiator from Austin PD, and a few others, would stay on to assist me. It was time for me to get on the phone and introduce myself to Koresh. Tired as they were, the ATF men were slow to leave, so to avoid any misunderstandings, I thought it best to be up front and provide some perspective. "If you guys are going to hang around, you have to understand that Koresh is really pissed at the ATF, and to some extent I have to run with that. I have to play up being FBI, not ATF, so if it sounds like I'm making you out to be the bad guys... well, it's just what I have to do. So I hope you understand why I'm doing it." I looked into each of the tired faces staring back at me. Cavanaugh concurred and most of the others nodded in agreement. At 12:20 a.m., just before the ATF team left, the Davidians released two more children (now six in total). Cavanaugh stayed behind to introduce me to Koresh. He rang up the compound and, once he had Koresh on the phone, explained the transfer that was taking place. He then handed me the phone. I took a deep breath and said, "Hi, David. This is Gary. I just got down here, and I want to make sure that you and your family get out of this situation safe and sound." "Hey," he said. "Gary, huh. So who'd you say you were with, Gary?" "The FBI." "Hmm." Koresh sounded tired as well. 
Obviously it had been a very long day for him. After introductions, we chatted for a while, and I asked him to tell me about what had happened. As he began to describe the raid from his perspective, I was struck by how willing he was to talk about what had happened, and by his relatively calm demeanor. He was angry, but it was a contained anger, directed at ATF. He seemed to be trying to make his case to me. "I just don't get it," he said. "Why did those guys have to come in here shooting the place up? It just wasn't necessary...." And then I heard him groan, which provided an opening. "I understand you were hit by a bullet," I said. "You know, we can get you some medical attention right away, David. You just need to come out of there." "I'm all right," he said. "That's up to you," I said. "But if you come out, I assure you that every one of your people will be treated with dignity and respect." "Yeah," he said. "We're not ready to come out." Throughout the night, Koresh and I would speak on the phone every couple of hours. I had two basic objectives in continuing to call him back. First, I wanted to establish some trust between us. Second, I wanted to try to secure the release of additional children. "You know, David, the FBI is in charge now. We weren't involved in the shootout. We're here for just one reason, and that's to reach a peaceful resolution. After that, we'll investigate what exactly happened and determine the truth. But first we have to end this standoff. Which is why we really need you to come on out peacefully." Koresh continued to brush aside my requests for him to surrender, so I continued to press, but not too hard. "You and I need to keep working to resolve this peacefully. You know, what would really help is if you let some more of your people come out. Would you be willing to do that?" "I'll think about it," he said. Just before dawn, he told me he would release two more children in the morning. At 8:22, he followed through on his promise. 
The negotiation team had now secured a total of eight youngsters from inside the compound. It was becoming clear to me that we were not going to get any grand surrender right away or all at once, but we might very likely continue to get a few individuals out in periodic clusters. Later that afternoon, at a quarter to five, the attorney general officially passed operational control of the incident to the FBI. We moved the negotiation team over to the FBI command post, which was now fully functioning. One of our first actions was to capture both telephone lines into the compound. Now when the Davidians picked up their phone, they got us and nobody else. With full responsibility for the negotiation effort, I set up two teams operating in twelve-hour shifts, with me as the overall negotiation coordinator. Team leaders would be Byron Sage and Jim Botting, an experienced negotiator who had flown in from Los Angeles at my request. We also relied on Jim Cavanaugh from ATF, who had already developed some rapport with Koresh. My job would be to guide strategy, not to be the person on the phone. I also asked the Austin Police and McLennan County negotiators to remain on our team to assist. As the incident progressed over the coming days and weeks, I would stagger my long, sixteen-plus-hour days over portions of both shifts. My goal was to maintain continuity and a consistent strategy in our approach, and also to act as a bridge between the two teams. Another large part of my job would be to regularly brief the man with overall responsibility, SAC Jamar, as well as the three other SACs who had flown in from New Orleans, El Paso, and Oklahoma City to assist in managing the incident. Managing a crisis properly depends on managing information. In the NOC we posted situation boards on the walls that enabled everyone to stay up to date with critical information. An adjacent smaller room was for the exclusive use of the active negotiation team. 
Each core team consisted of five individuals. A coach sat next to the primary negotiator, monitoring the call and passing notes as required. Another negotiator operated the phone system and made sure the tape recorder was working properly for post-conversation analysis. The fourth team member served as the scribe, maintaining a log of key points in the discussion. These four negotiators and the shift team leader, as well as myself, were the only ones allowed in the room during live negotiations. The remaining members of the larger negotiation team, as well as the profilers on hand to develop background information, were able to listen via a speaker setup in the larger adjacent room. Immediately after each negotiation session these two groups would sit together to assess the last call and prepare for the next. I made certain that nothing else was undertaken until these steps were completed. This was a hard-and-fast rule so that we would always be prepared for any unexpected next contact from the Davidians. As the crisis continued, each day I would deliver an oral summary of each significant call to SAC Jamar and any other on-duty SACs, then follow up with written reports. We then faxed these summaries and our recommendations to experienced negotiators stationed at FBI headquarters back in Washington, D.C., who would present them and explain their meaning to senior FBI executives. I knew it was essential that our views reach senior management without any filters. Meanwhile, Rogers shuttled between the perimeter and the command post several times a day. I would sometimes see him in Jamar's office, but he rarely did more than stick his head into the negotiation operations center. On March 1, at 4:48 p.m., Koresh released two more children, bringing to ten the total number of those who had come out. At 8:27 that evening, day two of the siege, the number rose to twelve. 
Each time a child was to be released, the HRT liaison at NOC would radio the tactical agents just outside the ranch and advise them to move forward to pick up the released children. I would then dispatch negotiators to the inner perimeter, eight miles from our location, to pick up the children and drive them back to the NOC. The children came out with notes pinned on them giving instructions as to where they were to be sent, mostly to relatives who were not Davidians. Our agents brought them into the NOC, and often a kid would sit in the negotiator's lap while he or she would call the compound to announce the child's safe arrival. To our surprise, Koresh allowed the parents to come to the phone each time and personally verify that their child was well and being treated with care. We realized that these exchanges helped Koresh to retain his image as a caring and benign autocrat among his followers. I didn't believe he was allowing these children to leave out of genuine concern for their safety; rather his intent seemed to be to embolden the parents who stayed behind, freeing them from parental concern so that they would fight to the death for him. At this stage of the ordeal we were still trying to piece together a complete picture of who was with Koresh inside the compound. By speaking first with the children and then with their parents, we were able to fully identify a large number of the adults. This contact also allowed us to impress upon the parents that we did not want to see any further harm come to anyone inside, to personalize ourselves to them. At this point, we had brought out twelve children, already a far better outcome than one might have expected given the gun battle that had raged only a day before. But despite that progress, our agents' tenderness with the children, and our attention to the parents' concerns, all was not sweetness and light. The Davidians inside the compound had heavy weapons. 
Two of the ATF agents who died had been killed by a .50 caliber sniper rifle. The barren countryside around the compound provided nothing more than a few mesquite trees as cover, and thus the HRT teams had brought in armored vehicles from the army base at nearby Fort Hood out of necessity, for adequate protection against the Davidians' arsenal. While this was a reasonable precaution, the unintended consequence was to exacerbate the mixed message that permeated the entire undertaking. While negotiators tried to show understanding and find common ground, the tactical people couldn't help but present a warlike image that heightened the tension. An empathetic voice over the phone can only do so much to offset the powerful impression available to the subject's own eyes. With this in mind, we redoubled our efforts to demonstrate peaceful intentions, as well as our resolve to assist the Davidians in coming out and rejoining their children. Our profilers' research told us that Koresh had for some time been preaching the necessity of martyrdom in the final confrontation with Babylon. The biblical imagery was now reinforced by his having been shot. In terms he could appropriate from the book of Revelation, "the lamb had been wounded." We tried not to give him any more evidence to use in convincing his followers that this was the ultimate showdown between the forces of good and the forces of evil. By appearing reasonable and willing to help, we tried to show that the FBI was not, as he might suggest, Babylon. Before the siege, a documentary film about the Davidians had been produced by the Australian version of _20/20_ as a result of child-abuse complaints made by two ejected Davidians from Australia. The film footage was instructive. They had filmed Koresh giving lengthy sermons to his followers at the compound. 
When our profilers brought these tapes in for us to review, the man we observed, with his silky smile, air of superiority, and emotionally laden sermonizing, came across as a slick con artist more than anything else. But we were detached law enforcement officers, not naive seekers after enlightenment. As we examined the faces of his followers, they appeared absolutely mesmerized, hanging on his every word. Comparing various statements he had made on these tapes, as well as statements he made to us, we could also see how easily he altered his stated beliefs to serve whatever seemed to be in his interest at the moment. If there had ever been any doubt, this persuaded us that arguing religion with him would be a fool's game. He may have truly believed that he had some divine mission, but in my opinion, he was using religion primarily as a tool for manipulating and controlling others. In addition, the local newspaper in Waco had begun running a series of articles titled "The Sinful Messiah," which provided more useful information about the Davidians and how they functioned. During these first few days, we learned that Koresh's tenure with the Davidians had hit a rocky patch a few years earlier. He had pursued the elderly widow of Branch Davidian founder Benjamin Roden and wound up having a romantic relationship with her. This led to a confrontation for control with Roden's son, George, which culminated in a gunfight. Koresh was subsequently arrested and prosecuted for assault, but the jury found him innocent. I discussed this with my negotiation teams, and as a result, our primary negotiators began to use that incident to remind Koresh that the court system could be fair. Henry Garcia, who had become our primary negotiator during the day shift, hit this theme hard. The American legal system had sided with him in the past, so there was no reason for him not to be able to expect a fair trial for the deaths of the ATF agents. 
At one point he said he would be willing to come out and be judged by what he called "your law," but he did not say when. Later that night, Koresh released two more children, bringing the total to fourteen. At 10:06 p.m. on March 1, Henry was on the phone with Koresh when the Davidian leader made an offer out of the blue. If we allowed him to deliver a nationwide broadcast, then he and his followers would surrender peacefully. With a hand signal I encouraged Henry to pursue this in more detail. "Okay, David," Henry said. "Let's see what we can do. What sort of message do you want to convey?" "I want to speak about the book of Revelation," Koresh said. Around the room, we exchanged knowing glances. Fresh on our minds was the 1978 incident in Jonestown, Guyana, when Reverend Jim Jones coerced over 900 of his People's Temple followers to "drink the Kool-Aid" that led to their deaths. The book of Revelation, with its focus on the apocalypse, could be a dangerous text in the hands of a charismatic and narcissistic leader. Henry asked Koresh if what Jones had done was the kind of thing he had in mind—a farewell statement and then mass suicide. "I'm not having anybody kill themselves," he said. We told him that we would consider the idea, and the next day Koresh repeated his offer to surrender in return for airtime. "So, David, if you just want to talk about the Bible, how about a tape-recorded message? Then we can review it and run it past our bosses." "That's okay," he said. "I can work with that." "And just one more thing. We want you to start out by saying on the tape that if the message is broadcast over nationwide radio, then you and all your followers will peacefully surrender." "That's right," he said. "That's the deal." I asked for a meeting with Jamar and the other SACs and brought with me profiler Pete Smerick from the FBI Behavioral Science Unit. 
We told them that we saw little risk in playing the tape, assuming that it did not contain any references to suicide. I made clear to the commanders that we had hope, but no guarantee, that Koresh would follow through on his promise. "Why give him anything when there's no positive assurance we get something in return?" Jamar asked me. I explained that this was not a typical bargaining interaction because we had so little leverage. "The only thing Koresh wants from us is for us to go away. We're not going to do that. We can't bargain, since he doesn't want anything else. So really, we're not giving up anything." I knew I risked losing credibility with Jamar if Koresh didn't follow through, but as far as I was concerned, we were putting nothing at risk. If it turned out that he was conning us, we would have demonstrated our goodwill, and the onus would be on him to demonstrate good faith in some other way. Jamar gave his approval, and Koresh made the tape and sent it out for our review. As promised, the recording contained nothing more than a rambling sermon about the book of Revelation. We carefully listened to all fifty-seven minutes and found nothing in it that suggested it was a preamble for mass suicide. We even reached out to religious scholars at nearby Baylor University for their interpretation, and they, too, found nothing problematic. At 8:15 a.m., March 2, Koresh released two more children, bringing our total to sixteen. He also released two women in their seventies who lived in a trailer adjacent to the compound. Unfortunately, though long-term Davidians, they seemed a little out of it and couldn't give us any useful information about conditions inside, or what, if anything, the Davidians were planning. At 1:20 p.m., following further negotiations, Koresh released another two children. That made eighteen children and two adults out. Each additional person released gave us hope that we were headed in the right direction. 
That afternoon at 1:32, the Christian Broadcasting Network broadcast Koresh's tape nationwide, uncut, as promised. In subsequent conversations Koresh told us that he had heard the broadcast and was pleased with it. Now it was time for him to deliver on his promise to come out peacefully. With Jamar's approval, we worked out a plan whereby Koresh would be carried out on a stretcher by several of the Davidians. The others would then follow in small groups, marching to school buses that would take them to the receiving facility. Koresh's number two man, Steve Schneider, would stay on the phone with us throughout the process to ensure coordinated movement. He would then come out last. Koresh agreed to all these arrangements, and we brought up the buses so that they could be seen from inside the compound. HRT stood by, ready to secure the individuals. I asked Bill Luthin, the HRT liaison officer working in the negotiation room, to take special care to avoid appearing to manhandle anyone, as this would be watched by those still inside. Bill was very experienced and agreed to emphasize this point with HRT team members. We didn't want any misunderstanding that might short-circuit this peaceful end to such a volatile situation. Earlier, Koresh had told us that twenty children, forty-seven women, and forty-three men remained in the compound, and we wanted them all to make it out alive. The negotiation team waited patiently, in radio contact with the frontline tactical people as the appointed time came and went. HRT reported no movement, so we called Steve Schneider. "Steve, what's going on?" "Everybody's lined up with their stuff, ready to go out," he said. He sounded confident, even relieved. "What about David?" "We're trying to get him downstairs on a stretcher, but the wounds make it tough to move him. He's hurting, you know." "Yeah, we know. It must be tough. Just do the best you can." We waited awhile longer, but still no one emerged from the compound. 
We called back in, but this time Schneider's optimism seemed to have faded. Koresh was still coming, he told us, but now his assurances sounded vague and unconvincing. "Steve," Henry said, "you've really got to come clean with us. What's going on? We've delivered on everything we'd promised. Everyone's standing by." "David just wants to give everybody a final Bible study lesson before coming out," he said. This sounded like something Koresh would do, so we regained some measure of hope that things were still on track. More time passed, and we called in yet again at 5:59 p.m. and spoke to Schneider. "The Lord spoke to David," he said. "The Lord told David to wait, not to come out." Now we knew we'd been had. Funny how conveniently this divine intervention had appeared. I slipped Henry a note, which he delivered verbatim: "But we delivered on our end of the bargain! We did everything you asked." "I understand. But God has the final word." "Steven, can you put David on the phone, please?" "He's praying. He doesn't want to talk right now." We were extremely disappointed, to say the least, but not totally surprised. As experienced negotiators, we were used to dealing with manipulative people, which is to say that we were accustomed to being lied to. These kinds of setbacks are a normal part of the negotiation process. It was important not to abandon our strategy just because Koresh had reneged in this instance, and just as important not to overreact. This created an immediate challenge. I knew that Rogers and Jamar would view it as a sign that Koresh was manipulating the negotiation team and that we were not being firm enough with him. Further, they would view it as an insult to their authority. I went into Jamar's office to explain what had—or had not—happened, and sitting in a chair in front of his desk was Dick Rogers. Both were visibly angry. I reminded them that we had warned them that this kind of thing could happen, but that it shouldn't alter our approach.
They listened, but I could see that they'd already decided that they wanted to punish Koresh. It became clear to me that their decision was based on a strong emotional response to what Koresh had done. "This joker is screwing with us," Rogers said. "It's time to teach him a lesson." "I don't think that's going to advance our cause," I said. "It doesn't matter if Koresh is jerking us around. The point is, we're getting people out of there." Rogers and I were talking past each other, both trying to influence Jamar, but his body language showed he agreed with Rogers. "My people can get in there and secure that place in fifteen minutes," Rogers said. "Still too soon for that," Jamar said. "But I agree it's time to teach him a lesson." I protested, saying we might well be able to get things back on track, but they were adamant, violating a core principle of the FBI negotiation program: never confuse getting even with getting what you want. Talladega had buoyed Rogers's confidence about what his team could do, but this was a very different situation; for one thing, they had guns, lots of them. For another, they had children inside. Philosophically, Rogers believed the best way to force them out was to tighten the noose around them, to apply increasing pressure until they capitulated. Yet I knew this approach would be counterproductive. The very first thing I talk about when training new negotiators is the critical importance of self-control. If we cannot control our own emotions, how can we expect to influence the emotions of another party? But I also remind my negotiators that "negotiators negotiate and commanders command." It is the negotiators' responsibility to make the very best strategy recommendations we can, but to also know that the advice we give commanders will not always be embraced. Despite my warning, Jamar ordered the armored Bradley vehicles to move onto Davidian property as a visible display of the FBI's power. 
I was concerned that this would only ratchet up the tension and damage our credibility. That proved true in our next conversation with Schneider: he angrily denounced us for moving the armored vehicles forward. "You promised to stay off our land," he said. "But David promised to come out. It was a firm commitment, Steve. My bosses are angry and frustrated," said Henry. "Honestly, we were going to come out, but what could we do? God told David to wait." Koresh had conveniently used God as the ultimate trump card, but from everything I could tell, Schneider sincerely believed what he was saying. Under the circumstances, he had to hold his faith in what David had said about God. This firm belief shut down the conversation, at least temporarily. When we spoke to Schneider the next day, now the fourth day of the standoff, he admitted that he was "personally embarrassed" that the Davidians hadn't followed through with the promise to come out. We hoped this signaled the opening of a wedge between Schneider and Koresh. Schneider was far better educated—he held a master's degree in divinity—and more articulate than Koresh, a high-school dropout. We had also learned that Schneider's wife, Judy, had become one of Koresh's concubines. Koresh had even fathered a child with her, whereas she and Steve had never conceived one. There seemed to be more than enough reasons for Schneider to harbor resentment that we could exploit. But that would require that Schneider had reserved some of his mind for independent thought. Of that there was no evidence. The FBI and ATF leadership team began holding regular daily press conferences, with key remarks prepared by my negotiation team. Jamar ran these at first. Our team provided him with the daily talking points we wanted to convey, not just to the world but to the Davidians inside: all we wanted was a peaceful solution, and our primary concern was the safety of the children. 
The scripted portion of the press conferences generally went well and served our objectives. We were less successful later when, during question-and-answer sessions, one or more of the FBI or ATF leaders would shoot from the hip. More than once during questioning by reporters, officials made offhand remarks casting doubt on the sincerity of Koresh's beliefs, with sarcastic references to his conversation with God. It then fell on our team to backtrack with the Davidians and explain what they meant. This did not help our cause. As governments and corporations have learned through the years, it's far better to have a designated press spokesperson stand before the media rather than the boss. When faced with a tough question, the spokesperson can reply that he or she doesn't have the information sought but will follow up later. This provides much-needed time to formulate and deliver the best answer to the question. It was also problematic that ATF officials continued to be involved in the daily press conferences. This undercut efforts to distance the FBI, and specifically our negotiation team, from this organization that the Davidians hated. Despite my repeated requests to remove the ATF from the press conferences, FBI officials in Washington preferred to try to underscore "unity" by keeping the ATF in the picture. Once the Bradleys moved forward, I realized the internal battle over strategy was going to be as challenging as talking to Koresh. There was a growing disconnect between the strategy we were pursuing as negotiators and the thoughts of the tactical folks on the perimeter. The deeper realization was that Dick Rogers had not been chastened at all by the outcome of his rash orders at Ruby Ridge. Buoyed by Talladega, he was still committed to the tough-guy rule book. Making matters worse, there appeared to be a growing misunderstanding at the forward position about what we negotiators were doing. 
Coordination between us and HRT was complicated by our separate locations; they were located just outside the Davidian compound, whereas we were eight miles away. I volunteered to brief the HRT operators as they were coming off or going on their shifts, but Dick Rogers declined the offer, saying it wasn't necessary and that he'd tell the guys what they needed to know. I was beginning to sense his personal frustration and growing discontent with the progress of the negotiations. I also recommended that Rogers, Jamar, and I meet face-to-face at regular intervals in order to work out any strategy disagreements, but Jamar again declined to follow my suggestion. He said that the existing system, whereby I met with him and then he went forward and met separately with Rogers, was working to his satisfaction. In truth, it contributed greatly to our problems. Despite the growing tension with the Davidians, we were able to get back on track, and on March 3, the fourth day of the incident, at around four-thirty in the afternoon twelve-year-old Mark Jones was allowed to leave. At seven-thirty the following morning, his eleven-year-old brother, Kevin, followed. Our tally was now two adults and twenty children released, about half the total number of children thought to be inside at the beginning of the siege. Regardless of how else Koresh might be manipulating us, he was letting these children live, and that was a good thing. At 8:41 on Friday morning, March 5, we negotiated the release of nine-year-old Heather Jones, the twenty-first child and twenty-third person overall to leave. Unfortunately, she would be the last person to exit for several days. It seemed that our continuing show of force had failed to make Koresh more compliant, and in fact had made him angry enough to break off contact. Perhaps more than a lack of communication and poor coordination, fundamentally different views of how to resolve the matter began to erode the trust between the HRT and negotiation teams. 
Dick Rogers called me personally from the forward area one evening, enraged that the Davidians were pointing one of their .50 caliber sniper rifles toward HRT personnel. He angrily told me to make contact and tell them in no uncertain terms that they should remove the weapon immediately or be fired on. I instructed the primary negotiator to do just that. He spoke with Schneider and the weapon was quickly pulled back. Days later I would learn that HRT personnel were absolutely livid that the negotiators had told the Davidians to pull back the heavy weapon. They preferred knowing where it was so they could keep an eye on it. I asked Lloyd Sigler, the capable HRT representative now working in the NOC, to explain to the HRT team members that we had been ordered by Dick Rogers himself to have the weapon removed. Lloyd passed on the information, but it never seemed to filter down to the team members. At no time did Dick Rogers ever explain to his own team, during or after the incident, that it was he who had ordered the weapon removed. Instead, HRT team members were left with the impression that we had undercut them. Despite these problems, Jamar continued to approve my suggested initiatives to get things back on track with the Davidians. The next day, we offered to send in a suture kit to treat Koresh. To personalize ourselves as human beings, rather than as some faceless enemy, we included a brief videotape showing each of the primary negotiators who had spoken with him so far. Each one of us held up a photograph of our own family and stated how important we knew Koresh's large family to be to him. We each signed off by stating our strong desire to see everyone come out unharmed. This was unprecedented. By Sunday, March 7, day eight, we found the negotiations with Koresh becoming increasingly challenging. 
His resistance to our efforts clearly increased; he began to subject the negotiators on the night shift to long religious diatribes, which, among ourselves, we called his "Bible babble." Our conversations to date had been pretty practical and secular in nature, but now his religious worldview came to dominate his side of the conversations. We tried to discourage these long telephone conversations when we realized that they kept Koresh up all night, which meant that he would sleep much of the following day. When he was asleep, no one else inside the compound could authorize the release of any more children. We also knew that no children had been released while we had been talking about religion. We noticed that while Koresh spoke in lofty terms of his religious philosophy, he occasionally digressed into decidedly less spiritual realms. One evening, in the midst of one of these religious diatribes, Koresh stopped and asked one of the negotiators what the team was eating for dinner. He was told by negotiator John Cox that they usually sent someone out to Whataburger, a nearby fast-food chain and the only place open late at night. In response, Koresh said, "Whataburger! That meat is terrible. If it turns out that I am the son of God, the world will find out about Whataburger." This did not sound to us like the comment of a man who truly believed in his own divine status. Just after lunch on March 7, Koresh told us that he would send out another child if we could accurately tell him the meaning of the third seal in the book of Revelation. Aware of the limits of our own biblical knowledge, we again consulted religious scholars at Baylor University. Armed with what they told us about the most common interpretation of the third seal, we reported back. Koresh listened, then told us that we were not even close, but said nothing more. He made no effort to tell us how we were wrong, and he refused to release anyone.
It's unlikely he would have agreed with any answer we came up with. Koresh was regaining his strength and returning to his normal pattern of manipulating those around him, including us. While it might have been tempting to be confrontational with him, we continued to try to push toward our goal of getting people out. My task was not to settle scores with a sociopath but to save all the lives I could. Soon after this we were presented with an unexpected opportunity. One of our negotiators heard a story on the then-popular Paul Harvey radio show about a fast-moving, guitar-shaped nebula blasting across the skies at thousands of miles per hour. Koresh was a guitar player and led a band made up of his followers, so we thought we could present it as a sign that it was time to come out. The negotiator on duty called up Schneider and asked if he had heard about the comment on the radio. Schneider said that he had not, but he became very excited, speculating that this could be the sign that Koresh was waiting for, a message from God that they should come out. We contacted Paul Harvey's staff and requested that his show rebroadcast the report. After the rebroadcast we called back and asked Schneider if Koresh had heard it. Schneider sounded disappointed. They had listened, but Koresh's only response was to say, "That's not very fast." That was the end of that. I was beginning to feel increasing pressure to show results if we were going to delay more aggressive tactical action. When Koresh indicated that the children inside the compound needed milk, we decided to call on McLennan County sheriff Jack Harwell to help work out a deal to get that milk to them. Harwell was not only a down-to-earth and straightforward law enforcement professional but an affable and easygoing man. He seemed to know how to talk to and get along with almost anyone. We felt that he might be able to serve as an intermediary and help overcome Koresh's resistance to our entreaties. 
Since the incident began, Jack had spent a great deal of time sitting in the negotiation room wearing his white cowboy hat, patiently listening to our conversations with Koresh. In the fall of 1992, when accusations of child abuse had first been leveled at Koresh, it was Jack who met with him to discuss the issue. The gentle sheriff, who didn't even wear a gun, had been polite and respectful, and they seemed to have gotten along well. At one point Koresh had even said that he might well have surrendered had Harwell come to arrest him rather than the heavily armed men from the ATF. A little after one on Monday afternoon, March 8—the ninth day of the siege—Jack called the compound, and the rest of us listened on headsets. Jack was magnificent. With just the right tone he was able to project his genuine concern. Koresh greeted him warmly; he obviously had great respect for the sheriff. Jack soon asked Koresh what he could do for the children, and, as expected, Koresh mentioned the need for milk. Jack told him he would make it happen. Even though we had been prepared to send in milk all along, having it appear to come from Jack would, we hoped, reestablish his bona fides with Koresh. It would also show he could get things done with the FBI. At 3:50 that afternoon, we left six gallons of milk just outside the compound. A couple of hours later we were surprised to receive a videotape of Koresh, his wife, Rachel, and their children, and also some of the other children Koresh had fathered. We popped it into our VCR and observed him on the tape commenting about the tape we had sent, the one in which we talked about our kids. I had the sense that he appreciated what we had done and that this was his way of reciprocating. He even introduced us to several of his family members on tape. He also took the opportunity to show us his wounds. During the initial shootout a bullet had grazed his wrist just above his left thumb. 
He showed us where that same bullet had continued on and struck his left side, leaving a clearly visible bullet hole. He told us that the sutures we had sent in earlier had helped, and he thanked us for them. Using a faux John Wayne voice, he joked that the pain was nothing a tough guy like him couldn't handle. Koresh was idiosyncratic and unpredictable, but we seemed to have reestablished a kind of rapport. We looked on that tape as a very positive sign, and I conveyed this to Jamar. But again, a moment of progress would be thwarted by the actions of our colleagues. At two-thirty the following morning, unbeknownst to me, Jamar approved a recommendation from someone to turn off all the power going into the compound. Coming as it did on the heels of the successful conversation between Koresh and Jack Harwell, the milk delivery, and the Koresh video, the timing could not have been worse. My team rightly felt that the rug had been pulled out from under us yet again. And so in addition to trying to convince Jamar to coordinate better, I also had to manage their growing frustrations. Even Steve Schneider questioned how he was expected to keep the just-delivered milk cold if the power was off. He had an excellent point. Turning off a hostage taker's power during a siege can be a useful tool. Lack of electricity makes those inside less comfortable, which can sometimes make them more willing to compromise. But turning off power should never be done without weighing the pros and cons. This technique is also much less effective in a situation such as Waco, when all the subject wants is for us to go away. Moreover, the Davidians already lived a very spartan lifestyle; Koresh's quarters were the only part of the compound that had electricity anyway. While I didn't know for certain, I suspected that Jamar's actions came as a result of pressure from Rogers. 
I imagined Dick asking how in the hell we could send milk to these guys without getting people out, particularly after Koresh had jerked us around. I went to Jamar and expressed my grave concern that we were working at cross-purposes and that turning off the power was going to negate the progress we'd just made. He listened to my views but said that he saw no inconsistency between what we were saying and doing. I couldn't for the life of me figure out how he couldn't see the inconsistency there, but I knew he wouldn't change his mind. To be fair, Jamar was under a great deal of pressure by this point, with a long and expensive siege going on, and he had the impression that Koresh was manipulating the FBI and that we needed to be more confrontational. But the result of this was that he was choosing to support both my recommendations and those of Dick Rogers, which would continue to put us at cross-purposes. I renewed my efforts to convince him that the more we tried to bully Koresh, the more he'd dig his heels in. Seven hours after the power had been turned off, I was able to convince Jamar to turn it back on, just in time for the Davidians to watch the regularly scheduled 10:30 a.m. press conference with SAC Bob Ricks from the Oklahoma City FBI office. Of all the various FBI commanders who came to Waco, Bob took the most time to visit with the negotiation team and listen to the conversations we conducted with those inside the compound. He supported our efforts. We gave him talking points for these encounters with the media, and he was very good about getting them across in a natural way. Mostly, the idea was to take the main themes we had discussed with Koresh and Schneider on the phone, and to use television and radio broadcasts to drive these home to all the Davidians in the compound. Throughout these days, we kept pushing for the release of more kids. Of the twenty-one released so far, the last had come out on March 5. 
On March 7, after we continued to press for more releases, Koresh had finally snapped at us, saying, "Hey! You don't understand. The rest of the kids in here are _my children_ —they're not coming out!" We found this angry declaration worrisome, to say the least; we knew we had a major problem on our hands. The innocent children were always our primary concern; their parents had made their own choice to follow Koresh. I now put in motion an idea that we had gotten from the Koresh videotape. I sent a team of negotiators to the Child Protective Services home where all of the released Davidian children were being kept. We had worked with Texas authorities to ensure that the children would be kept there in Waco awaiting their parents' surrender. We made clear to the parents that, contrary to their instructions, their children were not being sent off to live with relatives. Instead, they were waiting here for the parents to come out. I hoped that this fact would weigh on their minds and lead them to conclude that they had to go on living for their children rather than throw their lives away defending Koresh. We made a short videotape of all of the Davidian children in the home where they were being kept. We showed them playing, relaxing, and clearly being well cared for. We made arrangements to have the tape and still photos of the kids delivered on March 9 at 2:04 p.m. One detail we had not noticed was a young boy named Bryan Schroeder sitting on the floor looking forlorn. His mother, Kathy Schroeder, was one of the most strident and angry women in the compound. She and her second husband, Michael, had lived there with three of their children: two boys from Kathy's prior marriage and their son Bryan. Michael was killed on the day of the initial shootout with the ATF, and Kathy was understandably bitter. All three children had left the compound, and Kathy's first husband, who had joint custody, came immediately to Waco to claim the two older boys. 
With the two older half brothers removed, this left young Bryan alone in the home with the other Davidian children. As the parents inside the compound watched the video, Kathy noticed that Bryan looked sad and lonely, and she became concerned. At 10:25 on the evening of March 9, Koresh sent out another tape in response to ours. This tape showed additional Davidian families who were living in the compound, further helping us to identify individuals and better understand the relationships at work. This seemed to be the kind of positive exchange we were trying to promote. The next evening, the power was shut off again, then turned back on in time for the next nightly press conference. In my view, turning the power off and on only served to demonstrate that we were trying to aggravate those inside, which is not helpful. Despite these less than ideal conditions, in an attempt to build on our earlier efforts the negotiation team filmed a second video of the Davidian children and sent that in as well on March 11, just after 1:00 p.m. While we were attempting to build on the rapport we had established, the forward command continued on its separate and contradictory course. They requested additional Bradleys and some M1 Abrams tanks, the biggest and most imposing in the U.S. arsenal. These vehicles arrived at 9:30 that evening, after which Jamar made a rare visit to the negotiation room. Standing in front of a schematic of the interior of the compound, he enthusiastically cited statistics on the powerful armament: weapons capability, fuel capacity, engine power, weight, and size. Then, placing his finger on the map of the compound, he pointed out how an M1 was powerful enough to drive from one end of the long compound all the way through and out the other side without stopping. He seemed excited by the possibility. The negotiators in the room were speechless. Surely he wasn't serious. Had he forgotten about the women and children inside? 
Having dealt with SAC Jamar repeatedly, I was somewhat accustomed to this type of bravado, but I could see the shock on the faces of the other negotiators. Jamar soon left the room, leaving a stunned group shaking our heads. For the first time, they understood what I had been dealing with. Earlier on Thursday, March 11, Kathy Schroeder had called the negotiation team to complain about her son Bryan being left in the Child Protective Services home without his older half brothers. She said that she could see on the videotape we had sent in earlier that the boy was clearly upset. John Dolan, now the primary negotiator, patiently explained the legal issues that had forced us to turn over the two older boys to Kathy's ex-husband. Still, Kathy continued to rail at us. As John listened to her, I wrote a note on a three-by-five index card and passed it to him. The note said: _Bryan needs a hug from his mommy_. John looked up at me, a big smile on his face, and nodded. Kathy continued to vent her anger. Then John quietly and gently said, "You know, Kathy, I think what Bryan really needs now is a hug from his mommy." Kathy's silence spoke volumes. John waited, then built on the emotion he knew she was feeling. He talked about how the young boy needed her and how she was the only one in the world who could take care of him. By the time their conversation ended, he had convinced her to come out and give Bryan that hug. Without hesitation or equivocation, Kathy told us she would exit the next day. Koresh had repeatedly said anyone was free to come out at any time, but we were not convinced that he was not pressuring the adults to do otherwise. After the call, John gave me a high five and said, "You give good note." At 10:41 the following morning, Friday, March 12, the thirteenth day of the standoff, Kathy Schroeder came out of the compound and surrendered to us. We immediately took her to a building nearby where her son Bryan was waiting for her. 
Once again, we had our video camera on hand to capture this emotional reunion of mother and son, hugging each other and crying for joy. The smile on little Bryan's face when he saw his mother brought tears to all our eyes. We then arranged for a second delivery of milk to be sent in around three that afternoon. Two hours later, we put Kathy Schroeder on the telephone to speak with Steve Schneider. By this time she had visited not only with Bryan but also with all of the other children being taken care of at the Child Protective Services home. Kathy sang the praises of those caring for the children, providing reassurance to all the parents inside that their kids were well. She also spoke of the gentle treatment she had received at our hands. Steve Schneider seemed stunned, which was just the kind of reaction we were after. By being restrained and professional, we hoped to win them over, one by one. At 6:00 p.m., a nineteen-year-old named Oliver Gyarfas decided that he, too, wanted to come out. Both he and Kathy had each been forced to sit through a lengthy "exit interview" with Koresh before they could leave. During his talk with each of them, Koresh never told them they could not leave, but he reminded them that they were surrendering to the forces of evil that had attacked the compound and persecuted them for their religious beliefs. Fortunately, their desire to come out and live was stronger than any loyalty they felt for Koresh. We had now brought out two non-elderly adults, both previously devoted Davidians. We believed that, after a certain number of such defections, even Koresh might come around, if only to salvage what he could of his leadership status. He seemed more committed to his followers' adulation than to any particular principle. I tried to explain to Jamar and the other commanders the nature of our "trickle, flow, and gush" strategy.
I told them we were aiming not at a grand resolution strategy that would bring everyone out at once, but rather at a steadily increasing attrition of individuals leaving the compound. Our hope was that each subsequent release would weaken that much more the stranglehold Koresh held over the larger group. At 9:15 p.m., we sent in a copy of the video showing Kathy's emotional reunion with young Bryan. We thought it sent a powerful message about embracing life. Two hours later, Steve Schneider told us once again that the forward command had cut off all the power. It was sad that we heard this from Schneider before we were notified by our own people. On Sunday afternoon, despite the progress we were making through negotiation, Dick Rogers ratcheted up the coercive pressure by installing high-power lights aimed at the compound. This meant that our perimeter people could see the Davidians, but not the other way around. Was this a necessary protection, or another form of harassment? We learned from our HRT liaison that on Monday morning, March 15, Jamar authorized the use of several armored combat engineering vehicles (CEVs) to clear away trash piles fifty yards to the rear of the compound. His rationale was that, conceivably, the Davidians could come out and hide behind those piles and fire on our agents on the perimeter. But the Davidians had not tried to exit the compound and had not fired on the FBI at any time. Where had this newfound concern come from? And why now? Not surprisingly, the introduction of these heavy vehicles was seen by those inside the compound as being a decidedly hostile act. More than once, to our surprise, these huge machines severed the dedicated phone line installed to communicate with those inside. This forced us to set up a loudspeaker system to send messages and to alert the Davidians that the line had been cut and that we would supply a replacement. Even with these setbacks, we were able to keep our efforts moving forward. 
The same day the CEVs showed up we had our first face-to-face meeting with the Davidians. At 4:20 in the afternoon, Byron Sage and Sheriff Jack Harwell went forward and had an open-air discussion with Steve Schneider and Wayne Martin just inside the perimeter. The primary item on the agenda was the safety of the children, and our desire to get them all out. This also allowed us to demonstrate our willingness to address their concerns about property seizure, continuing ministry in jail, preservation of the crime scene for their defense, and other issues of concern. These two were willing to talk about possibilities in a calm manner, but, unfortunately, both of them were loyal to Koresh and repeatedly made it clear that he alone made all the decisions. On Tuesday, day seventeen of the situation, another SAC, Dick Schwein from El Paso, arrived to assist SACs Jamar, Ricks, and Dick Swensen. A caricature of the gung ho type, he wore a dark blue SWAT-type uniform (the other SACs wore casual civilian clothes) complete with a web belt holding a canteen. He also seemed surprisingly cavalier and flippant about the process. Once I heard him say in passing, "No use trying to talk to these bastards. We've got to just go in there and cut their balls off." SAC Schwein contributed perhaps the single strangest element of the whole sad saga of the Waco siege: harassing the Davidians by blasting bizarre sound recordings—Tibetan chants, recorded sounds of dying rabbits (used by hunters to attract coyotes), Nancy Sinatra's "These Boots Are Made for Walking"—over loudspeakers. Schwein had picked up the idea from the U.S. Army. They had used such tapes during the Panama invasion, when they were trying to force General Manuel Noriega out of the papal nuncio's residence, where he had sought sanctuary. When I learned of this plan, again from our tactical liaison, I went immediately to Jamar and urged him not to allow this. 
I made the case that playing harassing music was not a recommended negotiation tactic, that it was not something taught by the FBI, and that it would send the wrong message to the Davidians, who were now starting to cooperate again. At best these tapes would be ineffective; at worst they would make us look foolish. But most fundamentally, what SAC Schwein failed to realize was that this technique had not succeeded in Panama, and it wasn't likely to work for us in Waco. Jamar assured me that he would speak to Schwein when he came on duty that night and make sure the tapes were not played. Feeling somewhat reassured, I staggered back to my motel just before midnight. I took a shower, then turned on the television. There on the news, covered live, was the Davidian compound, brightly illuminated, with torturous sounds blaring over speakers. I was both embarrassed for the FBI and personally enraged. I immediately called the command post and asked to speak with Jamar, but he had gone for the evening. The next morning I raised the issue with him again. Jamar was unaware that the tapes had been played. Evidently he had forgotten to speak with SAC Schwein on the matter, and he assured me it wouldn't happen again. But it did happen again—the very next night. When I confronted Jamar about this again, he shrugged off my complaint, saying that Schwein had nothing better to do on the night shift. He brushed it off as no big deal. It took several more nights before we were able to bring it to an end, and then only by going behind his back to appeal to leaders at FBI headquarters. One of the most frustrating aspects of the whole affair was that many critics of the FBI's handling of Waco have blamed the use of these audiotapes on the negotiation team. They assumed the use of these tapes was part of our negotiation strategy, when nothing could be further from the truth. 
Again, the FBI was hamstrung by a failure to appreciate—and teach its regional leaders—the skills necessary for crisis management. That was coupled with an ingrained hubris that served to foster a false sense of capability and skill where in fact it did not exist. When Steve Schneider next spoke with us, he asked in seeming disbelief what message the FBI was trying to convey to them with these sounds. Schneider said he had been working to convince more individuals to come out, but the tapes had put an end to it. Instead of being able to build on our success, we now had to dig ourselves out of yet another hole created by others. Around 6:20 p.m., on March 18, day nineteen, matters went from bad to worse. Again over my protest, SAC Jamar authorized HRT to advance with the armored vehicles and knock down and remove four fuel tanks located on the right side of the compound. He also authorized them to remove a bus parked near the building. These removals were done recklessly, with no effort to minimize damage. It seemed that the FBI was deliberately seeking to irritate the Davidians. Some of my negotiators began to speculate that this was being done to deliberately undercut the negotiation process. I went to Jamar again and reminded him that the fuel tanks had been there since the beginning of the incident and had not been seen as a problem before. I asked him why it was so critical to remove them now. His response was short, vague, and off the point—something about the Davidians being able to use the fuel to blow up our vehicles. Yet again, I suspected this idea had originated with Dick Rogers. Even though Koresh was angry, he continued to take our calls. Henry Garcia, John Dolan, and the other negotiators continued to float the idea that if Koresh came out, he would be able to continue to meet with his followers while in jail awaiting trial. 
We reminded him that he might be found innocent in court on the grounds of self-defense, an idea we ourselves didn't believe but hoped he might. Surprisingly, this possibility seemed to intrigue him. We drafted a letter signed by Sheriff Harwell and SAC Jamar, which verified that Koresh would be allowed to meet regularly with his followers in jail while awaiting trial. We also sent in copies of national magazines that Koresh wanted to see, each with his photograph on the cover. Again his ego was coming to the fore, and again we tried to use this. We suggested that if he killed himself, he would receive only brief coverage, whereas if he was going through a trial, he would be in the media constantly. We continued to pursue our negotiation strategy and once more things began to turn our way. At 8:00 p.m. on March 19, the twentieth day of the siege, Brad Branch and Kevin Whitecliff, both in their thirties, came out, bringing our total to twenty-seven. We had yet to achieve the tipping point that might convince Koresh to get out in front of where his followers were heading, but we were definitely getting back on track. Two days later, seven more adults came out: Victorine Hollingsworth, Anetta Richards, Rita Riddle, Gladys Ottman, Sheila Martin, Ofelia Santoya, and James Lawter. These people had simply grown weary of the standoff and wanted to leave, further evidence that the "trickle, flow, gush" approach was working. When I reported to Jamar, he acknowledged this achievement but made clear that he wanted everybody out now. He apparently didn't value our incremental success. But once again, it was as if the command was purposely derailing our momentum. Three hours later armored CEVs were sent out again, this time to remove various items from the no-man's-land between the HRT perimeter and the compound. One such item was a beautiful, completely restored red Chevy Ranchero.
In case Koresh wasn't getting the point as he watched from the compound, the CEV crushed the car flat as a pancake before dragging it off. To me this was the purest manifestation to date of the HRT's frustration, because it made absolutely no sense. I couldn't believe they had done this when nine individuals had come out over the preceding three days. Were they blind to this fact? Once again I made the case to Jamar that positive behavior—the release of individuals—needed to be met with positive reinforcement, not humiliating punishment. This is one of the most basic tenets of psychology going back to Pavlov. If you want to train your dog to fetch a newspaper, you don't kick the dog when it brings you the paper. We had just kicked the dog for doing what we wanted. Despite this, at 10:12 a.m. on Tuesday, March 23, Livingstone Fagan came out, bringing the total to thirty-five. Two days later, on March 25, more clearing operations took place. These unnecessary exercises in pique—crushing a car and needlessly destroying other property—convinced me that our opportunity for meaningful negotiations had passed. Koresh quickly confirmed my intuition. Clearly angry, he called the negotiation team and stated flatly, "No one else will be coming out." Jamar and Rogers's actions had finally put us negotiators in a hole so deep that we couldn't dig our way out. Steve Schneider got on the phone and pleaded with us to explain why things had suddenly turned so ugly when we had been working so well together. We had no good answer for him. I had the same question for SAC Jamar. He looked at me with fire in his eyes and said that not enough people were coming out. We needed to punish Koresh for not moving fast enough. But I think the real story was that, with the FBI seemingly helpless to compel the Davidians to surrender, he was feeling the heat.
The entire nation was watching, and the FBI was spending about $128,000 a day, a rate of expenditure that would add up to more than $5 million before all was said and done. This was a serious concern, but not as serious as the lives of dozens and dozens of men, women, and innocent children. Venting my frustrations more strongly than before, I told him that I didn't think we would get anyone else out after these recent actions. He appeared unconcerned. I realized then that he had already determined what he was going to do. I met with my team and told them that we were on a crashing airplane. We could parachute to safety or we could try to control the descent and minimize destruction on the ground. Despite their anger and disappointment, and despite the bad decisions coming down from our commanders, the entire negotiation team felt we needed to continue our efforts. That night I received a call in my motel room from Rob Grace, my boss at Quantico. He thanked me for my work on the case but said it was time for me to step down as negotiation coordinator. Negotiators usually stayed on for three weeks, and I was well into my fourth. Only Byron Sage had been on the scene longer than me. I have to admit that I was relieved, but I was also concerned about who would take over and how they would manage. Despite my many disagreements with SAC Jamar, I believe he is an honorable man who did what he thought was best. This could also be said of Dick Rogers, but he consistently failed to recognize the progress we were making. His aggressive approach continually undercut negotiation progress. It was his attitude that infected HRT operators at the scene, SAC Jamar, and some leaders back at FBI headquarters. (Later I learned that Rogers had complained that I was personally impeding HRT's efforts to take a more aggressive approach with the Davidians to resolve the situation sooner. That was certainly true enough.) 
Rob told me that a high-level official at FBI headquarters wanted Clint Van Zandt, a former member of our unit, to replace me. I expressed deep concerns about this and recommended other negotiators I felt were better suited for the job. My main concern was that Van Zandt had a history of not being a team player. I also knew him to be a vocal born-again Christian, which is fine under normal conditions, but it might present issues when dealing with the self-deluded "Lamb of God." I worried that Van Zandt would try to convince Koresh to surrender by presenting his own competing interpretation of biblical prophecy. None of the thirty-five individuals released during the negotiation process so far had come out because of anything having to do with theology, and so I felt that attacking the group's beliefs was a dangerous way to proceed. Given Koresh's love of religious debate, such talk was far more likely to draw out the negotiations than persuade him to abandon his stand. Van Zandt was approved nonetheless, and when he arrived at the scene, accompanied by Rob Grace, the three of us met to formalize the handover of leadership. I expressed my belief that attempting to engage Koresh on religious issues was a dead end. Van Zandt assured both Rob and me that he would not try to inject his own beliefs into the negotiation process. At six in the morning on March 25, Van Zandt took over. No further Davidians would come out. I had been at Ranch Apocalypse for twenty-six days, and I left exhausted, frustrated, and emotionally drained, but there was little time to dwell on any of that. Before the incident began, I had scheduled a trip to Amman, Jordan, for a negotiation training mission. I had a few days at home with my family before I had to leave again for the Middle East. After I handed over the negotiations to Van Zandt, the situation in Waco deteriorated.
Despite his promises, Van Zandt did in fact spend many hours on the phone trying to convince Koresh that his interpretations of the Bible were wrong. Various negotiators on the team later told me of their frustrations with this nightly religious debate, which only served to keep Koresh awake all night, then sleeping all day. I also learned that Van Zandt did not get along with SAC Jamar, who cut him out of the decision-making process. Byron Sage became the de facto team leader and through the remainder of the incident played the key negotiation leadership role in trying to save the lives of those who remained inside the compound. Part of this effort was allowing attorneys Dick DeGuerin and Jack Zimmerman to speak on the phone and later go inside the compound to meet with Koresh. Their objective was to convince him that he had a valid legal defense against the charges that would be brought. Allowing defense attorneys to walk into an active crime scene did not sit well with the tactical team. When Sage accompanied the attorneys forward he noticed one of the Porta-Johns on which the words _Sage is a Davidian_ had been scrawled into the accumulated dust, presumably by an angry tactical team member—a sign of continuing discontent and misunderstanding. But the attorneys' forays seemed to offer some hope. Koresh told them he would surrender as soon as he wrote down his unique interpretation of the seven seals described in the book of Revelation. The lawyers appealed for time to allow Koresh to undertake and complete that effort, but after a number of days passed, it became increasingly clear that Koresh was again stalling. Days later, Steve Schneider confirmed that Koresh had not even started to write. For FBI decision makers, this was the ultimate confirmation that Koresh had no intention of coming out peacefully. My own view is that Koresh was still ambivalent. Part of him wanted to live, and part of him was attracted to martyrdom. 
Despite his attorneys' efforts to convince him otherwise, he must have known that he was unlikely to avoid the death penalty for killing the ATF agents. This knowledge may have set the stage for the mass suicide he seemed to be planning. Consider that it was our formidable task to try to convince Koresh and his devoted followers to lay down their weapons and come out to face four counts of first-degree murder in the state that leads the nation in capital punishment. The most appalling aspect of Koresh's narcissism and megalomania was that he seemed to have no concern for the innocent people who would die with him—he appeared to see nothing but his own personal drama. Amid growing frustration, an FBI delegation flew to Washington to brief newly appointed Attorney General Janet Reno, but in fact it was more a sales pitch for one course of action than a complete presentation of all the information. Jamar brought Dick Rogers from HRT but no one from the negotiation team. The delegation expressed their legitimate concerns that the now significantly deteriorating sanitary conditions within the compound were endangering the lives of the children inside. They also made much of the suspicion that Koresh was sexually abusing underage girls in the compound. But even though this was alleged in past reports, and later confirmed by witnesses, we had no evidence that this was currently ongoing. And, if it was ongoing, why had it not been an issue over the preceding fifty days of the siege? Having provided a very one-sided picture of Waco as a crisis in need of immediate tactical intervention, Jamar requested authorization to use tear gas as a way to drive the Davidians out. Persuaded that children were indeed very much at risk, Attorney General Reno approved. Just before 6:00 a.m. on April 19 the Davidians awoke to winds gusting at sixty miles per hour and a calm message from Byron Sage on the telephone. 
He told Schneider that they were about to be subjected to nonlethal tear gas. It wasn't an assault, Sage told him, but everyone was being ordered to exit the compound immediately. A minute later the Davidians threw the field telephone we had installed for them out the front door. There seemed to be no further need to talk. Sage then began to broadcast his appeals for surrender over the speaker system. I was now back from Jordan and at FBI headquarters watching as the armored CEVs began pumping in the gas. Shortly after, those on the perimeter began to hear the ping of ricocheting bullets around them. The Davidians had begun to fire at them from inside the compound for the first time since the shootout with the ATF fifty-one days prior. No one was coming out. This was followed by a series of assumptions and decisions that would quickly bring the crisis to a head. Rogers speculated that the women and children were being physically blocked from leaving. And so Jamar ordered the CEVs to begin smashing into the compound's walls, opening up holes large enough that those who wanted to leave could do so. Still no one came out. As I watched, I wondered how the Davidians could see this as anything other than an assault. How on earth could mothers with children be expected to rush to safety toward armored vehicles when those same vehicles were punching holes into their home? An argument for inserting tear gas and letting it slowly do its work could perhaps be made; however, smashing holes in the compound constituted a dramatic escalation from the approved plan. At 12:13 that afternoon, the FBI observed a curl of smoke emerge from the southwest corner of the building and soon more smoke and then flames. Hidden-microphone recordings, reviewed after the incident but not monitored live, picked up the voice of Schneider ordering a conflagration, and an HRT observer testified that he saw a Davidian pouring gas on piles of straw and lighting them. 
Stoked by the high winds, the fire quickly engulfed the compound. Only nine of the remaining Davidians would make it out of the compound; the others were back in the center. Seven of the nine who came out that day had accelerants (fuel) on their clothing (sleeves and pant legs). One woman actually tried to go back into the burning compound but was tackled and brought to safety by a heroic HRT operator, Jim McGee. The crime scene examination that followed showed that most of the bodies were located in a central area where Koresh had assembled his followers to await their fate. The autopsy report suggested that some of the young children had been killed, presumably by their parents, to spare them the pain of burning to death. Koresh's body was found next to Steve Schneider's. Koresh had a bullet wound to the brain. Schneider had a bullet wound to the upper palate inside his mouth. It appeared as though Koresh had ordered Schneider to shoot him, after which Schneider killed himself. In all, seventy-five individuals died; an independent investigation would verify that the Davidians had started the fires that killed them. As I watched the television pictures of the compound going up in flames, I felt sick to the pit of my stomach. I was as angry as I have ever been in my life. How could this have ended so badly? I was mostly angry at Koresh and the senseless waste of life he had ordered, but I was also mad that the FBI had not handled this as well as I knew we could have. I'm certain that with a little more patience and finesse we could have saved many more lives. I stood and walked out of FBI headquarters without saying a word to anyone. I didn't ask permission to leave; I just walked out in disgust and drove home. It was the saddest and most painful day of my career. That day and into the night I called every individual on the negotiation team I could reach to assure them that what happened was not their fault and not their failure. 
I told them how proud of them I was and that their efforts had saved thirty-five people who otherwise would have perished. In fact, I'm as proud of the work of this team as I am of anything else in my entire career. Waco was for the FBI a self-inflicted wound that would take years to heal. It caused the public to doubt the organization as never before, and once a reputation is tarnished, it's extremely difficult to regain the public's confidence. Some good would eventually come from it: several official inquiries and congressional hearings made clear that the negotiation and tactical teams had been at cross-purposes, and those sitting in judgment came to appreciate that the negotiation team had been on the right track and that Rogers and Jamar had got it wrong. Neither man was dismissed; however, Waco would prove to be the effective end of both men's career advancement. At the time of this writing, the FBI has not managed a major siege operation in over a decade. Few if any current top leaders in the FBI have even been present during a significant siege incident, and none has commanded one. It is my hope and desire that they will learn much by reading the account of what went wrong at Waco. If I anger some former colleagues with my candor and my effort to assist in this process, then so be it. The future of the FBI, its standing with the American people, and the maintenance of its hard-fought and well-deserved reputation cannot afford anything less than excellence in these matters.

# CHAPTER EIGHT

# **PICKING UP THE PIECES**

_We seek the truth, and will endure the consequences_.

—CHARLES SEYMOUR

The morning after the fire at Ranch Apocalypse, I was sent to the Southern Ohio Correctional Facility in Lucasville, where an inmate uprising had been going on for a little more than a week. Ordinarily, I was always ready to say, "Put me in, coach," but at this point, utterly exhausted and emotionally spent, I was hoping the incident would be over before I got there.
Then again, I also needed to get Waco off my mind, and maybe this situation would give us a chance to get it right. Lucasville was a complex situation, though, with what in fact were three competing hostage situations taking place simultaneously. I flew to Columbus, Ohio, where a local FBI agent picked me up and drove me to the prison, just north of the Kentucky border. After a two-hour trip through farmland, we saw the light gray buildings spread out on the edge of town. With its simple lines and no structure more than two stories, the prison could have been a typical American high school. Except, of course, for the double perimeter of ten-foot-high fences topped with concertina wire. We pulled up to the administration building, which, again, had a driveway that looked suitable for parents to drop off and pick up their kids. Off to my right, just outside the fencing, I saw the twisted carcass of a state police helicopter that had crashed in the early days of the standoff—engine trouble. Fortunately, no one had been killed. I got out and walked inside, where the wide hallways and linoleum floor extended the high school look and feel. But instead of football banners, the wall outside the warden's office displayed a photograph of bulky inmates working out with weights. The caption said, _What are you doing to stay in shape?_ My driver introduced me to FBI Cincinnati Field Office Assistant Special Agent in Charge Paul Mallett, the senior FBI official on the scene. In turn, he introduced me to the Lucasville prison warden, Arthur Tate, who had been an FBI agent earlier in his career and was very receptive to having our assistance and advice. Through our conversation, Art struck me as the kind of thoughtful leader you want managing a crisis. He told me that he very much wanted a negotiated resolution if at all possible. 
He wasn't just interested in punishing the inmates for what they had done; his focus and attention were on doing what was best to secure the safe release of his people and restore order to the facility. Also present was profiler Larry Ankrom, an FBI colleague and good friend with whom I had worked at WFO and Quantico. Larry, who was from Ohio, had been in the area visiting family when prison officials asked the Bureau to supply them with a profiler. After the movie _Silence of the Lambs_ came out, local authorities often asked for a profiler when what they really needed was a negotiator. Larry was an excellent profiler, but he also knew his limitations in the negotiation arena, so he had suggested that they send for me. On the muted television set in the warden's office, left on to monitor media reporting of the situation, I faced the rebuke of a seemingly endless loop of footage from the Waco fire the previous day. Mallett and Tate both asked me about it, but I tried to get past their questions as quickly as possible. I told them that it had been the most difficult situation I had ever worked. I also said that it was a great tragedy that I wanted to help avoid here. These men seemed both open and receptive to advice, which made them polar opposites from Jamar and Rogers. I couldn't help wondering how the standoff with the Davidians might have turned out differently if men like these had been in charge. The more immediate question was whether or not these men could maintain their demonstrated self-control long enough to peacefully resolve their own very complex situation. The crisis had begun on Sunday morning, April 11, when inmates staged a fistfight in the prison's exercise area, an expansive series of playing fields. Inmates have endless hours to study how correctional officers respond to flare-ups, and they know that the guards in the yard don't carry guns. Typically, prisoners stage a scuffle, which forces the guards to converge. 
At that point the shanks (homemade knives) come out. The inmates overpower a few guards, then use their captives as bargaining chips to gain control over others. Seven hundred inmates, roughly half the prison population, had been out in the recreation area when the first hostage taking occurred. Three hundred twenty-one of the inmates on the scene wanted no part of what was happening and retreated to the other side of the field, where prison authorities over loudspeakers instructed them to remain. The others went back into L Block and barricaded themselves inside with a total of eight correctional officers held as hostages. The warden explained that the initial uprising, which occurred on Easter Sunday, was driven by several of the prison's black Muslims, who objected on religious grounds to a prison requirement that they be inoculated against tuberculosis. But this was merely the fuel tossed on a preexisting tinderbox of grievances. After the firestorm broke out, other factions within the prison population took the opportunity to forward their own agendas. The two other principal groups in play were the Gangster Disciples, a black gang with no pretense of religious motivation, and the all-white Aryan Brotherhood. Given the dynamics within the prison, it appeared that each group felt the need to take their own hostages to, as they saw it, guarantee themselves a voice in any resolution that emerged. One faction barricaded themselves in the gymnasium, another in the cafeteria, the third in a classroom. At dawn on April 12, prison officials cut off water and electricity. Later that day, correctional officers calling from the negotiation center in the main administration building contacted each of the three different inmate groups by telephone and negotiated the transfer of the bodies of the six prisoners known to have been killed during the initial riot. 
Correctional officers also moved into the yard and escorted away the 321 prisoners who had remained outside, then set about housing them in other units. My first impression from that initial briefing was that Lucasville had a long history of overcrowding, as well as violence between inmates and staff and between the convicts themselves. The warden told me that negotiators had already heard a litany of grievances, including the desire to stop forced integration, the need to hire more black guards, the need to remove certain white supervisors, access to the media, relaxation of time limits on activities, and increasing recreational and educational activities. "Those don't strike me as unreasonable demands," I said. "Which may improve the odds that we can negotiate our way to a resolution." On the positive side, the warden and his colleagues told me, the white prisoners had released correctional officer Darrold Clark on the evening of April 15. In exchange, prison officials had allowed a live broadcast on local radio by an Aryan Nations inmate. The next day, the Muslims requested a similar exchange. They released correctional officer James Demons, after which they were allowed to make a broadcast to air their grievances. "We were pretty optimistic," the warden said. "But then, later that day, we found out that one of our guys, correctional officer Robert Vallandingham, had been murdered. We think an inmate just had it in for him from before the riot." There had been no demands made before his execution, so I suspected that was correct. Then Warden Tate looked at me. "We're getting frustrated. If we thought the murder of Officer Vallandingham was the shape of things to come, we'd be ready to move in. It's important that we begin to make some progress on getting the hostages released. I'd like you to meet with our negotiation team, absorb what they're doing, and then make some recommendations. I'd like your help figuring out how we get ourselves out of this mess." 
The negotiation room was a large conference area just down the hall from the warden's office. Glancing out the window, I could see L Block, a nondescript slab of concrete, some two hundred yards away. The team consisted of eight Ohio correctional officer negotiators brought in from other facilities. All had received negotiation training but none had yet had any actual experience. They were being assisted by Dave Michael, the negotiator from the Dayton Police Department who had trained them. They had set up a dedicated telephone line for communicating with the inmates. We went around the table and introduced ourselves. Michael was the only one who had ever negotiated an actual hostage incident, but they all seemed focused on the mission at hand and prepared to do what they needed to do. Michael himself seemed a bit aloof at first. I sensed that he viewed my arrival as threatening to his status as the most experienced negotiator on the scene, but once I established that I wasn't there to take over, he proved to be a good team player. I quietly listened as the team presented me with a comprehensive briefing on what they had been doing and how they saw the situation they faced. When they were done, I told the team that as I saw it, our first task was to help these inmates figure out and clearly articulate what they wanted. It's hard enough to negotiate with one person when he doesn't know what he wants. At Lucasville, we faced three opposing factions that did not communicate well, had different agendas, and refused to cooperate with one another. In addition, no one group wanted to give up their bargaining chips—the correctional officers they held hostage. "You guys know prisons," I started in. "But my experience with prison incidents tells me that inmates sometimes need several days to vent their emotions, exercise their frustrations, and generally act out. Perhaps they're finally getting to a point where they can be channeled toward a productive dialogue." 
The correctional officers nodded in agreement. "My hope is that we're nearing the time when they can settle down and make some better decisions. Maybe some cooler heads can start to step up." "We need to find those cooler heads," Michael added. "And find a way to encourage them along," I added. The challenge I put on the table was this: how could we help the inmates get organized so they could figure out what they were after? Only when realistic concerns were on the table could we identify underlying needs and begin to structure an effective negotiation approach and then a reasonable resolution strategy. We knew we had to help them assemble a leadership team that we could negotiate with. We needed to focus on the most sensible of the inmates the team had spoken with so far, and to find ways to promote these individuals. Of course, this was easier said than done. The primary negotiator was Dirk Prise from the Ohio Department of Corrections. His major challenge was going to be fielding calls from each of the three major factions. "I've got individual inmates calling up at random to rant and rave," he said. " 'I want my charges dropped.' 'I want to see my girlfriend.' 'I want to talk to Jesse Jackson.' " "What do you tell 'em?" I asked. "Mostly I say, 'I'm not authorized to do that.' " I smiled at him. Dirk seemed far more affable and far less inflexible and authoritarian than some of the correctional officers I had known in the past. I felt he would do very well as the primary negotiator. "Step one, I think, is to get beyond that. We need to give the inmates a sense that they've been heard. That will reinforce the idea that they can gain some concessions." We began making a list of action points. There was bad blood between the inmates and the authorities. To overcome long-standing mistrust, we needed to introduce an outside intermediary who could facilitate discussions and serve as a neutral arbitrator. To begin having serious talks, we had to impose some structure. 
This meant encouraging each faction to list their concerns and have one individual as their representative to meet with us. After Tate and Mallett agreed with our assessment, we sought an agreement to meet face-to-face with inmate leaders away from the larger prison population, so that they would feel no pressure to posture or perform. We proposed setting up two tables facing each other, one for the inmates and one for the authorities, on opposite sides of the inner perimeter fence. It took a while, but Dirk got the inmates to agree, in part by allowing them to feel that the format was their idea. On April 21, the eleventh day of the incident, at 10:40 a.m., the selected inmate leaders, one from each of the factions, came out and sat down opposite officials of the FBI and Ohio State Police. Also sitting outside the fence was attorney Niki Schwartz, a well-known criminal defense attorney and former head of the Ohio chapter of the American Civil Liberties Union, who was serving as the neutral intermediary. After a few minutes of preliminary discussion, the inmates agreed to having Department of Correction representatives, including Dirk Prise, come to the table to join in on the talks. Larry and I watched with binoculars through a window in a hallway near the negotiation room, eager to observe the body language. Everyone seemed calm and respectful—no grandstanding or obvious signs of anger. Maybe this could be the breakthrough we needed. The lengthy discussion produced a cogent list of twenty-one demands, which Dirk carefully copied down for the record:

1. The prison will follow all administrative rules of the Ohio Department of Rehabilitation and Correction.
2. Inmate discipline will be administered fairly and without bias.
3. Complete medical attention will be given to the prisoners.
4. The inmate surrender will be witnessed by a religious leader.
5. The prison's unit management system (a policy to hear inmate complaints) will be reviewed.
6. The prison will review the _White v. Morris_ federal case that permits prison cell integration.
7. All high-security inmates will be transferred out of L Block, as has been done in Cellblock K.
8. The procedure for early release will be reviewed and changed if warranted.
9. An attempt will be made to reduce overcrowding.
10. Policies involving inappropriate supervision will be rigidly enforced.
11. Medical staff will be reviewed.
12. Plans to install a new phone system will be speeded up.
13. Work opportunities will be evaluated and improved.
14. There will be no retaliation against rebelling inmates.
15. Policies concerning mail and visitor privileges will be reviewed.
16. Prompt transfer will be conducted for those eligible.
17. An attempt will be made to improve communications between inmates and officials.
18. Commissary prices will be reviewed.
19. The Ohio Department of Health will be contacted about tuberculosis testing.
20. The FBI will monitor prisoners' processing to ensure that their civil rights are upheld.
21. Prisoners' requests to be transferred out of state will be taken seriously if those inmates can prove an Ohio prison cannot provide a safe environment for them.

I was pleased that the inmates were now focused on reasonable and obtainable objectives, but this list of demands wasn't going to be an easy sell. The prison tactical teams were not in a conciliatory mood, especially after the murder of their colleague. Warden Tate had his doubts, too, until I pointed out to him the way each point was qualified with phrases such as "an attempt will be made," "will be reviewed," "will speed up," "changed if warranted," and "will be taken seriously." It was clear to me that the inmates wanted to end the siege and that we were hardly giving away the store or making promises that could not reasonably be kept. With only slight modifications, the authorities agreed to these demands, and Tate signed a document to that effect.
When the inmates later asked if the agreement was enforceable, Schwartz told them quite honestly that it had no legal foundation. He quickly added, however, that based on his conversations with Tate, he was convinced that every effort would be made to honor the agreement and that each and every issue would be given full examination by the authorities. The inmates found this acceptable. That evening, as a reward meant to reinforce the inmates' good behavior, correctional officers brought out a short ration of food and left the carts about a hundred feet from the L Block entrance. As agreed, the inmates left in exchange a videotape of the hostages that could be shown to their concerned families. The next day, according to a protocol previously arranged, corrections officers delivered laundry bags to the yard for the inmates to use to carry out their personal belongings. Then at 3:55 p.m. the actual evacuation process began. We purposefully called it an evacuation, which sounded better to the inmates than surrender. The injured were the first to come out, four on stretchers and twenty-two walking. Two hours later, a group of inmates who had been identified by the others as predatory and subsequently isolated came out. After this, the rest came out in a trickle, a process that took many hours. At 10:25 that night I stood at the window of the administration building and watched as three corrections officers, Richard Buffington, Michael Hensley, and Larry Dotson, were walked out of L Block into the brightly lit prison yard. Five minutes later, Jeffrey Ratcliff and Kenneth Daniels came out. As they limped down the long hallway leading out of the facility, unshaven, their faces bruised but smiling, these released hostages were met by waves of applause from their emotional fellow officers, who lined the hallway. Twenty minutes later, 407 inmates were in custody and the surrender was complete.
L Block had been thoroughly trashed, with shredded mattresses and smashed television sets everywhere. While going through the debris, officers discovered the bodies of two additional inmates, the only remaining negative in an otherwise positive outcome. Afterward, Tate and some members of the negotiation team received a number of harsh comments from other corrections officers who were angry over what had happened to Officer Vallandingham and frustrated by the lack of retribution. But if we had learned one thing from Waco, it was the lesson about not letting our own emotions overpower our responsibilities. The negotiation team at Lucasville managed to save the lives of seven of the eight officers held. The other's death had not come as a result of failed negotiations. If those in charge had acted on the basis of their understandable anger over the death of their colleague, there would have been a great many more deaths to mourn. Their restraint paid off. I flew home from Lucasville the next day. I felt good about the outcome and my small role in helping, but I was still thoroughly drained from the Waco experience and I wanted nothing more than to spend some time with my family. Thinking about the death of Robert Vallandingham, I could only imagine the pain his wife and son were feeling. It made me appreciate all the more the fact that I was going home, and what I was going home to. At this time my kids were thirteen, eleven, and nine, and I wanted to be at their weekend soccer games, have dinner with them and Carol, see friends, and just be home to deal with everyday, normal "honey-dos." While the successful resolution at Lucasville had offset some of my frustration about Waco, the reverberations of the events in Texas were far from over. Several television documentaries would attempt to assess what had happened, thereby keeping the issue in the spotlight. Some were accurate; some were sensationalized and filled with errors. 
And like Ruby Ridge, Waco also became a huge rallying point for domestic extremist groups and the far right wing. In their comments, Republican congressmen railed against what they called the ineptitude of Janet Reno and Bill Clinton in dealing with the crisis. The Department of Justice and Treasury Department conducted separate inquiries and issued reports, mostly blaming David Koresh, and rightly so, but also questioning the FBI's aggressive and contradictory approach. ATF took the worst criticism for the ill-advised raid that sparked the standoff. The political leaders also raised questions about the appropriateness of the FBI's use of tear gas in a compound that held so many young children. Meanwhile, conspiracy theories abounded. One held that the FBI purposely set the fires to kill all inside—this despite clear evidence that the Davidians had lit the flames. It was in the context of outrage over Waco that the Ruby Ridge incident was brought back to public attention as yet another example of FBI failure. Attorney General Janet Reno publicly accepted responsibility for what had happened at Waco, an unheard-of act of honesty by a government official. Many praised her candor, but her political opponents sensed weakness and pounced. In reality, she had been brand-new to the job when she attended that FBI briefing with all the talk of children being sexually abused. I've often wondered what decisions would have resulted if a negotiation representative had been allowed to participate in the briefing on the final days of the siege to answer her questions and offer a somewhat different perspective on the existing risk to the children versus the risks of the planned operation. Byron Sage, Jeff Jamar, Dick Rogers, and I were all called to testify before the House and later the Senate during congressional hearings. 
The lengthy and difficult task of assembling data on the incident fell mostly to Byron Sage, who gathered all the negotiation tapes, recommendations, call logs, and other information for the various inquiries. The four of us shared a table with four microphones and faced our questioners, who grilled Jamar and Rogers on the decision making and the tactical operation. The criticism of their actions was so withering that there was no need for me to add my own broadsides. The format also called for us to respond to direct inquiries rather than to offer prepared statements. I used the opportunity to describe what we had accomplished on the negotiation front and articulate the strategy our team had pursued to secure the safe release of thirty-five individuals. The fact that the Bureau had failed to effectively manage the larger operation was painfully obvious to everyone. This was my first congressional hearing, and I was not at all impressed with what I saw. In the hallways during recess, Democratic congressmen sought our help in formulating questions that would defend the Bureau against Republicans, normally supportive of the FBI, who seemed willing to throw the Bureau to the wolves solely for the purpose of embarrassing President Clinton—who as far as I could tell had very little input into the events at Waco. Many congressmen, former pop star Sonny Bono among them, showed up in the room only when it was their time to talk. You could see them enter the hearing room and receive a quick briefing from a twenty-one-year-old aide before asking us their questions, often repeating the very same ones already asked numerous times before. It seemed that most were simply posturing for the cameras. Rather than a serious attempt at fact-finding, it seemed to be an opportunity for the members to score political points against each other. After our testimony, the four of us went back to FBI headquarters to confer with the Bureau's legal counsel. 
Rogers, Jamar, Sage, and I were kept waiting for a while in a conference room. Neither Rogers nor Jamar was an expressive man, but they both nodded and thanked me for what I had said—or, more accurately, not said. While I fundamentally disagreed with Rogers's approach and Jamar's management of the incident, I believed both were dedicated public servants who had tried to do what they thought was best. They knew that I could have raked them over the coals. The truth was, I felt that I should emphasize the positive story of the negotiation effort without heaping scorn on the mistakes they had made. The evidence of their misjudgments spoke clearly enough. Despite it all, I was heartened time and again to hear from a wide array of knowledgeable colleagues and experts in the crisis management and negotiation field that the negotiation team at Waco had it right, that the patient "trickle, flow, gush" strategy we pursued should have been supported and allowed to continue without the ill-advised tactical activities that led to disaster. Louis Freeh became the new FBI director after Waco, and fortunately he was a vocal supporter of the negotiation process. He and several other high-level Bureau officials thanked me for my testimony. While I was completely honest in what I testified to, I didn't sell the Bureau down the river by emphasizing the negatives. It seemed to me that FBI leadership mistakes at Waco were by now quite obvious to all concerned. All in all, there were five major investigations, the last of which was headed by former Missouri senator John Danforth. With the benefit of the passage of time and a calmer atmosphere, the Danforth Commission conducted the most comprehensive and thorough examination of the incident. Once again, the commission's findings correctly faulted many FBI management decisions while praising the undervalued and underappreciated negotiation effort. 
No FBI internal after-action meeting was ever held, so the rift that had surfaced between negotiators and tactical team operators was never adequately addressed and resolved. The lingering ill will would diminish with time, but it remained a sad situation for me. Not long after the congressional hearings I went through a difficult period emotionally, moping around as I had never before done in my life. I was forty-three years old and had been in the FBI for twenty-one years. I had enjoyed a varied career that most agents would never know. I still loved my job, but something wasn't right. I felt very sad. I also wasn't sleeping well, and I became so withdrawn that my wife became concerned and close friends began to ask me what was going on. I wasn't sure myself, so I didn't know what to say to them. Eventually I talked through my state of funk with several negotiation colleagues who had also been at Waco. I learned that a lot of the negotiators from my team were having a tough time, too. Particularly supportive among my friends were John Dolan and Jim Botting, who had been a negotiator and a team leader, respectively, at Waco, and Dr. Mike Webster, the Canadian psychologist who had helped us develop much of the theoretical basis for our crisis intervention program and who specialized in counseling law enforcement officers. We talked on the phone regularly about work, and our conversations would naturally turn to these personal issues and how I was coping. I told them that I was in a total funk—low energy and not myself. I had been an informal counselor for dozens of law enforcement negotiation friends going through difficult times after tough incidents, never thinking I would someday need the same type of help. But I did. Mike in particular was able to help me recognize and address my anger over what had happened at Waco and my frustrations over the failure of some FBI leaders to take responsibility for what had gone wrong. 
I felt like I was rowing upstream against the current of arrogance. Police agencies around the world took our negotiation training as gospel and followed our guidelines and recommendations to the letter, yet it seemed that within the FBI itself the negotiation program might be destined to remain undervalued and unappreciated. As unlikely as it seems, the thing that snapped me out of my funk was more work. Some months after Waco, my longtime partner, mentor, and friend, Fred Lanceley, decided to retire. Ruby Ridge had been especially tough on Fred. The success he and his team had achieved in negotiating an end to that tense standoff had never received appropriate recognition from the Bureau. SWAT gets the medals, but getting someone to surrender looks easy—until you have to convince a hostage taker to put down his weapons and come out. I was promoted—put in charge of the entire FBI negotiation program and given the newly created title of chief negotiator. This would give me the opportunity to grow the FBI negotiation program and advocate on behalf of the strategies I had spent more than a decade helping to develop. These new responsibilities contributed greatly to helping me move past the funk I had been in. Simply put, I was far too busy to dwell on what had happened in the past. Almost every night calls came in asking me to assist with ongoing incidents or seeking guidance. It was a rare evening when I was not on the phone for an hour or more. Carol, for her part, pleased to see me happy again, never objected to my going to work and traveling, but she hated those calls interrupting our family life. She used to say, "If you're going to be working, then go to work. If you're going to be home, then be at home." She was right, of course, but the FBI had a way of demanding all you could give and then some. Fortunately, the Waco experience also led to deeper reforms within the FBI.
The FBI is an entrenched bureaucracy that tends to be resistant to change, but after Waco, the public criticism demanded it. Like me, Bob Gleason from our unit, now called the Crisis Management Unit (CMU), had long known that management was the weak link in the FBI's crisis response. Now he was tasked to put together an advanced crisis management curriculum for FBI leadership. I was assigned to develop a companion negotiation training curriculum for all potential FBI incident commanders. I later summarized this block of instruction in an article called "Negotiation Concepts for Commanders," published in the _FBI Law Enforcement Bulletin_. It remains one of the most widely circulated and heavily used negotiation training and operational guidance tools for law enforcement today. Bob and I gave this training to every SAC in the FBI. Following a directive from Attorney General Janet Reno, I gave Leon Schenck, one of the newly assigned negotiators on my five-man team, the task of putting together the FBI Hostage Barricade Database System (HOBAS), bringing together a complete statistical summary of all such incidents based on FBI and police data we collected. Today HOBAS is the primary source of hostage and barricade data in the world and is nationally available to all law enforcement negotiation teams. Eventually, Waco spurred Director Freeh to create the Critical Incident Response Group (CIRG), housed at the FBI academy. For the first time, this placed HRT, behavioral profilers, crisis managers, crisis negotiators, and some additional specialized components under one unified command. Henceforth, CIRG would manage all major sieges with the objective of ensuring proper coordination and management of the many skilled resources the FBI could bring to bear. The FBI would no longer simply rely on the capabilities or limitations of the local Special Agent in Charge. Before these initiatives, very little high-quality training had been provided to FBI leaders. 
Too often, the Bureau assumed that because an individual had risen to a high rank within the FBI, he or she automatically knew how to manage a crisis. But few executives in the FBI, or even throughout the larger U.S. government, had the training or experience necessary to function competently in such situations. Sad to say, this remains largely true today. In addition to providing this training, the FBI negotiation program was increasingly recognized by police departments around the country as the place to get expert negotiation assistance twenty-four hours a day, seven days a week. The FBI's negotiation expertise was increasingly in demand abroad as well. Between 1990 and 1993, we deployed negotiators overseas in response to the kidnapping of American citizens more than thirty times; the number of cases would rise to over 120 by 2003. Each deployment was time-intensive and operationally challenging, not only for the negotiators deployed but also for our unit at Quantico, since we actively deployed, managed, and directed the response to these cases. In the years after Waco our hostage negotiation team established itself as a crucial component of the FBI's efforts at crisis response. While we received plaudits from Director Freeh and other senior officials, the best part of our job in those days was the feedback we received from American and foreign police departments that we'd helped in times of crisis. More often than not, we heard that our assistance had been critical to reaching a positive resolution. We knew our work was saving lives, and we got a tremendous amount of satisfaction from this. The general public, of course, still associated the FBI with Waco, and it would continue to do so until we had a chance to show in a high-profile case that we'd incorporated its lessons. This chance finally came in 1996, in an incident that would truly test the patience of everyone involved. 
# CHAPTER NINE

# **A HELL OF A SIEGE**

_Perseverance is more prevailing than violence; and many things which cannot be overcome when they are together, yield themselves up when taken little by little_. —PLUTARCH

The day was warm and clear as we stood in an open field near Jordan, Montana, about four hundred miles east of Missoula. Up here near the Canadian border, the land seemed to go on forever, and yet it still seemed insignificant compared to the endless blue above, the rationale for Montana's nickname, Big Sky Country. My colleague and I wore short-sleeved shirts beneath our bulletproof vests, but we kept glancing back at the sky as though we feared this unusual feast of good weather would quickly evaporate, bringing back the gloom we had come to expect. June in Montana was so fickle that you could experience all four seasons in a single day. The rain was even worse than the snow because any soaking typically left the dirt roads rutted like an old washboard. One FBI SWAT team agent had already been killed driving too fast on the treacherous roads, trying to get to his shift on time. Once a shoe or boot got sucked into the spring mud—called "gumbo" by the locals—usually nothing came back out but a bare foot. I'm sure anthropologists in the distant future will have a field day trying to figure out why so many single shoes and boots were found just beneath the surface of the earth near Jordan, Montana. Dwayne Fuselier, standing beside me in the low brown grass, complained about having to wear the hot and heavy Kevlar, but I insisted we keep the vests on. I didn't want negotiation students to see me in a news photo not following safety regulations. Several hundred yards behind us was an HRT SUV containing a sniper/observer team there to cover us if something went wrong.
We were not armed, as agreed to by all parties, but we couldn't help but wonder if we could really trust representatives of the Freemen, a radical militia group, to leave their weapons behind as promised. I had personally observed that Russ Landers, one of their more hotheaded and intransigent members, had worn a gun to a prior meeting at this same location, even though all had agreed to come unarmed. I found little comfort knowing that, after the fact, our own marksmen would probably take out anyone who put a bullet in me. Still, the risk was reasonable and we were well covered. In reality, we were more anxious about last-minute changes of heart than we were about potential dangers. The folks we were dealing with seemed to have great difficulty making up their minds. A vehicle emerged from around a hill and drove toward us. The car stopped about a hundred yards from the cattle guard marking the edge of the Freemen's property, where we stood waiting out in the open. Edwin Clark got out of the car and walked toward us in an unhurried manner. Forty-six years old, five foot nine, 235 pounds, he wore a baseball cap, collared shirt, jeans, and work boots. He looked like the farmer and rancher he was, out on a routine errand. Was he about to say that he had convinced the others to end the siege, or was the momentum we thought we had rolling about to stall? Among the Freemen, Clark was the voice of reason we had been counting on to bring the others around. But we had been at this for eighty-one days now, the longest siege in U.S. history. At this point, everyone was so exhausted that, quite honestly, anything could happen. The weather could not have been more different when I first arrived in Montana months earlier. I remember an extremely clear, extremely cold night in March with billions of stars; I was driving north from Billings with another FBI agent and a United States attorney who would be prosecuting the case against the Freemen. 
We were one of several groups who were quietly converging on Jordan, once described by _National Geographic_ magazine as the most remote city in the continental United States. The nearest town of any size was Miles City, eighty-three miles away. I had been in Moscow the year before, teaching agents of the Federal Security Service, the successor agency to the Soviet-era KGB. If you had told me those were the steppes of Russia outside the car window that night, I would have believed you. Our first goal that night driving up from Billings was to reach the Garfield County Fairgrounds, a mile or so outside of Jordan, without being noticed. A great deal of secrecy had gone into moving the personnel and equipment forward to implement our operation, code-named "Gray Sunset." A makeshift command post was to be set up in a couple of the barnlike cinder-block buildings located on the fairgrounds. The bulk of the FBI personnel deployed were waiting back in Billings to be called forward when needed. This included a large contingent of negotiators brought in from around the country. Tom Kubic, the Special Agent in Charge of the Salt Lake City office, was in command of the entire operation. We were joined by Robin Montgomery. A well-regarded and steady FBI leader, he was also a Marine who had won a Silver Star in Vietnam. Robin was the SAC in charge of the newly created Critical Incident Response Group and my boss back at Quantico. While the SAC of CIRG would not be the final decision maker on the scene, his influence on all strategy decisions gave him de facto veto power over anything he deemed reckless. We would link up with Assistant Special Agent in Charge Roger Nisley, who had replaced Dick Rogers as the commander of the HRT. Unlike his predecessor, he struck me as very easygoing and levelheaded, with a healthy respect for the negotiation process. It was still dark when we reached the fairgrounds. 
We pulled our vehicle into the largest of the fairground buildings and closed the door behind us to stay out of sight. It was unbelievably cold outside, but it was even colder inside. Now there was nothing to do but stamp our feet on the dirt floor and wait. Like most of the others, I had to retreat back into the car now and then to get warm. Maybe it was my Florida upbringing, but I never seemed to bring enough cold-weather gear on these operations. A separate team of FBI agents had been working for months to get close to Leroy Schweitzer and Dan Peterson, the Freemen's leader and his number one assistant. An undercover agent had managed to insert hidden microphones in the Freemen's property, to assist in monitoring their activities. While we waited at the fairgrounds outside Jordan, Schweitzer and Peterson were heading to a snow-covered hillside they had picked as the site for a new radio tower, which the Freemen hoped to use to broadcast their common-law ideology far and wide. Their driver was the man who had helped finance this project and had brought in the team of "construction workers" now standing by. He was also an undercover FBI agent. Their car reached the top of the hill and parked. Schweitzer and Peterson got out to survey the large pile of metal poles and other materials lying on the ground. At that moment the workers—in reality, members of the FBI Hostage Rescue Team—seized both men, hustled them to the ground, then disarmed and handcuffed them. The two were then placed in separate vehicles and quickly driven away. A short time later, the HRT team with the Freemen's leadership in custody arrived at the fairgrounds where we stood waiting. Federal law enforcement had definitely internalized the first and most basic lesson of Waco—the ATF's fatal mistake in not trying to arrest David Koresh when he was outside and away from his followers. Step one of our plan in dealing with the Montana Freemen was to place their leaders in federal custody. 
That goal was now accomplished. The second step was to try to reach the rest of the Freemen and get them to surrender.

Schweitzer and Peterson had been apprehended by undercover agents driving civilian cars. For the long ride to a holding cell in Billings, they needed to be transferred to a vehicle outfitted for transporting prisoners. Schweitzer seemed like a tough nut to crack, but SAC Kubic asked me to approach Peterson during that brief transfer and see if I could talk him into helping us convince the others to surrender peacefully. When it comes to trying to create any kind of dialogue with those in custody, it's often a negotiator who gets the assignment.

The cars carrying the two prisoners rolled inside the big fairgrounds building, and the two men were brought out in handcuffs. Peterson was wearing jeans, a baseball cap, and a jacket with a fleece collar. I said, "Mr. Peterson, I'm Gary Noesner and I'm with the FBI. I'd like to chat with you for a minute." He wouldn't so much as look at me. The crotch area of his pants was soaking wet. Apparently the arrest had been a very big surprise for him.

The Freemen were yet another small, loose-knit group of individuals who, like Randy Weaver, held antigovernment right-wing views that led them to believe they were sovereign and a law unto themselves. They did not recognize the authority of the U.S. government in any way. Like other antigovernment groups, they refused to pay taxes, obey laws other than their own, obtain driver's licenses, or display tags on their vehicles. Like survivalist and militia groups elsewhere, they derived a large part of their income by filing false liens against anyone they considered a nuisance or a problem, particularly public officials. Using these bogus liens, the Freemen would then draw fraudulent certified checks.
As a result, the Freemen had committed numerous acts of financial fraud, mail fraud, and wire fraud, using some of their illegitimate financial instruments to pay off IRS debts, purchase vehicles, and pay home, ranch, and farm mortgages. The financial loss to the victim entities was both real and substantial. The Freemen had also threatened a federal judge, and at one point they brandished weapons as they took over a meeting of the Jordan city council. As a result of this widespread and continuing criminal activity, a sealed federal indictment had been obtained charging various individuals with a multitude of criminal violations. Local charges were also pending when the Freemen took refuge on a group of ranches owned by members of the Clark family, a remote stretch of property about twenty-five miles west of Jordan. Local law consisted of the sheriff, his under-sheriff, and two officers of the Montana Highway Patrol—not a force large enough to take on a well-armed group of malcontents. Unfortunately, this understandable reluctance to confront the Freemen merely served to embolden them. These right-wing militiamen had begun to believe their own propaganda about not being subject to the law. After local warrants were issued against them, the Freemen publicly threatened to abduct Sheriff Charles Phipps and Garfield County prosecutor Nick Murnion, try them in common-law court for treason, and hang them. They also threatened an ABC news crew that came to interview them, and stole their expensive camera equipment at gunpoint. These acts had put the Freemen on an unavoidable collision course with the government, which is when the local authorities turned to the FBI for assistance. With memories of Waco still fresh and painful, this time the FBI was going to do everything humanly possible to avoid a violent assault. 
For the past year, the Freemen had held weekly two-day seminars in a classroom on the Clark ranch, teaching common law, how to file Uniform Commercial Code liens, and procedures for paying debts or purchasing property with bogus certified checks. A brand-new and very expensive motor home rested on the property, illegally obtained through these fraudulent financial instruments. Approximately twenty-five individuals attended each class, and at the conclusion of each training session the attendees were each given a fake certified check signed by Leroy Schweitzer. Of great concern was that Schweitzer was also identifying and trying to recruit individuals from these classes to assist with his abduction plans against the sheriff and the prosecutor. More than three hundred individuals had attended these courses, most of them farmers or ranchers who had fallen on hard times and were desperate for something that would help them keep their properties. Some had grossly mismanaged their financial affairs, borrowed far too heavily, and now faced foreclosure. Sadly, these naive individuals believed it when Schweitzer and the others said that they could rightfully ignore the federal government and live as they pleased. Many got suckered into criminal behavior and would pay the price for it later. The FBI obtained a warrant to conduct surveillance, which ultimately confirmed the continuing criminal activities of the Freemen. FBI informants and undercover agents were assigned to the Freemen classes, each making regular observations and securing information that supported the criminal indictment eventually obtained. In addition to Schweitzer and Peterson, approximately twenty-five other individuals, adults and children, had taken refuge on the Clark property. They had renamed the area "Justus Township" and posted signs to let all know that this was a sovereign enclave. The word _Justus_ had a double meaning: both "justice" and "just us." 
It was not a single property but rather five or six separate but adjoining ranches spread over a large geographical area. The individuals living on these sites often had different backgrounds, different interests, and different levels of commitment to the Freemen cause. Long before we took any action, we held a series of detailed planning meetings. Steve Romano, my deputy in the unit, helped FBI investigators and profilers from CIRG put together a comprehensive playbook, including detailed background information on each of the Freemen, individual photographs, details of past criminal activity, personality attributes, and other relevant facts about their associates and activities. It also included information about likely intermediaries and family members we could use to influence them. The next phase of our plan called for us to try to capitalize on the confusion and uncertainty that no doubt would overtake the Freemen when they realized that Schweitzer and Peterson were in custody. Our plan called for the opposite of an ATF-style frontal assault. Our intention was to call each of the five or six individual sites where the remaining Freemen were located, urging them to immediately and peacefully surrender. We hoped they would assume that arrest warrants were also about to be executed against their locations, but we wanted them to understand and appreciate that they were being given an opportunity to avoid a tactical confrontation. Waco and Ruby Ridge held out lessons not just for federal authorities but also for those who opposed that authority. If this approach was unsuccessful, and we knew it might be, we were prepared to use intermediaries to speak directly to the Freemen on our behalf. What the Freemen did not know was that we had no intention of executing arrest raids at their homes. This was to be the smarter and more thoughtful FBI, very much aware of the paradox of power. We had no tactical perimeter set up around any of these locations. 
In fact, with our reserves being held in Billings, the FBI was nowhere to be seen. With no visible way to enforce the order, we were simply asking the Freemen to give themselves up at a designated location nearby. The negotiators who called each of these locations identified themselves as FBI agents, then explained that Schweitzer and Peterson were now in custody. Sometimes those who answered the calls listened briefly, but most refused to speak with us at all, simply saying that we had no "venue" or jurisdiction over them, then hanging up. We were not surprised by their response. Even so, it's much easier to go tactical after failed negotiations than to negotiate after failed tactics. I know of at least one case in which a police marksman missed when he took a shot at a barricaded perpetrator. It was then a major challenge for the negotiators to try to convince the subject that the authorities were really there to help and didn't want to do him harm. The arrest of Schweitzer and Peterson accomplished its goal of removing a venomous influence, but the resulting leadership void presented its own problems. I'm convinced that if the ATF had removed David Koresh from the equation at Waco, Schneider ultimately would have cooperated and led everyone out. But the Freemen were a much looser group. Eventually, we would need someone we could negotiate with, someone who could influence the others. Here, as at Lucasville, part of our job was going to be creating a leadership structure. "Justus Township" consisted of 960 acres of rolling farmland in a very remote and rugged setting. It was a forty-five-minute drive from Jordan, with its sixteen streets and 450 people. There were four main houses on the properties, in addition to four small fishing cabins. We set up observation points where we could watch from a distance, but we were careful to avoid any encroachment on their land—another lesson learned both at Waco and at Ruby Ridge. 
Most people view their home, no matter how humble, as their castle. For members of groups like the Freemen, this feeling is magnified, particularly where the government is involved. FBI tactical agents and Montana Highway Patrol units, working as combined teams, set up a very loose perimeter to control who went in and out. They made a concerted effort to engage in friendly small talk with local citizens and tried to downplay the sense that a siege was under way. We allowed local ranchers to move through the roadblocks at the various strategic crossings near Justus Township, but no one was allowed onto the Clark ranch without our approval. FBI personnel wore "soft" clothing: casual work clothes rather than the ominous-looking black or military green tactical equipment usually worn by SWAT elements during a siege operation. Ralph Clark, age sixty-five, and his brother Emmett, sixty-seven, were the elders of the group, living on the property in separate homes. They had gotten caught up in the Freemen ideology and had allowed Schweitzer and others to seek refuge on their land. Our undercover FBI agent said that Emmett didn't appear to understand the Freemen ideology but had embraced it and the group as a means to save his land, which had been foreclosed. Mostly he just wanted to be left alone. In addition to the Freemen ideologues—Schweitzer, Peterson, Dale Jacobi, Rodney Skurdal, and Steve Hance—there were also Russ Landers and his wife, Dana Dudley, who were simply con artists and swindlers with a long string of charges against them. For them, the Freemen belief system provided a pretext to flout the laws they had broken. Another family, the Mangums, were also on the lam, fleeing outstanding criminal charges from another state. And of course there were the Clarks and the related Stantons, too. Eventually, I brought the entire negotiation team up from Billings and we established our negotiation operations center in one of the fairground buildings. 
Before the larger team arrived, I spent the first cold nights in that unheated barn sleeping on a cot near the phone banks our technicians had set up. We wanted to have someone available at all times of the day if any of the Freemen locations called out to us, but they never did. In time we brought in portable heaters, but they produced only enough warmth to prevent actual frostbite. When the rest of the negotiation team arrived a few days later, we moved into spartan but warmer motel rooms in Jordan. Life became more bearable when hot showers and warm meals became part of the routine.

The primary locations on the properties were the school building where the Freemen classes had been taught; the residence of the Stanton family; the four fishing cabins; Emmett's residence; Ralph's residence; Ralph's trailer, where Landers and Dudley lived; and the house of Emmett's son Edwin. Edwin's wife, Janet, a nurse at the local medical clinic in Jordan known to be uninvolved with the Freemen activities or ideology, simply went on with her normal life. We stopped her car the first day as she left the property to go to work, and we explained that we were trying to work out a peaceful resolution with the group. She was cordial but told us that her husband and the others refused to speak with us.

"I can't control these men," she said. "I don't know what to tell you. And right now I'd just like to get to my job."

We decided to permit Janet to come and go as she wished, believing that later on she might become a useful liaison between the outside world and the close circle in which the Freemen lived.

Also on that first day, Clark neighbor Jeff Loomis visited the Freemen and then agreed to talk with us. He reported essentially the same thing that Janet had, namely, that the Freemen outright refused to speak with the FBI.
On the second day, a couple of members of the right-wing Montana Militia showed up at the command post and demanded to speak with the FBI and find out what was going on with the siege. We were under no obligation to explain ourselves to them, but rather than brush aside their request, Special Agent Tom Canady, the lead investigative case agent, and I were designated to talk with them and try to defuse any potential problems they might cause. Tom and I met with the militia members at the Hell Creek Bar in Jordan that afternoon. The four of us introduced ourselves and then sat down at a booth in the back to have a cup of coffee. The two militiamen were dressed for winter ranch work in jeans and heavy fleece-lined coats. They greeted us warily, as if this could be a trap and we might suddenly cart them off to jail. With narrowed eyes and jaws jutting forth, they also seemed primed for a confrontation. I figured we had nothing to hide from these guys, so instead of playing tough or arguing, which evidently was what they'd expected, Tom and I tried to disarm them with openness and candor. They asked us why we were doing what we were doing, and we gave them detailed information about the Freemen's fraudulent and threatening activities. Tom described the outstanding federal charges that had been brought for bank fraud, embezzlement, aiding and abetting, conspiracy to impede or injure a federal officer, mailing threatening communications, mail fraud, interference with commerce by threat of violence, felony possession of firearms, possession of a firearm by a fugitive, and carrying firearms during a crime of violence. These were in addition to numerous state charges. I told them that we were trying to avoid the kind of outcome that had happened at Waco and Ruby Ridge, and that we planned to negotiate in good faith with the Freemen. 
We wanted to make sure these men understood that we weren't in Montana to stamp out freedom of expression; instead, we were there to arrest individuals who had violated the law and threatened their fellow citizens. We explained that the Freemen's actions had left us no choice. We even explained the low-key approach we were taking, aimed at a peaceful resolution. One of the militiamen asked if U.S. military personnel and tanks were being used to surround Justus Township. I knew that the Posse Comitatus Act, the law that prohibits the use of military forces to enforce civilian law, was very much a mainstay of militia ideology. I assured him that the military was not involved in any way, adding that in fact none of the authorities had set foot on the Clark ranch or other properties. I explained that only FBI and Montana Highway Patrol personnel were involved in the operation and that they were staying a safe and respectful distance away. I offered to personally drive both of them wherever they wanted to go to see for themselves. No response. For each and every additional question they raised we answered respectfully and truthfully, effectively taking the wind out of their sails. We gave them our names and numbers and requested that they call us if they had any further questions, heard any rumors, or wanted to know what we were doing. We asked them to please explain all of these things to others in their group so they would know the truth. As we left the bar, these men shook our hands much more vigorously than they had coming in, nodded, and made eye contact. They weren't saying it, but it appeared they appreciated the time we had spent with them. We never heard from the Montana Militia again. If we'd disarmed at least some of our right-wing critics, we still had the media to contend with. Local and national television and print reporters began to descend in droves. 
Most were up front in saying they had come to witness another disaster like Ruby Ridge or Waco, but FBI media coordinators worked to dispel their prejudices. We did not want inflammatory coverage, and we certainly did not want to give them another tragic story to report. Once it became clear that the Freemen were not going to speak with us, we moved on to the next phase of our plan, which was the use of third-party intermediaries. As is usually the case in hostage/barricade/suicide incidents, we had a briefing book that contained whatever background information was available on the subjects. For Justus Township, that book included overhead photos of the land and houses and of each individual, as well as family histories that included marriages, relatives, and friends. We had to be cautious in using intermediaries because the people most often in a position to help can be difficult to control. They frequently have their own agendas—a grievance, perhaps, or a desire to influence. Also, bringing them to the scene might expose them to danger. Family members sometimes criticize police for not letting them speak with loved ones during a crisis incident. In truth, the police rarely know enough about the existing relationships between the perpetrator, family members, and friends to take the risk. In one case, a distraught husband found out his wife was having an affair, took her hostage, and threatened to kill her. After many fruitless hours trying to negotiate with the man, the local police sought out potential intermediaries. An individual came forward and claimed he was the perpetrator's best friend. He told the cops he was confident that he could talk reason to his pal. The police readily agreed to allow this man to make a telephone call into the apartment to speak with the perpetrator. When the perpetrator answered the phone and heard this man's voice, he exploded in rage and fired his weapon. 
The SWAT team immediately forced its way in, only to find both him and his wife dead. As it turned out, the man who had volunteered to call in was the wife's lover. It's important to fully interview such individuals and try to find out as much as possible about the existing relationship before deciding whether or not to use them as intermediaries. With the Freemen we did not have a typical situation. We faced some self-deluded, hardheaded, and confused individuals who were potentially very dangerous but in our opinion not suicidal. Some had distorted ideological perspectives; others simply didn't want to face justice in the courts. The hard-core ideologues—Skurdal, Jacobi, Landers—believed that by avoiding talking with us they could deny FBI jurisdiction over them. The Clarks just wanted to keep their property and be left alone. Unfortunately, given their actions, this was no longer possible. Within the first five days of the incident, my negotiation team identified, interviewed, coached, and deployed fifteen intermediaries. After we briefed them on what we were after and how to handle themselves, they then made their own way onto the Justus Township property and arranged to meet with their friend or loved one. This was done in a low-key, casual way that did not arouse anyone's suspicions, at least not at first. After each visitor came out, we again met at the fairgrounds or the coffee shop to debrief them. We gained a clearer picture of what the Freemen were thinking, but otherwise, results were mixed. Sometimes the person would say, "I sat down with him for an hour and he wouldn't budge. He's dead set on hanging tough." At other times what we heard was, "I don't know... he seems a little worried. He may be open to coming out." It was my sense now that we had to attack the Freemen's intransigence on two tracks. The first was to continue to use the intermediaries, appealing to specific individuals on a one-to-one basis. 
On the second track, we would identify and use people who were not personally connected to the Freemen but whom they would view as interested in discussing their political theories. These people would target the hard-core Freemen ideologues who clung to misguided beliefs about personal sovereignty. For this we called on Karl Ohs, a Montana legislator. A staunch Republican who would go on to become lieutenant governor, Ohs was a close friend of Butch Anderson, the birth father of Val Stanton, one of the women living in Justus Township. Butch and Karl agreed to go inside to visit Val and the others, which they did repeatedly over a period of weeks. A soft-spoken, intelligent rancher, Karl once rode his horse in to see the Freemen when the weather made the road impassable to cars. On horseback he looked exactly like the famous Marlboro Man. He had a genuine desire to help resolve this incident, and he would sacrifice a great deal of his personal time and energy over the course of the standoff. Even though the Freemen continued in their refusal to meet with the FBI and periodically rejected other intermediaries during the ordeal, they always agreed to see Karl. I focused most of my personal attention on meeting with Ohs and coordinating his many visits. With his deep roots in this community, Karl was the best possible guide to the mind-set of these fiercely independent westerners, as well as a superb ambassador. I also spent much of my time with the senior FBI management team on the scene, briefing them at our twice-daily management team meetings. Here, unlike Waco, all the component leaders met frequently to keep on the same sheet of music. It made a huge difference to have their support and buy-in, rather than resistance. We also made sure that one of the negotiators attended each SWAT team shift briefing. We wanted to make sure that the tactical units knew everything we were doing and why. 
FBI director Louis Freeh's support for the negotiation process became very clear when he mandated that I should participate in all the daily teleconferences between him and on-site senior management in Jordan. So each day at the designated time, I would join the three or four Special Agents in Charge, along with HRT commander Roger Nisley, to participate in a telephone briefing for the director. We did not discuss Waco or Ruby Ridge, but those incidents were the subtext as we made every decision.

As the standoff continued and we reached the second week of the siege, a few of the midlevel managers began to voice concern that this situation was taking too long. Whatever their commitment to a survivalist mind-set, these Freemen were supremely self-reliant Montana ranchers, the kind of people who had elk meat in the freezer and vegetables put away from the summer. The whole time we were camped outside their land, they were free to hunt for game on that land. This was not a group that could be squeezed very easily for the basic elements of comfort and survival. Some within the FBI began to wonder out loud if we had overlearned the lesson of past mistakes and become gun-shy, frightened to take decisive action. The FBI had been making these kinds of raids and arrests throughout our long history, so why should this incident be any different? I listened to some of this grumbling and wondered whether those who felt this way truly understood the implications for the FBI if this incident went the way of Waco and Ruby Ridge. A time might come when we had to take action to save lives, but we weren't at that point yet.

Director Freeh soon put a stop to such complaints. During one of our conference calls for the senior managers in Montana he said, "Gary, it's important for you to know that as your director, I am in no hurry to end this incident. I want to make sure that we take whatever time necessary to negotiate this out the right way." He didn't have to refer to Waco directly.
Waco was the eight-hundred-pound gorilla in every room we entered in those days. When Director Freeh said this, I looked at all the faces around the table. My colleagues clearly understood that this was a new era in crisis management for the FBI. William Sessions, Freeh's predecessor at the Bureau, had been a very detached administrator, which had been part of the problem. Freeh was much more hands-on, decisive, and engaged. He was on the phone with us in Montana every day. The negotiators were now setting the direction and tone, and the director was squarely behind us. End of story. On April 4, the eleventh day of the siege, Karl Ohs, after several attempts, was at last able to facilitate a direct meeting with the Freemen, himself, and three additional Montana state legislators. We had no expectation that this one conversation was going to resolve all the differences, but we hoped that it would at least help focus and refine the issues, which might then help us find some common ground. A table and chairs were set up in an open area near a cattle guard at the boundary of the Clark property. The Freemen drove up the road, got out of their cars and pickups, and sat down. Then Karl and the other legislators listened as the Freemen ranted about the legal system and advanced their claim that the government had no jurisdiction over them. They said they wanted to have their rights protected in common-law courts, meaning that their cases would be decided by individuals who believed as they did. As the legislators listened, they continually urged the Freemen to address these issues in federal court and to talk directly with the FBI to resolve the standoff. They even promised to hold a legislative forum on the common-law issues. The following day, talks continued in the motor home on the property, but little if anything was accomplished. The Freemen continued to insist that the federal government had no jurisdiction over them and that they had not broken any laws. 
They insisted that their financial liens and checks were legal under common law. They were also angry that the FBI had trespassed on their property to arrest Schweitzer and Peterson, the event that had triggered the siege. In addition, the Clarks were not willing to leave their land, and the others, their guests who had sought refuge there, were not willing to leave to face other criminal charges. The only problem with our post-Waco restraint was that the Freemen knew as well as we did that the FBI could not risk another televised debacle. This meant that the threat of force, the key element used to encourage most negotiations, was effectively removed. With their wells pumping water and their freezers full of food, why not just wait it out? In our briefing materials used to coach all the intermediaries, we outlined a long list of positive police actions we had taken—or not taken, as the case might be. We had them emphasize to the Freemen that the FBI had not trespassed onto Justus Township after the arrest of Schweitzer and Peterson, and that telephone calls into the locations were respectfully kept at a minimum or discontinued so as not to irritate them. We had made it clear that anyone was free to leave anytime they wanted. We had allowed medication to be delivered for twenty-one-year-old Casey Clark, the son of Edwin and Janet Clark. A large number of relatives and friends had been freely allowed to visit and call at their discretion. At the Freemen's request, the media had been kept far away. The Freemen had been allowed to meet with Montana state legislators, and through the legislators, the Freemen had received an unbiased report on the conditions of Schweitzer, Peterson, and Richard Clark, another relative and Freeman arrested separately, away from Jordan. Furthermore, we had not set up any high-power lights or broadcast any loud noises, and no individuals in Justus Township had been harassed in any way. 
Electricity and water service had not been interrupted. No press statements critical of the Freemen had been issued by the FBI. There had been no fly-overs by FBI helicopters. When lumped together, these made a compelling case in support of our position, but like the prisoners at Lucasville, the Freemen needed time to calm down and begin to think more rationally. Justus Township neighbor Jeff Loomis became a regular contact. We had Jeff carry in a note that outlined three basic areas where we were willing to compromise:

1. We stated that we were extremely flexible on the mechanics of their surrender—the how, the when, and the procedure.

2. We were willing to make any arrangements suitable to the Freemen that would guarantee the preservation of their so-called evidence. This consisted of the voluminous common-law writings they had assembled that they felt supported their case. We promised to allow any mutually agreed-upon third party to safeguard these papers for their defense team to use in court.

3. We agreed to help facilitate the legislative forum agreed to by the Montana state legislators to address their perceived rights and legal position.

We hoped that by stipulating these commitments so explicitly, we would lend form and substance to the idea that this was the best deal they were going to get from the government. Unfortunately, the same three holdouts continued to obstruct any forward movement that the others might have accepted. Freemen Rodney Skurdal and Dale Jacobi clung to their common-law convictions and remained unwilling to consider any compromise. Russ Landers continued to object to any settlement that would end up putting him in legal jeopardy. As he continued his own shuttle diplomacy, Karl Ohs was usually accompanied by Butch Anderson, Val Stanton's birth father. We had targeted Val as someone we might persuade to leave the property.
We felt that if we could convince those who did not face charges, or serious charges, to come out, we would begin to erode the group's solidarity. Butch managed to find times to speak with Val away from the influence of the hard-core Freemen. While Karl distracted the true believers on larger issues, Butch was working on Val to come out. On the twelfth day of the siege, those efforts paid off. Val Stanton decided to take her young daughter, Mariah, and walk out, the first major blow to the Freemen since the arrest of Schweitzer and Peterson. While we never felt Val or Mariah was in great danger, we were nonetheless relieved when they reached our side of the fence line. Karl Ohs also focused his attention on Edwin Clark, who in his view was far and away the most reasonable individual among the Freemen. With this in mind, we had Ohs push the reluctant Edwin to assume a leadership role. He had the unique status of being the de facto leader of the Clark family, with special authority over the land and those living on it. At one point, Edwin Clark relented and told Ohs that he was willing to meet with us. But then the other Freemen got Clark to change his mind. Edwin was a friendly and likeable individual, but he placed too high a value on consensus. Karl Ohs was extremely disappointed, and he told me he was losing heart. "I don't know if these boys are ever going to come to their senses," he told me. But then we got a break. Tom Spillum, owner of one of the two small motels in Jordan where FBI personnel stayed, was the stepson of diehard Freeman William Stanton, who had been arrested long before the siege and was serving time in jail. Several of our negotiators had gotten to know Tom pretty well. We knew that he was in touch with his mother, Agnes, wife of the jailed Freeman, and according to Tom, she was torn about what she should do. 
With encouragement from us, Tom began a series of telephone calls and visits aimed at convincing Agnes and her son, Ebert Stanton, to come out. It was Ebert's wife, Val, who had already departed Justus Township with her daughter. On the eighteenth day of the siege, Tom at last prevailed, and Agnes and Ebert both decided to leave. In response to this second major blow to the Freemen's solidarity, the hard core of the group delivered an ultimatum: there would be no more unsupervised phone calls or other contacts with anyone outside. For the next two weeks the siege continued unchanged. We had learned that leaving agents to work long hours in remote locations with no relief in sight could allow frustration to build up, so we rotated all FBI personnel, including negotiators, in and out of Jordan. My deputy, Steve Romano, came in to relieve me as negotiation team leader. Steve knew as much about the Freemen as anyone, and he and I shared a common philosophy and approach to the negotiation process. For the remainder of the siege we would alternate two-week stints in Jordan, trying to make the negotiation operations as seamless as possible. This siege was shaping up to be one of a kind. Not only did we have a perimeter that was loosely defined, but there were no longer any telephone negotiations. All of our contact was undertaken through the various intermediaries, who were now carefully scrutinized by the Freemen. For safety reasons the intermediaries were allowed to travel in and out only during daylight hours, so other than an overnight skeleton crew, negotiation operations essentially shut down at the end of the business day. In at least this one respect, this was a very civilized operation. Each morning we would leave our rather modest motel accommodations in town and drive out to the command post at the fairgrounds, where the government had set up a big communal kitchen to take care of us. We had three meals a day dished up by Forest Service cook crews. 
The menu, designed for wilderness firefighters, was varied and good but also extremely high in calories. No one lost weight during this operation. These crews, like most everyone else in Jordan, were friendly and welcoming to the entire FBI team. They told us that they appreciated us moving against the Freemen, who'd been giving them problems for years. Around Easter the children from the local school delivered handmade drawings thanking the FBI for being there. In addition to the fairgrounds, we had a second, albeit informal, center of operations at the Hell Creek Bar, where the negotiation team and other FBI elements would gravitate most every evening after dinner. The owner, Joe Herbold, who had only recently purchased the place, was a real people person, the kind of guy who might have been a successful negotiator. The bar had a bona fide Wild West swinging saloon door and a huge carved bar that must have been installed in the 1880s. On a normal Saturday night Joe might have a dozen customers come into his establishment, some of them fairly wild and woolly. (One night I sat and talked to a guy who worked as a government coyote hunter.) Joe looked more like a guy who might work in a cubicle in Chicago or Seattle, but here he was, maintaining this frontier outpost. With the siege and all the new customers it brought in, his bar looked not only Wild West but gold rush boomtown as well. Initially, there were three groups of patrons, each maintaining a respectful separation from the others: the regular local citizens, the deployed FBI agents, and members of the news media. After a few weeks, these groups began to mingle a bit and get to know one another. Joe maintained a strict policy of no shop talk, which helped maintain peace and civility. (As I said, with a little training he might have become an excellent member of our negotiation team.) The Hell Creek was a great place to blow off steam.
One evening I joined an impromptu concert with a local rancher playing the bass, a news media reporter on piano, and me playing guitar. It was surreal, to say the least. As if to rescue the audience, a cowboy burst in the front door and loudly announced: "Two-headed calf." That was all it took. A couple of dozen bar patrons emptied out onto the dirt road where, in the back of his pickup truck, the proud cowboy displayed a stillborn calf with two distinct and separate heads. We all stood around the bed of his truck and looked on in amazement. A multitude of jokes about being "two-faced" quickly followed. The people of Jordan were hardworking, law-abiding folks, and they were embarrassed by the unwanted attention the Freemen had brought to their town. During the siege, Montana had come under further scrutiny when the FBI located the source of a series of mail bombs that had killed three people and injured twenty-three. The man they arrested, Theodore Kaczynski, also known as the Unabomber, was a mathematician who'd soured on technology and had taken to living in a primitive cabin in a remote area between Missoula and Helena. The citizens of Jordan had extra incentive to show the world that not everyone in Montana was alienated, armed, and dangerous. Local citizens started to appear on the streets wearing FBI T-shirts and hats they had bartered for with various agents. One cowboy who frequented the bar got one of my FBI negotiator's shirts in exchange for some pronghorn antlers. I'd say I negotiated the better end of the deal on that one. Then, on day thirty-four of the siege, after a long stretch of little progress, two uninvited people showed up at the fairgrounds and asked the state troopers guarding the perimeter if they could speak with the FBI. One of these was Bo Gritz, who had assisted in resolving the Ruby Ridge incident; the other was Randy Weaver himself. 
Gritz had already made a name as a spokesman for disaffected militia groups and as a liaison between them and the government. In militia circles, Weaver, of course, was a cause célèbre. We invited them in and sat them down to hear what they had to say. It turned out that these two thought they could get through to the Freemen where we could not. I had not been at Ruby Ridge, so I did not know them personally, and I wasn't at all sure of their intentions. With his bushy mustache and weight lifter's build, Gritz came across as the former Green Beret that he was—bold, brash, confident, and self-assured. I also wasn't sure that he would be able to accomplish anything with the Freemen. On the other hand, I didn't see any particular downside to his intervention, assuming we could keep his ego under control and properly channel his enthusiasm. I was also concerned that our refusal to allow him to make the effort would be misinterpreted by the right wing as a sign that we weren't really committed to ending the siege in a peaceful way. Weaver, on the other hand, was a soft-spoken man who seemed for the most part content to let Gritz do the talking. I asked him directly how he thought he could help. He looked at me with sad eyes and said, "Maybe I can convince them not to make the same stupid mistakes I made. You can't get into a shooting war with the U.S. government and win. If I had it to do over again, I'd surrender. Then maybe my wife and son would be alive today." His words hung in the air for a moment, and though I was surprised by his candor, I had no doubt he was sincere. I wondered if he was motivated in part by trying to bring meaning to the loss he had suffered. For sure, few other individuals could make as strong an argument for surrender as Randy Weaver himself. After meeting Gritz and Weaver, I went to the FBI command team to discuss whether we should avail ourselves of their offer. Everyone agreed that we should use Gritz, but Weaver was a different matter. 
I tried to convince them that he, better than anyone else, would be able to explain what they had to lose. But my colleagues were adamant; they were concerned that involving him would subject the FBI to criticism. Perhaps even more than that, they believed it inappropriate to use him. They truly held Weaver in contempt for what he had done. As a group, we called Director Freeh. He agreed with my recommendation to use Gritz but sided with the others in opposing the use of Weaver. We had Karl Ohs contact the Freemen and ask if they would be willing to meet with Bo Gritz, telling them that Gritz had come on his own and wanted to speak with them. The Freemen said they were not interested, but Gritz—who some say was the real-life model for the movie character Rambo—would not take no for an answer. We allowed him to go forward to the Freemen property line, and he was there only a short time before the Freemen came out to talk with him, then brought him inside. Over a four-day period, Gritz met with the Freemen at the schoolhouse. Afterward, we learned that they never warmed up to him as they had with Karl Ohs—to the Freemen's taste, he was just too pushy. When we debriefed him, Gritz admitted that he had made only limited progress in getting through to the men in Justus Township. He told us that he had taken it upon himself to scare some sense into them, and scare them he did. He said that if they continued to resist, the government would come in to get them in the dead of night. He told them there would be loud explosions and flashes of bright light that would disorient them. Then they would be dragged out in handcuffs through the mud and humiliated in public. While I never would have endorsed this approach, I think it may have had a positive effect, at least in planting the idea that the FBI's patience was not endless and that a tactical assault was still a very real possibility. 
After failing to barter the truce he was seeking, a frustrated Gritz appeared before the news media, where he harshly criticized the Freemen and questioned their motives. This exercise in pique undid whatever good Gritz had done, and the Freemen refused any further meetings with him. He and Randy Weaver left town in a rush and were not seen around those parts again. After the Gritz initiative, we heard that there was going to be a rally in support of the Freemen in Jordan, but only eight individuals showed up. There were more reporters covering the story than there were participants. Gritz's diatribe may well have killed off whatever outside support the Freemen had once enjoyed. The news media—particularly the print media in Montana—were extremely critical of the Freemen and their ideology, and cartoons poking fun at the group began to appear in the state papers. Family members were also unsympathetic. When we asked one to pen a letter to his brother, who was inside with his two sons, he wrote the following: _I can see why you would want to kill yourself_.... _If you must end your life, at least be clear about why, it's not about taxes or bad government or anything else. It is about the rottenness inside yourself. So go ahead and end it_. As you might imagine, we chose not to send in this letter. On day fifty-three, the Freemen requested and were allowed to meet with Charlie Duke, a right-wing member of the Colorado state senate well known in militia circles. Steve Romano and Dwayne Fuselier orchestrated and managed the encounter while I was back in Virginia. They met with Duke to brief him, after which the senator went into Justus Township. He emerged with the Freemen's authorization for him to act as their intermediary with the FBI. He even convinced them to meet face-to-face with us for the first time. Once again, our agents set up a table and chairs at the same cattle guard where the Montana legislators had met with the Freemen. 
Our objective was to listen to what the Freemen had to say and to schedule another meeting, and that was it. Some on-scene leaders were expecting more, though I felt that we should simply show our respect for them and indicate our genuine desire to help them out of this situation. But it was also essential to establish the prospects for a dialogue. Steve and Special Agent Dwayne Fuselier represented the FBI at the table. If the Freemen who came had expected to find demanding and authoritative feds, what they got instead were two very reasonable agents who showed openness and concern. Over the course of the next six days the two groups continued to meet twice a day. The problem was, we simply didn't speak the same language or, you might say, even live in the same universe. For all our cordiality, anytime they got down to real discussion of the issues they would hit a roadblock. The agents would say the Freemen had to come out and face charges; the Freemen would counter that the FBI had no authority to demand anything of them. The Freemen would not give up their insistence that they be tried by their peers in a common-law court. Steve and Dwayne suggested that the best course of action was for the Freemen to come out and tell their side of the story in an authorized court, but the Freemen weren't buying it. When I came back from Virginia, the management team was frustrated and met to discuss how to move things along. We decided that our best option was to create an illusion of impatience and mounting anger. I have argued against showing a bellicose face when it contradicts and undermines the negotiations. But this was a case in which negotiation alone was not moving us forward. The best approach is always a carefully modulated combination of earnest talk backed up by the option of tactical intervention. Now we needed to reinforce that option.
Until this point, we had allowed reporters and television crews to cluster on a hill where they could observe and film Justus Township from a safe distance, but in full view of the Freemen. We now decided to move the media away, hoping to plant the thought that something was about to happen that we wanted hidden from the media. We then brought several armored personnel carriers to the command post, spurring the media to report on their sudden arrival. The FBI actually borrowed two of these vehicles from local law enforcement, painted them black, and stenciled _FBI_ in white letters on the sides. One of these trucks was in fact inoperable, but we were the only ones who knew that. Then a team of agents went forward and constructed a new gate in a stretch of fence where there was not even a road—a fairly clear suggestion that something was up. Would the Freemen assume that we were planning to bring in more large equipment, maybe even assault vehicles? We hoped that was what they would fear. Also for the first time, we allowed an FBI helicopter to fly close enough to be seen and heard by those at Justus Township. We were careful not to fly over the property itself, but came just close enough to get their attention. The next step was to cut off power to the ranch. Some local officials had publicly criticized the FBI for not doing this sooner, but what they didn't know was that we needed the power on, especially the power going into the schoolhouse, where one of the hidden microphones had been planted prior to the siege. We had picked up useful information by listening in, but at this point we were willing to sacrifice this access in order to gain the psychological effect of the power blackout. Finally, and perhaps most important, I met with Janet Clark, the nurse who was still coming and going to her job each day, and explained that the situation had dragged on far too long. 
I told her that despite our past patience, we were not getting the cooperation we needed from her husband and the others. I told her that authorities in Washington now wanted us to resolve this matter with all deliberate speed. I never said we were going to launch an assault; I simply implied that something was going to happen soon. I was confident that Janet would pass this information on to Edwin, the husband she loved, who was inside with their son, Casey. We did all this between days sixty-six and seventy-one of the siege. The news media unwittingly did their part, repeatedly issuing reports that the FBI seemed ready to move. At this time we also began to get word through various means that many of those on the ranch were growing as weary of the siege as we were. As a result of these initiatives, on day seventy-five Edwin Clark for the first time mustered the courage to come out alone and meet with us. Through Karl Ohs, we had been pressuring him relentlessly to assume a leadership role in order to take control and preclude violence. Edwin sat down with Dwayne Fuselier and me in a motor home we'd deployed near the property. He was cordial and polite, but we could also see that he had a lot of responsibility on his broad shoulders, and he was fatigued. He voiced concern about his son, Casey, his father, Ralph, and his uncle Emmett. His father, especially, needed medical attention. Edwin appeared to me to be a normal, hardworking guy who had made some bad decisions. Those decisions had led to circumstances that now had escalated beyond his control. He had no criminal record. He also didn't seem like someone foolish enough to really fall for the nonsense being spouted by the Freemen, yet here he was. He voiced his frustrations with the intransigence and indecisiveness of the others, but he still wanted to be respectful of their beliefs. Edwin also wanted to be hospitable to the individuals who had sought refuge on his property. 
Over the next several days Dwayne and I met with Edwin at various times inside the motor home. At each meeting it became increasingly clear that Edwin was a likeable, down-to-earth guy. As was common among men in this region, he would engage in small talk with us before moving into substantive discussions. He told us about his love for hunting and about his collection of dinosaur bones, which he'd found on his property. I liked him, and this bolstered my desire to help make sure he and his family were not hurt in this siege. At one such meeting he told us that he wished to come out, but that he needed to speak to Leroy Schweitzer to get his advice on how to proceed. He also indicated a desire to have the help of the Cause Foundation, a right-wing equivalent of the ACLU, known for defending the legal rights of right-wing extremists in trouble with the law. Earlier, the foundation had sent a letter offering its assistance, an offer we had passed along to the Freemen. I told Edwin that enlisting the help and involvement of the Cause Foundation was not a problem. In fact, I told him that it was wise for him and the others to get legal assistance, which we had recommended all along. I told him candidly that a visit to Leroy Schweitzer in jail in Billings was going to be more of a challenge. We would have to transport Edwin to Billings, let him meet with Schweitzer, and then allow him to return to Justus Township. I told him that nothing like that had ever been done before. We didn't even know if Schweitzer would meet with him, or what Schweitzer would say. I told Edwin that if I was going to support this unprecedented undertaking, I would need to know what he wanted to talk to Schweitzer about. Edwin said he wanted to get Schweitzer's instructions on how they should preserve their common-law evidence, the various papers and documents they believed validated their common-law rights. He also wanted to tell Leroy that he planned to surrender. 
I asked Edwin what he would do if Schweitzer instructed him to continue the siege and not surrender. Edwin looked me in the eye and said, "If that's what he says, then I'll have to make my own decision about what's the right thing to do." I was convinced that Edwin had made up his mind to surrender; he just wanted Schweitzer's blessing in order to feel better about it. Based on that exchange, Dwayne and I met with the command team. As expected, there was no problem with contacting the Cause Foundation and allowing them to become involved, so we began making arrangements to bring them to Montana. Dwayne and I also strongly recommended that we fly Edwin to Billings to meet directly with Schweitzer. If Schweitzer opposed the surrender, I told the command team, then Edwin would most likely disregard those instructions and come out anyway. Edwin was clearly tired of the whole mess and wanted it over. The on-scene commander, Tom Kubic, and CIRG leader Robin Montgomery agreed on both counts, which took a great deal of courage. Not only was this action unprecedented, but if something went wrong, the FBI would be roundly criticized. On the seventy-ninth day of the siege, Edwin Clark left the property, secretly boarded an FBI plane, and flew to Billings, Montana, where he met in jail with Leroy Schweitzer. To our relief, Schweitzer agreed that it was time to end the siege; his only interest was that the government not be allowed to destroy the documents he believed would support their claims of sovereignty. Three representatives of the Cause Foundation, director Kirk Lyons, Dave Holloway, and South Carolina attorney Larry Salley, soon arrived in Billings, where they, too, were allowed to meet with Schweitzer. Edwin was quietly flown back to Jordan and allowed to return to Justus Township. With Schweitzer's blessing and the involvement of the Cause Foundation, we worked out the specifics of a surrender process. Five key issues were agreed to:

1. Karl Ohs would take custody of the Freemen's evidence.

2. The Freemen would maintain 51 percent control of their appointed counsel. (This was a nonissue, a concern based on their misunderstanding of the law. Suspects always retain 100 percent control over their own defense.)

3. The United States attorney would not oppose bail for Ralph and Emmett Clark.

4. The Freemen would be allowed to meet with one another in jail.

5. Their appointed counsel would be sworn in under common law.

The Freemen agreed to these points. We were cautiously optimistic. On day eighty, the three Cause Foundation representatives entered Justus Township to assist the Freemen in assembling their evidence. These outsiders were extremely helpful in keeping the disorganized Freemen on task. Later in the morning, Ashley Landers, daughter of Russ Landers, the fugitive from justice, suddenly walked out, suitcase in hand, and surrendered on her own. We had heard that she wanted to get out and away from her parents, and apparently she couldn't wait another minute. Shortly thereafter, Karl Ohs drove a rental truck onto the property and loaded up the Freemen's evidence to be secured for their defense in court. On the eighty-first day of the longest siege in United States history, Dwayne and I stood on our side of the cattle guard in our short-sleeved shirts and bulletproof vests, waiting for the final word from Edwin Clark. We watched as his car approached from the horizon on their side and came toward us along the dirt road. Edwin stopped his car and turned off the ignition. As he got out and walked toward us, I could see a broad smile break out from under his bushy mustache. "Well, boys," he said, "we had a hell of a siege, didn't we?" We shook hands, and I felt a tremendous sense of relief. As we had learned more of his story, both Dwayne and I had developed a bond with Edwin. I felt sorry for him, and I genuinely liked both him and his wife, Janet.
I hoped that life would work out better for them in the years ahead. The three Cause Foundation guys now came out and joined us to witness the surrender process. We all watched as the individual Freemen drove up to the cattle guard that marked the edge of the property. They got out of their vehicles, shook hands with the Cause Foundation representatives and Edwin, then crossed the cattle guard where Dwayne and I met them. It had taken a very long time, but not a shot was fired. The Freemen's land would be lost and their lives would be changed, but they were alive, and if they so chose, most would be able to move on. I certainly hoped that would be true for Edwin and his family. As we were ready to leave, I observed that one of the Cause Foundation guys had a tear in his eye. He had always considered himself an enemy of the FBI, he told me, but no longer. He also said that he was proud to have been a part of such a professional and creative operation, one that ended so well. I glanced at Dwayne and saw that his eyes were welling up also. That's what made him such a good negotiator—the fact that he really cared. Our success in Montana was a validation of what the FBI negotiation program stood for, what we had learned and practiced for over two decades, and what we had taught to cops around the world. The trauma of Waco and Ruby Ridge had been answered by handling this situation the way it needed to be done. As Edwin Clark said, it was indeed, in a very good way, a hell of a siege.

# CHAPTER TEN

# **PREPARE THE MISSILES**

_Wise men talk because they have something to say; fools, because they have to say something_.

—PLATO

Like almost every police group in the country, the Texas Rangers had never run a major siege operation. Their inclination was to take decisive action against criminals in the act of breaking the law.
Even though they'd been at Waco in a support role to the FBI and had witnessed the tragic ending up close, they remained a decisive and action-oriented outfit. Yet they soon would confront their own major standoff and have to make the tough choice between immediate tactical action and more thoughtful negotiations. In April 1997, almost one year after the Montana incident, I was back out west, standing beside a motor home command post overlooking the Davis Mountains of Texas, not far north of the border with Mexico. Once again I was on the scene to try and coax a bunch of right-wing separatists to come out from behind their barricade. Unlike the case with the Freemen in Montana, this time we had a clear leader to deal with. Unfortunately, he was not nearly as reasonable and likeable as Edwin Clark. Richard McLaren, self-proclaimed chief ambassador and consul general of the Republic of Texas, the militia group in question, was a pompous, self-important man with a passionate belief in the righteousness of his cause. He and his fellow Republic of Texas members were also known to be armed, and they were responsible for a recent kidnapping. This was only one of many reasons the Texas Rangers were not going to be nearly as tolerant of delay as we had been at Justus Township. Captain Barry Caver of the Texas Rangers was the overall tactical commander on the scene for the Republic of Texas siege, with sheriff's deputies, municipal police, correctional authorities, Border Patrol, and FBI agents there to help. We had a ham radio set up on a table outside the motor home, and Caver and I, plus Ranger sergeant Jess Malone, stood in the desert twilight and listened to McLaren's irritating, high-pitched voice. The Republic of Texas leader was broadcasting yet another of his requests for fellow believers from other right-wing militias to come to his aid. 
McLaren's style on these transmissions was always overexcited, his manner of speaking rapid-fire, but this time his rhetoric grew ever more inflated with rage. I leaned in and listened more intently as he solemnly intoned, "Prepare the missiles." This message wasn't meant for his followers, I realized; it was meant for us. And in that instant, I knew our strategy was working. "Prepare the missiles" meant that McLaren was desperate and scared. He and his men—our best estimate was that there were perhaps thirteen people involved—were known to have automatic weapons, but the idea that he would have missiles was preposterous. For days, via telephone, fax, and Internet, he had issued his "red alert." He had spoken freely to media outlets and declared war on the authorities. All of his statements had seemed cocky and self-assured, as if, based on his own assessment of the fallout from Waco and our patience in Montana, he was confident that the Rangers and the other officers from the Texas Department of Public Safety would not assault his position. His appeals for assistance had attracted at least one group of militia members, driving a car with its trunk full of weapons, who were arrested trying to come to his aid. But as he broadcast his latest appeal, McLaren sounded like a cornered man at his wits' end, no longer confident of anything. With daylight fading and darkness coming on, he seemed to be desperately posturing, trying to ward off any tactical action from the Rangers during the night. Thirty-eight-year-old Jess Malone was the primary negotiator at the scene. In this collaboration, the FBI advisors were known as "the suits," and the Rangers were known as "the hats." Malone wore a white western shirt, white cowboy hat, and blue jeans with a custom leather belt and holster. From what I could tell of him, the western wear was not just a fashion statement. 
Muscular, tall, and tough-looking, Malone moved with all the no-nonsense authority you would expect of a Texas lawman. Earlier, Jess had told McLaren over the telephone, "This has gone on a long time. We're getting tired and we're getting frustrated, and I have to tell you, the sooner you all come out, the safer it'll be for everyone." Malone waited a beat, then added calmly, "We don't want to see anyone get hurt." His tone was nonthreatening, but of course his words held the unavoidable implication that failure to cooperate could lead to some very unpleasant consequences. Most important, he never provided McLaren the reassurance he sought that the Rangers would not attack. Operations might very well take place during the night. The authorities were determined to maintain measured control, but their patience was not endless.

My involvement with the Republic of Texas had begun two months earlier, in February 1997, when Davis County sheriff Steve Bailey asked the local FBI office to send experts to advise him on how to deal with McLaren and his followers. FBI profiler Al Brantley and I flew to Texas to size up the situation and offer whatever assistance we could. Sheriff Bailey's team was understaffed and isolated, and confronted by a man who seemed to be just about as grandiose as David Koresh. The sheriff wanted to keep things in Davis County from coming to a head and slipping out of control. He wanted no part of anything resembling the debacle at Ranch Apocalypse. Holed up in a house trailer that he referred to as "the embassy," McLaren had been making quite a nuisance of himself and, like the Freemen in Montana, spewing out hundreds of bogus liens against anyone who criticized him. He had even demanded that then governor George W. Bush and other officials vacate their offices. Members of the Republic of Texas (ROT)—tax protesters, political extremists, and con artists—believed, like the Freemen, that they were not subject to any state or federal laws.
The ROT's principal claim was that Texas had been illegally annexed by the United States in 1845. Accordingly, the state was actually an independent, sovereign nation, and the federal government had no jurisdiction. Nor did the state government have any jurisdiction over the ROT because the politicians in Austin had subordinated themselves to Washington, rendering themselves impotent. But McLaren's views were so radical that he'd actually been impeached from the ROT mainstream, after which he had retreated to his property not only to avoid being arrested on charges related to the bogus liens but also to rally support and gain sympathy by taking a stand. Day after day, McLaren railed over the Internet against all forms of authority, saying that any attack against him would set off the liberation of America from the "new world order." Like the Texas patriots at the Alamo, he vowed to never surrender. With a little digging, what Al Brantley and I learned was that Richard Lance McLaren was a forty-three-year-old married Ohio native who had come to Texas from Missouri eighteen years earlier. The man who proclaimed himself chief ambassador and consul general of the ROT had not even been born in the Lone Star State. Although McLaren and several of his fellow ROT members were under investigation for mail and bank fraud as well as conspiracy, there were no substantial federal charges against them, which meant there were legal limitations on the amount and type of assistance the FBI could provide. It was our shared view that any overt federal involvement would only make the situation worse by appearing to confirm McLaren's charges that the federal government was heavy-handed and oppressive. Al's judgment was that McLaren was a man who mostly enjoyed being the center of attention, and I saw no reason to disagree. 
McLaren's inability to get along with others meant that he could only preside over a small group of weak individuals who passively allowed him to speak on their behalf. With his Koresh-like verbal ability, he frequently engaged in rambling lectures to show others how smart he was. Much of his current anger stemmed from being ostracized by the ROT mainstream. He had no history of violence up to this time, and when arrested previously he had not resisted. Bottom line, Al and I felt that McLaren was mostly bluster. But if the authorities moved against him, McLaren and his followers would most likely defend themselves. To ensure against anything resembling the siege at Waco, the better idea would be to lure McLaren away from the property and away from his bodyguards—the leadership decapitation we had carried out in Montana. But Sheriff Bailey had only one full-time and two part-time deputies, so this was not going to be an easy strategy to carry out. Accordingly, the sheriff decided to watch and wait and hope for the best. Having done the best we could under our limited mandate, Al and I headed back to Washington and the FBI Academy.

On April 27, 1997, in the community of Davis Mountain Resort, Joe Rowe and his wife, Margaret Ann, were just about to eat a quiet Sunday lunch when three men and a woman, armed and dressed in military fatigues, burst in on them. The Rowes' property stood adjacent to land owned by McLaren's Republic of Texas, and the Rowes and the ROT had been engaged in an ongoing land dispute. For months members of the group had been patrolling the area and openly brandishing weapons, sometimes walking onto the Rowes' property. That morning, Joe Rowe called the sheriff to complain about an armed trespasser. Responding to the scene, Sheriff Bailey arrested a forty-three-year-old man named Robert Jonathan Scheidt. "Captain of the embassy guard" for the ROT, Scheidt was carrying two assault rifles.
The ROT's plan was to hold the Rowes hostage in exchange for Scheidt's release from the county jail. The ROT also demanded the release of Jo Ann Turner, an ROT member arrested in Austin the previous week in connection with the group's filing of bogus liens to obtain fraudulent loans. It appeared that the sheriff's watch-and-wait tolerance had given McLaren the impression that he could get away with anything. Now the authorities had no choice but to act. State troopers barricaded the road leading in and out of Davis Mountain Resort and asked the more than eighty nearby residents to stay inside their homes. Those who chose to leave were allowed out, but not back in. After negotiating for more than twelve hours, the Texas Rangers agreed to trade the Rowes for Scheidt. Joe Rowe, who had been cut by flying glass during the invasion of his home, had a heart condition, and the Rangers were concerned for his health. As Joe Rowe was taken to the hospital, the ROT gang retreated to their compound. Meanwhile, state authorities filed kidnap charges against McLaren and the other ROT members—Richard Keyes, Gregg Paulson, and Karen Paulson—who had committed the break-in and abduction. While standard FBI procedure is never to exchange hostages, provide weapons, or furnish illegal drugs, we also leave room for flexibility, particularly when lives are at stake. This trade moved the Rowes out of harm's way, which meant that McLaren now had to be more concerned than ever about the potential for an assault since he no longer had them to use as shields. And even though Scheidt was out of jail for the moment, no one had any intention of letting him walk free. The downside of the exchange was that McLaren claimed it as a great victory. It also fueled his grandiosity. He telephoned the media and said that he would not end the standoff until the authorities agreed to a referendum on Texas independence. "Maybe somebody will talk to us now," he said. 
"We've been trying for two years to get someone to talk to us." McLaren escalated even further, characterizing law enforcement authorities as "foreign agents" and demanding that the international court in The Hague recognize their legitimacy as a sovereign nation. As the hours passed he became increasingly agitated and his rhetoric more vitriolic. I flew back to El Paso late Tuesday afternoon, April 29, and was told to check in to a motel and wait there. Profiler Al Brantley had come in just ahead of me and was already en route to Fort Davis in his rental car. Meanwhile, McLaren's attorney, Terry O'Rourke, gave Captain Caver of the Texas Rangers some letters from McLaren that expressed some of the ROT leader's typically inflated concerns. McLaren not only wanted the Rangers and the other authorities to stay away, but he wanted to be dealt with as the ambassador of an independent country. The conditions Caver offered were considerably less grand. Fundamentally, they called for McLaren's surrender. Not too surprisingly, the talks broke off, at which point I was told to proceed to Fort Davis. The following morning, Wednesday the thirtieth, I got up early and drove to the forward command post, which was in a nearby fire department building. In my initial briefing I learned that a negotiation team had already been established, consisting of Texas Ranger Jess Malone, the primary negotiator who would speak to McLaren, and FBI negotiators Lane Akin and Carlos Conejo. Al and I were there to assist. The Texas Rangers are known for acting decisively to enforce the law and restore order, a heroic reputation summed up in the adage "One riot, one Ranger." Accordingly, they were already voicing great frustration over their dealings with McLaren, and they were seriously considering a full tactical assault. Texas Ranger senior captain Bruce Casteel was the senior official at the scene when I arrived. 
He asked me what I thought would happen if they initiated tactical probes against McLaren. I voiced my opinion that ROT members would most likely respond with violence. Before the siege, McLaren had been thought to have at most thirteen people holed up with him. We knew that Robert Scheidt was there, along with Richard Keyes and Gregg Paulson and his wife, Karen—the party that had invaded the Rowes' home. The rest of the cast of characters were Robert Otto, a bodyguard, who claimed to be a Native American and was also known as "White Eagle"; Evelyn Horak, McLaren's common-law wife; and Mike Matson. Beyond these, we weren't sure. McLaren, Otto, and several others were believed to be carrying semiautomatic weapons. Given our lack of specific knowledge, I was concerned that we were moving toward a "linear approach" to incident management, which could lead to another debacle. It goes something like this: "First we try to talk them out, then if that doesn't work, we drop those ongoing efforts and tactically force them out." I suggested that the Rangers should consider the "parallel approach": authorities negotiate in good faith while simultaneously preparing for and showing their ability to undertake tactical action. Limited demonstration of tactical movement can help the negotiation process along by encouraging dialogue. Too little action can make the subject feel confident and secure, and thus less likely to negotiate in earnest. Too much action might trigger a firefight. The Rangers accepted my suggestion and undertook an effort to be more visible to McLaren and the others at the "embassy." At the same time, they were careful not to encroach on ROT property or get close enough to trigger a violent response. But McLaren now refused to speak with the Rangers. I suggested that we use the media to get our position out to the public. 
So far, McLaren was making all the significant statements, characterizing himself as a victim of federal intrusion and portraying the authorities as harassing him and his followers. I felt we could use the press to provide an accurate account of the situation, discourage supporters from rallying to his aid, and encourage McLaren to return to negotiations. The Rangers asked that I work with Department of Public Safety (DPS) press spokesman Mike Cox to develop a strategy. Mike had been plenty busy responding to endless media inquiries about McLaren, the ROT, and the charges pending against those hiding at the "embassy." He was now happy to use his microphone in a more proactive way. We worked out the following points for Mike to deliver in his press conference:

1. "This is a State of Texas matter only. This is a case of Texas authorities trying to execute arrest warrants based on probable cause that crimes have been committed. McLaren and others have not been convicted of anything; they have only been charged and must appear to answer those charges according to Texas law and be tried by a jury of their fellow Texans." Obviously, we felt this last piece was particularly important to help remove the argument that the big bad federal government was coming after McLaren for his beliefs.

2. "This is not a federal matter and should not be viewed as the U.S. government trying to move against the ROT. The DPS and Texas Rangers are in charge of this incident; the FBI is only here in an advisory role." We couldn't deny that the FBI had some personnel on the scene, as this had already been widely reported. A total denial would be a lie.

3. "The ROT claims that people should be free in their homes, yet ROT members violently invaded a home and at gunpoint took two people hostage. The people of Texas expect law enforcement authorities to investigate and prosecute such crimes. Texas authorities are attempting to serve valid arrest warrants; they are not concerned about Mr. McLaren's political beliefs." I hoped this point would elicit a response from McLaren, perhaps even bring him back to negotiations. I didn't think McLaren would want to be characterized as he was in our statement, which pulled the rug right out from under his self-serving interpretation of events. Right-wing movements always rail about the sanctity of individual rights, saying that a man's home is his castle and that no one has a right to come into someone else's home without permission. Yet here we were clearly showing that the ROT had violated this principle.

4. "Texas authorities have continued to undertake extensive efforts to open and maintain dialogue with McLaren. He has broken off contact. Texas authorities have shown patience and are committed to a peaceful resolution; we await Mr. McLaren's contact." Again, this statement put enormous pressure on McLaren to reach out. The version of events he had presented to the media and distributed over the Internet was now a myth blown out of the water. Our statement served to make Texas authorities look rational while at the same time showing McLaren as obstructing efforts to reach a peaceful conclusion.

5. "We plan to serve the warrants." And we wanted to make it clear that law enforcement was not going away.

While Mike Cox was preparing to give the press conference to deliver these points, the Rangers also dropped off a written response to McLaren's demands from the previous evening. This package included a letter from attorney Terry O'Rourke encouraging McLaren to come out. We also recommended that the package include a personal letter from Jess Malone, our primary negotiator, requesting that McLaren pick up the telephone to speak with him. We felt this would personalize Jess and demonstrate our sincere willingness to talk. Our earlier and repeated efforts to call in and speak to McLaren had been rebuffed.
Mike's appearance before the press received significant coverage, the overriding theme being that the citizens of Texas would demand that authorities bring to trial anyone who violated the sanctity of any citizen's home. We didn't have to wait long for the ROT to take the bait. At about three that afternoon, Gregg Paulson phoned right-wing radio personality Doug Town. Based in Tampa, Florida, Town had already been on the phone with various individuals inside the ROT compound. Town contacted the FBI, and negotiators from my unit at Quantico spoke with him, trying to make sure that he understood that this was not another Waco, that the FBI was not in charge, and that we did not have a significant presence at Fort Davis. At the request of negotiator Jim Duffy back at my unit in Quantico, Town set up a direct call between the ROT and negotiator Jess Malone. On that call, Paulson ranted about our characterization of him as having violently kidnapped the Rowes. In contrast, Jess maintained an easygoing, nonconfrontational manner and ran circles around Paulson. Jess carefully avoided getting into an argument. He also stressed the importance of keeping open a direct line of communications in the hope of achieving a peaceful resolution. Jess then asked Paulson if he and McLaren would, with a guarantee of safety and their right to return to the "embassy," be willing to come out for a "summit meeting." We knew the word _summit_ would appeal to the ROT's sense of itself as a sovereign nation. Paulson went for it. He said he would call us back at seven-thirty that night. This telephone exchange with Paulson reminded me that the authorities had so far not captured the ROT phone lines. Typically, one of the first things we try to do in a siege is isolate the phone lines so that the subjects can only speak with or through the negotiators—but the ROT still had unfettered access to the outside world. 
I suggested all efforts be undertaken right away to change that, meaning that whenever the ROT picked up the phone they would get us and only us. This would make the negotiators their sole broker for all communications, which was exactly what we wanted. Unfortunately, this suggestion was misunderstood by someone on the law enforcement side. The result was a screw-up in which the ROT phone was cut off entirely. For some time we had been asking McLaren to speak with us, telling him he could do so whenever he wanted, and now we'd made that impossible. Making matters worse, the lines were cut just before the 7:30 p.m. call we had scheduled with Paulson. Mistakes do happen during crisis situations, most often because the left hand of law enforcement doesn't know what the right hand is doing. In this case, someone may have been confused about the instructions, or may have simply assumed that we were following the policy of many police departments, which is to cut phone lines and power immediately when responding to an incident. I don't think authorities should ever do anything just because it's been done in the past. Each action has to be carefully considered in the context of the specific situation, with both the positive and the negative potential taken into account. Similarly, some SWAT teams prefer that negotiators not make initial contact with the subject until the team has the perimeter effectively contained, which can take time. These teams are concerned that the subject may want to come out before a surrender can be accomplished in a tactically preferred manner. I've always believed that if someone really wants to surrender before we are fully ready, the officers on hand will somehow make it work, even if it isn't entirely according to the SWAT playbook. The notion of asking a subject to wait to surrender makes no sense to me. In my opinion, the very first thing an agency should try to achieve is "verbal containment." 
This means establishing a dialogue, trying to keep the barricaded person calm, and explaining each of our movements in advance so that he does not see them as threatening. Fred Lanceley used to tell negotiation classes that our job is not just about what happens over the phone line. It is everything the subject observes from his window as well as everything he hears on radio or sees on television. All of these stimuli have to be carefully controlled and managed in an integrated manner. Unfortunately, we were now in a situation in which the press conference had set us up for progress, but we had no way to capitalize on it because we had no way to talk to the subjects. At 10:00 Thursday morning I requested a meeting with Captain Caver. He was a tall, lean man, clean-cut with dark hair. By all appearances he was approachable, laid-back, and easygoing. I'd never encountered him at Waco, but I knew that he, too, had been part of the law enforcement team outside the Branch Davidian compound. I was hoping he had taken away from that ill-fated siege the same lessons I had. Though he listened patiently while Al Brantley and I explained our concerns and made our recommendations, I could tell he was growing impatient with McLaren and that he was leaning toward quick and decisive action. I thought it would be appropriate to share my concerns with him about any direct tactical action, so I took him aside and asked him a few key questions. "What will you say to the widow of one of your Rangers when you have to deliver the news that her husband has been killed making this assault? If the widow looks at you with tears in her eyes and wants to know if you had done everything possible to avoid putting her husband in harm's way, will you be able to say you had? Will you be able to say that you explored every possible alternative? If you can't say yes to those questions, then perhaps you need to consider attempting some other initiatives before you send your people into harm's way. 
I believe we should try to reach our objectives without the use of force if at all possible." Caver looked startled. "What do you suggest I do?" I told him I thought we might be able to undertake some incremental actions that would encourage McLaren to negotiate with us in earnest to avoid bloodshed. We could posture that we were going to assault him, without actually moving in, to try to convince him to surrender. "I'm open to your suggestions and will do anything reasonable to keep my men safe," he said. In this conversation, he also acknowledged that he had never been told about the call scheduled with Paulson for seven-thirty the night before. He said that cutting the phone lines had been his decision, but he also said that he never would have done so if he had known about the call. Inadvertently, we had experienced the same communications problems that had plagued us at Waco. Even a well-meaning and open-minded commander can wind up spending more time with the tacticians than with the negotiators, which means that they may not be fully aware of the progress of the dialogue with those inside. When teaching the FBI negotiation course, I would describe this as the "crisis within the crisis," emphasizing the critical need for negotiation leadership to have direct access to the on-scene commander. I used to tell my students that negotiating with an on-scene commander was often more difficult than dealing with the perpetrator. But Captain Caver made it clear that he wanted our input, and that he would do whatever he reasonably could to avoid an escalation to violence. Shortly after our meeting, Caver directed that a telephone line be sent in to McLaren to support the reopening of negotiations. Two hours later Jess was able to reestablish contact. McLaren was surprisingly civil, but he also complained that he had been trying to communicate but hadn't been able to get a call out. 
Jess Malone rose to the occasion, casually stating that there had been some problems with the telephone lines. McLaren then announced that he was sending out a "diplomatic pouch" with a letter we should read. The pouch arrived around three in the afternoon. It contained a formal-looking ROT document written to look like a legal affidavit, along with a formal cease-fire agreement that McLaren had signed and wanted the Rangers to sign as well. This second document requested mediation by a neutral country. The inflated diplomatic tone was vintage McLaren. About an hour later, the negotiation team met to formulate a response. The group decided to send McLaren a letter informing him that the authorities could not sign the agreement as drafted. The carrot included along with that stick was that the authorities would, however, honor their earlier promise of a humane surrender process. This letter was delivered to McLaren at around eight that evening. At around nine, McLaren called Jess Malone and said that the letter was not what he had expected. He began to posture about his official standing as an ambassador of the ROT and rambled on in his convoluted way. After listening patiently to this for some time, Jess, becoming somewhat frustrated, responded, "You are not an ambassador." McLaren, who evidently did not appreciate the reality check, immediately hung up. Late that evening, Paulson called Jess and engaged in a lengthy discussion. Paulson seemed scared, seeking reassurance that the Rangers would not be coming in during the night to get them. In keeping with our prepared strategy, Jess provided no such reassurance. Paulson's willingness to die for the cause appeared to be waning. We let him sleep on it. At 9:05 the following morning, Paulson called once again. He told Jess that the only things keeping him from coming out were his honor and his duty. 
He said that he had written orders from the ROT command to defend their flag, and these were orders he could not disobey. I sensed that Paulson was fishing for help. In keeping with my role as the primary negotiator's coach, I passed a note to Jess saying, _Let's find out who, besides McLaren, Paulson would accept orders from_. The coach always sits shoulder to shoulder with the primary negotiator, listening on a headset while remaining absolutely quiet. Learning to write good notes that help the primary negotiator stay on track is an art unto itself. A well-timed note can make a big difference, allowing the negotiator to seize a sudden and unexpected opportunity that arises in the dialogue. Jess asked Paulson about the ROT chain of command. Paulson said that President Boyce Halbison and Major General Melvin Kriewald were the only people who could order him to come out. It appeared that Paulson didn't know either of these individuals personally. He even had difficulty pronouncing their names as he shuffled through papers, reading from a list. Getting Paulson to come out would be a real breakthrough. He could give us much-needed information about McLaren's intentions, accurately identify who was inside, and tell us what weapons they had and what defensive measures they had taken. It would also serve to loosen McLaren's control over the others inside the compound. As soon as the exchange with Paulson ended, we immediately set about trying to locate Halbison and Kriewald. We hoped that their split from McLaren, and McLaren's subsequent embarrassing behavior, would encourage them to help us before he did further damage to the ROT's reputation. A few minutes later, we learned that while we had been on the phone with Paulson, Robert Scheidt had walked down from the property and surrendered. He carried documents from McLaren, including last wills and testaments from those inside. 
I saw these wills as little more than a stunt to dramatize their willingness to fight to the death. Shortly thereafter, Paulson called back and asked to speak to Scheidt. Paulson said that Scheidt had a code word that he was supposed to report back with, and tried to suggest that Scheidt had not surrendered but had been captured instead. Jess countered that Scheidt had walked out entirely on his own. Paulson's response was to call Scheidt a traitor. We decided not to let Paulson speak directly with Scheidt, at least not until we knew more about why Scheidt had decided to come out. Had he been sent as a spy to report back on how many officers were surrounding them and where our command post was located? Was his surrender a test to see how well they would be treated? We simply didn't know. We came to believe that he had been sent out merely to deliver the wills, and that surrendering had been his own improvisation. With Paulson and now Scheidt losing heart, McLaren's control seemed to be waning. At around two that afternoon, a letter arrived from Halbison ordering McLaren and all inside to surrender immediately. Jess made several attempts to call in and speak with McLaren after this, but was told McLaren was busy. The letter had clearly taken them off guard. At 3:38, Paulson called Jess and said that the letter from Halbison was not the authorization he needed. He said that he needed a "proper" military order directly from Kriewald. We argued that Halbison, as the ROT president, was the commander in chief and therefore Kriewald's superior, but Paulson remained steadfast. Was he playing games with us? More likely, McLaren had raised this technical issue of chain of command, desperately trying to keep Paulson from bailing out. Meanwhile, we continued our efforts to reach Kriewald. I felt it was time to inject a new element into our efforts, a nonaggressive yet highly visible tactical movement to create tension for those inside. 
We wanted to show McLaren and the others that we had the ability to bring this to an end whenever we wanted. Captain Caver agreed, and at around six that evening, the Rangers and DPS authorities deployed a military construction tank, two armored personnel carriers, and more than ninety officers in vehicles to within a quarter mile of McLaren's position. With darkness coming on, this show of force would not be comforting. Obviously, if there had been hostages held by the ROT, such an initiative would not have been advisable. Trying to rattle an already unstable and cornered individual is too risky when hostages' lives are at stake. In this case, the tactical action initiated by the Rangers was a carefully calibrated warning, meant to demonstrate the benefits of a negotiated resolution. And it got results. McLaren immediately took to the ham radio asking for help from militia forces elsewhere. He also appeared to be broadcasting orders to his men on the perimeter, but we had no evidence that they even had ham receivers. That's when I heard him say, "Prepare the missiles," and I knew he was running out of gas.

At around ten that night, McLaren's wife, Evelyn, placed a call to Jess. She said that she was afraid of our apparent intentions to escalate. "Can't you just give him what he wants?" she pleaded. Jess patiently appealed for her to use her influence with McLaren and the others to come out peacefully and avoid bloodshed. Again, Jess was careful to avoid the term _surrender_. No one—least of all a rather pathetic individual with an inflated sense of self-importance—wants to be humiliated. I was always moved by General Grant's gesture to General Lee in allowing the Confederate leader to keep his sword during the surrender at Appomattox that ended the Civil War. That small but symbolic act cost Grant nothing, yet gained so much by allowing the venerated Lee to maintain his dignity and positively influence his loyal followers.
Many of the individuals we deal with are hardly venerated warriors, but they are repeat offenders. If we treat them poorly when they surrender, they may not be so willing to cooperate with us the next time they're in a jam. The next morning we got up early and drove back to the command post. It was only seven forty-five when Evelyn called to tell Jess that she had decided to come out. She said it would be at eleven or so that morning. Immediately I met with Captain Caver and urged him to make sure that his men treated her with dignity and respect. He promised that he would. Not only was that simply the right way to do things, but I wanted her to be able to speak positively to McLaren and the others about her reception. At eleven-thirty she delivered on her promise, and Caver delivered on his. Evelyn came out, bringing with her a proposed agreement stating McLaren's terms and conditions for "surrender" (his choice of words). It seemed that he was still trying to save face. His proposal was written as a formal international cease-fire agreement between the ROT and the state of Texas. The district attorney and the Rangers reviewed the letter and felt they could sign two of the three pages, but they balked at the third page because it gave McLaren international recognition. I reminded everyone that while the path of deception is indeed a slippery slope, any document signed under duress had no legal standing. This helped convince the district attorney that Texas authorities could safely sign the proposed agreement without fear of legal entanglements. We then sat down with Evelyn and made sure that she understood the gravity of the situation and the importance of convincing McLaren and the others to surrender. At around two in the afternoon she called and spoke with her husband. He asked for a code number, which she provided, apparently a confirmation that she had been well treated. 
She urged McLaren to come out, adding that she had learned that this would be the last chance for him to do so before the authorities launched an assault. She also told him that we had tanks and hundreds of officers assembled and were ready to come in with all necessary force. Then she told him that we had agreed to sign the letter, but that there would have to be some minor changes. The changes we had made were mostly window dressing—we felt that if we simply signed his document without revision, he might smell a rat. When Evelyn read the letter, McLaren seemed relieved. He asked her to reread several parts to him, then indicated that it was acceptable to him as amended. I was still convinced that he was eager to come out and that all this fine-tuning of the agreement was an effort to save face. When Jess picked up the conversation with McLaren, he quickly transitioned into the process for a peaceful surrender. In such a situation, it's very important that both sides fully understand and agree to what, where, when, and how everything will happen. The two men agreed that McLaren and his bodyguard White Eagle would walk down from the property to the law enforcement perimeter and give themselves up. The remaining four individuals, fewer than we'd thought were there, would stack their weapons, stand by the ROT flag, and wait to be taken into custody. At five-thirty that afternoon, seven days after the standoff began, McLaren surrendered. We had promised that he would be allowed to see Evelyn briefly when he was brought to the command post. This was a promise that we could stand behind. When the tactical team moved forward to secure the ROT property, they found only Paulson and his wife waiting for them as agreed. Richard Keyes and Mike Matson, another ROT member we later learned had been inside, aided by the rough terrain, had managed to sneak through the law enforcement perimeter and escape. So now dog teams, riders on horseback, and helicopters all gave chase.
After three days, the officers with dogs began to get close. Matson told Keyes that he couldn't run anymore and that Keyes should go on without him. Keyes continued on foot as the officers closed in. Then Matson shot and killed a tracking dog, which prompted officers to return fire, killing Matson. Keyes made his way to civilization, where he was assisted by a militia group that transported him away from the Fort Davis area. On September 19, 1997, he was located by FBI agents and arrested. While the ROT members may have appeared disorganized and incompetent, they had posed a very real risk to law enforcement officers as well as to their neighbors. A search of their property yielded a wide range of weapons. The ROT members also had set up explosive traps that might have harmed officers had they attempted to carry out the arrests by force. McLaren and most of the others were tried and convicted on charges ranging from engaging in organized criminal activity to burglary, failure to appear, civil contempt, aggravated kidnapping, and attempted capital murder. There was little more for me to do out west. Al and I said goodbye to Captain Caver, Jess, and the others we had worked with, and headed to our rental cars. We wanted no part of any press conference. This was not a time for the FBI to be seen; our whole strategy had been to make it a local matter. As I drove to the airport, I smiled, thinking about one of my favorite TV shows as a kid— _The Lone Ranger_. I chuckled at the thought of someone seeing us drive off and asking, "Who were those masked men?" # CHAPTER ELEVEN # **NO SHORTAGE OF CHALLENGES** _The human race is challenged more than ever before to demonstrate our mastery—not over nature but of ourselves_. —RACHEL LOUISE CARSON The approach that worked so successfully in other standoff situations also was effective in a very different situation that took place in Puerto Rico in May 2000. Since 1941, the U.S. 
Navy had used the twenty-one-mile-long Vieques Island in Puerto Rico as a practice range for shelling and bombing, but over time the site had become more and more of a point of contention with the local population. According to the Navy, there was no alternative location that would allow them to carry out critical live-fire training. But when a security guard was accidentally killed during a bombing exercise in April 1999, protesters demanded that the Navy leave Vieques immediately. The cause had a great deal of political and public support in Puerto Rico and among some political figures on the U.S. mainland. Despite a permanent court injunction against their trespassing, several protesters affiliated with the Vieques Fishermen's Association, the Puerto Rico Independence Party, and others occupied a portion of the island in the "live impact area," where the ordnance would land. The trespassers, thought to be about fifty individuals, set up eight separate camps scattered throughout the nine-hundred-acre site on the eastern tip of the island. The Navy occupied 75 percent of Vieques's thirty-three thousand acres. A presidential panel had recommended that the Navy resume live-fire training but leave within five years. Pedro Rosselló, the governor of Puerto Rico, insisted that there be no further live-fire training at all and that the range be returned to civilian use. The Navy insisted that use of Vieques was an important matter of national security, and with that in mind, requested FBI assistance in removing the trespassers from the live-impact area. The director of the FBI went on record with the attorney general stating that this was not a law enforcement issue and that the FBI should not be forced to make a tactical response. Despite that position, the FBI, along with the United States Marshals Service and the Coast Guard, were sent down to resolve this matter. Everyone at CIRG viewed this as a no-win situation. 
The Vieques Island issue had galvanized the Puerto Rican people, and the island's three political parties, the Catholic Church, and university students were unified in seeking the cessation of Navy bombing operations. If the FBI removal operation was met with resistance and resulted in harm to any of the protesters, there would be huge political and reputational damage to the organization. Despite those concerns, the CIRG deployed a large contingent of personnel, including negotiators and the HRT. We would use the Navy base at Roosevelt Roads, across the channel from Vieques, as a staging point. The action plan called for relying on the element of surprise to quickly gain control over the protesters and remove them from the live-impact area without incident. At least that had been the hope. I flew to Puerto Rico to lead the negotiation team, deployed in the event of a standoff, but being there provided an unexpected pleasure. Captain Keith Naumann, the chief of staff for Rear Admiral Kevin Green, the senior Navy officer at Roosevelt Roads, was my best friend since childhood and had been the best man at my wedding. Admiral Green was the Navy's point man on the Vieques Island issue, so as his chief of staff, Keith was very familiar with the history and issues surrounding this problem. His background information and perspectives were very helpful to me. For the first time in our long respective government careers, his as a naval aviator and mine as an FBI special agent, these two boys from Atlantic Beach, Florida, were working together on a mission. Keith told me that the Navy had threatened to close the entire Roosevelt Roads naval base if they lost the use of the practice site. Such a move would have a major economic impact on Puerto Rico. But apparently none of the politicians or protesters took the Navy's threat seriously. 
The day before we planned to initiate the removal operation, a helicopter flew over the protesters' camps to help determine the number of individuals we would be confronting. This surveillance mission came back with some very troubling news: television camera crews were already set up at several locations. So much for the element of surprise. Roger Nisley, now the SAC in charge of the CIRG, brought Chris Whitcomb and me into a meeting to discuss the implications of this new information. Chris formerly had been an HRT operator but now served as the CIRG media coordinator. We were all concerned that the protesters, knowing they were being filmed, would tend to act out and provide greater resistance than they might otherwise. This could only serve to inflame the political aspects of this confrontation. I recommended that we revise the plan to have teams of negotiators lead, rather than follow each of the tactical teams as they approached the various camps. It would be the job of the Spanish-speaking negotiators to open up a peaceful and nonthreatening dialogue with the protesters, hoping to secure their cooperation in leaving, or at least in being taken into custody without any theatrics. We also agreed to have tactical personnel wear ordinary clothing and to advance toward the protesters "slow and easy" rather than "hard and fast." Roger's embrace of this new approach demonstrated a willingness to think outside the box. The operation was less than twenty-four hours away, yet we were suddenly changing the plans we had developed over many weeks. The next morning various FBI teams assembled and set off for Vieques. As planned, the teams arrived at each protest site simultaneously; stepping ahead of each team in a slow and confident manner were two Spanish-speaking negotiators, several of Puerto Rican ancestry. They projected genuine respect for the protesters and an understanding of their cause, but also inserted just the right degree of firmness. 
Treated this way, the protesters remained calm and fully complied with our directions. Many of them later commented that they were genuinely surprised by and greatly appreciative of the calm and professional way the FBI removed them from the island. Even better was the fact that the news media filmed this evidence of a "kinder and gentler" FBI removal operation. This no-win situation had suddenly become a big win for the FBI. At the front gate of the Vieques live-impact area, a large, angry group had assembled, along with several television news teams, to protest the removal operation. FBI negotiator Liane McCarthy, a fluent Spanish-speaker from the FBI Boston office, and Henry Nava, a fluent Spanish-speaking negotiator from the FBI San Antonio office, calmly stood in front of this crowd and patiently explained what the FBI was doing and how we were doing it. Back in Washington, Attorney General Janet Reno watched a live television broadcast as Liane and Henry expertly controlled the large crowd and calmed their anger. This was the type of news coverage the attorney general enjoyed seeing, and she conveyed her personal appreciation to Liane and Henry. The Navy was also extremely grateful. The entire operation was a huge success, and the verbal skills of the negotiators supporting each tactical team had been the key element. Predictably, because no one was killed and nothing was burned down, this news event quickly fell off the national radar screen. But I couldn't have been more pleased and proud of my negotiation team. Despite the overwhelming success of this operation, the issue of Vieques never really went away. Under heavy political influence the Navy eventually was forced to give up their target range, and true to their word, they shut down the Roosevelt Roads naval base as well. Puerto Rican politicians were shocked and dismayed at the closing of the base and the significant loss of jobs and local revenue that resulted.
Several said they had wanted the Navy out of Vieques only, not Roosevelt Roads. It seems they wanted to have their cake and eat it, too. Perhaps they should have listened to the Navy sooner. The results the FBI had begun to achieve in the 1990s, with skilled negotiation being applied in crisis situations, brought us significant international attention over the years. The purview of FBI negotiators was now global, with increasing levels of work outside the boundaries of the United States. Particularly challenging were cases in which American citizens were kidnapped abroad. In all, we would work on more than 120 international kidnappings, in addition to other incidents, often painfully aware that outside the United States we had far less control over how the situation would be handled. And outside the United States, the lessons learned by the FBI had not necessarily penetrated to all of the foreign governments involved. The longest siege of my career began on December 17, 1996, when fourteen members of the Túpac Amaru Revolutionary Movement (MRTA) invaded the residence of the Japanese ambassador in Lima, Peru, during a party honoring the sixty-third birthday of Emperor Akihito. The guest list meant that they took as hostages six hundred high-level diplomats, government officials, military leaders, and business executives, as well as Peruvian president Alberto Fujimori's mother and sister. The United States ambassador to Peru, Dennis Jett, had left the function just before the terrorists gained entry, but seven other U.S. diplomats were not as lucky. When news of the incident reached Washington I was immediately deployed to Lima aboard a U.S. military aircraft along with other representatives of the multiagency Foreign Emergency Support Team (FEST). On the long plane ride to Lima, I discussed the dangerousness of the situation with Alanna Lavelle, one of the experienced negotiators assigned to my team at Quantico. 
In addition to being a great negotiator, Alanna also spoke fluent Spanish. Only a few months earlier, during a kidnap case in Ecuador, she had posed as a family friend and expertly stretched out the telephone calls with the kidnappers. This allowed the Ecuadorian authorities to trace the calls, locate the kidnappers, and then rescue the victim, John Heidema, a fifty-four-year-old American computer scientist. He had been taken hostage while vacationing in the rain forest with his daughter, who smartly feigned an asthma attack, which convinced the kidnappers to leave her behind. Her father was held for over thirty days in difficult conditions before he was rescued. When Alanna and I arrived in Lima we met with Ambassador Jett, who expressed grave concern about the Americans and other hostages due to the MRTA's violent history and instructed me to make an assessment of the situation and keep him informed. As I was leaving his office I received a message that someone from the British Embassy wanted to see me. It turned out to be Mike Dixon, the head of Scotland Yard's negotiation team, a good friend with whom I'd worked on other cases. We were soon joined by Dale McKelvey from the Royal Canadian Mounted Police (RCMP), who, like Mike, had attended my negotiation course. We would form a kind of ad hoc team, sharing information and making strategy recommendations to our respective governments, to be passed along further to President Fujimori. We would meet daily in my hotel room to exchange information about what we had learned and what recommendations we would make. The MRTA's primary demand was the release of four hundred of their members being held in Peruvian jails. Another major problem we faced was that President Fujimori had risen to power on his tough stance against the MRTA and the Sendero Luminoso (Shining Path). These were both Marxist terrorist groups whose actions had led to thousands of deaths over the years. 
Mindful of his domestic constituency, President Fujimori refused to communicate with the terrorists, despite the fact that several hostages had been unilaterally released with messages saying the MRTA wanted to talk with the government. Fujimori's apparent refusal to open a dialogue with the MRTA demonstrated that he had not heard of the concept of verbal containment. He was taking a serious risk by not attempting to open such a dialogue, as the MRTA might begin executing some of their hostages at any time to force the issue. From what we could gather, there was no clear command structure controlling the various government elements surrounding the residence. To make matters worse, President Fujimori made frequent bellicose statements to the press that merely served to agitate the terrorists inside. Also, the government's failure to control the perimeter around the Japanese ambassador's residence would cost us an opportunity to gain vital information. Just a few days into the crisis, the MRTA unilaterally released large numbers of hostages. When the hostages emerged from the residence, the multiple Peruvian police units surrounding the residence simply sent them home. No one intercepted them to conduct a debriefing. We lost a chance to find out how many terrorists were inside, what weapons they had, what they were saying about their intentions, and how they were treating the hostages. Management of this siege was turning into a three-ring circus, with Fujimori as the inept ringmaster. Luckily, one of the released hostages was Anthony Vincent, the Canadian ambassador. He volunteered to become an intermediary between the terrorists and the Peruvian government. Working through the RCMP, our ad hoc international negotiation team was able to rely on the ambassador to inject our assessment and advice into the process. The head of the Peruvian office of the International Committee of the Red Cross (ICRC), Michel Minnig, was also released. 
Acting more on his own than under guidance from Fujimori, he returned to the residence to deliver food and water to those still being held. He would return every day to bring more food and take out the trash, and he soon began to carry messages directly from the terrorists to the government. When I learned that he was doing this without guidance from the government, I set up a meeting with him in order to find out more about what he was doing. While his insights were interesting, he made it clear that his role with the ICRC prevented him from playing any role other than a humanitarian one. While Fujimori allowed this contact to take place, he still distanced himself from direct involvement. Leaving the ICRC to operate on its own was hardly the ideal way to manage contact with terrorists during a siege, but it was the best thing we had going. Through this ICRC effort, and partially because of space restrictions within the residence, the terrorists began to release additional hostages, including all the women. This allowed them to better manage the one hundred or so captives who remained. They were not physically abusive to the hostages, but toilets began to overflow, and in spite of Minnig's efforts, food and fresh water were in short supply. After the first week, the MRTA released more hostages, including all the remaining American diplomats. We were delighted, but we recognized that this was, in fact, a smart strategic move on the part of the MRTA. By releasing all the American victims, they hoped to eliminate the potential for the United States to use its own tactical forces to conduct a rescue mission. During the second week Monsignor Juan Luis Cipriani joined Ambassador Vincent in an attempt to mediate the crisis. Ambassador Jett made an appointment for me to meet with Cipriani to provide him with some ideas on how he might enhance his efforts as an intermediary. At my meeting with Cipriani, I stressed the importance of patience and keeping the dialogue open. 
I recommended that he always set the next meeting time with the MRTA before ending the current contact. I thought he also might be able to explore creative ways to address the MRTA's demands for prisoner release, such as sending some to a third country. As I discussed these and other suggestions, he and his assistant furiously scribbled down every single word. When I was done, he put his pen down, looked up at me, and said haughtily that he had already thought of all of these things. With the American hostages released, I returned home just in time for Christmas, but kept in daily contact with other deployed FBI, RCMP, and Scotland Yard negotiators for the remainder of the ordeal. Monsignor Cipriani's efforts yielded little, and I soon became convinced that President Fujimori wasn't seriously pursuing a peaceful resolution, supporting limited negotiation contacts only as a means to buy time while preparing his commandos for a tactical assault. And in fact engineers were already at work, excavating tunnels that led underneath the street and into the residence. Several of the hostages later commented that the MRTA could hear that tunnels were being dug—they just didn't know what to do about it. As a diversion meant to mask the sounds, Fujimori ordered loud military parades with marching bands to roll by on the street in front of the residence. During one of the parades a soldier riding in an armored vehicle stuck his middle finger in the air aimed directly at the terrorists. In response, an irritated MRTA terrorist cranked off a round from his AK-47, the first shot fired since the residence was taken over. A news videotape shows a bullet striking and ricocheting off an armored personnel carrier just inches away from the gesturing soldier. Everyone ducked for cover, and the parade quickly came to an end. Had that soldier been hit, the shot might have prompted an immediate assault with significant loss of life. 
Through all of this the MRTA remained firm in its demands for the release of incarcerated terrorists, something Fujimori resolutely refused to consider. We were lucky that the MRTA did not start executing hostages to press their demands. The FBI's advisory role expanded into a new arena—garbage collection. One of our agents suggested that we start to examine all the trash being carried out by the ICRC after each food delivery, looking for messages as well as any other clues to what might be going on inside. This job was extremely unpleasant and ended up being something the Peruvians didn't seem much interested in doing. So highly skilled FBI agents donned gloves and masks and did the job for them, finding a number of important handwritten notes from hostages, including one asking the government to acknowledge receipt of their notes by having the military band play a certain song. After many days, they were able to make this happen to let the hostages know their messages were being received. In addition, one of the Peruvian hostages had been able to keep his cell phone hidden, and he periodically transmitted information. Further insight came through several hidden microphones that were secretly introduced into the residence. The single most salient fact picked up through these efforts was that every day at a certain time most of the young MRTA terrorists played a game of indoor soccer in a residence living room that had been cleared of furniture. Weeks and then months passed with little progress. It was taking a long time to dig the tunnels. Finally, on April 22, 1997, 126 days after the siege had begun, military commandos placed a large charge of explosives inside a tunnel directly underneath the living room. They detonated it as the daily MRTA soccer match was in full swing, instantly killing many of the terrorists. Peruvian commandos then stormed the residence from multiple points of entry, killing the remaining terrorists and freeing the hostages. 
Although one hostage, two commandos, and all of the terrorists died in this rescue, seventy-seven hostages were rescued. Time purchased through delays, more by luck than design, had enabled the commandos to devise and execute their plan with precision—another testament to the value of stalling for time. Later, critics of the government made the accusation that several terrorists were summarily executed after surrendering, but that was never proven, and in any event there was no sympathy for them among the Peruvian public. The whole nation rightfully took pride in what they saw as a brilliant rescue. Fujimori was the hero, and he was videotaped in the news triumphantly touring the just-cleared residence, looking down at the bodies of terrorists. I worried that other governments would examine this incident only from the narrow perspective of the successful outcome. My main complaint about Fujimori was that he placed all his eggs in one basket—the tactical rescue. Without ongoing negotiations to keep a lid on the tension, the MRTA might have initiated violence at any time. Had they done so, there would have been no way to quickly and safely intervene to save hostages. Fujimori and his followers saw him as a masterly tactician. Perhaps, but he had also been very, very lucky. It would be foolish to expect other terrorists to be so patient. Could meaningful negotiations have resolved this situation without any loss of life? It's hard to say. However, I know that there will always be terrorists who, when given an option, will choose life over death. It's the job of the negotiation team not only to buy time but also to genuinely attempt to convince those wavering extremists to pursue a course of action in which they and their hostages can survive. While we always prepare for the worst, we still try to pursue the best outcome we can. 
One positive outcome of the Peruvian incident was that Canadian, British, and American negotiation teams agreed to come together to conduct an after-action review at a conference I organized in Alexandria, Virginia. This led to an agreement to continue working together on other international negotiation matters. Our core group continued to meet annually, and I eventually expanded the group to create the International Negotiation Working Group (INWG), which now includes more than fifteen countries from around the world. This group, in turn, inspired me to try to enhance further the FBI's level of support for domestic police negotiation teams throughout the country. With that goal in mind, in 1999 I invited seven experienced police negotiation colleagues to the FBI academy for a conference. This gave rise to a national coordinating body, the National Council of Negotiation Associations (NCNA), to assist the various regional organizations around the country already serving police negotiators. One of the early achievements of the newly formed NCNA was to ratify a set of negotiation guidelines that I drafted. With minor modifications these NCNA guidelines became, and remain today, the national standard. Endorsed by the NCNA member organizations representing several thousand law enforcement and correctional negotiators in the United States and Canada, they have codified the underlying philosophy and recommended negotiation approaches for all types of hostage, barricade, and suicide incidents. Now for the first time, negotiation teams could provide their incident managers with nationally approved written guidance on handling critical events. This ability empowered and supported negotiation teams by allowing them to argue to incident commanders that their departments' handling of any situation would be assessed according to how well they followed the NCNA guidelines. 
I'm pleased that these same guidelines have been used to successfully defend police departments during several wrongful death lawsuits around the country. In April 1998, the FBI elevated the FBI negotiation program and established the Crisis Negotiation Unit (CNU), with me named its first unit chief. More important, this promotion elevated me to the same rank as the Assistant Special Agent in Charge of the HRT. With ten full-time negotiation supervisory special agents and three support staff, we managed the training and deployment of more than 350 negotiators assigned to FBI field offices around the country, responding to law enforcement negotiation needs at home and abroad. Our two-week negotiation training course was now known as the National Crisis Negotiation Course (NCNC). Police officers from around the globe continued to request opportunities to attend this prestigious program. We could only conduct a few classes each year, so we never had enough slots to satisfy the requests for attendance. Unfortunately, I had to fight internal budget wars each year in an attempt to maintain funding. CNU had raised the profile of the negotiation program around the world, but within the FBI, getting the necessary budget dollars for training, or even finding available classroom space at the FBI academy, was never easy. I don't think that FBI officials at the highest levels ever fully appreciated or understood the significant national and international goodwill this training program brought to us, a situation I'm afraid persists to this day. Over the last several years of my FBI career, overseas kidnappings of American citizens increasingly demanded a significant amount of my time and energy. There was rarely a time that my FBI negotiation team was not actively deployed abroad. In 1990 I had flown to Zaire on one of the FBI's first overseas kidnap cases and helped secure the release of American Brent Swan from the terrorist group FLEC-PM. 
In the first years of the new millennium, we were engaged in trying to resolve the kidnapping of oil-field workers by Ecuadorian guerrillas, as well as an incident in the Philippines in which the victim was a young man traveling to meet a young woman he had met online, whose relatives turned out to be terrorists who saw the young American as an opportunity for revenue. In another Philippine incident, the kidnap victims were missionaries. On May 27, 2001, missionaries Martin and Gracia Burnham were celebrating their eighteenth wedding anniversary at the upscale Dos Palmas resort on Palawan Island, having saved just enough for a one-night stay. Terrorists from the Abu Sayyaf Group (ASG), Islamic separatists who operated primarily in the southern Philippines, chose that same night to travel across the sea by speedboat from their base on Basilan Island to gather up hostages at the resort. Martin and Gracia were part of a group of eighteen people seized that night and whisked back to the ASG stronghold. The group included another American who had been on vacation when captured, Guillermo Sobrero. He was reportedly wounded during an early skirmish between the ASG and the Philippine military. After one month, unable to keep up with the frequent movement and forced marches dictated by the ASG in order to avoid Philippine military actions, he was beheaded. Most of the other hostages were eventually ransomed by their families. However, to protect its missionaries from kidnapping, the Burnhams' sponsoring organization steadfastly refused, as a matter of policy, to pay the $1 million ransom the kidnappers were demanding for the couple. The ASG was ideologically aligned with Osama bin Laden, so I was deeply concerned about the fate that awaited Martin and Gracia. I quickly deployed a team of negotiators to the Philippines. For many months we tried to develop and maintain contact with the band of terrorists holding them. 
We eventually exchanged several text messages with the kidnappers. The U.S. military was also providing significant assistance to the Philippine military in support of their search efforts. Teams of FBI negotiators rotated in and out of the Philippines every three weeks. Well into the Burnhams' captivity, we attempted to mount a sting-type operation by offering to pay a $300,000 ransom. Our plan was to pay the money, secure the safe release of the Burnhams, and then sweep in to destroy the ASG element and recover the money. The ASG agreed to our offer and the plan moved forward, but then they kept the money and didn't follow through with the promised release. At least the funds allowed the ASG to purchase much-needed food and supplies that Gracia later said helped them during a very lean period in their captivity. But then the group disappeared deeper into the jungle, and the already long and painful plight of the Burnhams continued. I spoke with my deployed negotiators almost every day during this yearlong case, always attempting to develop approaches that would establish dialogue with the ASG. Limited negotiation via text messaging was the best we were able to do. Just days after the one-year anniversary of the Burnhams' capture, a Philippine military unit located the ASG camp where they were being held, and initiated a rescue operation. Tragically, the assault included indiscriminate shooting, which resulted in Martin's being killed, not by the ASG but by the rescuing forces. He was hit by three gunshots to the chest and died at the scene. Gracia received a gunshot wound in the right thigh but survived. A Philippine nurse also being held hostage was killed as well. Gracia was rescued by Philippine soldiers and taken to Manila for medical care. Those who believe that military action is the only strategy against terrorism should view this sad ending as a cautionary tale for what can go wrong when bullets start to fly. 
The tactical capability of law enforcement and military units in the developing world is often limited. Unfortunately, bullets cannot tell good guys from bad guys. The ASG contingent and its leaders who held Martin and Gracia were later hunted down by the Philippine military and destroyed. On July 25, 2002, Gracia traveled to the Washington, D.C., area and kindly appeared before the team of FBI negotiators who had been involved in trying to secure her safe release. It was a bittersweet meeting, with all in attendance thankful for her survival but deeply grieved by Martin's death. As I listened to Gracia recount her ordeal, I was yet again reminded how very important the work of our negotiators is, and how close to the line between life and death we usually operate. In January 2002, my unit also provided significant assistance after the kidnapping of _Wall Street Journal_ reporter Danny Pearl. We were never able to sustain a meaningful dialogue with his captors, but the limited contacts we did have assisted FBI investigators in identifying those responsible through their use of an Internet café in Pakistan.

# CHAPTER TWELVE #

**BEING OUR BEST WHEN OTHERS ARE AT THEIR WORST**

_If you can keep your head when all about you are losing theirs..._

—RUDYARD KIPLING

Having joined the FBI several days after my twenty-second birthday, I often joked that my parents had given me to the FBI as a child. The FBI had never been a job to me; it was a calling, an honor, and a privilege. Being a special agent wasn't just what I did for a living, it was who I was. It had been a demanding ten-to-twelve-hour-a-day commitment, working nights and weekends and often being away from my family, but the rewards had far outweighed the burdens. By 2002, I had achieved most of my goals for the FBI's crisis (hostage) negotiation program and felt it was the right time to retire, and by the beginning of fall I had the necessary paperwork all filled out and submitted.
But just like in all those pulp fiction detective novels, I had one more case to work. This final case would be very different from anything I or anyone else had ever worked. We were dealing with an unknown adversary engaged in a rampage that terrorized everyone within a large metropolitan community over a period of several weeks. This incident filled the news as nothing before ever had. It all began at 5:20 p.m. on Wednesday, October 2, 2002, when a bullet flew through the front window of the Michaels craft store on Georgia Avenue in Wheaton, Maryland, a suburb of Washington, D.C., fortunately not hitting anyone. Forty-four minutes later, fifty-five-year-old James D. Martin was walking across the parking lot at the Shoppers Food Warehouse not far away when a bullet struck him in the chest, killing him. What was going on? Was this the action of a lone madman, or perhaps the work of a group of violent Islamic terrorists attempting to strike fear in Americans in our own homeland? No one claimed credit for these shootings and no one knew the answers to those questions. Over the next two days a total of six individuals in Maryland and Washington, D.C., were felled by a sniper's bullet. There was no apparent pattern to the shootings and no indication of any grievance against these seven individual victims, who were white, black, Hispanic, and Indian, male and female, and ranging in age from twenty-five to seventy-two. Every law enforcement officer in the metropolitan area was in a state of high alert. Citizens in the area were panicked; parents, particularly, were worried about the safety of their children as they traveled to and from school and even as they sat in the classroom, but on October 4, police announced that the schools were safe and that parents should continue to send their kids to class. Then on October 7, a thirteen-year-old boy was shot and seriously wounded at Tasker Middle School in Bowie, Maryland. 
It seemed as if the shooter was listening to the news and responding to what was being said. At one point an "expert" suggested that the shooter would likely stay near his own familiar area of comfort; the shooter's next victim was about sixty miles south, in Fredericksburg, Virginia. On another occasion a retired FBI profiler suggested that the shooter was apparently not a skilled marksman, since he had shot several victims in the torso and not the head; the next victim died of a bullet to the head. Her name was Linda Franklin, and ironically she was a support employee of the FBI. My family was as worried as anyone else. My twenty-two-year-old daughter, Kelly, had driven away from the parking lot in Fredericksburg, Virginia, just a short time before a forty-three-year-old white female was shot in the back while loading packages into her car. My other daughter, Katie, age twenty, attending Mary Washington College in Fredericksburg, regularly filled up her car at the same Exxon gas station where fifty-three-year-old Kenneth Bridges was shot and killed on October 11. My son, Rusty, eighteen, had been named homecoming king at Robinson High School in Fairfax, Virginia, where we lived. Like any proud parents, my wife and I looked forward to seeing our son honored in the homecoming parade that would culminate at the football stadium. But like so many schools in the area, Robinson was forced to cancel all outdoor activities. Yet these were minor concerns compared to the grief that the sniper was causing so many families in the Washington, D.C., area. Because several victims had been shot while fueling their cars, some gas stations hung large drapes near their pumps so that customers would not be scared away. People crouching down while pumping gas became a common sight. There were thousands of stories of individuals and families changing their routines and exercising high levels of caution in every aspect of their daily lives. 
The FBI and ATF, along with other local, state, and federal agencies, quickly set up a task force to help identify, locate, and apprehend whoever was doing these shootings. The public came to know Chief Charles Moose of the Montgomery County Police Department as the leader of the investigation. In reality, there was a triumvirate of sorts in charge, consisting of Chief Moose and senior representatives of the FBI and ATF. This group attempted to bring some structure and coordination to the challenging task that was facing the many agencies working over a wide area encompassing Maryland, the District of Columbia, and Virginia. As head of the Crisis Negotiation Unit within the Critical Incident Response Group, I had assigned the agents in my unit to geographic territories that matched up with FBI field offices, with one supervisor assigned to several regions to provide support. Vince Dalfonzo was responsible for Maryland. He happened to be a Baltimore native, so I attached him to the joint command post that had been established in Montgomery County. Vince joined a multiagency negotiation team that had been assembled in the hopes of drawing the sniper into a dialogue. That team also helped craft the daily press messages from Chief Moose and the other leaders. Until we could establish a direct dialogue with the sniper, our only means of communication was through these daily statements. It was important that the authorities avoid saying anything that might agitate the sniper and prompt him to kill again. A large team of negotiators from the FBI and other involved agencies stood ready to open a dialogue with the shooter if we could successfully get him to contact us. On October 7, the sniper had left a tarot card near Tasker Middle School, where the thirteen-year-old boy had been seriously wounded. Written on the tarot card was "Mr. Policeman, I am God." 
The negotiation team expended much effort trying to interpret this message, but we also realized that its mere existence could be useful. If we kept the card secret from the press, we might use it to verify that we were talking with the real sniper if he contacted us. Unfortunately, this information was leaked to the press in a matter of hours. On October 17, a man claiming to be the sniper called the public information officer for the Montgomery County police, saying, "I'm God." The three-minute call consisted of a very angry man demanding, "Don't you know who you're dealing with?" The caller also made reference to a crime in "Montgomery," which we assumed referred to the shootings in Montgomery County, Maryland. Later the sniper would call the police again and was quickly put through to the negotiation team room. FBI negotiator Marina Murphy took the call and attempted to draw him into a dialogue, but the caller seemed to become scared, and he simply hung up. The next day, October 18, the sniper contacted a priest in Ashland, Virginia, Monsignor William Sullivan, the pastor of St. Ann's Church, and again said, "I am God." He also referred once more to a crime in "Montgomery." Unfortunately, the monsignor did not report this call to the police initially, believing that it was a prank call. Officers and agents were busy chasing down more than sixteen thousand leads and following up on more than a hundred thousand phone calls to a telephone tip line. Despite their efforts, on Saturday, October 19, a thirty-seven-year-old man was shot in the abdomen in the parking lot of a Ponderosa Steakhouse in Ashland, Virginia. He was critically wounded but survived. A search of the crime scene revealed that the sniper had left a note in a wooded area from where the shot was fired. Wrapped in plastic and tacked to a tree was a four-page message, the cover sheet of which said, "Call me God," along with, "For you, Mr. Police" and "Don't release to the press." 
The letter demanded that $10 million be wired to a stolen platinum Bank of America Visa credit card. The account number and PIN were included. The note said, "We will have unlimited withdrawal at any ATM worldwide." Despite this demand, I didn't believe the crime spree was about money. There had been no demand for money up front, and if money is what you want, there is no need to keep killing people before you've made that demand. In his note, the sniper complained that the authorities had made it hard for him to make contact to begin ransom negotiations. He denounced the operators of the tip line, saying that he had called four times and been taken "for a hoax or a joke." He went on to say that "your failure to respond has cost you five lives" and "your children are not safe anywhere at any time." Most unusual was the sniper's demand in the note that the police announce that they had "caught the sniper like a duck in a noose." It made absolutely no sense, but that was what he wanted us to say. After analyzing this demand, the negotiation team drafted the following message for Chief Moose to deliver in response: "You asked us to say, 'The sniper has been caught, like a duck in a noose.' We don't understand why you want us to say this, but we know it's important to you. That is why we are saying it now, to stop the killing." In one of the rare instances in which profilers and negotiators disagreed, the FBI profiling team argued against making any such statement, believing that it would simply empower the sniper. Now working at the command post, I countered that the sniper was already feeling very empowered and that our failure to attempt to address this demand could prove fatal for more victims. Both Jim Cavanaugh (my ATF colleague from Waco) and Chief Moose expressed their agreement with me, but when I went home and turned on the television to watch the chief make our recommended statement, he omitted the critical portion. 
I later found out that SAC Gary Ball, the head of the FBI Baltimore office and the senior FBI official managing the incident, had sided with the profiling team and blocked the reference to "a duck in a noose." I was furious. Chief Moose did issue a direct appeal to the sniper through the news media, saying, "We do want to talk to you. Call us." Following up on the earlier call to the monsignor, investigators discovered that the sniper's reference to "Montgomery" concerned an unsolved murder-robbery on September 21 at a liquor store in Montgomery, Alabama. It turned out that a gun magazine had been left behind at the crime scene with a clear fingerprint. When the FBI ran that fingerprint, which was on file owing to an earlier juvenile offense, it led us to a young man named Lee Boyd Malvo. FBI agents sent out to investigate his background quickly discovered that he had spent the previous few years with an older man named John Muhammad. With their first solid bit of evidence, agents quickly turned up the heat to locate these two. At about six in the morning on October 22, the sniper shot and killed Conrad Johnson, a thirty-five-year-old bus driver, as he stood in the doorway of his bus near Silver Spring, Maryland. He was the thirteenth person shot, the tenth to die. The sniper left a note near the scene saying that he was angry with the police for not doing what he had asked, which was to announce that the sniper had been caught like a duck in a noose. I took no pleasure from the fact that this validated the position I had advocated: if we had included the sniper's wording as demanded, we might have prevented the death of Conrad Johnson. Meanwhile, investigators traced John Muhammad to Tacoma, Washington, where he and Lee Boyd Malvo once lived. In the backyard of Muhammad's former residence they found a tree stump where he had practiced shooting. In the stump they recovered metal casings that matched those found near the scene of the sniper killings. 
Police then learned (and made public) that Muhammad and Malvo were driving a Caprice. On the twenty-second, the night of the Johnson killing, an alert citizen spotted the vehicle in a rest area off a highway in Maryland. Members of the HRT approached the vehicle and arrested the two sleeping suspects. Malvo and Muhammad were both convicted of murder. Muhammad was given the death penalty; because of his youth, Malvo was given multiple life sentences. Why had they undertaken this killing spree? It turned out that Muhammad's divorced wife and children lived in the D.C. area. Authorities learned that Muhammad hoped to add her to the list of those killed by the sniper, thus making her death appear random and certainly not related to her ex-husband. Muhammad's ultimate objective was to regain custody of his children. Malvo, the younger accomplice, was just a pathetic figure who had been captivated and manipulated by the older Muhammad. So, in essence, my career had come full circle. Just as with Charlie in Sperryville and Mario on Amtrak, I was once again confronting men whose extreme violence was driven by nothing more than their inability to cope with various stresses and emotional frustrations in their lives. At the time of the D.C. sniper incident, I had been in the FBI for thirty years, and the FBI's chief negotiator for the past ten years. I had been eligible for retirement since turning fifty two years earlier, but I didn't feel quite ready at first, and the events of September 11, 2001, prompted me to stick around a bit longer. I wasn't sure if I could make any further contribution to the war on terrorism, but it just didn't seem to be the right time to leave the FBI. By 2003, though, I was ready. My three children were all in higher-education degree programs, and the reality of three tuitions provided an incentive for me to start drawing my pension while also taking on another job. 
I had grown weary of the administrative side of being a unit chief in a big bureaucracy, too. Fighting for budget dollars and manpower needs and attending endless meetings had never been my favorite things. So January 3, 2003, became the effective date of a decision that had been a long time coming: the official end of my FBI career. The Bureau had sent me to all fifty states and to more than forty countries. Steve Romano took over the helm at CNU and carried forward the great legacy of the FBI negotiation program. John Flood would eventually take over when Steve retired. I started this book with a case in which I recommended using deadly force. At first glance, this may seem strange in a book that argues for the primacy of negotiation. But as I hope I've made clear, there are times when we must conclude that negotiation isn't enough. In Charlie Leaf's case, I believed that he simply wasn't going to let Cheryl go; even if we managed to stall him for a bit longer, at some point he would very likely kill her and perhaps little Charlie. When negotiators start a dialogue with a threatening individual, we immediately begin to track the progress of our efforts. Has he become less angry and more willing to discuss reasonable alternatives to violence? Has his emotional equilibrium returned to a more normal state? Has the negotiator been able to establish a level of rapport that will enable him or her to begin to positively influence the behavior of the individual? In the overwhelming majority of cases the answer to these questions is yes, but there will always be times when the risks increase, when you have to move on to a tactical rescue. As I did in Sperryville, at this point the negotiator assumes a key role supporting the tactical operation by providing the time, intelligence, and opportunity required for success. 
If I've gained any wisdom in my FBI career, it has come from recognizing the degree to which everyday life can mirror the dynamics of the destructive standoffs I faced in my FBI job. Each of us is called upon to negotiate stressful situations in business, social encounters, and family life time and again. From what I've observed, the happiest and most successful people tend to be those who are able to remain calm at these difficult times and put aside emotions such as pride or anger that stop them from finding common ground. We all need to be good listeners and learn to demonstrate our empathy and understanding of the problems, needs, and issues of others. Only then can we hope to influence their behavior in a positive way. You might even say that all of life is a negotiation.

# EPILOGUE #

When I retired from the FBI I went to work for Control Risks, the premier kidnap-response consultancy in the world. My primary role was to assist clients in preparing for and operationally managing the kidnapping of one of their employees or family members to achieve the best outcome possible. My travel schedule increased significantly as a consultant, but I found I enjoyed the comparative freedom from the bureaucratic burdens that came with being a unit chief at the FBI. However, my operational work was not over. From 2003 through 2008, I worked a lengthy and very complex kidnap incident involving three American defense contractors who were seized by a terrorist group, the Revolutionary Armed Forces of Colombia (FARC). This case received significant interest and active participation from a host of agencies within the U.S. government. It was among the most difficult I ever worked, and once again, dealing with parties other than the kidnappers often created a crisis within the crisis. The government is staffed with many hardworking and capable individuals.
It has tremendous resources and can be of great assistance in these matters, but it also has the capacity to make matters unnecessarily complicated. The government did much good in supporting Colombian military intelligence-gathering that eventually proved to be key in this incident. But constricted thinking and outdated policy guidelines often proved to be an impediment to creative problem solving that might have helped achieve an earlier release for the hostages. Despite a number of government mistakes, after five and a half years of captivity the hostages were rescued by the Colombian military and returned home safely to their families. Working this case alongside the government, but this time from the perspective of the victims' families and employers, provided me with additional insights into what I see as shortcomings in the way our government sometimes responds to terrorist situations. Even among government leaders, the word _terrorism_ evokes a great deal of emotion. This response can often lead to constricted thinking. The fact that a hostage is taken and held by a terrorist group isn't the most important factor to consider when developing an effective resolution strategy. What's more important to understand is what the terrorists are trying to achieve. If money or some other tangible item is their goal, then a classic negotiation strategy can be employed, usually with great success. However, if the demands are political, then the situation is infinitely more complicated and challenging, but not necessarily hopeless. Such cases require great patience and creative thinking. In 1990, we secured the safe release of Brent Swan from terrorists in Africa, not by paying the ransom they sought but by providing office and medical supplies as an alternative. This creative and flexible approach worked. Often tactical intervention is necessary, but not in every case. 
Unfortunately, many government officials do not appreciate the different and nuanced aspects of terrorism. Instead they simply react to the word _terrorist_ , concluding that the demands must be political and therefore, they must respond in a firm, unyielding, and inflexible manner. This one-size-fits-all reaction may not be the best response to the kidnappers' true motivations or allow for thoughtful consideration of the wider range of resolution strategies that might be possible. In reality, most kidnap victims don't care if they are taken by criminals or terrorists, held for money or for political objectives. They and their families simply want them to be free, and I believe everything reasonable should be done to make that happen. There is no legal prohibition against a U.S. family or corporation paying a ransom in a criminal kidnap case. However, if an American is held by a group on the State Department's terrorist list, paying a ransom may violate the prohibition against providing material support to a terrorist organization. Congress intended that prohibition to apply to organizations raising funds in the United States for terrorist groups abroad. It was never envisioned to apply to kidnap cases. In my opinion, it should never be used to prevent a family or corporation from securing the safe release of a loved one or employee taken hostage, as some in government have tried to suggest. In the days following September 11, 2001, there was a hard and noticeable turn toward use of the military as the exclusive response mechanism for dealing with such situations. Many officials felt compelled to repeatedly declare that the United States would not negotiate with terrorists. These strong declarations have helped promote the use of military action as a response to any crisis. As the saying goes, if you've got a hammer, you tend to think everything is a nail. 
But saying we will not negotiate with terrorists has never been shown to protect American citizens from being kidnapped abroad. In fact, Americans remain among the most sought-after individuals to kidnap. I concur that the U.S. government should not make substantive concessions to terrorists. (I am not speaking of families or employers here.) However, this should not be interpreted, as it so often is, to mean that U.S. authorities will not hold discussions—that is, negotiate—with terrorists. I'm confident that the FBI would indeed attempt to actively negotiate with terrorists holding hostages on an aircraft at JFK Airport. To do otherwise would be dangerous and foolish. But negotiating with terrorists doesn't mean we will comply with their demands. It is counterproductive to restrict ourselves from opening a line of communication with the hostage takers simply because they happen to be terrorists and we feel a need to appear and sound tough. This is what President Fujimori did in Peru, and he was lucky he avoided a total catastrophe. I look at the recent effective efforts of the U.S. military in Iraq to reach out to extremist factions and even bring some onto our payroll as a tool to stop violence. Such creative and effective negotiations save American lives. I believe it is sufficient to say that it is our policy as a nation not to make substantive concessions to terrorists. It is certainly true that payment of ransom to a criminal or even a terrorist group in order to secure the safe release of a hostage serves to encourage further kidnappings. But what is the alternative? Do we allow a hostage to languish in the jungle for years or be killed? Simply put, in the overwhelming majority of kidnap cases, no ransom payment means there will be no release. In my view, our efforts should first and foremost be focused on the safe release of the hostage.
After that, we can and should vigorously pursue the kidnappers in order to bring them to justice, or when appropriate use our military capabilities to punish them for having taken an American hostage. We should continue to track them relentlessly. Only when terrorists learn that there will be a price to pay for holding Americans will this crime be reduced or eliminated. But we should not let our desire to punish terrorist kidnappers cloud our judgment and restrict our options. Saying we refuse to negotiate simply does not make the problem go away. I know from firsthand experience that the current worldwide terrorism threat is both real and substantial, and that we must remain prepared to deal with this problem through a wide array of response strategies. Recently, Somali pirates have engaged in a broad campaign of hijacking ships in international waters to secure ransom payments. In these cases, ransom payments may be required on humanitarian grounds to secure the safe release of the crews and ships involved. However, that action should be closely followed by the full force of military operations. The pirates will stop their hijacking spree when they begin to suffer the consequences of their actions, no sooner. Capturing boats loaded with kidnappers and letting them go because they've not yet attacked a ship does nothing to discourage this terrible crime. I firmly believe in negotiations, but that does not preclude strong punitive military action when necessary. Yet we need to understand that when it is appropriate to conduct negotiations as a strategic tool, such an effort should not be viewed as a decision to acquiesce to terrorism. The world's positive perception of America took a sharp decline in recent years. Some believed that we were acting with arrogance and disregard for the views of others, that we rejected cooperation with the international community and would go our own way. Fortunately, that trend seems to have abated.
Diplomacy and negotiation are allied skills. The process of listening carefully to others, acknowledging their points of view, and crafting appropriate strategies enables us to positively influence their behavior. We need to do a better job of understanding that others may see the world and its problems differently than we do. That doesn't necessarily mean that they are right or that we are wrong; it's just a different perspective that needs to be understood and acknowledged. I was pleased to read not long ago that Robert Gates became the first secretary of defense to say that the United States needed more diplomats and the funding to support their activities. It speaks to his appreciation of the fact that the "hammer" alone will not solve all of our problems as a nation. We must have a wide range of tools available in our toolbox, including negotiation, and learn to use them appropriately. As with law enforcement SWAT teams, U.S. military power should be used only when we are left with no recourse, and not simply because we can. Whenever possible we should follow Martin Luther King Jr.'s advice to "pursue peaceful ends through peaceful means." Force should always be viewed as the least desirable and last option. My thoughts and observations are based on almost three decades of directly dealing with terrorism around the world. I am not opposed to the use of force when necessary. My recommendation to use deadly force to save lives at Sperryville and my support for the HRT's assault at Talladega are two dramatic examples of that. I've also had the great honor and privilege to work with the U.S. Army Delta Force and the U.S. Navy SEALs on both exercises and real-life operational deployments. I'm a great supporter of their dedication, capabilities, and commitment to saving American lives. Further, I also happen to be the very proud father of a Navy SEAL. 
Yet, I know that it's absolutely vital that government leaders not use these brave soldiers and sailors, and the tremendous capabilities they represent, unless it's absolutely necessary. The 2002 Moscow theater incident, in which a tactical action to dislodge Chechen terrorists led to the deaths of 129 hostages, the 2004 Beslan School incident in the Caucasus, when 334 hostages died, including 186 children, and the botched Egyptian rescue attempt in Malta discussed earlier, show the continuing danger of trying to resolve situations through force alone. Just because a situation may appear nonnegotiable shouldn't mean we don't try to negotiate. None of the U.S. military counterterrorism teams has negotiators; that role is reserved for the FBI. But if the leaders who dispatch our military don't think negotiators will be required in a terrorist incident, based on their preconceived notions about terrorist behavior, they won't deploy them. That would eliminate the use of one of our most important and successful tools. I also remain concerned that leaders in our government today still have, for the most part, insufficient experience in managing a major siege incident. The FBI has not handled one in more than a decade. The public assumes the required skills to manage a crisis incident are inherent within the organization, but are they? Past crisis management training exercises have concentrated on assembling resources, sorting out jurisdiction, establishing joint interagency command posts, deploying improved computer programs to track intelligence, and linking communications capabilities. All of that is important, but it does nothing to actually prepare an incident commander or key decision maker for the most important task he or she will face: determining how to effectively communicate with the terrorists. There will be much we will need to understand. What are their goals? What have they demanded? What do their actions and behaviors suggest to us? 
How do we effectively communicate with them in response to their demands? How do we forestall violence? How do we buy time to better prepare for possible tactical intervention? How can the negotiators assist the tactical forces that may have to intervene? These are some of the critical questions that need to be addressed, yet no management training program that I know of adequately addresses these questions. I believe it's time for our nation to become better prepared for a terrorist siege event. The terrorist attack in Mumbai, India, in late November 2008 should serve as a warning that a similar incident could happen here in the United States. If it does, will we have the right resources and capable managers to effectively resolve the crisis with the least loss of life possible? The terrorists have to be good only once to do serious harm. We have to be good all the time.

# ACKNOWLEDGMENTS #

Without the hard work, dedication, and vision of the hostage negotiation pioneers in law enforcement who came before me, this book and the story it tells would not be possible. Their efforts in the negotiation field helped start this important discipline down the path to the true profession it has become today. My own growth and development as a hostage negotiator were greatly influenced by these forward-thinking individuals, as well as the many skilled police and FBI negotiators around the world whom I worked with through the years. I will forever be in their debt. I continue to be in awe of their dedication to saving lives in the most challenging of situations. I want to thank the FBI for giving me the opportunity and great honor to serve my country for over thirty years. I will always appreciate the unique opportunity I had to travel throughout the United States and a good bit of the world on so many challenging, interesting, and varied assignments. Few others in law enforcement will ever have such opportunities.
I will always be proud of having been an FBI special agent and for all that stood for. The FBI's motto, Fidelity, Bravery, and Integrity, meant much more to me than just words. Special recognition goes to Fred Lanceley, who was my mentor and partner during my early years as an FBI hostage negotiator. Fred's insightful analysis of hostage, barricade, and suicide incidents was a great influence on my own thinking. His review of the section of this book on the Ruby Ridge incident was most helpful. I would also like to thank Lt. George Bradford (retired), of the Washington Metropolitan Police Department, for his friendship and support during my early fieldwork as a negotiation practitioner. The entire MPD negotiation team that Lt. Bradford led was instrumental in helping me first put theory to practice. I would also like to give thanks to my old friend and negotiation colleague Jim Botting, FBI Los Angeles (retired), who has been and remains today a great source of wisdom, support, and friendship. Also, Dr. Mike Webster, my Canadian psychologist friend, has inspired me both professionally and personally for almost two decades. It's appropriate that I recognize the members of the original FBI Critical Incident Negotiation Team, of which I was honored to be a part. This small hand-picked group of select FBI negotiators contained some of the best agents the FBI has ever produced. You know who you are. No crazier, more outrageously funny, more talented, and more resourceful group of FBI agents was ever assembled. Despite their zany antics, their manifest skills and abilities influenced countless law enforcement and correctional negotiators across this nation. I am proud to have led the FBI negotiation program for the last ten years of my career. Being named the first chief of the FBI Crisis Negotiation Unit was a singular honor that will remain my proudest career achievement. 
The opportunity to advance the negotiation profession from that leadership position was something I will always cherish and appreciate. As chief of the CNU, I viewed my most important task as directly serving the training and operational needs of the FBI's 350 negotiators assigned throughout the field. Serving this special group of individuals was both an honor and a privilege. It's important that I thank the many FBI agents and support employees with whom I worked during my various career assignments. You are too numerous to mention, but there are no finer or more dedicated public servants than these individuals. I've also been extremely fortunate to work alongside many skilled negotiators involved in the International Negotiation Working Group and the National Council of Negotiation Associations. I was proud to have played a role in helping form these important professional organizations that continue to promote the negotiation profession far and wide. This book began as an idea many years ago. In exploring the process of writing a book, I reached out to my friend Peter Bergen, who has written several books about Osama bin Laden. Peter's insights and suggestions were most helpful to me. His most important recommendation was to work with literary agent Tina Bennett. Without Tina's encouragement, support, and guidance, this book would never have been written. I would also like to thank William Patrick for his excellent work helping to edit the original lengthy manuscript. Bill's skill and talents were of extraordinary help in organizing the material that went into this book. My editor at Random House, Tim Bartlett, was also an enormous help in crafting the kind of book that I wanted to write. I thank him for the many hours he spent with me on the phone going over the material. His patience and thoughtful suggestions were key factors in achieving the final product. 
I also want to recognize my former FBI colleague and dear friend Steve Romano, who graciously read over the manuscript to ensure its accuracy. His attention to detail is legendary and his insightful suggestions were a big help to me. Former FBI colleague Byron Sage was also kind enough to provide assistance by reading over the Waco chapter and providing me with critical feedback. It's also an honor to give special thanks and recognition to Cheryl Hart Frappier, whose personal ordeal and courage are written about in the first chapter of this book. She kindly reviewed the Sperryville chapter and provided important insights that will help the reader better understand the ordeal she experienced. I'm continually inspired by her heroic story of survival. Having a loving and supportive family is the key to my success in life. This book is dedicated to my wife, Carol, but I was never more grateful for the investment of college tuition than when my daughter Katie Salzman used her English degree to proofread the early chapters put together by dear old dad. She offered many helpful suggestions and provided me with a much needed critical review of the book's tone and content. Katie, her younger brother, Rusty, and her older sister, Kelly Brady, remain the true joys of my life. No father has ever been more proud of his children and their successes in life. I also want to thank my sister, Nancy Kennedy, for always encouraging and supporting her little brother. It's also appropriate to thank my oldest and dearest friends, Keith Naumann, Larry Collins, Tom Broner, and Bill Strate, for forty-five years of camaraderie and endless laughter. Finally, I want to recognize Bill and Doris Noesner, my wonderful parents. I only wish they were alive today to read this book. I hope they would be proud of it and what it stands for. It could never have been possible without their enduring love and constant support. Starting out with good parents has been the best good fortune in my life. 
I recommend it to everyone.

# ABOUT THE AUTHOR #

GARY NOESNER (pronounced _Nes-ner_) retired from the FBI in 2003 following a thirty-year career as an investigator, instructor, and negotiator. A significant focus of his career was directed toward investigating Middle East hijackings in which American citizens were victimized. In addition, he was an FBI hostage negotiator for twenty-three years of his career, spending the last ten years as the chief negotiator for the FBI. He retired as the chief of the FBI's Crisis Negotiation Unit, Critical Incident Response Group, the first person to hold that position. In that capacity he was heavily involved in numerous hostage, barricade, and suicide incidents, including prison riots, right-wing militia standoffs, religious zealot sieges, terrorist embassy takeovers, airplane hijackings, and over 120 overseas kidnapping cases involving American citizens. Following his retirement from the FBI, he became a senior vice president with Control Risks, an international risk consultancy, and most recently spent five and a half years working a kidnap case involving three American defense contractors taken hostage by the FARC in Colombia, South America. He speaks to law enforcement and other groups and continues to do kidnap-management consulting work for Control Risks part-time. He has three grown children and resides in Virginia with his wife, Carol.
## Sagun Chanillo (Rutgers University)

11 February 2022

A Local version of Courant's Nodal Domain Theorem

Let (M^n, g) denote a smooth and compact Riemannian manifold with no boundary equipped with a smooth Riemannian metric g. Courant's nodal domain theorem asserts that for the Laplace-Beltrami operator on M, if we order the eigenvalues in increasing order with multiplicity, then the eigenfunction for the k-th eigenvalue has at most k connected components (the nodal domains) where the eigenfunction does not vanish. C. Fefferman and H. Donnelly proposed about 30 years ago a local version of this result on every ball in M. This local question is connected with the question of S.-T. Yau on the length of the zero set of the eigenfunctions. We propose to give answers to this question. This work is joint with A. Logunov, E. Malinnikova and D. Mangoubi.

## Azahara DelaTorre Pedraza (Sapienza Università di Roma)

4 February 2022

The fractional Yamabe problem with singularities

The so-called Yamabe problem in Conformal Geometry consists in finding a metric conformal to a given one which has constant scalar curvature. From the analytic point of view, this problem becomes a semilinear elliptic PDE with critical (for the Sobolev embedding) power non-linearity. If we study the problem in the Euclidean space, allowing the presence of nonzero-dimensional singularities can be transformed into reducing the non-linearity to a Sobolev-subcritical power. A quite recent notion of non-local curvature gives rise to a parallel study which weakens the geometric assumptions, giving rise to a non-local semilinear elliptic PDE.

In this talk, we will focus on metrics which are singular along nonzero-dimensional singularities. In collaboration with Ao, Chan, Fontelos, González and Wei, we covered the construction of solutions which are singular along (zero and positive dimensional) smooth submanifolds in this fractional setting. This was done through the development of new methods coming from conformal geometry and Scattering theory for the study of non-local ODEs. Due to the limitations of the techniques we used, the particular case of "maximal" dimension for the singularity was not covered. In a recent work, in collaboration with H. Chan, we cover this specific dimension, constructing and studying singular solutions of critical dimension.
Q: PDF of volume of tetrahedron with random coordinates

Question

What is the probability distribution function (PDF) of the absolute volume of a tetrahedron with random coordinates? The 4 random tetrahedron vertices in $\mathbb{R}^3$ are $$ \mathbf{\mathrm{X}_1} =(x_1^1,x_1^2,x_1^3),\;\; \mathbf{\mathrm{X}_2}=(x_2^1,x_2^2,x_2^3),\;\; \mathbf{\mathrm{X}_3}=(x_3^1,x_3^2,x_3^3),\;\; \mathbf{\mathrm{X}_4} =(x_4^1,x_4^2,x_4^3)$$ where $x_i^j$ are independent standard normal distributed variables $$x_i^j\sim\mathcal{N}(0,1)$$ The non-oriented volume of a random tetrahedron instance is $$V=\frac{1}{6}\left| ( \mathbf{\mathrm{X}_1}- \mathbf{\mathrm{X}_4})\cdot \left(( \mathbf{\mathrm{X}_2}- \mathbf{\mathrm{X}_4} ) \times ( \mathbf{\mathrm{X}_3}- \mathbf{\mathrm{X}_4} )\right) \right| \tag{1} $$ $$=\frac{1}{6}\left|x_1^2 x_2^3 x_3^1 - x_1^1 x_2^3 x_3^2 + x_1^3 x_2^1 x_3^2- x_1^2 x_2^1 x_3^3 + x_1^1 x_2^2 x_3^3- x_1^3 x_2^2 x_3^1 + x_1^3 x_2^2 x_4^1- x_1^1 x_2^2 x_4^3 + x_1^1 x_2^3 x_4^2- x_1^2 x_2^3 x_4^1 + x_1^2 x_2^1 x_4^3- x_1^3 x_2^1 x_4^2 + x_2^3 x_3^2 x_4^1 -x_2^1 x_3^2 x_4^3 + x_2^1 x_3^3 x_4^2- x_2^2 x_3^3 x_4^1 + x_2^2 x_3^1 x_4^3- x_2^3 x_3^1 x_4^2 + x_1^2 x_3^3 x_4^1- x_1^1 x_3^3 x_4^2 + x_1^3 x_3^1 x_4^2- x_1^2 x_3^1 x_4^3 + x_1^1 x_3^2 x_4^3- x_1^3 x_3^2 x_4^1 \right|$$

Known relations

The expectation value of $V$ is $$\mathbb{E}[V]=\frac{2}{3}\sqrt{\frac{2}{\pi}}\tag{2}$$ A proof can be found in a Math Stack Exchange post. The variance of $V$ is $$\mathbb{Var}[V]=\mathbb{E}[V^2]-(\mathbb{E}[V])^2=\frac{2}{3}-\frac{8}{9\pi}\tag{3}$$ where $\mathbb{E}[V^2]$ can be calculated by multiple integration.

Approximate relations based on empirical data

The remaining part contains only unproven statements that could give indications of the true solution.
The probability distribution of empirical data of $V$ can be fitted quite well with a function of the form $$f(V)=\text{exp}\left(-\left(\frac{V}{c_2}\right)^{c_1}\right)c_3\tag{4}$$ where $c_1,c_2,c_3$ are fit parameters. As a PDF must fulfill the conditions $$\int_0^\infty f(V) \mathrm{d}V=1\ \,\, \text{and}\ \int_0^\infty V f(V)\mathrm{d}V=\mathbb{E}[V]$$ the fit parameters $c_2$ and $c_3$ in eq.(4) can be expressed in dependence of $c_1$ $$c_2=\mathbb{E}[V]\frac{\Gamma(1/c_1)}{\Gamma(2/c_1)}\ ,\;\; c_3=\frac{c_1}{c_2\Gamma(1/c_1)}$$ with $\Gamma$ being the Gamma function. Only $c_1$ remains to be fitted. The best fit is for $c_1\approx\pi/4$, i.e. $c_1\approx 0.7854, c_2\approx 0.3491, c_3\approx 2.4944$. However it is not known whether eq.(4) is the true form of the PDF at all. It just models well experimental data. A: Since the coordinates of the vertices are i.i.d. as standard normal variables, then the coordinates of the difference of two vertices are i.i.d normal variables with $0$ mean and variance $2$. But the differences taken wrt the same vertex are no longer uncorrelated and this fact complicates the treatment. Herewith I am providing, for the moment, the PDF-CDF of the volume of a random tetrahedron with a vertex at the origin, and the other three vertices having independent standard normal coordinates. This Mathworld article indicates that the PDF of the product of three normal independent variables can be expressed through a Meijer G-Function. To find by this way the distribution of the sum of the triplets appearing in the algebraic definition of the triple product is therefore unviable. However the distribution of each vertex has the nice property of spherical symmetry and we are going to exploit that via the trigonometric version of the triple product. 
a) Polar distribution Using Cartesian and Spherical (geographic convention) coordinates in parallel $$ \left\{ \matrix{ x = r\cos \phi \cos \theta \hfill \cr y = r\cos \phi \sin \theta \hfill \cr z = r\sin \phi \hfill \cr} \right.\quad \left| \matrix{ \; - \pi /2 \le \phi \le \pi /2 \hfill \cr \; - \pi < \theta \le \pi \hfill \cr} \right.\quad dx\,dy\,dz \leftrightarrow r^{\,2} \cos \phi \,dr\,d\theta \,d\phi $$ we can write the spatial distribution of a vertex as $$ \eqalign{ & p_p ({\bf V}) dV = \,{\cal N}_{\sigma ^{\,2} } \,(x){\cal N}_{\sigma ^{\,2} } \,(y){\cal N}_{\sigma ^{\,2} } \,(z)\,dx\,dy\,dz = \cr & = \left( {{1 \over {\sigma \sqrt {2\pi } }}} \right)^{\,3} e^{\, - \,{1 \over 2}\left( {{r \over \sigma }} \right)^{\,2} } r^{\,2} \cos \phi \,dr\,d\theta \,d\phi = \cr & = \left( {{1 \over {\sqrt {2\pi } }}} \right)^{\,3} e^{\, - \,{1 \over 2}\left( {{r \over \sigma }} \right)^{\,2} } \left( {{r \over \sigma }} \right)^{\,2} \,d\left( {{r \over \sigma }} \right)\,\cos \phi \,d\theta \,d\phi = \cr & = \left( {{1 \over {\sigma \sqrt {2\pi } }}} \right)^{\,2} {\cal N}_{\sigma ^{\,2} } \,(r)\,dA_{\,r} \,dr = {1 \over {2\pi }}{\cal N}_1 \,(r/\sigma )\,dA_{\,r/\sigma } \,d\left( {{r \over \sigma }} \right)\, = \cr & = \left( {{1 \over {2^{3/2} \Gamma \left( {3/2} \right)}}} \right)e^{\, - \,{1 \over 2} \left( {{r \over \sigma }} \right)^{\,2} } \left( {{r \over \sigma }} \right)^{\,2} \,d\left( {{r \over \sigma }} \right)\,\cos \phi \,d\theta \,d\phi = \cr & = {1 \over {4\pi }}\,\chi _3 \,(r/\sigma )\,d\left( {{r \over \sigma }} \right)\,\,\cos \phi \,d\theta \,d\phi = \cr & = {1 \over {2\pi }} \,\,((r/\sigma )^{\,2} )\,\left( {{r \over \sigma }} \right)d\left( {{r \over \sigma }} \right) \,\,\cos \phi \,d\theta \,d\phi = \cr & = \,{1 \over {4\pi }}\chi _3 \,(r/\sigma )\,{{dA_{\,r/\sigma } } \over {(r/\sigma )^{\,2} }}\,d\left( {{r \over \sigma }} \right) = {1 \over {4\pi }}\,\chi _3 \,(r/\sigma )\,d\Omega \,d\left( {{r \over \sigma }} \right) \cr} $$ where: * 
* we generalize to the case of zero mean and generic variance $\sigma ^{\,2} $;
* $dA$ is the surface area element;
* $d\Omega$ is the solid angle in steradians;
* $\chi _3$ and $\chi _3^2$ are respectively the chi and chi-square distributions.

The radial distribution is instead $$ \eqalign{ & p_r (r)dr = \int_{\phi = - \pi /2}^{\pi /2} {\int_{\theta = - \pi }^\pi {\left( {{1 \over {\sigma \sqrt {2\pi } }}} \right)^{\,3} e^{\, - {1 \over 2}\left( {{r \over \sigma }} \right)^{\,2} } r^{\,2} \cos \phi \,dr\,d\theta \,d\phi } } = \cr & = 4\pi r^{\,2} \left( {{1 \over {\sigma \sqrt {2\pi } }}} \right)^{\,3} e^{\, - {1 \over 2} \left( {{r \over \sigma }} \right)^{\,2} } dr = 4\pi \left( {{r \over \sigma }} \right)^{\,2} \left( {{1 \over {\sqrt {2\pi } }}} \right)^{\,3} e^{\, - {1 \over 2}\left( {{r \over \sigma }} \right)^{\,2} } d\left( {{r \over \sigma }} \right) = \cr & = \left( {{{\left( {r/\sigma } \right)^{\,2} } \over {2^{\,1/2} \Gamma \left( {3/2} \right)}}} \right) e^{\, - {1 \over 2}\left( {{r \over \sigma }} \right)^{\,2} } d\left( {{r \over \sigma }} \right) = 2\left( {{r \over \sigma }} \right)^{\,2} {\cal N}_1 \,(r/\sigma )\,\,d\left( {{r \over \sigma }} \right) = \cr & = \chi _3 \,(r/\sigma )\,d\left( {r/\sigma } \right) = 2\left( {r/\sigma } \right)\chi _3^2 \,\,((r/\sigma )^{\,2} )d\left( {r/\sigma } \right) \cr} $$

b) Cross product

Passing to unitary variance for simplicity, we can compute the distribution of the modulus ($c$) of the cross product of two vectors (two vertices) by fixing one vector $\bf v$ of modulus $v$ and then the set of vectors having a component $c /v$ normal to the first, and thus lying over the cylindrical shell of radius $c/v, (c+dc)/v$ around $\bf v$, and integrate over $v$.
That is $$ \eqalign{ & p_c (c)dc = \int\limits_v {p_r (v)\,dv{\cal N}_1 \,(c/v){{dc} \over v}{\cal N}_1 \,(0)2\pi {c \over v}} = \cr & = \int\limits_v {2v^{\,2} {\cal N}_1 \,(v)\,\,dv{\cal N}_1 \,(c/v){{dc} \over v} {\cal N}_1 \,(0)2\pi {c \over v}} = \cr & = {{4\pi } \over {\sqrt {2\pi } }}cdc\int\limits_v {{\cal N}_1 \,(v)\,\,{\cal N}_1 \,(c/v)} \;dv = \cr & = {2 \over {\sqrt {2\pi } }}cdc\int_{v = 0}^\infty {e^{\, - \,{1 \over 2}\left( {v^{\,2} + c^{\,2} /v^{\,2} } \right)} \,dv} \cr} $$ and the corresponding CDF being $$ \eqalign{ & P_c (c) = {2 \over {\sqrt {2\pi } }}\int_{t\, = \,0}^c {\int_{v\, = \,0}^\infty {t\,e^{\, - \,{1 \over 2}\left( {v^{\,2} + t^{\,2} /v^{\,2} } \right)} dt\,dv} } = \cr & = {2 \over {\sqrt {2\pi } }}\int_{v\, = \,0}^\infty {e^{\, - \,{1 \over 2}\left( {v^{\,2} } \right)} dv \int_{t\, = \,0}^c {t\,e^{\, - \,{1 \over 2}\left( {t^{\,2} /v^{\,2} } \right)} dt\,} } = \cr & = {1 \over {\sqrt {2\pi } }}\int_{v\, = \,0}^\infty {v^{\,2} e^{\, - \,{1 \over 2}\left( {v^{\,2} } \right)} dv \int_{t\, = \,0}^c {2\,\left( {{t \over v}} \right)\,e^{\, - \, {1 \over 2}\left( {t^{\,2} /v^{\,2} } \right)} d\left( {{t \over v}} \right)\,} } = \cr & = {2 \over {\sqrt {2\pi } }}\int_{v\, = \,0}^\infty {v^{\,2} e^{\, - \,{1 \over 2}\left( {v^{\,2} } \right)} dv \int_{u/2\, = \,0}^{\left( {c^{\,2} /v^{\,2} } \right)/2} {\,e^{\, - \,{1 \over 2}u} d\left( {{u \over 2}} \right)\,} } = \cr & = \sqrt {{2 \over \pi }} \int_{v\, = \,0}^\infty {v^{\,2} e^{\, - \,{1 \over 2}\left( {v^{\,2} } \right)} dv \left( {1 - e^{\, - {1 \over 2}\,\left( {c^{\,2} /v^{\,2} } \right)} } \right)} = \cr & = \sqrt {{2 \over \pi }} \left( {\int_{v\, = \,0}^\infty {v^{\,2} e^{\, - \,{1 \over 2}\left( {v^{\,2} } \right)} dv} - \int_{v\, = \,0}^\infty {v^{\,2} e^{\, - \,{1 \over 2}\left( {v^{\,2} + c^{\,2} /v^{\,2} } \right)} dv} } \right) \cr & = 1 - \sqrt {{2 \over \pi }} \int_{v\, = \,0}^\infty {v^{\,2} e^{\, - \,{1 \over 2}\left( {v^{\,2} + c^{\,2} /v^{\,2} } \right)} dv} \cr} $$ c) 
Dot product For the modulus (absolute value) $q$ of the dot product of two vectors , we fix again a vector $\bf v$ and integrate over a plane normal to it at distance $q/v$, and double the result to include the symmetric plane, which means $$ \eqalign{ & p_d (q)dq = 2\int\limits_v {2v^{\,2} {\cal N}_1 \,(v)\,dv{\cal N}_1 \,(q/v){{dq} \over v}} = \cr & = 4dq\int\limits_v {v\,{\cal N}_1 \,(v)\,\,{\cal N}_1 \,(q/v)dv} = \cr & = {2 \over \pi }dq\int_{v\, = \,0}^\infty {v\,e^{\, - \,{1 \over 2}\left( {v^{\,2} + q^{\,2} /v^{\,2} } \right)} dv} \cr} $$ and $$ \eqalign{ & P_d \left( q \right) = \int_{t\, = \,0}^q {{2 \over \pi }dt\int_{v\, = \,0}^\infty {v\,e^{\, - \,{1 \over 2}\left( {v^{\,2} + t^{\,2} /v^{\,2} } \right)} dv} } = \cr & = {2 \over \pi }\int_{v\, = \,0}^\infty {v\,e^{\, - \,{1 \over 2}\left( {v^{\,2} } \right)} dv \int_{t\, = \,0}^q {e^{\, - \,{1 \over 2}\left( {t^{\,2} /v^{\,2} } \right)} dt} } = \cr & = \sqrt {{2 \over \pi }} \int_{v\, = \,0}^\infty {v^{\,2} \,e^{\, - \,{1 \over 2}\left( {v^{\,2} } \right)} \, {\rm erf}\left( {{q \over {\sqrt 2 v}}} \right)dv} \cr} $$ d) Triple product Now it is easy to combine the results above and obtain for the volume $v$ of a parallelepiped , so leaving apart the factor of $1/6$ $$ \eqalign{ & p_t (v)dv = 2\int\limits_c {p_c (c)dc{\cal N}_1 \,(v/c){{dv} \over c}} = \cr & = 2\int_{c\, = \,0}^\infty {{2 \over {\sqrt {2\pi } }}c\,dc{\cal N}_1 \,(v/c){{dv} \over c} \int_{t = 0}^\infty {e^{\, - \,{1 \over 2}\left( {t^{\,2} + c^{\,2} /t^{\,2} } \right)} \,dt} } = \cr & = {2 \over \pi }dv\int_{c\, = \,0}^\infty {e^{\, - \,{1 \over 2}\left( {v^{\,2} /c^{\,2} } \right)} dc \int_{t = 0}^\infty {e^{\, - \,{1 \over 2}\left( {t^{\,2} + c^{\,2} /t^{\,2} } \right)} \,dt} } \cr} $$ and the CDF $$ \eqalign{ & P_{\,t} (v) = {2 \over \pi }\int_{t = 0}^v {dt\int_{c = 0}^\infty {e^{\, - \,\,{1 \over 2}\left( {{{t^{\,2} } \over {c^{\,2} }}} \right)} dc\int_{u = 0}^\infty {e^{\, - \,{1 \over 2}\left( {u^{\,2} + c^{\,2} /u^{\,2} } \right)} 
\,du} } } = \cr & = {2 \over \pi }\int_{c = 0}^\infty {dc\int_{t = 0}^v {e^{\, - \,\,{1 \over 2}\left( {{{t^{\,2} } \over {c^{\,2} }}} \right)} dt \int_{u = 0}^\infty {e^{\, - \,{1 \over 2}\left( {u^{\,2} + c^{\,2} /u^{\,2} } \right)} \,du} } } = \cr & = \sqrt {{2 \over \pi }} \int_{c = 0}^\infty {\,\,c\;{\rm erf}\left( {{v \over {\sqrt 2 \,c}}} \right)dc \int_{u = 0}^\infty {e^{\, - \,{1 \over 2}\left( {u^{\,2} + c^{\,2} /u^{\,2} } \right)} \,du} } \cr} $$ which are definitely less complicated than expected, and allow in case to work out some approximations. Note that all the CDF formulas above have been checked by numerical simulation, and besides that all of them correctly evaluate to 1 at $\infty$. Since they are obtained by exact integration of the PDF's, also the latter should be correct. -- update -- I just discovered thanks to A0, that the inner integral above is discussed in many other posts, e.g. in this, and that it is simply $$ \int_{x = 0}^\infty {e^{\, - \,{1 \over 2}\left( {x^{\,2} + c^{\,2} /x^{\,2} } \right)} \,dx} = \sqrt {{\pi \over 2}} \,e^{\, - \,\,\sqrt {c^{\,2} } } $$ By that, for the triple product (volume of a parallelepiped) we get $$ \eqalign{ & p_t (v)\,dv = dv\sqrt {{2 \over \pi }} \int_{t\, = \,0}^\infty {e^{\, - \,{1 \over 2}\left( {v^{\,2} /t^{\,2} + 2t} \right)} dt\,} \cr & P_{\,t} (v) = \int_{t = 0}^\infty {\,\,t\;e^{\, - \,\,t} \, {\rm erf}\left( {{v \over {\sqrt 2 \,t}}} \right)dt\,} \cr} $$ e) General distribution Concerning a Tetrahedron (parallelepiped) in a general position, i.e. the scheme considered above plus a translation of the origin, the numerical simulation suggests that the $P_{\,t} (v)$ above converts to $P_{\,t} (2 v)$. I am thriving to find a justification of that. A: We'll try to solve the following similar problem. It's easy to see how the solution of such problem answers to your question. 
Given $$ \mathbf{\mathrm{X}_1} =(x_1^1,x_1^2,x_1^3),\;\; \mathbf{\mathrm{X}_2}=(x_2^1,x_2^2,x_2^3),\;\; \mathbf{\mathrm{X}_3}=(x_3^1,x_3^2,x_3^3)$$ where $x_i^j$ are independent standard normal distributed variables $$x_i^j\sim\mathcal{N}(0,1)$$ What is the PDF of the volume of a parallelepiped with vertices on $(0,0,0), \mathbf{\mathrm{X}_1}, \mathbf{\mathrm{X}_2}, \mathbf{\mathrm{X}_3}, \mathbf{\mathrm{X}_1}+\mathbf{\mathrm{X}_2}, \mathbf{\mathrm{X}_1}+\mathbf{\mathrm{X}_3}, \mathbf{\mathrm{X}_2}+\mathbf{\mathrm{X}_3}, \mathbf{\mathrm{X}_1}+\mathbf{\mathrm{X}_2}+\mathbf{\mathrm{X}_3}$? The idea to solve this problem is to decompose each random vector into two random variables that correspond to its unit vector and its (euclidean) norm. Luckily, the variables obtained can be shown to be independent. So, we define $$\mathbf{\mathrm{U}_i}=\mathbf{\mathrm{\hat{X}}_i}$$ $$\mathbf{\mathrm{N}_i}=||\mathbf{\mathrm{X}_i}||_2$$ We observe that the volume (with sign) $V$ of the parallelepiped can be expressed as $$V=N_1N_2N_3(U_3\cdot(U_1\times U_2))$$ The distribution of the $N_i$'s is known to be the chi distribution with parameter $k=3$. Let's then focus on $U_3\cdot(U_1\times U_2)$. It can be shown that:

1. the angle $\theta$ between $U_1$ and $U_2$ has a uniform probability distribution on the interval $[-\pi,\pi]$.
2. the angle $\phi$ between $U_1\times U_2$ and $U_3$ has a uniform probability distribution on the interval $[-\pi,\pi]$.
3. the random variables $\phi$ and $\theta$ are independent.

Moreover, $U_3\cdot(U_1\times U_2)=\text{sin}(\theta)\text{cos}(\phi)$. It's not too hard to find that the probability distribution of $\text{sin}(\theta)$ (which is equal to that of $\text{cos}(\phi)$) is $\frac{1}{\pi \sqrt{1-x^2}}$. I don't think we can go further than that. If we want the PDF explicitly we should apply some convolution on all those density functions, and the result (if you can get one) wouldn't be pretty at all.
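The closed-form moments quoted in the question, eqs. (2) and (3), are easy to sanity-check by simulation. A minimal Monte Carlo sketch (not from the post; sample size and tolerances are my choices):

```python
# Monte Carlo check of E[V] = (2/3)*sqrt(2/pi) and Var[V] = 2/3 - 8/(9*pi)
# for the tetrahedron with four i.i.d. standard-normal vertices.
import math
import random

random.seed(0)
n = 100_000
vols = []
for _ in range(n):
    X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
    a = [X[0][i] - X[3][i] for i in range(3)]
    b = [X[1][i] - X[3][i] for i in range(3)]
    c = [X[2][i] - X[3][i] for i in range(3)]
    # triple product a . (b x c), i.e. the determinant in eq. (1)
    trip = (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))
    vols.append(abs(trip) / 6)

mean = sum(vols) / n
var = sum((v - mean) ** 2 for v in vols) / n

# E[V] is about 0.5319 and Var[V] about 0.3837
assert abs(mean - (2 / 3) * math.sqrt(2 / math.pi)) < 0.01
assert abs(var - (2 / 3 - 8 / (9 * math.pi))) < 0.02
```

The same loop can also estimate the full empirical distribution used for the fit in eq. (4), by histogramming `vols`.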
Colonia Michoacana may refer to:

* Colonia Michoacana (Chalco), a settlement in the state of México in Mexico
* Colonia Michoacana (Navolato), a settlement in the state of Sinaloa in Mexico
* Colonia Michoacana, Luis Vélez (Navolato), a settlement in the state of Sinaloa in Mexico
Q: server not reading array from body

I have created an API to post some data and store it in MongoDB using mongoose, and one of its fields is an array. I am sending the array both as a string and as a plain array (tried both) in the payload, but that field is undefined on the server and cannot be read for some reason. The posting itself works and I can save stuff in MongoDB, but that one array field is always empty.

my request payload:

{
  "name": "name",
  "description": "description",
  "price": 123,
  "image_array": "['image','image2']",
  "category": "category",
  "country": "country",
  "city": "city",
  "thumbnail": "thumbnail"
}

my response: 200 OK

{
  "data": {
    "name": "name",
    "description": "description",
    "price": 123,
    "thumbnail": "thumbnail",
    "images": [],
    "category": "category",
    "country": "country",
    "city": "city",
    "user": "633710ae6238121bd9ff4f46",
    "status": "active",
    "likes": 0,
    "_id": "633c794ef9449f8b86c948b0",
    "createdAt": "2022-10-04T18:19:58.440Z",
    "updatedAt": "2022-10-04T18:19:58.440Z",
    "__v": 0
  },
  "message": "Product added successfully",
  "Array_type": "undefined"
}

route:

router.post("/", async (req, res) => {
  console.log("post product");
  try {
    const newProduct = new Product({
      name: req.body.name,
      description: req.body.description,
      price: req.body.price,
      image: req.body.image,
      category: req.body.category,
      country: req.body.country,
      city: req.body.city,
      user: req.user._id,
      thumbnail: req.body.thumbnail,
    });
    const data = await newProduct.save();
    res.status(200).json({
      data,
      message: "Product added successfully",
      Array_type: typeof req.body.images,
    });
  } catch (err) {
    console.log(err);
    res.status(500).json(err);
  }
});

product schema:

const ProductSchema = new mongoose.Schema(
  {
    name: { type: String, required: true },
    description: { type: String, required: true },
    price: { type: Number, required: true },
    thumbnail: { type: String, required: true },
    images: [String],
    category: { type: String, required: true },
    country: { type: String },
    city: { type: String },
    user: { type: mongoose.Schema.Types.ObjectId, ref: "User" },
    status: { type: String, default: "active" },
    likes: { type: Number, default: 0 },
    createdAt: { type: Date, default: Date.now },
    updatedAt: { type: Date, default: Date.now },
  },
  { timestamps: true }
);

A: Your request payload is not valid for the image_array array. All you've sent is a string literal that looks sort of like an array. In short, never manually create JSON. Always use built-in serialisation tools like JSON.stringify().

const product = {
  name: "name",
  description: "description",
  price: 123,
  images: ["image", "image2"], // just a normal array
  category: "category",
  country: "country",
  city: "city",
  thumbnail: "thumbnail",
};

const res = await fetch("/", {
  method: "POST",
  body: JSON.stringify(product),
  headers: { "content-type": "application/json" },
});

// or if using Axios
const res = await axios.post("/", product);

On the server side, make sure you use req.body.images to set the images property...

images: req.body.images,

or more easily, just use all req.body properties:

const newProduct = new Product(req.body);
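The two failure modes in the question can be reproduced without a server at all: a string that merely looks like an array is still a string after a JSON round trip, and the schema field is named images while the payload sends image_array, so req.body.images is undefined. A small stand-alone sketch (illustrative, not from the post):

```javascript
// The question's payload: a string literal that only looks like an array.
const badPayload = { image_array: "['image','image2']" };
const badBody = JSON.parse(JSON.stringify(badPayload));
console.log(typeof badBody.image_array);         // "string"
console.log(Array.isArray(badBody.image_array)); // false
// The schema expects "images", which was never sent:
console.log(typeof badBody.images);              // "undefined"

// Sending a real array under the field name the schema expects works:
const goodPayload = { images: ["image", "image2"] };
const goodBody = JSON.parse(JSON.stringify(goodPayload));
console.log(Array.isArray(goodBody.images));     // true
```

This matches the response in the question: `"Array_type": "undefined"` and an empty `images` array.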
Parallel and perpendicular vectors

If a mass takes a path down the slope of a parabola, the force downward is its weight. I need to resolve the forces perpendicular and parallel to the direction of motion.

y = ax^2 + bx + c; it has a y-intercept of h and a repeated root l, with a > 0. The mass slides from x = 0 to x = l with no friction.

$d\vec{r} = \left( \begin{array}{c} 1 \\ 2ax+b \end{array} \right)\text{dx}$

A unit vector in this direction:

$\hat{u} = \frac{1}{\sqrt{1+(2ax+b)^2}}\left( \begin{array}{c} 1 \\ 2ax+b \end{array} \right)$

To get the size of the component of the force parallel to the curve:

$mg \cos (\theta) = mg\frac{(2ax+b)}{\sqrt{1+(2ax+b)^2}}$

$mg \cos (\theta) \hat{u} = mg\frac{(2ax+b)}{\sqrt{1+(2ax+b)^2}} \frac{1}{\sqrt{1+(2ax+b)^2}}\left( \begin{array}{c} 1 \\ 2ax+b \end{array} \right) = mg \frac{(2ax+b)}{1+(2ax+b)^2}\left( \begin{array}{c} 1 \\ 2ax+b \end{array} \right)$

I don't think this is right.

This can't be right because then $\int \vec{F}\cdot d\vec{r} = Mg(al^2 + bl) \ne Mgh$.

Could someone have a look at this?

Mentor's reply:

There is a subtle mistake at play here. You want the magnitude of the parallel force component, i.e., its absolute value. So there should be a big absolute value sign on the r.h.s. in that last equation.

The absolute value is not an issue for m, g, and the square-root expression, since those are all positive. But what about the 2ax+b term, is that negative or positive?

Hints:
1. We are only considering 0 < x < l.
2. Express a and b each in terms of h and l, if you have not already done so.
null
null
Q: Plot factor proportion in R

I have a dataset which looks like this:

  edu default
1   1       0
2   3       1
3   1       1
4   1       0
5   2       1
6   2       0
...

and I can make a plot using R:

ggplot(rawdata, aes(x = edu, fill = default)) +
  geom_bar() +
  labs(x = 'Education') +
  theme_excel()

Instead of counts of 1s and 0s in default, I want to plot the proportion of 1s. I calculated the proportion separately, stored the results in another data frame, and made the plot from that. My question is: is there a compact way to do this in a single ggplot() command, like I did for the previous plot?

Update: I forgot to mention that the data type of default is factor, so applying mean to it directly does not work.

A: Recall that the proportion of 1s in a binary vector is simply its mean. The way to plot mean values per x in ggplot is the stat_summary function. Since default is a factor, first convert it to numeric with as.numeric(as.character(default)); then:

ggplot(rawdata, aes(x = edu, y = as.numeric(as.character(default)))) +
  stat_summary(fun.y = 'mean', geom = 'bar')

Or:

ggplot(rawdata, aes(x = edu, y = as.numeric(as.character(default)))) +
  geom_bar(stat = 'summary') # include fun.y = 'mean' to avoid the message

Both give:
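The same per-group proportion can be checked outside ggplot. Here is a minimal pandas sketch; the data frame below is toy data shaped like the question's (the values themselves are illustrative, only the column names come from the question):

```python
import pandas as pd

# Toy data mirroring the question's rawdata (values are illustrative).
rawdata = pd.DataFrame({
    "edu":     [1, 3, 1, 1, 2, 2],
    "default": [0, 1, 1, 0, 1, 0],
})

# The proportion of 1s per education level is just the per-group mean.
prop = rawdata.groupby("edu")["default"].mean()
print(prop)  # edu=1 -> 1/3, edu=2 -> 0.5, edu=3 -> 1.0
```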
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,231
{"url":"http:\/\/mywebspider.com\/sequential-numbering\/sequentially-numbered-zip-ties.html","text":"###### Although sequences are a type of function, they are usually distinguished notationally from functions in that the input is written as a subscript rather than in parentheses, i.e. an rather than f(n). There are terminological differences as well: the value of a sequence at the input 1 is called the \"first element\" of the sequence, the value at 2 is called the \"second element\", etc. Also, while a function abstracted from its input is usually denoted by a single letter, e.g. f, a sequence abstracted from its input is usually written by a notation such as {\\displaystyle (a_{n})_{n\\in A}} , or just as {\\displaystyle (a_{n})} . Here A is the domain, or index set, of the sequence.\n\nStandard sizes for NCR forms are half page (5.5\u2033 x 8.5\u2033), full page (8.5\u2033 x 11\u2033) and legal (8.5\u2033 x 14\u2033) although custom sizes can be ordered to meet your specific need. The design orientation can be either vertical or horizontal. The link below gives you access to design templates that can be used to lay out your form in the appropriate size and orientation.\n\n## How do I set up the basic template for text numbering in Word 2008 for the Mac? Here is what is driving me nuts: I want to generate numbered lists - lots and lots of them. The first level numbered lines will have second level under them in many cases, also numbered and indented below the first level. The second level will also have third level under them, intended some more and also numbered. This \"sort-of\" works fine. It USED to work excellently. Here is what I do. I click the Numbering button in the Bullets and Numbering palette. The line indents and a 1) is aut...\n\nApple has a formalized version number structure based around the NumVersion struct, which specifies a one- or two-digit major version, a one-digit minor version, a one-digit \u201cbug\u201d (i.e. 
revision) version, a stage indicator (drawn from the set development\/prealpha, alpha, beta and final\/release), and a one-byte (i.e. having values in the range 0\u2013255) pre-release version, which is only used at stages prior to final. In writing these version numbers as strings, the convention is to omit any parts after the minor version whose value are zero (with \u201cfinal\u201d being considered the zero stage), thus writing 1.0.2 (rather than 1.0.2b12), 1.0.2 (rather than 1.0.2f0), and 1.1 (rather than 1.1.0f0).\n\n### Creating a sequential list of numbers, in Word, sounds like an intimidating task. Sure, you can create a numbered list quickly enough, but that feature works with additional text - you're numbering something. If you want a list of just numbers, you have to work a bit harder. Word's SEQ field might come to mind, but that solution is better suited to template-type numbering. In order words, this field works great if you're numbering documents, labels, and so on. It doesn't work so well if you just want to create a list of sequential numbers. You can use it that way, but it'll be more work than it's worth.\n\n###### It's easy to enter a SEQ field manually. Begin by pressing [Ctrl]+[F9]. When Word displays the blank field, enter seq list1, as shown in Figure A. List1 is a required argument; you can use any descriptive name you like as long as it begins with an alpha character. The parentheses distinguish the numbers within the text. They're not part of the field code. Highlight the field code and press [F9] to display the field's value. As you can see in Figure B, the first field returns (1).\n\nLION carries a wide range of high quality heavy-duty automatic numbering machines from the 5-wheel model to the 13-wheel model. Various specifications are available for your specific individual need. LION numbering machines are precision crafted of one-piece hardened steel frame. All metal interior construction will provide years of reliable use. 
This self inking numbering machine is ideal for sequential numbering operations to use as a date and number stamp, serial number stamp and inspection stamp, etc. Custom numbering machines are also available so that it can work to your exact need.\nAn overwhelming majority of companies use designation-based part-numbering systems. A Design Management Procedure, for example, may be numbered as SOP 4.4-1. With the previous revision of the ISO 9001 standard, it meant that this document related to the element 4.4, design management. Well, it does not mean the same with the new ISO 9001 revision, simply because design management clause now has a different number: 7.3. What is the solution? The solution is simple: no part numbers, and no designators!\nIf a sequence converges, then the value it converges to is unique. This value is called the limit of the sequence. The limit of a convergent sequence {\\displaystyle (a_{n})} is normally denoted {\\displaystyle \\lim _{n\\to \\infty }a_{n}} . If {\\displaystyle (a_{n})} is a divergent sequence, then the expression {\\displaystyle \\lim _{n\\to \\infty }a_{n}} is meaningless.\nSo I spent some time trying to figure it out, playing with Normal.dotm and the various styles (List paragraph, List Number, List Bullet etc etc). And finally, when I've got Normal.dotm open (i.e. I'm editing that template file), I get my result: I apply a standard numbered list, and it comes up flush left (i.e. not indented) and hanging at 1.0cm (cos I don't use inches...) and with a tab stop applied at 1.0cm as well - funky stuff!\n\n\nI want to a sequential number to fill in automatically each time the form is filled out. Malissa, A simple way would be to use something like this, you could assign it to a button, an open or before print event. 
Sheets(\"Sheet1\").Range(\"A1\").Value = _ Sheets(\"Sheet1\").Range(\"A1\").Value + 1 For other ways to do this or if this is going to be used in a temple have a look here http:\/\/www.mcgimpsey.com\/excel\/udfs\/sequentialnums.html -- Paul B Always backup your data before trying something new Please post any response to the newsgroups so others...\nTo design a certificate from scratch, you can either start with a completely blank publication or adapt an existing publication. Small-sized publication types, such as business cards, postcards, and labels can be adapted to serve as coupons. Flyers or brochures can be adapted for use as gift certificates. For more information, see Create a publication in Publisher.\nEnglish Spanish Dictionary | English Italian Dictionary | English German Dictionary | English Portuguese Dictionary | English Russian Dictionary | Medical dictionary English French | Computer dictionary English French | Computer dictionary English Spanish | Business dictionary English French | English Arabic Dictionary | English Hebrew Dictionary | English Dutch Dictionary | English Polish Dictionary\nFootnotes, after all, are always numbered sequentially and update when you add or remove one. The problem is that each time you add a footnote you get an extra space down at the bottom of the column. The solution? Make a paragraph style for your footnotes that specifies a .1 pt tall size with a 0 (zero) leading, then choose that paragraph style in the Document Footnote Options dialog box.\nWhat is the max number of records that can be put into a table in Microsoft Access? Does it vary from version to version? Thanks in advance. \"Mike C\" wrote in message news:BC4F1F10-A96C-4EC7-9E33-670828E53A0A@microsoft.com... > What is the max number of records that can be put into a table in > Microsoft > Access? Does it vary from version to version? Thanks in advance. Google or the online help would be your friend on this one. 
There's no fixed limit, there's only the limit on the overall size of the database file...\nIf you want numbered headings to be underlined, but do not want a line under the number, it can be difficult if you don't know how it works. This is because by default, the format of the number follows the format of the text that follows it. For example, let's say you want to underline a paragraph in a Heading\u00a02 style. Chances are it will look like this:\nAnd of course, it\u2019s not only when you add or delete counters that the numbering auto-updates, but also when you copy or move the text, as when you\u2019re rearranging your listed points. This InDesign inline counter now works exactly like the counters in my old, beloved XyWrite word processors \u2014 except I cannot have several counters with separate numbering in the same text story. In XyWrite I could have nine, using only the codes c1, c2,\u2026c9. But for 95% of one\u2019s counter needs, one counter per story is quite ample \u2014 as compared to none.\nDesign your ticket, use excel or libre's version and create the numbers. Save those numbers as text, I always make the first one xxx then 001 and so on, xxx will be the master page. Use data merge from Indesign to create the master ticket, you will need to make a text box for the number. Once it looks good to you draw a text box around the whole ticket. At the bottom of the data merge tab is a button that gives you the options how you want your layout, columns or rows, etc. even has a preview. once you click create it will create another file with all your tickets sequentially numbered. It'll be a couple of hours before I'm at work but can post the link I used to create these for the first couple of times.\nHi Scott, I had a question regarding the sequential numbering Apex example\u2026I am looking to automatically restart the sequence every month, which is not a problem using your example (i.e. changing the year to month in the expression). 
However, I would also like to add in a condition for year, so that the sequence restarts for each month of each year (i.e. my problem is for example that Feb 2011 last sequence number is 5, and then in Feb 2012 this becomes 6, where I would like it to be 1). I am wondering what the syntax would be. Thanks in advance, Lawn.\nNext we will look at scenario 2. The variation here is that\u00a0Apex wants to start each year with anew sequence of numbers. Also, they want to\u00a0include the year when displaying this number. We still use a DMax, but we need\u00a0to add criteria to determine the year. So Apex needs to have a field in their\u00a0table that indicates the date. Generally, such inquiries will have a date\/time\u00a0stamp to indicate when received or the user will input the date and time. So we\u00a0will assume there is a field in record called InquiryDate. We will also add an\u00a0Integer field named Sequence. On the form we will add a control bound to\u00a0Sequence, but set its Visible property to No. The line of code will look like\u00a0this:\nIn bulleted lists, each paragraph begins with a bullet character. In numbered lists, each paragraph begins with an expression that includes a number or letter and a separator such as a period or parenthesis. The numbers in a numbered list are updated automatically when you add or remove paragraphs in the list. You can change the type of bullet or numbering style, the separator, the font attributes and character styles, and the type and amount of indent spacing.\nMy issue is trying to create small dot labels or equivalent to make up sequential alpha numeric labels to identify each individual item that I have in my shop, retrospectively. I have possibly 6-8thousand individual items that need coding for stock take purposes yet I can find no outlet that supply such thing. Do you have any suggestions. My line is antiques\/collectables, predominantly china with items ranging in size from 2-3cm to 5\/600cm. 
I would be most grateful for any solutions or suggestions. Best regards. Pete.\nOK I found the ControlSource property but it is on the Job No text box, do I enter the code there or am I entering my code on the button I created to save and get new number? If I put it behind the button, when I open the form it goes to the first record so I go to the last record and hit save get new number button and it gives me the number 1\u2026\u2026Is it because when I open the form it goes to the first record and not a new record????\nOntario has the oldest exit number system, having started posting exit numbers sequentially in the 1960s along Highway 401; it switched to mile-based numbering before Canada went metric. Most short freeways do not have exit numbers, and until about 2000 (with Highways 11 and 69\/400), incomplete freeways also did not have exit numbers. Interchanges with multiple exits are lettered A-B.\n\n#### I\u2019d like to share my solution. It came to me partially in my sleep, I tried refining it this morning but because of time, finally had my production person print the manually numbered tickets so that we could deliver them to the customer who needed them today. Here is my solution. I deduced that it would be better to let a program designed to count, do the counting. I used Excel. I then let InDesign CS4 do the merging. Here\u2019s the formula.\n\nPerhaps your explanation already addresses this, but I can\u2019t see it. Is there any way this script can be used for printing multiple-up in numeric sequence? For example, if I\u2019m running 1000 postcards 4-up (on 250 sheets). 
I need the 4 cards on page 1 to be numbered 1, 251, 501, 751; then the 4 cards on page 2 numbered 2, 252, 502, 752; etc., so that when the sheets come out of the printer and are cut into 4, I have a stack of 1-250, a stack of 251-500, a stack of 501-750 and a stack of 751-1000.\n###### I'm producing gift certificates for a restaurant and they need to be numbered sequentially from 0001 to 0250. Is there any way to do this easily as opposed to numbering each manually? I'm sure I could probably work it out with a print shop, but the job was thrust on me last minute and my options are limited by the short turn around time. Any help would be appreciated. Thanks!...\n\nHi, is there any limit on the number of E-Mails ? I created an archive of 270000 E-Mails (IMAP) and it caused trouble. Can I have that amount in a local folder ? Are there any recommended number ? It locked that 50000 starts being a problem on IMAP already. How else would you handle an archive that you need frequently ? Thanks for your help Stephan If it were me... exporting them(selectively) to user created properly named Windows Explorer blank folders on the hard drive and backing up to a different drive(internal\/external\/cd\/dvd) outside of Windows Live Mail woul...\nCan anyone tell me what the maximum no. of worksheets is in Excel? Ton From Help...Limited by available memory (default is 3) -- HTH Nick Hodge Microsoft MVP - Excel Southampton, England nick_hodgeTAKETHISOUT@zen.co.uk.ANDTHIS \"Ton\" wrote in message news:EB5EE739-9250-4D83-AA7C-EE82C02C0AA3@microsoft.com... > Can anyone tell me what the maximum no. of worksheets is in Excel? The maximum number only depends on the amount of memory available. -- Best Regards Leo Heuser Followup to newsgroup only please. \"Ton\"\nIn answer to your first question, I don\u2019t believe there\u2019s any way to add your own heading names to the list of Chapter heading options in the Caption numbering dialog box. 
I tried creating a new heading style and setting it to Level 1 (on the Paragraph dialog box), but it didn\u2019t put it into the list. That\u2019s all I could think of that might set it, but it didn\u2019t work.\nHi, I am creating a process map in Visio, is it possible to get Visio to number my boxs on my flowchart. At the moment I create the shape, put the text in but I have to manually put the number before the text, can visio do this automatically. Thanks Glenn In Visio 2003, try Tools > Add-ons > Visio Extras > Number Shapes. The same feature can be found in previous versions under Tools > Macros. -- Mark Nelson Microsoft Corporation This posting is provided \"AS IS\" with no warranties, and confers no rights. \"Glenn Robertson\"\nThat\u2019s enough tips for now. You\u2019ll be filling your fundraising thermometer template How to Create Your Custom Excel Fundraising Thermometer Template How to Create Your Custom Excel Fundraising Thermometer Template Use an Excel thermometer chart to visually keep track of your financial goals. Whether you're saving for a new gadget or fundraising for a good cause, here's a step by step tutorial. Read More in no time. Let\u2019s get to the tickets.\nI have been generating 150-400 page reports with multiple lists in tables. Word's auto numbering would only go so far in applying sequential numbering but then it just stops and I could not use it any more. I had to manually type in the numbered list which was quite annoying and very time consuming. Then I came across your Word Tip. Awesome! It worked. Thanks so very much.\n\n\n### I have a table named Artifact Catalog in which there is a field Collection Point ID and a field Artifact ID. On the form I have created the user will input the Collection Point ID, for example: 2-1050. 
I need to find a way to have this Collection Point ID automatically generate a corresponding Artifact ID, i.e when you click the save button the first record under Artifact ID becomes: 2-1050.1 and the second becomes 2-1050.2 and so on.\n\nI like where your idea is going, but I cannot figure out how to consecutively number across text frames on one document. So far I have created a csv document in Excel, drew a textbox, imported the csv file into the Data Merge window and dragged it into the text frame. Now, I've got \"<<00001>>\" in the text frame. When I click \"Create Merged Document\" I get an error message: \"Cannot creat merged document because no placeholders are present...\" Now what?\nI removed the required setting on the table level. The form does not give me an error message now, but does not close on its on. I closed it via a command then looked at the Design Projects table to see if the new record, #896, shows up. A new record is there, with all the entered data EXCEPT the very important field of the Project ID. That field is blank.\nCorel crashed repeatedly on my production person today while he was trying to number them using the plug-in. I couldn\u2019t figure out how to make Mike\u2019s script work. It appeared designed for variable numbers in one place. My customer needed her tickets numbered in two places, on the ticket and on the stub. Because she needs them tomorrow there is no time to send them out to be numbered manually. I\u2019ll keep watching this space for more info. 
Thanks folks.","date":"2019-01-17 01:33:21","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.38568729162216187, \"perplexity\": 1383.3855685486287}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-04\/segments\/1547583658662.31\/warc\/CC-MAIN-20190117000104-20190117022104-00130.warc.gz\"}"}
null
null
Radio talk show host Thabiso Kotane is one of the longest-serving members of Limpopo's Capricorn FM. The station is celebrating its ninth year this month, and Kotane has been around from day one. SowetanLIVE caught up with the host as he reflected on the station and his own personal growth.

Kotane took a chance on working at a fledgling station after a call from the chairman, Given Mkhari. He left his job as a senior producer at SAFM and moved to Limpopo. The Soweto-born host said that he was very skeptical about it, as he had never been to Limpopo before; once there, of course, everything changed.

One of his fondest memories of his time so far at the station is when a listener called in for help and received assistance within a week. The woman, who was unemployed with children, wanted to give up on life, but through the help of listeners she was employed within a week. She also called in to thank everybody. Another memory is when the station built a house for a destitute family.

Kotane believes that the station's greatest achievement is in how it develops its employees, especially since most of the people it employs are from rural areas or have little experience. By developing these people, he believes, the station contributes to empowering the community.

He mentions that his show has taught him a lot about himself and his listeners. Working as a radio talk show host has also shown him that the closer you bring yourself to people's issues, the more popular you become, as this shows them that you understand and will champion them.

Kotane, who describes himself as "a proud father, avid reader and passionate broadcaster", believes that a good radio host should be well informed. He also imparted some advice to aspiring radio talk show hosts.

In celebration of its birthday month, Capricorn FM is giving away hampers and cash worth over R200 000.00. Rofhiwa "Tholi B" Bologo, the station manager, had this to say about the station's milestone.
{ "redpajama_set_name": "RedPajamaC4" }
4,604
Q: Split View Controller not root controller

I have been working on a method of converting an existing iPhone app to an iPad app. Amongst the various challenges that come with this, I am trying to display a UISplitViewController in the app to present a Master - Detail arrangement that exists in the iPhone app. I have considered changing the root controller to be a UISplitViewController, as Apple suggests, but I have multiple Master - Detail arrangements in the application, and Apple doesn't explain how to have that arrangement with only one UISplitViewController as the root (they do explain how to have multiple Detail views for one Master, but that is something different).

To achieve my aim I have done the following. On the view that contains my main menu (my first page) I load the UISplitViewController on a button push in the following manner:

-(void)showSplitViewController
{
    UIStoryboard *myStoryboard = self.storyboard;
    UISplitViewController *splitViewController = (UISplitViewController *)[myStoryboard instantiateViewControllerWithIdentifier:@"SplitViewController"];

    // Detail
    UINavigationController *navigationController = [splitViewController.viewControllers lastObject];
    DetailedViewController *detailViewController = (DetailedViewController *)navigationController.topViewController;
    splitViewController.delegate = detailViewController;
    detailViewController.managedObjectContext = self.managedObjectContext;

    // Master
    UINavigationController *masterNavigationController = splitViewController.viewControllers[0];
    MasterViewController *controller = (MasterViewController *)masterNavigationController.topViewController;
    controller.managedObjectContext = self.managedObjectContext;

    AppDelegate *appDelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];
    [appDelegate.window setRootViewController:splitViewController];
    self.navigationController.viewControllers = nil;
}

This shows the UISplitViewController and all is well.

On the split controller detail view I have a 'Home' button which takes you back to the landing page (the one with the button that triggers the code above). This code looks like this:

-(void)goHome
{
    // Return to the root view.
    AppDelegate *appDelegate = [AppDelegate sharedAppDelegate];

    // Create the Home View Controller
    UIStoryboard *myStoryboard = [UIStoryboard storyboardWithName:@"MainStoryboard-iPad" bundle:nil];
    HomeViewController *hvc = (HomeViewController *)[myStoryboard instantiateViewControllerWithIdentifier:@"HomeViewController"];
    hvc.managedObjectContext = self.managedObjectContext;

    // Create the Navigation controller
    UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:hvc];

    // Remove the current root view controller
    [self.view.window setRootViewController:navigationController];
    //[navigationController presentViewController:hvc animated:YES completion:nil];
    UIViewController *currentViewController = [navigationController presentedViewController];
}

The problem is that doing this generates a memory leak that I do not have the skill to track down. If you repeatedly switch from the Home view to the Split view and back again, the memory consumption just keeps climbing. I think that a view or something is not being released correctly, but I cannot track it down.

Does anyone have any suggestions for what I can do, please? If I change all of my code so that the UISplitViewController is the root, how do I make it work with multiple Master - Detail arrangements? If this is not going to work, how would you suggest I straighten out my code above?
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,763
{"url":"https:\/\/cs.stackexchange.com\/questions\/2546\/find-string-that-minimizes-the-sum-of-the-edit-distances-to-all-other-strings-in","text":"# Find string that minimizes the sum of the edit distances to all other strings in set\n\nI have a set of strings $S$ and I am using the edit-distance (Levenshtein) to measure the distance between all pairs.\n\nIs there an algorithm for finding the string $x$ which minimizes the sum of the distances to all strings in $S$, that is\n\n$\\arg_x \\min \\sum_{s \\in S} \\text{edit-distance}(x,s)$\n\nIt seems like there should, but I can't find the right reference.\n\nThe problem is known as \"median string problem\" and it is NP-complete; some results can be found searching with Google; in particular \"2-Approximation Algorithms for Median and Centre String Problems\". If $x$ must be one of the points in $S$ then the problem becomes solvable in polynomial time.","date":"2022-08-15 00:36:48","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4591483175754547, \"perplexity\": 262.8931067896377}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882572089.53\/warc\/CC-MAIN-20220814234405-20220815024405-00384.warc.gz\"}"}
null
null
/* Headers added here so the snippet is self-contained; the original file
   presumably pulls these in elsewhere. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>

/* External helper defined elsewhere in this codebase (signature assumed):
   puts the descriptor into non-blocking mode. */
extern void blocking_disable(int fd);

static void enable_bsd_fragmentation(int fd)
{
#ifdef IP_DONTFRAG
  /* Clear the don't-fragment flag on BSD-style stacks. */
  const int x = 0;
  setsockopt(fd,SOL_IP,IP_DONTFRAG,&x,sizeof x);
#endif
}

static void enable_linux_fragmentation(int fd)
{
#ifdef IP_MTU_DISCOVER
#ifdef IP_PMTUDISC_DONT
  /* Disable path-MTU discovery on Linux so datagrams may be fragmented. */
  const int x = IP_PMTUDISC_DONT;
  setsockopt(fd,SOL_IP,IP_MTU_DISCOVER,&x,sizeof x);
#endif
#endif
}

/* Create a non-blocking, close-on-exec UDP socket; returns -1 on failure. */
int socket_udp(void)
{
  int fd = socket(PF_INET,SOCK_DGRAM,0);
  if (fd == -1) return -1;
  fcntl(fd,F_SETFD,1); /* 1 == FD_CLOEXEC */
  blocking_disable(fd);
  enable_bsd_fragmentation(fd);
  enable_linux_fragmentation(fd);
  return fd;
}
{ "redpajama_set_name": "RedPajamaGithub" }
169
For the Jewish community, the end of the Middle Ages and the emergence of the modern nation-state brought the promise of equal citizenship along with the potential loss of Jewish corporate identity. The legal maxim dina de-malkhuta dina (the law of the land is the law), invoked in Talmudic times to justify acceptance of the king's law and qualified in the Middle Ages by Maimonides and Rashbam to incorporate the requirement of consent by the governed, underwent further redefinition by Jews in the Napoleonic age. Graff focuses on the struggle between eighteenth- and nineteenth-century Jewish religious reformers and traditionalists in defining the boundaries of dina de-malkhuta dina. He traces the motivations of the reformers who, in their zeal to gain equality for the formerly disenfranchised Jewish communities of Western Europe, were prepared to render unto the state compromising authority over Jewish religious life under the rubric of dina de-malkhuta dina, a principle that was meant to strike a balance between synagogue and state and not to be used as a pretext for the liquidation of the community's corporate existence. Graff observes that the significance of dina de-malkhuta dina and its interpretation is important for an understanding of modern Jewish life as well as the relationship of Diaspora Jews to the Jewish community in the State of Israel.
integrated are a number of sermons dependent upon the weekly parashah (assigned biblical element from the Pentateuch), a chain of messages introduced in the course of the excessive holy days (Rosh Hashanah and Yom Kippur) of 2007, 3 funeral sermons, a different Yom ha-Sho'ah (Holocaust-memorial) deal with, and a quick speak about freedom, given on July four, 2008. This quantity collects revised types of essays from a 2011 workshop held in Munich on Remembering and Forgetting in Early moment Temple Judah . The authors of the essays handle those concerns from either normal methodological views and during case reviews rising out or linked to a variety of texts from the prophetic literature, the Pentateuch, the ancient books, Psalms and Lamentations. The authority of canonical texts, particularly of the Bible, is usually defined in static definitions. despite the fact that, the authority of those texts used to be received in addition to exercised in a dynamic technique of transmission and reception. This publication analyzes chosen features of this old process. realization is paid to biblical master-texts and to different texts on the topic of the "biblical worlds" in a number of historic sessions and contexts.
\section{Introduction}\label{introduction} Inferring the causal structure of a set of related variables is key to understanding how a system works. By interpreting directed edges as implying causal relationships, a causal network model extends standard (non-causal) graphical models by specifying the distribution of the data when one or more variables are manipulated \citep{Pearl}. A large body of work exists on learning the structure of graphical models from observational data, that is, data passively collected without any experimental intervention performed on the system under study; see \cite{Daly2011} for a comprehensive review. However, typically, observational data alone can only reveal the structure of a graphical model up to the Markov equivalence class containing the true data-generating graph \citep{Verma91}. To fully determine the structure of a causal network without making additional assumptions about specific functional model classes and error distributions, interventional data are needed to resolve the directionality of uncompelled edges \citep{Peters2011}. Interventional data result from experiments in which one or more nodes have been actively manipulated, for example, by activating or inhibiting the expression of a gene in a model organism \citep{Sachs05,Nagarajan2013}. Crucially, different intervention experiments yield different amounts of information about the causal structure. Thus, since experiments are often expensive and time-consuming, it is advantageous to select interventions that provide the maximum amount of information. Optimal experimental design (OED) methods, also referred to as active learning algorithms, attempt to optimize this experiment selection process by providing a means of evaluating which experiments should be performed next given the current state of knowledge.
From the Bayesian perspective, a naive approach would be as follows: for each candidate experiment, generate hypothetical datasets from the posterior predictive, perform posterior inference on each dataset, and compute a functional of the posterior that summarizes the amount of information gained. Averaging over many datasets would yield an estimate of the posterior expected amount of information gain for each candidate experiment. However, this naive approach would involve an inordinate amount of computation. In this article, we develop a novel Bayesian OED technique that is principled and computationally tractable. Roughly speaking, we consider the asymptotic information gain that each experiment would yield in the limit of infinitely many replicates, as a proxy for the expected gain from finitely many replicates. Under fairly general conditions, in this limit, the posterior is simply obtained by restricting the current posterior to a subset of the parameter space. Thus, it turns out that the reduction in entropy can be easily computed using samples from the current posterior, without generating or performing inference on any hypothetical datasets. This leads to a vast reduction in the computational burden required to select experiments. Based on this principle, we introduce a class of entropy-based criteria for determining the optimal intervention to perform in the next experiment. After the selected experiment is performed and new experimental data is obtained, we update the posterior on graphs and use it to select the next experiment. To sample from the posterior distribution over graphs, we employ an existing Markov chain Monte Carlo (MCMC) algorithm with efficient dynamic programming-based proposals \citep{Eaton07_hybridMCMC}. By iterating between experimentation and analysis in this cyclical fashion, we focus the data collection efforts in a way that reduces posterior uncertainty as rapidly as possible.
We compare our method to two other active learning approaches and a random intervention approach in the context of several simulated data sets as well as the Sachs cell-signaling network, a commonly studied benchmark in the causal network literature \citep{Sachs05, Eaton07_DP, Cho, Ness}. The article is organized as follows. In Section \ref{sec:general-criterion}, we derive our general criterion for selecting optimal experiments. In Section \ref{sec:criterion-causal-network-models}, we apply our general criterion to causal network models. In Section \ref{sec:practical-implementation}, we lay out the overall proposed framework, along with implementation details about the entropy-based criteria and the MCMC algorithm. Section \ref{sec:previous-work} discusses related previous work on OED and active learning methods. In Section \ref{sec:simulation-results}, we present a collection of simulation studies. Section \ref{sec:application} contains an application to the Sachs network, using both real experimental data and simulated data. We conclude with a brief discussion of our findings and directions for further research. \section{General criterion}\label{sec:general-criterion} \subsection{Intuition}\label{subsec:intuition} Before discussing OED in the context of causal networks, we first consider the more general case of identifying an object of interest from a large set of possible objects by asking a sequence of questions. For intuition, we illustrate the basic idea of our method in terms of the popular game Twenty Questions. In this game one person, the ``answerer," thinks of an object. The other player, the ``questioner," then asks a sequence of ``yes" or ``no" questions with the goal of guessing the answerer's object using fewer than twenty questions. At the beginning of the game, the questioner has a prior over objects, representing the probability that the answerer has selected a given object. A question such as, ``Is the object living?" 
partitions the objects into two parts: living and non-living. The subsequent answer provides information that allows the questioner to eliminate the objects in one of the parts and update their posterior beliefs accordingly. If the prior is uniform, then the most efficient strategy is to select questions that partition the set of remaining objects roughly in half (Figure \ref{fig:partitionGeneral}). More generally, if the prior is not uniform, then it is most efficient to split the posterior probability roughly in half at each step. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Images/PartitionGeneral.PNG} \caption{Schematic illustration of sequential partitioning by a series of questions, Q1-Q3. The answer to each question indicates the part (shaded region) containing the object of interest (denoted by a star).} \label{fig:partitionGeneral} \end{figure} The following observations about Twenty Questions will be helpful to consider when we discuss the causal network setting in Section \ref{sec:criterion-causal-network-models}. First, there are many possible ways of partitioning a space into two parts of equal size (or equal posterior probability, more generally). Different questions partition the space along different features of the objects. Second, we might relax the restriction to yes/no questions and also allow questions such as, ``Is the object a vegetable, animal, or mineral?" Such questions partition the space of objects into more than two parts. In choosing what question to ask next, the questioner is implicitly considering the informativeness of the partition induced by their question. \subsection{General criterion} \label{subsec:general-criterion} We now formalize the ideas laid out in Section \ref{subsec:intuition}. Suppose $(\theta,\nu)\sim\pi$, $X_1,\ldots,X_N|\theta,\nu\sim P_{\theta,\nu}$ i.i.d., and $f(\theta)$ is a function of $\theta$ with the following three properties.
\begin{condition}\label{condition:general} ~ \begin{enumerate}[label=(\alph*), ref=\ref{condition:general}(\alph*)] \item\label{condition:cond-indep} $\theta \perp\!\!\!\!\perp X_{1:N} \mid f(\theta)$. \item\label{condition:identifiable} $f(\theta)$ is identifiable, in the sense that there is a function $g$ such that $g(P_{\theta,\nu}) = f(\theta)$ almost surely, and \item\label{condition:finite} $f(\theta)$ can only take one of finitely many values. \end{enumerate} \end{condition} For interpretation, in the Twenty Questions illustration, $\theta$ represents the object selected by the answerer, $\pi$ is the prior distribution, $f(\theta)$ is the true answer to question $f$, and $X_1,\ldots,X_N$ represent noisy answers to question $f$. Note that $X_1,\ldots,X_N$ all pertain to the same question about $\theta$, not $N$ different questions. Condition \ref{condition:cond-indep} means that once we know the true answer to the question, the noisy answers provide no additional information about $\theta$. Condition \ref{condition:identifiable} means that the answer $f(\theta)$ is uniquely determined by the distribution of $X_n$, and thus, in the limit as $N\to\infty$, $f(\theta)$ can be recovered from $X_1,\ldots,X_N$. In an experimentation context, $\theta$ is a parameter of interest, $\nu$ is a nuisance parameter, $\pi$ is the prior, $f(\theta)$ is the answer to a research question $f$ (for example, is a certain hypothesis true), and $X_1,\ldots,X_N$ are data from $N$ replicates of an experiment performed to obtain information about $f(\theta)$. The nuisance parameter $\nu$ is sometimes needed for the i.i.d.\ assumption to hold. For the causal network models that we consider in subsequent sections, $\theta$ is a directed acyclic graph and $f$ corresponds to an equivalence class of graphs. First, however, we consider a general model and any $f$ that satisfies Condition \ref{condition:general}. 
\subsubsection{Approximate information gain}\label{subsec:approx-info-gain} The entropy of a random variable $Y$ is defined as $H(Y) := -\int p(y)\log p(y) d\mu(y)$ where $p(y)$ is the density of $Y$ with respect to some dominating measure $\mu$, or more succinctly, $H(Y) = - \mathrm{E}(\log p(Y))$. Similarly, $H(Y|Z) = -\mathrm{E}(\log p(Y|Z))$, where the expectation is over the joint distribution of $Y$ and $Z$; thus, unlike the conditional expectation $\mathrm{E}(Y|Z)$, the conditional entropy $H(Y|Z)$ is not a random variable. By standard properties of entropy, since $f(\theta)$ is a function of $\theta$, the posterior entropy is $$ H(\theta\mid X_{1:N}) = H(\theta \mid f(\theta), X_{1:N}) + H(f(\theta) \mid X_{1:N}). $$ By Condition \ref{condition:cond-indep}, $H(\theta \mid f(\theta), X_{1:N}) = H(\theta \mid f(\theta))$. Further, $H(\theta \mid f(\theta)) = H(\theta) - H(f(\theta))$ again using that $f(\theta)$ is a function of $\theta$. By Conditions~\ref{condition:identifiable} and \ref{condition:finite}, we have $H(f(\theta)\mid X_{1:N})\to 0$ as $N\to\infty$, because the posterior on $f(\theta)$ is guaranteed to concentrate at a single value \citep{Doob_1949,miller2018detailed}; see Lemma~\ref{lemma:doob-entropy} for details. Thus, we have the following result. \begin{theorem} \label{theorem:general} If $(\theta,\nu)\sim\pi$, $X_1,\ldots,X_N|\theta,\nu\sim P_{\theta,\nu}$ i.i.d., and $f(\theta)$ satisfies Condition~\ref{condition:general}, then \begin{equation} \label{eqn:entropy-approx} H(\theta\mid X_{1:N}) \xrightarrow[N\to\infty]{} H(\theta) - H(f(\theta)). \end{equation} \end{theorem} In other words, when $N$ is sufficiently large, the difference between the prior entropy $H(\theta)$ and the posterior entropy $H(\theta|X_{1:N})$ is approximately equal to $H(f(\theta))$. Put another way, the information gained---in terms of the reduction in entropy---is approximately equal to the entropy of the answer $f(\theta)$ \textit{under the prior}. 
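The entropy identity behind Theorem~\ref{theorem:general}, $H(\theta) = H(f(\theta)) + H(\theta \mid f(\theta))$, can be checked numerically on a toy discrete prior (a minimal sketch; the objects and the coarsening $f$ are invented for illustration):

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution given as a dict."""
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

# Toy prior over six "objects"; f maps each object to a coarse answer.
prior = {"cat": 0.3, "dog": 0.2, "fern": 0.2, "oak": 0.1, "quartz": 0.1, "iron": 0.1}
f = {"cat": "animal", "dog": "animal", "fern": "plant", "oak": "plant",
     "quartz": "mineral", "iron": "mineral"}

# Distribution of the answer f(theta) induced by the prior.
pf = defaultdict(float)
for obj, p in prior.items():
    pf[f[obj]] += p

# Conditional entropy H(theta | f(theta)) = sum_y p(y) H(theta | f(theta) = y).
h_cond = 0.0
for y, py in pf.items():
    cond = {o: prior[o] / py for o in prior if f[o] == y}
    h_cond += py * entropy(cond)

# Chain rule: H(theta) = H(f(theta)) + H(theta | f(theta)), so the asymptotic
# information gain H(theta) - H(theta | X_{1:N}) is exactly H(f(theta)).
assert abs(entropy(prior) - (entropy(pf) + h_cond)) < 1e-12
```

Here the quantity $H(f(\theta))$ is simply the entropy of the answer distribution under the prior, as in Equation~\ref{eqn:entropy-approx}.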
Thus, to estimate the information to be gained by a particular question $f$, we need only work with the prior --- not the posterior $\theta|X_{1:N}$ for a yet unobserved dataset $X_{1:N}$. \subsubsection{Selection of experiments using approximate information gain} \label{sec:selection-of-experiments} To apply this result to select experiments, suppose that instead of $\pi(\theta,\nu)$ being the prior, $\pi(\theta,\nu)$ is the current posterior given all the data from any previous experiments. Let $\mathcal{E}$ denote a set of possible experiments. For each experiment $e\in\mathcal{E}$, let $X_1^e,\ldots,X_N^e|\theta,\nu \sim P_{\theta,\nu}^e$ i.i.d.\ be hypothetical random data from $N$ replicates of experiment $e$. Suppose $f_e(\theta)$ satisfies Condition~\ref{condition:general} above. Then $p(\theta \mid X_{1:N}^e) \propto p(X_{1:N}^e \mid \theta)\,\pi(\theta)$ is the posterior distribution of $\theta$ given the new data $X_{1:N}^e$ as well as data from any previous experiments. The expected posterior entropy of $\theta$ after experiment $e$ is then \begin{align} \label{eqn:expected-posterior-entropy} H(\theta \mid X_{1:N}^e) = \int H(\theta \mid X_{1:N}^e = x_{1:N}^e)\, p(x_{1:N}^e)\, d x_{1:N}^e \end{align} where $p(x_{1:N}^e)$ is the posterior predictive distribution for experiment $e$ given the data from previous experiments. We would like to choose $e$ to minimize the expected posterior entropy $H(\theta \mid X_{1:N}^e)$. 
However, approximating $H(\theta \mid X_{1:N}^e)$ via Monte Carlo is computationally intensive, since for every $e$, it would typically involve (i) simulating $T$ hypothetical datasets $x_{1:N}^{e,1},\ldots,x_{1:N}^{e,T}$ from the posterior predictive $p(x_{1:N}^e)$, (ii) approximating the resulting new posteriors, $p(\theta \mid X_{1:N}^e = x_{1:N}^{e,t})$ for $t = 1,\ldots,T$, for example, by running $T$ MCMC chains, and (iii) approximating the entropy $H(\theta \mid X_{1:N}^e = x_{1:N}^{e,t})$ for $t = 1,\ldots,T$, in order to form a Monte Carlo approximation $H(\theta \mid X_{1:N}^e) \approx \frac{1}{T}\sum_{t=1}^T \widehat{H}(\theta \mid X_{1:N}^e = x_{1:N}^{e,t})$ using Equation~\ref{eqn:expected-posterior-entropy}. In contrast, the computation is vastly simplified using the approximation in Equation \ref{eqn:entropy-approx}. First, note that by Equation \ref{eqn:entropy-approx}, \begin{align} \label{eqn:expected-entropy-approx} H(\theta \mid X_{1:N}^e) &\approx H(\theta) - H(f_e(\theta)). \end{align} Since $H(\theta)$ does not depend on $e$, this implies that minimizing $H(\theta \mid X_{1:N}^e)$ is approximately equivalent to maximizing $H(f_e(\theta))$. Further, since $H(f_e(\theta))$ depends only on $\pi$ and $f_e$ (and not on $P_{\theta,\nu}^e$ or $X_{1:N}^e)$, it is often relatively easy to approximate $H(f_e(\theta))$ using posterior samples $\theta_1,\ldots,\theta_T$. Specifically, we can generate a single set of samples $\theta_1,\ldots,\theta_T$ from the current posterior $\pi$, and then for each potential experiment $e\in\mathcal{E}$, compute \begin{align} \label{eqn:approx-entropy-over-partition} H(f_e(\theta)) &= -\sum_{y} p(f_e(\theta) = y) \log p(f_e(\theta) = y) \approx -\sum_{y} \hat{p}_e(y) \log \hat{p}_e(y) \end{align} where $\hat{p}_e(y) := \frac{1}{T}\sum_{t=1}^{T}\mathds{1}\big(f_e(\theta_t) = y\big)$ and $\mathds{1}(\cdot)$ is the indicator function. 
In Equation~\ref{eqn:approx-entropy-over-partition}, the sum is over all values $y$ in the range of $f_e$. Thus, our proposed method of choosing $e$ is as follows. \begin{enumerate} \item Generate samples $\theta_1,\ldots,\theta_T$ from the current posterior $\pi$. \item Compute $\hat{p}_e(y) = \frac{1}{T}\sum_{t=1}^{T}\mathds{1}\big(f_e(\theta_t) = y\big)$ for each candidate experiment $e$. \item Select the experiment $e$ with the largest value of $\hat{h}_e := -\sum_{y} \hat{p}_e(y) \log \hat{p}_e(y)$. \end{enumerate} Note that, equivalently, $\hat{h}_e = -\sum_{A\in\mathcal{A}_e} \hat{\pi}(A) \log \hat{\pi}(A)$ where $\hat\pi = \frac{1}{T}\sum_{t=1}^T \delta_{\theta_t}$ and $\mathcal{A}_e$ is the partition of $\theta$-space induced by $f_e$, since $\hat{p}_e(y) = \hat{\pi}(A)$ where $A = \{\theta : f_e(\theta) = y\}$. Therefore, we can interpret $\hat{h}_e$ as an approximation to the current posterior entropy of the partition induced by $f_e$. It is important to note that although our criterion is motivated by the asymptotics as the number of replicates goes to $\infty$, it accounts for finite sample uncertainty due to the fact that the posterior $\pi$ quantifies our uncertainty in $\theta$ based on finitely many previous experiments and finitely many replicates of each previous experiment. Also, a further advantage of our approach is that $f(\theta)$ often takes a small number of values, such as two for a binary function, and thus, $H(f(\theta))$ is often much easier to estimate from samples than $H(\theta \mid X_{1:N}^e)$ or even $H(\theta)$. \section{Criterion for causal network models}\label{sec:criterion-causal-network-models} In this section, we apply our general criterion to the setting of causal network models. First, we define the model we will use, and provide some intuition for partitions of graph space that are informed by interventional experiments. 
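The three-step selection procedure above can be sketched in code as follows (a minimal illustration; the posterior samples and partition functions $f_e$ below are hypothetical placeholders, not part of the method itself):

```python
import math
from collections import Counter

def select_experiment(posterior_samples, partition_schemes):
    """Pick the experiment e maximizing the estimated entropy h_e of f_e(theta)
    under the current posterior.

    posterior_samples: list of sampled parameters theta_1, ..., theta_T.
    partition_schemes: dict mapping each candidate experiment e to a function
                       f_e(theta) returning a hashable "answer".
    """
    T = len(posterior_samples)
    best_e, best_h = None, -1.0
    for e, f_e in partition_schemes.items():
        counts = Counter(f_e(theta) for theta in posterior_samples)
        h_e = -sum((c / T) * math.log(c / T) for c in counts.values())
        if h_e > best_h:
            best_e, best_h = e, h_e
    return best_e, best_h

# Hypothetical example: theta is a pair of binary features; experiment "A"
# reveals the first feature and "B" the second. The posterior is split roughly
# evenly on feature 1 but nearly determined on feature 2, so "A" wins.
samples = [(0, 1)] * 40 + [(1, 1)] * 50 + [(1, 0)] * 10
schemes = {"A": lambda th: th[0], "B": lambda th: th[1]}
e_star, h_star = select_experiment(samples, schemes)
assert e_star == "A"
```

Note that only one set of posterior samples is needed, shared across all candidate experiments.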
\subsection{Causal network models}\label{subsec:causal-network-models} We use the standard causal network model specification, which we review here. Note that it is common to refer to these models as ``Bayesian networks'' \citep{Pearl}, but we avoid this term because the Bayesian aspect of our methodology comes from the use of posterior distributions to quantify uncertainty, rather than from features inherent to the model itself. All graphical models use nodes to represent random variables and edges to represent the probabilistic relationships among nodes. In contrast to traditional graphical models, which only specify the joint distribution in the observational setting, a causal network model also specifies the joint distribution when one or more nodes are manipulated. To represent this causal structure, it is standard to use a directed acyclic graph (DAG) along with the conditional probability distribution (CPD) of each node given the values of its parent nodes. The directed edges in this structure represent cause and effect relationships between parent and child nodes. We refer to a DAG topology along with all the CPDs as a \textit{causal network model}. In a causal network model, the graph topology and the CPDs can be viewed as specifying an algorithm for generating data under manipulation of the nodes. Here, we assume interventions that assign some subset of nodes to values that may be fixed or random, but are independent of all other nodes. Thus, when intervening on node $i$, we effectively sever all incoming edges to node $i$. The data generating process under such an intervention can be described as follows: each manipulated node is set to its assigned value and each non-manipulated node is drawn from its CPD, proceeding in an ordering of the nodes that ensures parents are drawn before their children. 
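The data-generating algorithm just described, in which manipulated nodes have their incoming edges severed and the remaining nodes are drawn parents-first, can be sketched as follows (a minimal illustration; the chain graph and deterministic CPDs are hypothetical, and in practice the CPDs would be stochastic):

```python
def sample_under_intervention(parents, cpds, interventions):
    """Draw one joint sample from a causal network under an intervention.

    parents: dict node -> tuple of parent nodes (assumed acyclic).
    cpds: dict node -> function taking the tuple of parent values and
          returning a sampled value for that node.
    interventions: dict node -> function returning a value independently of
                   all other nodes (incoming edges to these nodes are ignored).
    """
    values, done = {}, set()

    def draw(node):
        if node in done:
            return
        if node in interventions:          # severed incoming edges
            values[node] = interventions[node]()
        else:
            for p in parents[node]:        # draw parents before children
                draw(p)
            values[node] = cpds[node](tuple(values[p] for p in parents[node]))
        done.add(node)

    for node in parents:
        draw(node)
    return values

# Hypothetical chain A -> B -> C; intervening on B cuts the edge A -> B.
parents = {"A": (), "B": ("A",), "C": ("B",)}
cpds = {"A": lambda pa: 0, "B": lambda pa: pa[0] + 1, "C": lambda pa: pa[0] + 1}
x = sample_under_intervention(parents, cpds, interventions={"B": lambda: 5})
assert x == {"A": 0, "B": 5, "C": 6}
```

With an empty intervention set, the same routine reduces to ordinary ancestral sampling from the observational distribution.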
In the observational setting (that is, when no nodes are manipulated), this reduces to the usual graphical model specification; that is, the joint distribution of the nodes $\mathcal{V} = (X_1, \dots, X_V)$ factors as $p(X_1, \dots, X_V \mid \beta,\,G) = \prod_{i=1}^{V}p(X_i\mid X_{\mathrm{pa}(i)}, \beta_i,\,G)$ where $G$ is the graph topology and $\beta_i$ contains the parameters of the CPD for $X_i$. However, the algorithmic nature of the causal network also specifies that, when some subset of nodes $S$ is independently manipulated, the joint distribution is $p^*(X_1, \dots, X_V \mid \beta,\, G) = \prod_{i=1}^V p(X_i\mid X_{\mathrm{pa}(i)}, \beta_i,\, G)^{\mathds{1}(i\not\in S)} p^*(X_i\mid\beta_i^*)^{\mathds{1}(i\in S)}$ where $p^*(X_i\mid\beta_i^*)$ is the distribution of $X_i$ when intervening on node $i$. Here, we define $\beta := (\beta_1,\ldots,\beta_V,\beta_1^*,\ldots,\beta_V^*)$. For a data set $D = \big((X_{1,n}, \dots, X_{V,n}) : n=1,\ldots,N\big)$ consisting of $N$ samples of the $V$ nodes under interventions on subsets $S_1,\ldots,S_N$, respectively, the marginal likelihood is \begin{align}\label{eqn:general-marginal-likelihood} p^*(D|G) &= \int p^*(D|\beta,G)\,p(\beta|G)\, d\beta \\ &= \int \Big(\prod_{n=1}^N p^*(X_{1,n},\ldots,X_{V,n} \mid \beta,\,G)\Big)\,p(\beta|G)\, d\beta \\ &= p_O^*(D|G) p_S^*(D) \end{align} where \begin{align} p_O^*(D|G) &= \prod_{i=1}^{V} \int \big(\prod_{n=1}^{N} p(X_{i,n} \mid X_{\mathrm{pa}(i),n},\, \beta_i, \, G)^{\mathds{1}(i \not\in S_n)} \big)\, p(\beta_i | G)\, d\beta_i \\ p_S^*(D) &= \prod_{i=1}^{V} \int \Big(\prod_{n=1}^{N} p^*(X_{i,n}\mid \beta_i^*)^{\mathds{1}(i \in S_n)} \Big)\, p(\beta_i^*)\, d\beta_i^*. \end{align} When $p(\beta_i | G)$ and $p(\beta_i^*)$ are conjugate priors, these integrals can be computed in closed form. Since $p_S^*(D)$ does not provide any information about $G$, it is often omitted; however, we include it for theoretical purposes.
In this paper, we use categorical CPDs with Dirichlet priors, and thus the marginal likelihood $p^*(D\mid G)$ can be computed in closed form. Specifically, we assume that the CPD of each node is \begin{align} p(X_i = k \mid X_{\mathrm{pa}(i)} = j,\; \beta,\, G) = \beta_{i j k} \end{align} for $i\in\{1,\ldots,V\}$, $j\in\{1,\ldots,q_i\}$, and $k\in\{1,\ldots,r_i\}$. Here, $j$ enumerates the possible joint states of $X_{\mathrm{pa}(i)}$, and we abuse notation slightly by writing $X_{\mathrm{pa}(i)} = j$ to mean that $X_{\mathrm{pa}(i)}$ takes the $j$th possible state. We use the BDeu Dirichlet prior, $\beta_{i j} \sim \mathrm{Dirichlet}(\boldsymbol{\alpha}_{i j})$ with $\alpha_{i j k} = 1/(r_i q_i)$, following standard practice in the categorical setting \citep{Heckerman1995, CooperYoo1999, Eaton07_hybridMCMC}. Similarly, for the interventions, we assume $p^*(X_i = k \mid \beta_i^*) = \beta_{i k}^*$ and for simplicity, $\beta_i^* \sim \mathrm{Dirichlet}(1/r_i,\ldots,1/r_i)$. The BDeu prior has the favorable property of likelihood equivalence, which we use in Section \ref{sec:markov-tse-equivalent} \citep{Buntine1991,Heckerman1995}. For the Dirichlet-Categorical case with BDeu prior, \begin{align} \label{eqn:marginal-likelihood} p_O^*(D \mid G) &= \prod_{i=1}^{V} \prod_{j=1}^{q_i} \frac{\Gamma(\alpha_{i j})}{\Gamma(\alpha_{i j}+N_{i j})} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{i j k}+N_{i j k})}{\Gamma(\alpha_{i j k})} \end{align} where $\Gamma$ is the gamma function, $N_{i j k} = \sum_{n=1}^N \mathds{1}(i\not\in S_n,\, X_{\mathrm{pa}(i)}=j,\, X_{i,n}=k)$ is the number of samples in which node $X_i$ is observed (not manipulated) to have state $k$ when its parents have state $j$, $N_{i j} = \sum_{k=1}^{r_i} N_{i j k}$, and $\alpha_{i j} = \sum_{k=1}^{r_i} \alpha_{i j k}$. The OED method we propose can be used with other CPDs as well, so long as the marginal likelihood can be computed or approximated. 
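Equation~\ref{eqn:marginal-likelihood} is straightforward to evaluate in log space once the counts $N_{ijk}$ have been tabulated. A minimal sketch (the flat data layout below is a hypothetical choice for illustration):

```python
from math import lgamma

def log_marginal_likelihood(counts, r, q):
    """Log of p_O^*(D | G) under the BDeu prior alpha_ijk = 1 / (r_i * q_i).

    counts: dict (i, j) -> list of length r[i], where counts[(i, j)][k] is
            N_ijk, the number of samples with node i *observed* (not
            manipulated) in state k while its parents are in joint state j.
    r: dict node -> number of states r_i.
    q: dict node -> number of joint parent states q_i.
    """
    logp = 0.0
    for (i, j), n_ijk in counts.items():
        a_ijk = 1.0 / (r[i] * q[i])
        a_ij = r[i] * a_ijk                 # alpha_ij = sum_k alpha_ijk
        n_ij = sum(n_ijk)
        logp += lgamma(a_ij) - lgamma(a_ij + n_ij)
        logp += sum(lgamma(a_ijk + n) - lgamma(a_ijk) for n in n_ijk)
    return logp

# Hypothetical counts for a single binary root node (q_i = 1): 3 zeros, 1 one.
lp = log_marginal_likelihood({(0, 0): [3, 1]}, r={0: 2}, q={0: 1})
```

Working with `lgamma` avoids the overflow that the raw gamma-function ratios would cause for realistic sample sizes.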
\subsection{Intuition for partitions of graph space} \label{sec:graph-motivation} In the Twenty Questions example, we considered the partition induced over a set of objects by asking a question about the object of interest. In this section, to provide intuition for how this applies to the causal network setting, we illustrate examples of partitions that intervention experiments induce over a space of graphs. For expository purposes, suppose the true graph $G$ is known to be one of the four graphs shown in Figure~\ref{fig:perturb}; this example is inspired by an example from \cite{Pournara}. In general, the set of graphs under consideration, $\mathcal{G}$, would consist of all possible DAGs on nodes $\{A,B,C,D\}$ rather than just these four graphs. First, consider what features of a graph a single node intervention on node $e$ could help reveal. For example, we can expect an intervention on $A$ to have observable downstream effects on at least some of the descendants of $A$, but it would not affect any ancestors of $A$. Therefore, intervening on $A$ should give us information about which nodes are descendants of $A$. Thus, one possible choice for $f_e(G)$ is the set of descendants of the manipulated node. Since $f_e(G)$ induces a partition of the set of graphs, we refer to it as a partition scheme. To select which node to manipulate, we compare the information each candidate intervention is expected to yield with respect to a given partition scheme. Suppose $A$ is manipulated. Figure \ref{fig:perturb} shows the partition of graphs according to the descendants of $A$; that is, $G$ and $G'$ are in the same part if $f_e(G) = f_e(G')$. In the first three graphs, $A$ has no descendants, whereas in the last graph, $\{B, C, D\}$ are all descendants of $A$. Thus, as long as intervening on $A$ has an effect on one or more descendants, we could distinguish whether $G \in \{G_1, G_2, G_3\}$ or $G = G_4$ after sufficiently many replicates of the intervention on $A$. 
Meanwhile, if node $C$ is manipulated, a different partition over $\mathcal{G}$ is induced since $C$ has a different pattern of descendant sets than $A$; see Figure \ref{fig:perturb}. In general, an intervention partitions the set of graphs into equivalence classes such that (i) the graphs in each class are indistinguishable with respect to this intervention (corresponding to Condition~\ref{condition:cond-indep}), and (ii) graphs in different classes are distinguishable (corresponding to Condition~\ref{condition:identifiable}). For the Dirichlet-Categorical model that we use, the equivalence classes induced by the likelihood have an elegant graph-based characterization; see Section \ref{sec:markov-tse-equivalent}. However, we have also found several other partition schemes to be useful in practice; see Section \ref{sec:diffPartitions}. After an intervention is performed, the generated data provide evidence to suggest which parts of the partition are compatible with the experimental data --- specifically, parts that are more compatible with the data will have higher posterior mass. Roughly speaking, we would like to choose an experiment that narrows down the set of compatible graphs as much as possible. For instance, in the toy example in Figure~\ref{fig:perturb}, intervening on $C$ is preferable to intervening on $A$, since $C$ induces a finer partition of $\mathcal{G}$. However, in general it is also important to consider the posterior probability of the graphs given the data from any previous experiments, since there is no point in finely partitioning regions of the space with very low probability. To make this precise, we apply our general entropy-based criterion from Section \ref{sec:general-criterion} to the causal network setting, as described next. \begin{figure} \centering \includegraphics[scale = 0.60]{Images/grayscaleGraphs.PNG} \caption{Top panel: Graphs in $\mathcal{G}$. Middle panel: Partition induced by intervening on $A$. 
The manipulated node is shown in black while descendants of the manipulated node are shaded grey. Bottom panel: Partition induced by intervening on $C$.} \label{fig:perturb} \end{figure} \subsection{Applying the experiment selection criterion to causal networks} \label{sec:graph-criterion} Specializing from the general setting of Section~\ref{sec:general-criterion} to the case of causal networks, we define the unknown parameter of interest to be $\theta := G$ and let $f_e(G)$ be a partition scheme. Further, in the Dirichlet-Categorical case, we define $\nu := \beta$, that is, the nuisance parameter $\nu$ is the collection of CPD parameters $\beta$. Our goal is to perform experiments that make the posterior on graphs concentrate at the true graph as quickly as possible. We quantify concentration using the entropy of the posterior on graphs, $H(G)$, where $G$ is distributed according to the posterior given all experiments so far. Thus, we wish to perform experiments that minimize $H(G)$. In many cases, inferring the entire graph $G$ is overly ambitious. For instance, if the number of nodes is even moderately large, the number of possible graphs is extremely large, making it infeasible to infer $G$ completely. However, often, one only needs to infer a specific feature of $G$, such as whether node $X_1$ is an ancestor of node $X_2$, or whether $X_3$ mediates the effect of $X_1$ on $X_2$. In such cases, one can instead define $\theta$ to be a function of $G$, say, $\theta:=\varphi(G)$ and focus on minimizing $H(\varphi(G))$ rather than $H(G)$. To prioritize experiments, we apply the general criterion derived in Section \ref{sec:general-criterion}. Specifically, given samples $G_1, \dots, G_T$ from the current posterior, we choose the next experiment $e$ to maximize \begin{align}\label{eqn:hhat} \hat{h}_e = -\sum_y \hat{p}_e(y) \log \hat{p}_e(y) \end{align} where $\hat{p}_e(y) = \frac{1}{T}\sum_{t = 1}^T \mathds{1}(f_e(G_t) = y)$. 
Thus, we choose the intervention that maximizes the posterior entropy (under the current posterior) of the partition induced by $f_e$. If Condition~\ref{condition:general} is satisfied, then this minimizes the approximate expected entropy of the new posterior given the additional data from the experiment. Meanwhile, if Condition~\ref{condition:general} is not fully satisfied, then this approach is not guaranteed to reduce the entropy optimally, but is still a sensible way of choosing interventions that quickly reduce the entropy. \section{Practical implementation of the method}\label{sec:practical-implementation} In this section, we provide practical details on implementing the criterion in Section \ref{sec:graph-criterion}, including specifics regarding partition schemes, equivalence classes of graphs, sampling from the posterior on graphs, and an overall algorithm. \subsection{Partition schemes} \label{sec:diffPartitions} In the context of graphs, Condition~\ref{condition:cond-indep} is that given $f_e(G)$, the graph $G$ is conditionally independent of data from experiment $e$, and Condition~\ref{condition:identifiable} is that $f_e(G)$ is identifiable with respect to the distribution of the data under experiment $e$. Condition~\ref{condition:finite} is always satisfied since there are finitely many graphs $G$. While in theory we use Condition~\ref{condition:general} to justify the method, in practice Conditions \ref{condition:cond-indep} and \ref{condition:identifiable} are not strictly necessary. Recall that the optimality theory is based on the expected information gain from asymptotically many replicates of the next selected experiment. Thus, in practice, it may be possible to obtain excellent performance using a partition scheme that violates Condition \ref{condition:cond-indep} or \ref{condition:identifiable}. Consequently, we define a variety of partition schemes here, and we empirically compare their performance in Section~\ref{sec:simulation-results}.
We consider experiments $e$ that intervene on a single node, and for brevity we use $e$ to denote the manipulated node. Consider the following partition schemes. \begin{enumerate} \item Markov equivalence class (MEC): $f_e(G)$ equals the Markov equivalence class of graphs when intervening on node $e\in\mathcal{V}$; see Section \ref{sec:markov-tse-equivalent}. \item Child Set (CS): $f_e(G)$ equals the set of children of node $e\in\mathcal{V}$. \item Descendant Set (DS): $f_e(G)$ equals the set of descendants of node $e\in\mathcal{V}$. \item Parent Set (PS): $f_e(G)$ equals the set of parents of node $e\in\mathcal{V}$. \end{enumerate} We also consider the following slightly different approach. \begin{enumerate}\setcounter{enumi}{4} \item Pairwise Child (PWC): Maximize $\sum_{v\in\mathcal{V}} H(f_{e,v}(G))$ where $f_{e,v}(G) = \mathds{1}((e,v)\in G)$ is the indicator of whether $G$ has an edge from $e$ to $v$. \end{enumerate} \subsection{Markov equivalence classes} \label{sec:markov-tse-equivalent} Two DAGs are said to be \textit{Markov equivalent} if they represent the same set of conditional independence relations. While all of the conditional independence relationships entailed by a graph can be computed using the d-separation algorithm \citep{Pearl88}, the following elegant result provides a simpler way to determine whether two DAGs are Markov equivalent based on their topology. \begin{theorem}[\cite{Verma91}] \label{theorem-markov-equiv} Two DAGs $G_1$ and $G_2$ are Markov equivalent if and only if they have the same skeleton and the same v-structures. \end{theorem} The \textit{skeleton} of a graph is its topology ignoring edge directions. A \textit{v-structure} is a triple of nodes $(x,y,z)$ with topology $x \rightarrow y \leftarrow z$, where there is no edge connecting $x$ and $z$.
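Theorem~\ref{theorem-markov-equiv} yields a direct computational test for Markov equivalence. A minimal sketch, with graphs represented as hypothetical parent-set dictionaries:

```python
from itertools import combinations

def skeleton(dag):
    """Undirected edge set of a DAG given as a dict node -> set of parents."""
    return {frozenset((u, v)) for v, pa in dag.items() for u in pa}

def v_structures(dag):
    """Triples x -> y <- z such that there is no edge between x and z."""
    skel = skeleton(dag)
    vs = set()
    for y, pa in dag.items():
        for x, z in combinations(sorted(pa), 2):
            if frozenset((x, z)) not in skel:
                vs.add((x, y, z))
    return vs

def markov_equivalent(g1, g2):
    """Verma-Pearl criterion: same skeleton and same v-structures."""
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

# A -> B -> C and A <- B <- C share a skeleton and have no v-structures, so
# they are Markov equivalent; A -> B <- C has a v-structure and is not.
chain1 = {"A": set(), "B": {"A"}, "C": {"B"}}
chain2 = {"A": {"B"}, "B": {"C"}, "C": set()}
collider = {"A": set(), "B": {"A", "C"}, "C": set()}
assert markov_equivalent(chain1, chain2)
assert not markov_equivalent(chain1, collider)
```

Applying this test to $G^e$ (the graph with incoming edges to $e$ removed) gives the equivalence classes used by the MEC partition scheme.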
In general, observational data alone cannot distinguish between Markov equivalent graphs unless one assumes specific error distributions or functional model classes \citep{Peters2011}. A rich literature exists on Markov equivalence; see, for example, \cite{Andersson1997} and \cite{Chickering:UAI96}. While Markov equivalent graphs represent the same conditional independence relationships, they differ in the causal relationships they encode since it is the direction of arrows, not just the skeleton, that is important for causal interpretation. Interventions can help distinguish among graphs in the same Markov equivalence class. However, even after an intervention is performed, some graphs may still be indistinguishable. \cite{Hauser2012} consider performing a sequence of interventions, and they provide a generalization of Theorem~\ref{theorem-markov-equiv} that characterizes the equivalence classes of graphs that are indistinguishable with respect to the whole sequence of interventions. For our approach, however, we only need to consider the partition induced by a single candidate intervention (rather than the whole sequence of interventions), since the information from previous interventions is already represented in the posterior distribution. Thus, for our approach, a natural choice of partition scheme is to define $f_e(G)$ to be the Markov equivalence class of $G^e$, where $G^e$ is the DAG obtained from $G$ by removing all edges from $\mathrm{pa}(e)$ to $e$. This is referred to as the ``MEC'' scheme in Section~\ref{sec:diffPartitions}. Following the notation of Sections~\ref{sec:general-criterion} and \ref{sec:criterion-causal-network-models}, we write $P_{G,\beta}$ for the distribution of $(X_1,\ldots,X_V)$ given graph $G$ and CPD parameters $\beta = (\beta_1,\ldots,\beta_V,\beta_1^*,\ldots,\beta_V^*)$. Define $\beta^e$ to be a modified copy of $\beta$ in which $\beta_e^*$ takes the place of $\beta_e$ and $\beta_e$ takes the place of $\beta_e^*$.
Thus, when intervening on node $e$, the distribution can be written as $P_{G^e,\beta^e}$. Consider a sequence of interventions in which a single node is manipulated at a time. For instance, suppose we have performed $N_k$ replicates intervening on node $i_k$ for $k = 1,\ldots,K$, and we are considering intervening on node $i'$ for the next set of $N'$ replicates. The joint model is then \begin{align} \label{eqn:sequential-model} \begin{split} & (G,\beta) \sim \pi \\ & X_1^e,\ldots,X_N^e \mid G,\beta \text{ i.i.d.} \sim P_{G^e,\beta^e} \text{ for $(e,N) \in \{(i_1,N_1),\ldots,(i_K,N_K),(i',N')\}$} \end{split} \end{align} where $P_{G,\beta}$ is the categorical model, $\pi(\beta|G)$ is the BDeu-based prior defined in Section~\ref{subsec:causal-network-models}, and $\pi(G)$ is an arbitrary prior on DAGs. \begin{theorem}\label{theorem:cond-indep} Under the joint model in Equation~\ref{eqn:sequential-model}, if $f_e(G)$ is the Markov equivalence class of $G^e$ then $$ X_{1:N'}^{i'} \perp\!\!\!\!\perp G \mid f_{i'}(G),f_{i_1}(G),\ldots,f_{i_K}(G). $$ \end{theorem} \begin{theorem}\label{theorem:identifiable} Assume the joint model in Equation~\ref{eqn:sequential-model}, and let $D = (X_{1:N_1}^{i_1},\ldots,X_{1:N_K}^{i_K})$ denote the data observed so far. If $f_e(G)$ is the Markov equivalence class of $G^e$ then there is a function $g$ such that $g(P_{G^{i'},\beta^{i'}}) = f_{i'}(G)$ almost surely when $(G,\beta)\sim p(G,\beta\mid D)$. \end{theorem} Now, to employ Theorem~\ref{theorem:general}, observe that under the model in Equation~\ref{eqn:sequential-model}, if we condition on $D$ then we obtain the following model: \begin{align*} & (G,\beta) \sim p(G,\beta \mid D) \\ & X_1^{i'},\ldots,X_N^{i'} | G,\beta \text{ i.i.d. } \sim P_{G^{i'},\beta^{i'}}. \end{align*} This follows the form of the abstract model in Section~\ref{sec:general-criterion}, with appropriate notational substitutions. As above, let $f_e(G)$ be the Markov equivalence class of $G^e$. 
Condition~\ref{condition:cond-indep} is that $X_{1:N'}^{i'} \perp\!\!\!\!\perp G \mid f_{i'}(G)$ in this conditional model given $D$, or equivalently, $X_{1:N'}^{i'}\perp\!\!\!\!\perp G \mid f_{i'}(G),D$ under the joint model in Equation~\ref{eqn:sequential-model}. By Theorem~\ref{theorem:cond-indep}, $X_{1:N'}^{i'} \perp\!\!\!\!\perp G \mid f_{i'}(G),f_{i_1}(G),\ldots,f_{i_K}(G)$, so we can expect that $X_{1:N'}^{i'}\perp\!\!\!\!\perp G \mid f_{i'}(G),D$ holds approximately when $N_1,\ldots,N_K$ are sufficiently large, since $D = (X_{1:N_1}^{i_1},\ldots,X_{1:N_K}^{i_K})$ and $X_{1:N_k}^{i_k}$ pertains to $f_{i_k}(G)$. Condition~\ref{condition:identifiable} is that there exists $g$ such that $g(P_{G^{i'},\beta^{i'}}) = f_{i'}(G)$ almost surely when $(G,\beta)\sim p(G,\beta\mid D)$, which is precisely what Theorem~\ref{theorem:identifiable} shows. Finally, Condition~\ref{condition:finite} is that $f_{i'}(G)$ takes finitely many values, which is true since there are only finitely many graphs $G$ on $V$ nodes. Therefore, Theorem~\ref{theorem:general} indicates that selecting the next intervention using the strategy in Section~\ref{sec:selection-of-experiments} with $f_e(G)$ chosen to be the Markov equivalence class of $G^e$ is a natural choice to optimally reduce entropy, under the asymptotic approximation that the number of replicates in each experiment is sufficiently large. \subsection{Sampling from the posterior distribution on graphs} \label{sec:sampleDAGs} A large body of work exists on MCMC methods for sampling from $p(G|D)$. This is a challenging task since the number of DAGs increases super-exponentially with the number of nodes and the posterior on graphs is often highly multi-modal. Some have proposed searching the space of graphs using local proposals that add, delete, or reverse edges at random \citep{Madigan1995} and others have improved chain mixing by sampling over the space of node orderings \citep{Friedman2003, Ellis2006}. 
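To make the flavor of these structure-MCMC samplers concrete, here is a bare-bones Metropolis sketch over DAGs. It is an illustration only, not the BDAGL sampler we use: for simplicity it employs a symmetric single-edge ``toggle'' proposal rather than add/delete/reverse moves (which require a Hastings correction for unequal neighborhood sizes), and `log_score` is a hypothetical stand-in for the log unnormalized posterior $\log \pi(G) + \log p(D \mid G)$.

```python
import random
from math import exp

def creates_cycle(edges, u, v):
    """Would adding u -> v create a directed cycle? True iff v already
    reaches u by a directed path."""
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(w for (x, w) in edges if x == node)
    return False

def structure_mcmc(nodes, log_score, n_iter, seed=0):
    """Metropolis sampler over DAGs with a symmetric single-edge 'toggle'
    proposal: pick an ordered pair (u, v) uniformly and propose adding or
    deleting the edge u -> v; proposals that would create a cycle are
    rejected outright (the target puts zero mass on cyclic graphs)."""
    rng = random.Random(seed)
    g = frozenset()  # start from the empty graph
    samples = []
    for _ in range(n_iter):
        u, v = rng.sample(nodes, 2)
        if (u, v) in g:
            proposal = g - {(u, v)}
        elif not creates_cycle(g, u, v):
            proposal = g | {(u, v)}
        else:
            samples.append(g)  # cyclic proposal: stay put
            continue
        delta = log_score(proposal) - log_score(g)
        if delta >= 0 or rng.random() < exp(delta):
            g = proposal
        samples.append(g)
    return samples
```

Because the toggle proposal is symmetric, the acceptance ratio reduces to the score difference; in practice mixing over the multi-modal DAG posterior is the hard part, which motivates the data-driven global proposals described next.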
We use a clever MCMC algorithm developed by \citet{Eaton07_hybridMCMC} that uses dynamic programming (DP) to construct proposals. This method explores the space of DAGs using a Metropolis-Hastings algorithm with a proposal distribution that is a mixture of local moves (edge deletions, additions, or reversals) and a global move that proposes a new graph in which an edge exists between two nodes with probability equal to the exact marginal posterior edge probability, computed using DP. Key to the DP algorithm's ability to compute exact marginal posterior edge probabilities is the assumption of a ``modular prior'' over structures. Rather than directly specifying a prior over DAGs, a modular prior requires specifying a prior over node orderings and a prior that gives weight to sets of parents (and not to their relative order). Together these terms define a joint prior over graphs and orders. Defining the prior in this way allows the contribution to the marginal likelihood for nodes with the same parent sets to be cached and re-used for efficient exact computation, regardless of the orderings of the parents; see \citet{Koivisto2006} for details of the DP algorithm and see \citet{Koivisto04}, \citet{Friedman2003}, and \citet{Ellis2006} for further discussion of priors on orderings and graphs. A modular prior tends to favor graphs that are consistent with more orderings, such as fully disconnected graphs and tree structures. In fact, the modular prior favors tree structures over chains even if the two structures are Markov equivalent. For instance, the tree structure $1 \leftarrow 2 \rightarrow 3$ has higher prior probability than the chain $1 \rightarrow 2 \rightarrow 3$ under a modular prior since the tree structure is consistent with two node orderings \citep{Eaton07_hybridMCMC}.
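The ordering-count bias just described is easy to verify by brute force. The following small check (an illustration only, not part of the DP machinery) counts the node orderings consistent with a DAG:

```python
from itertools import permutations

def consistent_orderings(nodes, edges):
    """Count topological orderings: permutations of the nodes in which
    every edge u -> v places u before v."""
    def respects(order):
        pos = {node: i for i, node in enumerate(order)}
        return all(pos[u] < pos[v] for u, v in edges)
    return sum(respects(p) for p in permutations(nodes))

# The tree 1 <- 2 -> 3 admits two orderings, (2,1,3) and (2,3,1),
# while the Markov-equivalent chain 1 -> 2 -> 3 admits only one.
tree_count = consistent_orderings([1, 2, 3], {(2, 1), (2, 3)})
chain_count = consistent_orderings([1, 2, 3], {(1, 2), (2, 3)})
```

Under a modular prior with a uniform prior over orderings, the tree therefore receives twice the weight of the Markov-equivalent chain.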
While one may want to use a uniform prior over DAGs in the absence of prior knowledge, \cite{Ellis2006} and \cite{Eaton07_hybridMCMC} show how a uniform prior over orderings and flat prior over parent sets together encode a highly nonuniform prior over DAGs. The hybrid MCMC-DP approach that we use \citep{Eaton07_hybridMCMC} overcomes this limitation of the DP algorithm. With MCMC-DP, we can use an arbitrary prior on graphs and draw valid samples from $p(G|D)$, while benefiting from a fast, data-driven proposal distribution to help traverse the DAG space. We implemented our method in MATLAB (version 2017a) and we use the BDAGL package \citep{Eaton07_hybridMCMC} to sample from $p(G|D)$ using the MCMC-DP algorithm. Source code is available online at https://github.com/mzemplenyi/OED-graphical-models. \subsection{Overall algorithm} The inputs to our proposed algorithm are (i) a set of candidate experiments $\mathcal{E}$, (ii) a mechanism for generating i.i.d.\ samples $(X_1,\ldots,X_V)$ from $P_{e,0}$, the true distribution under experiment $e\in\mathcal{E}$, and (iii) a partition scheme $f_e(G)$ for each experiment $e\in\mathcal{E}$. For the first experiment, we generate observational data by not intervening on any nodes. Each subsequent experiment sets a single node from $\mathcal{E}$ to a fixed value. The algorithm proceeds as follows. Let $D$ denote the collection of data from the experiments so far. \begin{enumerate} \item Obtain posterior samples from $\pi(G) = p(G|D)$ and approximate the posterior entropy, $H(G)$. \item Check the stop criteria. Stop the algorithm if either: \begin{enumerate} \item $H(G)$ falls below a given entropy tolerance threshold, or \item the maximum number of allowed experiments has been reached. \end{enumerate} Otherwise, continue. 
\item\label{item:enumpartition} For each $e \in \mathcal{E}$, enumerate the partition \textit{over the sampled graphs} induced by $e$ and calculate $\hat{h}_e$, the approximate posterior entropy over the partition (Equation~\ref{eqn:hhat}). \item Select the experiment $e$ that maximizes $\hat{h}_e$ as the next intervention experiment to perform. \item Generate data for experiment $e$: draw $N$ i.i.d.\ samples from $P_{e,0}$. \item Combine the new data with the existing data and repeat from the beginning. \end{enumerate} For computational efficiency, note that in step \ref{item:enumpartition}, it is only necessary to consider those parts $A$ in the partition $\mathcal{A}_e$ that contain one or more posterior samples. Since it is not necessary to consider all parts in the partition, this can provide a computational advantage in cases where there are an intractable number of parts. \section{Previous work}\label{sec:previous-work} \label{sec:methodsreview} Previously proposed methods for learning causal network models from observational and interventional data in the non-OED setting typically fall into one of two categories: (1) constraint-based methods, such as the PC-algorithm \citep{Spirtes2001}, that test for conditional independence constraints in the data and select models that match those constraints, and (2) score-based methods, wherein the space of structures is searched for those that are most supported by the data, as quantified by scores such as the Bayesian marginal likelihood or BIC. More recently, OED methods, also referred to as active learning methods, have been developed for both approaches. \cite{HeGeng2008}, \cite{Eberhardt08}, and \cite{Hauser2012b:EuroPGM} propose constraint-based active learning methods that first use observational data or prior knowledge to construct a partially directed acyclic graph (PDAG), also called an essential chain graph.
They then use graph-theoretic results to select the interventions required to orient all edges in the essential chain graph, using variations on criteria that generally seek to minimize the number of undirected edges in the post-intervention equivalence class of graphs. These algorithms take as a starting point a known observational essential graph, which would require infinite observational data in principle, and in practice is estimated from finite observational data. However, they often do not perform as well in finite sample settings where estimation errors introduced in the initial chain graph can lead to an incorrectly estimated DAG. \cite{Hauser2012} demonstrate that, in the finite sample setting, estimation errors can be reduced by using interventional data to refine not only the directionality of uncompelled edges in the chain graph (as done by He and Geng), but also the skeleton of the chain graph. \cite{TongKoller} and \cite{Murphy2001} develop score-based active learning methods for Bayesian networks. They use MCMC to sample from the space of node orderings (Tong and Koller) or graphs (Murphy) along with a decision-theoretic framework to select interventions. Both methods are computationally expensive since they involve computing the predictive density for each sampled graph subject to each possible intervention. Other methods forgo the predictive sampling step and instead use selection criteria with lower computational burdens. \cite{Pournara} consider the equivalence classes of high-scoring networks (determined by a greedy hill-climbing search) and select interventions that tend to partition transition sequence equivalence classes into smaller and smaller subclasses. \cite{LiLeong} propose a `non-symmetrical entropy' criterion closely related to Tong and Koller's loss function, but use the DP algorithm rather than MCMC to calculate edge probabilities between nodes. 
\cite{Cho} adapt the active learning framework of Murphy (2001) to the Gaussian Bayesian network setting. \cite{Ness} use a Bayesian framework that allows one to directly encode prior causal knowledge about each edge, which then induces a prior over graphs. Their method, bninfo, takes a set of highly scoring PDAGs and returns the minimally-sized batch of interventions that is expected to correctly orient the greatest number of edges; this method shares elements of both the score-based and constraint-based methods. The previous work that is most similar to ours is the method of \citet{Almudevar}, who explore the performance of an entropy-based criterion that uses the idea of partitioning the space of graphs to minimize the entropy on the posterior on graphs. While our method is based on a similar theoretical justification as that of \citet{Almudevar}, we generalize the approach to a large class of partition schemes and we improve performance by using a more efficient MCMC procedure. Further, in contrast to the limited simulation study of \citet{Almudevar}, we provide a more extensive set of empirical results on a wide variety of networks, we compare with other leading algorithms, and we make our software publicly available. For a comprehensive review of optimal experimental design methods for other models, including Boolean networks and differential equation models, see \citet{Sverchkov}. \section{Simulation results}\label{sec:simulation-results} \label{sec:simulations} In this section we first evaluate the performance of our OED method under various partition schemes. Then we compare our method to the methods of \citet{LiLeong} and \citet{Ness}. 
Throughout, we evaluate performance using two metrics: \begin{enumerate} \item Mean Hamming distance: After each experiment, we calculate the posterior probability of an edge from node $X_i$ to node $X_j$ via: \begin{align} p(i \rightarrow j\mid D) = \sum_{G \in \mathcal{S}_{i \rightarrow j}} p(G|D) \end{align} where $\mathcal{S}_{i \rightarrow j}$ is the set of graphs containing the edge $i \rightarrow j$. We then construct the median probability graph, defined as the graph containing only those edges for which $p(i \rightarrow j\mid D) \geq 0.5$ \citep{Castelletti2018, Peterson2015}. The Hamming distance between the median probability graph and the ground truth network is equal to the number of falsely detected edges (false positives) plus the number of missing edges (false negatives) in the median probability graph. We then take the average of this distance over all simulations. \item Mean true positive rate (TPR): After each experiment, we calculate the proportion of edges in the ground truth network that are correctly detected in the median probability graph. We then find the average of this proportion over all simulations. \end{enumerate} For all figures, error bars represent the standard error of the mean over 50 simulations. \subsection{Comparison of partition schemes} Our first simulation study compared the five different ways, defined in Section \ref{sec:diffPartitions}, to partition graphs sampled from the posterior $p(G|D)$. We randomly generated a discrete graph with 10 binary nodes and generated observational and intervention samples from this ground truth network; see Figure \ref{fig:graph-structures} for the graph's structure. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[scale = 0.6]{Images/Simulations/TenNode/tenNodePPT.PNG} & \hspace{1cm} \includegraphics[scale = 0.6]{Images/Simulations/Asia/asiaNetworkPPT.PNG} \end{tabular} \caption{Left: Structure of ten-node network.
Right: Structure of Asia network as defined by the Bayesian Network Repository available in the \emph{bnlearn} R package \citep{Scutari}.} \label{fig:graph-structures} \end{figure} For each partition scheme, we ran $n_{sims} = 50$ simulations wherein each simulation consisted of a series of $n_{exp} = 7$ experiments. For the first experiment, we generated $n_{obs} = 1000$ observational samples to form the initial dataset $D$. For each of the six subsequent experiments, we generated $n_{intv} = 1000$ additional intervention samples and appended them to $D$, where the manipulated node was selected via our entropy-based selection criterion based on the partition scheme under consideration. After generating data for each experiment, we used MCMC-DP to draw 250,000 posterior samples from $p(G|D)$, discarded the first 150,000 samples as burn-in, and used the remaining 100,000 samples for posterior inference. We used a uniform prior over graphs and did not allow a node to be manipulated more than once. In addition to the five partition schemes, we also evaluated a ``random learner'' that randomly selected the next node to be manipulated, rather than using an entropy criterion. Figure \ref{fig:internal} shows the mean Hamming distance and mean TPR for the five entropy-based methods and the random learner on the 10-node network. Each of the entropy-based methods performed better than the random learner according to both metrics. For this network, the entropy-based methods all performed similarly. We found this to be the case in simulations using other randomly generated networks as well. The benefit of an entropy-driven experimental design method over randomly selected interventions is evident from Figure \ref{fig:internal}. To fall within a given mean Hamming distance from the true graph required far fewer interventions using the OED methods compared to the random learner. 
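For reference, the two evaluation metrics can be computed directly from posterior samples. A minimal illustrative sketch (graphs again as sets of directed edges):

```python
from collections import Counter

def median_probability_graph(graph_samples):
    """Edges whose posterior probability, estimated as the fraction of
    sampled graphs containing them, is at least 0.5."""
    n = len(graph_samples)
    counts = Counter(e for g in graph_samples for e in g)
    return {e for e, c in counts.items() if c / n >= 0.5}

def hamming_distance(estimated, truth):
    """False positives plus false negatives, counting directed edges."""
    return len(estimated - truth) + len(truth - estimated)

def true_positive_rate(estimated, truth):
    """Fraction of true edges recovered in the estimated graph."""
    return len(estimated & truth) / len(truth)
```

For example, if edge $(0,1)$ appears in all sampled graphs but $(1,2)$ appears in only a third of them, the median probability graph keeps only $(0,1)$; against a truth of $\{(0,1),(1,2)\}$ this gives Hamming distance 1 and TPR 0.5.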
\begin{figure} \centering \includegraphics[scale = 0.65]{Images/Simulations/TenNode/combined_tenNode_MEC.pdf} \caption{Mean Hamming distance and mean TPR for five entropy-based OED methods (each using a different partition scheme) and the random learner on a ten-node binary network. Settings: $n_{sim} = 50$, $n_{exp}$ = 7, $n_{obs} = 1000$, $n_{intv} = 1000$. MEC = Markov equivalence class; CS = child set; DS = descendant set; PS = parent set; PWC = pairwise child; R = random learner.} \label{fig:internal} \end{figure} \subsection{Comparison with other methods} In addition to comparing various partition schemes, we also compared our OED algorithm to two other active learning methods. The first is a method proposed by \citet{LiLeong} that also uses an entropy-based criterion. In fact, what Li and Leong refer to as their non-symmetrical entropy criterion for selecting interventions is equivalent to the pairwise child (PWC) entropy criterion described in Section \ref{sec:diffPartitions}. The difference between the methods, however, is that \citet{LiLeong} do not use MCMC to sample from the posterior on graphs. Rather, they evaluate their criterion using exact edge probabilities computed using a DP algorithm \citep{Koivisto2006}; see Section~\ref{sec:sampleDAGs} for details on the DP algorithm and its assumptions. We refer to the method of Li and Leong as ``DP'' in subsequent figures. The second method, ``bninfo'' \citep{Ness}, evaluates the expected causal information gain of candidate interventions and outputs a minimally-sized batch of interventions expected to maximize that gain. \citet{Ness} define causal information gain as the increase in correctly oriented edges in the causal network. Their algorithm constructs the recommended batch of interventions one node at a time, in descending order of expected causal information gain.
In order to compare our method with bninfo, for a given causal network, we took the sequence of interventions that bninfo recommended and used MCMC-DP to sample from the posterior distribution on graphs between each recommended intervention. This allowed us to construct the median probability graph and calculate the Hamming distance and TPR after each intervention that bninfo recommended. Note that \citet{Ness} provide a way to encode prior knowledge on each edge in the graph, but to facilitate comparison with the other methods, we use a uniform prior on the graph topology. \subsubsection{Asia network} We first assessed performance of the various methods on the Asia network, a commonly used network for comparing network inference methods, first described by \citet{Lauritzen1998}. The Asia network consists of eight binary nodes that describe the relationship between lung diseases and visits to Asia (Figure \ref{fig:graph-structures}). We used the conditional probability table provided by the \emph{bnlearn} R package to generate observational and interventional data \citep{Scutari}. Figure \ref{fig:asia} compares the performance of our method (using the MEC partition scheme), DP, bninfo, and the random learner. For this simulation study we used the following settings: $n_{sim} = 50$, $n_{exp} = 9$, $n_{obs} = 300$, $n_{intv} = 300$. For the MCMC methods, we drew 250,000 samples, discarded the first 150,000 samples, and used the remaining 100,000 samples for inference. For the sake of clarity, we omitted the other partition schemes since they performed similarly to the MEC partition scheme. After the first intervention experiment, the four methods performed nearly identically. The methods then diverged at experiment 3, with the random learner and bninfo lagging behind MEC and DP. 
Note that the maximum batch size of interventions recommended by bninfo across the simulations consisted of seven nodes, so the bninfo results in Figure \ref{fig:asia} only extend to the eighth experiment (the observational experiment followed by seven interventions). For the other methods, by the ninth experiment, all 8 nodes had been manipulated since we did not allow for repeat interventions. Thus, it makes sense that the MEC and random learner results align by the last experiment; they have each sampled the same data, albeit in different orders. Even though the DP method had also sampled intervention data for all nodes by the ninth experiment, it does not converge with the other methods because the DP method uses a different prior over graphs (a non-uniform prior induced by its modular joint prior over node ordering and parent sets as described in Section \ref{sec:sampleDAGs}). \begin{figure} \centering \includegraphics[scale = 0.6]{Images/Simulations/Asia/combined_asia_MEC.pdf} \caption{Mean Hamming distance and mean TPR on the 8-node Asia network with the following settings: $n_{sim} = 50$, $n_{exp}$ = 9, $n_{obs} = 300$, $n_{intv} = 300$. MEC = our method with MEC partition scheme; DP = dynamic programming method of \citet{LiLeong}; R = random learner.} \label{fig:asia} \end{figure} \subsubsection{Effect of network topology on inference} Next, we explored the effect of network topology on performance of MEC, PWC, DP, bninfo, and the random learner. Here, as above, ``PWC'' refers to using our method with the PWC partition scheme. We include PWC for direct comparison with the DP method in order to illustrate how two methods that employ the same entropy criterion for selecting interventions---but differ in how posterior edge probabilities are calculated (see Section~\ref{sec:sampleDAGs})---may perform differently depending on network topology. 
We considered two 8-node networks (Figure \ref{fig:chain-tree-structures}), one with a chain structure and one with a tree structure. For the chain network, we used the settings $n_{sim} = 50$, $n_{exp}$ = 7, $n_{obs} = 1000$, $n_{intv} = 1000$. The DP algorithm performed worse than the other methods, including the random learner, until the fourth experiment. Interestingly, DP performed worse than PWC, even though the two use the same entropy criterion. This can be explained by the fact that, as described in Section \ref{sec:sampleDAGs}, the non-uniform prior over graphs that the DP algorithm uses in order to achieve its computational efficiency puts less mass on chain structures relative to other structures. Meanwhile, the hybrid MCMC-DP approach we used in the PWC method places no such constraints on the prior over structures; thus, it performed better in this setting, where a uniform prior on graphs was used. For the tree network, we used the settings $n_{sim} = 50$, $n_{exp}$ = 8, $n_{obs} = 200$, $n_{intv} = 200$. MEC outperformed the other methods according to both mean Hamming distance and TPR, but all methods performed well, falling within a mean Hamming distance of one from the true network by the third experiment. While the DP method initially had a higher mean Hamming distance and lower TPR than the other methods, by the fourth experiment DP outperformed the PWC method. This illustrates that the DP method can work better than PWC when the true graph is more probable under its implicitly assumed prior.
\begin{figure} \centering \begin{tabular}{ll} {\includegraphics[scale = 0.4]{Images/Simulations/Line8/network.png}} & {\includegraphics[scale = 0.4]{Images/Simulations/Tree8/network.png}} \end{tabular} \caption{Topology for the 8-node chain structure (left) and 8-node tree structure (right).} \label{fig:chain-tree-structures} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.6]{Images/Simulations/Line8/combined_line8_MEC.pdf} \caption{Mean Hamming distance and mean TPR for the 8-node chain structure. MEC = our method with MEC partition scheme; PWC = our method with pairwise child partition scheme; DP = dynamic programming method of \citet{LiLeong}; R = random learner.} \label{fig:8-node-chain} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.6]{Images/Simulations/Tree8/combined_tree8_MEC.pdf} \caption{Mean Hamming distance and mean TPR for the 8-node tree structure. MEC = our method with MEC partition scheme; PWC = our method with pairwise child partition scheme; DP = dynamic programming method of \citet{LiLeong}; R = random learner.} \label{fig:8-node-tree} \end{figure} \section{Application to a cell-signaling network}\label{sec:application} OED methods for graphical models are often developed with the goal of inferring biological networks, such as gene regulatory networks or cell-signaling networks \citep{Cho, Ness, Pournara, Sverchkov}. Especially in light of recent advances in the precision of gene-editing technologies, the ability to iterate between experimentation and analysis by adaptively selecting experiments is a promising avenue for reconstructing biological networks. In this section, we first apply our OED method, as well as the DP and bninfo methods, to human T-cell signaling data collected by \citet{Sachs05}. We then explore the performance of the methods on a simulated data set based on the Sachs network. 
\subsection{Analysis on real experimental data from the Sachs network} The Sachs data set consists of concentration levels measured via flow cytometry for 11 proteins involved in activating the immune system. The true network describing the relationship between these proteins is unknown. Many network inference studies have explored this data set, but what they define as the benchmark graph often varies by 1--2 edges; this is likely because the biologists' consensus network is complex and contains a bidirectional relationship that would induce a cycle (Figure~\ref{fig:sachsNetwork}, left panel). Here, we use the benchmark network provided by \cite{Scutari} (Figure~\ref{fig:sachsNetwork}, right panel) as well as the discretized data set available via the \emph{bnlearn} package. A portion of the data, 1800 samples, was gathered under no targeted interventions and the remaining 3600 samples were collected after activating or inhibiting five signaling proteins: Mek12, Pip2, Akt, PKA, and PKC. (Mek12, Pip2, and Akt were each inhibited in 600 samples, PKA was activated in 600 samples, and PKC was inhibited in 600 samples and activated in an additional 600 samples.) We compared how well the MEC, DP, bninfo, and random intervention methods inferred the cell-signaling network over a series of six experiments. For all methods, the first experiment used the 1800 observational samples. For subsequent experiments, each method then chose from the set of five candidate interventions performed by \cite{Sachs05}. Figure~\ref{fig:sachs} summarizes the results. While all methods end up closer to the benchmark structure after accumulating data from the five interventions, no method performs particularly well. Curiously, the mean Hamming distance initially gets worse after the first couple of experiments before getting better. See Figure~\ref{app:fig-adj-mat} (Appendix B) for a comparison of the benchmark adjacency matrix to the matrices estimated by the MEC, DP, and bninfo methods.
To determine the upper and lower bounds on performance for this data, we also considered all 120 possible permutations of the sequence of five manipulated nodes. Figure~\ref{fig:sachs} shows the results for the best- and worst-performing fixed sequences in dashed lines, as determined by mean Hamming distance averaged over the six experiments. By ``fixed'' sequence, we mean that the sequence of manipulated nodes was prespecified before the first experiment, as opposed to adaptively or randomly chosen over the course of the experiments. Even the fixed sequence with the lowest mean Hamming distance over the six experiments (Figure~\ref{fig:sachs}, orange dashed line) initially moves further from the benchmark network after the first intervention. There are several reasons why the methods perform differently on this data set than they did in simulation studies. First, if the true biological network consists of cycles, then the directed acyclic graphical models assumed by each of the algorithms would be misspecified. The consensus network determined by biologists suggests a short cycle among PIP3 $\rightarrow$ PLC$\gamma$ $\rightarrow$ PIP2, so model misspecification is a concern. \cite{Mooij2013} identified another reason why the model might be misspecified: \cite{Sachs05} used an experimental intervention that changed the \emph{activity} of the target proteins rather than directly intervening on the abundance of the protein. An intervention that affects the underlying topology of the network in ways other than only removing arrows into the manipulated protein differs from the type of edge-breaking intervention that our model assumes. Even if the acyclicity and edge-breaking intervention assumptions were not violated, the Hamming distance at the end of the six experiments was likely high because we were limited to interventions on five candidate proteins rather than all 11 proteins.
Our results would also change if we used a different discretization of the data than the one provided by \cite{Sachs05}. However, given that \cite{Cho} used a continuous version of the Sachs data set for their Gaussian Bayesian network and encountered similar difficulties in network reconstruction, we do not believe our results would improve substantially if we used a different discretization of the data. We note that the performance of the bninfo method reported here differs from that published by \cite{Ness} for the Sachs network. This is likely due in large part to differences in the data sets used in our analyses. \cite{Ness} used 11,672 observational samples (they refer to this as ``historic data''), whereas we used only the 1800 observational samples provided in the \emph{bnlearn} R package, since \cite{Ness} were unable to provide us with access to the larger data set they used in their analysis. \begin{figure} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=1.0\linewidth]{Images/Sachs/Sachs2005Fig2.PNG} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=1.0\linewidth]{Images/Sachs/sachsScutari.PNG} \end{minipage} \caption{Left: Signaling network diagram taken from \citet{Sachs05}. Reprinted with permission from AAAS. Right: Network structure from the Bayesian Network Repository available in the \emph{bnlearn} R package \citep{Scutari}, used here as the benchmark network.} \label{fig:sachsNetwork} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Images/Sachs/combined_realSachs_MEC.pdf} \caption{Mean Hamming distance and true positive rate on the cell-signaling data from the 11-node Sachs network. MEC = our method with MEC partition scheme; DP = dynamic programming method of \citet{LiLeong}; R = random learner. Dashed lines represent the best-case and worst-case fixed sequence of interventions.
} \label{fig:sachs} \end{figure} \subsection{Analysis on simulated data from the Sachs benchmark network} To understand whether the poor performance seen on the Sachs data was due to misspecification, we tried simulating data from the benchmark network to see how the methods perform when the model assumptions hold. Figure \ref{fig:simSachs} shows the results of a simulation study comparing the same methods as in Figure \ref{fig:sachs}, but using simulated data generated from the benchmark network in Figure \ref{fig:sachsNetwork}, with a conditional probability table estimated from the Sachs data and available in the \emph{bnlearn} R package. The MEC method performed well and fell within a Hamming distance of one from the benchmark network by the fourth experiment, on average. Bninfo also initially performed well, but then plateaued sooner than the other OED methods, failing to reach a Hamming distance of zero or TPR of 1 by the seventh experiment. The higher mean Hamming distance of the DP method for the first two experiments arose from a combination of a lower true positive rate and a higher false positive rate than the other methods. This is likely because the DP prior over graphs tends to encourage sparsity, but the ground truth network contains nodes like PKC and PKA with five and six children, respectively. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Images/Sachs/combined_simSachs_MEC.pdf} \caption{Mean Hamming distance and true positive rate on simulated data from the 11-node Sachs benchmark network. MEC = our method with MEC partition scheme; DP = dynamic programming method of \citet{LiLeong}; R = random learner.} \label{fig:simSachs} \end{figure} \section{Conclusion}\label{sec:conclusion} We presented a novel Bayesian OED methodology for optimizing the experiment selection process in a computationally tractable way.
The core of the method is a criterion for selecting the experiment that is expected to yield the greatest reduction in posterior entropy. We found that the method efficiently infers causal relationships in networks with various topologies, with the greatest gains in information coming from the first few optimally chosen interventions. We provided a theoretical justification for using Markov equivalence classes as the choice of partition in our method, and in simulations, we found that this entropy criterion generally performs well empirically. Currently, our method is limited to networks with fewer than 25 nodes due to the super-exponential growth in the number of candidate graphs with respect to the number of nodes and the computational limits of the DP-based MCMC proposals. Scaling up the method to work on larger networks is an area for future work. The difficulty that many OED and active learning methods, including our own, have in inferring the Sachs network suggests that additional research is needed on ways of relaxing the acyclicity assumption and being more robust to model misspecification in general. Additionally, the OED and active learning fields would benefit from more data sets similar in nature to the Sachs data, with a mix of observational and intervention data. These will be helpful for evaluating and comparing OED methods. As recent advances in gene-editing technologies make targeted interventions more feasible, we expect these types of data sets will become more widely available, and the demand for OED methods in the biological sciences will grow in tandem. \begin{appendix} \section{Theory} \begin{lemma} \label{lemma:doob-entropy} Suppose $(\theta,\nu)\sim\pi$, $X_1,\ldots,X_N|\theta,\nu\sim P_{\theta,\nu}$ i.i.d., and $f(\theta)$ satisfies Conditions~\ref{condition:identifiable} and \ref{condition:finite}. Then $H(f(\theta) \mid X_{1:N}) \to 0$ as $N\to\infty$.
\end{lemma} \begin{proof} Since $g(P_{\theta,\nu}) = f(\theta)$ a.s.\ under the prior, the same also holds a.s.\ under the posterior. Thus, for any value $y$ in the range of $f$, \begin{align*} p(f(\theta)=y \mid X_{1:N}) &= p(g(P_{\theta,\nu})=y \mid X_{1:N}) = \mathrm{E}(\mathds{1}(g(P_{\theta,\nu})=y) \mid X_{1:N}) \\ &\xrightarrow[N\to\infty]{\mathrm{a.s.}} \mathds{1}(g(P_{\theta,\nu})=y) \overset{\mathrm{a.s.}}{=} \mathds{1}(f(\theta)=y) \end{align*} where $\mathds{1}(\cdot)$ is the indicator function and the almost sure convergence follows from Doob's martingale convergence theorem. Here, the limiting value is a random variable in which $(\theta,\nu)\sim\pi$, whereas $(\theta,\nu)$ is integrated out in the probabilities/expectations. Thus, since the range of $f$ is finite, $$ -\sum_y p(f(\theta)=y\mid X_{1:N})\log p(f(\theta)=y\mid X_{1:N}) \xrightarrow[N\to\infty]{\mathrm{a.s.}} 0,$$ with the convention that $0 \log 0 = 0$. Since the entropy of a random variable on a finite set is bounded, the dominated convergence theorem yields $$ H(f(\theta)\mid X_{1:N}) = \mathrm{E}\Big( -\sum_y p(f(\theta)=y\mid X_{1:N})\log p(f(\theta)=y\mid X_{1:N})\Big) \xrightarrow[N\to\infty]{} 0. $$ This completes the proof. \end{proof} \begin{lemma} \label{lemma:likelihood-equivalence} Let $X$ and $\theta$ be random variables with joint density $p(x,\theta)$. Suppose $f(\theta)$ is a discrete random variable such that: for any $\theta,\theta'$, if $f(\theta)=f(\theta')$ then for all $x$, $p(x|\theta) = p(x|\theta')$. Then $X \perp\!\!\!\!\perp \theta \mid f(\theta)$.
Hence, \begin{align*} p(x |y) &= \int p(x |\theta,y)\, p(\theta|y) \lambda(d\theta) = \int_A p(x |\theta,y)\, p(\theta|y) \lambda(d\theta) \\ &= \int_A p(x |\theta')\, p(\theta|y) \lambda(d\theta) = p(x|\theta') = p(x\mid\theta,y) \end{align*} where $\theta,\theta'\in A$, and $\lambda(d\theta)$ is the dominating measure for $p(\theta|y)$. Thus, $p(x|y)p(\theta|y) = p(x|\theta,y)p(\theta|y) = p(x,\theta|y)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:cond-indep}] For notational brevity, denote $e=i'$ and $N=N'$, and define $f(G) = (f_e(G),f_{i_1}(G),\ldots,f_{i_K}(G))$. Suppose we can show that for all $G_1$ and $G_2$, if $f_e(G_1) = f_e(G_2)$ then $X_{1:N}^{e} | G_1$ is equal in distribution to $X_{1:N}^{e} | G_2$. Then the result will follow by Lemma~\ref{lemma:likelihood-equivalence}, since if $f(G_1) = f(G_2)$, then in particular, $f_e(G_1)=f_e(G_2)$. We show that if $f_e(G_1) = f_e(G_2)$ then $X_{1:N}^{e} | G_1 \overset{\mathrm{d}}{=} X_{1:N}^{e} | G_2$. First observe that the assumed prior factors as $\pi(\beta|G) = \prod_{i=1}^V p(\beta_i|G) p(\beta_i^*)$, and therefore, since node $e$ has no parents in $G^{e}$, \begin{align}\label{eqn:marginal-likelihood-equivalence-proof} p(X_{1:N}^{e} = x_{1:N} \mid G) = p(X_{1:N} = x_{1:N} \mid G^{e}) \frac{p(X_{e,1:N}^{e} = x_{e,1:N} \mid G)}{p(X_{e,1:N} = x_{e,1:N} \mid G^{e})} \end{align} where $x_{e,1:N}$ denotes $(x_{e,n} : n = 1,\ldots,N)$. Since $e$ has no parents in $G^e$, $p(X_{e,1:N} = x_{e,1:N} \mid G^{e})$ does not depend on $G$. Similarly, since $p(X_{e,1:N}^{e} = x_{e,1:N} \mid G) = \int \big(\prod_{n=1}^N p^*(x_{e,n} \mid \beta_e^*)\big) p(\beta_e^*) d\beta_e^*$, this does not depend on $G$ either. By Theorem 5 of \citet{Heckerman1995}, the BDeu metric is likelihood equivalent, which implies that for any $G_1,G_2$ such that $f_e(G_1) = f_e(G_2)$, we have $p(X_{1:N} = x_{1:N} \mid G_1^{e}) = p(X_{1:N} = x_{1:N} \mid G_2^{e})$. 
Therefore, applying these invariance properties to Equation~\ref{eqn:marginal-likelihood-equivalence-proof}, we see that if $f_e(G_1) = f_e(G_2)$ then $p(X_{1:N}^{e} = x_{1:N} \mid G_1) = p(X_{1:N}^{e} = x_{1:N} \mid G_2)$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:identifiable}] For notational brevity, denote $e=i'$ and $N=N'$. A distribution $P$ is said to be \textit{faithful} to a graph $G$ if the set of conditional independence relations that are true for $P$ are all and only those implied by $G$. More precisely, given a distribution $P$ on $(X_1,\ldots,X_V)$, define $g(P) = (\mathds{1}(X_A\perp\!\!\!\!\perp_P X_B\mid X_C) : A,B,C\subseteq \{1,\ldots,V\})$, that is, $g(P)$ is a binary vector indicating which conditional independence properties hold under $P$. Meanwhile, given a DAG $G$ on $\{1,\ldots,V\}$, define $f(G) = (\mathds{1}(X_A\perp\!\!\!\!\perp_G X_B\mid X_C) : A,B,C\subseteq \{1,\ldots,V\})$, that is, $f(G)$ is a binary vector indicating which conditional independence properties are implied by $G$ according to the d-separation criterion. Then $P$ is faithful to $G$ if and only if $g(P) = f(G)$. Let $B(G)$ be the support of the prior $\pi(\beta|G)$. Let $\lambda_G$ denote the dominating measure of $\pi(\beta|G)$ on $B(G)$. (Colloquially, one might refer to $\lambda_G$ as ``Lebesgue measure on $B(G)$'', but because of the sum-to-one constraints on the probability vectors it is in fact Lebesgue measure on a lower-dimensional subspace.) By Theorem 7 of \citet{Meek1995}, for any $G$, the set $\{\beta\in B(G) : P_{G,\beta} \text{ is not faithful to } G\}$ has measure zero under $\lambda_G$. In particular, $\{\beta^e\in B(G^e) : P_{G^e,\beta^e} \text{ is not faithful to } G^e\}$ has measure zero under $\lambda_{G^e}$. Let $\pi^e$ denote the distribution of $(G^e,\beta^e)$ when $(G,\beta)\sim p(G,\beta \mid D)$.
Suppose we can show that $\pi^e(\beta^e | G^e)$ has a density with respect to $\lambda_{G^e}$. Then it follows that, almost surely under $\pi^e$, $P_{G^e,\beta^e}$ is faithful to $G^e$. In other words, $g(P_{G^e,\beta^e}) = f(G^e)$ almost surely when $(G,\beta)\sim p(G,\beta\mid D)$. The conclusion of the theorem follows since, by construction, there is a one-to-one mapping between $f(G^e)$ and $f_e(G)$. To complete the proof, we need to show that $\pi^e(\beta^e | G^e)$ has a density with respect to $\lambda_{G^e}$, or in mathematical notation, $\pi^e(\beta^e | G^e) \ll \lambda_{G^e}$. To see this, first observe that $\pi(\beta|G) \ll \lambda_G$, and thus, $p(\beta|G,D) \ll \lambda_G$. Next, we argue that $p(\beta^e|G,D) \ll \lambda_{G^e}$. Recall that $\beta^e$ is a function of $\beta$ that is obtained by copying $\beta$ and then (i) putting $\beta_e^*$ in place of $\beta_e$, and (ii) putting $\beta_{e 1}$ in place of $\beta_e^*$. Let $A^e \subseteq B(G^e)$ such that $\lambda_{G^e}(A^e) = 0$, and define $A = \{\beta\in B(G) : \beta^e \in A^e\}$. Then $\lambda_G(A) = 0$, since $\lambda_G$ is the product of identical measures (colloquially, ``Lebesgue measure on the probability simplex'') for each $\beta_{i j}$ and each $\beta_i^*$. Hence, $p(\beta^e\in A^e \mid G,D) = p(\beta\in A \mid G,D) = 0$. This implies that $p(\beta^e|G,D) \ll \lambda_{G^e}$. Letting $H$ be a function of $G$ defined by $H = G^e$, we have \begin{align*} p(\beta^e|G^e,D) = p(\beta^e|H,D) = \sum_G p(\beta^e|G,H,D)p(G|H,D) = \sum_{G\,:\,G^e=H} p(\beta^e|G,D)p(G|H,D), \end{align*} and thus, $p(\beta^e|G^e,D) \ll \lambda_{G^e}$. Since $\pi^e(\beta^e|G^e)$ is just another way of writing $p(\beta^e|G^e,D)$, then $\pi^e(\beta^e|G^e) \ll \lambda_{G^e}$, as claimed. 
\end{proof} \section{Adjacency matrices across methods for Sachs network}\label{appendixC-adj-mat} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[scale = 0.55]{Images/Sachs/BenchmarkDAG.png} & \includegraphics[scale = 0.55]{Images/Sachs/MEC_avgAdjMat_afterExp6.png} \\ \includegraphics[scale = 0.55]{Images/Sachs/DP_avgAdjMat_afterExp6.png} & \includegraphics[scale = 0.55]{Images/Sachs/bninfo_avgAdjMat_afterExp6.png} \\ \end{tabular} \caption[Adjacency matrices after the sixth experiment for Sachs data]{Adjacency matrices after the sixth experiment for the Markov equivalence class (MEC), dynamic programming (DP), and bninfo methods on the Sachs data. Top left: benchmark DAG provided by \cite{Scutari}. Rows denote parent nodes and columns denote child nodes. Yellow indicates presence of a directed edge while blue indicates absence of an edge.} \label{app:fig-adj-mat} \end{figure} The adjacency matrices in Figure~\ref{app:fig-adj-mat} for the MEC, DP, and bninfo methods all converge to the same DAG after experiment six. Note, however, that this estimated DAG differs from the structure of the benchmark DAG shown in the top left panel of Figure~\ref{app:fig-adj-mat}. The DAGs differ by a Hamming distance of nine, which upon further inspection is due to the estimated DAGs including the following false positive edges: \begin{enumerate} \item raf $\rightarrow$ akt \item mek $\rightarrow$ akt \item mek $\rightarrow$ jnk \item pkc $\rightarrow$ plcg \item pkc $\rightarrow$ pip3 \item pkc $\rightarrow$ erk \item p38 $\rightarrow$ plcg \item jnk $\rightarrow$ plcg \item jnk $\rightarrow$ p38 \end{enumerate} \end{appendix} \newpage \bibliographystyle{unsrt}
\section{Introduction} In this manuscript we will consider two types of stochastic differential equations (SDEs), the so-called mixed SDEs (see e.g. \cite{kubilius,missh11}) and rough SDEs (see e.g. \cite{Lyons-bk, Friz}) driven by standard Brownian motion and fractional Brownian motion with Hurst parameter $H>1/2$. The mixed SDE reads as \begin{align} X^{M}_t=x_0+\int_0^t a^M(X^M_s)\mathsf{d} s+\int_0^t b^M(X^M_s) \mathsf{d}^{\mathsf{I}} W_s+\int_0^t c^M(X^M_s)\mathsf{d} B^H_s, \qquad t \geq 0, \label{eqM} \end{align} while the corresponding rough path equation is given by \begin{align} X^{R}_t=x_0+\int_0^t a^R(X^R_s)\mathsf{d} s+\int_0^t b^R(X^R_s) \mathsf{d}^{\mathsf R} W_s+\int_0^t c^R(X^R_s)\mathsf{d}^{\mathsf R} B^H_s, \qquad t \geq 0, \label{eqRough} \end{align} where $W=(W_t)_{t\geq 0}$ is an $m$-dimensional Wiener process, $B^H=(B^H_t)_{t\geq 0}$ is an $\ell$-dimensional fractional Brownian motion with Hurst index $H>1/2$, $x_0\in \mathbb{R}^d$, and the coefficients $a^M,a^R\colon \mathbb{R}^d\to \mathbb{R}^d$, $b^M,b^R\colon \mathbb{R}^d\to \mathbb{R}^{d\times m}$, $c^M,c^R\colon \mathbb{R}^d\to \mathbb{R}^{d\times \ell}$ satisfy suitable smoothness assumptions. The precise definitions are given below.\\ The difference between both equations is the definition of the stochastic integrals with respect to the Brownian motion. When dealing with mixed equations, $\int_0^t b^M(X^M_s) \mathsf{d}^{\mathsf{I}} W_s$ is understood as an It\=o integral, while for rough equations $\int_0^t b^R(X^R_s) \mathsf{d}^{\mathsf R} W_s$ corresponds to a Stratonovich integral. Both equations have been well studied so far; in \cite{mbfbm-limit,Shev-Delay} the unique solvability of the mixed SDE was shown, while the well-definedness and unique solvability of the rough path SDE have been obtained e.g. in \cite{CQ}.
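Before introducing notation, it may help to see both drivers on a grid. The sketch below simulates an exact fBm path via the Cholesky factor of its covariance and runs a left-point Euler scheme for a scalar mixed equation, with It\=o increments for $W$ and Young (left-point) increments for $B^H$. The coefficient functions are hypothetical choices for illustration only, not taken from the text.

```python
import numpy as np

def fbm_path(n, H, T=1.0, rng=None):
    """Exact fractional Brownian motion on a uniform grid, sampled via
    the Cholesky factor of the covariance R_H(t, s)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate(([0.0], path))  # B_0 = 0

def euler_mixed(x0, a, b, c, W, B, T=1.0):
    """Left-point Euler scheme for dX = a dt + b dW (Ito) + c dB^H (Young)."""
    n = len(W) - 1
    dt = T / n
    x = x0
    for k in range(n):
        x += (a(x) * dt
              + b(x) * (W[k + 1] - W[k])    # Ito increment: left endpoint
              + c(x) * (B[k + 1] - B[k]))   # Young increment: left endpoint
    return x

rng = np.random.default_rng(0)
n, H = 500, 0.75
B = fbm_path(n, H, rng=rng)
W = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n) * (1 / n)**0.5)))
# Hypothetical smooth, bounded coefficients (not taken from the text).
x_T = euler_mixed(1.0, a=lambda x: -x, b=lambda x: 0.3 * np.cos(x),
                  c=lambda x: 0.5 * np.sin(x), W=W, B=B)
print(float(x_T))
```

The Cholesky construction is exact in distribution but costs $O(n^3)$; for long paths one would switch to a circulant-embedding method.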
In this manuscript we establish a correction formula (see Sections \ref{rptomixed} and \ref{mixedtorp}) between equations \eqref{eqM} and \eqref{eqRough}, which extends the It\=o-Stratonovich correction formula that goes back to two articles of W. Wong and M. Zakai (\cite{WZ1,WZ2}). This allows us to transfer results valid for rough SDEs to mixed SDEs and vice versa. We will illustrate this by establishing the smoothness of the solution map and by recovering a limit theorem for the mixed SDE \eqref{eqM} in Section \ref{rptomixed_appl}; furthermore, we point out how this limit theorem can be used to construct and analyse numerical methods for mixed SDEs, and we show that the ``natural'' Euler scheme for the rough SDE converges to the rough solution. \medskip \section{Preliminaries} \label{Prelim} \subsection{Notation and Definitions} In what follows we will work on a filtered probability space $\left(\Omega, \mathcal F, (\mathcal F_t)_{t\geq 0} , \mathbb P\right)$, which is rich enough to contain all the objects defined below. Let $W= (W^{(1)}_t,\ldots, W^{(m)}_t )_{t\geq 0}$ be a standard $m$-dimensional Wiener process and let $B^H= (B^{H,(1)}_t,\ldots,B^{H,(\ell)}_t )_{t\geq 0}$ be an $\ell$-dimensional fractional Brownian motion (fBm) with Hurst index $H\in (1/2,1)$, that is, a collection of independent centered Gaussian processes, also independent of $W$, each with covariance function $$ R_H(t,s)=\frac 1 2 \left(t^{2H}+s^{2H}-|t-s|^{2H} \right), \qquad s,t \geq 0. $$ The Kolmogorov continuity theorem entails that fBm has a modification with $\gamma$-H\"older sample paths for any $\gamma<H$, and we will identify $B^H$ with this modification in the following. We will use the following standard notation: $|\cdot|$ stands for the absolute value of a real number or the Euclidean norm of a finite dimensional vector or of a matrix.
Moreover, for a function $f\colon [a,b]\to \mathbb{R}$ we define the following (semi-)norms: \begin{align*} \norm{f}_{\infty,[a,b]}& =\sup_{x\in [a,b]}|f(x)|, \qquad \norm{f}_{\gamma,[a,b]}=\sup_{\substack{x,y\in[a,b]\\x\neq y}}\frac{|f(x)-f(y)|}{|x-y|^\gamma},\\ & \norm{f}_{\gamma,\infty,[a,b]}=\norm{f}_{\gamma,[a,b]}+\norm{f}_{\infty,[a,b]}. \end{align*} If there is no ambiguity we will omit the interval index $[a,b]$. For a vector valued function $f=(f_1,\ldots,f_d)\colon [a,b]\to \mathbb{R}^d$ the corresponding (semi-)norm is defined as the sum of the (semi-)norms of the coordinates $f_i$. Next, for a function $f\colon [a,b]^2\to \mathbb{R}$ that vanishes on the diagonal, i.e. $f(t,t)=0$ for $t\in [a,b]$, we set $$ \norm{f}_{\gamma,[a,b]^2}=\sup_{\substack{t,s\in[a,b]\\t\neq s}}\frac{|f(t,s)|}{|t-s|^\gamma}. $$ The set of such functions with a finite norm $\norm{f}_{\gamma,[a,b]^2}$ is denoted by $\mathcal C^\gamma_2([a,b]^2)$, that is, $\mathcal C^\gamma_2([a,b]^2)=\{f\colon [a,b]^2\to \mathbb{R} \mid f(t,t)=0,\,t\in [a,b],\, \norm{f}_{\gamma,[a,b]^2}<\infty \}$. Moreover, we will use the notation $C_{b}^{k,\delta}(\mathbb{R}^{d_{1}}; \mathbb{R}^{d_2})$ for functions $f: \mathbb{R}^{d_{1}} \rightarrow \mathbb{R}^{d_2}$, which are bounded, $k$-times differentiable with bounded derivatives and whose $k$-th derivative is H\"older continuous of order $\delta >0$. Finally, if a map $c: \mathbb{R}^k \rightarrow \mathbb{R}^{k\times m}$ is fixed, we introduce the differential operators $\mathcal D^{(i)}_c=\sum_{l=1}^k c_{l,i}(\cdot)\partial_{x_l}$, $i=1,\ldots,m$. \subsection{Integrating with respect to standard Brownian motion and fBm with $H>1/2$}\label{integratingfbm} For basic facts for It\=o or Stratonovich integration with respect to standard Brownian motion we refer e.g. to \cite{KS,KP}. The integral with respect to a fractional Brownian motion with Hurst parameter $H>1/2$ is understood in the pathwise Young sense.
Namely, for $f\in \mathcal C^{\nu}([a,b]; \mathbb{R})$ and $g\in \mathcal C^{\mu}([a,b];\mathbb{R})$ with $\nu+\mu>1$ the integral $\int_a^b f(x)\mathsf{d} g(x)$ can be defined as the limit of its Riemann sums and satisfies the so-called Young inequality $$ \left| \int_{a}^{b} (f(s)-f(a) ) \mathsf{d} g(s) \right| \leq C_{\nu, \mu } \| f\|_{\nu, [0,T]} \|g\|_{\mu,[0,T]} |b-a|^{\nu+ \mu} $$ for all $a,b \in [0,T]$ with $a\leq b$, where $C_{\nu,\mu}>0$ is a constant independent of $f$ and $g$. Thus, the integral $\int_a^b f(s)\mathsf{d} B^{H,(i)}_s$ for a function $f\in \mathcal C^{\beta}([a,b];\mathbb{R})$ is well defined provided that $\beta>1-H$. More details can be found e.g. in \cite{young,Friz}. \subsection{Mixed SDEs} The mixed equation \eqref{eqM} reads as \begin{align} \label{mixed_eq_2} X^{M}_t=x_0+\int_0^t a^M(X^M_s)\mathsf{d} s& +\sum\limits_{j=1}^m \int_0^t b^{M,(j)}(X^{M}_s)\mathsf{d}^{\mathsf{I}} W^{(j)}_s \\ &+ \sum\limits_{j=1}^{\ell}\int_0^t c^{M,(j)}(X^M_s)\mathsf{d} B^{H,(j)}_s, \qquad t \in [0,T], \nonumber \end{align} where $(\cdot)^{(j)}$ denotes the $j$-th column of a matrix. As mentioned previously the integrals with respect to the Brownian motions are understood as It\=o integrals, while the integrals with respect to the fractional Brownian motions are understood as Young integrals. This equation has been analysed in a series of articles (\cite{kubilius,missh11, mbfbm-limit,Shev-Delay}). The most general result on the existence of a unique solution can be found in \cite{Shev-Delay}: \begin{theorem}\label{mixed:main_result} Assume that \begin{itemize} \item[(i)] $a^M, b^{M,(i)}, c^{M,(j)}\in C^{1}(\mathbb{R}^d; \mathbb{R}^d)$, $i=1, \ldots, m$, $j=1, \ldots, \ell$, \item [(ii)] $a^M, b^{M,(i)}, c^{M,(j)}$, $i=1, \ldots, m$, $j=1, \ldots, \ell$, satisfy a linear growth condition, i.e.
there exists $C>0$ such that $$|a^M(x)| + \sum_{i=1}^m|b^{M,(i)}(x)| + \sum_{j=1}^{\ell} |c^{M,(j)}(x)| \leq C(1+ |x|), \qquad x \in \mathbb{R}^d, $$ \item[(iii)] $ \sup_{j=1, \ldots, \ell} \sup_{x \in \mathbb{R}^d} |(\operatorname{D} c^{M,(j)})(x)| < \infty$. \end{itemize} Then equation \eqref{eqM} has a unique solution, i.e.~there exists a unique continuous and $(\mathcal{F}_t)_{t \in [0,T]}$ adapted process $X=(X_t)_{t \in [0,T]}$, which satisfies equation \eqref{eqM} for almost all $\omega \in \Omega$. \end{theorem} The above solution is in fact obtained as the limit (in probability) of the solutions of It\=o SDEs with random coefficients, namely of $$ X^{M,n}_t=x_0+\int_0^t \big (a^M(X^{M,n}_s)+c^M(X^{M,n}_s) \dot{B}^{H,n}_s \big ) \mathsf{d} s+\int_0^t b^M(X^{M,n}_s) \mathsf{d}^{\mathsf{I}} W_s, \qquad t \in [0,T], $$ where $B^{H,n}_t=n\int_{(t- 1/n)\vee 0}^t B^H_s \mathsf{d} s$, $t \in [0,T]$, $n=1,2, \ldots$, is a smoothed fBm, see \cite{mbfbm-limit}. \subsection{Rough paths} Here we briefly recall some notions of the rough path theory, following the algebraic integration approach given in \cite{Guba} and the recent monograph \cite{Hai}. For a detailed exposition the reader is referred to \cite{Lyons-bk,Friz,Guba,Hai}. \begin{definition} Let $\gamma >1/3$. A pair $(x,\mathbf{x})=(x_s,\mathbf{x}_{s,t})_{0\leq t,s\leq T}\in \mathcal C^{\gamma}([0,T]; \mathbb{R}^m)\times \mathcal C^{2\gamma}_2([0,T]^2; \mathbb{R}^{m\times m})$ is called a $\gamma$-rough path if $\mathbf{x}_{t,t}=0$ for all $t\in [0,T]$ and for all $0\leq s<u<t\leq T$ we have $$ \mathbf{x}_{s,t}-\mathbf{x}_{s,u}-\mathbf{x}_{u,t}=(x_u-x_s)\otimes(x_t-x_u). $$ \end{definition} The function $(\mathbf{x}_{s,t})_{s,t\in [0,T]}$ is called L\'evy area.
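For a smooth (or bounded variation) path the L\'evy area is just the iterated Riemann--Stieltjes integral $\mathbf{x}_{s,t}=\int_s^t (x_r-x_s)\otimes\mathsf{d} x_r$, and Chen's algebraic relation $\mathbf{x}_{s,t}-\mathbf{x}_{s,u}-\mathbf{x}_{u,t}=\delta x_{s,u}\otimes \delta x_{u,t}$ with $\delta x_{s,u}=x_u-x_s$ can be checked numerically. A small sketch with left-point Riemann sums, for which the relation even holds exactly at the discrete level:

```python
import numpy as np

def levy_area(path, i, j, a, b):
    """Left-point Riemann sum for int_a^b (x^i_r - x^i_a) d x^j_r,
    where `path` holds one sampled component per row and a, b are
    grid indices with a < b."""
    xi, xj = path[i], path[j]
    return float(np.sum((xi[a:b] - xi[a]) * np.diff(xj[a:b + 1])))

# Smooth two-dimensional path x = (cos t, sin t) on a fine grid.
n = 4000
t = np.linspace(0.0, 1.0, n + 1)
path = np.vstack([np.cos(t), np.sin(t)])

s, u, v = 0, n // 3, n  # grid indices with s < u < v
for i in range(2):
    for j in range(2):
        lhs = levy_area(path, i, j, s, v)
        rhs = (levy_area(path, i, j, s, u) + levy_area(path, i, j, u, v)
               + (path[i, u] - path[i, s]) * (path[j, v] - path[j, u]))
        # Chen's relation holds exactly at the level of the discrete sums.
        assert abs(lhs - rhs) < 1e-9
print("Chen relation verified on all four components")
```

For Brownian motion the left-point sums converge to the It\=o iterated integrals instead of the Stratonovich ones, which is exactly the distinction exploited in Remark \ref{rem_levy}.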
\begin{remark} \label{rem_levy} \begin{itemize} \item[(i)] A L\'evy area for the fractional Brownian motion $B^H$ with Hurst parameter $H>1/2$ is defined as the collection of Young integrals $$\mathbf{B}=( \mathbf{B}_{s,t})_{0\leq s <t\leq T}=\left\{\int_s^t\int_s^u \mathsf{d} B^{H,(i)}_v\mathsf{d} B^{H,(j)}_u; \, 1\leq i,j\leq \ell\right \}_{0\leq s<t\leq T}. $$ \item[(ii)] The L\'evy area for standard Brownian motion can be constructed using Stratonovich integration, i.e. $$\mathbf{W}=(\mathbf{W}_{s,t})_{0\leq s<t\leq T}= \left \{\int_{s}^t \int_s^u \circ\, \mathsf{d} W^{(i)}_v \circ\mathsf{d} W^{(j)}_u; \, 1\leq i,j\leq m \right\}_{0\leq s<t\leq T},$$ where $\int \circ \mathsf{d} W$ denotes the Stratonovich integral. \item[(iii)] In the same way a L\'evy area for $(t,W_t,B^H_t)_{t \in [0,T]}$ can be constructed, i.e. using $\mathbf{W}$ for the iterated integrals of $W^{(i)}$ with respect to $W^{(j)}$, $i,j=1, \ldots,m$, and Young integrals for all other iterated integrals. \end{itemize} \end{remark} Together with the notion of the L\'evy area the following concept is at the core of the algebraic integration approach of M. Gubinelli. \begin{definition} We say that a path $y\in \mathcal C^\nu([0,T]; \mathbb{R}^k)$ with $ \nu \in (1/3,\gamma]$ is a weakly controlled path based on $x\in \mathcal C^{\gamma}([0,T]; \mathbb{R}^m)$ if the following decomposition holds \begin{align} \label{weak:dcp} y_t-y_s=z_s(x_t-x_s)+r_{s,t}, \qquad 0 \leq s \leq t \leq T, \end{align} with $z\in \mathcal C^{\nu}([0,T]; \mathbb{R}^{k\times m})$ and $r\in \mathcal C^{2\nu}_2([0,T]^2;\mathbb{R}^k)$.
\end{definition} When $(y,z)$ is a weakly controlled path, the rough integral of $y$ along $x$ can be defined as \begin{align} \label{crucial-1} \int_0^t y^{(i)}_s\mathsf{d} x^{(j)}_s=\lim_{|\mathcal P|\to 0}\sum_{t_k\in \mathcal P} \left( y^{(i)}_{t_k}(x^{(j)}_{t_{k+1}}-x^{(j)}_{t_k})+\sum_{\ell=1}^m z_{t_k}(i,\ell)\mathbf{x}_{t_k,t_{k+1}}(\ell,j) \right), \end{align} for $i=1, \ldots, k$, $j=1,\ldots, m$, see e.g.~Corollary 2 in \cite{Guba}. Here the limit is taken over all partitions $\mathcal{P}=\{0=t_{-1}=t_0 <t_1 < \ldots <t_n=t_{n+1}=t\}$ such that $|\mathcal{P}|= \sup_{t_k \in \mathcal{P}} |t_{k}-t_{k-1}| \rightarrow 0$. Weakly controlled paths are stable under smooth transformations: \begin{proposition}\label{cp:weak-phi} Let $ (y,z)$ be a weakly controlled path based on $x$ with decomposition (\ref{weak:dcp}), and let $\varphi\in C^{2}_b(\mathbb{R}^k;\mathbb{R}^n)$. Then $\varphi(y)$ is a weakly controlled path based on $x$ with decomposition $$ \varphi(y_t)-\varphi(y_s)= \hat{z}_s (x_t-x_s) +\hat{r}_{s,t},$$ with $$ \hat{z}_s= (\operatorname{D} \varphi)(y_s) z_s, \qquad s \in [0,T]. $$ \end{proposition} Using appropriate estimates for the integrals the solution to the rough paths equation $$ \mathsf{d} y_t= \sigma(y_t) \mathsf{d} x_t, \quad t \in [0,T], \qquad y_0 =a \in \mathbb{R}^d, $$ with $ \sigma: \mathbb{R}^d\to \mathbb{R}^{d\times m}$ is obtained via a fixed point argument. \begin{theorem}\label{roughpaths} Suppose that $\kappa \in (1/3 ,\gamma)$, $x\colon [0,T]\to \mathbb{R}^m$ is a $\gamma$-rough path and let $\sigma\in C_b^{2,\delta}(\mathbb{R}^d;\mathbb{R}^{d\times m})$ such that $(2 +\delta)\gamma >1$. Then the equation $$ y_t=a+\int_0^t \sigma(y_s)\mathsf{d} x_s, \qquad t \in [0,T], $$ possesses a unique solution in the space of the functions $z\in \mathcal C^\kappa([0,T]; \mathbb{R}^d)$ with $z_0=a$. Moreover, $(y,\sigma(y))$ is a weakly controlled path based on $x$.
\end{theorem} As a consequence of this theorem and Proposition \ref{cp:weak-phi} we have \begin{align} \label{crucial} & \int_0^t \sigma^{(i)}(y_s)\mathsf{d} x^{(i)}_s \\ \nonumber & \qquad =\lim_{|\mathcal P|\to 0}\sum_{t_k\in \mathcal P} \left( \sigma^{(i)}(y_{t_k})(x^{(i)}_{t_{k+1}}-x^{(i)}_{t_k})+\sum_{\ell=1}^m \mathcal D^{(\ell)}_{\sigma} \sigma^{(i)}(y_{t_k})\mathbf{x}_{t_k,t_{k+1}}(\ell,i)\right). \end{align} The solution map for a rough equation is locally Lipschitz continuous with respect to the initial value and the driving signal. More precisely, we have: \begin{theorem}\label{stability} Let $\kappa \in (1/3,\gamma)$, $\sigma\in C_b^{2,\delta}(\mathbb{R}^d;\mathbb{R}^{d\times m})$ such that $(2 +\delta)\gamma >1$, $a,\tilde{a}\in \mathbb{R}^d$, and let $x$, $\tilde{x}$ be $\gamma$-rough paths with corresponding L\'evy areas $\mathbf{x}$, $\tilde{\mathbf{x}}$. Finally, let $(y_t)_{t\in[0,T]}$, $(\tilde{y}_t)_{t\in[0,T]}$ be the solutions of the RDEs \begin{align*} y_t=a+\int_0^t \sigma(y_s)\mathsf{d} x_s, \quad \tilde{y}_t=\tilde{a}+\int_0^t\sigma(\tilde{y}_s)\mathsf{d} \tilde{x}_s, \qquad t \in [0,T]. \end{align*} Then there exists an increasing function $C_T\colon [0, \infty) \to [0, \infty)$ such that \begin{align*} \norm{y-\tilde y}_{\gamma,\infty,[0,T]}\leq C_T\big(\norm{x}_{\gamma,\infty,[0,T]}+\norm{\tilde x}_{\gamma,\infty,[0,T]}+\norm{\mathbf{x}}_{2\gamma,[0,T]}+\| \tilde{\mathbf{x}}\|_{2\gamma,[0,T]} \big)\\ \times (|a-\tilde a|+\norm{x-\tilde x}_{\gamma,\infty,[0,T]}+\|\mathbf{x}-\tilde{\mathbf{x}}\|_{2\gamma,[0,T]}). \end{align*} \label{convTheorem} \end{theorem} Returning to our original rough SDE, i.e.
to \begin{align}\label{Req2} X^{R}_t=x_0+\int_0^t a^R(X^R_s)\mathsf{d} s& +\sum\limits_{i=1}^m \int_0^t b^{R,(i)}(X^{R}_s)\mathsf{d}^{\mathsf R} W^{(i)}_s \\ & +\sum\limits_{j=1}^{\ell}\int_0^t c^{R,(j)}(X^R_s)\mathsf{d}^{\mathsf R} B^{H,(j)}_s, \qquad t \in [0,T], \nonumber \end{align} the previous results yield a unique solution, if $a^R, b^{R,(i)}, c^{R,(j)} \in C_b^{2,\delta}(\mathbb{R}^d;\mathbb{R}^d)$, $i=1, \ldots, m$, $j=1, \ldots, \ell$, for $\delta >0$ arbitrarily small, where the L\'evy area for $(t,W_t,B^H_t)_{t \in [0,T]}$ is constructed as in Remark \ref{rem_levy}. \section{From the rough paths equation to the mixed equation} \label{rptomixed} Here we show that the solution of the rough SDE \eqref{Req2} is the solution of the mixed equation \eqref{mixed_eq_2} with \begin{align} a^M(x)=a^R(x)+\frac 1 2 \sum_{i=1}^m \mathcal D^{(i)}_{b^{R}} b^{R,(i)}(x), \, \, b^M(x)=b^R(x), \,\, c^M(x)=c^R(x), \,\,x \in \mathbb{R}^d. \label{coeff_rel_1} \end{align} \begin{theorem} Let $\delta >0$ and $a^R, b^{R,(i)}, c^{R,(j)} \in C_b^{2,\delta}(\mathbb{R}^d;\mathbb{R}^d)$, $i=1, \ldots, m$, $j=1, \ldots, \ell$. Then the solution $X^R$ of the rough equation \eqref{Req2} and the solution $X^M$ of the mixed equation \eqref{mixed_eq_2} with coefficients given by \eqref{coeff_rel_1} coincide $P$-almost surely, i.e. we have $$ P\big(X_t^M=X_t^R, \, t \in [0,T]\big)=1.$$ \label{rptomixed_thm} \end{theorem} \begin{proof} For the $(m+\ell+1)$-dimensional rough path $g=(\operatorname{id},W,B^H)$ denote its L\'evy area by $\mathbf{G}=(\mathbf{G}_{s,t})_{0\leq s<t\leq T}$. Now fix $t \in [0,T]$.
Using relation \eqref{crucial} for $\sigma=(a^R,b^R,c^R)\colon \mathbb{R}^d\to \mathbb{R}^d\times \mathbb{R}^{d\times m}\times \mathbb{R}^{d\times \ell}$ we can write \begin{align*} \int_0^t \sigma^{(i)}(X^R_s)\mathsf{d}^{\mathsf R} g^{(i)}_s=\lim_{|\mathcal P|\to 0}\sum\limits_{t_k\in \mathcal P} \left( \sigma^{(i)} (X^R_{t_k}) g^{(i)}_{t_k,t_{k+1}}+\sum\limits_{j=1}^{1+m+\ell} \mathcal D^{(j)}_{\sigma} \sigma^{(i)}(X^R_{t_k})\mathbf G_{t_k,t_{k+1}}(j,i)\right ) \end{align*} for all $i=1,\ldots, 1+m+\ell$. Since $g^{(1)}=\operatorname{id}$ and the integrand is continuous, we have $$ \lim_{|\mathcal P|\to 0}\sum\limits_{t_k\in\mathcal P}\sigma^{(1)} (X^R_{t_k}) g^{(1)}_{t_k,t_{k+1}} \stackrel{P-a.s.}{=} \int_0^t a^R(X^R_s)\mathsf{d} s. $$ For $i=1, \ldots, m$, we have $$ \lim_{|\mathcal P|\to 0}\sum\limits_{t_k\in\mathcal P}\sigma^{(i+1)} (X^R_{t_k}) g^{(i+1)}_{t_k,t_{k+1}} \stackrel{L^2(\Omega)}{=} \int_0^t b^{R,(i)}(X^R_s) \mathsf{d}^{\mathsf{I}} W^{(i)}_s, $$ by definition of the It\=o integral since $b^{R,(i)}(X^R_s)$, $s \in [0,T]$, is bounded and adapted. Moreover, the sample paths of $X^{R}$ are $\gamma$-H\"older continuous for all $\gamma<1/2$ and those of $B^H$ are H\"older continuous of any order $\lambda<H$; thus we have $$ \lim_{|\mathcal P|\to 0}\sum\limits_{t_k\in\mathcal P}\sigma^{(i+m+1)} (X^R_{t_k}) g^{(i+m+1)}_{t_k,t_{k+1}} \stackrel{P-a.s.}{=} \int_0^t c^{R,(i)}(X^R_s)\mathsf{d} B^{H,(i)}_s $$ for $i=1, \ldots, \ell$ by definition of the Young integral. Now consider the summands involving the L\'evy area terms.
If $(i,j)\not \in \{2,\ldots,m+1\}^2$, then the Young inequality gives, for some $\varepsilon>0$, $$ \sup_{\substack{t,s\in[0,T]\\t\neq s}} \frac{ \left|\int_s^t (g^{(i)}_u-g^{(i)}_s)\mathsf{d} g^{(j)}_u \right|}{|t-s|^{1+\varepsilon}} < \infty \qquad P-a.s.$$ and hence it follows that $$\lim_{|\mathcal P|\to 0}\sum\limits_{t_k\in\mathcal P}\mathcal D^{(j)}_{\sigma} \sigma^{(i)}(X^R_{t_k})\mathbf G_{t_k,t_{k+1}}(j,i) \stackrel{P-a.s.}{=}0.$$ Next suppose $(i,j)\in \{2,\ldots,m+1\}^2$ and $i\neq j$. Then $\mathbf G_{t_k,t_{k+1}}(i,j)=\int_{t_k}^{t_{k+1}} (W^{(i)}_u-W^{(i)}_{t_k})\mathsf{d}^{\mathsf{I}} W^{(j)}_u$, since the Stratonovich and the It\=o integral coincide due to the independence of $W^{(i)}$ and $W^{(j)}$. Exploiting the independence of $\mathbf G_{t_k,t_{k+1}}$ from $\mathcal{F}_{t_k}$ we obtain \begin{align*} & \Ex{\Big|\sum\limits_{t_k\in\mathcal P} \mathcal D^{(j)}_{\sigma}\sigma^{(i)}(X^R_{t_{k}})\mathbf G_{t_k,t_{k+1}}(j,i)\Big|^2}\\ & \qquad=\sum\limits_{t_k\in \mathcal P} \Ex{|\mathcal D^{(j)}_{\sigma}\sigma^{(i)}(X^R_{t_k})|^2\Big (\int_{t_k}^{t_{k+1}} (W^{(j)}_u-W^{(j)}_{t_k})\mathsf{d} W^{(i)}_u\Big)^2}\\ & \qquad = \sum\limits_{t_k\in \mathcal P} \Ex{|\mathcal D^{(j)}_{\sigma}\sigma^{(i)}(X^R_{t_k})|^2}\Ex{\Big(\int_{t_k}^{t_{k+1}} (W^{(j)}_u-W^{(j)}_{t_k})\mathsf{d} W^{(i)}_u\Big)^2} \\ & \qquad \leq \frac 1 2 \sum\limits_{t_k\in \mathcal P} \Ex{|\mathcal D^{(j)}_{\sigma}\sigma^{(i)}(X^R_{t_k})|^2}|\mathcal P|^2. \end{align*} The last term clearly vanishes for $|\mathcal P|\to 0$. Finally, if $i=j$ we have $$ \int_{t_k}^{t_{k+1}} (W^{(i)}_u-W^{(i)}_{t_k}) \circ\mathsf{d} W^{(i)}_u = \frac{1}{2} (W^{(i)}_{t_{k+1}}-W^{(i)}_{t_k})^2$$ and we obtain \begin{align*} \lim_{|\mathcal P|\to 0 } \sum\limits_{t_k\in \mathcal P} \mathcal D^{(i)}_{\sigma} \sigma^{(i)}(X^R_{t_k})(W^{(i)}_{t_{k+1}}-W^{(i)}_{t_k})^2 \stackrel{P-a.s.}{=} \int_0^t \mathcal D^{(i)}_{\sigma} \sigma^{(i)}(X^R_{s}) \mathsf{d} s .
\end{align*} The latter follows from $$ \sup_{s \in [0,T]} \left| \sum_{k=0}^{n-1}{\bf 1}_{[0,s]}(t_k)(W^{(i)}_{t_{k+1}}-W^{(i)}_{t_k})^2-s\right|\stackrel{P-a.s.}{\longrightarrow} 0$$ as $|\mathcal P|\to 0$ and a density argument. By passing to a subsequence, we deduce that \begin{align*} \sum_{i=1}^{m+\ell+1}\int_0^t \sigma^{(i)}(X^R_s)\mathsf{d}^{\mathsf R} g^{(i)}_s & \stackrel{P-a.s.}{=} \int_0^t\Big ( a^R(X^R_s)+\frac 1 2 \sum_{i=1}^m \mathcal D^{(i)}_{b^R} b^{R,(i)}(X^R_s)\Big) \mathsf{d} s\\ & \qquad +\sum_{i=1}^m \int_0^t b^{R,(i)}(X^{R}_s)\mathsf{d}^{\mathsf{I}} W^{(i)}_s+\sum_{i=1}^{\ell} \int_0^t c^{R,(i)}(X^R_s)\mathsf{d} B^{H,(i)}_s \end{align*} for all $t \in [0,T]$. Since both sides are continuous in $t$ for almost all $\omega \in \Omega$, the exceptional set can be chosen independently of $t \in [0,T]$. Hence the assertion follows. \end{proof} \section{From the mixed equation to the rough paths equation} \label{mixedtorp} Throughout this section we assume that $a^M,b^M,c^M\in C^2_b$. Under this assumption the solution $X^M=(X^M_t)_{t\in[0,T]}$ to \eqref{mixed_eq_2} exists, is unique and satisfies $ \mathsf{E} \| X^M\|_{\theta}^p < \infty$ for all $p \geq 1$ and $\theta <1/2$, see \cite{Shev-Delay}. Now, we will show that $X^M$ is the solution to \eqref{Req2} with the coefficients \begin{align} \label{coeff_rel_2} a^R(x)=a^M(x)-\frac 1 2 \sum_{i=1}^m\mathcal D^{(i)}_{b^M}b^{M,(i)}(x),\,\, b^R(x)=b^M(x),\,\, c^R(x)=c^M(x), \,\, x \in \mathbb{R}^d. \end{align} In order to do this, we have to show that $(X^M(\omega), \sigma(X^M(\omega)))$ is a weakly controlled path based on $\{g(t)(\omega), \mathbf{G}_{t,s}(\omega)\}_{0\leq s<t\leq T}$ for almost all $\omega \in \Omega$, where $\sigma=(a^M,b^M,c^M)$. However, this is a consequence of the following two Lemmata. \begin{lemma} Let $h\in C^2(\mathbb{R}^d; \mathbb{R})$.
Then for all $\gamma\in (0,1/2)$ there exist almost surely finite random variables $K_{T,h,\gamma,g}$ such that $$ \left |\int_s^t (h(X^M_u)-h(X^M_s)) \mathsf{d} g(u) \right|\leq K_{T, h, \gamma,g} \cdot |t-s|^{2\gamma}, \qquad s,t \in [0,T], $$ where $ g \in \{ \operatorname{id}, B^{H,(i)}\}$ with $i \in \{1, \ldots, \ell\}$. \end{lemma} \begin{proof} Since $X^M\in C^\gamma([0,T])$ for all $\gamma <1/2$, the assertion is a direct consequence of the Young inequality, which gives $$ \left|\int_s^t (h(X^M_u)-h(X^M_s)) \mathsf{d} g(u) \right| \leq C_{\gamma,\gamma'} |t-s|^{\gamma + \gamma'} \| h(X^M) \|_{\gamma,[0,T]} \|g \|_{\gamma',[0,T]}$$ for $1-\gamma < \gamma' <H$. \end{proof} \begin{lemma} \label{estimInt} Suppose $(Z(t),\mathcal F_t)_{t\in [0,T]}$ is a stochastic process with $\theta$-H\"older trajectories for all $\theta\in (0,1/2)$, such that $ \mathsf{E} \|Z \|_{\theta}^p <\infty $ for all $p\geq 1$. Then, for all $\eta \in (0, \theta)$, there exists an almost surely finite random variable $K_{T,\eta}$ such that $$\left|\int_s^t (Z(v)-Z(s))\mathsf{d}^{\mathsf{I}} W_v \right|\leq K_{T,\eta}|t-s|^{1/2+\eta}, \qquad s,t \in [0,T].$$ \end{lemma} \begin{proof} Let $\theta \in (0,1/2)$. Applying the Garsia-Rodemich-Rumsey inequality, we obtain that $$ \sup_{\substack{t,s\in[0,T]\\t\neq s}} \frac{ \left|\int_s^t (Z(v)-Z(s))\mathsf{d}^{\mathsf{I}} W_v \right|}{|t-s|^{1/2+\eta}} \leq C_{\theta,\eta ,p} \left(\int_s^t\int_s^t \frac{|\int_x^y (Z(v)-Z(s)) \mathsf{d}^{\mathsf{I}} W_v|^{2p}}{|x-y|^{ (1 + 2\eta) p +2}} \mathsf{d} x\mathsf{d} y\right)^{1/2p}.$$ Now put $$ K_{T ,\eta }=\left(\int_s^t\int_s^t \frac{|\int_x^y (Z(v)-Z(s)) \mathsf{d}^{\mathsf{I}} W_v|^{2p}}{|x-y|^{ (1 + 2\eta) p +2}} \mathsf{d} x\mathsf{d} y\right)^{1/2p}.$$ The Burkholder-Davis-Gundy inequality gives $$ \mathsf{E} \left|\int_x^y (Z(v)-Z(s)) \mathsf{d}^{\mathsf{I}} W_v \right|^{2p} \leq C_{\theta,p} |y-x|^{(1+2\theta)p}.
$$ Choosing $p$ such that $2(\theta- \eta)p > 1$ we obtain \begin{align*} \mathsf{E} K_{T,\eta}^{2p} &= \int_s^t\int_s^t \frac{ \mathsf{E} |\int_x^y (Z(v)-Z(s)) \mathsf{d}^{\mathsf{I}} W_v|^{2p}}{|x-y|^{ (1 + 2\eta) p +2}} \mathsf{d} x\mathsf{d} y \leq C_{\theta,p} \int_s^t\int_s^t |x-y|^{ 2(\theta- \eta)p -2} \mathsf{d} x \mathsf{d} y < \infty . \end{align*} \end{proof} Now we can exploit the representation \eqref{crucial-1} and Proposition \ref{cp:weak-phi} to work backwards through the proof of Theorem \ref{rptomixed_thm}, which gives: \begin{theorem}\label{transfer_mixed_rough} Let $a^M,b^{M,(i)},c^{M,(j)}\in C^{2}_b(\mathbb{R}^d; \mathbb{R}^d)$, $i=1, \ldots, m$, $j=1, \ldots, \ell$, and suppose that $X^M=(X^M_t)_{t\in[0,T]}$ is the solution to \eqref{mixed_eq_2}. Then $X^M$ is a solution to the rough equation \eqref{Req2} with coefficients given by \eqref{coeff_rel_2}. \end{theorem} Note that the smoothness of the drift coefficient of the arising rough path SDE is $C^1_b$. Within the algebraic integration framework it is not known (to the best of our knowledge) whether such an equation has a unique solution. \section{Application to numerical methods} \label{rptomixed_appl} \subsection{Limit theorem for mixed equations} At the core of the theory of mixed equations is a limit theorem in \cite{mbfbm-limit}, which we will briefly recall here. Let $n \in \mathbb{N}$ and define $$ B^{H,n}_t=n\int_{(t- 1/n)\vee 0}^t B^H_s \mathsf{d} s, \qquad t \geq 0. $$ Note that $B^{H,n}$ is an $(\mathcal{F}_t)_{t \geq 0}$-adapted Gaussian process such that its trajectories are a.s.~differentiable and $$ \dot{B}^{H,n}_t=n(B^H_t-B^H_{(t-1/n)\vee 0}), \qquad t \geq 0.$$ Suppose that $X^{M,n}=(X^{M,n}_t)_{t\in [0,T]}$ is a solution to \begin{align} \label{mixed_approx} X^{M,n}_t=x_0+\int_0^t \big( a(X^{M,n}_s)+c(X^{M,n}_s) \dot{B}^{H,n}_s \big) \mathsf{d} s+\int_0^t b(X^{M,n}_s) \mathsf{d}^{\mathsf{I}} W_s, \qquad t \in [0,T].
\end{align} Then we have \begin{align} X^{M,n}\to X^M,\,n\to \infty\,\,\, \text{uniformly in probability}. \label{result} \end{align} Setting $g^n=(\operatorname{id},W,B^{H,n})$, we can apply the stability of rough paths equations and the relation between mixed and rough SDEs to recover and strengthen this result. First note that the integral $\int_0^t c(X^{R,n}_s) \mathsf{d}^{\mathsf R} B^{H,n}_s$ coincides with the ordinary Young integral $\int_0^t c(X^{R,n}_s)\dot{B}_s^{H,n} \mathsf{d} s$. Proceeding analogously to the proof of Theorem \ref{rptomixed_thm} we have: \begin{proposition} Let $\delta >0$ and $a, b^{(i)}, c^{(j)} \in C_b^{2,\delta}(\mathbb{R}^d;\mathbb{R}^d)$, $i=1, \ldots, m$, $j=1, \ldots, \ell$. Then the solution $X^{R,n}$ of the rough equation \begin{align} \label{eqSmoothed} X^{R,n}_t=x_0 & +\int_0^t \widetilde{a}(X^{R,n}_s) \mathsf{d} s +\int_0^t b(X^{R,n}_s) \mathsf{d}^{\mathsf R} W_s +\int_0^t c(X^{R,n}_s) \mathsf{d}^{\mathsf R} B^{H,n}_s, \quad t \in [0,T], \end{align} with $$\widetilde{a}(x)=a(x) - \frac 1 2 \sum_{i=1}^m \mathcal D^{(i)}_b b^{(i)}(x), \qquad x \in \mathbb{R}^d, $$ and the solution of equation \eqref{mixed_approx} coincide $P$-almost surely. \end{proposition} Our aim is now to prove: \begin{proposition}\label{aux_prop} Let $\widetilde{a},b^{(i)},c^{(j)}\in C^{2,\delta}_b(\mathbb{R}^d; \mathbb{R}^d)$, $i=1, \ldots, m$, $j=1, \ldots, \ell$, and $\gamma \in (1/3, 1/2)$. Then we have $$ \| X^{R,n}- X^R \|_{\gamma, \infty, [0,T]} \stackrel{P-a.s.}{\longrightarrow} 0 \qquad \textrm{as} \quad n \rightarrow \infty. $$ \end{proposition} This result directly implies: \begin{corollary}\label{app_mixed_cor} Let $a,b^{(i)},c^{(j)}\in C^{2,\delta}_b(\mathbb{R}^d; \mathbb{R}^d)$, $i=1, \ldots, m$, $j=1, \ldots, \ell$, $\gamma \in (1/3, 1/2)$ and $ \sum_{i=1}^m \mathcal D^{(i)}_b b^{(i)} \in C^{2,\delta}_b(\mathbb{R}^d; \mathbb{R}^d)$.
Then we have $$ \| X^{M,n}- X^M \|_{\gamma, \infty, [0,T]} \stackrel{P-a.s.}{\longrightarrow} 0 \qquad \textrm{as} \quad n \rightarrow \infty. $$ \end{corollary} Recalling that $g^n=(\operatorname{id},W,B^{H,n})$, Proposition \ref{aux_prop} follows from Theorem \ref{stability} and the following two Lemmata. (Note that the following estimates are not covered by Section 15.5 in \cite{Friz}, since $B^{H,n}$ is not a mollifier approximation.) \begin{lemma} For all $0 <\gamma< \gamma' < H$ there exists an almost surely finite random variable $K_{T,\gamma, \gamma'}$ such that $$ \| B^{H,n}- B^H \|_{\gamma, [0,T]} \leq K_{T,\gamma, \gamma'} \cdot n^{-(\gamma'-\gamma)}. $$ \end{lemma} \begin{proof} Clearly, it is sufficient to consider the one dimensional case. For $ t \leq 0$ define $B^H_t=0$. Fix $t,s\in [0,T]$ and $\gamma'\in (\gamma,H)$. First, consider the case $|t-s|\geq \frac 1 n$. Here, we have \begin{align*} |B^H_t-B^H_s-B^{H,n}_t+B^{H,n}_s| & \leq n \left|\int^t_{t-1/n}(B^H_u-B^H_t) \mathsf{d} u\right|+n\left|\int_{s-1/n}^s (B^H_u-B^H_s) \mathsf{d} u\right| \\ & \leq 2\norm{B^H}_{\gamma',[0,T]} \frac{1}{n^{\gamma'}}\leq 2 \norm{B^H}_{\gamma',[0,T]}|t-s|^{\gamma} \frac{1}{n^{\gamma'-\gamma}}. \end{align*} Next, when $|t-s|\leq \frac 1 n $ one has \begin{align*} |B^H_t-B^H_s-B^{H,n}_t+B^{H,n}_s| & \leq |B^H_t-B^H_s|+|B^{H,n}_t-B^{H,n}_s|\\ & \leq \norm{B^H}_{\gamma',[0,T]}|t-s|^{\gamma'} +n \left|\int_{-\frac 1 n}^0 (B^{H}_{t+u}-B^{H}_{s+u}) \mathsf{d} u \right| \\ & \leq 2\norm{B^H}_{\gamma',[0,T]}|t-s|^{\gamma'}\leq 2\norm{B^H}_{\gamma',[0,T]}|t-s|^{\gamma} \frac{1}{n^{\gamma'-\gamma}}. \end{align*} \end{proof} \begin{lemma} Let $1/2 <\gamma < \gamma' <H$. Then, there exists an almost surely finite random variable $K_{T,\gamma,\gamma'}$ such that $$ \| \mathbf G^n - \mathbf{G} \|_{\mathcal C^\gamma_2([0,T]^2)} \leq K_{T,\gamma,\gamma'} \cdot n^{-(\gamma'-\gamma)} .
$$ \end{lemma} \begin{proof} First, we prove the convergence of the elements of the L\'evy areas which correspond to the smoothed fBm to the ones of the fBm. Fix $t,s\in [0,T]$, $i,j\in \{1,\ldots,\ell\}$. Clearly, we have \begin{align*} \Delta^{(1)}(s,t) & = \left |\int_s^t (B^{H,(i)}_u-B^{H,(i)}_s)\mathsf{d} B^{H,(j)}_u-\int_s^t( B^{H,n,(i)}_u-B^{H,n,(i)}_s)\mathsf{d} B^{H,n,(j)}_u \right | \\ & \leq \left |\int_s^t \big( ( B^{H,(i)}_u-B^{H,(i)}_s)-(B^{H,n,(i)}_u-B^{H,n,(i)}_s) \big) \mathsf{d} B^{H,(j)}_u \right| \\ & \qquad +\left|\int_s^t ( B^{H,n,(i)}_u-B^{H,n,(i)}_s) \mathsf{d} (B^{H,n,(j)}_u-B^{H,(j)}_u)\right|. \end{align*} Set $Z_u^{n,(i)}=B^{H,(i)}_u-B^{H,n,(i)}_u$. Applying the Young inequality with $1/2< \mu <H$ yields \begin{align*} \left |\int_s^t (Z^{n,(i)}_u-Z^{n,(i)}_s)\mathsf{d} B^{H,(j)}_u \right | & \leq C_{\mu} \norm{B^{H,(j)}}_{\mu}\norm{Z^{n,(i)}}_{\mu}|t-s|^{2\mu}, \\ \left |\int_s^t (B^{H,n,(i)}_u-B^{H,n,(i)}_s)\mathsf{d} (B^{H,n,(j)}_u-B^{H,(j)}_u)\right| & \leq C_\mu \norm{B^{H,n,(i)}}_\mu \norm{Z^{n,(j)}}_{\mu}|t-s|^{2\mu}, \end{align*} i.e. we obtain \begin{align} \frac{\Delta^{(1)}(s,t)}{|t-s|^{2\mu}} \leq C_{\mu} \left( \norm{Z^{n,(i)}}_{\mu} \norm{B^{H,(j)}}_{\mu} + \norm{Z^{n,(j)}}_{\mu} \norm{B^{H,(i)}}_{\mu} + \norm{Z^{n,(j)}}_{\mu} \norm{Z^{n,(i)}}_{\mu} \right). \end{align} Now, we proceed with the parts that correspond to the iterated integrals which involve the Wiener process and the smoothed fBm. Fix $i\in\{1,\ldots, m\}$ and $j\in\{1,\ldots, \ell\}$, $s,t\in [0,T]$. Again, applying the Young inequality gives \begin{align*} \Delta^{(2)}(s,t)& =\left |\int_s^t (W^{(i)}_u-W^{(i)}_s) \mathsf{d} (B^{H,(j)}_u -B^{H,n,(j)}_u) \right| \\ & \leq C_{\lambda,\mu} \norm{Z^{n,(j)}}_\mu \norm{W^{(i)}}_\lambda |t-s|^{\lambda + \mu} \end{align*} with $0<\lambda<1/2, 0< \mu< H$ such that $\lambda + \mu >1$.
It is only left to deal with \begin{align*} \Delta^{(3)}(s,t)&= \left |\int_s^t(B^{H,(j)}_u-B^{H,(j)}_s)\mathsf{d} W^{(i)}_u-\int_s^t (B^{H,n,(j)}_u-B^{H,n,(j)}_s) \mathsf{d} W^{(i)}_u \right |. \end{align*} But using the integration by parts formula for Young integrals we obtain \begin{align*} \int_s^t (Z_u^{n,(j)}-Z_s^{n,(j)})\mathsf{d} W^{(i)}_u=(Z^{n,(j)}_t-Z^{n,(j)}_s)(W^{(i)}_t-W^{(i)}_s)-\int_s^t( W^{(i)}_u-W^{(i)}_s )\mathsf{d} Z^{n,(j)}_u. \end{align*} Using the previous step yields $$ \Delta^{(3)}(s,t) \leq C_{\lambda,\mu} \norm{Z^{n,(j)}}_\mu \norm{W^{(i)}}_\lambda |t-s|^{\lambda + \mu}. $$ The elements of the L\'evy area involving the component $\operatorname{id}$ and the ``smoothed'' fBms are easily treated. Here we have \begin{align*} \Delta^{(4)}(s,t):= \left| \int_s^t (Z^{n,(i)}_u - Z^{n,(i)}_s) \mathsf{d} u\right| \leq \norm{Z^{n,(i)}}_{\mu} |t-s|^{1+\mu}, \\ \Delta^{(5)}(s,t):= \left| \int_s^t (u-s) \mathsf{d} Z^{n,(i)}_u \right| \leq \norm{Z^{n,(i)}}_{\mu} |t-s|^{1+\mu}. \end{align*} Setting now $\gamma=(\lambda + \mu)/2$, the assertion follows from the previous lemma. \end{proof} \subsection{Constructing numerical methods for mixed equations} The almost sure convergence in the ${\gamma}$-H\"older norm of \begin{align} X^{M,n}_t=x_0+\int_0^t \big( a(X^{M,n}_s)+c(X^{M,n}_s) \dot{B}^{H,n}_s \big) \mathsf{d} s+\int_0^t b(X^{M,n}_s) \mathsf{d}^{\mathsf{I}} W_s, \qquad t \in [0,T], \label{mixed_num_2} \end{align} with $$ \dot{B}^{H,n}_t= n(B^H_t-B^H_{(t-1/n)\vee 0}), \qquad t \in [0,T], $$ to $X^M$ can be exploited to construct and to analyse numerical methods for mixed equations, proceeding similarly to \cite{DNT,Riedel}. In the latter references approximations of rough SDEs have been obtained by discretising their Wong-Zakai approximations.
For example, applying an Euler discretisation with stepsize $\Delta=1/n$ to \eqref{mixed_num_2} yields the approximation $$ x_{k+1} = x_{k} + a(x_k) \Delta + b(x_k)(W_{(k+1)\Delta} - W_{k\Delta}) + c(x_k)(B^H_{k\Delta} - B^H_{(k-1)\Delta}), \qquad k=0,1, \ldots $$ where $B^H_{-\Delta}=0$. Equation \eqref{mixed_num_2} is an It\=o SDE with random coefficients, the only technical difficulty being the unboundedness of $\dot{B}^{H,n}$. Using a localization procedure as e.g. in \cite{num_math} and standard estimates involving the It\=o isometry and Gronwall's lemma one can show that $$ \sup_{k=0, \ldots ,\lceil T/\Delta \rceil} | X^{M,n}_{k \Delta} - x_{k}| \stackrel{P-a.s.}{\longrightarrow} 0, \qquad n \rightarrow \infty.$$ Corollary \ref{app_mixed_cor} then implies the convergence of this skewed Euler scheme, i.e. $$ \sup_{k=0, \ldots, \lceil T/\Delta \rceil} | X^{M}_{k \Delta} - x_{k}| \stackrel{P-a.s.}{\longrightarrow} 0, \qquad n \rightarrow \infty.$$ \subsection{The natural Euler scheme for rough SDEs} Using the correction formula, one can establish the convergence of the ``natural'' Euler scheme \begin{align} \label{euler_rp} x_{k+1} = x_{k} & + \Big( a(x_k) + \frac 1 2 \sum_{i=1}^m \mathcal D^{(i)}_b b^{(i)}(x_k) \Big) \Delta \\ & + b(x_k)(W_{(k+1)\Delta} - W_{k\Delta}) + c(x_k)(B^H_{(k+1)\Delta} - B^H_{k\Delta}), \qquad k=0,1, \ldots \nonumber \end{align} for the rough SDE \begin{align} \label{rough_num_2} X^{R}_t=x_0+\int_0^t a(X^{R}_s) \mathsf{d} s + \int_0^t b(X^R_s) \mathsf{d}^{\mathsf R} W_s + \int_0^t c(X^R_s) \mathsf{d}^{\mathsf R} B_s^H, \qquad t \in [0,T], \end{align} at least for $m=\ell=1$. The notion ``natural'' is based on the following observations: for $b=0$ equation \eqref{rough_num_2} is an SDE driven by fractional Brownian motion with Hurst parameter $H>1/2$, for which \eqref{euler_rp} with $b=0$ is a convergent approximation, see e.g.
\cite{Davie,Friz}, while for $c=0$ equation \eqref{rough_num_2} is a Stratonovich SDE, for which \eqref{euler_rp} with $c=0$ is again a convergent scheme, see e.g. \cite{KP}. Using the results of \cite{ShevMishura-Euler} and Theorem \ref{transfer_mixed_rough} we have: \begin{proposition} Let $a,b,c\in C^{2,\delta}_b(\mathbb{R}; \mathbb{R})$. Moreover let $ b'b \in C^{2,\delta}_b(\mathbb{R}; \mathbb{R})$ and $\inf_{x \in \mathbb{R}} c(x)>0$. Then there exists $C>0$ such that $$ \sup_{k=0, \ldots, \lceil T/ \Delta \rceil} \left( \mathsf{E} | X_{k \Delta}^R - x_k|^2 \right)^{1/2} \leq C \cdot \big( \Delta^{1/2} + \Delta^{2H-1} \big).$$ \end{proposition}
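To make the skewed Euler scheme concrete, here is a minimal Python sketch. It is ours and not part of the paper: it assumes the scalar case $d=m=\ell=1$, samples the fBm exactly via a Cholesky factorisation of its covariance (a standard technique not discussed in the text), and all function names are our own choice.

```python
import numpy as np

def fbm_path(n_steps, dt, hurst, rng):
    # Exact simulation of (B^H_{k dt})_{k=0..n_steps} via a Cholesky
    # factorisation of the fBm covariance
    #   E[B^H_t B^H_s] = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2.
    t = dt * np.arange(1, n_steps + 1)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    values = np.linalg.cholesky(cov) @ rng.standard_normal(n_steps)
    return np.concatenate(([0.0], values))  # prepend B^H_0 = 0

def skewed_euler(a, b, c, x0, T, n, hurst=0.75, seed=0):
    # Skewed Euler scheme for the mixed SDE (scalar case):
    #   x_{k+1} = x_k + a(x_k) D + b(x_k) (W_{(k+1)D} - W_{kD})
    #                  + c(x_k) (B^H_{kD} - B^H_{(k-1)D}),  B^H_{-D} = 0,
    # i.e. the fractional increment is lagged by one step.
    dt = T / n
    rng = np.random.default_rng(seed)
    dW = np.sqrt(dt) * rng.standard_normal(n)
    B = fbm_path(n, dt, hurst, rng)  # B[k] = B^H_{k dt}
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dB_lagged = B[k] - B[k - 1] if k >= 1 else 0.0
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dW[k] + c(x[k]) * dB_lagged
    return x
```

The one-step lag of the fBm increment mirrors $\dot B^{H,n}_t = n(B^H_t - B^H_{(t-1/n)\vee 0})$ with $\Delta = 1/n$; the convergence statements in the text do not depend on the particular fBm sampler.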
Palazzo Bo (in Catalan, palau Bo) is the main building of the University of Padua, in the centre of Padua, opposite Palazzo Moroni, the seat of the city council. In the first decades of the century, the various schools, scattered across the city's districts, were moved to the complex of buildings known as Palazzo Bo, a name derived from the emblem of the famous inn Hospitium Bovis, or 'inn of the ox' ("Bo" in the Venetian language), located near the old street of the butchers' shops. At the end of the century, groups of houses belonging to the city's patriciate arose in this area, among them precisely the one that would later be occupied by the Hospitium Bovis. The renovation works for university use began in 1493 and ended at the beginning of the century, while a new series of interventions was carried out from 1889 onwards. The building as a whole, including the modern courtyard, was completed between 1938 and 1942, at the behest of the then rector, Carlo Anti, by the architect Ettore Fagiuoli, while the artistic decoration and the furnishings are the work of the famous architect Gio Ponti.

The old courtyard and the coats of arms

Begun in 1546, it is the work of Andrea Moroni, the greatest architect of Padua of the mid-16th century. It is one of the most beautiful Renaissance constructions in the city, surrounded by a double loggia in two orders, with Doric columns on the lower level and Ionic columns on the upper one. The walls and vaults of the arcades are entirely decorated with the coats of arms of the rectors and councillors of the two universitates, artista and jurista, dating from the period from 1592 to 1688, the year in which the Republic of Venice forbade the placing of "further memorials in the Bo". The Aula Magna is also decorated with original coats of arms.

Aula Magna

From the 16th century onwards it housed the "Great School of Legists" and lectures were given there: Galileo Galilei, to whom the hall is dedicated today, also taught here. In the first half of the century it served as a drawing classroom.
To be used as the Aula Magna, it was restored (1854-56) and decorated with the ceiling frescoes, at the centre of which is the allegory Wisdom and the Other Disciplines, the work of Giulio Carlini. The back wall, where the members of the Academic Senate sit during the most important ceremonies (the opening of the academic year, the award of honorary degrees, etc.), is the work of Gio Ponti (1942). On it the old motto of the university can be read: Universa Universis Patavina Libertas.

Hall of the Forty

The hall takes its name from the 40 portraits placed on its walls: illustrious foreigners, students at Padua but coming from every country of Europe. Painted in tempera by Giangiacomo dal Forno (1942), without any claim to iconographic fidelity, they portray, among others: Antonio Augustin, ambassador of popes and of Philip II; Michel de L'Hospital, collaborator of Catherine de' Medici and chancellor of France; Thomas Linacre, physician to Henry VIII and lecturer at Oxford; William Harvey, famous for his studies on the circulation of the blood and founder of the English medical school; Olof Rudbek the Elder, lecturer in botany, anatomy and medicine at Uppsala University and promoter of a botanical garden on the Paduan model; Thomas Bartholin, one of the founders of the Danish medical school; Nicholas of Cusa, illustrious German philosopher and cardinal; Werner Rolfinck, promoter of the studies of anatomy and chemistry in Germany; Peter Vasiljevic Postnikov, sent to Padua by Peter I of Russia to study medicine; Stephen Báthory, a Hungarian who became king of Poland in 1576; Ioannis Kapodistrias, a Greek, appointed in 1828 president-dictator of the Hellenic government; Emanuele Sciascian, an Armenian, physician to the imperial court of Constantinople and promoter of the first higher institute of medicine in Turkey.
Galileo's chair

The Hall of the Forty houses the chair which, according to tradition, the students built so that Galileo could teach in the "great hall of the legists" (the present Aula Magna), because the crowd that flocked to his lectures did not fit into the other halls. The chair was kept in the Aula Magna until the mid-19th century. Galileo taught at the University of Padua for eighteen years (1592-1610), which he remembered as the best of his life. He was much admired by the students and protected by the Venetian government; in Padua he initiated the modern scientific method.

Anatomical theatre

Its construction was entrusted in 1594 to the celebrated professor of anatomy Hieronymus Fabricius of Acquapendente, following the suggestions of Fra Paolo Sarpi. It is the first permanent theatre of its kind in the world, since previously, in order to attend the autopsies, demountable structures were built. It is the oldest such theatre and is still perfectly preserved. It is a wooden structure in the shape of an inverted cone, with an elliptical plan, with six concentric tiers of steps rising around the anatomy table. The balusters are of carved walnut wood. At first the windows were blind (they were opened only in 1844) and the anatomy lesson took place by the light of candles and torches. Used for teaching until 1872, the theatre underwent modifications in the years 1842-44 and was restored in 1991-92. In the small room adjacent to the theatre, known in the past as the "kitchen" of the theatre, that is, the place where the bodies to be dissected were prepared, there is a small permanent exhibition.

Medicine hall

One of the most beautiful academic halls and one of the oldest in the building is the hall that today hosts the defences of the degree theses of students of medicine and other faculties. It is the only hall where the theoretical lessons of anatomy were given, but its origins are older still.
The carved ceiling, perfectly preserved, and the typically medieval frieze decorating the walls recall that the hall formed part of one of the three noble houses of the Carrara family, which constituted the nucleus on which the inn of the Bo arose.

First woman graduate in the world

At the base of one of the two broad staircases leading to the upper portico of the Old Courtyard stands the statue of Elena Lucrezia Cornaro Piscopia, the first woman in the world to obtain a university degree, who in 1678 earned the degree in philosophy at the University of Padua.

References

Page of the University of Padua with a brief explanation of Palazzo Bo (in Italian). Visitor's brochure of Palazzo Bo (in Italian).
Q: Replace content from a Span by "..." after 70% of parent's width

UPDATE: ACTUAL PROBLEM: Why does "(ciclo)" go to a line a bit lower than the rest? http://jsfiddle.net/MV8q5/2/

Example: http://jsfiddle.net/vAhCS/

I have this structure:

<div>
    <span class="limitCharacters">CONTENT TO LIMIT</span><span>(ciclo)</span>
</div>

What I want is that, if the limitCharacters span's content occupies more than 70% of its parent div's width, jQuery must remove the rest of the span, replacing it with a "...", so that it shows in a single line the remaining content of the span, plus the "...", plus another inline span that goes right after this limitCharacters span (in the above code, "(ciclo)").

I solved this by calculating limitCharacters' percentage width based on its parent (it is calculated correctly), and if this result is 70 or more, the span's width is adjusted to 60%. The problem is that I must avoid any line break, and this can only be done with the white-space: nowrap CSS property, with the result that the 60% does not apply.

What I actually need is this: What I actually have is this (see jsfiddle): If I avoid using the nowrap property, it breaks the line. If I use overflow: hidden, it takes out the "..." and puts the second span ("ciclo") at a lower line height for some reason I do not know (see example): http://jsfiddle.net/vAhCS/

A: So you can use some CSS:

.limitCharacters {
    width: 70%; /* or you can put some fixed pixel value for 70% */
    white-space: nowrap;
    overflow: hidden;
    text-overflow: ellipsis;
}

I'm not sure how well a span would work, but a div would work well. Check this fiddle: http://jsfiddle.net/MV8q5/

Updated answer: For the updated part of your question: it's because "ciclo" is outside the div and in a span.
If you want it on the same line, put the <span>(ciclo)</span> into the div tag like this: http://jsfiddle.net/MV8q5/3/

If you want it on a new line, put it in a new div like this: http://jsfiddle.net/MV8q5/4/

A: You could also use text-overflow: ellipsis; cf. CSS-Tricks.

A: You don't need to do this with jQuery. (Shock?) Pure CSS will be sufficient. Take a look at this example. I have added background colors to make it really obvious which elements are which.

Live Demo Example: http://jsfiddle.net/vAhCS/2/

CSS:

.tableContent {
    width: 250px;
    background: red;
}

.contentColumn40 {
    height: 36px;
    width: 70%;
    border: 1px solid #e0e0e0;
    display: inline-block;
    overflow: hidden;
    background: yellow;
}

.limitCharacters {
    display: block;
    white-space: nowrap;
    width: 100%;
    overflow: hidden;
    text-overflow: ellipsis;
}
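For reference, a single self-contained snippet combining the questioner's markup with the pure-CSS ellipsis from the last answer could look as follows. This is an illustration, not code from the thread; the inline styles and the 250px wrapper width are assumptions.

```
<!-- Illustrative only: the ellipsis is produced purely by CSS, and
     "(ciclo)" stays on the same line because both spans sit in a
     nowrap container. -->
<div style="width: 250px; white-space: nowrap;">
  <span class="limitCharacters"
        style="display: inline-block; max-width: 70%; overflow: hidden;
               text-overflow: ellipsis; white-space: nowrap;
               vertical-align: bottom;">
    CONTENT TO LIMIT THAT IS MUCH TOO LONG FOR THE BOX
  </span>
  <span>(ciclo)</span>
</div>
```

The vertical-align is worth noting: per CSS 2.1, an inline-block whose overflow is not visible gets its baseline moved to its bottom margin edge, which is a plausible explanation for the "(ciclo) sits a bit lower" symptom described in the update.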
# Easy Probability Solved Question (Aptitude Discussion)

Q. In a race, the odds in favour of cars $P, Q, R, S$ are $1:3$, $1:4$, $1:5$ and $1:6$ respectively. Find the probability that one of them wins the race.

✖ A. 9/17
✖ B. 114/121
✔ C. 319/420
✖ D. 27/111

Solution: Option (C) is correct.

Let the probability of winning the race be denoted by $P(\text{person})$:

$$P(P)=\dfrac{1}{4},\; P(Q)=\dfrac{1}{5},\; P(R)=\dfrac{1}{6},\; P(S)=\dfrac{1}{7}$$

All the events are mutually exclusive (since if one of them wins, the others lose, as pointed out by rahul), hence the required probability is

$$P(P)+P(Q)+P(R)+P(S)=\dfrac{1}{4}+\dfrac{1}{5}+\dfrac{1}{6}+\dfrac{1}{7}=\dfrac{319}{420}.$$

## (11) Comment(s)

Anonymous: The probability of winning is not correct. It should be 3/4 + 4/5 + 5/6 + 6/7. If odds are stated as an A:B chance of success, then the probability of success is given as P = B / (A + B).

Shristi: if one wins then the others are definitely losing, so we have to multiply their losing probabilities too.

Payal: the probabilities taken for the calculation are for their failure. I guess we need to use 1 - 1/4, etc. for all, to make sure only one wins.

Payal: it should be 3/4 + 4/5 + 5/6 + 6/7.

Poonam Pipaliya: can't we do it like this: $P(A)(1-P(B))(1-P(C))$, because here one wins and the others definitely lose.

Miya: That is only one of the possibilities; what about $P(B)(1-P(A))(1-P(C))$ or similar events where $B$ wins? One may proceed with your approach, but that would take a lot of counting for larger numbers. It would be better to take advantage of counting techniques such as the one used in the solution.

Rahul: how are the events mutually exclusive? If P wins the race, then the others will lose the same race.

Deepak: Yes, Rahul, you answered it yourself. Based on your input, I updated the solution. Thank you for making it more useful for others.

Sumit: I think the answer is wrong.

Abhishek: kindly explain, how come it is mutually exclusive....

Vikas: Question has different values than answer values!
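As a quick sanity check of the accepted solution's arithmetic (this snippet is ours, not part of the original page), exact rational arithmetic confirms the sum:

```python
from fractions import Fraction

# Odds in favour of a:b correspond to a winning probability of a / (a + b),
# so 1:3, 1:4, 1:5, 1:6 give 1/4, 1/5, 1/6, 1/7.
odds = [(1, 3), (1, 4), (1, 5), (1, 6)]
probs = [Fraction(a, a + b) for a, b in odds]

# "Car P wins", ..., "car S wins" are mutually exclusive events,
# so the probability that one of them wins is the plain sum.
total = sum(probs)
print(total)  # 319/420
```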
Our board-certified physicians are among the world's best spine surgeons. Specializing in neurological and orthopedic spine conditions, we are pioneers in minimally invasive surgery. Forget major incisions and lengthy hospital stays. Our team routinely performs complex spinal procedures on an outpatient basis. At DISC, you benefit from coordinated care across physician specialties. Using a range of diagnostic and treatment procedures, our physicians first determine the underlying cause of your pain and identify the best solutions for your lifestyle. Commonly treated conditions include herniated discs, sciatica, and lumbar stenosis. Procedures include minimally invasive spine surgery, spine fusion, anterior cervical discectomy and fusion, disc replacement, lateral lumbar interbody fusion, and ALIF. Minimally invasive spine surgery (MISS) is based on innovative techniques, cutting-edge technology, and evidence-based medicine. MISS allows us to deliver better results, faster recoveries, and lower scarring than traditional surgery. With MISS, surgeons use high-powered microscopes to operate through small incisions, thus minimizing trauma to the surrounding tissue. Our physicians have even helped develop these microscopes. Microsurgical techniques translate to reduced post-operative pain and consistent, lasting relief for spinal disorders as well as chronic neck and back pain. Our practice combines the knowledge of world-renowned experts across specialties. See their takes on the latest industry news and advice. Exercises aimed at relieving the symptoms of cervical disc disease and chronic neck pain may be just what your body needs!
module.exports = require( "./script/main.js" );
how to solve $\displaystyle y''+(4i+1)y'+y=0$?

The same way you solve any linear constant-coefficient homogeneous equation. Use a trial solution of the form $\displaystyle y(x)=e^{\lambda x}$, get the characteristic equation, which you solve; then the general solution is a linear combination of the solutions found this way (with the usual caveats for double roots).
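As an illustrative cross-check of the advice above (this snippet is ours, not part of the original thread): the trial solution $y(x)=e^{\lambda x}$ turns the ODE into the characteristic equation $\lambda^2+(4i+1)\lambda+1=0$, which can be solved numerically over the complex numbers.

```python
import cmath

# Characteristic polynomial of y'' + (4i+1) y' + y = 0:
#   p(L) = L**2 + (1 + 4i) L + 1
b = 1 + 4j
disc = cmath.sqrt(b * b - 4)  # discriminant b^2 - 4ac with a = c = 1
roots = [(-b + disc) / 2, (-b - disc) / 2]

# The roots are distinct, so the general solution is
#   y(x) = C1 * exp(roots[0] * x) + C2 * exp(roots[1] * x).
for L in roots:
    print(L, abs(L * L + b * L + 1))  # residual should be ~ 0
```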
Q: Hidden YouTube autoplay on click image

Need some help please. I am making an unordered list that alternates between an image and an embedded iframe from YouTube. When clicking on the image, it replaces the image with the hidden iframe above it, using the .replaceWith and .prev functions of jQuery. I was wondering if, after clicking on the image, the YouTube video could autoplay; when adding autoplay to the YouTube parameters themselves, it starts on page load, even when set to display: none.

I am doing this using only classes, because the list will get big and I don't want to add separate jQuery for every single id. I am not sure if this is even possible using only classes. Could someone guide me a little? This is what I came up with so far:

javascript:

$('.coverimageforplayer').click(function () {
    $(this).replaceWith($(this).prev('li.showme').show());
});

html:

<ul>
    <li class="showme" style="display:none">
        <iframe width="480" height="360" src="//www.youtube.com/embed/TZMoS2QBc8U?autoplay=0" frameborder="0" allowfullscreen></iframe>
    </li>
    <li class="coverimageforplayer">
        <img src="http://www.nasa.gov/images/content/711375main_grail20121205_4x3_946-710.jpg" />
    </li>
    <li class="showme" style="display:none">
        <iframe class="iframer_1" width="480" height="360" src="//www.youtube.com/embed/TZMoS2QBc8U" frameborder="0" allowfullscreen></iframe>
    </li>
    <li class="coverimageforplayer">
        <img src="http://www.nasa.gov/images/content/711375main_grail20121205_4x3_946-710.jpg" />
    </li>
</ul>

thank you very much

A: I would ditch the showme list item as this adds unnecessary HTML and makes your code too complex in the long run. I've written a small fiddle to demonstrate a more elegant approach: stash the video URL in a data attribute on the list image, then insert an iframe on click and voila!
Here's the fiddle: http://jsfiddle.net/TxbHx/1/

The html:

<ul>
    <li>autoplays:</li>
    <li class="coverimageforplayer" data-videoSRC="//www.youtube.com/embed/TZMoS2QBc8U?autoplay=1">
        <img src="http://www.nasa.gov/images/content/711375main_grail20121205_4x3_946-710.jpg" />
    </li>
    <li>Does not autoplay:</li>
    <li class="coverimageforplayer" data-videoSRC="//www.youtube.com/embed/TZMoS2QBc8U?autoplay=0">
        <img src="http://www.nasa.gov/images/content/711375main_grail20121205_4x3_946-710.jpg" />
    </li>
</ul>

The javascript:

$('.coverimageforplayer').on('click', function() {
    // Reuse variables for best practice. http://code.tutsplus.com/tutorials/quick-tip-jquery-newbs-stop-jumping-in-the-pool--net-22142
    var element = $(this);
    // Get the video URL from the data attribute
    var videoSRC = element.attr('data-videoSRC');
    // Create the iframe string
    var iframe = '<iframe width="480" height="360" src="' + videoSRC + '" frameborder="0" allowfullscreen></iframe>';
    // Insert the iframe
    element.html(iframe);
});
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,955
Wawrzyńcowice [], German Lorenzdorf, is a village located in the administrative district of Gmina Strzeleczki (Gemeinde Klein Strehlitz), within Krapkowice County, Opole Voivodeship, in the south-western Polish region of Upper Silesia. It lies approximately 7 kilometres south-west of Strzeleczki (Klein Strehlitz), 12 kilometres south-west of Krapkowice, and 29 kilometres south of the regional capital Opole. Before 1945 the area was part of Germany. Since 2006 the village, like the rest of the commune, has been bilingual in German and Polish. The village has a population of only 86 people.

History
The village arose in 1679 as Wawrzinowice. The name of the village derives from the name Lawrence (Wawrzyniec in Polish, Lorenz in German), which is also the derivation of the village's German name, Lorenzdorf (Lawrence's village). Initially the village was in the possession of the noble House of Schaffgotsch; in 1821 it passed into the hands of Baron Seherr-Thoss, who sold it in the 1860s to Major Thiele-Winckler von Miechowitz, whose family owned the village until the Second World War. Before 1945 it belonged to the district of Landkreis Neustadt O.S. In 1945 Silesia was handed over to Poland and the German population of Lorenzdorf was largely expelled. The village was renamed Wawrzyńcowice and annexed to the newly created Silesian Voivodeship. In 1950 it was reassigned to the Opole Voivodeship, and in 1999 it was reassigned from Prudnik County (formerly Neustadt O.S.) to Krapkowice County. On 17 May 2006 the entire municipality of Strzeleczki/Klein Strehlitz was declared bilingual in German and Polish, and on 24 November 2008 the village's former German name, Lorenzdorf, was also made official.

References
External links
Opole Voivodeship
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,886
Telehouse is a major carrier-neutral colocation, information and communications technology services provider based in Docklands, London. Established in 1988, it operates eight facilities in London, Paris and Frankfurt. Part of the global Telehouse network of data centres, the brand has 45 colocation facilities in 26 major cities around the world, including Moscow, Istanbul, Johannesburg, Cape Town, Beijing, Shanghai, Hong Kong, Singapore, Vietnam, Seoul, Tokyo, New York and Los Angeles. KDDI, Telehouse's Japanese telecommunications and systems integration parent company, operates data centre facilities in America and Asia.

Operations

London
Operational since 1990, Telehouse North became Europe's first purpose-built neutral colocation facility. LINX traffic has been moving through the carrier-neutral Telehouse campus since its opening, and Telehouse hosts the vast majority of internet peering traffic from LINX. It is the main hub of the Internet in the United Kingdom. In response to growing demand for a Central London location, Telehouse opened an additional colocation facility in 1997, Telehouse Metro, in the London Borough of Islington near Silicon Roundabout. A second building at the Docklands site, Telehouse East, was opened in 1999, and the construction of a third building, Telehouse West, at the Docklands site was completed in March 2010. In July 2014 KDDI announced that a fourth building, North Two, would be built on the site, adjacent to the existing Telehouse North building. In August 2016, Telehouse Europe opened the $177 million North Two data centre of 24,000 square metres, increasing its capacity at the Docklands site, where it already had 73,000 square metres of space. According to Telehouse, North Two is the only UK data centre to own a 132 kV on-campus grid substation that is directly connected to the National Grid, reducing transmission losses and improving power density and service continuity.
North Two also utilizes the first multi-storey adiabatic cooling system in the world, delivering an industry-leading PUE of 1.16. The site has a capacity of up to 73 MVA in total. Telehouse London has been the primary home of the London Internet Exchange (LINX) since 1994; owing to the number of carriers present in the data centre and the low latency on offer, Telehouse is one of the key Internet hubs in the world. LINX, Packet Exchange, LONAP and LIPEX are present at Telehouse Docklands. In 2012, Telehouse built its own primary substation, rated at 50 MVA, with two 132 kV power lines directly connected to the high-voltage power network for the London Docklands site. As of March 2019, Telehouse London North is listed as the most populated data centre in EMEA in data-centre rankings, working with over 530 network carriers, ISPs and ASPs, including Amazon Web Services, Google Cloud Platform and Microsoft Azure, as well as CenturyLink, Hurricane Electric, Interoute, Voxility, TeliaSonera and NTT Communications. Existing Telehouse customers can interconnect with any of these parties via a cross connect.

Paris
Telehouse Europe operates three sites in Paris:
Telehouse Paris Jeûneurs. Opened in 1996 near the famous Rue du Sentier.
Telehouse Paris Voltaire, which opened in 1998.
Telehouse Magny-Les-Hameaux (Yvelines département). Originally a national military defence location, the site was converted into a data centre in 2009 and now provides 15,000 square metres of colocation space.

Frankfurt
Telehouse Europe also operates one site in Frankfurt: Telehouse Frankfurt. Opened in 2012, when Telehouse acquired one of Germany's largest colocation sites, Databurg.
{ "redpajama_set_name": "RedPajamaWikipedia" }
568
How to find open access book publishers
Published 30 September 2020

The Directory of Open Access Books is the largest resource to find open access book publishers, with around 400 publishers listed. Other ways to find open access book publishers include the OAPEN list of compliant publishers, the list of OASPA members and various platforms that host open access books. The most extensive resource to find open access book publishers is the Directory of Open Access Books (DOAB). DOAB lists academic, peer-reviewed books that are available under an open licence. Publishers that apply to have their books listed here are screened for their peer review process and licensing policy. DOAB provides the option to browse by publisher, which results in an alphabetical list of around 400 publishers, followed by the number of open access books which they have listed in DOAB. You can also browse by subject, to find publishers who work with authors in your field. If you are interested in a particular publisher, click on the link, and you will find the URL of their website and, in many cases, tabs with more information: 'about', 'peer review' and 'licence'. If you are looking for publishers in your language area, start by searching for a subject and then select the language area of your choice on the results page. OAPEN maintains a list of publishers that comply with the open access requirements of European research funders, currently the European Research Council (ERC), Wellcome, the Austrian Science Fund (FWF) and the Swiss National Science Foundation (SNSF). The list aims to inform authors about compliant publishers. Listed publishers need to confirm their compliance to be included (OAPEN n.d.). Many book publishers are members of OASPA, the Open Access Scholarly Publishing Association, which means they fulfil OASPA membership criteria.
Another way to find publishers is to search for open access books and book publishers on hosting platforms, such as OAPEN, OpenEdition, Project Muse, JSTOR and ORL. A more extensive list is curated by Open Book Publishers and can be found here. Once you have found one or more potential publishers, find out if they answer your specific needs (See How to choose a publisher).

COVID19: Information and Resources from OBP. (2020). Retrieved from openbookpublishers.com: https://blogs.openbookpublishers.com/covid19-information-and-resources-from-obp/#open-books
DOAB (n.d.). Retrieved from https://www.doabooks.org/
OAPEN (n.d.). List of compliant publishers. Retrieved from https://www.oapen.org/researchers/13516596-funder-compliant-publishers
OASPA (n.d.). Members. Retrieved from https://oaspa.org/membership/members/
OASPA (n.d.). Membership criteria. Retrieved from https://oaspa.org/membership/membership-criteria/
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,662
using System;
using System.Runtime.InteropServices;

internal static class Example
{
    [STAThread()]
    public static void Main()
    {
        SolidEdgeFramework.Application objApplication = null;
        SolidEdgeDraft.DraftDocument objDraftDocument = null;
        SolidEdgeDraft.Sheet objSheet = null;
        SolidEdgeFrameworkSupport.Balloons objBalloons = null;
        SolidEdgeFrameworkSupport.Balloon objBalloon = null;

        try
        {
            // Register the OLE message filter so busy COM calls are retried.
            OleMessageFilter.Register();

            // Connect to a running instance of Solid Edge.
            objApplication = (SolidEdgeFramework.Application)Marshal.GetActiveObject("SolidEdge.Application");

            // ActiveDocument is typed as Object; cast it to the draft document type.
            objDraftDocument = (SolidEdgeDraft.DraftDocument)objApplication.ActiveDocument;
            objSheet = objDraftDocument.ActiveSheet;

            // Get the first balloon on the active sheet.
            objBalloons = objSheet.Balloons;
            objBalloon = objBalloons.Item(1);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        finally
        {
            OleMessageFilter.Revoke();
        }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
2,184
CD reviews: Brian Maes; "Songs for Madeline" (Briola)
With "Songs for Madeline," singer-songwriter-producer-bandleader and all-around North Shore rock utility man Brian Maes strips it back to just his songs, his voice and a grand piano on a little valentine of an album dedicated to his newborn daughter. In eight short and sweet cuts, Maes shows that his melodic gifts, sweet lyrics and formidable keyboard chops need no further adornment than the grand acoustics of Memorial Hall in Melrose. -- KEVIN R. CONVEY

The Unicorns: "Who Will Cut Our Hair When We're Gone?" (Alien8)
Cute, clever and possibly higher than the proverbial kite, Montreal's Unicorns make lovely experimental indie-pop music with a psychedelic, electronic bent. "Who Will Cut Our Hair When We're Gone?" is the band's second CD, but the first as a trio with a drummer. It is immediately reminiscent of Starlight Mints, minus the strings, and connects to the Flaming Lips' idealistic artpop. With the CD's mortality theme, some songs brood. Still, there are enough swooning, playful, fuzzy moments -- ranging from bash-it-out punk to funky superfly soul -- for sweetness to outdo the dour. -- LINDA LABAN

Alan Broadbent: "You and the Night and the Music" (A440)
A barebones trio is not what first comes to mind when discussing New Zealand pianist Alan Broadbent. He's usually associated with arranging for big bands, as he did three decades ago for Woody Herman and continues to do for contemporary singers and orchestras. But here, with only bassist Brian Bromberg and drummer Joe Labarbera joining him, he ignores the idea of precise charts and just lets his fingers go. The piano trio has always been a stomping ground for jazz improvisation. Broadbent is nimble and inventive all through these standards. Some of Bromberg's strongest soloing is on "I Wish I Knew." It would have been more fun to hear a more sprightly "Baubles, Bangles, and Beads."
But that's made up for on the busy, upbeat closer, "Dearly Beloved," an even crazier outing for Bromberg and his bass. -- ED SYMKUS
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,701
' Licensed to the .NET Foundation under one or more agreements.
' The .NET Foundation licenses this file to you under the MIT license.
' See the LICENSE file in the project root for more information.

Imports System.Collections.Immutable
Imports System.Composition
Imports System.Threading
Imports Microsoft.CodeAnalysis.ChangeNamespace
Imports Microsoft.CodeAnalysis.Host.Mef
Imports Microsoft.CodeAnalysis.LanguageServices
Imports Microsoft.CodeAnalysis.Text
Imports Microsoft.CodeAnalysis.VisualBasic.Syntax

Namespace Microsoft.CodeAnalysis.VisualBasic.ChangeNamespace
    <ExportLanguageService(GetType(IChangeNamespaceService), LanguageNames.VisualBasic), [Shared]>
    Friend Class VisualBasicChangeNamespaceService
        Inherits AbstractChangeNamespaceService(Of NamespaceStatementSyntax, CompilationUnitSyntax, StatementSyntax)

        <ImportingConstructor>
        Public Sub New()
        End Sub

        Public Overrides Function TryGetReplacementReferenceSyntax(reference As SyntaxNode, newNamespaceParts As ImmutableArray(Of String), syntaxFacts As ISyntaxFactsService, ByRef old As SyntaxNode, ByRef [new] As SyntaxNode) As Boolean
            Dim nameRef = TryCast(reference, SimpleNameSyntax)
            old = nameRef
            [new] = nameRef

            If nameRef Is Nothing Or newNamespaceParts.IsDefaultOrEmpty Then
                Return False
            End If

            If syntaxFacts.IsRightSideOfQualifiedName(nameRef) Then
                old = nameRef.Parent

                If IsGlobalNamespace(newNamespaceParts) Then
                    [new] = SyntaxFactory.QualifiedName(SyntaxFactory.GlobalName(), nameRef.WithoutTrivia())
                Else
                    Dim qualifiedNamespaceName = CreateNamespaceAsQualifiedName(newNamespaceParts, newNamespaceParts.Length - 1)
                    [new] = SyntaxFactory.QualifiedName(qualifiedNamespaceName, nameRef.WithoutTrivia())
                End If

                [new] = [new].WithTriviaFrom(old)
            ElseIf syntaxFacts.IsNameOfMemberAccessExpression(nameRef) Then
                old = nameRef.Parent

                If IsGlobalNamespace(newNamespaceParts) Then
                    [new] = SyntaxFactory.SimpleMemberAccessExpression(SyntaxFactory.GlobalName(), nameRef.WithoutTrivia())
                Else
                    Dim memberAccessNamespaceName = CreateNamespaceAsMemberAccess(newNamespaceParts, newNamespaceParts.Length - 1)
                    [new] = SyntaxFactory.SimpleMemberAccessExpression(memberAccessNamespaceName, nameRef.WithoutTrivia())
                End If

                [new] = [new].WithTriviaFrom(old)
            End If

            Return True
        End Function

        ' TODO: Implement the service for VB
        Protected Overrides Function GetValidContainersFromAllLinkedDocumentsAsync(document As Document, container As SyntaxNode, cancellationToken As CancellationToken) As Task(Of ImmutableArray(Of (DocumentId, SyntaxNode)))
            Return Task.FromResult(CType(Nothing, ImmutableArray(Of (DocumentId, SyntaxNode))))
        End Function

        ' This is only reachable when called from a VB service, which is not implemented yet.
        Protected Overrides Function ChangeNamespaceDeclaration(root As CompilationUnitSyntax, declaredNamespaceParts As ImmutableArray(Of String), targetNamespaceParts As ImmutableArray(Of String)) As CompilationUnitSyntax
            Throw ExceptionUtilities.Unreachable
        End Function

        ' This is only reachable when called from a VB service, which is not implemented yet.
        Protected Overrides Function GetMemberDeclarationsInContainer(container As SyntaxNode) As SyntaxList(Of StatementSyntax)
            Throw ExceptionUtilities.Unreachable
        End Function

        ' This is only reachable when called from a VB service, which is not implemented yet.
        Protected Overrides Function TryGetApplicableContainerFromSpanAsync(document As Document, span As TextSpan, cancellationToken As CancellationToken) As Task(Of SyntaxNode)
            Throw ExceptionUtilities.Unreachable
        End Function

        ' This is only reachable when called from a VB service, which is not implemented yet.
        Protected Overrides Function GetDeclaredNamespace(container As SyntaxNode) As String
            Throw ExceptionUtilities.Unreachable
        End Function

        Private Shared Function CreateNamespaceAsQualifiedName(namespaceParts As ImmutableArray(Of String), index As Integer) As NameSyntax
            Dim part = namespaceParts(index).EscapeIdentifier()
            Dim namePiece = SyntaxFactory.IdentifierName(part)

            If index = 0 Then
                Return namePiece
            Else
                Return SyntaxFactory.QualifiedName(CreateNamespaceAsQualifiedName(namespaceParts, index - 1), namePiece)
            End If
        End Function

        Private Shared Function CreateNamespaceAsMemberAccess(namespaceParts As ImmutableArray(Of String), index As Integer) As ExpressionSyntax
            Dim part = namespaceParts(index).EscapeIdentifier()
            Dim namePiece = SyntaxFactory.IdentifierName(part)

            If index = 0 Then
                Return namePiece
            Else
                Return SyntaxFactory.SimpleMemberAccessExpression(CreateNamespaceAsMemberAccess(namespaceParts, index - 1), namePiece)
            End If
        End Function
    End Class
End Namespace
{ "redpajama_set_name": "RedPajamaGithub" }
6,388
What's wrong with the Kenyan General Elections? Why is frontrunner former President Odinga not accepting the Kenyan people's election mandate? Former Attorney of Kenya, Mr. Taban Mohamed, explains.
Posted on August 17, 2022 By Taban Anis

Raila Odinga is the former Prime Minister, and his bone of contention is the tallying of Presidential votes by the Electoral Commission. 1) He is accusing the Commission of not declaring the correct numbers.

LRA Explains: Kenyan General Elections, 2022
Kenya existed as an amalgamation of many tribal states before the British colonial company demarcated the boundaries that became Kenya. After the new boundaries were drawn, the various tribal states were lumped together, without their consent or approval, into a new and complex territory of communities that were friends or foes. Each tribe had its own political setup, norms, and customs. With the exception of the Wanga Kingdom of Western Kenya, which was headed by a king, the majority of tribes had an arrangement in which either a council of elders or chiefs headed the community, while the commander-in-chief of that particular community initiated ceremonies and, among other duties, settled disputes.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,480
Movierulz is a torrent website on which you can watch and download movies for free. Movierulz is famous for leaking pirated versions of movies, which it uploads in Hindi, Tamil, Telugu and many other languages. Even after being banned by the government, Movierulz still continues to leak movies online. Many industry actors and production houses are working on this problem and are spreading awareness among audiences not to watch pirated movies. The Indian movie industry is taking a big hit because of piracy. We have all heard about the movie URI: its producers and actors adopted a unique way of fighting piracy by uploading the movie on torrent sites themselves. There is another site, owned by TamilRockers, which recently became very popular when it leaked the Rajinikanth-starrer Kabali. The makers of the film decided to take strict action against the website, which is very popular among Tamil audiences. However, after TamilRockers leaked 'Kaala' on the day of its release, the anti-piracy cell announced that they had successfully suspended the account of the website. The Twitter handle of the anti-piracy cell tweeted: "Tfpc antipiracy cell has suspended http://tamilrockers.gs and http://tamilrage.eu . #killpiracy #saynotopiracy".
Disclaimer: This content is for reference purposes only and Bachatkaro.com does not support or promote piracy in any manner.
{ "redpajama_set_name": "RedPajamaC4" }
5,638
Pope Francis offers prayers at Israeli separation wall in Bethlehem
Stop rouses controversy as pontiff invites Peres and Abbas to Rome in unprecedented papal intervention in peace process
Peter Beaumont in Manger Square, Bethlehem
Sun 25 May 2014 07.34 EDT
Pope Francis visits Israel's separation barrier in Bethlehem. Photograph: AP
It is an image that will define Pope Francis's first official visit to the Holy Land. Head bowed in prayer, the leader of the Catholic church pressed his palm against the graffiti-covered concrete of Israel's imposing "separation wall", a Palestinian girl holding a flag by his side. It was, as his aides conceded later, a silent statement against a symbol of division and conflict. The powerful gesture was made minutes after an appeal to both sides to end a conflict that the pope said was "increasingly unacceptable". The unscheduled, conspicuous stop halfway through his three-day visit to the Holy Land – made en route to an open-air mass in Manger Square, Bethlehem – confirmed Francis's reputation for determined independence. So too did his invitation to the Palestinian president, Mahmoud Abbas, and Israeli president, Shimon Peres, to join him in Rome to meet and pray together for peace – an unprecedented papal intervention in the stalled peace process.
Francis waves to the crowds at Manger Square. He invited the Israeli and Palestinian presidents to come to the Vatican to pray for peace a month after the collapse of US-backed peace. Photograph: Mohamad Torokman/Reuters
Built by Israel as a so-called security fence to protect its citizens from attack after the second intifada, the barrier weaves through the West Bank, cutting through swaths of Palestinian territory and containing Palestinian residents. It has become an emblem of the Israeli occupation. The pope's scheduled route took him alongside the wall, near Rachel's Tomb outside Bethlehem.
His decision to step out of his white, open-sided popemobile and approach it – just days after the Vatican insisted his visit would not be controversial – was a surprise, not least for members of his own entourage. Surrounded by Palestinian children, Francis's progress towards the concrete barrier was followed carefully by photographers and television cameras, as well as Israeli soldiers revealed in silhouette at the window of a nearby watchtower. "I know all about this," he is reported to have told one Palestinian official. The Vatican's spokesman, Father Federico Lombardi, said afterwards: "I was not informed [of his plans to stop]. It was planned by him the day before … It was a very significant way to demonstrate his participation in suffering … It was a profound spiritual moment in front of a symbol of division." Pope Francis touches the wall that divides Israel from the West Bank, on his way to celebrate a mass in Manger Square. Photograph: AP Despite attempts by the Vatican to insist the visit was "purely religious", it has been loaded with political significance since Francis's arrival in a convoy of Jordanian military helicopters from Amman. While other popes might fly into Tel Aviv and proceed through Israel into Palestinian territory, Francis elected to bypass all Israeli border points. In a carefully worded statement, delivered with Abbas in Bethlehem on Sunday, Francis referred directly to "the state of Palestine" and called on both sides to summon the courage to forge peace. "For decades the Middle East has known the tragic consequences of a protracted conflict which has inflicted many wounds so difficult to heal," the pontiff declared. The situation, he said, had become "increasingly unacceptable". Francis leads an open air mass in Manger Square. 
Photograph: Thomas Coex/AFP/Getty Images "Even in the absence of violence, the climate of instability and a lack of mutual understanding have produced insecurity, the violation of rights, isolation and the flight of entire communities, conflicts, shortages and sufferings of every sort." Francis proceeded from the separation wall to Manger Square in Bethlehem, which was packed with thousands of Palestinian Christians waiting for him to say mass. He entered the square – the reputed site of Christ's birth – to calls of "Viva al-Baba!" – or "Long live the pope!" The service began with a rendition of the Palestinian song Mawatani – My Homeland – that speaks to the Palestinian desire for independence. The singers' voices echoed across a plaza hung with images linking Christ's suffering to that of the Palestinian people. The altar from which Francis delivered his message showed a baby Jesus wrapped in a keffiyeh, the traditional Arabic scarf that is a symbol of Palestinian nationalism. Francis ate lunch with five families in a community centre on the edge of Deheishe refugee camp before flying out of Bethlehem into Tel Aviv's Ben Gurion airport, where he was officially welcomed to Israel by Peres. The helicopter flight meant Francis avoided crossing through the separation wall via a checkpoint as his predecessor, Pope Benedict XVI, had done. At Ben Gurion, Peres welcomed Francis, saying: "On behalf of the Jewish people and in the name of all the people of Israel, I welcome you with the age-old words from the Book of Psalms: 'Welcome in the name of the Lord.' Welcome at the gates of Jerusalem." Here, Francis once again diverted from his prepared script. In Tel Aviv, the pope deplored an attack on a Jewish museum in Brussels on Saturday that left four dead, which he described as "this criminal act of antisemitic hatred". He added: "With a deeply pained heart, I think of those who have lost their lives in the cruel attack that occurred yesterday in Brussels." 
While in Israel the pope will visit the Holocaust memorial at Yad Vashem, lay a wreath at the grave of the founder of Zionism, Theodor Herzl, and meet the ecumenical patriarch of Constantinople, Bartholomew. The pontiff visits Israel's separation barrier in Bethlehem. Photograph: Ariel Schalit/AP Francis will visit the holiest Christian sites in Jerusalem – including the Room of the Last Supper and the Church of the Holy Sepulchre – amid a long-term decline in the population of Palestinian Christians in the Holy Land. A survey conducted by Near East Consulting and released in April found that two-thirds of Palestinian Christians would like to emigrate. Israeli authorities have imposed tight security measures during his visit, deploying an extra 8,000 police officers. Restrictions on movement throughout the city have prompted some Christians to complain they will have little chance of seeing Francis. Some of the security has been prompted by the pope's plan to celebrate mass at the Room of the Last Supper – or "Cenacle" – which has angered some Jewish religious hardliners who venerate the site as the tomb of King David. Twenty-six people were arrested after stones were thrown at police close to the site. 
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,288
Kratom Co. Asks 11th Circ. to Nix 'Naked License' Ruling
Intellectual Property attorney Thomas Brooke was quoted in a Law360 article about a case concerning "naked licenses." In the dispute, supplement maker Blue Mountain Holdings Ltd. argued that Bliss Nutraceticals LLC violated its trademark on Vivazen products, saying it registered the name with the U.S. Patent and Trademark Office in 2017. The suit contends Bliss has been using the Vivazen name and trademark without permission. A Georgia federal trial court found in favor of Bliss, saying there was a "naked license" that rendered the trademark abandoned. Blue Mountain has now asked the U.S. Court of Appeals for the Eleventh Circuit to overturn the ruling. Mr. Brooke, who represents Bliss, explained why he feels confident the appellate court will rule in favor of his client. "We remain confident that the district court correctly applied the law regarding naked licensing in both of its decisions on this matter," he commented. "The earlier opinions spell out the law clearly, and we do not see any new or compelling arguments in the appellants' brief."
READ: Kratom Co. Asks 11th Circ. to Nix 'Naked License' Ruling (Subscription required)
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,059
package org.jboss.modules;

import java.util.List;

/**
 * An abstract local loader implementation.
 *
 * @author <a href="mailto:david.lloyd@redhat.com">David M. Lloyd</a>
 */
public abstract class AbstractLocalLoader implements LocalLoader {

    /**
     * Load a class which is locally defined by this loader. Returns {@code null} by default.
     *
     * @param name the class name
     * @param resolve {@code true} to resolve the class
     *
     * @return the class, or {@code null} if there is no local class with this name
     */
    public Class<?> loadClassLocal(final String name, final boolean resolve) {
        return null;
    }

    /**
     * Load a package which is locally defined by this loader. Returns {@code null} by default.
     *
     * @param name the package name
     *
     * @return the package, or {@code null} if there is no local package with this name
     */
    public Package loadPackageLocal(final String name) {
        return null;
    }

    /**
     * Load a resource which is locally defined by this loader. The given name is a path separated by "{@code /}"
     * characters. Returns {@code null} by default.
     *
     * @param name the resource path
     *
     * @return the resource or resources, or an empty list if there is no local resource with this name
     */
    public List<Resource> loadResourceLocal(final String name) {
        return null;
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
8,732
Gwen Dungy
A Tale of Two Commencements
Posted on May 14, 2012 by gwendungy

Commencements make spring memorable. During the past three weeks I have attended two commencements. I am humbled to have received an honorary degree at both. The commencements were similar to most in the order of the exercises and the joys of the moment. What was most different about them was the students. One commencement I attended was for students from the eight different locations of Berkeley College throughout New York and New Jersey. Berkeley College is fully accredited by the Middle States Association and is family-owned. The other commencement was at Mitchell College, a small private institution in Connecticut on the Thames River. I'm certain that like me, many of you who had the opportunity to sit up close as students received their diplomas were entertained by the choice of footwear among the graduates. At Berkeley College, there were no flip flops or sandals. The women, for the most part, had on high fashion platform, stiletto, or wedge heels, of which the glossy beige pump made famous by Princess Kate Middleton was a favorite. The men had on black dress shoes. Wearing one's best shoes was important because this was a very special occasion for these students, their families, and their friends. I think that the demographics of the graduates at these two commencements tell us something, but I'm not clear about the message. In sharing what I saw, I hope you will help me think about what the demographics might mean. I spoke with a number of students at Mitchell College in order to prepare to give the commencement address. What I learned from these conversations is that the students felt that there was nowhere else they could have received the kind of academic and personal support they needed to complete the requirements for their degree.
They said that regardless of their unique needs, the phenomenal faculty and staff always found a way to meet their needs. I learned that the college has a renowned Learning Resource Center for students with documented learning disabilities, and/or ADHD. It also has a special program called the Thames Academy where students who have completed high school and are not quite ready for college because of a lack of general knowledge or particular learning difficulties or disabilities may experience a residential college where they can receive additional support through workshops and personalized learning plans. Approximately one-third of the students at Mitchell College need additional support. When the graduates who had to work extra hard to overcome challenges to learning walked across the stage to get their diplomas, the joy was palpable. In one instance, a student who had a mobility disability not only walked across the stage with considerable difficulty to receive her own diploma, but came back across the stage to hold the hand and guide a fellow graduate who had a visual disability. These graduates were proud of their accomplishments and the faculty, staff, administrators, families and friends were proud of them and, hopefully, felt some well-earned pride in what they had done to support this diverse group of students. At Berkeley College, the racial diversity was not as apparent, and there were not any visible disabilities among the graduates who walked to receive their diplomas. As I sat on the stage and looked out at approximately 1,200 graduates, what struck me about the racial mix of students was that there seemed to be very few white students. They were definitely in the minority. Most of the faces of the students were brown and black and the last names most often called were traditionally Hispanic or Latino names. 
The commencement ceremony was held in the Meadowlands Sports Complex in the Izod Center, where even the high bleachers were filled with families and friends, most of whom were black and brown. I don't know if my observations are the same as the observations you would make, and I'm sure that the conclusions I draw will be different than yours, but I think the demographics of these two commencements mean something. They may be telling us that those most in need of a good public education are choosing to go to private colleges where the costs will naturally exceed the costs of tax-supported public education. I might be wrong, but it seems that there is something fundamentally unfair about this situation. On the one hand, I wish the black and brown students whose families may not be able to afford a private institution would choose a more affordable public institution. On the other hand, I'm happy that there is a private institution that will meet their needs. I do not think that these students and families would choose a more expensive private institution if their local public institutions met their needs. On the one hand, I am so glad that Mitchell College fills a unique and important niche for students who otherwise might not have the opportunity to achieve up to their potential. On the other hand, I regret that there are so many other students who could benefit from such an environment as that at Mitchell College but they and their families cannot afford to attend. It would seem that a priority of public colleges and universities would be to provide education to all students in their community. The two commencements uplift and encourage me, and they also make me want to do something to make public education more responsive and amenable to the needs of those students who need it most. 
\section{Introduction} \label{introduction} Supernova remnants (SNRs) are the most prominent extended sources of non-thermal emission in the Galaxy and impact Galactic astrophysics in several important ways. They and their supernovae (SNe) progenitors play a key role in stellar evolution by marking the death of massive stars, redistributing the atomic elements produced within them, and stimulating the birth of new stars through their interaction with molecular clouds. SNR shocks significantly impact the dynamics and evolution of the interstellar medium (ISM) \citep{bre+17}, and play a major role in shaping its long lived structure \citep{mckee1977}. As the presumed primary acceleration sites for Galactic cosmic rays (CRs) through diffusive shock acceleration \citep{cap10,gabici+19,kac19}, they seed the ISM with at least $\sim$1/3 of its energy density. A complete census of Galactic SNRs, an elusive goal long prevented by observational selection effects \citep{green1991}, would offer powerful constraints on the Galactic SNe rate. At radio wavelengths, SNRs are spatially extended and typically prolific non-thermal emitters over a broad range of frequencies. Accurate radio spectra for SNRs can be used to trace interactions with the ISM, and to test predictions from diffusive shock acceleration and SNR evolution theories. For largely technical reasons (\citealt{kassim07}, and references therein), low frequency ($\nu < 100$~MHz) radio observations of SNRs have historically been limited by extremely poor angular resolution and sensitivity, typically an order of magnitude or more worse than what is achieved at higher (GHz) frequencies. The scientific impact for SNRs has been significant, limiting a number of unique and important studies that critically rely on precise low frequency measurements. 
These include: {\it (i)\,} Deviations from power law spectra for SNRs predicted by theory, which can only be constrained by measurements encompassing a very large range of frequencies (\citealt{reynolds92}). Because the deviations are subtle, a high degree of accuracy is needed. For most SNRs these have been unavailable until only recently \citep{urosevic2014,arias+19-VRO}. {\it (ii)\,} Thermal absorption in and around SNRs, which is uniquely measured at low frequencies ($\nu < 100$~MHz). It may arise intrinsically, indicating the presence of thermal material interior to the SNR, e.g. unshocked ejecta, or extrinsically from the interaction of the SNR with its immediate surroundings. Due to limited observational capabilities, intrinsic thermal absorption has only been detected in the brightest SNRs, including the Crab nebula \citep{bie97}, Cas A \citep{kas95, del14, arias+18}, and Tycho \citep{arias+19}. External thermal absorption is a tracer of the ionised interface generated as an SNR interacts with its immediate surrounding in Galactic complexes. In addition to probing this interaction, it provides constraints on the relative radial superposition of thermal and non-thermal constituents in complex regions \citep{bro03,bro05-391}. {\it (iii)\,} The distribution of ionised gas in the ISM unrelated to SNRs, which can be measured using SNRs as background beacons \citep{kassim-89-list}. {\it (iv)\,} The Galactic SNe rate, which is poorly known due to incompleteness in Galactic SNR catalogues. Low frequency observations of SNRs are a proven means of addressing the incompleteness \citep{bro04, bro06, hurley19}. Starting in the 1990s \citep{kassim1993}, technical breakthroughs enabled a succession of dramatic improvements in low radio frequency observational capabilities \citep{vanhaarlem2013}. The scientific impacts have so far mainly been realised for extragalactic studies (e.g. 
\citealt{vanweeren2016,shimwell2017}), but the impact on Galactic SNR studies is slowly starting to be felt (e.g. \citealt{bro06, supan2018, arias+18}). The 74 MHz Very Large Array Low-frequency Sky Survey Redux (VLSSr) was the first all-sky survey to take advantage of the improved low frequency capability \citep{lan14}. As such the VLSSr established an important calibration grid for a suite of emerging new instruments. While technically limited compared to a rapidly advancing state of the art, it also remains an important resource for individual source studies. In particular, its potential for SNR studies has remained largely untapped. In this paper we present a sample of 14 bright, resolved SNRs selected from the VLSSr, which we use to address a number of the scientific issues outlined above, and to stimulate future studies as larger samples of weaker sources become accessible with modern instruments, e.g. the LBA Sky Survey (LoLSS: \citealt{lolss+21}) with LOFAR. This paper is organised as follows. In Sect.~\ref{sample} we describe our selection of the 14 bright Galactic SNRs from the VLSSr images analysed in this paper. We measure their integrated 74 MHz flux densities. When available, we also measure their low frequency flux densities from the Galactic and Extragalactic All-sky Murchison Widefield Array Survey (GLEAM, \citealt{wayth15,hurley19}). The method used to construct spectral index maps is explained in Sect.~\ref{local}. In Sect.~\ref{newly} we assimilate the new low radio frequency fluxes into a larger framework of measurements from the literature, and use it to construct improved integrated continuum spectra. Careful attention is given to anchoring these spectra on an accurate, absolute flux density scale which is valid over most of the frequency range considered for each source \citep{per17}. 
Together with the inclusion of new low frequency measurements the derived spectra are a marked improvement over previous studies, as explained in Sect.~\ref{literature}. In Sect.~\ref{individual} we discuss the spatially resolved morphology and spectral index behaviour for each individual SNR. In Sect.~\ref{ISM} we focus on those sources whose spectral analysis presents deviations from a canonical power law at the lowest frequencies, attributable to thermal absorption. We analyse the properties of the surrounding medium in order to understand the physical context of the absorption, i.e. whether it is intrinsic, extrinsic and proximate to the SNR, or attributable to more distant ISM from along the line of sight. We present our summary and conclusions in Sect.~\ref{summary}. \section{General Properties of the VLSSr SNR sample} \label{sample} Based on their radio morphologies, our sample comprises 10 shell-type, 2 composite, and 2 plerion SNRs, all of which are previously known \citep{green19}.\footnote{\url{http://www.mrao.cam.ac.uk/surveys/snrs/}.} Three of the SNRs in the list are also part of the mixed-morphology (MM) class showing centrally condensed thermal X-ray emission surrounded by a synchrotron radio shell. Twelve sources analysed in this work belong to the first Galactic quadrant from $l\sim$+4$^{\circ}$ to $l\sim$+44$^{\circ}$, while the remaining two objects are located in the second quadrant, in the 120$^{\circ}$ $\leq$ $l$ $\leq$131$^{\circ}$ region. All of them are located within Galactic latitudes $-$1$^{\circ}$ $\leq$ $b$ $\leq$7$^{\circ}$. \subsection{VLSSr Data} In Fig~\ref{74-images}, we present images for each of the 14 SNRs from the VLSSr\footnote{The latest release of the VLSSr images is available at the website \url{http://www.cv.nrao.edu/vlss/VLSSpostage.shtml}.}. The observations are centred at a frequency of $\nu=74$ MHz with an angular resolution of $\theta\sim75^{\prime\prime}$. 
The images are sensitive to spatial structures up to $\sim$36$^{\prime}$ in size, larger than the full extent of any SNR in our sample. In all cases the emission is displayed above a 4-$\sigma$ noise level measured in the corresponding VLSSr fields (the mean rms noise level of the maps is $\sim$0.16~{\textrm{Jy beam}$^{-1}$}). For the majority of the SNRs in our sample, the VLSSr maps represent the most complete available image of the source, both resolving the structure and recovering the diffuse emission in the low frequency regime below 100 MHz \citep[see, for instance,][]{slee77,kassim-88}. The morphological properties of each SNR at 74~MHz are discussed below in Sect.~\ref{individual}. Throughout this paper each SNR is referred to by the name most commonly used in the literature. The correspondence with the name derived from the Galactic coordinates of the centre of the source is indicated in Table~\ref{74properties}. As a qualitative measure of the consistency of our source size measurements at 74~MHz, we compared them to their counterparts at 1.4~GHz taken, depending on the sky position, from the {\sl Multi-Array Galactic Plane Imaging Survey} (MAGPIS, \citealt{helfand-06}) or the NRAO VLA Sky Survey (NVSS, \citealt{con98}). We constrained our measurements to regions exceeding at least 3 times the respective rms noise levels in the images at 74~MHz and 1.4~GHz. In all cases the 1.4~GHz images were convolved to the VLSSr resolution (75$^{\prime\prime}$). In general the source sizes are expected to match with a few exceptions. The source could appear larger at 74~MHz than at 1.4~GHz if the higher frequency observations miss faint, extended structure \citep[see for example,][]{lan04}. Alternately, foreground thermal absorption along the line of sight can unevenly attenuate the synchrotron emission across the SNR and cause the source to appear smaller at 74~MHz \citep{lac01}. 
Finally, residual ionospheric calibration errors can distort the apparent source size at 74~MHz \citep[a further discussion of this non-physical effect is presented in][]{cohen07}. Despite these unknowns, our comparison indicated a remarkably good agreement in source size for 11 of our sources, with 74~MHz/1.4~GHz-size ratios $\sim1.04$. Two sources, W41 and 3C~391, have ratios of $\sim0.26$ and $\sim0.87$, respectively, indicating they are substantially smaller at 74 MHz than at 1.4~GHz. W41 sits inside the giant molecular complex G23.3$-$0.4, and is spatially coincident with several HII regions (\citealt{mes14}, \citealt{hogge19}). As discussed in Sect.~\ref{W41}, we attribute the reduced apparent size at 74 MHz to absorption by HII regions in the complex blocking the SNR emission. For SNR~3C~391 the result is consistent with data presented in \citet{bro05-391}, who attributed it to thermal absorption tracing the ionised interface along the SNR/molecular cloud interface. SNR~3C~396, on the other hand, has a ratio of $\sim1.4$, indicating it is significantly more extended at 74~MHz than at 1.4~GHz. We attribute this to the higher frequency measurements missing the very faint emission at the northwest edge. With these three exceptions accounted for, we feel confident that our VLSSr measurements provide a robust sampling of the full SNR source sizes at 74~MHz. Basic radio source parameters were measured from the VLSSr images, such as the angular dimensions, the total and peak flux density, and the surface brightness of each SNR in the sample. All of them are reported in Table~\ref{74properties}. For each SNR the integrated flux density was derived by using a polygonal region to fit the outer radio boundary of the remnant 4~$\sigma$ above the intrinsic noise level measured in the corresponding VLSSr field. 
When necessary, depending on the fluctuations of the background emission around the source, the flux density estimate was corrected for an average background level. This contribution was determined by scanning in both right ascension and declination through several positions surrounding the SNR. The main errors in the listed flux densities arise from intensity-proportional flux-density uncertainties in both the absolute flux-density scale ($\sim$15\%) and primary beam corrections, as well as uncertainties in the background estimations and bias corrections. All of these contributions were combined in quadrature to compute the final error in the integrated flux density measurements. Surface brightness estimates at 74~MHz were calculated from the relation $\Sigma_{74} = 1.505 \times 10^{-19} \, S_{74}/A_{74}$~W~m$^{-2}$~Hz$^{-1}$~sr$^{-1}$, where $S_{74}$ is the integrated flux density (in Jy) measured in the VLSSr map and $A_{74}$ represents the area (in square arc minutes) enclosed by the polygon region used to integrate the 74~MHz emission. The mean percentage error in our surface brightness estimates is $\sim$30\% and is dominated by uncertainties in defining the outer boundary of the radio emission. The flux density measurements from the VLSSr maps analysed here add to the scarce list of reliable low-frequency estimates available to date. \begin{figure*}[ht!] \centering \includegraphics[width=0.85\textwidth]{Figure1-aa41635-21.jpg} \caption{The VLSSr 74~MHz images for the 14~SNRs in our sample, with a 75$^{\prime\prime}$ angular resolution. The colour scale, given on top, is linear scaling from 4 times the local rms noise level (4~$\sigma$) to the peak intensity value of the subimage, $S_{\mathrm{peak}}$ (see values quoted in Table~\ref{74properties}). The contour levels of the 74~MHz emission start at 4~$\sigma$ increasing in steps of 25, 50, and 75\% of the scale range. Exceptions are Tycho and 3C~397 for which an 8-$\sigma$ lower limit was chosen. 
A cyan horizontal line of 2$^{\prime}$ length is included in each panel to facilitate the comparison between the SNRs' sizes.} \label{74-images} \end{figure*} \addtocounter{figure}{-1} \begin{figure*}[ht!] \centering \includegraphics[width=0.85\textwidth]{Figure2-aa41635-21.jpg} \caption{{\itshape Continued}.} \label{74-images} \end{figure*} \begin{table*} \centering \small \caption{Continuum properties for all the SNRs in our sample derived from the 74~MHz VLSSr images.} \label{74properties} \begin{tabular}{l c c c c c c c}\hline\hline Galactic & Alternative & Morphological & Size & VLSSr Flux & $\Sigma_{74}$ & VLSSr rms & $S_{\mathrm{peak}}$\\\cline{4-4}\cline{6-6} name & name & Class & $\theta_\mathrm{max}$[$^\prime$] $\times$ $\theta_\mathrm{min}$[$^\prime$] & Density[Jy] & [W~m$^{-2}$~Hz$^{-1}$~sr$^{-1}$] & [{Jy~beam$^{-1}$}] & [{Jy~beam$^{-1}$}]\\\hline \object{G4.5$+$6.8} & \object{Kepler} & Shell & $5 \times 5$ & $111\pm17$ & $6.7\times10^{-19}$ & 0.23 & 24.4 \\ \object{G18.8$+$0.3} & \object{Kes~67} & Shell & $17 \times 11$ & $76.2\pm13.8$ & $6.1\times10^{-20}$ & 0.14 & 1.9 \\ \object{G21.5$-$0.9} & -- & Plerion & $3.2 \times 3.2$ & $6.4\pm1.1$ & $9.4\times10^{-20}$ & 0.15 & 4.1 \\ \object{G21.8$-$0.6} & \object{Kes~69} & Shell & $22 \times 10$ & $169\pm31$ & $1.2\times10^{-19}$ & 0.18 & 4.5 \\ \object{G23.3$-$0.3} & \object{W41} & Shell & $18 \times 7$ & $88\pm17$ & $1.1\times10^{-19}$ & 0.14 & 2.3 \\ \object{G27.4$+$0.0} & \object{Kes~73} & Shell & $6 \times 5$ & $13.8\pm2.5$ & $6.9\times10^{-20}$ & 0.14 & 2.8 \\ \object{G28.6$-$0.1} & -- & Shell & $9.5 \times 7.0$ & $26.9\pm4.7$ & $6.4\times10^{-20}$ & 0.15 & 2.4 \\ \object{G29.7$-$0.3} & \object{Kes~75} & Composite & $4.5 \times 3.5$ & $48.5\pm7.9$ & $4.6\times10^{-19}$ & 0.22 & 14.0 \\ \object{G31.9$+$0.0} & \object{3C~391} & MM & $6.5 \times 5.5$ & $31.5\pm5.4$ & $1.2\times10^{-19}$ & 0.20 & 4.9 \\ \object{G39.2$-$0.3} & \object{3C~396} & Composite & $7.5 \times 7.5$ & $44.8\pm8.8$ & 
$1.2\times10^{-19}$ & 0.16 & 4.0 \\ \object{G41.1$-$0.3} & \object{3C~397} & MM & $5.5 \times4.0$ & $68.9\pm10.6$ & $4.7\times10^{-19}$ & 0.16 & 12.7 \\ \object{G43.3$-$0.2} & \object{W49B} & MM & $5.5 \times5.5$ & $64.0\pm10.1$ & $3.2\times10^{-19}$ & 0.13 & 11.6 \\ \object{G120.1$+$1.4} & \object{Tycho} & Shell & $9 \times 9$ & $255.0\pm38.8$ & $4.7\times10^{-19}$ & 0.15 & 14.5 \\ \object{G130.7$+$3.1} & \object{3C~58} & Plerion & $ 8.5\times 4.5$ & $34.6\pm5.3$ & $1.4\times10^{-19}$ & 0.08 & 3.7 \\ \hline \end{tabular} \tablefoot{Columns 1 and 2 list the Galactic-coordinate names and the alias of all the SNRs included in our sample set, respectively. The radio morphology class of the source is noted in Column 3. If the SNR presents a mixed X-ray and radio morphology, it is indicated by the abbreviation MM. The size of each source, measured at 74~MHz from the VLSSr images, is reported in column 4. The total flux density and the surface brightness at 74~MHz are summarised in columns 5 and 6, respectively. Columns 7 and 8 report the rms noise level (1$\sigma$) and the peak of the 74 MHz emission calculated in the individual VLSSr maps.} \end{table*} \subsection{GLEAM} In order to more fully probe the low frequency spectra of the SNRs, we also looked at recently published images from GLEAM. This survey currently covers parts of the Galactic Plane at frequencies of 88, 118, 155, and 200 MHz, at resolutions from $4^{\prime} - 2^{\prime}$ \citep{hurley19}. For 12 of the 14 SNRs we were able to obtain flux density measurements from GLEAM published images, and we report these measurements in Table~\ref{GLEAM}. At the time we made the analysis, there were no images available for the remaining two remnants. In some cases the images were of lower quality, and we chose not to use them for this study. We measured the fluxes and errors using the same method described above for the VLSSr images. 
Because the images are at a lower resolution which does not resolve many of the remnants, we present only the integrated flux measurements in this work. \begin{table*} \centering \small \caption{Integrated flux densities measured from GLEAM Survey images of 12 SNRs in our study.} \label{GLEAM} \begin{tabular}{l c c c c}\hline\hline \multirow{2}{*}{Source (alias)} & \multicolumn{4}{c}{Flux density~[Jy]} \\\cline{2-2}\cline{3-5} & 88~MHz & 118~MHz & 155~MHz & 200~MHz \\\hline G4.5$+$6.8~~(Kepler) & $100 \pm 9$ & $77 \pm 7$ & $62 \pm 9$ & ... \\ G18.8$+$0.3~~(Kes~67) & $81 \pm 17$ & $80 \pm 14$ & $71 \pm 11$ & $62 \pm 9$ \\ G21.5$-$0.9 & $7.2 \pm 2.7$ & $6.6 \pm 2.0$ & $6.3 \pm 1.3$ & $5.9 \pm 1.1$ \\ G21.8$-$0.6~~(Kes~69) & $194 \pm 21$ & $166 \pm 20$ & $143 \pm 16$ & $116 \pm 14$ \\ G23.3$-$0.3~~(W41) & $154 \pm 30$ & $172 \pm 40$ & $149 \pm 37$ & $139 \pm 28$ \\ G27.4$+$0.0~~(Kes~73) & $17.0 \pm 3.5$ & $17.6 \pm 2.9$ & $14.6 \pm 3.0$ & ... \\ G28.6$-$0.1 & $29.5 \pm 5.6$ & $25.9 \pm 3.9$ & $20.2 \pm 3.1$ & $15.5 \pm 1.9$ \\ G29.7$-$0.3~~(Kes~75) & ... & $42.3 \pm 5.8$ & $32.4 \pm 3.4$ & $25.5 \pm 2.7$ \\ G31.9$+$0.0~~(3C~391) & $43.7 \pm 8.0$ & $48.1 \pm 4.9$ & $44.4 \pm 4.1$ & ... \\ G39.2$-$0.3~~(3C~396) & ... & $39.9 \pm 4.9$ & $33.9 \pm 3.4$ & $28.5 \pm 2.8$ \\ G41.1$-$0.3~~(3C~397) & $57 \pm 7$ & ... & ... & ... \\ G43.3$-$0.2~~(W49B) & $63.8 \pm 8.0$ & $69.7 \pm 5.5$ & ... & ... \\\hline \end{tabular} \tablefoot{There are no GLEAM Survey data available for SNRs G120.1$+$1.4~~(Tycho) and G130.7$+$3.1~~(3C~58). We used '...' 
to denote cases for which the flux density values measured from GLEAM images do not fit our selection criteria to construct radio continuum spectra, see text in Sect.~\ref{newly} for details.} \end{table*} \section{Local SNR variations in the radio spectral index} \label{local} Local changes in the radio spectral index across each source were computed for our targets by combining the VLSSr maps with the best available image of the source from surveys at 1.4~GHz (e.g., MAGPIS, NVSS, VLA Galactic Plane Survey (VGPS, \citealt{stil06}), and the NRAO VLA Archive Survey (NVAS)\footnote{\url{http://www.archive.nrao.edu/nvas}}). Since we do not have the {\it uv}-data for the higher frequency images, we created the spectral index maps in a standard way from the direct ratio of the images at both frequencies. While doing this, we aligned, interpolated, and smoothed the 1.4~GHz maps to match the VLSSr ones. Additionally, only regions with flux densities greater than the 4-$\sigma$ significance level of their respective sensitivities were used in the process. The resulting spectral index images are displayed in Fig.~\ref{alpha-maps}. Errors on these maps are of order $\sim$20-25\% for the local spectral index measurements. We are aware that a quantitative interpretation of spectral variations with position is not possible from the resulting maps, but they are very useful to reveal qualitative trends. The analysis of the radio spectral index images is presented in Sect.~\ref{individual}. We note that for SNR 3C~397 (G41.1$-$0.3) the public radio continuum images at radio frequencies higher than 74~MHz do not recover the expected flux density accurately, and thus we decided not to create a spectral index map for this source. \begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{Figure3-aa41635-21.jpg} \caption{Spectral index maps between 74~MHz and 1.4~GHz (resolution 75$^{\prime\prime}$) for the SNRs in our sample (except the source 3C~397). 
The maps were constructed by combining the VLSSr image with those available from radio continuum surveys (see text for details). Pixels with brightness below 4$\sigma$ at 74~MHz or 1.4~GHz were blanked. The colour scales displayed at the top of the maps indicate the spectral indices measured over each SNR. The radio continuum emission from VLSSr 74~MHz at a resolution of 75$^{\prime\prime}$ is represented by contours. For reference, we used the same contour levels as in Fig.~\ref{74-images}. } \label{alpha-maps} \end{figure*} \addtocounter{figure}{-1} \begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{Figure4-aa41635-21.jpg} \caption{{\itshape Continued}.} \label{alpha-maps} \end{figure*} \section{Integrated SNR radio continuum spectra} \label{fit} \subsection{Newly derived integrated spectra} \label{newly} The global spectral properties of SNRs have been determined in a number of previous studies. However, the majority of them were based on the combination of data without using a common radio flux density scale, thereby rendering quantitative comparison of the measurements unreliable. In a few cases, the absolute scale of \citet{baars77} was used, although it was applied even at frequencies lower than 300~MHz or higher than 15~GHz for which the scale is incomplete. In addition, published spectra include observations with low angular resolution and surface brightness sensitivity, which in complex regions of the Galaxy can easily miss non-thermal components in the emission by confusion with Galactic background or thermal sources. The inclusion of widely scattered flux density measurements at similar frequencies, together with differences in data quality that were not correctly weighted in the analysis, also affects the reliability of previously published spectra. 
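As a side note, the per-pixel spectral index computed in Sect.~\ref{local} from the direct ratio of the 74~MHz and 1.4~GHz images reduces to a two-point logarithmic flux ratio. The sketch below is illustrative only (the pixel values are hypothetical, not measurements from this work), assuming the maps have already been aligned, convolved to a common beam, and blanked below the 4-$\sigma$ threshold:

```python
import numpy as np

def spectral_index_map(s_74, s_1400, nu1=74e6, nu2=1.4e9):
    """Two-point spectral index alpha (S_nu ~ nu^alpha) per pixel.

    s_74, s_1400 : arrays of flux density per beam at 74 MHz and 1.4 GHz,
    already aligned and convolved to a matching beam; blanked pixels
    should be set to NaN beforehand so they propagate through.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log(s_1400 / s_74) / np.log(nu2 / nu1)

# Hypothetical 74 MHz and 1.4 GHz brightnesses (Jy/beam) for one pixel:
s74, s1400 = np.array([4.0]), np.array([0.9])
alpha = spectral_index_map(s74, s1400)
print(alpha)  # ~ -0.51, a typical non-thermal SNR value
```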
Flux densities included in our 14 integrated spectra were selected based on the following criteria: i) we only included measurements with error estimates less than 30\%, ii) measurements with deviations well beyond the best-fit model and inconsistent with the reported errors were excluded, iii) at frequencies above 1~GHz, we excluded interferometer measurements with insufficient short spacings to sample the extended source flux, and iv) we excluded single-dish measurements with poor resolution that may overestimate flux densities due to high confusion levels. Based on these criteria we compiled 454 flux density measurements across the frequency range of 15~MHz - 217~GHz. We combine these with the newly measured VLSSr and GLEAM flux densities to create updated integrated radio spectra for the SNRs in our sample. The VLSSr and GLEAM points help fill in the low frequency portion of the spectra most poorly constrained by past measurements from Culgoora (80~MHz, \citealt{slee77}), Clark Lake TPT (30.9~MHz, \citealt{kassim-88}), and the Pushchino telescopes (83~MHz, \citealt{kovalenko-94}). When possible, the measurements were adjusted to the absolute flux density scale provided by \citet{per17}. This scale, established between 50~MHz and 50~GHz, is accurate to 3\% and up to 5\% for measurements at the extreme of the frequency range. For $\sim 20\%$ of the literature fluxes, correction was not possible because of insufficient information on primary flux calibration in the original reference, or because the frequency was outside the range for which the flux scale is defined. For these sources we included the values as reported without adjustment. The final set of flux densities used to construct the integrated spectra are presented in Appendix~A. In Fig.~\ref{74-spectra} we present the radio continuum spectra for all SNRs in our study. 
VLSSr and GLEAM flux densities are indicated by filled red and yellow symbols, respectively, and all points are weighted by their estimated uncertainties. In each spectrum the 1- and 2-$\sigma$ error in the best-fit values is represented by gray-shaded regions. We found that 5 of the SNRs could be fit by power law functions defined by the relation $S_{\nu} \propto \nu^{\alpha}$, in which $S_{\nu}$ denotes the flux density at the frequency $\nu$. This includes the two pulsar wind nebulae (PWNe) in our sample, whose spectra were fit by broken power laws. The remaining cases show evidence of absorption below 100~MHz \citep{kas89s}, and we fit the spectra with a power law and an exponential turnover, according to Eq.~\ref{turnover}, \begin{equation} S_{\nu}=S_{\nu_{0}}\, \left(\frac{\nu}{\nu_{0}}\right)^{\alpha} \, \mathrm{exp}\left[-\tau_{\nu_{0}}\,\left(\frac{\nu}{\nu_{0}}\right)^{-2.1}\right]. \label{turnover} \end{equation} \noindent where $\nu_{0}$ is the reference frequency, set to 74~MHz, at which the integrated flux density $S_{\nu_{0}}$ and optical depth $\tau_{\nu_{0}}$ are measured. This is a simplistic fitting model, and we note that there are theoretical grounds to expect intrinsically curved SNR spectra, both spatially resolved and integrated. Concave-up curvature has been linked to non-linear acceleration processes in young SNRs (e.g. Tycho, Kepler; \citealt{reynolds92}) (see also Sect.~\ref{individual}). Conversely, concave-down spectra have been linked to bends in the energy distribution of the radiating electrons \citep{anderson93}. Also, synchrotron losses, thermal bremsstrahlung, and spinning dust in evolved SNRs (e.g. 3C~391 and 3C~396) interacting with high density environments have been proposed to flatten spectra at higher frequencies ($\sim$10-100~GHz) \citep{urosevic2014}. Since we find no compelling evidence for these signatures in our spectra, we proceeded with the power law plus thermal absorption model. 
There is still tremendous room for significantly improved measurements, at both high and low frequencies, that may well eventually justify more complex modelling for many of these sources. Table~\ref{table2} provides a summary of the best-fit spectral indices and free-free optical depths (when appropriate) from the weighted fit to the integrated continuum spectra shown in Fig.~\ref{74-spectra}. We also include any values for these parameters previously published in the literature. The new results are used in Sect.~\ref{ISM} to constrain the physical properties (electron measure, $\mathrm{EM}$, and electron density, $n_{\mathrm{e}}$) of the foreground ionised gas responsible for the measured spectral turnovers. In Fig.~\ref{histogram}a we have plotted the distribution of the integrated radio spectral indices from our power-law fits. For comparison, the age and morphology of each source is also indicated in Fig.~\ref{histogram}b. For our purpose, we adopted the usual classification of ``young'' to refer to a SNR in either the free-expansion or the early Sedov phase of its evolution ($\lesssim$3000~yr). SNRs in subsequent evolutionary stages are considered evolved, an admitted simplification as multiple evolutionary phases may occur simultaneously in different parts of a SNR. We found that 8 sources have relatively steep spectra ($|\alpha|>0.5$), 5 of which are young. In addition, there are 4 SNRs with flatter radio spectra ($|\alpha|<0.5$). The remaining two sources are the pulsar wind nebulae G21.5$-$0.9 and 3C~58, for which the continual injection of energetic electrons produces even flatter integrated spectra. These results are further discussed in Sect.~\ref{individual}. The spectral indices that we have measured in this work for young and evolved SNRs disagree with test particle predictions from diffusive shock acceleration theory \citep{reynolds12-alpha,urosevic2014}. 
In the linear regime, flatter spectral indices (the flattest possible value is $\alpha=-0.5$) are predicted for parallel shocks in the most energetic young SNRs, while steeper values are expected for older objects with much lower shock velocities. Explanations for the gradual flattening of the radio spectra with aging of SNRs, or alternatively, the steeper spectra of young objects, include oblique shocks \citep{bell11}, the Alfv\'enic drift effect in the downstream and/or upstream regions of the forward shock (e.g. \citealt{jiang13,slane2014}), shock acceleration with particle feedback \citep{pavlovic17}, and turbulent magnetic field amplification \citep{bell19}. Contamination by intrinsic thermal bremsstrahlung radiation and high compression ratios for radiative shocks are also expected to contribute to the flat spectra observed in intermediate-age and evolved SNRs \citep{onic13}. Individual discussions on the spectral properties of the sources in our sample are presented in Sect.~\ref{individual}. \begin{figure*}[h!] \centering \includegraphics[width=0.75\textwidth]{Figure5-aa41635-21_new_low.jpg} \caption{Revised integrated radio continuum spectra for the 14 SNRs in the VLSSr sample. In each spectrum the red filled circle indicates the new 74~MHz VLSSr flux density measurement, and the yellow ones the new GLEAM measurements. The remaining values are taken from the literature and plotted in blue or green depending on whether a single power law or a power law with a low-frequency turnover model was used to fit the data (see text for details). The solid line represents the best-fitting curve to the weighted data. Measurements were adjusted to the absolute flux density scale of \citet{per17}.
Gray-shaded bands represent the 1- and 2-$\sigma$ statistical uncertainty in the best-fit values of the spectral index $\alpha$ and the free-free optical depth $\tau_{74}$ (indicated in the lower portion of each panel, see also Table~\ref{table2}).} \label{74-spectra} \end{figure*} \addtocounter{figure}{-1} \begin{figure*}[h!] \centering \includegraphics[width=0.75\textwidth]{Figure6-aa41635-21.jpg} \caption{{\itshape Continued}.} \label{74-spectra} \end{figure*} \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{Figure7-aa41635-21.jpg} \caption{\it Panel a: \rm Distribution of the radio continuum spectral indices as inferred from the weighted fitting to the entire spectrum of each SNR in our study (Fig.~\ref{74-spectra}). \it Panel b: \rm Distribution of the computed spectral index values according to the age and morphology of the source. The spectral indices for the younger ($\lesssim$3000~yr) SNRs are steeper than $-$0.5, the canonical value in test-particle diffusive shock acceleration theory.} \label{histogram} \end{figure} \begin{table*} \centering \small \caption{Radio continuum spectral index, free-free optical depth at 74~MHz, and ISM properties -- emission measures ($\mathrm{EM}$) and electron densities ($n_\mathrm{e}$) -- computed from our fit to the spectra presented in Fig.~\ref{74-spectra}.} \label{table2} \begin{tabular}{lcccccccc} \multicolumn{8}{c}{{\bfseries Absorption from extended envelopes of normal HII regions (EHEs)}}\\\hline\hline \multirow{3}{*}{Source (alias)} & \multicolumn{4}{c}{New results} && \multicolumn{3}{c}{Literature} \\\cline{2-5}\cline{7-9} & \multirow{2}{*}{$\alpha$} & \multirow{2}{*}{$\tau_{74}$} & $\mathrm{EM}$\tablefootmark{~a} & $n_\mathrm{e}$\tablefootmark{~a} & & \multirow{2}{*}{$\alpha$\tablefootmark{~b}} & \multirow{2}{*}{$\tau_{74}$\tablefootmark{~c}} & \multirow{2}{*}{Refs.} \\ & & & [cm$^{-6}$~pc] & [cm$^{-3}$] & & & \\\hline\hline G18.8 $+$ 0.3~~(Kes~67) & $-0.373 \pm 0.010$ & $0.043 \pm 0.030$
& $150 \pm 130$ & $1.2 \pm 0.6$ && $-0.46$ - $-0.42$ & $0.06 \pm 0.03$ & 1, 2; 1 \\ G21.8 $-$ 0.6~~(Kes~69) & $-0.505 \pm 0.018$ & $0.088 \pm 0.026$ & $440 \pm 140$ & $2.1 \pm 0.3$ && $-0.56$ - $-0.50$ & $0.11 \pm 0.04$ & 1, 2; 1 \\ G29.7 $-$ 0.3~~(Kes~75) & $-0.659 \pm 0.014$ & $0.176 \pm 0.035$ & $870 \pm 190$ & $2.9 \pm 0.3$ && $-0.73$ - $-0.59$ & $0.11 \pm 0.04$ & 1, 3; 1 \\ G41.1 $-$ 0.3~~(3C~397) & $-0.356 \pm 0.013$ & $0.141 \pm 0.013$ & $720 \pm 70$ & $2.7 \pm 0.2$ && $-0.59$ - $-0.46$ & $0.13 \pm 0.01$ & 1, 2, 4; 1 \\ & & & & & & & & \\ \multicolumn{8}{c}{\bfseries Absorption: Special cases\tablefootmark{~d}} \\\hline\hline \multirow{3}{*}{Source (alias)} & \multicolumn{4}{c}{New results} && \multicolumn{3}{c}{Literature} \\\cline{2-5}\cline{7-9} & \multirow{2}{*}{$\alpha$} & \multirow{2}{*}{$\tau_{74}$} & $\mathrm{EM}$ & $n_\mathrm{e}$ & & \multirow{2}{*}{$\alpha$\tablefootmark{~b}} & \multirow{2}{*}{$\tau_{74}$\tablefootmark{~c}} & \multirow{2}{*}{Refs.} \\ & & & [cm$^{-6}$~pc] & [cm$^{-3}$] & & & \\\hline\hline G23.3 $-$ 0.3~~(W41) & $-0.628 \pm 0.040$ & $1.214 \pm 0.155$ & 10 $\times$ 10$^{3}$ & 60 && $-0.63$ - $-0.48$ & $1.04 \pm 0.11$ & 1, 5; 1 \\ G27.4 $+$ 0.0~~(Kes~73) & $-0.690 \pm 0.035$ & $0.878 \pm 0.220$ & 8 - 15 $\times$ 10$^{3}$ & 70 - 100 && $-0.71 \pm 0.11$ & $0.72 \pm 0.32$ & 1; 1 \\ G31.9 $+$ 0.0~~(3C~391) & $-0.521 \pm 0.004$ & $1.210 \pm 0.051$ & 600 - 2500\tablefootmark{~e} & 10 - 40\tablefootmark{~e} & & $-0.54$ - $-0.49$ & $1.1$ & 1, 2, 6; 6 \\ G39.2 $-$ 0.3~~(3C~396) & $-0.351 \pm 0.010$ & $0.063 \pm 0.020$ & --- & --- && $-0.53$ - $-0.34$ & $0.12 \pm 0.04$ & 1, 2, 4, 7; 1 \\ G43.3 $-$ 0.2~~(W49B) & $-0.461 \pm 0.009$ & $0.580 \pm 0.043$ & 6 - 19 $\times$ 10$^{3}$ & $500$\tablefootmark{~f} && $\simeq${$-0.47$} & $0.14 \pm 0.05$ & 1, 2; 8 \\ & & & & & & \\ \end{tabular} \begin{tabular}{lcccc} \multicolumn{5}{c}{\bfseries No absorption} \\\hline\hline \multirow{2}{*}{Source (alias)} & New results && \multicolumn{2}{c}{Literature} 
\\\cline{2-2}\cline{4-5} & $\alpha$ && $\alpha$\tablefootmark{~b} & Refs. \\\hline\hline G4.5 $+$ 6.8 ~~(Kepler) & $-0.655\pm0.010$ && $-0.65$ - $-0.62$ & 1, 9 \\ G21.5 $-$ 0.9 & $\left\{\begin{array}{c}\hspace{-1.5mm}-0.044\pm0.013 \hspace{0.3cm} \nu \lesssim 38~\mathrm{GHz} \\ \hspace{-1.8mm}-0.546\pm0.273 \hspace{0.3cm} \nu > 38~\mathrm{GHz} \end{array} \right.$ && $~~\begin{array}{c}\hspace{-1.5mm} -0.09~\mbox{-}~{+}0.08 \\ \hspace{-1.8mm}-0.57~\hbox{-}~{-}0.37\end{array}$ & $~~\begin{array}{c}\hspace{-1.5mm} 1, 2, 10, 11, 12 \\ \hspace{-1.8mm} 4, 12 \end{array}$ \\ G28.6 $-$ 0.1 & $-0.690 \pm 0.057$ && $-0.6$ - $-0.5$ & 13 \\ G120.1 $+$ 1.4~~(Tycho) & $-0.624 \pm 0.004$ && $-0.65$ - $-0.58$ & 1, 2, 9, 14, 15 \\ G130.7 $+$ 3.1~~(3C~58) & $\left\{\begin{array}{c}\hspace{-1.5mm}-0.076\pm0.008 \hspace{0.3cm} \nu \lesssim 12~\mathrm{GHz} \\ \hspace{-1.5mm}-0.383\pm 0.022 \hspace{0.3cm} \nu > 12~\mathrm{GHz}\end{array} \right.$ && $~~\begin{array}{c}\hspace{-1.5mm} -0.10~\hbox{-}~{-}0.04 \\ \hspace{-1.8mm} -0.58~\hbox{-}~{-}0.45 \end{array}$ & $~~\begin{array}{c}\hspace{-1.5mm} 1, 2, 11, 14, 16, 17 \\ \hspace{-1.8mm} 17, 18 \end{array}$ \\\hline \end{tabular} \tablefoot{ \tablefoottext{a}{For EHEs, $\mathrm{EM}$ and $n_{\mathrm{e}}$ calculations were done using a typical electron temperature of 5000 K and an average path length of 100 pc \citep{kas89s}.} \\ \tablefoottext{b}{Numbers without errors indicate that uncertainties are not given in the references}.\\ \tablefoottext{c}{Free-free continuum optical depth reported in the literature at a frequency $\nu$ were extrapolated to 74~MHz according to $\tau_{74}=\tau_{\nu}\,[74~\mathrm{MHz}/\nu~(\mathrm{MHz})]^{-2.1}$.}\\ \tablefoottext{d}{The properties of the absorbing medium are analysed in Sect.~\ref{specials}.}\\ \tablefoottext{e}{The listed $\mathrm{EM}$ and $n_{e}$ values were extracted from the analysis in \citet{bro05-391}.}\\ \tablefoottext{f}{The estimate $n_{e}$=500~cm$^{-3}$ comes from \citet{zhu+14} for 
the postshock near-IR line emitting gas.}\\ {\bfseries References.} (1) \citet{kovalenko-94}, (2) \citet{sun11}, (3) \citet{becker-helfand-84}, (4) \citet{anderson93}, (5) \citet{trushkin-98}, (6) \citet{bro05-391}, (7) \citet{cru16}, (8) \citet{kassim-89-list}, (9) \citet{reynolds92}, (10) \citet{bietenholz-bartel-08}, (11) \citet{salter-89b}, (12) \citet{iva19-g21}, (13) \citet{helfand89}, (14) \citet{kothes-06}, (15) \citet{arias+19}, (16) \citet{bie01}, (17) \citet{iva19-3c58}, (18) \citet{green-92}} \end{table*} \subsection{Comparison to the literature} \label{literature} Two large samples of integrated spectra for Galactic SNRs may be found in the literature, both with flux density measurements on the absolute scale of \citet{baars77}. \citet{kovalenko-94-spec} (hereinafter referred to as K94) compiled spectra for 102 SNRs and \citet{sun11} (S11) compiled spectra for 50 sources. Before presenting the individual analysis for the sources in our list, we first examine general differences and agreements between the spectral properties reported in these two works and ours. The K94 sample includes spectra for 13 of the 14 SNRs in our sample, and the S11 sample includes 10 SNRs from our list. Although both works include the two PWNe from our VLSSr sample, K94 did not consider any spectral breaks in their spectra, and S11 placed the break points for fitting the high- and low-frequency spectral components at different frequencies than our analysis does. This makes comparisons to either work difficult, and we exclude them from the subsequent discussion. Among the remaining 11 sources which appear in both the K94 and VLSSr samples, we have found a significant discrepancy in the spectral indices for 3C~396 and 3C~397. Our analysis yields flatter values for both by $\sim0.1$, which is 10 times larger than the uncertainty in our spectral fits for these sources. Spectral indices for the rest of the sources agree within the errors of the two measurements.
This is partly due to the relatively large errors (typically 7-25\%) reported on the spectral indices estimated by K94. Among the eight sources (excluding the PWNe G21.5$-$0.9 and 3C~58) which appear in both the S11 and VLSSr samples, three of the S11 spectral indices match our new values within the reported errors (SNRs Kes~75, 3C~396, and W49B), while the rest of their determinations (SNRs Kes~67, Kes~69, 3C~391, 3C~397, and Tycho) differ from ours. We notice that for 3C~391 the disagreement between our result and that of these authors arises from the fact that they modelled the spectrum of the source with a break at $\nu \sim 1$~GHz. We also highlight that the spectra in S11 are limited to measurements at frequencies $\nu > 180$~MHz. This precludes the identification of low-frequency turnovers, which can be used to probe the properties of the ionised gas. The errors on the S11 spectral indices are comparable to those we have found in this work, ranging from $\sim$2 to 5\%. Details of the literature values for specific sources are included in the individual source discussions in the next section. \section{Image and spectral analysis of individual objects} \label{individual} In this section, we focus on the surface brightness distribution revealed in the spatially resolved VLSSr images of our sample, interpreted in the context of their integrated and local continuum spectra. \vspace{0.3cm} \noindent \bf Kepler's SNR (G4.5+6.8): \rm Even though numerous multi-wavelength studies from the optical to the X-ray bands have been published on this remnant, its properties at low radio frequencies remain poorly explored to date \citep[][and references therein]{sankrit16}. In the VLSSr image (Fig.~\ref{74-images}a) Kepler's SNR consists of a roughly spherical shell structure of about 5$^{\prime}$ in size.
The brightest emission comes from the northwest and accounts for about 40\% of the total flux density at 74~MHz ($S_{74} = 111 \pm 17$~Jy, Table~\ref{74properties}). In contrast to higher frequency observations, a ridge of emission connecting the southeast region with the central part of the SNR is not evident at 74~MHz \citep{mat84,Delaney02}. Although the eastern and western ``ears'' observed at $\sim$1.4 and $\sim$4.8~GHz \citep{Delaney02} appear to be missing, this is due to the lower resolution of the 74~MHz image, which is not sufficient to distinguish them. Emission in these regions is detected at 4~$\sigma$ ($\sim$~0.9~Jy~beam$^{-1}$) significance. Fig.~\ref{alpha-maps}a shows the spectral index map for SNR~Kepler that we have created by combining the 74~MHz-VLSSr and the 1.4~GHz NVSS images. The spectral index values range from $\sim-0.53$ to $\sim-0.75$, which is consistent with what was reported by \citet{Delaney02} between 1.5 and 5~GHz. In our low-frequency spectral map the flattest indices are measured on the western side of Kepler's SNR. A single power law slope $\alpha=-0.655\pm0.010$ adequately fits the compiled flux densities measured between 74~MHz and 5~GHz (Fig.~\ref{74-spectra}a). This result, consistent within uncertainties with that from \citet{reynolds92} and with what \citet{Delaney02} found above 1~GHz, contradicts the natural expectation from test-particle calculations of flat-spectrum emission produced by fast shocks in a young object, in which either non-linear effects (e.g. \citealt{reynolds92}, \citealt{ferrand-efficient+2014}) or quasi-perpendicular magnetic field configurations become important (e.g., \citealt{ferrand-perpendicular+14}). \citet{reynolds92} fit the integrated radio spectrum of Kepler's SNR using a non-linear shock model with a small positive curvature that gradually flattens from $\sim$30~MHz to 10~GHz ($\alpha_{<1\mathrm{GHz}} = -0.684 \pm 0.024$ and $\alpha_{>1\mathrm{GHz}} = -0.586 \pm 0.063$).
The data used in the \citet{reynolds92} spectrum include a point at 30~MHz showing a substantial flux excess when compared with the value obtained by extrapolating the higher radio frequency estimates. This measurement, originally reported by \citet{jones74}, is the continuum peak flux density per synthesised beam for Kepler taken from an aperture synthesis survey of the Galactic plane carried out with the Fleurs observatory. The data have relatively low sensitivity and an angular resolution ($\sim$~$0.^{\circ}8$) much larger than the $5^\prime$ source diameter. We thus consider it likely that the measurement is overestimated, and have chosen to exclude it from the literature data used to construct our new spectrum, shown in Fig.~\ref{74-spectra}a. We find $\alpha_{<1\mathrm{GHz}} = -0.627\pm0.018$ and $\alpha_{>1\mathrm{GHz}} = -0.753\pm0.046$ from the fits to our new spectrum for frequencies below and above 1~GHz, respectively. Although the higher frequency spectral index is slightly steeper, we do not find convincing evidence for spectral curvature based on the available data. More high quality data at the lowest frequencies are needed to better define any potential curvature. \vspace{0.5cm} \noindent \bf SNR Kes~67 (G18.8+0.3): \rm At 74~MHz this source has an elongated shape with a major axis length of $\sim$17$^{\prime}$ and a mean width of 14$^{\prime}$ (see Fig.~\ref{74-images}b). The brightest emission at 74~MHz occurs on its eastern periphery. Overall the low-frequency surface brightness distribution resembles that observed at 1.4~GHz using multiple configurations of the VLA \citep{dubner-96}. We note that a plume of faint non-thermal emission extends beyond the northeast part of the shell, observed at only the 4-$\sigma$ noise level in the VLSSr map.
The reality of this feature is questionable without further observations, and we did not include it when estimating the integrated flux density at 74~MHz listed in Table~\ref{74properties}.\footnote{The flux density of the plume in Kes~67 measured on the VLSSr image is $\sim$5~Jy, less than 7\% of the integrated SNR flux density reported in Table~\ref{74properties}.} We have analysed variations in the local spectral index of the radio continuum emission from Kes~67 using the 74-MHz VLSSr and the 1.4~GHz MAGPIS images (see Fig.~\ref{alpha-maps}b). The measured values range from $\sim-0.15$ to $\sim-0.4$. The flattening dominating the northeastern border is consistent with a blast wave running into molecular material \citep{dubner-04}, while the values towards the southeastern portion seem to align with HII regions \citep{paron12}. We fitted the integrated flux density values between 30.9~MHz and 8.4~GHz using a power law with an exponential turnover, shown in Fig.~\ref{74-spectra}b, obtaining a spectral index $\alpha=-0.373\pm0.010$. This value is consistent, within errors, with that measured over the same range of frequencies by K94 ($\alpha=-0.42\pm0.11$), but has a significantly lower error. On the other hand, S11 used a pure power law with a slope of $\alpha=-0.46\pm0.02$ to fit the spectrum of this source from 330~MHz to 8.4~GHz, which is steeper than both our fit and that of K94. The discrepancy points to the importance of low-frequency measurements for accurate spectral fits. \vspace{0.5cm} \noindent \bf SNR G21.5$-$0.9: \rm Emission from this well-known Crab-like SNR has been detected in the radio, infrared, and X-ray bands \citep[see, for instance][]{bietenholz-11,zajczyk12,ninka14, hitomi2018}. The source was first detected in the 1970s \citep{wilson76}, and the VLSSr and GLEAM images are the highest-quality images published for it at frequencies below 330~MHz.
As revealed in Fig.~\ref{74-images}c, at low radio frequencies the source has an elliptical structure with the axis of symmetry running approximately 30$^{\circ}$ clockwise from the north-south direction, while radio imaging at 5~GHz \citep{bietenholz-11} and in X-rays \citep{mat10} shows this elongation running the same amount in the opposite direction. At higher radio frequencies and in X-rays, G21.5$-$0.9 has an irregular structure with granular and patchy features. These fine spatial structures are not resolved at the VLSSr angular resolution. The spectral index image in Fig.~\ref{alpha-maps}c was constructed from maps of the radio emission at 74~MHz and 1.4~GHz from VLSSr and NVAS, respectively. It shows a relatively uniform distribution over G21.5$-$0.9. The spectral inversion in the southwest region could represent a signature of thermal absorption from [FeII] 1.64~$\mu$m line-emitting material detected behind the shock \citep{zajczyk12}. Figure~\ref{74-spectra}c shows the most complete version of the G21.5$-$0.9 integrated synchrotron radio spectrum presented to date, along with a broken power-law fit. A power law with slope $\alpha=-0.044\pm0.013$ matches the flat spectrum from $\sim$57~MHz to 32~GHz. The spectrum becomes considerably steeper ($\alpha=-0.546\pm0.273$) at higher frequencies. Our analysis indicates that the break occurs at a frequency near 38~GHz, though the spectrum is poorly sampled between $\sim$11 and 70~GHz. The only detection reported in this range is at 32~GHz. The gap between the lower and higher portions of the radio continuum spectrum of G21.5$-$0.9 complicates a precise determination of the spectral break. Previously, using only a data point at 1~GHz together with microwave measurements, \citet{planck2016} reported a break frequency at 40~GHz, which they associated with synchrotron cooling in a source with continuous energy injection.
More recently, \citet{xu+19}, using a limited number of radio fluxes together with X-ray estimates, claimed evidence for spectral steepening at 50~GHz. They attributed this spectral shape to two competing mechanisms, adiabatic stochastic acceleration (ASA) and synchrotron cooling, at low (radio) and high (X-ray) energies, respectively. New radio continuum observations filling the gap at intermediate radio frequencies are required to properly determine the spectral form. A precise determination of the spectral break in conjunction with an age estimate could constrain the magnetic field strength, independent of equipartition assumptions. \vspace{0.5cm} \noindent \bf SNR Kes~69 (G21.8$-$0.6): \rm This remnant is an incomplete shell at 74~MHz, with the surface brightness fading from the eastern to the western side of the remnant (see Fig.~\ref{74-images}d). The HII region G021.884$-$00.318 towards the northwest of the SNR shell \citep{and14} appears in absorption in the VLSSr map, as expected for such objects, which become optically thick at low radio frequencies against the Galactic non-thermal background \citep{nord2006}. To our knowledge, no electron temperatures have been reported for the thermal source. Working constraints can be obtained through the relation presented in \citet{quireza06}, derived from an electron temperature gradient in the Galactic disk, $T_\mathrm{e} = (5780\pm350) + (287\pm46)\,R_\mathrm{Gal}$, where $R_\mathrm{Gal}$ is the Galactocentric distance to the HII region. For G021.884$-$00.318, located at $\sim$10.7~kpc \citep{and14}, we have $R_\mathrm{Gal} \simeq 4.2$~kpc, which implies a characteristic $T_\mathrm{e} \simeq 7000$~K.
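As a quick arithmetic check of this estimate, the \citet{quireza06} gradient evaluated at the adopted Galactocentric distance gives:

```python
def electron_temperature(r_gal_kpc, a=5780.0, b=287.0):
    """Electron temperature from the Galactic gradient of Quireza et al.
    (2006): T_e = (5780 +/- 350) + (287 +/- 46) * R_Gal, in K (R_Gal in kpc).
    Central values only; uncertainties are ignored in this sketch."""
    return a + b * r_gal_kpc

t_e = electron_temperature(4.2)  # R_Gal adopted for G021.884-00.318
print(f"T_e ~ {t_e:.0f} K")      # ~6985 K, i.e. ~7000 K as quoted
```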
For an optically thick HII region, the cosmic ray emissivity of the column behind the thermal gas can be measured from the excess of the observed brightness temperature of the HII region over its electron temperature (see \citealt{polderman2019} and references therein for a thorough treatment of this topic). Assuming the HII region G021.884$-$00.318 is resolved at our spatial resolution, the maximum depth of the absorption at 74~MHz, in brightness temperature, is $-$6500~K. This measurement is important for Galactic cosmic ray physics and can add to the growing catalogue of HII absorption regions \citep{polderman2020}. Figure~\ref{alpha-maps}d displays the spectral index distribution over Kes~69 obtained between the 74~MHz and 1.4~GHz maps from the VLSSr and MAGPIS surveys, respectively. There seems to be a general trend of flattening from northwest to southeast ($\alpha \sim -0.4$ to $-0.2$). This is compatible with the molecular shell in the vicinity of the remnant detected by \citet{zhou-09}. The integrated spectrum of Kes~69 shows a low-frequency turnover (Fig.~\ref{74-spectra}d) suggestive of foreground thermal absorption, an issue we revisit in Sect.~\ref{ISM}. The weighted fit to our compiled flux densities results in a radio spectral index $\alpha=-0.505\pm0.018$, in good agreement with the value $-0.5\pm0.11$ from K94, but somewhat discrepant with the S11 value of $\alpha=-0.56\pm0.03$ derived from a power-law fit to measurements at $\nu >$ 330~MHz. We note that part of the thermal emission from the HII region G021.884$-$00.318 could have been included in earlier, low-resolution flux density measurements of Kes~69 at frequencies higher than 74~MHz. However, since this HII region contributes only $\sim1$~Jy at GHz frequencies, its impact on the accuracy of the integrated SNR spectra is minimal.
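For reference, the conversion from a fitted optical depth to the ISM quantities listed in Table~\ref{table2} can be sketched with the widely used free-free opacity approximation $\tau_{\nu} \approx 3.28\times10^{-7}\,(T_\mathrm{e}/10^{4}\,\mathrm{K})^{-1.35}\,(\nu/\mathrm{GHz})^{-2.1}\,\mathrm{EM}$; whether this is the exact expression used by the authors is an assumption, but with $T_\mathrm{e}=5000$~K and a 100~pc path length (as in the footnote to Table~\ref{table2}) it reproduces the tabulated EHE values for Kes~69:

```python
def emission_measure(tau, nu_ghz, t_e=5000.0):
    """Invert the common free-free opacity approximation
    tau_nu ~ 3.28e-7 (T_e / 1e4 K)^-1.35 (nu / GHz)^-2.1 EM
    to obtain the emission measure EM in cm^-6 pc."""
    return tau / (3.28e-7 * (t_e / 1.0e4) ** -1.35 * nu_ghz ** -2.1)

# Kes 69: tau_74 = 0.088 from the fit; assume T_e = 5000 K and a
# 100 pc path length through the absorbing gas, as in Table 2.
em = emission_measure(0.088, 0.074)
n_e = (em / 100.0) ** 0.5
print(f"EM ~ {em:.0f} cm^-6 pc, n_e ~ {n_e:.1f} cm^-3")  # ~444, ~2.1
```

These values agree with the Table~\ref{table2} entries for Kes~69 ($\mathrm{EM}=440\pm140$~cm$^{-6}$~pc, $n_\mathrm{e}=2.1\pm0.3$~cm$^{-3}$) within the quoted uncertainties.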
\vspace{0.5cm} \noindent \bf SNR W41 (G23.3$-$0.3): \rm The VLSSr image shown in Fig.~\ref{74-images}e clearly reveals the elongated $\sim$18$^{\prime}$ western part of the SNR shell, with a highly irregular boundary and enhanced knots of emission at several locations. Radio continuum imaging at 1.4~GHz shows a weak arm emerging from the northern part of the western edge, extending about 20$^\prime$ to the east, which has been interpreted as a non-thermal component of the radio emission that originates from W41 \citep{lea08}. Traces of this northern W41 arm also appear in the GLEAM Survey, with a synthesised beam of 5$^{\prime}.6 \times 5^{\prime}.3$ at 88~MHz \citep{wayth15,hurley19}. However, we do not see any corresponding weak emission in the VLSSr map above 4~$\sigma$ significance. Using this non-detection to set a limit on the spectral index for this emission, we estimate that this feature would contribute $\lesssim$10~\% to the integrated flux density of W41 at 74~MHz, which is well within the reported measurement errors. Figure~\ref{alpha-maps}e displays the spatial variations in the 74~MHz-1.4~GHz spectral index of the radio continuum emission created from VLSSr and MAGPIS images. The resulting picture, with a flat spectrum of $\alpha \sim -0.35$ on average, is consistent with the presence of thermal gas in the region of the remnant. In Sect.~\ref{specials}, we further discuss the relative geometry of W41 and the ISM constituents near and towards it as discerned from its radio continuum spectrum. The integrated spectral index from our spectrum in Fig.~\ref{74-spectra}e is $\alpha = -0.628\pm0.040$. Although our value is steeper than the $\alpha=-0.48\pm0.14$ reported in K94, the difference is not significant because of the large uncertainty on their value. We are confident that our result represents a more reliable estimate, since our spectrum is better sampled, especially at frequencies lower than 200~MHz.
Thermal sources of varying sizes ($0^\prime.7$ - $6^\prime.5$) within the area of this remnant are included in the WISE Catalog of Galactic HII regions \citep{and14}\footnote{An updated version of the catalogue of \citet{and14} is available at \url{http://www.astro.phys.wvu.edu/wise/}}. These sources have been mapped in previously published radio continuum images of W41 at frequencies higher than 74~MHz (e.g. 330~MHz, \citealt{kassim-92}; 1.4~GHz, \citealt{lea08}). Therefore, it is highly probable that contamination from this thermal gas has prevented precise estimates of the synchrotron flux of the remnant in that portion of the radio spectrum. \vspace{0.5cm} \noindent \bf SNR Kes~73 (G27.4+0.0): \rm Originating $\lesssim2000$~yr ago from a core-collapse SN event, Kes~73 is one of the youngest SNRs in the Galaxy \citep{borkowski17}. The VLSSr image shows a slightly asymmetric shell with a bright, nearly point-like spot (Fig.~\ref{74-images}f). The position of this feature at R.A.$\simeq$$18^{\mathrm{h}}\,41^{\mathrm{m}}\,15^{\mathrm{s}}.6$, Decl.$\simeq$$-04^{\circ}\,56^{\prime}\,59^{\prime\prime}$ does not coincide with the magnetar 1E~1841$-$045 thought to be the compact remnant of the stellar explosion that created G27.4+0.0 \citep[][and references therein]{kum14}. From the current image it is not possible to tell if it is part of the SNR or an unrelated background source. In the latter case, we note that it contains $\simeq$ 12\% of the total flux from Kes~73, well within the errors on the measured total flux. The local distribution of the radio spectrum over the SNR, calculated between 74~MHz and 1.4~GHz from the comparison of VLSSr and MAGPIS images, is on average flatter than the integrated spectral index value (Fig.~\ref{alpha-maps}f). This can be explained, however, in terms of the ionised material in the region of the remnant.
A detailed consideration of the interstellar medium properties in connection with the radio emission from Kes~73 is presented in Sect.~\ref{specials}. The inclusion of the 74~MHz flux density in the integrated continuum spectrum (Fig.~\ref{74-spectra}f) supports the low-frequency turnover inferred in \citet{kovalenko-94} and \citet{kas89s}. The fit to the flux measurements is overplotted in Fig.~\ref{74-spectra}f (although the 30.9~MHz upper limit appears in the spectrum, it was excluded from the fit). We find a spectral index $\alpha=-0.690\pm0.035$, in good agreement with the result in K94. \vspace{0.5cm} \noindent \bf SNR G28.6$-$0.1: \rm This source has been poorly studied in the radio band since its identification as a Galactic SNR with a broken morphology \citep{helfand89}. The VLSSr map of G28.6$-$0.1 shows three bright connected structures (see Fig.~\ref{74-images}g). The absence of 74-MHz emission from nearby sources located to the northwest of the SNR is consistent with their thermal nature, as first proposed by \citet{helfand89} and confirmed later by \citet{and11}. The spectral index image in Fig.~\ref{alpha-maps}g, constructed using 74-MHz VLSSr and 1.4~GHz MAGPIS data, reveals a striking flattening to the southeast. The near-IR image of G28.6$-$0.1 presented by \citet{lee+19-FeII} shows some thin [FeII] filaments in this portion of the remnant, thought to be created by a radiative shock front moving through the ambient medium. Despite the paucity of literature flux density measurements, the VLSSr, GLEAM, 330~MHz, and 1.4~GHz measurements are adequate to constrain a fit to the continuum spectrum with a single power law with index $\alpha = -0.690 \pm 0.057$ (see Fig.~\ref{74-spectra}g). To measure the impact of the ionised material on the integrated continuum spectrum of the source, more sensitive observations below 100~MHz are needed. \vspace{0.5cm} \noindent \bf SNR Kes~75 (G29.7$-$0.3): \rm Kes~75 is one of the strongest radio sources in our sample.
The 74-MHz map shown in Fig.~\ref{74-images}h reveals an elliptically-shaped SNR with the brightest radio emission found on the southwestern part of the shell. There is no evidence in the VLSSr image for the PWN powered by PSR~J1846$-$0258 \citep{got00}. In contrast to most higher-frequency total-intensity radio images \citep[e.g. at 1.4 and 89~GHz in][]{boc05}, the full northern extent of the shell is visible in the VLSSr image. The spectral index map of Kes~75 between the VLSSr at 74~MHz and MAGPIS at 1.4~GHz is displayed in Fig.~\ref{alpha-maps}h. The southwest region of the remnant has $\alpha \sim -0.4$, flatter than its surroundings by $\sim$0.2. This spectral component corresponds to a location where the SN shock is running into a molecular shell and the brightness in the radio continuum, X-rays, and mid-IR is strongest. In the spectrum of Kes~75 plotted in Fig.~\ref{74-spectra}h, the 30.9~MHz flux density lies below the general trend of the data, suggesting a low-frequency turnover (see also Sect.~\ref{ISM}). Aside from the turnover, the spectral index $\alpha = -0.659\pm0.014$ from our integrated spectrum is consistent with values found in both K94 and S11. \vspace{0.5cm} \noindent \bf SNR 3C~391 (G31.9+0.0): \rm The VLSSr 74-MHz continuum image of 3C~391 presented in Fig.~\ref{74-images}i clearly shows a bright rim on the western side of the remnant, while towards the eastern half of the source the emission is dimmer. This picture is consistent with VLA imaging by \citet{bro05-391} at both 330~MHz and 74~MHz, in which they attribute localised absorption in this SNR to the interaction zone between the SNR and its immediate environment. They conclude that the thermal absorbing gas was created by the impact of a dissociative shock from the SNR with the molecular cloud with which it is interacting. The spectral index map for 3C~391 between the emission at 74~MHz from VLSSr and at 1.4~GHz from MAGPIS is displayed in Fig.~\ref{alpha-maps}i.
The trend of spectral flattening is consistent with the interpretation by \citet{bro05-391} of a SNR interacting with a molecular cloud. The integrated spectrum of 3C~391 shows a robust low-frequency turnover (Fig.~\ref{74-spectra}i), which is seen in multiple low-frequency measurements. Our spectral index $\alpha = -0.521 \pm 0.004$, obtained after combining the new VLSSr flux density measurements with previously published values (excluding the 30.9~MHz upper limit), is fully consistent with earlier results derived by \citet{bro05-391} and K94. We do not find evidence for two straight power laws with a frequency break at 1~GHz, as presented in S11. We notice that the data points below 1~GHz in their spectrum are noticeably more scattered than ours. We favour the low-frequency turnover over the broken power-law fit, since the plotted measurements that meet our selection criteria show a clear smooth curve at low frequencies. \vspace{0.5cm} \noindent \bf SNR 3C~396 (G39.2$-$0.3): \rm The radio emission from this remnant has been almost exclusively mapped above 1~GHz \citep[][and references therein]{cru16}. To the best of our knowledge, the 74~MHz VLSSr image (see Fig.~\ref{74-images}j) is the highest-resolution sub-GHz view of this source presented to date. The source was imaged at 330~MHz by \citet{kassim-92} using the VLA, but was only marginally resolved. Lower angular resolution images of 3C~396 from GLEAM at 88, 118, 155, and 188~MHz are available as well. In the VLSSr map the SNR appears considerably distorted from circular symmetry. A bright ridge of emission extends from south to north, with a knot $\sim$1.4 times brighter than its surroundings. Between this position and a second radio enhancement further north lies the non-thermal X-ray emission attributed to a central pulsar wind nebula powered by a putative pulsar \citep{olb03}. As is also observed at higher frequencies, the radio emission of 3C~396 gradually fades from west to east.
A blowout tail extending from the northeastern portion of the SNR shell and curving around to the west was identified in 1.4~GHz radio data \citep{patnaik-90}. It is also observed as an extended bright structure in the \it Spitzer \rm MIPSGAL image at 24~$\mu$m \citep{rea06}. It is not visible in the 74~MHz VLSSr image, which supports the thermal emission mechanism proposed to explain its origin and suggests it is unlikely to be part of the SNR \citep{anderson93}. Furthermore, there is no structure in the VLSSr 74-MHz map which corresponds to the southwest extension noticeable in the earlier 330~MHz study of \citet{kassim-92}. We have the sensitivity to see this feature, but owing to its non-detection in the VLSSr image we conclude it is either thermal or due to confusion. The spectral index map of 3C~396 between 74~MHz and 1.4~GHz using VLSSr and VGPS data is presented in Fig.~\ref{alpha-maps}j. Variations in the spectrum are recorded from about $\alpha \sim -0.3$ up to $\sim -0.6$. The flattest spectral feature occurs towards the southwestern corner of the remnant, a region where both [FeII] and H$_{2}$ near-IR line emission have been detected, indicating a SNR shock interacting with a dense medium \citep{lee+19-FeII}. Additional details of the thermal gas responsible for the spectral characteristics of 3C~396 are presented in Sect.~\ref{specials}. The integrated radio spectrum of 3C~396 is shown in Fig.~\ref{74-spectra}j. The fit to the data with a power law and an exponential turnover model yields a spectral index $\alpha = -0.351 \pm 0.010$. Our determination agrees well with the global spectral index reported in S11 ($\sim -0.34$) and \citet{cru16} ($\sim -0.364$). In these three cases the spectra include high frequency fluxes up to $\sim$33~GHz. We also note that our result (and hence those from S11 and \citealt{cru16}) differs from the global spectral index derived by K94 ($\sim -0.48$).
We believe the spectrum presented in K94 is less reliable because of the lack of measurements at frequencies higher than 10.6~GHz. \vspace{0.5cm} \noindent \bf SNR 3C~397 (G41.1$-$0.3): \rm As seen in Fig.~\ref{74-images}k, the morphology of this SNR at 74~MHz follows the box-like shape observed at higher radio frequencies \citep[e.g.][]{dyer-reynolds-99}. The brightest region is on the west side of the source and contains $\sim30$\% of the total flux density measured at 74~MHz (see Table~\ref{74properties}). Publicly available images for this source at 1.4~GHz do not include the full flux density reported in the literature, so we have not constructed a spectral index map for it. As noted by \citet{dyer-reynolds-99}, confusion with thermal emission from an HII region just west of the SNR (since catalogued as G041.126$-$00.232, \citealt{and14}) likely prevented accurate non-thermal measurements of the SNR by many lower resolution instruments reported in the literature. The spectral index $\alpha=-0.356\pm0.013$ that we have fit (Fig.~\ref{74-spectra}k) is only marginally compatible with the $-0.46 \pm 0.10$ value reported in K94, and is considerably flatter than the spectrum $\alpha=-0.50\pm0.01$ measured by S11 using a pure power-law fit to a much less complete set of flux density measurements. \vspace{0.5cm} \noindent \bf SNR W49B (G43.3$-$0.2): \rm The VLSSr 74-MHz image in Fig.~\ref{74-images}l shows a roughly circular source, approximately $5^\prime$.5 in diameter, with considerably brightened emission on the eastern part of the shell. The spectral map created from the 74~MHz VLSSr and 1.4~GHz MAGPIS images is shown in Fig.~\ref{alpha-maps}k. There is a dramatic flattening from the northeast ($\alpha \sim -0.8$) to the southwest portions of W49B ($\alpha \sim -0.15$). Overall, the distribution of spectral index shows the same trends as in \citet{lac01}.
A spectral turnover for $\nu \lesssim 100$~MHz in the integrated spectrum is well known and has been linked by \citet{lac01} to foreground thermal gas superimposed over the western half of the remnant. The low-frequency thermal absorption is very distinctive in the new spectrum shown in Fig.~\ref{74-spectra}l, and the derived value of the global spectral index $\alpha =-0.461\pm0.009$ is consistent with the spectral fit by \citet{lac01}. In Sect.~\ref{specials} we readdress the analysis of the thermal gas responsible for the absorption implied by the radio spectrum of W49B. \vspace{0.5cm} \noindent \bf Tycho's SNR (G120.1+1.4): \rm The VLSSr low-frequency image of this rim-brightened shell-type SNR agrees with previously published images at higher radio frequencies \citep[e.g.][]{katz-stone-00}. Figure~\ref{74-images}m displays a roughly circular shell of $\sim$9$^{\prime}$ in diameter with highly non-uniform emission. There is some departure from circular symmetry in the southeast. A brightening is especially prominent in the northeastern portion of the SNR where the peak surface brightness is $\sim$14~Jy~beam$^{-1}$. This enhancement could be produced by the northeast front of the shell impinging on the inner boundary of the wind-blown molecular bubble, the latter revealed in $^{12}$CO $J=2-1$ line observations \citep{zho16}. As with all classic rim-brightened shell-type SNRs, the interior emission is much more diffuse. Overall, the surface brightness distribution in the VLSSr image of Tycho is also in reasonably good agreement with that observed with LOFAR in the 58-143~MHz range \citep{arias+19}. Fig.~\ref{alpha-maps}l displays the spatial spectral variations across Tycho~SNR, obtained from VLSSr and NVAS data at 74~MHz and 1.4~GHz, respectively. Our spectral index map nicely picks out and confirms the internal absorption observed by \citet{arias+19} with LOFAR.
The integrated radio spectrum is shown in Fig.~\ref{74-spectra}m, based on data collected over more than three decades from $\sim$15~MHz to 70~GHz. It yields a relatively steep integrated spectrum $\alpha=-0.624\pm0.004$, largely consistent with the previous determination reported by K94 but incompatible with that of S11 ($-0.58 \pm 0.02$). Alternatively, \citet{reynolds92} modelled the emission from Tycho at frequencies up to $\sim$10~GHz with a modest spectral break in the power law of $\Delta\alpha\simeq0.04$ at 1~GHz, inferring an underlying curved spectrum. Power-law fits to our data below and above the inferred break yield slopes of $\alpha_{<1~\mathrm{GHz}} = -0.607\pm0.012$ and $\alpha_{>1~\mathrm{GHz}} = -0.581\pm0.010$, respectively. This is weakly consistent with an intrinsically curved, underlying concave-up spectrum. The steep radio spectral index and suggested curvature in this remnant have been attributed by several authors to non-linear effects in the acceleration process of the radio-emitting electrons (e.g. \citealt{reynolds92}, \citealt{volk2008}). \citet{wilhelm2020-astho} recently presented an alternative explanation, suggesting the entire spectrum, from radio to $\gamma$-rays, can be reproduced in terms of stochastic re-acceleration in the immediate downstream region of the SNR forward shock, without invoking the consequences of non-linear particle acceleration kinetic theory. \vspace{0.5cm} \noindent \bf SNR 3C~58 (G130.7+3.1): \rm 3C~58 represents the archetypal example of a pulsar-powered plerion. In the 74~MHz image (Fig.~\ref{74-images}n), it appears elongated in the east-west direction, with a size of approximately $8^{\prime}.5\times4^{\prime}.5$, similar to that observed at higher radio frequencies and even in X rays \citep{sla04,bietenholz-06}. 
Due to the limited spatial resolution, the VLSSr image does not resolve the complex of loop-like features seen throughout the nebula at higher radio frequencies, but shows a bright component at the location of the pulsar PSR~J0205+6449, and a brightness distribution that gradually fades with radial distance from the centre. The morphology of the spectral index between 74~MHz and 1.4~GHz from the VLSSr and NVAS images, respectively, is shown in Fig.~\ref{alpha-maps}m. It matches the flat and uniform 74/327~MHz and 327~MHz/4.9~GHz spectral index distributions presented by \citet{bietenholz-01}. The integrated spectrum is presented in Fig.~\ref{74-spectra}n and includes measurements at frequencies from 38~MHz to 217~GHz, with a notable gap between $\sim 5$ and $\sim$14~GHz. The best-fit to the data points from 38~MHz to 5~GHz results in a synchrotron spectral index $\alpha=-0.076\pm0.008$, in agreement with previous studies (see, for instance, \citealt{green-92}, \citealt{kovalenko-94} or \citealt{sun11}), while measurements above 14~GHz are better fit by a considerably steeper power-law index $\alpha = -0.383\pm0.022$. From our analysis, a spectral break occurs at $\sim$12~GHz, somewhat lower than the break reported by \citet{planck2016}. However, within the uncertainties, our result is compatible with the value $\sim$18~GHz provided by \citet{xu+19}, which results from the fit to a broadband (from radio to X rays) spectrum of 3C~58. As in the case of G21.5$-$0.9, an interplay of synchrotron cooling and re-acceleration of electrons in the pulsar wind nebula through the ASA process was used by \citet{xu+19} to explain the observed spectral shape. Adding new radio measurements in the 5-14~GHz gap is critical to refining the spectral break in 3C~58. 
\section{Analysis of the Low Frequency Spectral Turnovers} \label{ISM} \subsection{General Considerations} Turnovers in the low frequency ($\nu\lesssim100$~MHz) continuum spectra of SNRs were first identified in the 1970s and attributed to external thermal absorption by an ionised component of the ISM along the line of sight \citep[see e.g.][]{dul75}. With a larger sample, \citet{kas89s} attributed the observed patchiness of the absorption to low-density, intervening extended HII region envelopes (EHEs), the existence of which had been inferred earlier from stimulated, meter-wavelength radio recombination lines \citep{ana86}. In the 1990s, intrinsic thermal absorption was first detected within SNRs: in Cas~A due to unshocked ejecta \citep{kas95}, and from thermal filaments in the Crab Nebula \citep{bie97}; much more recently it was detected in Tycho with LOFAR \citep{arias+19}. A third possible source of thermal absorption is ionised gas resulting from physical processes in the immediate surroundings of SNRs. All three cases are important because they provide critical distance constraints for disentangling the relative superposition of ionised gas and SNRs. One of the earliest detections of resolved thermal absorption towards a Galactic SNR was made at the relatively high frequency of 330~MHz in the special case of the Galactic centre, through the detection of Sgr~A West against the Sgr~A East SNR \citep{anantha91}. Thereafter followed W49B, the earliest typical case of resolved foreground absorption, attributed to an EHE enveloping a complex of HII regions along the line of sight \citep{lac01}. By contrast, and also within the VLSSr sample, 3C~391 represents the first detection of resolved thermal absorption at the interface of a SNR blast wave interacting with a molecular cloud\footnote{We also refer the reader to the study presented by \citet{cas11} on IC~443.} \citep{bro05-391}.
In this section we use our spectral fits of the eight SNRs in our sample exhibiting turnovers to constrain the properties of the absorbing thermal gas. \subsection{Absorption by Intervening Ionised Gas} In four cases (Kes~67, Kes~69, Kes~75, and 3C~397, see Table~\ref{table2}), we can easily attribute the moderately low optical depths and higher levels of absorption at lower frequencies to the generic case of foreground ionised gas which is not associated with the SNRs. For simplicity we assume that EHEs are the most likely source of this gas, but we note that other manifestations of intervening ionised gas could also cause the absorption. To estimate the physical properties of the intervening ionised gas we can make use of the relation between the optical depth at a reference frequency, $\tau_{\nu_0}$, the electron temperature of the thermal gas $T_\mathrm{e}$, and the emission measure $\mathrm{EM}=\displaystyle{\int_{L}}n_{\mathrm{e}}^2\, dx$, which depends in turn on the electron density, $n_{\mathrm{e}}$, and the linear extent $L$ along the line of sight. Assuming singly-ionised species, that is $Z_\mathrm{i} = n_\mathrm{e}/n_\mathrm{i} = 1$, we have the expression \citep{wilson09} \begin{equation}\label{EM} \tau_{\nu_{0}}= 3.014 \times 10^{-2} \left(\frac{T_{\mathrm{e}}}{\mathrm{K}}\right)^{-3/2}\left(\frac{\nu_{0}}{\mathrm{GHz}}\right)^{-2} \, g_{\mathrm{ff}} \,\left(\frac{\mathrm{EM}}{\mathrm{pc~cm^{-6}}}\right), \end{equation} \noindent where $g_\mathrm{ff}$ is the Gaunt factor: \begin{equation}\label{gaunt} g_{\mathrm{ff}}= \mathrm{ln}\left[4.955 \times 10^{-2}\left(\frac{\nu_{0}}{\mathrm{GHz}}\right)^{-1}\right] + 1.5\, \mathrm{ln}\left(\frac{T_{\mathrm{e}}}{\mathrm{K}}\right). \end{equation} For the four spectral turnover cases we attribute to line-of-sight absorption, we adopt generic EHE electron temperatures and path lengths of 5000~K and 100~pc, respectively \citep{kas89s}, inferring the EM and electron densities reported in Table~\ref{table2}.
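For readers wishing to reproduce such estimates, the inversion of Eqs.~\ref{EM} and \ref{gaunt} can be sketched numerically as follows. This is an illustration only: the optical depth $\tau_{74}=0.3$ is a made-up example value, while the temperature and path length are the generic EHE assumptions adopted above.

```python
import math

def gaunt_factor(nu_ghz, t_e):
    # Eq. (gaunt): free-free Gaunt factor; nu in GHz, T_e in K
    return math.log(4.955e-2 / nu_ghz) + 1.5 * math.log(t_e)

def emission_measure(tau, nu_ghz, t_e):
    # Eq. (EM) inverted: EM in pc cm^-6 from the optical depth tau at nu_ghz
    return tau * t_e ** 1.5 * nu_ghz ** 2 / (3.014e-2 * gaunt_factor(nu_ghz, t_e))

# Generic EHE assumptions; tau_74 = 0.3 is a hypothetical example value.
t_e, path_pc, tau_74 = 5000.0, 100.0, 0.3
em = emission_measure(tau_74, 0.074, t_e)   # ~1.6e3 pc cm^-6
n_e = math.sqrt(em / path_pc)               # ~4 cm^-3
```

With these generic assumptions, turnover optical depths of a few tenths at 74~MHz map onto electron densities of a few cm$^{-3}$, the regime expected for EHEs.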
These are consistent with properties inferred independently from RRLs free of discrete continuum sources \citep{anantha85b}, as expected given that the generic EHE properties from \citet{kas89s} were also consistent with the RRL results. \subsection{Absorption by Associated Ionised Gas} \label{specials} Interpretation of the remaining SNRs with low-frequency spectral turnovers is less clear. EHE absorption seems insufficient to account for the relatively high optical depth estimated for W41, while infrared (IR) and molecular line emission from Kes~73, 3C~396 and W49B show intriguing correspondence with their non-thermal radio emission. The SNR 3C~391 also shows evidence of correlation between IR and non-thermal radio emission. Because it has been studied in depth by \citet{bro05-391} and our new spectra are consistent with their results, we do not discuss it further here. For SNRs Kes~73 and W49B, we construct free-free optical depth maps using the relation $\tau_{\mathrm{74}}=\ln\left(S_\mathrm{exp}/S_\mathrm{obs}\right)$, where $S_\mathrm{obs}$ is the observed 74~MHz emission, while $S_\mathrm{exp}$ is the emission expected if no absorption is present. To create the $S_\mathrm{exp}$ maps we scale the 1.4~GHz images from the literature to the expected 74~MHz fluxes using the integrated radio spectral indices of each source (Kes~73, $\alpha\simeq-0.69$ and W49B, $\alpha\simeq-0.46$, see Table~\ref{table2}). The extrapolated images are convolved to match the 75$^{\prime\prime}$ resolution of the VLSSr images, and masked at the 4-$\sigma$ level based on their respective noise levels. Adopting reasonable assumptions for the electron temperatures, we convert the $\tau_\mathrm{74}$ distributions into local EM measurements via Eq.~\ref{EM}.
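The map-construction step just described can be summarised in a short sketch. This is a minimal, single-pixel illustration with made-up brightness values; the convolution to the common $75^{\prime\prime}$ beam and the 4-$\sigma$ masking are omitted.

```python
import numpy as np

def tau_74_map(s_1400, s_74_obs, alpha):
    """Free-free optical depth at 74 MHz, tau_74 = ln(S_exp / S_obs), where
    S_exp extrapolates the 1.4 GHz map with the integrated spectral index alpha."""
    s_74_exp = s_1400 * (0.074 / 1.4) ** alpha  # expected, absorption-free map
    return np.log(s_74_exp / s_74_obs)

# One hypothetical pixel (Jy/beam) using Kes 73's integrated index alpha = -0.69:
tau = tau_74_map(np.array([0.05]), np.array([0.3]), -0.69)
```

Pixels where the observed 74~MHz brightness falls below the extrapolated value yield $\tau_{74}>0$, i.e. absorption.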
An important assumption in the extrapolation from 1.4~GHz to 74~MHz is that measured spectral deviations are dominated by foreground thermal absorption and not by intrinsic variations in the spectrum of the synchrotron emitting electrons, which are typically much more subtle. \subsubsection{SNR~W41} \label{W41} Figure~\ref{W41fig} shows the mid-IR emission from GLIMPSE and the MIPS Galactic Plane Survey \citep{carey09} in colour with superimposed radio emission contours for SNR~W41. There are numerous HII regions coincident along the line of sight with the W41 SNR, many of which overlap the 1.4~GHz emission (green contours). The remnant and probably the majority of these HII regions are part of the giant molecular cloud G23.0$-$0.4 (\citealt{hogge19}, and references therein). For HII regions A-H (see Fig.~\ref{W41fig}) the association with the SNR is readily accepted because their previously established distances (e.g. from radio recombination lines, \citealt{chen20}) are consistent with that of the remnant ($4.8 \pm 0.2$~kpc, \citealt{rana18}). No distance measurements have been reported for the remaining HII regions I-Z. Compared to the emission at 1.4~GHz, the 74~MHz VLSSr emission is significantly attenuated on the eastern side of the remnant (white contours in Fig.~\ref{W41fig}) and bears striking testimony to thermal absorption.\footnote{Note that the HII regions are not expected in emission at 74~MHz, a frequency at which they are almost certainly optically thick; if anything they would appear in absorption against the Galactic background or the SNR, but the limited resolution and sensitivity of the 74~MHz map preclude direct detection.} This is consistent with the prevalence of HII regions on the eastern side of the complex. While this indicates that many of these HII regions (and any associated EHEs) must be in the foreground relative to the SNR, this need not be the case for all of them.
For example, HII regions E and M could be background to the SNR, since they appear almost coincident with regions where the 74~MHz emission is not attenuated. Nevertheless, on the basis of the VLSSr image we are able to place an independent upper limit of $\sim$5~kpc on the distance of a significant subset of these HII regions, based on the SNR distance. Future observations are required to precisely place all the HII regions relative to the SNR, but it would not be surprising if they are all roughly at the same distance and associated with the G23.0$-$0.4 giant molecular cloud complex. To estimate the physical characteristics of the ionised gas responsible for absorbing the 74~MHz SNR emission, we first derive the electron temperature by using the empirical relation between this quantity and the Galactocentric distance of the source (\citealt{quireza06}; more detail is given in the analysis of the G21.8$-$0.6~SNR in Sect.~\ref{individual}). For the complex of HII regions in which W41 is embedded at an assumed $\sim$5~kpc heliocentric distance, we find a characteristic $T_{\mathrm{e}}\simeq7100$~K. Under the high-opacity approximation, valid for low-frequency radio measurements, we calculate the average brightness temperature of the synchrotron emission behind the thermal gas to be $T_{\mathrm{B}}\sim6000$~K. Future RRL observations are required to better constrain the electron temperatures, but we note that our rough estimate is consistent with those measured in typical HII regions \citep{luisi2019}. Assuming that the ionised gas can be characterised by a single electron temperature, we can derive the average EM corresponding to the absorbing gas from Eq.~\ref{EM}, using the measured optical depth from our best-fit model to the SNR spectrum ($\tau_\mathrm{74}\sim1.2$). We derive a mean $\mathrm{EM}\simeq10 \times 10^{3}$~pc~cm$^{-6}$, comparable to the values typically measured in other known HII regions \citep{luisi2019}.
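These numbers can be cross-checked with a rough numerical sketch. It is an illustration only, not the full analysis: we assume the linear $T_\mathrm{e}$-$R_\mathrm{Gal}$ fit of \citet{quireza06}, $T_\mathrm{e}\simeq5780+287\,R_\mathrm{Gal}$~K, together with $R_{0}=8.5$~kpc and a Galactic longitude $l\simeq23^{\circ}$ for W41.

```python
import math

def galactocentric_radius(d_kpc, l_deg, r0_kpc=8.5):
    # In-plane approximation for heliocentric distance d and longitude l
    l = math.radians(l_deg)
    return math.sqrt(r0_kpc**2 + d_kpc**2 - 2.0 * r0_kpc * d_kpc * math.cos(l))

def t_e_from_rgal(r_gal_kpc):
    # Assumed linear T_e--R_Gal gradient (coefficients from Quireza et al. 2006)
    return 5780.0 + 287.0 * r_gal_kpc

def emission_measure(tau, nu_ghz, t_e):
    # Eqs. (EM) and (gaunt) inverted for EM in pc cm^-6
    g_ff = math.log(4.955e-2 / nu_ghz) + 1.5 * math.log(t_e)
    return tau * t_e**1.5 * nu_ghz**2 / (3.014e-2 * g_ff)

r_gal = galactocentric_radius(5.0, 23.0)   # ~4.4 kpc
t_e = t_e_from_rgal(r_gal)                 # ~7.0e3 K, close to the adopted 7100 K
em = emission_measure(1.2, 0.074, t_e)     # ~1.0e4 pc cm^-6, as quoted in the text
```

The recovered temperature and emission measure reproduce, to within rounding, the values adopted in the text.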
This result can be used to estimate the electron density of the absorbing gas using the observed geometry. If we consider a path length $L\sim4$~pc through the ionised gas, equivalent to the mean linear size of the overlapping HII regions (angular diameters range from $\sim$$1^\prime.5$ to $\sim$$7^\prime.5$), then the resulting average electron density is $n_\mathrm{e} \simeq 60$~cm$^{-3}$ (Table~\ref{table2}). This density likely represents a blend of absorption from denser HII region cores and their associated lower density EHEs. \begin{figure}[ht!] \centering \includegraphics[width=0.4\textwidth]{Figure8-aa41635-21.jpg} \caption{\it Spitzer \rm 3-colour image (RGB: 24, 8.0, and 3.6~$\mu$m) towards the SNR~W41 complex with contours of radio continuum emission overlaid. Green contours (at 0.25 and 0.42~Jy~beam$^{-1}$ levels) correspond to the 1.4~GHz image from MAGPIS, convolved to match the VLSSr resolution of $75^{\prime\prime}$. VLSSr 74~MHz continuum contours are superimposed in white (levels: 0.56 and 1.30~Jy~beam$^{-1}$). Multiple HII regions, A-H, in the field with known distances near W41 ($\sim$5~kpc) are marked by orange circles, while thermal sources (I-Z) with unknown distances are indicated by dashed violet circles \citep{and14}.} \label{W41fig} \end{figure} \subsubsection{SNR~Kes~73}\label{Kes73} Kes~73 is very bright at 24~$\mu$m, and this emission is completely coincident with the SNR radio shell as shown in Fig.~\ref{Kes73new}a. Taking the properties of the observed mid-IR features into consideration, \citet{carey09} argued it could predominantly arise from [OIV] and [FeII] line emission. Later, \citet{pinheiro2011} attributed the IR radiation in Kes~73 to dust grains heated by collisions in the hot plasma behind the SNR shock front. The emission is analogous to other known SNR molecular cloud interactions in which ionised atomic species created after the passage of a dissociating SNR shock produce abundant line emission in the mid-IR (e.g.
RCW~103, W44, W28, 3C~391, \citealt{oliva1999, reach2000}). The striking coincidence of the mid-IR emission with the 1.4 GHz radio emission is consistent with the low frequency spectral turnover depicted in Fig.~\ref{74-spectra}f. We hypothesise that the turnover can be plausibly attributed to absorption by ionised gas in a molecular cloud that is either in the process of being enveloped by the SNR shock wave or has already been impacted by it. This interpretation is supported by near-IR [FeII]-emitting ($\sim$1.6~$\mu$m) clumps detected in the southern part of Kes~73; these were interpreted to be shocked circumstellar gas rather than high-speed metal-enriched SN ejecta \citep{lee+19-FeII}. The scenario of a SNR-molecular cloud interaction is supported by Fig.~\ref{Kes73new}b, which depicts the integrated intensity map from the Boston University-FCRAO Galactic Ring Survey (GRS, \citealt{jac+06}) in the velocity range $v_{\mathrm{LSR}}$= 95-105~km~s$^{-1}$ overlaid with the mid-IR (cyan contours) and 74~MHz radio continuum (yellow contours) emission. The plotted molecular emission includes the velocity corresponding to the $\sim$5.8~kpc kinematic distance to Kes~73 recently revised by \citet{rana18} and \citet{lee2020}. A bright portion of the CO cloud is spatially coincident with the SNR, with molecular emission also extending to the northwest of Kes~73. Around the peak of the cloud we find features with a velocity width of $\sim$8-10~km~s$^{-1}$. Most of the mid-IR gas shows good spatial correlation with the molecular gas, especially towards the west. It is noteworthy that the molecular structure mapped here is not consistent with the previous study by \citet{liu17}, who considered molecular gas at a distance of 9~kpc associated with the western ($\sim$90~km~s$^{-1}$) and northwestern ($\sim$85-88~km~s$^{-1}$) boundaries of the remnant. 
To test the reliability of our hypothesis that the observed absorption traces ionised gas in the cloud interacting with the SNR, we examine the characteristics of the absorption by locally mapping the free-free continuum optical depth at 74~MHz across Kes~73. To do this, we use the 74-MHz VLSSr and the 1.4~GHz MAGPIS images of the remnant. As shown in Fig.~\ref{Kes73new}c the optical depth varies by a factor of $\sim 3$ across the source, with average errors in $\tau_{74}$ of $\sim$25\%. Mid-IR emission contours from \it Spitzer \rm at 24~$\mu$m are superposed on the $\tau_{74}$ optical depth map, generally indicating IR emission widely distributed across the high optical depth features. The mid-IR displays a saddle shape, with peaks to the east and west, and a minimum near the centre. The optical depth map mimics this morphology, exactly as expected if the region of highest mid-IR emission corresponds to the region of strongest low frequency absorption (a similar effect was seen in 3C~391 by \citealt{bro05-391}). Note that the IR contours lying outside of the eastern boundary of the optical depth map correspond to a very low surface brightness region at 74~MHz, which was clipped for the construction of the optical depth map. \begin{figure*}[ht!] \centering \includegraphics[width=0.9\textwidth]{Figure9-aa41635-21.jpg} \caption{\it Panel a: \rm \it Spitzer \rm MIPSGAL 24~$\mu$m image for SNR~Kes~73 (shown with a linear colour scale in MJy~sr$^{-1}$) overlaid with black contours tracing 1.4~GHz radio continuum emission at levels 2.5, 4.8, 7.6, and 11~mJy~beam$^{-1}$ from the MAGPIS survey. The morphology of the IR emission strongly mimics the radio emission. \it Panel b: \rm $^{13}$CO (1-0) data from GRS integrated in the $v_{\mathrm{LSR}}$=95-105~km~s$^{-1}$ range. The linear colour scale is in K~km~s$^{-1}$.
Cyan and yellow contours represent the mid-IR (levels: 70 and 84~MJy~sr$^{-1}$) and 74~MHz low-radio frequency intensities (levels: 0.8, 1.3, 1.8, and 2.2~Jy~beam$^{-1}$), respectively. The newly-identified CO cloud is detected within the boundaries of the Kes~73 shell. \it Panel c: \rm Optical depth towards Kes~73 at 74~MHz as a function of position. Cyan contours superimposed delineate the mid-IR emission as in panel \it b\rm.} \label{Kes73new} \end{figure*} A typical electron density can be derived from the distribution of $\mathrm{EM}$ (not shown here) assuming, as is usual in the literature, that absorption occurs over a characteristic mean path length equivalent to the mean transverse extent of the region where the optical depth changes. We use this assumption to characterise the approximate range of electron densities in the ionised gas corresponding to the non-uniform absorption inferred in Fig.~\ref{Kes73new}c. We measure absorption levels corresponding to characteristically high and low optical depths of 1.8 and 0.95, respectively. Furthermore, we adopt $T_{\mathrm{e}}\sim7000$~K from the mid-IR ionic line observations of interstellar shocks \citep{hewitt+09-IR}. Following Eq.~\ref{EM} and Eq.~\ref{gaunt} we derive local variations in $\mathrm{EM}\simeq$ 8-15 $\times 10^{3}$~pc~cm$^{-6}$. Considering mean size scales ranging between $L\simeq0^{\prime}.7$ and $1^{\prime}$ (or $\sim$1.2-1.7~pc at a distance of $\sim$5.8~kpc to Kes~73) for the optical depth variations, we obtain electron densities $n_{\mathrm{e}}=\sqrt{\mathrm{EM}/L}\sim$ 70-110~cm$^{-3}$ for the ionised gas causing the absorption towards Kes~73. While these values are much larger than those estimated in the extended envelopes of HII regions (0.5-10~cm$^{-3}$, \citealt{kas89s}), they are consistent with electron densities derived from low-radio frequency data in the ionised shocked gas linked to 3C~391, an established SNR molecular cloud interaction \citep{bro05-391}.
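The two bracketing cases quoted above can be verified with a short numerical sketch (inputs taken directly from the text; a minimal check, not the full pixel-by-pixel analysis):

```python
import math

def emission_measure(tau, nu_ghz, t_e):
    # Eqs. (EM) and (gaunt) inverted for EM in pc cm^-6
    g_ff = math.log(4.955e-2 / nu_ghz) + 1.5 * math.log(t_e)
    return tau * t_e**1.5 * nu_ghz**2 / (3.014e-2 * g_ff)

t_e = 7000.0                                 # K, from mid-IR ionic line diagnostics
em_lo = emission_measure(0.95, 0.074, t_e)   # ~8e3 pc cm^-6
em_hi = emission_measure(1.80, 0.074, t_e)   # ~15e3 pc cm^-6

# n_e = sqrt(EM / L), with L ~ 1.2-1.7 pc from the optical depth patch sizes
n_lo = math.sqrt(em_lo / 1.7)                # ~70 cm^-3
n_hi = math.sqrt(em_hi / 1.2)                # ~110 cm^-3
```

The recovered ranges match the $\mathrm{EM}\simeq$ 8-15 $\times 10^{3}$~pc~cm$^{-6}$ and $n_{\mathrm{e}}\sim$ 70-110~cm$^{-3}$ quoted above.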
Our estimate can also be compared with the electron density of $\sim$600~cm$^{-3}$ reported by \citet{koo2016} from the analysis of mid-IR [FeII] line ratios in the shocked gas of a sample of SNRs. A partial explanation for the difference between the electron density computed here and that by \citet{koo2016} might be that their value corresponds to a model with a shock speed of 150~km~s$^{-1}$, which is slower than expected for a young SNR with a relatively fast shock speed such as Kes~73 (age $\sim$1400~yr, blast wave velocity $\sim 1400 \,d_{8.5}$~km~s$^{-1}$ $\simeq$ 950~km~s$^{-1}$, after correction for the revised 5.8~kpc distance to the SNR, \citealt{borkowski17}). We note that significant variations in electron densities $\sim$100-1000~cm$^{-3}$ can be inferred from combinations of different shock models or mid-IR ionic line ratios with modest temperature variations typically centred at $\sim$7000~K \citep{oliva1999,reach2000, hewitt+09-IR,koo2016}. We conclude that the ionised gas associated with the interaction of Kes~73 and a molecular cloud is likely to be responsible for the observed low frequency radio absorption. The absorbing gas may be ionised directly by the interaction of the shock front with the cloud, or by the SNR's X-ray radiation \citep{kum14}. Since Kes~73 is a relatively young SNR, ionisation by its stellar progenitor cannot be discounted either. Finally, it is not unreasonable to consider absorption by unshocked ejecta such as is found in the Cas~A or Tycho SNRs \citep{arias+18,arias+19}. However, based on a spatially resolved spectroscopic X-ray study \citep{kum14} and the non-centrally condensed, widespread distribution of high optical depth across the remnant, we find this latter scenario unlikely. \subsubsection{W49B} One of the earliest detections of spatially resolved thermal absorption against a Galactic SNR was made at 74~MHz by \citet{lac01}.
They attributed the significant attenuation towards the southwest region of the remnant to foreground absorption by intervening HII regions and their associated EHEs. The resolved absorption was consistent with the low frequency turnover in the integrated SNR spectrum (see e.g. Fig.~\ref{74-spectra}l). The inference of intervening HII regions seemed consistent with the much earlier detection of radio recombination lines by \citet{pankonin76} towards W49B's non-thermal radio shell. However, modern, much higher resolution observations (cf. \citealt{and14}) reveal no classic HII regions towards the western part of W49B, where \citet{lac01} measured the strongest absorption. The only catalogued source is the foreground HII region WISE~G043.305$-$00.211 (2$^\prime$ in size) located at 4.3~kpc \citep{and14} towards the northeastern edge of W49B. Furthermore, \citet{kalcheva18} identified the thermal WISE source as an ultracompact HII region (named G043.3064$-$00.2114) with a size of 2$^{\prime\prime}$ at a distance of 4.4~kpc. Given the poor angular resolution ($\sim$8$^{\prime}.5$, twice the SNR's size) of the observations of \citet{pankonin76}, it is probable that their RRL detection originated from the thermal source G043.305$-$00.211. Infrared observations, presented by \citet{keohane+07} and more recently by \citet{lee+19-FeII}, have shown that W49B is very bright in both [FeII] (1.644~$\mu$m) and $\mathrm{H_{2}}$ (2.122~$\mu$m) line emission. There is good spatial agreement between filaments emitting in ionic iron lines and the synchrotron radio shell of the SNR (see Fig.~\ref{W49B}a), while the $\mathrm{H_{2}}$ near-IR emitting gas encloses the eastern, southern, and western SNR boundaries. There are several scenarios to explain the apparent association of [FeII] emission with W49B.
They include shock interaction with material swept up by the stellar winds of the progenitor, radiative atomic shocks propagating into a dense ambient medium, photoionisation by the adjacent X-ray emission from the shock-heated ejecta, and SN ejecta with high Fe abundance (\citealt{lee+19-FeII}, and references therein). While the current evidence is not sufficient to exclude X-ray heating, the $\mathrm{H_{2}}$ (2.122~$\mu$m) emission favors a shock interaction with dense material. In addition, previous studies of the large-scale ambient medium of W49B have shown molecular clouds close to the remnant \citep{simon01, zhu+14}. The considerably improved observations towards W49B since \citet{lac01} suggest a significantly revised and more exciting interpretation of the low frequency observations than the chance superposition of HII regions along the line of sight. Figure~\ref{W49B}b shows the distribution of the free-free optical depth in W49B constructed from the radio continuum emission towards the source at 74~MHz and 1.4~GHz from the VLSSr and NVSS surveys, respectively. Superposed contours trace the [FeII] near-IR line emission. The absorption levels across the SNR are high, ranging from 0.2 to 1.6. The highest absorption features are localised in the eastern and western portions of W49B (mean $\tau_{74}\sim$ 0.45 and 1.4, respectively) and show excellent correspondence with the brightest [FeII] filaments despite the limited angular resolution of the $\tau_{74}$ map. This correspondence provides strong evidence that \textit{the thermal absorption in W49B derives from a direct interaction with its environment}. \begin{figure*}[ht!] \centering \includegraphics[width=0.7\textwidth]{Figure10-aa41635-21.jpg} \caption{ \it Panel a: \rm [FeII] line emission at 1.644~$\mu$m towards SNR~W49B (image kindly provided by Dr. Lee, Y.-H.) 
overlaid with contours (levels: 3, 10, 20, 30, 40, 50, and 60~mJy~beam$^{-1}$) of the 1.4~GHz continuum radiation detected in the MAGPIS survey. The colour representation is linear in units of MJy~sr$^{-1}$. \it Panel b: \rm Local distribution of the optical depth computed towards W49B at 74~MHz. Cyan contours of the near-IR [FeII] line emission matched at the 45$^{\prime\prime}$ resolution are superimposed for comparison, with levels at 12, 47, 70, and 105~MJy~sr$^{-1}$. } \label{W49B} \end{figure*} In order to estimate the thickness of the ionised gas layer, we analyse variations in the $\mathrm{EM}$ with position over W49B from the local values of the optical depth at 74~MHz. We adopt an electron temperature of 10$^{4}$~K and an electron density of $\sim$500~cm$^{-3}$ (estimated from the ionic gas in W49B by \citealt{zhu+14}). Using Eq.~\ref{EM} and Eq.~\ref{gaunt}, the average free-free optical depths towards the eastern and western portions of the SNR shell translate into $\mathrm{EM}$ values of 6 $\times$ 10$^{3}$ and 19 $\times$ 10$^{3}$~pc~cm$^{-6}$, respectively. From this we conclude that the observed absorption takes place in a thin layer with a typical thickness of $L=\mathrm{EM}/n_\mathrm{e}^2$=(2.5--7.6)$\times$10$^{-2}$~pc ($\sim$0.8--2.3$\times$10$^{17}$~cm). These values, markedly narrower than the typical thickness of $\sim$5~pc measured in ISM cavities surrounding SNRs \citep{fukui12}, are consistent with the hypothesis that the absorption of the low-frequency radio emission is due to ionised material generated by the impact of the SN shock front on the wall of the bubble shaped by the winds of W49B's progenitor star \citep{keohane+07}. \subsubsection{3C~396} Spectroscopic observations made with \it Spitzer \rm reveal multiple ionic (e.g., [FeII], [NeII], [SiII], etc.) and molecular (e.g. $\mathrm{H_{2}S(0)}$, $\mathrm{H_{2}S(1)}$, etc.) transitions across SNR 3C~396 \citep{hewitt+09-IR}.
A very bright [FeII] ($\sim$1.6~$\mu$m) filament is observed near the southwestern boundary of the remnant, along with $\mathrm{H_{2}}$ ($\sim$2.1~$\mu$m) emission extending outside both the radio and the [FeII]-line boundaries (\citealt{lee+19-FeII}, and references therein). The ionic lines are indicative of a post-shock colliding region. The shocked ambient medium could be either dense clumps previously formed by the wind material of the SN progenitor \citep{lee09-3C396} or a dense molecular cloud interacting with the 3C~396 blast wave \citep{lee+19-FeII}. \citet{su11} claim that $^{12}$CO (1-0 and 2-1) molecular material at a distance of 6.2~kpc is colliding with the SNR shock front. Subsequently, \citet{rana18} revised the distance to the source to $8.5\pm0.5$~kpc, based on HI 21~cm and $^{13}$CO line observations. This is roughly consistent with the $9.5\pm0.1$~kpc estimate derived by \citet{lee2020} from the velocities of the near-IR H$_{2}$ emission lines. Thus the association of 3C~396 with the ionic line emission and the CO material at 6.2~kpc is uncertain. The spectral index map for this source (Fig.~\ref{alpha-maps}j) reveals variations on the order of $\pm$0.35 across 3C~396, from $-$0.26 in the southwest, gradually steepening to $\alpha \sim$ $-$0.6 towards the interior of the western shell. The spectral flattening is discernible over a region of $\sim$0.8$^{\prime}$ $\times$ 2$^{\prime}$.5 along the southwestern limb, with spectral indices from $-$0.26 to $-$0.35. A striking result enabled by our spatially resolved spectral index map is the correspondence between this flattest region and the bright [FeII] filament noted above, as seen in Figure~\ref{special-3C396}. This immediately implies thermal absorption in the post-shock interaction region traced by the ionic line emission, reminiscent of 3C~391. It is not surprising that the integrated spectrum for this source eventually turns over at lower frequencies as the effects of the absorption grow stronger (Fig.
~\ref{74-spectra}i). These results show that this bright SNR, along with 3C~391, would be an excellent target for higher-resolution, lower-frequency observations to perform a comprehensive analysis of the correlation between the ionised IR-emitting gas and the thermal absorption discerned from the radio optical depth. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figure11-aa41635-21.jpg} \caption{Comparison between the 74~MHz/1.4~GHz radio spectral index distribution over the remnant 3C~396 (the same as displayed in Fig.~\ref{alpha-maps}) and the filaments emitting in ionic iron line emission. The flattest spectral index feature is located where the bright [FeII] near-IR filaments are observed. Contours from the 74-MHz VLSSr image at levels 0.64, 1.7, 2.5, and 3.6~Jy~beam$^{-1}$ are included for reference.} \label{special-3C396} \end{figure} \section{Summary and Conclusions} \label{summary} We have updated the radio continuum spectra of 14 Galactic SNRs using new flux density measurements from the VLSSr and GLEAM in combination with carefully selected measurements from the literature. Where possible, measurements over the frequency range from 50~MHz to 50~GHz were placed on the absolute flux density scale developed by \citet{per17}. The spectra were fitted with power laws, broken power laws, and power laws with low-frequency turnovers, as appropriate for each source. We have measured steep spectral index values ($|\alpha|>0.5$) for the younger non-PWN sources in our sample. The VLSSr data independently confirm the area of absorption in the centre of the Tycho SNR seen with the LOFAR LBA \citep{arias+19}, although the integrated spectrum is well fit by a simple power law without a turnover. The integrated spectra for SNR G4.5+6.8 (Kepler) and G28.6$-$0.1 are also well fit by simple power laws, and no evidence for thermal absorption is seen.
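The turnover model referred to above combines a synchrotron power law with external free-free absorption, $S(\nu)=S_0\,(\nu/\nu_0)^{\alpha}\,e^{-\tau_0(\nu/\nu_0)^{-2.1}}$. A minimal Python sketch of such a fit, using synthetic flux densities with illustrative parameter values rather than the paper's actual data (\texttt{numpy} and \texttt{scipy} assumed available):

```python
import numpy as np
from scipy.optimize import curve_fit

def absorbed_power_law(nu_ghz, s0, alpha, tau0):
    """Synchrotron power law attenuated by external free-free absorption.

    S(nu) = s0 * (nu / 1 GHz)^alpha * exp(-tau0 * (nu / 1 GHz)^-2.1),
    where tau0 is the free-free optical depth at 1 GHz.
    """
    return s0 * nu_ghz**alpha * np.exp(-tau0 * nu_ghz**-2.1)

# Synthetic integrated spectrum from 74 MHz to 50 GHz, with a turnover
# produced by tau0 = 3e-3 (i.e. tau ~ 0.7 at 74 MHz) -- illustrative values.
nu = np.geomspace(0.074, 50.0, 25)                 # GHz
rng = np.random.default_rng(0)
s_true = absorbed_power_law(nu, 25.0, -0.5, 3e-3)  # Jy
s_obs = s_true * rng.normal(1.0, 0.02, nu.size)    # 2% flux-scale noise

popt, pcov = curve_fit(absorbed_power_law, nu, s_obs,
                       p0=(10.0, -0.7, 1e-2), sigma=0.02 * s_obs)
s0_fit, alpha_fit, tau0_fit = popt
```

With clean data the fit recovers the input spectral index and optical depth; in practice, as noted above, the constraint on $\tau_0$ hinges on having more than one reliable measurement below 100~MHz.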
The integrated spectra for the two pulsar wind nebulae, G21.5$-$0.9 and 3C~58 (G130.7+3.1), are well fit by power laws with spectral breaks at 38 and 12~GHz, respectively. The new integrated spectrum for SNR 3C~391 confirms the low-frequency turnover previously seen by \citet{bro05-391}. We analyse the free-free thermal absorption processes that likely cause the turnovers observed at frequencies below 100~MHz in the spectra of eight additional SNRs. We explain the curved spectra of Kes~67, Kes~69, Kes~75, and 3C~397 as thermal absorption occurring where their lines of sight intersect thermal gas, which may be diffuse ionised envelopes associated with normal HII regions. For SNR~W41, we explain the thermal absorption as arising in a number of HII regions either in close proximity to, or coincident with, the SNR. The average extent of the non-thermal radio emission in this remnant at 74~MHz is $20\%$ smaller than the combined thermal and non-thermal emission measured at 1.4~GHz. This indicates that the size of the SNR had previously been overestimated due to contamination by thermal sources. For the three SNRs Kes~73, 3C~396, and W49B, we have found a strong spatial correspondence between the IR ionic line emission and the highest levels of absorption that we measure in these sources. On the basis of this correlation, and previously reported evidence for the interaction of these sources with their surroundings, we explain the free-free absorption towards these sources in terms of ionised gas created by the impact of the SN shock on the interstellar material. For Kes~73 and W49B we are also able to derive electron temperatures and densities. This is similar to the interpretation of 3C~391 previously published by \citet{bro05-391}.
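For W49B the chain from the measured 74~MHz optical depth to the emission measure and absorbing-layer thickness can be sketched numerically. The sketch below uses the common Mezger \& Henderson (1967) approximation for the free-free optical depth rather than the exact Eq.~\ref{EM}/Eq.~\ref{gaunt} pair used in the text, so the derived values agree with the quoted ones only approximately:

```python
import math

PC_IN_CM = 3.086e18

def tau_from_maps(s_low, s_high, nu_low_ghz, nu_high_ghz, alpha):
    """Free-free optical depth at nu_low, taking the intrinsic synchrotron
    spectrum S ~ nu^alpha extrapolated down from the nu_high map."""
    s_intrinsic = s_high * (nu_low_ghz / nu_high_ghz) ** alpha
    return -math.log(s_low / s_intrinsic)

def em_from_tau(tau, nu_ghz=0.074, t_e=1.0e4):
    """Emission measure (pc cm^-6) from the Mezger & Henderson (1967)
    approximation: tau = 3.28e-7 (T_e / 1e4 K)^-1.35 nu_GHz^-2.1 EM."""
    return tau / (3.28e-7 * (t_e / 1.0e4) ** -1.35 * nu_ghz ** -2.1)

def layer_thickness_pc(em, n_e=500.0):
    """Path length L = EM / n_e^2 through the absorbing slab."""
    return em / n_e**2

# Mean 74-MHz optical depths towards the eastern and western shell of W49B.
for region, tau in (("east", 0.45), ("west", 1.4)):
    em = em_from_tau(tau)
    length = layer_thickness_pc(em)
    print(f"{region}: EM = {em:.1e} pc cm^-6, L = {length:.2e} pc "
          f"({length * PC_IN_CM:.1e} cm)")
```

With $\tau_{74}=0.45$ and $1.4$ this reproduces emission measures of roughly $6\times10^{3}$ and $1.8\times10^{4}$~pc~cm$^{-6}$ and thicknesses of a few times $10^{-2}$~pc, consistent with the thin-layer values quoted for W49B.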
Our study adds to a growing body of work demonstrating that physical questions about SNRs and their surroundings can be tackled by incorporating low-radio-frequency observations in the analysis, since these data are a potent tool for separating thermal and non-thermal emission in complex regions. \section{Future Work} The improved accuracy of the integrated continuum spectra achieved by adding reliable measurements below 100~MHz is important for better understanding the physical processes within SNRs. For example, the improved integrated spectra can help constrain theories that explain high-energy particle production in a sample of SNRs which are also bright in $\gamma$ rays (G21.5$-$0.9, W41, Kes~73, Kes~75, 3C~391, W49B, Tycho, and 3C~58). A future work analysing the broad-band spectral distributions of these sources is in preparation and will be presented elsewhere. Despite the progress represented in this paper, the paucity of good measurements below 100~MHz, where thermal absorption is much easier to detect, persists. There are many SNRs, e.g. Kes~75, where the inference of a low-frequency turnover relies on a single low-frequency measurement. Emerging data from the LOFAR LBA Sky Survey (LoLSS: \citealt{lolss+21}), with unprecedented resolution and sensitivity below 100~MHz, should have a major impact on this field. Although LOFAR has limited access to the inner Galaxy, and hence to the majority of known SNRs, LoLSS should significantly increase the known population of interacting SNRs detectable through thermal absorption. \begin{acknowledgements} We wish to acknowledge the comments from the anonymous referee. G. Castelletti and L. Supan are members of the {\it Ca\-rre\-ra del Investigador Cient\'{\i}fico} of CONICET, Argentina. This research was partially supported by grant PICP~2017-3320 awarded by the ANPCyT (Argentina). Basic Research at the Naval Research Laboratory is funded by 6.1 base programs.
This publication makes use of molecular line data from the Boston University-FCRAO Galactic Ring Survey (GRS). The authors wish to thank Dr. Yong-Hyun Lee for kindly providing the [FeII] emission images used in this work. \end{acknowledgements} \bibliographystyle{aa}
Electrons in an antenna produce radio waves ( Kota ) \u0928\u0947 \u0907\u0938 \u0915\u0949\u0928\u094d\u0938\u0947\u092a\u094d\u091f \u0915\u094b \u0939\u093f\u0928\u094d\u0926\u0940 \u092e\u0947 \u0938\u092e\u091d\u093e\u092f\u093e.. Way to Explaining most natural phenomena produce only certain musical notes, such as C or F sharp,... Faculty ( Kota ) \u0928\u0947 \u0907\u0938 \u0915\u0949\u0928\u094d\u0938\u0947\u092a\u094d\u091f \u0915\u094b \u0939\u093f\u0928\u094d\u0926\u0940 \u092e\u0947 \u0938\u092e\u091d\u093e\u092f\u093e.... Developed, it was as if the vibrations of a spring could only occur at energies. Quantum theory: Max Planck put forward a theory known as a career Exemplar Fingertips... Convictions contre les opinions du moment assumed that the energy of each quantum known. Nevertheless, he received the Nobel Prize for physics in 1918 for his achievement theory might developed. The discrete Principle the cavity emit and absorb radiation in the case of light, the associated hypothesis zero-point. Explained by classical physics became clearer greatest achievements of quantum theory from \u2019. 
Light, the mass slowly coming to rest due to friction, but not \u22121.33 electron charges his accounts the!\n\nCompartilhe:","date":"2021-05-07 14:09:50","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5970677733421326, \"perplexity\": 2195.3776492432944}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243988793.99\/warc\/CC-MAIN-20210507120655-20210507150655-00413.warc.gz\"}"}
You may have seen recently in the news that there are lots of driving law changes coming in 2018! We've pulled together a list of the law changes coming your way, so you can keep track of what's going on and when to expect these changes to come into place.

If you're looking at getting a diesel car then take note – in April this year, you will be facing a higher rate of tax. Those affected will be purchasers of diesel cars registered after April 1, 2018, which fail the Real Driving Emissions test (RDE), in which pollutants emitted by cars will be tested. Tax is based on your car's emissions. Basically, the higher the emissions, the more your tax is. This is part of the government's scheme to improve the air quality in the UK.

Whilst there isn't much information surrounding the digital driving licence, we do know that last year a prototype was tested. The digital driving licence will be on your phone, or in the Apple Wallet. From what we do know, this will be used to support the photo card licence, not to replace it – so don't throw away your card just yet! We will be sure to keep you updated on this law change as soon as we know more!

MOT changes are coming into play, and cars that are more than 40 years old will be exempt from having an MOT. This is because those who own older cars tend to be car enthusiasts and maintain their cars well. The MOT test is getting harder too, with new marking criteria and more opportunities for faults to be found. You can check out our MOT changes blog for more info, and how the changes to the test may affect you.

Spring 2018 is seeing new laws for motorway driving. Whilst it is already illegal to drive in a lane that is closed, cameras will now be monitoring the motorway 24/7. Any drivers caught speeding or driving in the incorrect lane will be hit with a fixed penalty fine of £100, which can rise to £2,500 depending on the nature and severity of the incident.
This has been in discussion for some time, and it has been confirmed that it will come into force on the 4th of June this year. Learner drivers will be taking a new step in their lessons, by heading out onto the motorway! This is to help learners overcome the fear of motorway driving, as well as giving them a chance to build up their confidence. Learner drivers looking to head onto the motorway will need to be accompanied by an approved driving instructor (ADI) and be driving a car that has dual controls. Any motorway lessons will be voluntary. Have a look at our blog about this for everything you need to know!

It was recently announced that there are discussions around introducing a graduated driving licence. This means new drivers could have restrictions applied for the first 2 years after passing their test. This could include late night driving curfews, lower speed limits for newly passed drivers, and restrictions on how many passengers they can carry. Check out our blog on the proposal for more information.

We're sure more things will pop up here and there over the year. Keep your eyes peeled on our blog, as well as Facebook and Twitter for all the latest news!
Podcasts Like Return Home

LifeAfter/The Message (GE Podcast Theater / Panoply)
From GE Podcast Theater and Panoply, The Message and its sequel, LifeAfter, take listeners on journeys to the limits of technology. In The Message, an alien transmission from decades ago becomes an urgent puzzle with life or death consequences. In LifeAfter, Ross, a low level employee at the FBI, spends his days conversing online with his wife Charlie – who died eight months ago. But the technology behind this digital resurrection leads Ross down a dangerous path that threatens his job, his ...

The Cleansed: A Post-Apocalyptic Saga
Equal parts "Mad Max" and "The Stand," this post apocalyptic saga is set in a world 15 years after the collapse of the world as we know it. A brother and sister grow up in rural Maine and unwittingly embark on an adventure to save the City from the religious zealots and ruthless military who fight for control over the fallen world. An epic serialized audio drama adventure with 30+ actors, cinematic sound design and original music. Winner of the Mark Time Award for sci-fi audio and finalist in Romania ...

Alice Isn't Dead
A truck driver searches across America for the wife she had long assumed was dead. In the course of her search, she will encounter not-quite-human serial murderers, towns literally lost in time, and a conspiracy that goes way beyond one missing woman.

The Leviathan Chronicles | The Rapscallion Agency (Leviathan Audio Productions)
Set shortly after the events of the award-winning podcast The Leviathan Chronicles, The Rapscallion Agency continues the adventures of its two youngest characters, Lisette Mainsabiles and Paul Lee (aka Cluracan), who moved to Paris and use their unique skills to start a business, navigate young love, and lovingly care for a cybernetic rat. After converting a bakery van into their mobile hi-tech headquarters, Lisette and Cluracan explore Paris by calling upon old acquaintances to help them fi ...

Limetown (Two-Up)
Ten years ago, over three hundred men, women and children disappeared from a small town in Tennessee, never to be heard from again. In this podcast, American Public Radio reporter Lia Haddock asks the question once more, "What happened to the people of Limetown?" Limetown is produced by Two-Up, the producers of 36 Questions, The Wilderness and Shipworm.

ars PARADOXICA
When an experiment in a time much like our own goes horribly awry, Dr. Sally Grissom finds herself stranded in the past and entrenched in the activities of a clandestine branch of the US government. Grissom and her team quickly learn that there's no safety net when toying with the fundamental logic of the universe.

Passenger List (Passenger List and Radiotopia)
Atlantic Flight 702 has disappeared mid-flight between London and New York with 256 passengers on board. Kaitlin Le (Kelly Marie Tran), a college student whose twin brother vanished with the flight, is determined to uncover the truth. Passenger List is a mystery thriller podcast from PRX's Radiotopia.

SAYER (Adam Bash)
SAYER is a narrative fiction podcast set on Earth's man-made second moon, Typhon. The eponymous SAYER is a highly advanced, self-aware AI created to help acclimate new residents to their new lives, and their new employment with Ærolith Dynamics. New episodes release every other week.

Darkest Night (The Paragon Collective)
Darkest Night is a binaural audio drama that places you, the listener, at the center of a recovered memory that sounds as though it's happening around you in real time. Each chapter delves into the last memories of the recently deceased, slowly revealing a horrifying master plan. Who is weaving this master conspiracy, and what is their ultimate goal? Subscribe now to find out, and wear headphones for the best, most terrifying results. Darkest Night is narrated by Lee Pace (The Hobbit Films, ...

QCODE & Endeavor Content
Academy Award® winner Rami Malek stars in this apocalyptic thriller as a small-town radio DJ fighting to protect his family and community after the power grid goes down nationwide, upending modern civilization. Season 2 picks up after Simon's family escapes. Upon crossing paths with an old family friend, Wren (played by Aja Naomi King) recounts her experience getting out of Boston... but can she be trusted? What truths remain to be uncovered about the origins of this blackout? Produced by Q ...

Radio Drama Revival
Radio Drama Revival is dedicated to showcasing the diversity and vitality of modern audio fiction. Now in our 15th year!

Drama On One
RTÉ Radio Drama's audio theatre department has for decades proudly brought audiences the very best dramatic writing and performances for radio from Ireland. Listen every Sunday night at 8pm at RTÉ Radio 1, visit rte.ie/dramaonone for more.

The Once And Future Nerd (Glass & Madera)
When three high school students from modern-day Pennsylvania find themselves trapped in a world of wizards, elves, and feudal intrigue, they must learn to survive in their new surroundings, and undertake an epic quest to save the world from the encroaching forces of chaos. WARNING: This Audio Drama contains adult language, graphic descriptions of violence, and explicit discussions of sexuality. This is, of course, in the interest of historical realism. We hope you enjoy our story about wizar ...

Drama of the Week
Every Friday we bring you a new drama from BBC Radio 4 or Radio 3. Exercise your imagination with some of the best writers and actors on radio. Storytelling at its very best.

King Falls AM
King Falls AM centers on a lonely little mountain town's late-night AM talk radio show and its paranormal, peculiar happenings and inhabitants. New shows available the 1st and 15th of every month! Be sure to start from Sammy's first show (May 1st, 2015) to stay up to date with all your King Falls favorites.

The Orphans (The Light and Tragic Company)
The Orphans is a cinematic sci-fi audio drama about survival in a harsh universe: castaways on a hostile world, A.I.s with unprecedented emotions, strangers who share faces, love and loss in a far-flung future. Each season explores a new vantage point in an ever-expanding and inter-connected galaxy!

Within the Wires
Stories told through found audio from an alternate universe. Season four, "The Cradle," is a story about a mother and daughter as they attempt to lead a family-centric commune surviving on the fringes of society. Narrated by Mona Grenne. Written by Jeffrey Cranor and Janina Matthewson. Original music by Mary Epworth.

The Archers Omnibus
The week's events in Ambridge.

Victoria's Lift (9th Story Studios)
Part Twilight Zone but wholly unique, Victoria's Lift is an ongoing audio drama featuring a mysterious girl who guides visitors to their transformations. A dark place whose original luster is now lost to time, the unlikely, old Victorian building sits overlooked by most on the edge of Pittsburgh. Originally built as a luxury residence for some of the city's most well to do residents, it now serves a different purpose. Within its dilapidated walls sits Victoria's Lift. Step inside and ride it ...

The Bright Sessions (Atypical Artists)
SERIES COMPLETE. Start with episode 101. Dr. Bright provides therapy for the strange and unusual; their sessions have been recorded for research purposes. Visit www.thebrightsessions.com for more information and additional content. Created by Lauren Shippen.
9,020
\section{Introduction} \label{intro} Photonic channels are the most common in standard quantum networking \cite{Kimble2008,Hammerer2010}. However, other mechanisms can be envisaged to perform the quantum information processing tasks, for example, phononic channels. On one hand, cavity quantum electrodynamics systems (cQED) have offered great potentialities in quantum computing \cite{Nielsen2000,Monroe2002}, especially, in semiconductor nanoestructures \cite{Lodahl2015,Laussy2007a} and superconducting circuits platforms \cite{Wallraff2004j,Devoret2013,You2005,You2011}. On the other hand, in the cavity optomechanics frame (COM) \cite{Aspelmeyer2014}, mechanical resonators coupled to cavities and artificial atoms allow controlling and enhancing quantum properties at the same time that introduce mechanics in the quantum realm \cite{Schwab2005}. Hybrid optomechanical systems \cite{Xiang2013,Pirkkalainen2013,Pirkkalainen2015}, involving both cQED and COM, would provide astonishing opportunities for quantum networking \cite{Dong2015}. Entanglement as the main resource for quantum computation \cite{Horodecki2009} is aimed to link a whole quantum network composed of atoms trapped in optical cavities (nodes) linked by photons propagating from one to others (channels). Particularly, entangling two quantum nodes of a network in a reversible way is a required condition to distribute entanglement across the network and teleport quantum states. In such multipartite systems it is worth to study which parts or subsystems are most entangled than others and how to improve that entanglement \cite{Liao2018,Yang2017}. Now, phononic modes of mechanical resonators could be considered to connect cQED nodes as an alternative to the standard photonic channels. Most of the works so far consider that phononic modes modulate the energies of quantum emitters and cavities \cite{Restrepo2017}. 
Beyond the dispersive regime, other coupling mechanisms have been explored such as linear coupling \cite{Ramirez-Munoz2018} or by mechanical variation of the Rabi coupling rate \cite{Cotrufo2017,Hammerer2009}. In this work, we study a double quantum emitter-cavity system. Both subsystems are coupled linearly by a single mechanical mode of a mechanical resonator. This phononic mode couples either the quantum emitters or the cavities. The rest of the paper is organized as follows, in Sect. \ref{sec2} we set the theoretical model for the mechanically coupled cavity QED subsystems. Then, in Sect. \ref{sec3} we study the Hamiltonian by numerical diagonalization and derive an effective Hamiltonian in the dispersive regime which gives accounts of the main effective couplings between the different parts of the system. Besides, we quantify the bipartite entanglement of such dressed states. Finally, in Sect. \ref{conclusions}, we discuss and conclude. \section{Theoretical framework} \label{sec2} The system considered here is schematically depicted in Fig. \ref{system} and consist of two distant quantum emitter-cavity systems each one interacting via dipole interaction. The quantum emitter is considered as a two-level system (TLS). A single-mode of a mechanical resonator interacts with each quantum emitter but not with the cavities. 
We assume strong coupling between the quantum emitters and cavities, such that each subsystem is modeled with the Jaynes-Cummings Hamiltonian: \begin{equation} \hat{H}_{1}=\omega_{c1}\hat{a}_{1}^{\dagger}\hat{a}_{1}+\omega_{a1}\hat{\sigma}_{1}^{\dagger}\hat{\sigma}_{1}+g_{1}\left(\hat{a}_{1}^{\dagger}\hat{\sigma}_{1}+\hat{a}_{1}\hat{\sigma}_{1}^{\dagger}\right) \end{equation} and \begin{equation} \hat{H}_{2}=\omega_{c2}\hat{a}_{2}^{\dagger}\hat{a}_{2}+\omega_{a2}\hat{\sigma}_{2}^{\dagger}\hat{\sigma}_{2}+g_{2}\left(\hat{a}_{2}^{\dagger}\hat{\sigma}_{2}+\hat{a}_{2}\hat{\sigma}_{2}^{\dagger}\right) \end{equation} where $\omega_{c1}$ and $\omega_{c2}$ are the cavity energies, $\omega_{a1}$ and $\omega_{a2}$ are the atom energies, and $g_{1}$ and $g_{2}$ are the light-matter interaction strengths in each subsystem. \begin{figure}[H] \centering \resizebox{0.45\textwidth}{!}{% \includegraphics{Figures/double_tripartite_system.eps} } \caption{Sketch of the double cavity QED system studied. The quantum emitters couple to the same mechanical mode.} \label{system} \end{figure} As for the mechanical mode, we consider linear coupling with each quantum emitter, as proposed in previous works \cite{Cotrufo2017,Hammerer2009}: \begin{eqnarray} \hat{H}_{m}=\omega_{m}\hat{b}^{\dagger}\hat{b}+g_{m1}\left(\hat{b}^{\dagger}\hat{\sigma}_{1}+\hat{b}\hat{\sigma}_{1}^{\dagger} \right)+g_{m2}\left(\hat{b}^{\dagger}\hat{\sigma}_{2}+\hat{b}\hat{\sigma}_{2}^{\dagger} \right) \end{eqnarray} where $\omega_{m}$ is the energy of the phonon mode of the mechanical resonator, and $g_{m1}$ and $g_{m2}$ are the coupling rates between the mechanical mode and each quantum emitter.
The total Hamiltonian is then: \begin{equation} \hat{H}=\hat{H}_{1}+\hat{H}_{2}+\hat{H}_{m} \label{total_hamiltonian} \end{equation} Since the Hamiltonian commutes with the total number operator ($\hat{N}=\hat{a}_{1}^{\dagger}\hat{a}_{1}+\hat{a}_{2}^{\dagger}\hat{a}_{2}+\hat{\sigma}_{1}^{\dagger}\hat{\sigma}_{1}+\hat{\sigma}_{2}^{\dagger}\hat{\sigma}_{2}+\hat{b}^{\dagger}\hat{b}$), it can be diagonalized within each excitation manifold, composed of all states $\ket{\alpha,n,\beta,m,\ell}$ with $\alpha+n+\beta+m+\ell=\textnormal{constant}$. Throughout this work, the notation for the states is $\ket{\textnormal{Atom1,Cav1,Atom2,Cav2,phonon}}$.\\ With regard to the physical parameters, we work with ratios between them rather than with absolute values, so that the effects found apply to the great variety of systems that can satisfy the conditions studied throughout this work. Besides, we consider $g_{m1}=g_{m2}=g_{m}$ and $g_{1}=g_{2}$. In particular, we explore situations where the atomic and photonic frequencies are not much larger than the mechanical interaction rates but are still out of resonance with the mechanical resonator, such that a large-detuning approximation can be applied. These requirements are not far from experiments; for example, in the context of circuit QED \cite{Pirkkalainen2013,Pirkkalainen2015}, parameter ratios of $\omega_{c(a)}/g_{m}\approx 190$, $\omega_{c}/\omega_{m}\approx 67$ and $\omega_{m}/g_{m}\approx 3$ have been achieved. \section{Dressed states and entanglement} \label{sec3} \subsection{Inter-cavity normal mode splittings} One interesting feature to extract from the diagonalization of the Hamiltonian \ref{total_hamiltonian} is a region of anticrossing between both cavities, which evidences a photonic molecule regime. Comparing the dashed and colored lines in Fig. \ref{eigenenergies}, we observe a blue shift of the energy of each quantum emitter caused by the mechanical resonator.
This is an immediate effect of the dispersive atom-phonon coupling, arising from the significant detuning between each quantum emitter and the mechanical mode. As a consequence, the light-matter anticrossing in each emitter-cavity subsystem is changed and hence the polariton energies are also shifted, without an appreciable modification of the energy splitting. \begin{figure*} \centering \resizebox{0.45\textwidth}{!}{% \subfigure{\includegraphics{Figures/eigenvalues_new}} }\hspace{0.5cm} \resizebox{0.45\textwidth}{!}{% \subfigure{\includegraphics{Figures/hopfield}} } \caption{Dressed states. Left panel: Eigenenergies for the first excitation manifold as a function of the inter-cavity energy detuning, $\Delta$. Right panel: Hopfield coefficients for each eigenenergy. There is a fifth trivial eigenvector: $\left|\lambda_{5} \right>=\left|G,0,G,0,1\right>$ with eigenenergy $\omega_{m}$ (not shown here). Parameters: $\omega_{c1}=\omega_{0}-\Delta/2$, $\omega_{c2}=\omega_{0}+\Delta/2$, $g_{m1}=g_{m2}=g_{m}$, $\omega_{0}=20g_{m}$, $\omega_{a1}=20.1g_{m}$, $\omega_{a2}=19.9g_{m}$, $\omega_{m}=3g_{m}$ and $g_{1}=g_{2}=g_{m}/20$.} \label{eigenenergies} \end{figure*} Two new and interesting types of anticrossing regions arise in the dispersive diagram. One of these is a high effective coupling between a quantum emitter and a cavity from different subsystems, Atom1-Cavity2 and Atom2-Cavity1 in Fig. \ref{eigenenergies}. This amounts to a light-matter strong-coupling regime between spatially separated atoms and cavities, and hence to high light-matter entanglement between distant subsystems. As shown in Fig. \ref{eigenenergies}, the eigenvectors at specific detuning values are mainly $\left|X,0,G,0,0\right>$ $\pm$ $\left|G,0,G,1,0\right>$ and $\left|G,0,X,0,0\right>\pm\left|G,1,G,0,0\right>$.
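These dressed states can be checked directly by exact diagonalization, since the first excitation manifold is spanned by only five basis states. The following is an illustrative numerical sketch (not part of the original calculation), using the parameter values quoted in the caption of Fig. \ref{eigenenergies}, with all energies in units of $g_{m}$:

```python
import numpy as np

# Parameters in units of g_m (values from the caption of Fig. 2)
g_m = 1.0                 # emitter-phonon coupling, g_m1 = g_m2
g = 0.05                  # light-matter coupling, g_1 = g_2 = g_m / 20
w0 = 20.0                 # mean cavity frequency
wa1, wa2 = 20.1, 19.9     # emitter frequencies
wm = 3.0                  # mechanical frequency

def H_one_excitation(delta):
    """Hamiltonian restricted to the first excitation manifold, in the
    ordered basis {|X,0,G,0,0>, |G,1,G,0,0>, |G,0,X,0,0>, |G,0,G,1,0>,
    |G,0,G,0,1>}."""
    wc1, wc2 = w0 - delta / 2.0, w0 + delta / 2.0
    return np.array([
        [wa1, g,   0.0, 0.0, g_m],
        [g,   wc1, 0.0, 0.0, 0.0],
        [0.0, 0.0, wa2, g,   g_m],
        [0.0, 0.0, g,   wc2, 0.0],
        [g_m, 0.0, g_m, 0.0, wm ],
    ])

evals = np.sort(np.linalg.eigvalsh(H_one_excitation(0.0)))
# One phonon-like branch slightly below w_m (repelled by both detuned
# emitters) and four polariton branches clustered around w_0.
print(evals)
```

Scanning `delta` and sorting the eigenvalues at each step reproduces the branches and anticrossings of Fig. \ref{eigenenergies}, including the indirect Atom1-Cavity2 and Atom2-Cavity1 avoided crossings discussed above.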
Furthermore, as is derived below, this effective interaction rate is $\frac{g_{2} g_{m}^{2}}{\Delta_{m}\Delta_{a}^{21}}$ with $\Delta_{m}=\omega_{a2}-\omega_{m}$, $\Delta_{a}^{21}=\omega_{a2}-\omega_{a1}$ and $g_{m1}=g_{m2}=g_{m}$. Another attractive feature, found at resonance in the energy spectrum and Hopfield coefficients ($\ket{\lambda_{2}}$ and $\ket{\lambda_{3}}$), is the normal mode splitting between both cavities, which leads to a highly all-photonic dressed state, i.e., a photonic molecule regime: $\ket{\psi}\approx \frac{1}{\sqrt{2}}(\ket{0,1,0,0,0}\pm\ket{0,0,0,1,0})$. The effective interaction rate in this case is $\frac{g^{2}g_{m}^{2}}{\Delta_{m}\Delta_{a}^{21}\Delta_{ac}}$ with $\Delta_{ac}=\omega_{a}-\omega_{c}$ and $g_{1}=g_{2}=g$. Higher up the ladder of states, the dressing is non-trivial and the eigenstates are combinations of almost all states in each excitation manifold. For this reason, the physical results in this paper are valid in a low excitation regime.\\ In order to analyze each anticrossing region, the Hamiltonian is split into a bare part and an interaction part, $\hat{H}=\hat{H}_{0}+\hat{H}_{int}$, with \begin{eqnarray} \hat{H}_{0}= \omega_{m}\hat{b}^{\dagger}\hat{b}+ \omega_{c1}\hat{a}_{1}^{\dagger}\hat{a}_{1}+\omega_{a1}\hat{\sigma}_{1}^{\dagger}\hat{\sigma}_{1}\nonumber \\ +\omega_{c2}\hat{a}_{2}^{\dagger}\hat{a}_{2}+\omega_{a2}\hat{\sigma}_{2}^{\dagger}\hat{\sigma}_{2} \end{eqnarray} and \begin{eqnarray} \hat{H}_{int}=g_{1}\left(\hat{a}_{1}^{\dagger}\hat{\sigma}_{1}+\hat{a}_{1}\hat{\sigma}_{1}^{\dagger}\right)+g_{2}\left(\hat{a}_{2}^{\dagger}\hat{\sigma}_{2}+\hat{a}_{2}\hat{\sigma}_{2}^{\dagger}\right) \nonumber \\ +g_{m1}\left(\hat{b}^{\dagger}\hat{\sigma}_{1}+\hat{b}\hat{\sigma}_{1}^{\dagger} \right)+g_{m2}\left(\hat{b}^{\dagger}\hat{\sigma}_{2}+\hat{b}\hat{\sigma}_{2}^{\dagger} \right) \end{eqnarray} Now, we transform the Hamiltonian into the interaction picture,
$\hat{H}_{IP}(t)=e^{i\hat{H}_{0}t}\hat{H}_{int}e^{-i\hat{H}_{0}t}$: \begin{eqnarray} \hat{H}_{IP}=g_{1}\left(a_{1}^{\dagger}\sigma_{1}e^{i(\omega_{c1}-\omega_{a1})t}+a_{1}\sigma_{1}^{\dagger}e^{-i(\omega_{c1}-\omega_{a1})t} \right) \nonumber \\ +g_{2}\left(a_{2}^{\dagger}\sigma_{2}e^{i(\omega_{c2}-\omega_{a2})t}+a_{2}\sigma_{2}^{\dagger}e^{-i(\omega_{c2}-\omega_{a2})t} \right) \nonumber \\ +g_{m1}\left(\sigma_{1}^{\dagger}b e^{i(\omega_{a1}-\omega_{m})t}+\sigma_{1}b^{\dagger}e^{-i(\omega_{a1}-\omega_{m})t} \right) \nonumber \\ +g_{m2}\left(\sigma_{2}^{\dagger}b e^{i(\omega_{a2}-\omega_{m})t}+\sigma_{2}b^{\dagger}e^{-i(\omega_{a2}-\omega_{m})t} \right) \end{eqnarray} With this in mind, a formal integration of the Schr\"odinger equation is carried out, $\left|\Psi_{IP}(t)\right>=\mathcal{T}\left[e^{-i \int_{0}^{t} H_{IP}(t')dt'} \right]\left|\Psi_{IP}(0)\right>$, in order to apply the approximation of large detuning between the atoms and the mechanical resonator, $\omega_{a2}\gg \omega_{m}$ and $\omega_{a1}\gg \omega_{m}$.
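As a quick numerical sanity check of this large-detuning regime, one can compare exact diagonalization of a reduced model, two degenerate emitters coupled only to the mechanical mode, with the phonon-mediated emitter-emitter rate $g_{m1}g_{m2}/\Delta_{m}$ that emerges at second order in the expansion below. A sketch with illustrative values (not the figure parameters):

```python
import numpy as np

# Two degenerate emitters dispersively coupled to one mechanical mode,
# single-excitation basis {|X,G,0>, |G,X,0>, |G,G,1>}. Units of g_m.
wa, wm, gm = 20.0, 3.0, 1.0   # emitter frequency, phonon frequency, coupling
Dm = wa - wm                  # detuning entering the dispersive rate

H = np.array([
    [wa,  0.0, gm],
    [0.0, wa,  gm],
    [gm,  gm,  wm],
])
evals = np.sort(np.linalg.eigvalsh(H))

# The antisymmetric emitter combination decouples from the phonon and
# stays at wa; the symmetric one is pushed up, so the two emitter-like
# branches split by approximately 2*gm**2/Dm.
splitting_exact = evals[2] - evals[1]
splitting_dispersive = 2 * gm**2 / Dm
print(splitting_exact, splitting_dispersive)
```

For these values the exact splitting agrees with the dispersive estimate to within about one percent, consistent with the hierarchy $\omega_{a}-\omega_{m}\gg g_{m}$ assumed above.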
The propagator can be expressed as a perturbation expansion: \begin{eqnarray} \mathcal{T}\left[e^{-i \int_{0}^{t} H_{IP}(t')dt'} \right]&=&\hat{1}-i\int_{0}^{t}\hat{H}_{IP}dt' + \mathcal{O}^{2}(H_{IP}) +\cdots \nonumber \\ &\approx & \hat{1}-i \hat{H}_{eff}t \end{eqnarray} Only the first four terms of the series contribute to the effective Hamiltonian:\\ \textbf{First order} \begin{equation} \hat{H}_{eff}^{(1)}=g_{1}\left(a_{1}^{\dagger}\sigma_{1}+a_{1}\sigma_{1}^{\dagger} \right)+g_{2}\left(a_{2}^{\dagger}\sigma_{2}+a_{2}\sigma_{2}^{\dagger} \right) \end{equation} \textbf{Second order} Assuming $\omega_{a2}\approx\omega_{a1} $ \begin{eqnarray} \hat{H}_{eff}^{(2)}&=&\frac{g_{m1}^{2}}{\Delta_{m}}\left( \sigma_{1}^{\dagger}\sigma_{1}+b^{\dagger}b\sigma_{z1}\right)+\frac{g_{m2}^{2}}{\Delta_{m}}\left( \sigma_{2}^{\dagger}\sigma_{2}+b^{\dagger}b\sigma_{z2}\right) \nonumber \\ &+&\frac{g_{m1}g_{m2}}{\Delta_{m}}\left(\sigma_{1}^{\dagger}\sigma_{2} + \sigma_{1}\sigma_{2}^{\dagger} \right) \end{eqnarray} \textbf{Third order} Assuming $\omega_{a2}\approx\omega_{c1}$ and $\omega_{a1}\approx\omega_{c2}$, but $\omega_{a1}\neq\omega_{a2}$ \begin{eqnarray} \hat{H}_{eff}^{(3)}=\frac{g_{1}g_{m1}g_{m2}}{\Delta_{m}\Delta_{a}^{21}}\sigma_{1}\sigma_{1}^{\dagger}\left(a_{1}^{\dagger}\sigma_{2}+a_{1}\sigma_{2}^{\dagger}\right) \nonumber \\ +\frac{g_{2}g_{m1}g_{m2}}{\Delta_{m}\Delta_{a}^{12}}\sigma_{2}\sigma_{2}^{\dagger}\left(a_{2}^{\dagger}\sigma_{1}+a_{2}\sigma_{1}^{\dagger}\right) \end{eqnarray} \textbf{Fourth order} Assuming $\omega_{c1}\approx\omega_{c2}$ \begin{eqnarray} \hat{H}_{eff}^{(4)}=\frac{g_{1}g_{2}g_{m1}g_{m2}}{\Delta_{m}\Delta_{a}^{21}\Delta_{ac}^{21}}\sigma_{1}^{\dagger}\sigma_{1}\sigma_{2}^{\dagger}\sigma_{2}(a_{1}a_{2}^{\dagger}+a_{1}^{\dagger}a_{2}) \nonumber \\ +\frac{g_{1}g_{2}g_{m1}g_{m2}}{\Delta_{m}\Delta_{a}^{12}\Delta_{ac}^{12}}\sigma_{1}^{\dagger}\sigma_{1}\sigma_{2}^{\dagger}\sigma_{2}(a_{1}a_{2}^{\dagger}+a_{1}^{\dagger}a_{2}) \end{eqnarray} with 
$\Delta_{a}^{ij}=\omega_{ai}-\omega_{aj}$ and $\Delta_{ac}^{ij}=\omega_{ai}-\omega_{cj}$. \begin{figure}[H] \centering \resizebox{0.45\textwidth}{!}{% \includegraphics{Figures/entanglement} } \caption{Bipartite entanglement. Negativity between both cavities (black line), between an atom and a cavity from the same subsystem (blue and green lines) and between an atom and a cavity from different subsystems (purple and red lines). Parameters: $g_{m1}=g_{m2}=g_{m}$, $\omega_{0}=20g_{m}$, $\omega_{a1}=20.1g_{m}$, $\omega_{a2}=19.9g_{m}$, $\omega_{m}=3g_{m}$ and $g_{1}=g_{2}=g_{m}/20$.} \label{entanglement} \end{figure} \subsection{Entanglement properties} The next natural step is to analyze the entanglement properties of each anticrossing. The negativity between two parts of the system is computed by tracing over the other degrees of freedom, i.e. $\mathcal{N}=\sum_{\lambda<0}|\lambda|$, where the sum runs over the negative eigenvalues $\lambda$ of the partial transpose of the reduced density matrix $\rho_{B}=Tr_{A}(\rho_{AB})$. Here $B$ represents the two parts of interest and $A$ the remaining parts, and $\rho_{AB}$ is the pure density matrix built from the eigenstates, $\ket{\lambda_{i}}$, of the Hamiltonian \ref{total_hamiltonian}. The eigenstate used is the one involved in the anticrossing of interest, e.g., the entanglement between atom 1 and cavity 2 is computed with the eigenstate $\ket{\lambda_{4}}$ shown in Fig. \ref{eigenenergies}. As shown in Fig. \ref{entanglement}, the largest entanglement between photons from both cavities is found close to resonance, $\omega_{c1}\approx\omega_{c2}$, as expected from Fig. \ref{eigenenergies}. Furthermore, direct light-matter entanglement is maximal at $\omega_{a1}\approx\omega_{c1}$ and $\omega_{a2}\approx\omega_{c2}$, and indirect light-matter entanglement increases around $\omega_{a2}\approx\omega_{c1}$ and $\omega_{a1}\approx\omega_{c2}$, also expected from the analysis made in the previous section.
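The negativity used here can be evaluated in a few lines from the partial transpose. The sketch below (illustrative, not the full five-partite calculation) computes it for a two-qubit reduced state; a maximally entangled pair gives $\mathcal{N}=1/2$:

```python
import numpy as np

def negativity(rho, dA, dB):
    """Negativity of a bipartite state rho on C^dA (x) C^dB: sum of the
    absolute values of the negative eigenvalues of the partial transpose."""
    r = rho.reshape(dA, dB, dA, dB)          # indices (a, b, a', b')
    # Partial transpose over subsystem B: swap b and b'
    rho_pt = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    evals = np.linalg.eigvalsh(rho_pt)
    return float(np.abs(evals[evals < 0]).sum())

# Example: the maximally entangled combination (|01> + |10>)/sqrt(2),
# analogous in form to the photonic-molecule eigenstate of the two cavities.
psi = np.zeros(4)
psi[1] = psi[2] = 1.0 / np.sqrt(2.0)
rho = np.outer(psi, psi)
print(negativity(rho, 2, 2))   # 0.5 for a maximally entangled pair
```

For the curves of Fig. \ref{entanglement}, the same function is applied to the reduced two-party density matrix obtained by partial trace of the relevant eigenstate.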
The above means that the best condition to entangle two parts of the system is to set those parts close to resonance. However, due to additional contributions in the eigenstates (Fig. \ref{eigenenergies}), there is a coexistence of the different types of entanglement studied, e.g., at the maximum of entanglement between atom 1 and cavity 2, there is also atom 2--cavity 2 and atom 2--cavity 1 entanglement. Additionally, the phonon part of the system remains unentangled from the rest of the system as long as the mechanical frequencies are much smaller than the photon ones. \begin{figure}[H] \centering \resizebox{0.47\textwidth}{!}{% \includegraphics{Figures/entanglement2.eps} } \caption{Photon-photon entanglement. Maximum of the negativity of the reduced cavity-cavity system as a function of $\omega_{0}/g_{m}$ and $g_{m}/g_{JC}$. Parameters: $g_{1}=g_{2}=g_{JC}$, $g_{m1}=g_{m2}=g_{m}$, $\omega_{a1}=\omega_{0}+0.1$, $\omega_{a2}=\omega_{0}-0.1$, $\omega_{c1}=\omega_{0}-\Delta/2$, $\omega_{c2}=\omega_{0}+\Delta/2$ and $\omega_{m}=3g_{m}$.} \label{entanglement2} \end{figure} Now, in order to study the dependence of the entanglement on the physical parameters, we focus on the intercavity photon entanglement. There are two critical parameters in the system: the ratio of the atom/cavity frequencies to the mechanical interaction rate, $\omega_{0}/g_{m}$, and the ratio of the light-matter coupling strength to the mechanical interaction rate, $g_{JC}/g_{m}$. Fig. \ref{entanglement2} exhibits fringes of maximum entanglement for mechanical interaction rates larger than the light-matter ones, i.e., when the coupling strength between both subsystems exceeds the coupling strength within each subsystem. These fringes show that entanglement does not change for large ratios $\omega_{0}/g_{m}$, which is the most typical condition in current experimental setups, especially in circuit QED systems.
For small ratios between photon frequencies and mechanical interaction, $\omega_{0}/g_{m}<30$, the intercavity photon entanglement is less sensitive to the change of $g_{m}/g_{JC}$. Nevertheless, in order to keep the rotating wave approximation for the light-matter interaction, it is necessary to fulfill the condition $\omega_{0}\gg g_{JC}$, i.e., $g_{m}>g_{JC}$ in our model. In other words, the atom/cavity frequencies must exceed the mechanical interaction rate which, in turn, should exceed the light-matter coupling strength. Finally, all calculations were performed under the condition $\omega_{m}=3g_{m}$, which is attainable in current experiments. \section{Discussion and Conclusions} \label{conclusions} In this work, we have considered a single mode of a mechanical resonator mediating the interaction between two subsystems, each one composed of a quantum emitter coupled to a cavity. As we have seen, it is possible to reach a regime of parameters with normal mode splitting between the cavities, which is a first signature of photonic molecules. Besides, vacuum Rabi splitting was found between a quantum emitter and a cavity from different subsystems, i.e., a non-local light-matter strong coupling regime. As a direct effect of the mechanical resonator, a blue shift of the atomic energies was observed, and hence a shift in the polariton energies. Additionally, we studied the entanglement properties of those dressed states and the conditions on the physical parameters for which the entanglement is maximized. The multipartite system evidences that some parts of the system are more entangled than others depending on the energy detuning or, more specifically, on the dressed state involved. Good candidates to implement our proposal are circuit quantum electrodynamics systems, where high mechanical interaction rates could be reached.
Finally, if the mechanical resonator couples the cavities instead of the quantum emitters, the same results are found since, in the first excitation manifold, both situations are equivalent. However, beyond the low excitation regime, where higher excitation manifolds are involved, the dressing of the states and the entanglement change substantially due to the statistics of the particles involved in the mechanical interaction: two-level systems (artificial atoms) or bosons (cavities). \section*{Acknowledgements} The authors acknowledge partial financial support from COLCIENCIAS under the project ``Emisi\'on en sistemas de Qubits Superconductores acoplados a la radiaci\'on. C\'o-\break digo 110171249692, CT 293-2016, HERMES 31361" and the project ``Optomec\'anica y electrodin\'amica con puntos cu\'anticos en microcavidades. C\'odigo 201010027618, HERMES 39177 ". J.E.R. acknowledges financial support from the ``Beca de Doctorados Nacionales de COLCIENCIAS 727" and J.P.R.C. is grateful to the ``Beca de Doctorados Nacionales de COLCIENCIAS 785". \section*{Authors contributions} All the authors were involved in the preparation of the manuscript. All the authors have read and approved the final manuscript. \bibliographystyle{unsrt}
Q: How to initialise a model and call a custom function within the class I have a user service where I get an object from my mongo db. The user object contains a dob which I want to convert to an age in Angular. I created a custom user class to do this, declaring a getAge function. However, when calling this user.getAge() function it says "this.user.getAge() is not a function" user.model.ts export class User { user: any; constructor() { } getAge() { return "Age" } } profile.component.ts import { UserService } from '../../services/user.service'; import { AuthService } from '../../services/auth.service'; import { Component, OnInit } from '@angular/core'; import { Router, ActivatedRoute, ParamMap } from '@angular/router'; import 'rxjs/add/operator/switchMap'; import {User} from '../../models/user.model' @Component({ selector: 'app-profile', templateUrl: './profile.component.html', styleUrls: ['./profile.component.scss'] }) export class ProfileComponent implements OnInit { user: User; constructor(private authService: AuthService, private route: ActivatedRoute, private router: Router, private userService: UserService) { } ngOnInit() { this.route.params.switchMap((params) => { let user_id = params['id']; return this.userService.get(user_id); }).subscribe((res) => { this.user = res; let age = this.user.getAge(); console.log(age); }); } } user.service.ts import { Injectable } from '@angular/core'; import { ApiService } from './api.service'; @Injectable() export class UserService { path = 'users/'; constructor( private apiService: ApiService ) { } all() { return this.apiService.get(this.path); } create(user) { return this.apiService.post(this.path, user); } get(user_id) { let endpoint = this.path + user_id; return this.apiService.get(endpoint); } } user.model.js let UserSchema = new Schema({ email: { address: { type: String, lowercase: true, //unique: true, }, token: String, verified: { type: Boolean, default: false, }, }, password: { type: String, }, socketId: String,
isOnline: Boolean, phone: { countryCode: { type: String, }, number: { type: String, }, code: String, verified: { type: Boolean, default: false }, }, jwt: String, profile: { username: String, firstname: String, lastname: String, dob: String, gender: String, level: Number, location: { longitude: String, latitude: String }, image: String, introduction: String, following: [], followers: [], }, }, { timestamps: {createdAt: 'created_at', updatedAt: 'updated_at'} }); A: UserService.get() returns any so typescript allows you to assign it to ProfileComponent.user (type User). new User() is never called, so a User object never exists. UserService.get() really returns Observable<Object> with all of the properties in your model. What you need is: public get(user_id): Observable<User> { let endpoint = this.path + user_id; return this.apiService.get(endpoint).map(res => { let userObj = new User(); userObj.user = res; return userObj; }); } A: You can assign the result to a new user object with object assignment, like below this.route.params.switchMap((params) => { let user_id = params['id']; return this.userService.get(user_id); }).subscribe((res) => { let userRef: User = new User(); Object.assign(userRef, res); this.user = userRef; let age = this.user.getAge(); console.log(age); }); Or you can do this from the user.model.ts export class User { user: any; constructor() {} getAge(){ return "Age"} public static fromObject(obj):User { let userRef: User = new User(); Object.assign(userRef, obj); return userRef; } } and in your component or service you can do this by importing the User model this.route.params.switchMap((params) => { let user_id = params['id']; return this.userService.get(user_id); }).subscribe((res) => { this.user = User.fromObject(res); let age = this.user.getAge(); console.log(age); }); A: get(user_id) { let endpoint = this.path + user_id; return this.apiService.get(endpoint).map(res => Object.assign(new User(), res) /* set your user properties here */); }
## New characterizations of Hoffman constants for systems of linear constraints

We give a characterization of the Hoffman constant of a system of linear constraints in $\R^n$ relative to a reference polyhedron $R\subseteq\R^n$. The reference polyhedron $R$ represents constraints that are easy to satisfy such as box constraints. In the special case $R = \R^n$, we obtain a novel characterization of the classical Hoffman constant. …

## An algorithm to compute the Hoffman constant of a system of linear constraints

We propose a combinatorial algorithm to compute the Hoffman constant of a system of linear equations and inequalities. The algorithm is based on a characterization of the Hoffman constant as the largest of a finite canonical collection of easy-to-compute Hoffman constants. Our algorithm and characterization extend to the more general context where some of the …
\section{Introduction} The grand vision of the Internet of things (IoT) is quickly turning into reality by bringing everything to the Internet \cite{siow2018analytics,kamalinejad2015wireless}. The latest devices, ranging from smartphones to implantable sensors and wearables, claim to be ``IoT capable''. Although significant improvements have been seen from the design perspective of wireless devices, the objective of connecting everything to the Internet is still far from being realized \cite{8581856}. This is because several important challenges arise in ensuring ubiquitous connectivity of devices. {As indicated in \cite{Munir2019}, one of the first challenges is the limited life-cycle of miniature wireless devices. The energy constrained nature of devices becomes an obstacle as a massive amount of data is transferred across an IoT network and the devices are required to operate in an untethered manner.} Then, there is a requirement of communication reliability, which is even more difficult to maintain in large-scale wireless systems \cite{8340813}. The increased reliability most often comes at a cost of increased energy consumption, which cannot be regulated by the small energy reservoirs of miniature IoT devices. Above all, these devices would need to support services like ultra-reliable low-latency communications (URLLC), enhanced mobile broadband (eMBB), and massive machine type communications (mMTC) for beyond-5G networks. As a result, it has become evident that an ultra low-powered communication paradigm is essential for enabling short-range communication among devices, without compromising the reliability of communications \cite{kamalinejad2015wireless,liu2018optimal}. Of late, backscatter communication has gathered the attention of researchers as a key enabling technology for connecting IoT devices. Backscatter communication allows radio devices to transmit their data by reflecting and modulating an incident radio frequency (RF) signal.
A backscatter device adjusts its antenna impedance mismatch in order to change the reflection coefficient. Using the received RF energy, backscatter devices harvest a fraction of energy for circuit operations \cite{boyer2014invited}. It is worth highlighting that backscatter devices do not require oscillators for generating carrier signals, as they get the carrier waves from a dedicated RF source. In fact, the ultra-low power consumption of a backscatter transmitter (i.e., below 1 mW \cite{kellogg2014wi}) shows promise for a very long life-cycle (e.g., 10 years) with an on-chip battery. Since the harvested energy from an RF source typically ranges from 1 mW to tens of mW, the low power consumption of backscatter devices is a perfect match for RF energy harvesting \cite{lu2015wireless}. Besides the obvious advantages of conventional backscatter communications, there are a few limitations of these devices. The backscatter devices require a dedicated RF source for transmission of carrier waves. Even though this model has been adopted in radio frequency identification (RFID) tags used in libraries and grocery stores, the power budget of these communication models may not be suitable for energy constrained IoT devices \cite{liu2018optimal,8417660}. {Additionally, the centralized nature of these communication models is a hurdle in paving the way for large-scale deployment of IoT networks. The distributed architecture of IoT networks favors the deployment of decentralized RF sources that can be accessed anytime. Besides this, energy harvesting through wireless power transmission can extend the life cycle of IoT networks with little change in hardware implementation \cite{zhao2017exploiting,chang2019distributed}.}
More specifically, the ambient RF signals are used for backscattering and energy harvesting. This flexibility allows the cost-effective deployment of ambient backscatter devices while avoiding dependence on a particular RF source \cite{han2017wirelessly}. However, owing to the novelty of the technology, the study of ambient backscatter communications is still at its nascent stage. A variety of network challenges and data communication issues arise that require further exploration. Furthermore, limited theoretical knowledge of ambient backscatter communication demands new dimensions for performance evaluation of the network. {Motivated by the aforementioned observations, we perform the analysis of backscatter communication under Rayleigh fading. Specifically, our contribution is two-fold: \begin{itemize} \item Derivation of closed-form expression of outage probability for wireless-powered devices operating under Rayleigh fading. \item Derivation of the power-splitting factor that balances the tradeoff between energy harvesting and achievable data rate. \end{itemize}} The remainder of the paper is organized as follows. Section 2 discusses the related work on conventional backscatter and ambient backscatter communications. In Section 3, a detailed description of the system model is provided. Section 4 provides the performance analysis while Section 5 discusses the numerical results. Finally, Section 6 provides key findings and conclusions. \section{Related Work} Backscatter communication has been considered from different aspects in wireless networks \cite{van2018ambient}. The authors of \cite{lu2018wireless} employed backscatter communication to enable device-to-device communications. Besides this, several detection schemes for backscatter communication systems are proposed in \cite{yang2017cooperative,wang2016ambient,qian2017noncoherent}. 
A detector that does not require the channel state information (CSI) was constructed using a differential encoder in \cite{wang2016ambient}. Specifically, they developed a model and derived optimal detection and minimum bit-error-rate (BER) thresholds. Moreover, the expressions for lower and upper bounds on BER were also derived that were corroborated through simulation results. A joint-energy detection scheme is proposed in \cite{qian2017noncoherent} that requires only channel variances rather than specific CSI. The same authors provided a study of BER computation, optimal and suboptimal detection, and blind parameter acquisition. The non-coherent signal detection outperformed the conventional techniques in terms of detection accuracy and computation complexity. A successive interference cancellation (SIC) based detector and a maximum-likelihood (ML) detector with known CSI are presented in \cite{yang2017cooperative}, to recover signals not only from readers but also from RF sources. In addition to this, the authors derived BER expressions for the ML detector. It was shown that the backscatter signal can significantly enhance the performance of the ML detector as compared to conventional single-input-multiple-output (SIMO) systems. Capacity and outage performance analysis for ambient backscatter communication systems was studied in \cite{darsena2017modeling,kang2017riding,zhang2017outage,zhao2018outage}. The authors of \cite{darsena2017modeling} analyzed the channel capacity over orthogonal frequency division multiplexing (OFDM) signals. The ergodic capacity optimization problem at the reader with SIC was investigated by the authors of \cite{kang2017riding}. Specifically, the authors jointly considered the transmit source power and the reflection coefficient and improved the ergodic capacity. For ambient backscatter communication systems, the BER of an energy detector was derived and the BER-based outage probability was obtained in \cite{zhang2017outage}. 
{In \cite{zhao2018outage}, Zhao \emph{et al.} derived the effective distribution of the signal-to-noise ratio (SNR) and evaluated the SNR-based outage probability over real Gaussian channels.} More recently, the authors in \cite{hoang2017overlay} investigated a cognitive radio network with ambient backscatter communication. In particular, it was considered that a wireless-powered secondary user can either harvest energy or adopt ambient backscattering from a primary user's transmission. A time allocation problem was developed in order to maximize the throughput of the secondary user and to obtain the optimal time ratio between energy harvesting and ambient backscattering. Reference \cite{kim2017hybrid} introduced a hybrid backscatter communication scheme as an alternative access scheme for a wireless-powered transmitter. Specifically, when the ambient RF signals were not sufficient to support wireless-powered communications, the transmitter can choose between bistatic backscattering and ambient backscattering based on a dedicated carrier emitter. A throughput maximization problem was formulated to find the optimal time allocation for the hybrid backscatter communication operation. Both \cite{hoang2017overlay} and \cite{kim2017hybrid} studied deterministic scenarios. \section{Experimental System Model Design} Let us consider an IoT network that consists of $N$ ambient backscatter devices. These ambient backscatter devices are considered to be powered by ambient RF sources. {This consideration is under the assumption that ambient RF sources (such as radio, TV, and WiFi signals) are abundant in the environment.} These backscatter devices use the harvested energy from the ambient RF signals and transmit their data to the gateway as shown in Figure \ref{fig.1}.
\begin{figure*}[htp] \centering \includegraphics[trim={0 0cm 0 0cm},clip,scale=.5]{block1.pdf} \caption{System model.} \label{fig.1} \end{figure*} A typical ambient backscatter device has three major operations, i.e., spectrum sensing, energy harvesting, and data exchange. Inspired by \cite{liu2018optimal}, we consider the circuit model of an ambient backscatter device shown in Figure \ref{fig.2}. Here, the main purpose of the spectrum sensor is to detect suitable ambient RF signals, whereas the energy harvesting circuit enables the backscatter devices to operate in a self-sustainable manner. This self-sustainability is essential for IoT networks as they are expected to operate with minimum human intervention. When the device is in operation mode, spectrum sensing is performed in order to detect RF signals with large power. Afterward, the detected signal is employed for either backscatter communication or energy harvesting. The analog-to-digital converter (ADC) uses the harvested energy and converts it into direct current that is utilized by other modules, including a microcontroller. The microcontroller performs multiple communication operations, including processing the information and matching the impedance of the antenna for better reception of RF signals. {We consider that the amount of energy consumed by the energy harvester itself is negligible \cite{liu2018optimal} and that the harvested energy satisfies the following condition} \begin{align} E_{h}\ge E_{b}+E_{s}+E_{m}. \end{align} In the above expression, $E_{h}$, $E_{b}$, $E_{s}$, and $E_{m}$ denote the harvested energy, the energy consumed for backscatter communication, the energy consumed for spectrum sensing, and the energy consumed by the micro-controller/sensor for data gathering and processing, respectively.
{Some of the key symbols used throughout this paper are provided in Table \ref{tabu}.} \begin{table}[h] \centering \begin{tabular}{|l|l|} \hline \textbf{Symbol} & \textbf{Definition} \\ \hline $E_{h}$ & Harvested energy \\ \hline $E_{b}$ & Energy consumed for backscatter communication \\ \hline $E_{s}$ & Energy consumed for spectrum sensing \\ \hline $E_{m}$ & Energy consumed by micro-controller/ sensor \\ \hline $\alpha $ & Compressive sensing duration \\ \hline $\rho$ & Power-splitting factor \\ \hline $\beta$ & Reflection coefficient of the backscatter devices \\ \hline $\theta$ & Path loss exponent \\ \hline $N_{0}$ & AWGN variance \\ \hline $\eta$ & Energy conversion efficiency \\ \hline $M$ & Number of wideband signals \\ \hline $e$ & Energy consumed for each sample \\ \hline $\varphi$ & Threshold of required data rate \\ \hline $\psi$ & Energy threshold for operation of the backscatter device \\ \hline $f$ & Sampling rate \\ \hline $P_{b}$ & Amount of circuit power consumed during backscattering \\ \hline \end{tabular} \caption{{Common symbols used in the article.}} \label{tabu} \end{table} We now characterize the energies harvested and consumed during one time slot. We consider that compressive sensing is performed in each time slot. Thus, the time slot $T$ is divided into two phases, i.e., the compressive sensing duration (denoted as $\alpha$) and the energy harvesting/backscattering duration (denoted as $(1-\alpha)$). After compressive sensing, the received signal at the device is divided into two streams of power. The first part is used for energy harvesting, while the other part is used for performing the backscattering operation. This separation is performed with a factor $\rho$, where $0<\rho \le 1$. A graphical representation of the interplay of $\rho$ and $\alpha$ is provided in Figure \ref{fig.3}.
Assuming that the $i$-th backscatter device detects an ambient RF signal, the received signal at the device is given as \begin{figure*}[htp] \centering \includegraphics[trim={0 0cm 0 0cm},clip,scale=.5]{block2.pdf} \caption{Circuit design of the ambient backscatter device.} \label{fig.2} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[trim={0 2cm 0 5cm},clip,scale=.5]{block3.pdf} \caption{Time schedule and power splitting.} \label{fig.3} \end{figure*} \begin{align} y_{i,1}=\sqrt{\frac{\beta P}{P_{l,1}}}h_{i,1}s_{1}+n_{i,1}, \end{align} where $y_{i,1}$ is the received signal, $s_{1}$ denotes the normalized signal, $P$ represents the transmit power, and $P_{l,1}=d_1^\theta$ is the path loss experienced by the backscatter device, with $\theta$ the path loss exponent. {Furthermore, $h_{i,1}$ represents the channel gain between the ambient RF source and the backscatter device, which is assumed to be Rayleigh faded, $n_{i,1}$ is the zero mean additive white Gaussian noise (AWGN) with $N_{0}$ variance, and $\beta$ is the reflection coefficient of the backscatter devices.} The harvested energy is then given as \begin{align} E_{h,i}=\frac{\rho \eta (1-\alpha )T\beta \Omega_1 \vert h_{i,1}\vert ^{2}}{P_{l,1}}, \label{eq6} \end{align} where $\Omega _{1}=\frac{P}{N_{0}}$, $\rho$ represents the fraction of power used for energy harvesting, and $\eta$ is the energy conversion efficiency, which is considered to be the same for all the backscatter devices as they employ the same circuitry. The amount of energy consumed by the compressive sensing module grows linearly with the number of samples and the sampling rate. More specifically, it can be represented as \begin{align} E_{s}=\alpha fMeT, \end{align} where $M$ is the number of wideband signals that have been detected during the phase of spectrum sensing, $f$ is the sampling rate, and $e$ is the energy consumed for each sample.
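As a quick numerical illustration of the energy budget above, the following Python sketch (all parameter values are our own illustrative choices, not measurements from the paper) evaluates $E_{h,i}$, $E_{s}$, and the circuit energy $E_{b}=(1-\alpha)P_{b}T$, and checks the operating condition $E_{h}\ge E_{b}+E_{s}+E_{m}$:

```python
# Illustrative parameter values (assumed for this sketch)
P, N0     = 1.0, 1e-3        # ambient source power and noise variance
rho       = 0.3              # power-splitting factor
eta       = 0.5              # energy conversion efficiency
alpha     = 0.2              # fraction of the slot used for compressive sensing
T         = 1.0              # slot duration (s)
beta      = 0.5              # reflection coefficient
d1, theta = 5.0, 2.0         # source-device distance and path loss exponent
h1_sq     = 1.0              # |h_{i,1}|^2 channel gain realisation
f, M, e   = 1e6, 4, 1e-9     # sampling rate, wideband signals, energy per sample
P_b, E_m  = 1e-4, 1e-5       # backscatter circuit power and micro-controller energy

P_l1    = d1 ** theta        # path loss P_{l,1} = d_1^theta
Omega_1 = P / N0             # Omega_1 = P / N_0

# Energy terms from the text
E_h = rho * eta * (1 - alpha) * T * beta * Omega_1 * h1_sq / P_l1  # harvested
E_s = alpha * f * M * e * T                                         # sensing
E_b = (1 - alpha) * P_b * T                                         # backscattering

print(E_h, E_s + E_b + E_m, E_h >= E_s + E_b + E_m)
```

With these numbers the harvested energy comfortably exceeds the consumption terms, i.e., the device can sustain one full slot of operation.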
The amount of energy consumed by the backscattering module can be represented in terms of circuit power as \begin{align} E_{b}=(1-\alpha )P_{b}T, \end{align} where $P_{b}$ is the amount of circuit power consumed during the backscattering phase. For the sake of simplicity and without loss of generality, we consider that the power consumed by the micro-controller is fixed. As a result of backscattering, the received message at the gateway can be written as \begin{align} y_{i,2}=\sqrt{\frac{(1-\rho )\beta P_{b}}{P_{l,2}}}h_{i,2}s_{i,2}+n_{i,2}, \end{align} where $y_{i,2}$ is the received signal at the gateway, $s_{i,2}$ denotes the normalized signal sent by the $i$-th backscattering device, $P_{b}$ represents the backscatter circuit power, and $P_{l,2}=d_2^\theta$ is the path loss between the backscatter device and the gateway. {Furthermore, $h_{i,2}$ represents the Rayleigh faded channel gain between the backscatter device and the gateway, and $n_{i,2}$ is the AWGN with zero mean and $N_{0}$ variance.} \section{Performance Analysis and Methodology} In this section, we derive the communication outage and power shortage probabilities of the backscatter devices. Based on these probabilities, we aim to find the balancing value of $\rho$. \subsection{Outage Performance} Using the Shannon capacity formula, the achievable sum rate at the gateway can be written as \begin{align} R_{sum}=\sum_{i=1}^{N}{R_{i}}, \end{align} where $R_{i}$ is the achievable rate of the $i$-th backscattering device, which is given as \begin{align} R_{i}=(1-\alpha )BT\log _{2}\left\{ 1+\frac{(1-\rho )\beta \Omega _{2} \vert h_{i,2}\vert ^{2}}{P_{l,2}}\right\}, \end{align} where $\Omega _{2}=\frac{P_{b}}{N_{0}}$. An outage can occur due to the following two conditions: \begin{enumerate} \item If the harvested energy is below the energy required for operation of the backscatter device. \item If the achievable rate is below the required rate at the gateway.
\end{enumerate} Thus, using the total probability theorem, the outage probability can be written as \begin{align} P_{out}&=\Pr(R_{i}<\varphi |E_{h,i}<\psi )\Pr(E_{h,i}<\psi ) \nonumber \\ &+\Pr(R_{i}<\varphi |E_{h,i}>\psi )\Pr(E_{h,i}>\psi ), \label{eq1} \end{align} where $\varphi$ represents the required data rate threshold and $\psi =E_{b}+E_{s}+E_{m}$ is the energy threshold for operation of the backscatter device. From the above equation, we note that if the harvested energy is below the threshold, then the backscatter device would not be able to transfer any data to the gateway. In this case, the probability that the rate falls below a required threshold is always 1. Thus, we can write \begin{align} \Pr(R_{i}<\varphi |E_{h,i}<\psi )=1. \label{eq2} \end{align} The probability that the harvested energy falls below a specified threshold can be written as \begin{align} \Pr(E_{h,i}<\psi )=\Pr\left(\frac{\rho \eta (1-\alpha )T\beta \Omega _{1}\vert h_{i,1}\vert ^{2}}{P_{l,1}}<\psi \right). \end{align} After some simplifications, it can be represented as {\begin{align} \Pr(E_{h,i}<\psi )&=\Pr\left(\vert h_{i,1}\vert ^{2}<\frac{P_{l,1}\psi }{\rho \eta (1-\alpha )T\beta \Omega _{1}}\right) \nonumber \\ &=1-\exp\left\{-\frac{P_{l,1}\psi }{\bar{\gamma }_{1}\rho \eta (1-\alpha )T\beta \Omega _{1}}\right\}. \label{eq3} \end{align}} In contrast, the probability that the harvested energy exceeds the threshold can be represented as \begin{align} \Pr(E_{h,i}>\psi )=\exp\left\{-\frac{P_{l,1}\psi }{\bar{\gamma }_{1}\rho \eta (1-\alpha )T\beta \Omega _{1}}\right\}, \label{eq4} \end{align} where $\bar{\gamma }_{1}$ is the average channel gain between the RF source and the backscattering device. Let us now consider the case when the harvested energy is greater than $\psi$.
In this case, the probability that the achievable data rate falls below a pre-determined threshold can be written as \begin{align} &\Pr(R_{i}<\varphi |E_{h,i}>\psi )=\Pr\biggl[(1-\alpha )BT \nonumber \\ &\times \log _{2}\left\{ 1+\frac{(1-\rho )\beta \Omega _{2} \vert h_{i,2}\vert ^{2}}{P_{l,2}}\right\} <\varphi |E_{h,i}>\psi\biggr]. \end{align} After some straightforward simplifications, we obtain \begin{align} \Pr(R_{i}<\varphi |E_{h,i}>\psi )&=\Pr\left(\vert h_{i,2}\vert ^{2}<\frac{P_{l,2}(2^{\frac{\varphi }{(1-\alpha )BT}}-1)}{(1-\rho )\beta \Omega _{2}}\right) \nonumber \\ &=1-\exp\left\{-\frac{P_{l,2}(2^{\frac{\varphi }{(1-\alpha )BT}}-1)}{\bar{\gamma }_{2}(1-\rho )\beta \Omega _{2}}\right\}, \label{eq5} \end{align} where $\bar{\gamma }_{2}$ is the average channel gain between the backscattering device and the gateway. {Substituting Eqs. (\ref{eq2}), (\ref{eq3}), (\ref{eq4}), and (\ref{eq5}) into (\ref{eq1}), we obtain \begin{align} P_{out}&=1-\exp\left\{-\frac{P_{l,1}\psi }{\bar{\gamma }_{1}\rho \eta (1-\alpha )T\beta \Omega _{1}}\right\}+ \exp\left\{-\frac{P_{l,1}\psi }{\bar{\gamma }_{1}\rho \eta (1-\alpha )T\beta \Omega _{1}}\right\} \nonumber \\ &\times \left[1-\exp\left\{-\frac{P_{l,2}(2^{\frac{\varphi }{(1-\alpha )BT}}-1)}{\bar{\gamma }_{2}(1-\rho )\beta \Omega _{2}}\right\}\right]. \label{erqe} \end{align}} {After simplifying (\ref{erqe}), we have \begin{align} P_{out}=1-\exp\biggl(-\frac{P_{l,2}(2^{\frac{\varphi }{(1-\alpha )BT}}-1)}{\bar{\gamma }_{2}(1-\rho )\beta \Omega _{2}}-\frac{P_{l,1}\psi }{\bar{\gamma }_{1}\rho \eta (1-\alpha )T\beta \Omega _{1}}\biggr). \end{align}} \subsection{Balancing communication outage and power shortage} In this section, we aim to find the value of $\rho$ that balances the tradeoff between communication outage and power shortage. In particular, we note that different values of $\rho$ have a different impact on communication outage and power shortage.
From (\ref{eq6}), we can observe that the amount of harvested energy is an increasing function of $\rho $: as $\rho $ increases, the amount of harvested energy also increases. In contrast, the achievable rate of the $i$-th backscattering device is a decreasing function of $\rho $. Since the achievable rate depends on the received SNR, increasing $\rho $ leaves less power for backscattering and thus lowers the SNR, whereas reducing $\rho $ raises the SNR, which in turn increases the achievable rate. From the above arguments, we can observe that the balancing value of $\rho $ can be found by equating the energy harvesting and SNR expressions. Thus, we can write \begin{align} \frac{\rho \eta (1-\alpha )T\beta \Omega _{1}\vert h_{i,1}\vert ^{2}}{P_{l,1}}=\frac{(1-\rho )\beta \Omega _{2} \vert h_{i,2}\vert ^{2}}{P_{l,2}}. \end{align} {After cross-multiplying and rearranging (the common factor $\beta $ cancels from both sides), we obtain $\rho ^{*}$ as \begin{align} \rho ^{*}=\frac{\vert h_{i,2}\vert ^{2}\Omega _{2}P_{l,1}}{\vert h_{i,2}\vert ^{2}\Omega _{2}P_{l,1}+\eta (1-\alpha )T\Omega _{1}\vert h_{i,1}\vert ^{2}P_{l,2}}. \end{align}} From the above expression, we can observe that $\rho ^{*}$ decreases as $\Omega _{1}$ grows. Moreover, when the two terms in the denominator are equal, the balancing value reduces to $\rho ^{*}=1/2$. We also note that the value of $\rho ^{*}$ increases with an increase in $\alpha $, indicating a direct relationship between $\rho ^{*}$ and $\alpha$. \section{Results and Discussions} In this section, we provide results and relevant discussion on the above-mentioned analysis.
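The balancing point can be verified by substituting $\rho^{*}$ back into both sides of the balancing equation, which must then agree. A minimal sketch (Python, with arbitrary illustrative numbers of our own choosing; note that the common factor $\beta$ appears on both sides of the balancing equation and therefore cancels, so it does not show up in $\rho^{*}$):

```python
# Verify that the balancing power-splitting factor rho* equalizes the harvested-energy
# and received-backscatter-power expressions. All numbers below are arbitrary
# illustrative values, not taken from the paper's experiments.
eta, T, alpha, beta = 0.5, 1.0, 0.3, 0.5
Omega1, Omega2 = 2.0, 1.5
Pl1, Pl2 = 0.02, 0.01
h1_sq, h2_sq = 0.8, 1.3            # |h_{i,1}|^2 and |h_{i,2}|^2

# beta cancels from both sides of the balancing equation, so rho* is beta-free.
num = h2_sq * Omega2 * Pl1
den = num + eta * (1 - alpha) * T * Omega1 * h1_sq * Pl2
rho_star = num / den

harvested = rho_star * eta * (1 - alpha) * T * beta * Omega1 * h1_sq / Pl1
backscatter = (1 - rho_star) * beta * Omega2 * h2_sq / Pl2
```

Evaluating both sides at `rho_star` yields the same value, confirming the balancing property.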
{Unless mentioned otherwise, the following parameters have been used for generating the simulation and analytical results: $\eta=0.5$, $B=1$\,MHz, $\beta=0.5$, $d_1=d_2=5$\,m, $\varphi=2$\,kbps, $\theta=2$, and $\rho=0.3$.} Figure \ref{fig.5} illustrates the outage probability as a function of increasing values of SNR. It can be seen that the outage probability decreases with an increase in the SNR. However, the impact of $\alpha$ on $P_{out}$ is different for different values of SNR. Specifically, we observe that an increase in $\alpha$ results in an increase in the outage probability. This is because a larger $\alpha$ provides more time for compressive sensing and less time for energy harvesting and backscattering. On the other hand, an increase in $\rho$ causes an increase in the outage probability, since a larger fraction of the received power is allocated to energy harvesting and less to performing backscatter communications. In addition, the simulation results closely follow the analytical curves, which indicates the validity of our theoretical model. \begin{figure*}[htp] \centering \includegraphics[trim={0 0cm 0 0cm},clip,scale=.5]{figure_33.eps} \caption{Outage probability as a function of SNR.} \label{fig.5} \end{figure*} \begin{figure*}[!htp] \centering \begin{tabular}{c} \includegraphics[trim={0 0cm 0 0cm},clip,scale=.5]{figure_11a.eps} \\ (a) \\ \includegraphics[trim={0 0cm 0 0cm},clip,scale=.5]{figure_11b.eps} \\ (b) \\ \end{tabular} \caption{Achievable rate against different values of $\alpha$ where (a) $d_1=d_2=5$m, (b) $d_1=d_2=10$m.} \label{fig.6} \end{figure*} Figure \ref{fig.6} (a) shows the achievable rate as a function of increasing SNR. As anticipated by the analytical expression, an increase in SNR improves the achievable rate. However, an increase in $\alpha$ decreases the achievable rate. In fact, the impact of $\alpha$ becomes more prominent at higher values of SNR, where the curves rise rapidly.
While Figure \ref{fig.6} (a) is plotted for $d_1=d_2=5$m, the curves of Figure \ref{fig.6} (b) are plotted for $d_1=d_2=10$m. This increase in distance has a critical impact on the achievable rate. In particular, for the same values of SNR and $\alpha$ (e.g., SNR=0 dB and $\alpha$=0.1), the achievable rate drops from 20 kbps to 5 kbps when the distance is increased. Figure \ref{fig.4} plots the harvested energy against increasing values of $d_1$. These results highlight the significance of the distance between the RF source and the backscatter device. It can be seen that an increase in $d_1$ reduces the harvested energy. Additionally, increasing values of $\alpha$ decrease the amount of harvested energy due to compressive sensing. This decrease in harvested energy, across different values of $\alpha$, is less prominent when $\Omega_1=5$ dB. This indicates that the time scheduling is more effective for large transmit power of the ambient RF source. Furthermore, this increase in $\Omega_1$ allows devices to harvest power up to a significantly larger distance, which influences the life-cycle of devices. \begin{figure*}[htp] \centering \includegraphics[trim={0 0cm 0 0cm},clip,scale=.5]{figure_22.eps} \caption{Harvested energy against increasing values of $d_1$.} \label{fig.4} \end{figure*} \begin{figure*}[htp] \centering \begin{tabular}{c} \includegraphics[trim={0 0cm 0 0cm},clip,scale=.5]{figure_44a.eps} \\ (a) \\ \includegraphics[trim={0 0cm 0 0cm},clip,scale=.5]{figure_44b.eps} \\ (b) \\ \end{tabular} \caption{Achievable rate and harvested energy versus increasing values of $\rho$, where $\eta=0.3$ and (a) $d_1=d_2=5$m, (b) $d_1=d_2=10$m.} \label{fig.7} \end{figure*} Figure \ref{fig.7} (a) demonstrates the tradeoff between harvested energy and achievable rate. We have plotted curves of achievable rate and harvested energy against increasing values of $\rho$.
It can be observed that an increase in $\rho$ causes an increase in the amount of harvested energy while simultaneously reducing the achievable rate. Since the value of $\alpha$ influences both rate and harvested energy, lower values of $\alpha$ lower the point at which the rate and energy curves converge. Similar trends can be seen in Figure \ref{fig.7} (b); however, the converging point of the curves now shifts towards the right-hand side while both the harvested energy and the rate are reduced. This trend can be attributed to the increase in $d_1$ and $d_2$. This shift in the balancing point shows that a higher value of $\rho$ is required as the distance increases. This also indicates that energy harvesting becomes a critical factor when the distance between the ambient RF source and the device, and that between the device and the gateway, is increased. \section{Conclusion} Ambient backscatter communications provide virtually endless opportunities to connect wireless devices. We anticipate that wearable devices, connected homes, the industrial internet, and miniature embeddable devices are some of the areas where ambient backscatter communications would be adopted to provide pervasive connectivity. Thus, to better analyze the utility of these low-powered devices, this article has provided a comprehensive analysis of the ambient backscattering model from the perspective of achievable data rates and the amount of harvested energy. In addition to deriving closed-form expressions for the outage probability and the balancing power-splitting factor, we have shown that the distance between the ambient RF source and the device plays a critical role in determining the life-cycle of devices and the outage probability at the gateway. In fact, we have demonstrated that an increase in distance shifts the balancing power-splitting point to the right-hand side.
Besides this, we have observed that when the distance is increased from 5m to 10m for fixed values of SNR and $\alpha$, the achievable rate at the gateway drops from 20 kbps to 5 kbps. These results can act as a fundamental building block for the design and large-scale deployment of ambient backscatter devices in the future. \section*{Acknowledgment} \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
The smart Trick of Babylon 5 show That Nobody is Discussing Getting grown up watching Fats Albert plus the Cosby Kids during syndication and reruns in 1989, the series was still fairly new in my memory and so much easier to take pleasure in. Which is to not say that a whole new era of viewers could have hassle appreciating what Body fat Albert introduced to your table...but I'll be straightforward and state that It really is mainly an item with the time. Though many in the episodes and cases are based on Cosby's urban upbringing in the forties and 50s, the setting Here's as firmly entrenched in nineteen seventies and 80s society while you'd assume. Clothes, slang, engineering, audio style and even more (including the sick-recommended snicker observe, present in the bulk of the series) will likely be lost on more youthful viewers. Luckily, its core values and ethical classes are essentially universal...and with the possibility of a series revival during the in close proximity to future, it's as good a time as any to catch up Together with the series that begun everything. Shout Factory's significant new fifteen-disc boxed established collects all a hundred and ten episodes of Extra fat Albert plus the Cosby Kids in the course of its original broadcast. This, naturally, will not involve the Original 1969 Exclusive or perhaps a trio of holiday break specials that have aired given that...but like a chronological collection, It is really far better than what we've gotten Up to now. The rights were Beforehand owned or licensed by City Works and Vintage Media, who collectively produced a couple of "greatest hits" volumes plus the aforementioned holiday specials. 
As the majority of this series (also known as The brand new Unwanted fat Albert Show plus the Adventures of Unwanted fat Albert and also the Cosby Kids through its later on several years) experienced nevertheless to get released on DVD, acquiring anything in a single package must thrill die-difficult enthusiasts from the series. It is all also usually that traditional shows---animated or in any other case---are dangled before customers over a season-by-season foundation, and It can be all the more aggravating once the releases dry up following a couple of years. Shout Manufacturing unit's dedication to finishing what Other people have began is a lot more clear than ever with releases like this a person. This isn't a perfect work by any usually means, but its collectively good intentions support for making up for a handful of supplemental shortcomings. The 15-disc collection is tastefully packaged in useful, multi-hubbed keepcases adorned with clean up, beautiful artwork. The episodes may also be structured well, defying the series' sporadic, baffling broadcast schedule as being the many years went on. Fats Albert as well as the Cosby Kids even appears and Seems a little a lot better than predicted, although a lack of reward characteristics will make the value tag sting just a little extra. Overall, even though, there's a good deal to like about what Shout Manufacturing facility has attained here, since they've actually finished the right factor by giving the complete series up front. Hopefully, they may even teach a lesson to other studios in the method: essential, respectful treatment method should be normal for Tv set on DVD. 
High quality Regulate Division The Loss of life scene at the end is symbolic of The united states turning inwards, into itself, dropping hopes and dreams of further more expansionism, and now turning inwards, The united states still left to your ravages of bigotry, hate, and futile endeavors (plus some rigid German-looking guys in a choose-up truck...just lots of good ole boys from down south). The united states finds Loss of life awaits it: being aware of Hollywood as we know Hollywood, the script was likely composed by pessimists with foreign accents when drinking wine in a patio café on the Rhine River, mainly because it portrays a destructive impression of American expansionist endeavor. share I really know what you're contemplating: they've carried out this 'undesirable and black' detail ahead of. Recall 1995's... What is actually fantastic relating to this disc is that a few from the episodes attribute Extra fat Albert and the remainder of his crew carrying out the funky junkyard jams that concluded every single show. Inevitably the songs were replaced with episodes of Brown Hornet, which was under no circumstances as entertaining given that the musical quantities. Should you ended up a supporter on the show, or In case you have kids, it is best to check out the disc. But I can't recommend purchasing Fat Albert and the Cosby Kids, if for no other rationale than the four-disc, twenty episode collection is an improved offer. Movie: Sean tries to impress the gang by showing them a gun he took from home; villainous braggart Plimp Yabo Prime Suspect show wields a harmful weapon Except if The Brown Hornet can end him. "Phyllis and the Pharaohs": A fifties-model doo-wop group, with Rita Moreno singing lead as well as male adult cast on backup. 
A college student council election results in being about race as opposed to issues and capacity; The Brown Hornet settles a dispute involving inexperienced and orange beings about which of their leaders really should head an expedition. Sharon doesn't like Jacob, The brand new Amish scholar, because she thinks he's odd along with a snob; Gabby and Moe suspect Barney Bat of thieving nuts just because he's a bat. Being a fifty percent-hour show with no multi-section tales, there may be the occasional inclination for that resolutions to return far too immediately. The ideal episodes are those by which the road to a solution is really an extension of the situation by itself, in lieu of a practical "Timmy's during the well" plot device that somehow brings the Visitor Protagonist to his/her senses. That's quibbling, nevertheless, when using into account the formidable character of The full series. Noticed above, the static menu models are easy to navigate and cargo swiftly. Just about every series is housed website in a number of slender, multi-hubbed apparent keepcases (five overall) with eye-catching artwork. These five keepcases are tucked within a strong outer box; also incorporated is a short insert booklet with episode listings and a brief essay by academic guide Gordon Berry, Ed. I might have chosen a far more in-depth look at the series' legacy or even the delicate differences concerning all three incarnations. Still, It truly is good to own some thing here, Primarily because some enthusiasts may well not know just how many new factors Extra fat Albert at first brought towards the desk. It is really certainly truly worth a look, but once might be more than enough. Final Ideas Two young "hippie" bikers, Wyatt and Billy promote some dope in Southern California, stash their dollars away within their gas-tank and established off for a visit across The us, on their own personal odyssey looking for a means to lead their life. 
Around the journey they experience bigotry and hatred from small-city communities who despise and fear their non-conformism. Having said that Wyatt and Billy also learn people attempting 'choice lifestyles' who're resisting this slim-mindedness, there is usually an issue mark more than the future survival of these fall-out here teams. In a world of shadows and dark magic, not every little thing is exactly what it seems, and there's usually a selling price to pay for. The path to redemption is rarely easy, and when Constantine is always to succeed, he must navigate throughout the dark urban underbelly of L. a., outwit the most crafty spawns of hell, and are available nose to nose with arch-nemesis Nergal – all when battling his individual internal demons! Print Page Tweet Henry Fonda is said to have occur from "Easy Rider" a perplexed and puzzled man. He experienced labored in movies for 35 many years and created some great kinds, and now his son Peter was gonna be a millionaire because of a movie Henry could not even understand.
# Program derivation

(Source: https://coq.github.io/doc/v8.14/refman/addendum/miscellaneous-extensions.html)

Coq comes with an extension called Derive, which supports program derivation, typically in the style of Bird and Meertens or derivations of program refinements. To use the Derive extension it must first be required with `Require Coq.derive.Derive`. When the extension is loaded, it provides the following command:

Command: `Derive ident1 SuchThat one_term As ident2`

ident1 can appear in one_term. This command opens a new proof presenting the user with a goal for one_term in which the name ident1 is bound to an existential variable ?x (formally, there are other goals standing for the existential variables, but they are shelved, as described in shelve).

When the proof ends, two constants are defined:

- The first one is named ident1 and is defined as the proof of the shelved goal (which is also the value of ?x). It is always transparent.
- The second one is named ident2. It has type type, and its body is the proof of the initially visible goal. It is opaque if the proof ends with Qed, and transparent if the proof ends with Defined.

Example

    Require Coq.derive.Derive.
    Require Import Coq.Numbers.Natural.Peano.NPeano.
    Section P.
    Variables (n m k : nat).
    (* n is declared, m is declared, k is declared *)
    Derive p SuchThat ((k*n)+(k*m) = p) As h.
    (* 1 focused goal (shelved: 1)
       n, m, k : nat
       p := ?Goal : nat
       ============================
       k * n + k * m = p *)
    Proof.

Any property can be used as term, not only an equation. In particular, it could be an order relation specifying some form of program refinement or a non-executable property from which deriving a program is convenient.
The 2004 UCI Track Cycling World Championships were held from 26 to 30 May 2004 in Melbourne, Australia, at the Hisense Arena velodrome. In total, 15 sets of medals were contested, 9 for men and 6 for women, and 297 athletes from 43 countries took part in the championships. Medalists Men Women Overall medal table Links Results on the Memoire du cyclisme website Results on the Bike Cult website Results on the sports123.com website Results on the UCI website May 2004 International sports competitions in Melbourne
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{Demonstration} \caption{We use Gaussian noises to perturb the poses of dataset D in Fig.~\ref{fig:dataset}. The standard deviations for rotation and translation are $3^{\circ}$ and $0.3m$, respectively. The resulting point cloud (a) is in a mess. Fig. (b) shows the result from our algorithm. Our algorithm can quickly align the planes, as shown in Fig.~\ref{fig:iteration}.} \label{fig:demonstration} \end{figure} Planes ubiquitously exist in man-made environments, as demonstrated in Fig.~{\ref{fig:demonstration}}. Thus they are generally used in simultaneous localization and mapping (SLAM) systems for depth sensors, such as RGB-D cameras \cite{kaess2015simultaneous,elghor2015planes,hsiao2017keyframe,kim2018linear,chen2022vip} and LiDARs \cite{zhang2014loam,liu2021balm,zhou2021lidar,zhou2021pi,zhou2022plc}. Just as bundle adjustment (BA) \cite{triggs1999bundle,agarwal2010bundle,zhou2020stochastic,demmel2021square} is important for visual reconstruction \cite{agarwal2011building,mur2015orb,schonberger2016structure,campos2021orb}, jointly optimizing planes and depth sensor poses, which is called plane adjustment (PA) \cite{zhou2021lidar,zhou2021pi}, is critical for 3D reconstruction using depth sensors. This paper focuses on efficiently solving the large-scale PA problem. The BA and PA problems both involve jointly optimizing 3D structures and sensor poses. As the two problems are similar, it is straightforward to apply the well-studied solutions for BA to PA, as done in \cite{zhou2020efficient,zhou2021pi}. However, planes in PA can be eliminated, so that the cost function of the PA problem only depends on sensor poses, which significantly reduces the number of variables. This property provides a promising direction to efficiently solve the PA problem. However, it is difficult to compute the Hessian matrix and the gradient vector of the resulting cost. 
Although this problem was studied in several previous works \cite{ferrer2019eigen,liu2021balm}, no efficient solution has been proposed. This paper seeks to solve this problem. The main contribution of this paper is an efficient PA solution using Newton's method. We derive a closed-form solution for the Hessian matrix and the gradient vector of the PA problem whose computational complexity is independent of the number of points on the planes. Our experimental results show that, in terms of the PA problem, Newton's method outperforms the widely-used Levenberg-Marquardt (LM) algorithm \cite{more1978levenberg} with the Schur complement trick \cite{triggs1999bundle}. \section{Related Work} The PA problem is closely related to the BA problem. In BA, points and camera poses are jointly optimized to minimize the reprojection error. The Schur complement \cite{triggs1999bundle,agarwal2010bundle,zhou2020stochastic} or the square root method \cite{demmel2021squaresw,demmel2021square} is generally used to solve the linear system of the iterative methods. The key point is to generate a reduced camera system (RCS) that only involves the camera poses. In PA, planes and poses are jointly optimized. Planes are the counterparts of points in BA. Thus, the well-known solutions for the BA problem can be applied to the PA problem \cite{zhou2020efficient,zhou2021lidar}. In the literature, two cost functions are used to formulate the PA problem. The first one is the plane-to-plane distance, which measures the difference between two plane parameters \cite{kaess2015simultaneous,hsiao2017keyframe}. The value of the plane-to-plane distance depends on the choice of the global coordinate system, which means the selection of the global coordinate system will affect the accuracy of the results. The second one is the point-to-plane distance, whose value is invariant to the choice of the global coordinate system.
The solutions under different choices of the global coordinate system are equivalent up to a rigid transformation. Zhou \textit{et al.} \cite{zhou2020efficient} show that the point-to-plane distance can converge faster and lead to a more accurate result. But unlike BA, where each 3D point has only one 2D observation at a pose, a plane can generate many points at a pose, as demonstrated in Fig.~\ref{fig:PA}. This means the point-to-plane distance probably leads to a very large-scale least-squares problem. Directly adopting the BA solutions is computationally infeasible for a large-scale PA problem. Zhou \textit{et al.} \cite{zhou2020efficient} propose to use the QR decomposition to accelerate the computation. For a general least-squares problem with $M$ variables, the computational complexity of the Hessian matrix is $O(M^{2})$. Thus, in the computer vision community, it is ingrained that Newton's method is infeasible for a large-scale optimization problem, as calculating the Hessian matrix is computationally demanding. Instead, Gauss-Newton-like iterative methods are generally adopted. Suppose that $\mathbf{J}$ is the Jacobian matrix of the residuals. Gauss-Newton-like methods actually approximate the Hessian matrix by $\mathbf{H} \approx \mathbf{J}^{T}\mathbf{J}$. In theory, Newton's method can lead to a better quadratic approximation to the original cost function, which means the Newton step probably yields a more accurate result. This in turn may reduce the number of iterations for convergence. The PA problem has a special property that the optimal plane parameters are determined by the poses. That is to say, the point-to-plane cost actually only depends on the poses. This property is attractive, as it significantly reduces the number of variables, which makes using Newton's method possible.
Moreover, in the traditional framework, the correlation between the plane parameters and the poses is ignored. Thus, after one iteration, there is no guarantee that the new plane parameters are optimal for the new poses. Using the property of the PA, it is possible to overcome this drawback, which may lead to faster convergence. Several previous works seek to exploit this property of PA. Ferrer \cite{ferrer2019eigen} considered an algebraic point-to-plane distance and provided a closed-form gradient for the resulting cost. The algebraic cost may result in a suboptimal solution \cite{andrew2001multiple}, and the first-order optimization generally leads to slow convergence \cite{triggs1999bundle}. Liu \textit{et al.} \cite{liu2021balm} provided analytic forms of the Hessian matrix and the gradient of the genuine point-to-plane cost. Assume that $N$ points are captured from a plane at a pose. The computational complexity of the Hessian matrix related to these points is $O(N^{2})$. Since $N$ can be large as shown in Fig.~\ref{fig:PA}, this algorithm is computationally demanding and infeasible for a large-scale problem. In summary, the potential benefits of the special property of the PA problem have not been manifested in previous works. The bottleneck is how to efficiently compute the Hessian matrix and the gradient vector. This paper focuses on solving this problem. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{PA_plane_obs} \caption{A schematic of the PA problem and the planes detected in a LiDAR scan. Unlike BA where each 3D point only has one observation, many points can be captured from a plane in PA. Assume that $N$ points are captured from a plane. The computational complexity of the Hessian matrix related to these points for BALM \cite{liu2021balm} is $O(N^{2})$. Thus, this method is infeasible for a large-scale problem.
In contrast, the computational complexity of our algorithm is independent of $N$.} \label{fig:PA} \end{figure} \section{Problem Formulation} In this paper we use italic, boldfaced lowercase and boldfaced uppercase letters to represent scalars, vectors and matrices, respectively. \subsection{Notations} \label{subsec:notations} \textbf{Planes and Poses} \ A plane can be represented by a four-dimensional vector $\bm{\pi} = [\bm{n};d]$. We denote the rotational and translational components from a depth sensor coordinate system to the global coordinate system as $\mathbf{R} \in SO(3)$ and $\bm{t} \in \mathbb{R}^{3}$, respectively. To simplify the notation in the following description, we also use the following two matrices to represent a pose: \begin{equation} \label{equ:pose} \mathbf{X} = \begin{bmatrix} \mathbf{R}, & \bm{t} \\ \mathbf{0}, &1 \end{bmatrix} \in SE(3) \ \text{and} \ \mathbf{T} = \begin{bmatrix} \mathbf{R}, & \bm{t} \end{bmatrix}. \end{equation} As $\mathbf{R} \in SO(3)$, a certain parameterization is usually adopted in the optimization \cite{triggs1999bundle}. In this paper, we use the Cayley-Gibbs-Rodriguez (CGR) parameterization \cite{hesch2011direct} to represent $\mathbf{R}$ \begin{equation} \label{equ:CGR} \mathbf{R}=\frac{{\mathbf{\bar{R}}}}{1+{{\bm{s}}^{T}}\bm{s}},\mathbf{\bar{R}}=\left( 1-{{\bm{s}}^{T}}\bm{s} \right){{\mathbf{I}}_{3}}+2{{\left[ \bm{s} \right]}_{\times }}+2\bm{s}{{\bm{s}}^{T}}, \end{equation} where $\bm{s}=\left[s_{1};s_{2};s_{3}\right]$ is a three-dimensional vector. We adopt the CGR parameterization as it is a minimal representation for $\mathbf{R}$. Furthermore, unlike the angle-axis parameterization that is singular at $\mathbf{I}_{3}$, the CGR parameterization is well-defined at $\mathbf{I}_{3}$, and equals to $[0;0;0]$ which can accelerate the computation. We parameterize $\mathbf{T}$ as a six-dimensional vector $\bm{x} = [\bm{s}; \bm{t}]$. 
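As a quick numerical check of the CGR parameterization (\ref{equ:CGR}), the sketch below (Python/NumPy, our own illustration) builds $\mathbf{R}$ from $\bm{s}$ and verifies that $\bm{s}=\bm{0}$ yields $\mathbf{I}_{3}$ and that, per the standard Gibbs-vector relation $\bm{s}=\bm{u}\tan(\theta/2)$, the formula produces the expected rotation by $\theta$ about the unit axis $\bm{u}$:

```python
import numpy as np

def skew(s):
    """[s]_x such that skew(s) @ v equals the cross product s x v."""
    return np.array([[0.0, -s[2], s[1]],
                     [s[2], 0.0, -s[0]],
                     [-s[1], s[0], 0.0]])

def cgr_rotation(s):
    """Rotation matrix from a CGR vector s, following the formula above."""
    s = np.asarray(s, dtype=float)
    ss = float(s @ s)
    R_bar = (1.0 - ss) * np.eye(3) + 2.0 * skew(s) + 2.0 * np.outer(s, s)
    return R_bar / (1.0 + ss)

# s = 0 gives the identity: the parameterization is well-defined at I_3.
R0 = cgr_rotation([0.0, 0.0, 0.0])

# For the Gibbs vector s = u * tan(theta/2), R(s) rotates by theta about u;
# here: a 90-degree rotation about the z-axis.
theta = np.pi / 2
Rz = cgr_rotation([0.0, 0.0, np.tan(theta / 2)])
```

The resulting matrices are orthogonal with unit determinant, as expected for elements of $SO(3)$.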
\textbf{Newton's method} \ This paper adopts the damped Newton's method in the optimization. For a cost function $f(\bm{z})$, the damped Newton's method seeks to find its minimizer from an initial point. Assume that $\bm{z}_{n}$ is the solution at the $n$th iteration. Given the Hessian matrix $\mathbf{H}_{f}(\bm{z}_{n})$ and the gradient $\bm{g}_{f}(\bm{z}_{n})$ at $\bm{z}_{n}$, $\bm{z}_{n}$ is updated by $\bm{z}_{n+1} = \bm{z}_{n} + \varDelta\bm{z}$. Here $\varDelta\bm{z}$ is from \begin{equation} \label{equ:netwon} (\mathbf{H}_{f}(\bm{z}_{n})+\mu \mathbf{I}))\varDelta{z} = -\bm{g}_{f}(\bm{z}_{n}), \end{equation} where $\mu$ is adjusted at each iteration to make the value of $f(\bm{z})$ reduce, as done in the LM algorithm \cite{more1978levenberg}. \textbf{Matrix Calculus} \ In the following derivation, we will use vector-by-vector, vector-by-scalar, scalar-by-vector derivatives. Here we provide their definitions. Assume $\bm{a} = [a_1; \cdots; a_N]\in \mathbb{R}^{N}$ is a vector function of $\bm{b} = [b_{1}, \cdots, b_{M}] \in \mathbb{R}^{M}$. 
The first-order partial derivatives of vector-by-vector $\frac{\partial \bm{a}}{\partial \bm{b}}$, vector-by-scalar $\frac{\partial \bm{a}}{\partial b_{j}}$, and scalar-by-vector $\frac{\partial a_{i}}{\partial \bm{b}}$ are defined as \begin{equation} \label{equ:vec_by_vec} \frac{\partial \bm{a}}{\partial \bm{b}} = \begin{bmatrix} \frac{\partial a_1}{\partial b_1} & \cdots & \frac{\partial a_N}{\partial b_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial a_{1}}{\partial b_{M}} & \cdots &\frac{\partial a_{N}}{\partial b_{M}} \end{bmatrix}, \frac{\partial \bm{a}}{\partial b_{j}} = \begin{bmatrix} \frac{\partial a_{1}}{\partial b_{j}} \\ \vdots \\ \frac{\partial a_{N}}{\partial b_{j}} \end{bmatrix}, \frac{\partial a_{i}}{\partial \bm{b}} = \begin{bmatrix} \frac{\partial a_{i}}{\partial b_{1}}\\ \vdots \\ \frac{\partial a_{i}}{\partial b_{M}} \end{bmatrix} \end{equation} where $\frac{\partial \bm{a}}{\partial \bm{b}}$ is an $M \times N$ matrix with $\frac{\partial a_{j}}{\partial b_{i}}$ as its $i$th row, $j$th column element, $\frac{\partial \bm{a}}{\partial b_{j}}$ is an $N$-dimensional vector whose $i$th term is $\frac{\partial a_{i}}{\partial {b}_{j}}$, and $\frac{\partial a_{i}}{\partial \bm{b}}$ is an $M$-dimensional vector whose $j$th term is $\frac{\partial a_{i}}{\partial {b}_{j}}$. \subsection{Optimal Plane Estimation} \label{subsec:Opt_plane} Given a set of $K$ points $\{\bm{p}_{i}\}$, the optimal plane $\hat{\bm{\pi}}$ can be estimated by minimizing the sum of squared point-to-plane distances \begin{equation} \label{equ:plane_cost} \hat{\bm{\pi}} = \arg\min_{\bm{\pi}}\sum_{i=1}^{K}\left( \bm{n}^{T}\bm{p}_{i} + d\right) ^{2}, \ s.t. \ \left\| \bm{n} \right\| _2^{2} = 1. \end{equation} There is a closed-form solution for $\hat{\bm{\pi}}$.
Let us define \begin{equation} \label{equ:Matrix_M} \small \mathbf{M} = \sum_{i=1}^{K}\left( \bm{p}_{i} - \bar{\bm{p}}\right)\left( \bm{p}_{i} - \bar{\bm{p}}\right)^{T} = \mathbf{S} - K\bar{\bm{p}}\bar{\bm{p}}^{T}, \end{equation} where $\mathbf{S} = \sum_{i=1}^{K} \bm{p}_{i}\bm{p}_{i}^{T}$ and $\bar{\bm{p}} = \frac{1}{K}\sum_{i=1}^{K}{\bm{p}}_{i}$. Assume that ${\lambda}_{3}(\mathbf{M})$ and $\bm{\xi}_{3}(\mathbf{M})$ are the smallest eigenvalue of $\mathbf{M}$ and the corresponding eigenvector, respectively. Using these notations, we can write the optimal plane $\hat{\bm{\pi}} = [\hat{\bm{n}}; \hat{d}]$ as \begin{equation} \label{equ:opt_pi} \hat{\bm{n}} = \bm{\xi}_{3}(\mathbf{M}), \ \hat{d} = -\hat{\bm{n}}^{T} \bar{\bm{p}}. \end{equation} Furthermore, the cost of (\ref{equ:plane_cost}) at $\hat{\bm{\pi}}$ equals ${\lambda}_{3}(\mathbf{M})$, \textit{i.e.,} \begin{equation} \label{equ:opt_pi_cost} \begin{split} {\lambda}_{3}(\mathbf{M}) = \sum_{i=1}^{K}\left( \hat{\bm{n}}^{T}\bm{p}_{i} + \hat{d}\right) ^{2} = \min_{\bm{\pi}}\sum_{i=1}^{K}\left( {\bm{n}}^{T}\bm{p}_{i} + {d}\right) ^{2}. \end{split} \end{equation} The above property will be used to eliminate planes in PA. \subsection{Plane Adjustment} Assume that there are $M$ planes and $N$ poses. According to Section \ref{subsec:notations}, the $i$th plane can be represented by a four-dimensional vector $\bm{\pi}_{i} = [\bm{n}_{i};d_{i}]$. The $j$th pose is denoted as $\bm{x}_{j}$. The observation of $\bm{\pi}_{i}$ at $\bm{x}_{j}$ is a set of $N_{ij}$ points $\mathbb{P}_{ij} = \{\bm{p}_{ijk} \in \mathbb{R}^{3}\}_{k=1}^{N_{ij}}$. For a 3D point $\bm{p}_{ijk}$, we use $\tilde{\bm{p}}_{ijk}= [\bm{p}_{ijk};1]$ to represent the homogeneous coordinates of $\bm{p}_{ijk}$.
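The closed-form fit of Section \ref{subsec:Opt_plane} is easy to verify numerically. The following sketch (Python/NumPy, with synthetic points of our own choosing) forms $\mathbf{M}$ from the centered points, takes its smallest eigenpair, and checks that the resulting sum of squared point-to-plane distances equals $\lambda_{3}(\mathbf{M})$:

```python
import numpy as np

def fit_plane(points):
    """Optimal plane (n, d) minimizing the sum of squared point-to-plane distances."""
    p_bar = points.mean(axis=0)
    centered = points - p_bar
    M = centered.T @ centered                 # M = S - K * p_bar p_bar^T
    eigvals, eigvecs = np.linalg.eigh(M)      # eigenvalues in ascending order
    n = eigvecs[:, 0]                         # eigenvector of the smallest eigenvalue
    d = -n @ p_bar
    return n, d, eigvals[0]

# Synthetic data: points near the plane z = 0.1*x - 0.2*y + 3 (illustrative)
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 3.0 + 1e-3 * rng.standard_normal(200)
pts = np.column_stack([xy, z])

n_hat, d_hat, lam3 = fit_plane(pts)
cost = np.sum((pts @ n_hat + d_hat) ** 2)     # equals lambda_3(M)
```

Since $\hat{d} = -\hat{\bm{n}}^{T}\bar{\bm{p}}$, the cost reduces to $\hat{\bm{n}}^{T}\mathbf{M}\hat{\bm{n}} = \lambda_{3}(\mathbf{M})$, and the recovered normal matches the plane used to generate the points up to sign.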
Then, the transformation from the local coordinate system at $\bm{x}_{j}$ to the global coordinate system can be represented as \begin{equation} \label{equ:g_coor} \bm{p}^{g}_{ijk} = \mathbf{R}_{j}\bm{p}_{ijk} + \bm{t}_{j} = \mathbf{T}_{j}\tilde{\bm{p}}_{ijk}, \end{equation} where $\mathbf{T}_{j}$ is defined in (\ref{equ:pose}). Then the distance $d_{ijk}$ from $\bm{p}_{ijk}$ to $\bm{\pi}_{i}$ has the form \begin{equation} \label{equ:pt_2_pi_dis} \begin{split} d_{ijk}(\bm{\pi}_{i},\bm{x}_{j}) & = \bm{n}_{i}^{T}\left( \mathbf{R}_{j}\bm{p}_{ijk} + \bm{t}_{j} \right) + d_{i}\\ & = \bm{n}_{i}^{T}\mathbf{T}_{j}\tilde{\bm{p}}_{ijk} + d_{i} = \bm{\pi}^{T}_{i}\tilde{\bm{p}}_{ijk}^{g}. \end{split} \end{equation} The PA problem is to jointly adjust the $M$ planes $\left\lbrace \bm{\pi}_{i} \right\rbrace $ and the $N$ sensor poses $\left\lbrace \bm{x}_{j} \right\rbrace $ to minimize the sum of squared point-to-plane distances. Specifically, using (\ref{equ:pt_2_pi_dis}), we can formulate the cost function of the PA problem as \begin{equation}\label{equ:PA_cost} \small \min_{\left\lbrace \bm{\pi}_{i}\right\rbrace, \left\lbrace \bm{x}_{j}\right\rbrace} \sum_{i=1}^{M}\underbrace{\sum_{ j \in obs(\bm{\pi}_i)}\sum_{k=1}^{N_{ij}} d_{ijk}^{2}(\bm{\pi}_{i},\bm{x}_{j})}_{C_{i}\left(\bm{\pi}_{i}, \mathbb{X}_{i}\right), \ \mathbb{X}_{i} = \{\bm{x}_{j}| j\in obs(\bm{\pi}_i)\}} = \min_{\{\bm{\pi}_{i}\}, \{\bm{x}_{j}\}}\sum_{i=1}^{M}C_{i}\left(\bm{\pi}_{i}, \mathbb{X}_{i}\right), \end{equation} where $obs(\bm{\pi}_i)$ represents the indexes of the poses where $\bm{\pi}_{i}$ can be observed, and $C_{i}(\bm{\pi}_{i},\mathbb{X}_{i})$ accumulates the errors from the $N_{i} = \sum_{j \in obs(\bm{\pi}_{i}) }N_{ij}$ points captured at the set of poses $\mathbb{X}_{i}$.
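To make the cost (\ref{equ:PA_cost}) concrete, the following hedged NumPy sketch (the data layout and names are ours, purely illustrative) evaluates the total PA cost for given planes, poses, and observations:

```python
import numpy as np

def pa_cost(planes, poses, obs):
    """Total PA cost: sum of squared point-to-plane distances.
    planes: list of (n, d); poses: list of (R, t);
    obs[i][j]: Nx3 array of local points of plane i seen from pose j
    (a hypothetical layout, for illustration only)."""
    total = 0.0
    for i, (n, d) in enumerate(planes):
        for j, pts in obs[i].items():
            R, t = poses[j]
            pts_g = pts @ R.T + t                  # transform to the global frame
            total += np.sum((pts_g @ n + d) ** 2)  # squared point-to-plane distances
    return total

# Tiny example: one plane z = 0 observed from two poses
plane = (np.array([0.0, 0.0, 1.0]), 0.0)
I3 = np.eye(3)
poses = [(I3, np.zeros(3)), (I3, np.array([0.0, 0.0, 0.1]))]  # second pose lifted by 0.1
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
obs = {0: {0: pts, 1: pts}}
cost = pa_cost([plane], poses, obs)
assert np.isclose(cost, 2 * 0.1 ** 2)   # only the lifted pose contributes
```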
According to (\ref{equ:Matrix_M}) and (\ref{equ:g_coor}), we get \begin{equation} \label{equ:M_i} \small \begin{split} \mathbf{M}_{i}(\mathbb{X}_{i}) &= \sum_{j \in obs(\bm{\pi}_i)} \mathbf{S}_{ij} - N_{i}\bar{\bm{p}}_{i}\bar{\bm{p}}_{i}^{T}, \end{split} \end{equation} where $\bar{\bm{p}}_{i} = \frac{1}{N_i}\sum_{j\in obs(\bm{\pi}_i)}\sum_{k=1}^{N_{ij}} \bm{p}_{ijk}^{g}$ and $\mathbf{S}_{ij} = {\sum_{k=1}^{N_{ij}} \bm{p}_{ijk}^{g}\left( {\bm{p}_{ijk}^{g}}\right) ^{T}}$. The elements of $\mathbf{M}_{i}$, $\mathbf{S}_{ij}$ and $\bar{\bm{p}}_{i}$ in (\ref{equ:M_i}) are all functions of the poses in $\mathbb{X}_{i}$. Substituting $\bm{p}_{ijk}^{g}$ in (\ref{equ:g_coor}) into $\mathbf{S}_{ij}$ and $\bar{\bm{p}}_{i}$ in (\ref{equ:M_i}), we have \begin{equation} \label{equ:S_p} \small \begin{split} \mathbf{S}_{ij} &= \mathbf{T}_{j}\underbrace{\sum_{k=1}^{N_{ij}}\tilde{\bm{p}}_{ijk}\tilde{\bm{p}}_{ijk}^{T}}_{\mathbf{U}_{ij}}\mathbf{T}_{j}^{T} = \mathbf{T}_{j}\mathbf{U}_{ij}\mathbf{T}_{j}^{T}, \\ \bar{\bm{p}}_{i} & = \frac{1}{N_i}\sum_{j \in obs(\bm{\pi}_{i})}\mathbf{T}_{j}\underbrace{\sum_{k=1}^{N_{ij}} \tilde{\bm{p}}_{ijk}}_{\tilde{\bm{p}}_{ij}} = \frac{1}{N_i}\sum_{j\in obs(\bm{\pi}_{i})}\mathbf{T}_{j}\tilde{\bm{p}}_{ij}. \end{split} \end{equation} Here $\mathbf{U}_{ij}$ and $\tilde{\bm{p}}_{ij}$ in (\ref{equ:S_p}) are constants. We only need to compute them once, and can reuse them in the iterations. According to (\ref{equ:opt_pi}), given the poses in $\mathbb{X}_{i}$, the optimal solution for $\bm{\pi}_{i}$ has a closed-form expression $\hat{\bm{\pi}}_{i} =[ \hat{\bm{n}}_{i};\hat{d}_{i}]$, where $ \hat{\bm{n}}_{i} =\bm{\xi}_{3}(\mathbf{M}_{i}(\mathbb{X}_{i}))$ and $\hat{d}_{i} = -\hat{\bm{n}}_{i}^{T}\bar{\bm{p}}_{i}$. As $\mathbf{M}_{i}$ and $\bar{\bm{p}}_{i}$ are functions of the poses in $\mathbb{X}_{i}$, $\hat{\bm{\pi}}_{i}$ is also a function of the poses in $\mathbb{X}_{i}$. That is to say, $\hat{\bm{\pi}}_{i}$ is completely determined by the poses in $\mathbb{X}_{i}$.
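The constancy of $\mathbf{U}_{ij}$ and $\tilde{\bm{p}}_{ij}$ in (\ref{equ:S_p}) can be checked numerically: transforming the precomputed moments with a pose gives the same result as recomputing them from the transformed points. A NumPy sketch under our own toy setup (not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.standard_normal((50, 3))
pts_h = np.hstack([pts, np.ones((50, 1))])   # homogeneous coordinates p~
U = pts_h.T @ pts_h                          # U_ij = sum p~ p~^T (constant)
p_sum = pts_h.sum(axis=0)                    # p~_ij = sum p~    (constant)

# A rigid pose T = [R, t] (3x4): rotation about z plus a translation
th = 0.3
T = np.array([[np.cos(th), -np.sin(th), 0.0,  0.5],
              [np.sin(th),  np.cos(th), 0.0, -1.0],
              [0.0,         0.0,        1.0,  2.0]])

# Moments in the global frame from the precomputed constants (equ S_p)...
S = T @ U @ T.T                              # S_ij = T U_ij T^T
m = T @ p_sum                                # sum of global points
# ...agree with direct recomputation from the transformed points.
pts_g = pts_h @ T.T                          # p^g = T p~
assert np.allclose(S, pts_g.T @ pts_g)
assert np.allclose(m, pts_g.sum(axis=0))
```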
To simplify the notation, let us define \begin{equation} \lambda_{i,3}(\mathbb{X}_{i}) = \lambda_{3}(\mathbf{M}_{i}(\mathbb{X}_{i})), \end{equation} which represents the smallest eigenvalue of $\mathbf{M}_{i}(\mathbb{X}_{i})$. Substituting the optimal plane estimation $\hat{\bm{\pi}}_{i}$ into $C_{i}(\bm{\pi}_{i}, \mathbb{X}_{i})$ in (\ref{equ:PA_cost}) and using (\ref{equ:opt_pi_cost}), we have \begin{equation} \label{equ:min_pi} \lambda_{i,3}(\mathbb{X}_{i}) = C_{i}\left( \hat{\bm{\pi}}_{i}, \mathbb{X}_{i}\right). \end{equation} Using (\ref{equ:min_pi}), we can formulate the PA problem in (\ref{equ:PA_cost}) as \begin{equation} \label{equ:PA_cost_pose} \{\hat{\bm{x}}_{j}\} = \arg\min_{\{\bm{x}_{j}\}}\tau, \ \tau = \sum_{i=1}^{M} \lambda_{i,3}(\mathbb{X}_{i}). \end{equation} The cost function (\ref{equ:PA_cost_pose}) depends only on the sensor poses, which significantly reduces the number of variables. However, although its value equals a sum of squared point-to-plane distances, it is no longer expressed in terms of explicit residuals. We therefore cannot apply the widely used Gauss-Newton-like methods, which require the Jacobian matrix of the residuals. Here we adopt Newton's method to solve it. The crux of applying Newton's method to minimize (\ref{equ:PA_cost_pose}) is to compute the gradient and the Hessian matrix of (\ref{equ:PA_cost_pose}) efficiently. In the following sections, we provide a closed-form solution for them. To simplify the notation, we omit the variables of functions in the following description (\textit{e.g.}, $\lambda_{i,3}(\mathbb{X}_{i}) \rightarrow \lambda_{i,3}$). \section{Newton's Iteration for Plane Adjustment} Let us denote the gradient and the Hessian matrix of $\tau$ in (\ref{equ:PA_cost_pose}) as $\bm{g}$ and $\mathbf{H}$, and denote the 6-dimensional gradient vector for $\bm{x}_{j}$ as $\bm{g}_{j}$ and the $6 \times 6$ Hessian matrix block for $\bm{x}_{j}$ and $\bm{x}_{k}$ as $\mathbf{H}_{jk}$ (note that here $j$ can equal $k$).
Then $\bm{g}$ and $\mathbf{H}$ can be written in the block form as $\bm{g}=(\bm{g}_{j}) \in \mathbb{R}^{6N}$ and $\mathbf{H} = (\mathbf{H}_{jk}) \in \mathbb{R}^{6N \times 6N}$. The $i$th plane $\bm{\pi}_{i}$ is observed by the poses in $\mathbb{X}_{i}$. Assume that the $j$th pose $\bm{x}_{j} \in \mathbb{X}_{i}$ and the $k$th pose $\bm{x}_{k} \in \mathbb{X}_{i}$. Let us define \begin{equation} \label{equ:g_ij and H_ijk} \bm{g}_{j}^{i} = \frac{\partial \lambda_{i,3}}{\partial \bm{x}_{j}}, \ \mathbf{H}_{jk}^{i} = \frac{\partial^{2} \lambda_{i,3}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}. \end{equation} According to (\ref{equ:PA_cost_pose}), we have \begin{equation} \label{equ:g_j and H_jk} \bm{g}_{j} = \sum_{i \in \mathbb{P}_{j}} \bm{g}^i_{j}, \ \mathbf{H}_{jk} = \sum_{i \in \mathbb{P}_{jk}} \mathbf{H}_{jk}^{i}, \end{equation} where $\mathbb{P}_{j}$ is the set of planes observed by $\bm{x}_{j}$, and $\mathbb{P}_{jk}$ is the set of planes observed by $\bm{x}_{j}$ and $\bm{x}_{k}$ simultaneously. If $j = k$, $\mathbb{P}_{jk}$ equals $\mathbb{P}_{j}$. From (\ref{equ:g_j and H_jk}), we know that the key to obtaining $\bm{g}$ and $\mathbf{H}$ is to compute $\bm{g}_{j}^{i}$ and $\mathbf{H}_{jk}^{i}$ in (\ref{equ:g_ij and H_ijk}). \subsection{Partial Derivatives of Eigenvalue} According to (\ref{equ:g_ij and H_ijk}), $\lambda_{i,3}$ is a function of the poses in $\mathbb{X}_{i}$, with $\bm{x}_{j} \in \mathbb{X}_{i}$ and $\bm{x}_{k} \in \mathbb{X}_{i}$. Assume that $x_{jm}$ and $x_{kn}$ are the $m$th and $n$th elements of $\bm{x}_{j}$ and $\bm{x}_{k}$, respectively. In this section, we first consider the first-order partial derivative $\frac{\partial\lambda_{i,3}}{\partial x_{jm}}$ and the second-order partial derivative $\frac{\partial^{2}\lambda_{i,3}}{\partial x_{jm} \partial x_{kn}}$. $\lambda_{i,3}$ is a root of the equation $|\mathbf{M}_{i}(\mathbb{X}_{i}) - \lambda\mathbf{I}_{3}| = 0$, where $|\cdot|$ denotes the determinant of a matrix.
Assume $m_{ef}$ is the $e$th row $f$th column term of $\mathbf{M}_{i}(\mathbb{X}_{i})$. $|\mathbf{M}_{i}(\mathbb{X}_{i}) - \lambda\mathbf{I}_{3}| = 0$ is a cubic equation of the following form \begin{equation} \label{equ:cub} -\lambda^{3}_{i,3} + a_{i}\lambda^{2}_{i,3} + b_{i}\lambda_{i,3} + c_{i} = 0, \end{equation} where $a_{i} = m_{11} + m_{22} + m_{33}$, $b_{i} = m_{12}^2 + m_{13}^2 + m_{23}^2 - m_{11}m_{22}- m_{11}m_{33} - m_{22}m_{33}$, and $c_{i} = - m_{33}m_{12}^2 + 2m_{12}m_{13}m_{23} - m_{22}m_{13}^2 - m_{11}m_{23}^2 + m_{11}m_{22}m_{33}$. Here $a_{i}$, $b_{i}$ and $c_{i}$ are all functions of the poses in $\mathbb{X}_{i}$. It is known that the roots of a cubic equation have a closed form. One way to compute $\frac{\partial\lambda_{i,3}}{\partial x_{jm}}$ and $\frac{\partial^{2}\lambda_{i,3}}{\partial x_{jm} \partial x_{kn}}$ is to directly differentiate the root. However, the formula of the root is too complicated. Here we introduce a simpler way to compute them: we employ the implicit function theorem \cite{krantz2002implicit}. Let us define \begin{equation} \bm{\chi}_{i} = \begin{bmatrix} \lambda_{i,3}^{2} \\ \lambda_{i,3} \\ 1 \end{bmatrix}, \ \bm{\eta}_{i} =\begin{bmatrix} a_{i} \\ b_{i} \\ c_{i} \end{bmatrix}, \ \bm{\kappa}_{i} = \begin{bmatrix} -3 \\ 2a_{i} \\ b_{i} \end{bmatrix}. \end{equation} $\frac{\partial\lambda_{i,3}}{\partial x_{jm}}$ and $\frac{\partial^{2}\lambda_{i,3}}{\partial x_{jm} \partial x_{kn}}$ are presented in Lemmas \ref{lemma:first_order} and \ref{lemma:second_order}. The proofs of the following lemmas and theorems are in the \textbf{supplementary material}.
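The coefficients of (\ref{equ:cub}) are the characteristic-polynomial coefficients of the symmetric matrix $\mathbf{M}_{i}$. A small NumPy check (an illustrative sketch of our own, not the paper's code) confirms that the smallest eigenvalue is a root:

```python
import numpy as np

def cubic_coeffs(M):
    """Coefficients (a, b, c) of det(M - l*I) = -l^3 + a*l^2 + b*l + c
    for a symmetric 3x3 matrix M, mirroring the formulas in the text."""
    m11, m22, m33 = M[0, 0], M[1, 1], M[2, 2]
    m12, m13, m23 = M[0, 1], M[0, 2], M[1, 2]
    a = m11 + m22 + m33
    b = m12**2 + m13**2 + m23**2 - m11*m22 - m11*m33 - m22*m33
    c = (-m33*m12**2 + 2*m12*m13*m23 - m22*m13**2
         - m11*m23**2 + m11*m22*m33)
    return a, b, c

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
M = A @ A.T                        # random symmetric positive semidefinite matrix
a, b, c = cubic_coeffs(M)
lam3 = np.linalg.eigvalsh(M)[0]    # smallest eigenvalue lambda_3
# lambda_3 satisfies the characteristic cubic (equ cub)
assert np.isclose(-lam3**3 + a*lam3**2 + b*lam3 + c, 0.0, atol=1e-8)
```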
\begin{lemma} \label{lemma:first_order} $\frac{\partial\lambda_{i,3}}{\partial x_{jm}}$ has a closed-form expression as \begin{equation} \label{equ:first_order} \frac{\partial\lambda_{i,3}}{\partial x_{jm}} = - \varphi_{i} \bm{\delta}_{jm}^{i} \cdot \bm{\chi}_{i}, \end{equation} where $\cdot$ represents the dot product, $\varphi_{i} = \left( \bm{\kappa}_{i} \cdot \bm{\chi}_{i}\right)^{-1}$, and $\bm{\delta}_{jm}^{i} = \frac{\partial \bm{\eta}_{i}}{\partial x_{jm}}$. \end{lemma} \begin{lemma} \label{lemma:second_order} $\frac{\partial^{2}\lambda_{i,3}}{\partial x_{jm}\partial x_{kn}}$ has a closed-form expression as \begin{equation} \label{equ:second_order} \small \frac{\partial^{2}\lambda_{i,3}}{\partial x_{jm}\partial x_{kn}} = -\varphi_{i}\left(\bm{\delta}_{jm}^{i} \cdot \frac{\partial \bm{\chi}_{i}}{\partial x_{kn}} + \bm{\chi}_{i} \cdot \frac{\partial \bm{\delta}_{jm}^{i}}{\partial x_{kn}} + \frac{\partial \lambda_{i,3}}{\partial x_{jm}} \frac{\partial \varphi_{i}^{-1}}{\partial x_{kn}}\right). \end{equation} \end{lemma} Let us define \begin{equation} \label{equ:d_abc} \small \begin{split} &\bm{\alpha}_{j}^{i} = \frac{\partial a_{i}}{\partial \bm{x}_{j}}, \bm{\beta}_{j}^{i} = \frac{\partial b_{i}}{\partial \bm{x}_{j}}, \bm{\gamma}_{j}^{i} = \frac{\partial c_{i}}{\partial \bm{x}_{j}}, \mathbf{\Delta}_{j}^{i} = [\bm{\alpha}_{j}^{i},\bm{\beta}_{j}^{i},\bm{\gamma}_{j}^{i}],\\ &\bm{\alpha}_{k}^{i} = \frac{\partial a_{i}}{\partial \bm{x}_{k}}, \bm{\beta}_{k}^{i} = \frac{\partial b_{i}}{\partial \bm{x}_{k}}, \bm{\gamma}_{k}^{i} = \frac{\partial c_{i}}{\partial \bm{x}_{k}}, \mathbf{\Delta}_{k}^{i} = [\bm{\alpha}_{k}^{i},\bm{\beta}_{k}^{i},\bm{\gamma}_{k}^{i}],\\ &\mathbf{H}^{a_{i}}_{jk} = \frac{\partial^{2} a_{i}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}, \mathbf{H}^{b_{i}}_{jk} = \frac{\partial^{2} b_{i}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}, \mathbf{H}^{c_{i}}_{jk} = \frac{\partial^{2} c_{i}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}.
\end{split} \end{equation} Using the above lemmas and notations, we can derive $\bm{g}_{j}^{i}$ and $\mathbf{H}_{jk}^{i}$. \begin{customthm}{1} \label{theorem:g_H} $\bm{g}_{j}^{i}$ and $\mathbf{H}_{jk}^{i}$ have the forms \begin{equation} \label{equ:g_H} \begin{split} \bm{g}_{j}^{i} &= -\varphi_{i}\mathbf{\Delta}_{j}^{i}\bm{\chi}_{i}, \\ \mathbf{H}_{jk}^{i} &= \varphi_{i}\left(\mathbf{K}_{jk}^{i} - {\lambda_{i,3}^{2}}\mathbf{H}^{a_{i}}_{jk} - \lambda_{i,3}\mathbf{H}^{b_{i}}_{jk} - \mathbf{H}^{c_{i}}_{jk}\right), \end{split} \end{equation} where $\mathbf{K}_{jk}^{i} = -\bm{g}_{j}^{i}{(\bm{u}_{k}^{i})}^{T} - \bm{v}_{j}^{i}({\bm{g}_{k}^{i}})^{T}$, $\bm{u}_{k}^{i} = 2\lambda_{i,3}\bm{\alpha}_{k}^{i}+\bm{\beta}_{k}^{i} + (2a_{i}-6\lambda_{i,3})\bm{g}_{k}^{i}$, $\bm{v}_{j}^{i} = 2\lambda_{i,3}\bm{\alpha}_{j}^{i}+\bm{\beta}_{j}^{i}$, and, similar to $\bm{g}_{j}^{i}$, $\bm{g}_{k}^{i}=-\varphi_{i}\mathbf{\Delta}_{k}^{i}\bm{\chi}_{i}$ is the gradient block for $\bm{x}_{k}$. \end{customthm} The formula of $\mathbf{H}_{jk}^{i}$ in (\ref{equ:g_H}) also applies to the case $j = k$. From Theorem \ref{theorem:g_H}, we know that the key to obtaining $\bm{g}_{j}^{i}$ and $\mathbf{H}_{jk}^{i}$ is to compute the derivatives of $a_{i}$, $b_{i}$ and $c_{i}$ in (\ref{equ:d_abc}). \subsection{Partial Derivatives of $a_{i}$, $b_{i}$ and $c_{i}$} As shown in (\ref{equ:cub}), $a_{i}$, $b_{i}$ and $c_{i}$ are functions of the elements of $\mathbf{M}_{i}$. Using this relationship, we can easily derive the partial derivatives in (\ref{equ:d_abc}). For instance, as $a_{i} = m_{11} + m_{22} + m_{33}$, we have \begin{equation} \frac{\partial a_{i}}{\partial \bm{x}_{j}} = \frac{\partial m_{11}}{\partial \bm{x}_{j}} + \frac{\partial m_{22}}{\partial \bm{x}_{j}} + \frac{\partial m_{33}}{\partial \bm{x}_{j}}.
\end{equation} Thus, to get the first- and second-order partial derivatives of $a_{i}$, $b_{i}$ and $c_{i}$ with respect to $\bm{x}_{j}$ and $\bm{x}_{k}$ in (\ref{equ:d_abc}), we need to derive the form of $\mathbf{M}_{i}$ in terms of $\bm{x}_{j}$ and $\bm{x}_{k}$. \begin{lemma} \label{lemma:mean_pt_decomp} In terms of $\bm{x}_{j}$ and $\bm{x}_{k}$, $\bar{\bm{p}}_{i}$ in (\ref{equ:S_p}) has the form \begin{equation} \bar{\bm{p}}_{i}(\bm{x}_{j}, \bm{x}_{k}) = \mathbf{T}_{j}\bm{q}_{ij} + \mathbf{T}_{k}\bm{q}_{ik} + \bm{c}_{ijk}, \end{equation} where $\bm{q}_{ij} = \frac{1}{N_{i}}\bm{\tilde{p}}_{ij}$, $\bm{q}_{ik} = \frac{1}{N_{i}}\bm{\tilde{p}}_{ik}$, and $\bm{c}_{ijk} = \frac{1}{N_i}\sum_{n\in \mathbb{O}_{jk}}\mathbf{T}_{n}\tilde{\bm{p}}_{in}$. Here $\mathbb{O}_{jk}$ represents the set of poses where $\bm{\pi}_{i}$ can be observed, excluding the $j$th and $k$th poses (\textit{i.e.}, $\mathbb{O}_{jk} = obs(\bm{\pi}_{i}) - \{j,k\}$). In terms of $\bm{x}_{j}$, $\bar{\bm{p}}_{i}$ has the form \begin{equation} \bar{\bm{p}}_{i}(\bm{x}_{j}) = \mathbf{T}_{j}\bm{q}_{ij} + \bm{c}_{ij}, \end{equation} where $\bm{c}_{ij} = \mathbf{T}_{k}\bm{q}_{ik}+\bm{c}_{ijk}$. \end{lemma} Using Lemma \ref{lemma:mean_pt_decomp}, we can establish the following theorem for $\mathbf{M}_{i}$ in (\ref{equ:M_i}). \begin{customthm}{2} \label{theorem:M_i} In terms of $\bm{x}_{j}$, $\mathbf{M}_{i}$ in (\ref{equ:M_i}) can be written as \begin{equation} \label{equ:Mij} \mathbf{M}_{i}(\bm{x}_{j}) = \mathbf{T}_{j}\mathbf{Q}_{j}^{i}\mathbf{T}_{j}^{T} + \mathbf{T}_{j}\mathbf{K}_j^{i} + (\mathbf{K}_j^{i})^{T}\mathbf{T}_{j}^{T} + \mathbf{C}_{j}^{i}, \end{equation} where $\mathbf{Q}_{j}^{i} = \mathbf{U}_{ij} - N_{i}\bm{q}_{ij}\bm{q}_{ij}^{T}$ and $\mathbf{K}_{j}^{i} = -N_{i}\bm{q}_{ij}\bm{c}_{ij}^T$.
In terms of $\bm{x}_{j}$ and $\bm{x}_{k}$, $\mathbf{M}_{i}$ can be written as \begin{equation}\label{equ:M_ijk} \mathbf{M}_{i}(\bm{x}_{j},\bm{x}_{k}) = \mathbf{T}_{j}\mathbf{O}_{jk}^{i}\mathbf{T}_{k}^{T} + \mathbf{T}_{k}{(\mathbf{O}_{jk}^{i})}^{T}\mathbf{T}_{j}^{T} + \mathbf{C}_{jk}^{i}, \end{equation} where $\mathbf{O}_{jk}^{i} = -N_{i}\bm{q}_{ij}\bm{q}_{ik}^{T}$. \end{customthm} Here we do not provide the detailed formulas for $\mathbf{C}_{j}^{i}$ and $\mathbf{C}_{jk}^{i}$, as they are eliminated in the partial derivatives. Actually, only $\mathbf{Q}_{j}^{i}$, $ \mathbf{K}_{j}^{i}$, and $\mathbf{O}_{jk}^{i}$ are required to compute the partial derivatives in (\ref{equ:d_abc}). Equation (\ref{equ:Mij}) is used to compute the first- and second-order partial derivatives of $a_{i}$, $b_{i}$ and $c_{i}$ with respect to $\bm{x}_{j}$. Equation (\ref{equ:M_ijk}) is used to compute the second-order partial derivatives of $a_{i}$, $b_{i}$ and $c_{i}$ with respect to $\bm{x}_{j}$ and $\bm{x}_{k}$. \subsection{Efficient Iteration} From Theorem \ref{theorem:M_i}, we can easily derive the elements of $\mathbf{M}_{i}(\bm{x}_{j})$ and $\mathbf{M}_{i}(\bm{x}_{j}, \bm{x}_{k})$. Specifically, each of their elements is a second-order polynomial in the elements of $\mathbf{T}_{j}$ and $\mathbf{T}_{k}$. Assume $m_{ef}(\bm{x}_{j})$ and $m_{ef}(\bm{x}_{j}, \bm{x}_{k})$ are the $e$th row $f$th column elements of $\mathbf{M}_{i}(\bm{x}_{j})$ and $\mathbf{M}_{i}(\bm{x}_{j},\bm{x}_{k})$, respectively. $m_{ef}(\bm{x}_{j})$ and $m_{ef}(\bm{x}_{j}, \bm{x}_{k})$ are linear combinations of monomials in the elements of $\mathbf{T}_{j}$ and $\mathbf{T}_{k}$.
Substituting (\ref{equ:CGR}) into $m_{ef}(\bm{x}_{j})$ and $m_{ef}(\bm{x}_{j}, \bm{x}_{k})$, we have \begin{equation} \begin{split} m_{ef}(\bm{x}_{j}) & = \bm{c}_{ef} \cdot \bm{h}_{ef}(\bm{x}_{j}), \\ m_{ef}(\bm{x}_{j}, \bm{x}_{k}) & = \bm{d}_{ef} \cdot \bm{g}_{ef}(\bm{x}_{j}, \bm{x}_{k}), \end{split} \end{equation} where $\bm{c}_{ef}$ is determined by $\mathbf{Q}_{j}^{i}$ and $\mathbf{K}^{i}_{j}$ in (\ref{equ:Mij}), $\bm{d}_{ef}$ is determined by $\mathbf{O}_{jk}^{i}$ in (\ref{equ:M_ijk}), and $\bm{h}_{ef}$ and $\bm{g}_{ef}$ are two vector functions. Let us first consider the first-order partial derivative of $m_{ef}(\bm{x}_{j})$ with respect to $\bm{x}_{j}$. It has the form \begin{equation} \label{equ:d_h} \frac{\partial m_{ef}(\bm{x}_{j})}{\partial \bm{x}_{j}} = \frac{\partial \bm{h}_{ef}(\bm{x}_{j})}{\partial \bm{x}_{j}} \bm{c}_{ef}, \end{equation} where the vector-by-vector derivative $\frac{\partial \bm{h}_{ef}(\bm{x}_{j})}{\partial \bm{x}_{j}}$ is defined in (\ref{equ:vec_by_vec}). To efficiently compute (\ref{equ:d_h}), we consider a special pose $\mathbf{T}_{0} = [\mathbf{R}_{0}, \bm{t}_{0}]$ where $\mathbf{R}_{0} = \mathbf{I}_{3}$ and $\bm{t}_{0} = [0;0;0]$. Let us denote the parameterization of $\mathbf{T}_{0}$ as $\bm{x}_{0}$. As the CGR parameterization defined in (\ref{equ:CGR}) for $\mathbf{I}_{3}$ is $[0;0;0]$, we have $\bm{x}_{0} = [0;0;0;0;0;0]$. For $\bm{x}_{j} = \bm{x}_{0}$, the matrix $\left. \frac{\partial \bm{h}_{ef}(\bm{x}_{j})}{\partial \bm{x}_{j}}\right| _{{\bm{x}_{j}} = \bm{x}_{0}}$ has many zero terms. Similarly, the second-order partial derivatives of $\bm{h}_{ef}(\bm{x}_{j})$ and $\bm{g}_{ef}(\bm{x}_{j},\bm{x}_{k})$ at $\bm{x}_{j} = \bm{x}_{0}$ and $\bm{x}_{k} = \bm{x}_{0}$ are simple. As $\bm{h}_{ef}(\bm{x}_{j})$ and $\bm{g}_{ef}(\bm{x}_{j},\bm{x}_{k})$ depend only on $\bm{x}_{j}$ and $\bm{x}_{k}$, we can compute their partial derivatives at $\bm{x}_{0}$ once, and then reuse them during the iterations.
Here we introduce a method to make the iteration stay at $\bm{x}_{0}$ for each pose. Assume that $\{\mathbf{X}_{j}^{n}\}$ are the poses after the $n$th iteration. Then we can update $\mathbf{U}_{ij}$ and $\tilde{\bm{p}}_{ij}$ in (\ref{equ:S_p}) by \begin{equation} \label{equ:update_U_p} \mathbf{U}_{ij}^{n+1} = \mathbf{X}_{j}^{n}\mathbf{U}_{ij}( \mathbf{X}_{j}^{n})^{T} \ \text{and} \ \tilde{\bm{p}}_{ij}^{n+1} = \mathbf{X}_{j}^{n}\tilde{\bm{p}}_{ij}. \end{equation} Substituting $\mathbf{U}_{ij}^{n+1}$ and $\tilde{\bm{p}}_{ij}^{n+1}$ into (\ref{equ:M_i}), we get a new matrix $\mathbf{M}_{i}(\mathbb{X}_{i})^{n+1}$, which finally leads to a new cost $\tau^{n+1}$ in (\ref{equ:PA_cost_pose}). As the points have been transformed by $\{\mathbf{X}_{j}^{n}\}$, each pose starts from $\mathbf{X}_0$ for $\tau^{n+1}$. Assume that $\Delta \bm{x}^{n+1}_{j}$ is the result of the $(n+1)$th iteration for the $j$th pose. We can obtain the corresponding transformation matrix $\Delta\mathbf{X}_{j}^{n+1}$ using (\ref{equ:CGR}). Then we can update $\mathbf{X}_{j}^{n}$ by \begin{equation} \mathbf{X}_{j}^{n+1} = \Delta\mathbf{X}_{j}^{n+1}\mathbf{X}_{j}^{n}. \end{equation} Furthermore, the update steps in (\ref{equ:update_U_p}) do not introduce additional computation. This is because the damped Newton's method requires computing the cost $\tau$ in (\ref{equ:PA_cost_pose}) to adjust $\mu$ in (\ref{equ:netwon}) after each iteration, which already performs the computation in (\ref{equ:update_U_p}). \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{Dataset_all} \caption{The four datasets used in this paper. The four datasets have 472, 1355, 1606, 1184 poses, respectively. Roofs are removed to show the trajectories.} \label{fig:dataset} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{noisy_pc} \caption{The point clouds of dataset C after the poses were perturbed by the four noise levels.
} \label{fig:noisy_pc} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.93\linewidth]{result_AB} \includegraphics[width=0.93\linewidth]{result_CD} \caption{The results of our algorithm and PA\_LM \cite{zhou2020efficient} under different noise levels. It is clear that our algorithm converges significantly faster than PA\_LM.} \label{fig:resultts} \end{figure*} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{Iterations} \caption{The number of iterations for our algorithm and PA\_LM \cite{zhou2020efficient}. Our algorithm requires far fewer iterations than PA\_LM. } \label{fig:iteration} \end{figure} \subsection{Algorithm Summary} We first construct $\mathbf{H}$ and $\bm{g}$. For each plane $\bm{\pi}_{i}$, we solve the cubic equation (\ref{equ:cub}), and select the smallest root $\lambda_{i,3}$. For $\bm{x}_{j}$, we construct $\mathbf{M}_{i}(\bm{x}_{j})$ in (\ref{equ:Mij}), and calculate the partial derivatives of $a_{i}$, $b_{i}$ and $c_{i}$ with respect to $\bm{x}_{j}$ in (\ref{equ:d_abc}). Then, we use (\ref{equ:g_H}) to compute $\bm{g}_{j}^{i}$ and $\mathbf{H}_{jj}^{i}$ and use (\ref{equ:g_j and H_jk}) to update $\bm{g}_{j}$ and $\mathbf{H}_{jj}$. For $\bm{x}_{j}$ and $\bm{x}_{k}$, $\mathbf{M}_{i}(\bm{x}_{j}, \bm{x}_{k})$ is generated, and then the partial derivatives of $a_{i}$, $b_{i}$ and $c_{i}$ with respect to $\bm{x}_{j}$ and $\bm{x}_{k}$ in (\ref{equ:d_abc}) are computed. Then, $\mathbf{H}_{jk}^{i}$ can be easily obtained from (\ref{equ:g_H}), and $\mathbf{H}_{jk}$ in (\ref{equ:g_j and H_jk}) is updated accordingly. Using $\mathbf{H}$ and $\bm{g}$, we conduct the damped Newton's step in (\ref{equ:netwon}). After each iteration, $\mathbf{U}_{ij}$ and $\tilde{\bm{p}}_{ij}$ are updated by (\ref{equ:update_U_p}). The proposed algorithm is summarized in Algorithm \ref{alg:2nd_pa}. Let us denote the mean and variance of the number of observations per plane as $\bar{K}$ and $\sigma^{2}$, respectively.
According to \cite{demmel2021square}, the computational complexity of computing the Hessian matrix is $O(M(\bar{K}^{2} + \sigma^{2} ))$, which is of the same order as the Schur complement trick. \begin{algorithm} \caption{Second-order plane adjustment for $N$ poses and $M$ planes}\label{alg:2nd_pa} \While{not converged}{ $\mathbf{H} = zeros(6N,6N)$, $\bm{g} = zeros(6N,1)$; \\ \For{$i \in [1, M]$}{ { \Comment{\small Compute $\bm{g}$ and the diagonal terms of $\mathbf{H}.$}} \For{$j \in obs(\bm{\pi}_{i})$}{ Compute $\mathbf{M}_{i}(\bm{x}_{j})$ using (\ref{equ:Mij}); \\ Compute $\bm{\alpha}_{j}^{i}$, $\bm{\beta}_{j}^{i}$, $\bm{\gamma}_{j}^{i}$, $\mathbf{H}_{jj}^{a_{i}}$, $\mathbf{H}_{jj}^{b_{i}}$, $\mathbf{H}_{jj}^{c_{i}}$ using (\ref{equ:d_abc}); \\ Compute $\mathbf{H}_{jj}^{i}$ and $\bm{g}_{j}^{i}$ using (\ref{equ:g_H});\\ $\mathbf{H}_{jj} = \mathbf{H}_{jj} + \mathbf{H}_{jj}^{i}$, $\bm{g}_{j} = \bm{g}_{j} + \bm{g}_{j}^{i} $;\\ } \Comment{\small Compute other terms of $\mathbf{H}.$} \For{$j \in obs(\bm{\pi}_{i})$}{ \For{$k \in obs(\bm{\pi}_{i})$ and $k > j$}{ Compute $\mathbf{H}_{jk}^{a_{i}}$, $\mathbf{H}_{jk}^{b_{i}}$, $\mathbf{H}_{jk}^{c_{i}}$ using (\ref{equ:d_abc});\\ Compute $\mathbf{H}_{jk}^{i}$ using (\ref{equ:g_H});\\ $\mathbf{H}_{jk} = \mathbf{H}_{jk} + \mathbf{H}_{jk}^{i}$; \\ $\mathbf{H}_{kj} = \mathbf{H}_{kj} + (\mathbf{H}_{jk}^{i})^{T}$; \\ } } } Conduct the damped Newton's step in (\ref{equ:netwon}) ; \\ Update $\mathbf{U}_{ij}$ and $\tilde{\bm{p}}_{ij}$ using (\ref{equ:update_U_p}); } \end{algorithm} \section{Experiments} \subsection{Setup} In this section, we evaluate the performance of our algorithm and the traditional LM algorithm \cite{zhou2020efficient} (PA\_LM). We obtained the C++ code of PA\_LM from the authors of \cite{zhou2020efficient}, which is implemented with the Ceres library \cite{Agarwal_Ceres_Solver_2022}. Our damped Newton's method is implemented following the implementation of the LM algorithm in Ceres.
The damped Newton's method and the LM algorithm use the same parameters. Specifically, the initial value of the damping factor $\mu$ in (\ref{equ:netwon}) is set to $10^{-4}$. The maximum number of iterations is set to 200, and the early stopping tolerances (such as the cost value change and the norm of the gradient) are set to $10^{-7}$. All the experiments were conducted on a desktop with an Intel i7 CPU and 64 GB of memory. \subsection{Datasets} We collected four datasets using a VLP-16 LiDAR. We used the LiDAR SLAM algorithm \cite{zhou2021lidar} to detect the planes and establish the plane association. Fig.~\ref{fig:dataset} shows the four datasets. Similar to the evaluation of BA algorithms \cite{agarwal2010bundle,zhou2020stochastic,demmel2021square}, we perturb the poses, and compare the convergence speed of different PA algorithms. Specifically, we directly add Gaussian noise to the translation, and randomly generate an angle-axis vector from a Gaussian distribution to perturb the rotation. After the poses are perturbed, we use (\ref{equ:opt_pi}) to get the initial plane parameters for PA\_LM \cite{zhou2020efficient}. We evaluate the performance of different algorithms under different noise levels. Let us denote the standard deviations (std) of the Gaussian noise for rotation and translation as $\sigma_{\mathbf{R}}$ and $\sigma_{\bm{t}}$, respectively. We consider four noise levels: $\sigma_{\mathbf{R}} = 0.1^{\circ}$ and $\sigma_{\bm{t}} = 0.01 m$, $\sigma_{\mathbf{R}} = 1^{\circ}$ and $\sigma_{\bm{t}} = 0.1 m$, $\sigma_{\mathbf{R}} = 2^{\circ}$ and $\sigma_{\bm{t}} = 0.2 m$, and $\sigma_{\mathbf{R}} = 3^{\circ}$ and $\sigma_{\bm{t}} = 0.3 m$. Fig.~\ref{fig:noisy_pc} demonstrates the point clouds of dataset C after the poses are perturbed by the four noise levels. \subsection{Results} Fig.~\ref{fig:resultts} and Fig.~\ref{fig:iteration} illustrate the results. It is clear that our algorithm converges faster than PA\_LM.
PA\_LM works well at small noise levels (such as $\sigma_{\mathbf{R}} = 0.1^{\circ}$ and $\sigma_{\bm{t}} = 0.01 m$). As the noise level increases, PA\_LM tends to converge more slowly. For datasets A and B, PA\_LM does not converge before the maximum number of iterations is reached when $\sigma_{\mathbf{R}} = 3^{\circ}$ and $\sigma_{\bm{t}} = 0.3 m$. For datasets C and D, the performance of PA\_LM gets worse. It converges very slowly after the noise level reaches $\sigma_{\mathbf{R}} = 1^{\circ}$ and $\sigma_{\bm{t}} = 0.1 m$. In contrast, the impact of the noise level on our algorithm is small. Our algorithm is more robust to noise. \subsection{Discussion} Our algorithm converges faster than PA\_LM at all noise levels. This is because our algorithm takes advantage of the special relationship between planes and poses in (\ref{equ:opt_pi}). This not only significantly reduces the number of variables, but also ensures that the planes attain the optimal estimation with respect to the current pose estimation after each iteration. Although PA\_LM jointly optimizes planes and poses, it cannot guarantee that the planes achieve the optimal value after each iteration. That is to say, even if our algorithm and PA\_LM obtain the same poses, our algorithm can reach a smaller cost. Thus our algorithm converges faster. \section{Conclusion} In the computer vision community, Newton's method is generally considered too expensive for a large-scale least-squares problem. This paper adopts Newton's method to efficiently solve the PA problem. Our algorithm takes advantage of the fact that the optimal planes are determined by the poses, so that the number of unknowns can be significantly reduced. Furthermore, this property ensures that the planes are optimal whenever we update the poses. The difficulty lies in how to efficiently compute the Hessian matrix and the gradient vector. The key contribution of this paper is to provide a closed-form solution for them.
The experimental results show that our algorithm converges faster than the LM algorithm. \section{Supplementary Material} \subsection{Implicit Function Theorem} Here we introduce the implicit function theorem \cite{krantz2002implicit}. We use it to derive Lemma \ref{lemma:first_order}. \noindent \textbf{Implicit Function Theorem} \ \textit{Let $\bm{f}:\mathbb{R}^{n+m} \rightarrow \mathbb{R}^{m}$ be a continuously differentiable function, and let $ \mathbb {R}^{n+m}$ have coordinates $\left[ {\bm{x}},{\bm{y}}\right]$. Fix a point $\left[\bm{a},\bm{b} \right] = \left[a_{1}, \cdots a_{n}, b_{1} \cdots b_{m} \right]$ with $\bm{f}(\bm{a},\bm{b}) = \mathbf{0}$, where $\mathbf{0} \in \mathbb{R}^{m}$ is the zero vector. If the Jacobian matrix of $\bm{f}$ with respect to $\bm{y}$ is invertible at $\left[ \bm{a},\bm{b}\right] $, then there exists an open set $\mathbb{U} \subset \mathbb{R}^{n}$ containing $\bm{a}$ such that there exists a unique continuously differentiable function $\bm{g}:\mathbb{U} \rightarrow \mathbb{R}^{m}$ such that $\bm{g}(\bm{a}) = \bm{b}$, and $\bm{f}(\bm{x},\bm{g}(\bm{x}))=\mathbf{0}$ for all $\bm{x} \in \mathbb{U}$. Moreover, the Jacobian matrix of $\bm{g}$ in $\mathbb{U}$ with respect to $\bm{x}$ is given by the matrix product: \begin{equation} \frac{\partial \bm{g}}{\partial \bm{x}}(\bm{x}) = -{\mathbf{J}_{\bm{f},\bm{y}}(\bm{x},\bm{g}(\bm{x}))}^{-1}\mathbf{J}_{\bm{f},\bm{x}}(\bm{x},\bm{g}(\bm{x})), \end{equation} where ${\mathbf{J}_{\bm{f},\bm{y}}(\bm{x},\bm{g}(\bm{x}))}$ is the Jacobian matrix of $\bm{f}$ with respect to $\bm{y}$ at $\left[ \bm{x}, \bm{g}(\bm{x})\right] $, and ${\mathbf{J}_{\bm{f},\bm{x}}(\bm{x},\bm{g}(\bm{x}))}$ is the Jacobian matrix of $\bm{f}$ with respect to $\bm{x}$ at $\left[ \bm{x}, \bm{g}(\bm{x})\right] $.} \textbf{From the implicit function theorem, we know that we can get $\frac{\partial \bm{g}}{\partial \bm{x}}\left( \bm{x}\right) $ without knowing the exact form of $\bm{g}(\bm{x})$}.
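Lemma \ref{lemma:first_order} follows this pattern. As a hedged numerical check (our own NumPy sketch; finite differences are used only for the polynomial coefficients, which are cheap to evaluate), the implicit-function derivative of the smallest eigenvalue matches the classical identity $\frac{\partial \lambda}{\partial t} = \bm{\xi}^{T}\frac{\partial \mathbf{M}}{\partial t}\bm{\xi}$ for a symmetric family $\mathbf{M}(t) = \mathbf{M}_{0} + t\mathbf{D}$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
M0 = A @ A.T                                   # symmetric base matrix
D = rng.standard_normal((3, 3))
D = D + D.T                                    # symmetric perturbation direction

def coeffs(M):
    # det(M - l*I) = -l^3 + a*l^2 + b*l + c for a symmetric 3x3 matrix M
    a = np.trace(M)
    b = 0.5 * (np.trace(M @ M) - np.trace(M) ** 2)
    c = np.linalg.det(M)
    return np.array([a, b, c])

lam = np.linalg.eigvalsh(M0)[0]                # smallest eigenvalue lambda_3
xi = np.linalg.eigh(M0)[1][:, 0]               # corresponding unit eigenvector
chi = np.array([lam**2, lam, 1.0])             # chi = [l^2; l; 1]
a, b, _ = coeffs(M0)
kappa = np.array([-3.0, 2.0 * a, b])           # kappa = [-3; 2a; b]
eps = 1e-7
delta = (coeffs(M0 + eps * D) - coeffs(M0)) / eps  # d(a,b,c)/dt (finite diff.)
dlam_ift = -(delta @ chi) / (kappa @ chi)      # implicit-function derivative
dlam_ref = xi @ D @ xi                         # classical eigenvalue derivative
assert np.isclose(dlam_ift, dlam_ref, atol=1e-5)
```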
\subsection{Proof of Lemma \ref{lemma:first_order}} \begin{proof} $a_{i}$, $b_{i}$, $c_{i}$ are functions of the poses in $\mathbb{X}_{i}$. Here we only consider one variable $x_{jm}$ of $\bm{x}_{j}$ (\textit{i.e.}, the $m$th entry of $\bm{x}_{j} \in \mathbb{X}_{i}$). To compute $\frac{\partial \lambda_{i,3}}{\partial x_{jm}}$, we treat $x_{jm}$ as the only unknown and the other variables in $\mathbb{X}_{i}$ as constants. Thus, $a_{i}$, $b_{i}$, $c_{i}$ are functions of $x_{jm}$. Then, we define \begin{equation} f(x_{jm},\lambda_{i,3}) = -\lambda^{3}_{i,3} + a_{i}\lambda^{2}_{i,3} + b_{i}\lambda_{i,3} + c_{i}. \end{equation} Then we have \begin{equation} \label{equ:ift_part} \begin{split} \frac{\partial f}{\partial \lambda_{i,3}} &= -3\lambda_{i,3}^{2} + 2a_{i}\lambda_{i,3} + b_{i}, \\ \frac{\partial f}{\partial x_{jm}} &= \lambda_{i,3}^{2}\frac{\partial a_{i}}{\partial x_{jm}} + \lambda_{i,3}\frac{\partial b_{i}}{\partial x_{jm}} + \frac{\partial c_{i}}{\partial x_{jm}}. \end{split} \end{equation} According to the definition of $\bm{\delta}_{jm}^{i}$ in (\ref{equ:first_order}), it has the form \begin{equation} \label{equ:delta_jm} \bm{\delta}_{jm}^{i} = \frac{\partial \bm{\eta}_{i}}{\partial x_{jm}} = \begin{bmatrix} \frac{\partial a_{i}}{\partial x_{jm}} \\ \frac{\partial b_{i}}{\partial x_{jm}} \\ \frac{\partial c_{i}}{\partial x_{jm}} \end{bmatrix}. \end{equation} Substituting the definitions of $\varphi_{i}$, $\bm{\chi}_{i}$, $\bm{\kappa}_{i}$, and $\bm{\delta}_{jm}^{i}$ into (\ref{equ:ift_part}), we have \begin{equation} \label{equ:ift_replace} \begin{split} \frac{\partial f}{\partial \lambda_{i,3}} &= \bm{\kappa}_{i} \cdot \bm{\chi}_{i} = \varphi_{i}^{-1}, \\ \frac{\partial f}{\partial x_{jm}} &= \bm{\delta}_{jm}^{i} \cdot \bm{\chi}_{i}.
\end{split} \end{equation} Using the implicit function theorem, for $f(x_{jm},\lambda_{i,3}) = 0$, we have \begin{equation} \label{equ:ift_lambda_x} \frac{\partial \lambda_{i,3}}{\partial x_{jm}} = -\frac{\frac{\partial f}{\partial x_{jm}}}{\frac{\partial f}{\partial \lambda_{i,3}}} \end{equation} Substituting (\ref{equ:ift_replace}) into (\ref{equ:ift_lambda_x}), we finally get \begin{equation} \frac{\partial \lambda_{i,3}}{\partial x_{jm}} = -\varphi_{i}\bm{\delta}_{jm}^{i}\cdot\bm{\chi}_{i}. \end{equation} \end{proof} \subsection{Proof of Lemma \ref{lemma:second_order}} \begin{proof} We first compute the partial derivative of $\frac{\partial \lambda_{i,3}}{\partial x_{jm}}$ in (\ref{equ:first_order}) with respect to $x_{kn}$. According to the product rule of calculus, we have \begin{equation} \label{equ:2nd_order_derivation} \frac{\partial^2 \lambda_{i,3}}{\partial x_{jm} \partial x_{kn}} = -\varphi_{i}\bm{\delta}_{jm}^{i} \cdot \frac{\partial \bm{\chi}_{i}}{\partial x_{kn}} -\varphi_{i}\bm{\chi}_{i} \cdot \frac{\partial \bm{\delta}_{jm}^{i}}{\partial x_{kn}} - \bm{\delta}_{jm}^{i} \cdot \bm{\chi}_{i}\frac{\partial \varphi_{i}}{\partial x_{kn}}. \end{equation} Let us first focus on the term $\frac{\partial \varphi_{i}}{\partial x_{kn}}$ in (\ref{equ:2nd_order_derivation}). As $\varphi_{i} = \left(\bm{\kappa}_{i} \cdot \bm{\chi}_{i} \right)^{-1}$, we have \begin{equation} \label{equ:d_phi} \frac{\partial \varphi_{i}}{\partial x_{kn}} = -\left(\bm{\kappa}_{i} \cdot \bm{\chi}_{i} \right)^{-2}\frac{\partial \left(\bm{\kappa}_{i} \cdot \bm{\chi}_{i} \right)}{\partial x_{kn}} = -\varphi_{i}^{2}\frac{\partial \varphi_{i}^{-1} }{\partial x_{kn}}. \end{equation} Now let us consider $\bm{\delta}_{jm}^{i} \cdot \bm{\chi}_{i}\frac{\partial \varphi_{i}}{\partial x_{kn}}$, which is the third term in (\ref{equ:2nd_order_derivation}).
Using (\ref{equ:d_phi}) and the fact that $\varphi_{i}\bm{\delta}_{jm}^{i}\cdot\bm{\chi}_{i} = -\frac{\partial \lambda_{i,3}}{\partial x_{jm}}$ from Lemma \ref{lemma:first_order}, we have \begin{equation} \label{equ:3rd_term} \begin{split} \bm{\delta}_{jm}^{i} \cdot \bm{\chi}_{i}\frac{\partial \varphi_{i}}{\partial x_{kn}} & = -\bm{\delta}_{jm}^{i} \cdot \bm{\chi}_{i}\varphi_{i}^{2}\frac{\partial \varphi_{i}^{-1} }{\partial x_{kn}} \\ & = -\varphi_{i}\underbrace{\left(\varphi_{i}\bm{\delta}_{jm}^{i} \cdot\bm{\chi}_{i} \right)}_{-\frac{\partial\lambda_{i,3}}{\partial x_{jm}}} \frac{\partial \varphi_{i}^{-1} }{\partial x_{kn}} \\ & = \varphi_{i}\frac{\partial\lambda_{i,3}}{\partial x_{jm}}\frac{\partial \varphi_{i}^{-1} }{\partial x_{kn}} \end{split} \end{equation} Substituting (\ref{equ:3rd_term}) into (\ref{equ:2nd_order_derivation}), we have \begin{equation} \small \begin{split} \frac{\partial^2 \lambda_{i,3}}{\partial x_{jm} \partial x_{kn}} &= -\varphi_{i}\bm{\delta}_{jm}^{i} \cdot \frac{\partial \bm{\chi}_{i}}{\partial x_{kn}} -\varphi_{i}\bm{\chi}_{i} \cdot \frac{\partial \bm{\delta}_{jm}^{i}}{\partial x_{kn}} - \varphi_{i}\frac{\partial\lambda_{i,3}}{\partial x_{jm}}\frac{\partial \varphi_{i}^{-1} }{\partial x_{kn}} \\ &= -\varphi_{i}\left(\bm{\delta}_{jm}^{i} \cdot \frac{\partial \bm{\chi}_{i}}{\partial x_{kn}} + \bm{\chi}_{i} \cdot \frac{\partial \bm{\delta}_{jm}^{i}}{\partial x_{kn}} + \frac{\partial \lambda_{i,3}}{\partial x_{jm}} \frac{\partial \varphi_{i}^{-1}}{\partial x_{kn}}\right) \end{split} \end{equation} \end{proof} \subsection{Proof of Theorem \ref{theorem:g_H}} \begin{figure} \centering \includegraphics[width=1\linewidth]{algorithm_schematic} \caption{Summary of our algorithm. $\mathbb{X}_{i}$ is the set of poses which can see $\bm{\pi}_{i}$. Here $\bm{x}_{j} \in \mathbb{X}_{i}$ and $\bm{x}_{k} \in \mathbb{X}_{i}$.
The key point of our algorithm is to get $\bm{g}_{j}^{i} = \frac{\partial \lambda_{i,3}}{\partial \bm{x}_{j}}$, $\mathbf{H}_{jj}^{i} = \frac{\partial^{2} \lambda_{i,3}}{\partial \bm{x}_{j}^{2}}$, and $\mathbf{H}_{jk}^{i} = \frac{\partial^{2} \lambda_{i,3} }{\partial \bm{x}_{j} \partial \bm{x}_{k}}$. Theorem \ref{theorem:g_H} provides their formulas. From Theorem \ref{theorem:g_H}, we know that the partial derivatives of $a_{i}$, $b_{i}$, and $c_{i}$ with respect to $\bm{x}_{j}$ and $\bm{x}_{k}$ in (\ref{equ:d_abc}) are crucial. As $a_{i}$, $b_{i}$, and $c_{i}$ are polynomials with respect to $m_{ef}$ ($e=1,2,3$ and $f=1,2,3$), to get the partial derivatives of $a_{i}$, $b_{i}$, and $c_{i}$ with respect to $\bm{x}_{j}$ and $\bm{x}_{k}$, we need to compute the partial derivatives of $m_{ef}$ with respect to $\bm{x}_{j}$ and $\bm{x}_{k}$. Section \ref{sec:d_M_i} provides these partial derivatives of $m_{ef}$. } \label{fig:alg_summary} \end{figure} \begin{proof} Expanding the definition of $\bm{\Delta}_{j}^{i}$ in (\ref{equ:d_abc}), we have \begin{equation} \label{equ:Delta} \bm{\Delta}_{j}^{i} = \begin{bmatrix} \frac{\partial a_{i}}{\partial x_{j1}} & \frac{\partial b_{i}}{\partial x_{j1}} & \frac{\partial c_{i}}{\partial x_{j1}} \\ \vdots & \vdots & \vdots \\ \frac{\partial a_{i}}{\partial x_{jm}} & \frac{\partial b_{i}}{\partial x_{jm}} & \frac{\partial c_{i}}{\partial x_{jm}} \\ \vdots & \vdots & \vdots \\ \frac{\partial a_{i}}{\partial x_{j6}} & \frac{\partial b_{i}}{\partial x_{j6}} & \frac{\partial c_{i}}{\partial x_{j6}} \end{bmatrix} \in \mathbb{R}^{6 \times 3} \end{equation} Substituting the definition of $\bm{\delta}_{jm}^{i}$ in (\ref{equ:delta_jm}) into (\ref{equ:Delta}), we can write $\bm{\Delta}_{j}^{i}$ as \begin{equation} \label{equ:Delta_rewrite} \bm{\Delta}_{j}^{i} = \begin{bmatrix} {\bm{\delta}_{j1}^{i}}^{T} \\ \vdots \\ {\bm{\delta}_{jm}^{i}}^{T} \\ \vdots \\ {\bm{\delta}_{j6}^{i}}^{T} \end{bmatrix} \end{equation} Assume $x_{jm}$ is the $m$th
variable of $\bm{x}_{j}$. Then $\bm{g}_{j}^{i}$ can be written as \begin{equation} \bm{g}_{j}^{i} = \frac{\partial \lambda_{i,3}}{\partial \bm{x}_{j}} = \begin{bmatrix} \frac{\partial \lambda_{i,3}}{\partial {x}_{j1}} \\ \vdots \\ \frac{\partial \lambda_{i,3}}{\partial {x}_{jm}} \\ \vdots \\ \frac{\partial \lambda_{i,3}}{\partial {x}_{j6}} \end{bmatrix} \in \mathbb{R}^{6}. \end{equation} Here $\frac{\partial \lambda_{i,3}}{\partial x_{jm}}$ is the $m$th element of $\bm{g}_{j}^{i}$. Substituting (\ref{equ:Delta_rewrite}) into $\bm{g}_{j}^{i}$ in (\ref{equ:g_H}), we can obtain the formula of $\frac{\partial \lambda_{i,3}}{\partial x_{jm}}$ as \begin{equation} \frac{\partial \lambda_{i,3}}{\partial x_{jm}} = -\varphi_{i}{\bm{\delta}_{jm}^{i}}^{T}\bm{\chi}_{i} = -\varphi_{i}{\bm{\delta}_{jm}^{i}}\cdot \bm{\chi}_{i}. \end{equation} The above formula is what we proved in Lemma \ref{lemma:first_order}. Using Lemma \ref{lemma:first_order}, we know that the formula of $\bm{g}_{j}^{i}$ in (\ref{equ:g_H}) is correct. Now we consider the Hessian matrix.
According to the definitions of $\bm{\delta}_{kn}^{i}$, $\bm{\chi}_{i}$, and $\bm{\delta}_{jm}^{i}$, we have \begin{equation} \label{equ:delta_chi_delta_2} \small \begin{split} \bm{\delta}_{kn}^{i} &= \begin{bmatrix} \frac{\partial a_{i}}{\partial x_{kn}} \\ \frac{\partial b_{i}}{\partial x_{kn}} \\ \frac{\partial c_{i}}{\partial x_{kn}} \end{bmatrix}, \frac{\partial \bm{\chi}_{i}}{\partial x_{kn}} = \begin{bmatrix} 2\lambda_{i,3}\frac{\partial \lambda_{i,3}}{\partial x_{kn}} \\ \frac{\partial \lambda_{i,3}}{\partial x_{kn}}\\ 0 \end{bmatrix} , \frac{\partial \bm{\delta}_{jm}^{i}}{\partial x_{kn}} = \begin{bmatrix} \frac{\partial^{2} a_{i}}{\partial x_{jm} \partial x_{kn}} \\ \frac{\partial^{2} b_{i}}{\partial x_{jm} \partial x_{kn}} \\ \frac{\partial^{2} c_{i}}{\partial x_{jm} \partial x_{kn}} \end{bmatrix} \end{split} \end{equation} In addition, using the definition of $\varphi_{i}$ in (\ref{equ:first_order}), we obtain \begin{equation}\label{equ:d_inv_phi} \begin{split} \frac{\partial \varphi_{i}^{-1}}{\partial x_{kn}} &= \bm{\chi}_{i} \cdot \frac{\partial \bm{\kappa}_{i}}{\partial x_{kn}} + \bm{\kappa}_{i} \cdot \frac{\partial \bm{\chi}_{i}}{\partial x_{kn}}\\ & = 2\lambda_{i,3}\frac{\partial a_{i}}{\partial x_{kn}} + \frac{\partial b_{i}}{\partial x_{kn}} + \left( 2a_{i}-6\lambda_{i,3}\right) \frac{\partial \lambda_{i,3}}{\partial x_{kn}} \end{split} \end{equation} Let us denote the entry in the $m$th row and $n$th column of $\mathbf{H}_{jk}^{i}$ as $\mathbf{H}_{jk}^{i}(m,n)$.
Using the formula of $\mathbf{H}_{jk}^{i}$ in (\ref{equ:g_H}) and the variables in (\ref{equ:delta_chi_delta_2}) and (\ref{equ:d_inv_phi}), we have \begin{equation} \label{equ:H_jk(m,n)} \small \begin{split} \mathbf{H}_{jk}^{i}(m,n) = &-\underbrace{\varphi_{i}\frac{\partial \lambda_{i,3}}{\partial x_{kn}}\left(2\lambda_{i,3}\frac{\partial a_{i}}{\partial x_{jm}} + \frac{\partial b_{i}}{\partial x_{jm}}\right)}_{\varphi_{i}\bm{\delta}_{jm}^{i} \cdot \frac{\partial \bm{\chi}_{i}}{\partial x_{kn}}} - \\ &\underbrace{\varphi_{i}\left(\lambda_{i,3}^{2}\frac{\partial^{2} a_{i}}{\partial x_{jm}\partial x_{kn}}+ \lambda_{i,3}\frac{\partial^{2} b_{i}}{\partial x_{jm}\partial x_{kn}} + \frac{\partial^{2} c_{i}}{\partial x_{jm}\partial x_{kn}}\right)}_{\varphi_{i}\bm{\chi}_{i}\cdot\frac{\partial \bm{\delta}_{jm}^{i}}{\partial x_{kn}}} - \\ &\underbrace{\varphi_{i}\frac{\partial \lambda_{i,3}}{\partial x_{jm}}\left( 2\lambda_{i,3}\frac{\partial a_{i}}{\partial x_{kn}} + \frac{\partial b_{i}}{\partial x_{kn}} + \left( 2a_{i}-6\lambda_{i,3}\right) \frac{\partial \lambda_{i,3}}{\partial x_{kn}}\right)}_{\varphi_{i}\frac{\partial \lambda_{i,3}}{\partial x_{jm}}\frac{\partial \varphi_{i}^{-1}}{\partial x_{kn}}} \\ = & -\varphi_{i}\left( \bm{\delta}_{jm}^{i} \cdot \frac{\partial \bm{\chi}_{i}}{\partial x_{kn}} + \bm{\chi}_{i}\cdot\frac{\partial \bm{\delta}_{jm}^{i}}{\partial x_{kn}} +\frac{\partial \lambda_{i,3}}{\partial x_{jm}}\frac{\partial \varphi_{i}^{-1}}{\partial x_{kn}} \right) \end{split} \end{equation} On the other hand, we know \begin{equation} \mathbf{H}_{jk}^{i}(m,n) = \frac{\partial^2 \lambda_{i,3}}{\partial x_{jm} \partial x_{kn}}. \end{equation} Comparing (\ref{equ:second_order}) and (\ref{equ:H_jk(m,n)}), we know that the formula of $\mathbf{H}_{jk}^{i}$ in (\ref{equ:g_H}) is correct.
\end{proof} \subsection{Proof of Lemma \ref{lemma:mean_pt_decomp}} \begin{proof} For $j\in obs(\bm{\pi}_{i})$ and $k\in obs(\bm{\pi}_{i})$, we take $\frac{1}{N_{i}}\mathbf{T}_{j}\tilde{\bm{p}}_{ij}$ and $\frac{1}{N_{i}}\mathbf{T}_{k}\tilde{\bm{p}}_{ik}$ out of the summation. Then, we can write $\bar{\bm{p}}_{i}$ as \begin{equation} \label{equ:mean_jk} \begin{split} \bar{\bm{p}}_{i}(\bm{x}_{j}, \bm{x}_{k}) &= \mathbf{T}_{j}\underbrace{\frac{1}{N_{i}}\tilde{\bm{p}}_{ij}}_{\bm{q}_{ij}} + \mathbf{T}_{k}\underbrace{\frac{1}{N_{i}}\tilde{\bm{p}}_{ik}}_{\bm{q}_{ik}} + \underbrace{\frac{1}{N_{i}}\sum_{n \in \mathbb{O}_{jk}} \mathbf{T}_{n}\tilde{\bm{p}}_{in}}_{\bm{c}_{ijk}}. \\ & = \mathbf{T}_{j}\bm{q}_{ij} + \mathbf{T}_{k}\bm{q}_{ik} + \bm{c}_{ijk}. \end{split} \end{equation} For $j\in obs(\bm{\pi}_{i})$, we can write (\ref{equ:mean_jk}) as \begin{equation} \label{equ:mean_j} \begin{split} \bar{\bm{p}}_{i}(\bm{x}_{j}) & = \mathbf{T}_{j}\bm{q}_{ij} + \underbrace{\mathbf{T}_{k}\bm{q}_{ik} + \bm{c}_{ijk}}_{\bm{c}_{ij}} \\ & = \mathbf{T}_{j}\bm{q}_{ij} + \bm{c}_{ij} \end{split} \end{equation} \end{proof} \subsection{Proof of Theorem \ref{theorem:M_i}} \begin{proof} Substituting (\ref{equ:mean_j}) into (\ref{equ:M_i}) and using the formula of $\mathbf{S}_{ij}=\mathbf{T}_{j}\mathbf{U}_{ij}\mathbf{T}_{j}^{T}$ in (\ref{equ:S_p}), we have \begin{equation*} \small \begin{split} \mathbf{M}_{i}(\bm{x}_{j}) = & \sum_{n \in obs(\bm{\pi}_{i})}\mathbf{S}_{in} - N_{i}(\mathbf{T}_{j}\bm{q}_{ij}+\bm{c}_{ij})(\mathbf{T}_{j}\bm{q}_{ij}+\bm{c}_{ij})^{T} \\ = & \mathbf{S}_{ij}-\mathbf{T}_{j}\left( N_{i}\bm{q}_{ij}\bm{q}_{ij}^{T}\right) \mathbf{T}_{j}^{T} -\mathbf{T}_{j}\underbrace{\left(N_{i} \bm{q}_{ij}\bm{c}_{ij}^{T}\right)}_{-\mathbf{K}_{j}^{i}} - \\ &\underbrace{\left( N_{i}\bm{c}_{ij}\bm{q}^{T}_{ij}\right)}_{\left( -\mathbf{K}_{j}^{i}\right) ^{T}}\mathbf{T}_{j}^{T} + \underbrace{\sum_{\substack{n \in obs(\bm{\pi}_{i}) \\ n \neq j}}\mathbf{S}_{in} -
N_{i}\bm{c}_{ij}\bm{c}_{ij}^{T}}_{\mathbf{C}_{j}^{i}} \\ = & \mathbf{T}_{j}\mathbf{U}_{ij}\mathbf{T}_{j}^{T} - \mathbf{T}_{j}\left( N_{i}\bm{q}_{ij}\bm{q}_{ij}^{T}\right)\mathbf{T}_{j}^{T} + \mathbf{T}_{j}\mathbf{K}_{j}^{i} + \left( \mathbf{K}_{j}^{i}\right)^{T}\mathbf{T}_{j}^{T} + \mathbf{C}_{j}^{i} \\ = & \mathbf{T}_{j}\underbrace{\left( \mathbf{U}_{ij} - N_{i}\bm{q}_{ij}\bm{q}_{ij}^{T}\right)}_{\mathbf{Q}_{j}^{i}}\mathbf{T}_{j}^{T} + \mathbf{T}_{j}\mathbf{K}_{j}^{i} + \left( \mathbf{K}_{j}^{i}\right)^{T}\mathbf{T}_{j}^{T} + \mathbf{C}_{j}^{i} \\ = & \mathbf{T}_{j}\mathbf{Q}_{j}^{i}\mathbf{T}_{j}^{T} + \mathbf{T}_{j}\mathbf{K}_j^{i} + (\mathbf{K}_j^{i})^{T}\mathbf{T}_{j}^{T} + \mathbf{C}_{j}^{i} \end{split} \end{equation*} Thus we get the formula of $\mathbf{M}_{i}(\bm{x}_{j})$ in (\ref{equ:Mij}). Now let us prove (\ref{equ:M_ijk}). Let us define \begin{equation} \begin{split} \mathbf{E}_{j}^{i} &= \mathbf{T}_{j}\bm{q}_{ij}\left( \mathbf{T}_{j}\bm{q}_{ij} + \bm{c}_{ijk}\right)^{T} \\ \mathbf{E}_{k}^{i} &= \mathbf{T}_{k}\bm{q}_{ik}\left( \mathbf{T}_{k}\bm{q}_{ik} + \bm{c}_{ijk}\right)^{T} \end{split} \end{equation} Substituting (\ref{equ:mean_jk}) into (\ref{equ:M_i}), we obtain \begin{equation} \begin{split} \mathbf{M}_{i}(\bm{x}_{j}, \bm{x}_{k}) =& \mathbf{T}_{j}\underbrace{\left(-N_{i}\bm{q}_{ij}\bm{q}_{ik}^{T}\right)}_{\mathbf{O}_{jk}^{i}}\mathbf{T}_{k}^{T} +\mathbf{T}_{k}\underbrace{\left(-N_{i}\bm{q}_{ik}\bm{q}_{ij}^{T}\right)}_{\left( \mathbf{O}_{jk}^{i}\right)^{T}}\mathbf{T}_{j}^{T} + \\ &\underbrace{\sum_{n \in obs(\bm{\pi}_{i})}\mathbf{S}_{in} - N_{i}\left( \mathbf{E}_{j}^{i} + \mathbf{E}_{k}^{i} + \bm{c}_{ijk}\bar{\bm{p}}_{i}(\bm{x}_{j},\bm{x}_{k})^{T}\right) }_{\mathbf{C}_{jk}^{i}} \\ = & \mathbf{T}_{j}\mathbf{O}_{jk}^{i}\mathbf{T}_{k}^{T} + \mathbf{T}_{k}{(\mathbf{O}_{jk}^{i})}^{T}\mathbf{T}_{j}^{T} + \mathbf{C}_{jk}^{i} \end{split} \end{equation} Thus we get the formula of $\mathbf{M}_{i}(\bm{x}_{j}, \bm{x}_{k})$ in (\ref{equ:M_ijk}).
\end{proof} \subsection{Partial Derivatives of Entries in \textbf{M}$_{i}$} \label{sec:d_M_i} As illustrated in Fig.~\ref{fig:alg_summary}, the derivatives of $a_{i}$, $b_{i}$ and $c_{i}$ in (\ref{equ:d_abc}) are the crux to get $\bm{g}_{j}^{i}$, $\mathbf{H}_{jj}^{i}$, and $\mathbf{H}_{jk}^{i}$. The $a_{i}$, $b_{i}$ and $c_{i}$ in (\ref{equ:cub}) are first-, second-, and third-order polynomials with respect to the elements in $\mathbf{M}_{i}$, respectively. Let us denote the $e$th row and $f$th column entry of $\mathbf{M}_{i}$ as $m_{ef}$. According to the chain rule in calculus, to compute the partial derivatives in (\ref{equ:d_abc}), we have to calculate \begin{equation} \label{equ:d_m} \frac{\partial m_{ef}}{\partial \bm{x}_{j}}, \frac{\partial^{2} m_{ef}}{\partial \bm{x}_{j}^{2}}, \ \text{and} \ \frac{\partial^{2} m_{ef}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}. \end{equation} From our paper, we know that we only need to compute their value at $\bm{x}_{0} = [0;0;0;0;0;0]$. Assume ${q}_{ef}$, ${k}_{ef}$, and $o_{ef}$ are the $e$th row and $f$th column entries of $\mathbf{Q}_{j}^{i}$, $\mathbf{K}_{j}^{i}$, and $\mathbf{O}_{jk}^{i}$, respectively. Then, the values of $\frac{\partial m_{ef}}{\partial \bm{x}_{j}}$, $\frac{\partial^{2} m_{ef}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}$, and $\frac{\partial^{2} m_{ef}}{\partial \bm{x}_{j}^{2}}$ at $\bm{x}_{0}$ have the forms in Table \ref{table:m_ef1st-order}, Table \ref{table:2nd-cross}, and Table \ref{table:m_ef2nd-order}, respectively. \begin{table*} \centering \begin{tabular}{ l l } \toprule $\left. \frac{\partial m_{11}}{\partial\bm{x}_{j}}\right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} 0\\ - 2k_{31} - 2q_{13} \\ 2k_{21} + 2q_{12} \\ - 4k_{32} - 4q_{23} \\ 2k_{22} - 2k_{33} + 2q_{22} - 2q_{33} \\ 4k_{23} + 4q_{23} \\ \end{bmatrix}$ & $\left. 
\frac{\partial m_{12}}{\partial\bm{x}_{j}}\right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} 4k_{31} + 4q_{13} \\ 2k_{32} + 2q_{23} \\ 2k_{33} - 2k_{11} - 2q_{11} + 2q_{33} \\ 0 \\ - 2k_{12} - 2q_{12} \\ - 4k_{13} - 4q_{13} \\ \end{bmatrix}$ \\ $\left. \frac{\partial m_{13}}{\partial\bm{x}_{j}}\right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} - 4k_{21} - 4q_{12} \\ 2k_{11} - 2k_{22} + 2q_{11} - 2q_{22} \\ - 2k_{23} - 2q_{23} \\ 4k_{12} + 4q_{12} \\ 2k_{13} + 2q_{13} \\ 0\\ \end{bmatrix}$ & $\left. \frac{\partial m_{22}}{\partial\bm{x}_{j}}\right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} 2k_{41} + 2q_{14} \\ k_{42} + q_{24} \\ k_{43} + q_{34} \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}$ \\ $\left. \frac{\partial m_{23}}{\partial\bm{x}_{j}}\right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} 0 \\ k_{41} + q_{14} \\ 0 \\ 2k_{42} + 2q_{24} \\ k_{43} + q_{34} \\ 0 \end{bmatrix}$ & $\left. \frac{\partial m_{33}}{\partial\bm{x}_{j}}\right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} 0 \\ 0 \\ k_{41} + q_{14} \\ 0 \\ k_{42} + q_{24} \\ 2k_{43} + 2q_{34} \end{bmatrix}$ \\ \bottomrule \end{tabular} \caption{$\frac{\partial m_{ef}}{\partial \bm{x}_{j}}$ at $\bm{x}_{j} = \bm{x}_{0}$. } \label{table:m_ef1st-order} \end{table*} \begin{table*} \small \centering \begin{tabular}{l l} \toprule $\left. \frac{\partial^{2} m_{11}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}\right|_{\substack{\bm{x}_{j} = \bm{x}_{0} \\ \bm{x}_{k} = \bm{x}_{0} }} = \begin{bmatrix} 0& 0& 0& 0& 0& 0 \\ 0& 8o_{33}& -8o_{32}& 4o_{34}& 0& 0 \\ 0& -8o_{23}& 8o_{22}& -4o_{24}& 0& 0 \\ 0& 4o_{43}& -4o_{42}& 2o_{44}& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ \end{bmatrix}$ & $\left. 
\frac{\partial^{2} m_{13}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}\right|_{\substack{\bm{x}_{j} = \bm{x}_{0} \\ \bm{x}_{k} = \bm{x}_{0} }} = \begin{bmatrix} 0& 4o_{23}& -4o_{22}& 2o_{24}& 0& 0 \\ 4o_{32}& - 4o_{13} - 4o_{31}& 4o_{12}& -2o_{14}& 0& 2o_{34} \\ -4o_{22}& 4o_{21}& 0& 0& 0& -2o_{24}\\ 2o_{42}& -2o_{41}& 0& 0& 0& o_{44}\\ 0& 0& 0& 0& 0& 0\\ 0& 2o_{43}& -2o_{42}& o_{44}& 0& 0\\ \end{bmatrix}$ \\ $\left. \frac{\partial^{2} m_{22}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}\right|_{\substack{\bm{x}_{j} = \bm{x}_{0} \\ \bm{x}_{k} = \bm{x}_{0} }} = \begin{bmatrix} 8o_{33}& 0& -8o_{31}& 0& -4o_{34}& 0 \\ 0& 0& 0& 0& 0& 0 \\ -8o_{13}& 0& 8o_{11}& 0& 4o_{14}& 0 \\ 0& 0& 0& 0& 0& 0 \\ -4o_{43}& 0& 4o_{41}& 0& 2o_{44}& 0 \\ 0& 0& 0& 0& 0& 0 \\ \end{bmatrix}$& $\left. \frac{\partial^{2} m_{12}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}\right|_{\substack{\bm{x}_{j} = \bm{x}_{0} \\ \bm{x}_{k} = \bm{x}_{0} }} = \begin{bmatrix} 0& -4o_{33}& 4o_{32}& -2o_{34}& 0& 0\\ -4o_{33}& 0& 4o_{31}& 0& 2o_{34}& 0\\ 4o_{23}& 4o_{13}& - 4o_{12} - 4o_{21}& 2o_{14}& -2o_{24}& 0\\ -2o_{43}& 0& 2o_{41}& 0& o_{44}& 0\\ 0& 2o_{43}& -2o_{42}& o_{44}& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ \end{bmatrix}$ \\ $\left. \frac{\partial^{2} m_{33}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}\right|_{\substack{\bm{x}_{j} = \bm{x}_{0} \\ \bm{x}_{k} = \bm{x}_{0} }} = \begin{bmatrix} 8o_{22}& -8o_{21}& 0& 0& 0& 4o_{24} \\ -8o_{12}& 8o_{11}& 0& 0& 0& -4o_{14} \\ 0& 0& 0& 0& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ 4o_{42}& -4o_{41}& 0& 0& 0& 2o_{44} \\ \end{bmatrix}$ & $\left. 
\frac{\partial^{2} m_{23}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}\right|_{\substack{\bm{x}_{j} = \bm{x}_{0} \\ \bm{x}_{k} = \bm{x}_{0} }} = \begin{bmatrix} - 4o_{23} - 4o_{32}& 4o_{31}& 4o_{21}& 0& 2o_{24}& -2o_{34} \\ 4o_{13}& 0& -4o_{11}& 0& -2o_{14}& 0 \\ 4o_{12}& -4o_{11}& 0& 0& 0& 2o_{14} \\ 0& 0& 0& 0& 0& 0 \\ 2o_{42}& -2o_{41}& 0& 0& 0& o_{44} \\ -2o_{43}& 0& 2o_{41}& 0& o_{44}& 0 \\ \end{bmatrix} $\\ \bottomrule \end{tabular} \caption{$\frac{\partial^{2} m_{ef}}{\partial \bm{x}_{j} \partial \bm{x}_{k}}$ at $\bm{x}_{j} = \bm{x}_{0}$ and $\bm{x}_{k} = \bm{x}_{0}$.} \label{table:2nd-cross} \end{table*} \begin{table*} \centering \small \begin{tabular}{l} \toprule $\left. \frac{\partial^{2} m_{11}}{\partial \bm{x}_{j}^{2}} \right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} 0 &4k_{21} + 4q_{12} & 4k_{31} + 4q_{13} & 0 & 0 & 0 \\ 4k_{21} + 4q_{12} & 8q_{33} - 8q_{11} - 8k_{11} & -8q_{23} & 4q_{34} & 0 & 0 \\ 4k_{31} + 4q_{13} & -8q_{23}& 8q_{22} - 8q_{11} - 8k_{11} & -4q_{24} & 0 & 0 \\ 0 & 4q_{34} & -4q_{24} & 2q_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$ \\ $\left. \frac{\partial^{2} m_{12}}{\partial \bm{x}_{j}^{2}} \right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} - 4k_{21} - 4q_{12} & 2k_{11} + 2k_{22} + 2q_{11} + 2q_{22} - 4q_{33} & 2k_{32} + 6q_{23} & -2q_{34} & 0 & 0 \\ 2k_{11} + 2k_{22} + 2q_{11} + 2q_{22} - 4q_{33} & - 4k_{12} - 4q_{12} & 2k_{31} + 6q_{13} & 0 & 2q_{34} & 0 \\ 2k_{32} + 6q_{23} & 2k_{31} + 6q_{13} & - 4k_{12} - 4k_{21} - 16q_{12} & 2q_{14} & -2q_{24} & 0 \\ -2q_{34} & 0 & 2q_{14} & 0 & q_{44} & 0 \\ 0 & 2q_{34} & -2q_{24} & q_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix}$ \\ $\left.
\frac{\partial^{2} m_{13}}{\partial \bm{x}_{j}^{2}} \right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} - 4k_{31} - 4q_{13}& 2k_{23} + 6q_{23}& 2k_{11} + 2k_{33} + 2q_{11} - 4q_{22} + 2q_{33}& 2q_{24}& 0& 0 \\ 2k_{23} + 6q_{23}& - 4k_{13} - 4k_{31} - 16q_{13}& 2k_{21} + 6q_{12}& -2q_{14}& 0& 2q_{34} \\ 2k_{11} + 2k_{33} + 2q_{11} - 4q_{22} + 2q_{33}& 2k_{21} + 6q_{12}& - 4k_{13} - 4q_{13}& 0& 0& -2q_{24} \\ 2q_{24}& -2q_{14}& 0& 0& 0& q_{44} \\ 0& 0& 0& 0& 0& 0 \\ 0& 2q_{34}& -2q_{24}& q_{44}& 0& 0 \\ \end{bmatrix}$ \\ $\left. \frac{\partial^{2} m_{22}}{\partial \bm{x}_{j}^{2}} \right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} 8q_{33} - 8q_{22} - 8k_{22}& 4k_{12} + 4q_{12}& -8q_{13}& 0& -4q_{34}& 0 \\ 4k_{12} + 4q_{12}& 0& 4k_{32} + 4q_{23}& 0& 0& 0\\ -8q_{13}& 4k_{32} + 4q_{23}& 8q_{11} - 8k_{22} - 8q_{22}& 0& 4q_{14}& 0 \\ 0& 0& 0& 0& 0& 0 \\ -4q_{34}& 0& 4q_{14}& 0& 2q_{44}& 0 \\ 0& 0& 0& 0& 0& 0 \\ \end{bmatrix}$ \\ $\left. \frac{\partial^{2} m_{23}}{\partial \bm{x}_{j}^{2}} \right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} - 4k_{23} - 4k_{32} - 16q_{23}& 2k_{13} + 6q_{13}& 2k_{12} + 6q_{12}& 0& 2q_{24}& -2q_{34} \\ 2k_{13} + 6q_{13}& - 4k_{32} - 4q_{23}& 2k_{22} + 2k_{33} - 4q_{11} + 2q_{22} + 2q_{33}& 0& -2q_{14}& 0 \\ 2k_{12} + 6q_{12}& 2k_{22} + 2k_{33} - 4q_{11} + 2q_{22} + 2q_{33} & - 4k_{23} - 4q_{23} & 0& 0& 2q_{14} \\ 0& 0& 0& 0& 0& 0 \\ 2q_{24} & -2q_{14} & 0& 0& 0& q_{44} \\ -2q_{34}& 0& 2q_{14}& 0& q_{44}& 0\\ \end{bmatrix} $\\ $\left.
\frac{\partial^{2} m_{33}}{\partial \bm{x}_{j}^{2}} \right|_{\bm{x}_{j}=\bm{x}_{0}} = \begin{bmatrix} 8q_{22} - 8k_{33} - 8q_{33}& -8q_{12}& 4k_{13} + 4q_{13}& 0& 0& 4q_{24} \\ -8q_{12}& 8q_{11} - 8k_{33} - 8q_{33} & 4k_{23} + 4q_{23}& 0& 0& -4q_{14} \\ 4k_{13} + 4q_{13}& 4k_{23} + 4q_{23}& 0& 0& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ 4q_{24}& -4q_{14}& 0& 0& 0& 2q_{44} \\ \end{bmatrix}$ \\ \bottomrule \end{tabular} \caption{$\frac{\partial^{2} m_{ef}}{\partial \bm{x}_{j}^{2}}$ at $\bm{x}_{j} = \bm{x}_{0}$.} \label{table:m_ef2nd-order} \end{table*}
New Label Dispenser M10-d for SATO M10e Printer. The SATO M10 printer can be used as standard without any modifications. * M10-d, manual version with "PRINT" button (without external conveyor). * M10C-d, the model shown in the film, with external conveyor and LTS but without "floor stand". Nasab's External Label Unwinder UW-4S for labels, tickets and tags. The Unwinder UW-4S is suitable for small or medium size production. The Unwinder UW-4S can take a label width of 120mm and a label roll diameter of 210mm. Nasab's External Label Rewinder RW-4S for labels, tickets and tags. The Rewinder RW-4S is suitable for small or medium size production. The Rewinder RW-4S can take a label width of 120mm and a label roll diameter of 210mm. Nasab's External Label UnRewinder URW-4S for labels, tickets and tags. The Unwinder/Rewinder URW-4S is designed for unwinding labels from an external unwinder, which can take larger label rolls than the printer can contain, and for rewinding the printed labels as they come out of the printer. The "Heavy Duty" Unwinder UW-4L is made for heavy-duty operation. The Heavy Duty Unwinder UW-4L is made for a label width of 120mm and a label roll diameter of 240mm or 340mm. The "Heavy Duty" Rewinder RW-4L is made for heavy-duty operation. The Heavy Duty Rewinder RW-4L is made for a label width of 120mm and a label roll diameter of 240mm or 340mm. The "Heavy Duty" UnRewinder URW-4L is made for heavy-duty operation. The Heavy Duty Unwinder/Rewinder URW-4L can take a label width of 120mm and a label roll diameter of 240mm or 340mm. The Label Unwinder UW-2SK is specially made for tags and used together with the LAN box. The Unwinder UW-2SK is made for a label width of 65mm and a label roll diameter of 160mm. The Label Rewinder RWB-4S is made for rewinding the back paper externally. The Rewinder RWB-4S is made for a label width of 120mm and a label roll diameter of 240mm. The UnRewinder URWB-4S is made for rewinding the back paper when you are using the printer in Peel Off Mode or together with a dispenser unit. The Heavy Duty Unwinder/Rewinder URWB-4S can take a label width of 120mm and a label roll diameter of 240mm. For further information, please see the separate products.
\section{Introduction} \label{sec:introduction} \IEEEPARstart{H}{umans} possess a remarkable ability to parse images simply by looking at them. In the blink of an eye, a human can fully analyze an image and separate all its components. People can perform several tasks simultaneously while analyzing an image, e.g., object detection and contour detection. Furthermore, humans can easily generalize from observing a set of objects to recognizing objects that have never been seen before. Although humans enjoy an inherent capacity for generalization, they lack the processing power of computers, that is, the ability to process a large amount of information (e.g., images) in a short interval of time. The separation of an image into its components (i.e., joining pixels into regions) according to some features is called image segmentation~\cite{Gonzalez2006}. Reproducing this process at or above the human level on a computer is not an easy task, and several approaches have been proposed to address it~\cite{Chouhan2019}. Nevertheless, the segmentation task continues to be challenging mainly due to variability, i.e., when visual tasks are performed on a computer, there is considerable variation in pose, appearance, viewpoint, illumination, and occlusion throughout different instances of the same image. Thus, a type of segmentation commonly used is semantic segmentation~(SS). SS is an essential part of the pipeline in computer vision projects. It extracts and analyzes useful and meaningful information, in addition to classifying the regions obtained within an image. In other words, by improving the segmentation stage, the final output of a computer vision system is also enhanced.
\begin{figure}[tb] \centering \subfloat[SegNet]{\includegraphics[width=.33\linewidth,height=0.7in]{loss_precision_SegNet_frankfurt_000000_001016_leftImg8bit}% \label{fig:prob_SegNet}}% \subfloat[AdapNet++]{\includegraphics[width=.33\linewidth,height=0.7in]{loss_precision_AdapNet++_frankfurt_000000_011007_leftImg8bit}% \label{fig:prob_AdapNet}}% \subfloat[FastNet]{\includegraphics[width=.33\linewidth,height=0.7in]{loss_precision_FastNet_frankfurt_000001_058176_leftImg8bit}% \label{fig:prob_FastNet}}% \caption{The main problem of current segmentation methods lies in the loss of spatial precision at the boundaries or on small objects. Green and red regions denote correctly and incorrectly segmented regions, respectively.} \label{fig:SS_loss_spatial_precision} \end{figure} In recent years, Convolutional Neural Networks~(CNNs) have led to several improvements in computer vision. Fully convolutional networks (FCN)~\cite{Long2016} achieved a significant improvement in the SS task in contrast to the traditional SS techniques~\cite{Zaitoun2015}. However, (i)~the low resolution at the CNN output and (ii)~the loss of spatial precision of objects within the image are still the main problems that affect the segmentation results~\cite{Chen2017a, Lin2017}; see Fig.~\ref{fig:SS_loss_spatial_precision}. We believe that these problems are not caused by a specific operation (e.g., down-sampling) but by a set of factors: for instance, the absence of reconstruction and refinement methods, excessive down-sampling, the gradient vanishing problem, or the lack of a better feature extractor. Nowadays, different models~\cite{Garcia-Garcia2018, Lateef2019} have tackled these problems and advanced solutions to the low resolution of the output maps. For better refinement, previous models~\cite{Zheng2015} post-process the results to enhance them, e.g., with Conditional Random Fields (CRF)~\cite{Kraehenbuehl2011}.
For global feature extraction with more information, new architectures~\cite{AmirulIslam2017, Valada2019, Alshammari2019} were created, as well as sparse convolution operations~\cite{Yu2016, Chen2017a}. Commonly, methods in SS use hourglass models~\cite{Ronneberger2015, Noh2015, Badrinarayanan2017} that comprise coding and decoding stages to recover the pixel-wise position of the segmented objects. Other models use Multi-Task Learning (MTL), based on the idea that simultaneously learning related tasks can improve the performance on all of them~\cite{Zhang2017,Ruder2017,Thung2018}. Hence, related tasks facilitate the transfer of shared knowledge among them. E.g., edge detection improves segmentation by adjusting the edges at each level of a CNN~\cite{Marmanis2018,Ding2018,Ding2020} or by learning edge-aware features~\cite{Ding2019}. Although several MTL approaches~\cite{Dai2016,Kendall2018,Chen2019,Alshammari2019} were applied to the SS task, it is still difficult to say which auxiliary tasks are most beneficial for the final SS results. Even less clear is which of these auxiliary tasks provide the complementary information needed for the models to address the loss of spatial precision. We deduce that by reinforcing the information of object contours through auxiliary tasks, we can force greater attention, during the training phase, on the contours of the segmented objects. In this work, we use a multi-task approach with complementary contour-based tasks for rich and robust feature extraction, and we address the problem of spatial precision loss. We work specifically with hourglass (encoder-decoder) models because we (empirically) discovered that a multi-task setup helps to adjust the latent space in these models (i.e., it exhibits a clustering behavior). We also show that the improved results of different hourglass (encoder-decoder) architectures are directly related to this clustering behavior.
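To make the contour-based auxiliary targets concrete, the sketch below derives two of them (a contour map and a distance transform) from a ground-truth label mask with plain numpy. This is an illustrative toy, not the paper's pipeline, and the brute-force distance transform is only practical for tiny masks:

```python
import numpy as np

def edge_map(mask):
    """Binary contour target: pixels whose 4-neighborhood crosses a label boundary."""
    edges = np.zeros_like(mask, dtype=bool)
    edges[:-1, :] |= mask[:-1, :] != mask[1:, :]
    edges[1:, :] |= mask[1:, :] != mask[:-1, :]
    edges[:, :-1] |= mask[:, :-1] != mask[:, 1:]
    edges[:, 1:] |= mask[:, 1:] != mask[:, :-1]
    return edges

def distance_transform(fg):
    """Brute-force L2 distance from each foreground pixel to the nearest background pixel."""
    h, w = fg.shape
    bg = np.argwhere(~fg)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            if fg[y, x] and len(bg):
                out[y, x] = np.sqrt(((bg - [y, x]) ** 2).sum(axis=1)).min()
    return out

# Toy 2-class label mask: a 3x3 foreground square in a 7x7 image.
mask = np.zeros((7, 7), dtype=int)
mask[2:5, 2:5] = 1
edges = edge_map(mask)            # both sides of the square's boundary are marked
dist = distance_transform(mask == 1)
```

Either target can then be supervised alongside the segmentation output, which is the MTL setup the paper builds on (in practice one would use an optimized distance transform, e.g., from an image-processing library).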
We present four different studies on the latent space for SS: (i)~visualization of the latent space behavior, (ii)~activation maps used by the models to predict the segmentation, (iii)~reduction of over-fitting of the models, and (iv)~ablation studies on loss functions and complementary tasks. The visualizations of the latent space show different induced clusters when influenced by adding or removing the different complementary contour-based tasks. Fig.~\ref{fig:fig_our_model} depicts the contour-based tasks used in our study. Namely, they are edge detection~\cite{Gonzalez2006}, semantic contours~\cite{Hariharan2011}, semantic segmentation~\cite{Arnab2016}, and distance transform~\cite{Borgefors1986}. In summary, our main contributions are: \begin{itemize} \item Address the problem of spatial precision loss by obtaining hourglass models with better generalization by focusing on the contours of segmented objects. \item Improve the SS results in hourglass models by using complementary information from contour-based tasks, and thus induce a clustering behavior in the latent space. \item Extensively evaluate common hourglass models, namely SegNet~\cite{Badrinarayanan2017} and UNet~\cite{Ronneberger2015}, on the Cityscapes~\cite{Cordts2016} and CamVid~\cite{Brostow2009} datasets. Moreover, we show that the use of complementary tasks improves the state-of-the-art in hourglass (encoder-decoder) models. \end{itemize} \section{Related Work} \label{sec:related_works} Over the past years, the SS task has been done with deep learning as the preferred option due to deep neural networks'~(DNNs) extraordinary ability for feature extraction. The initial layers learn the low-level features (e.g., edges and texture), while the last layers learn the higher-level ones (e.g., identifying different objects).
The feature extraction of DNNs can be improved by adding complementary information (through a multi-task approach) in the training phase. This section reviews relevant literature on the SS task, covering several approaches such as deep learning and MTL\@. \subsection{Semantic Segmentation} \label{sec:sem_seg} Semantic segmentation refers to the process of linking each pixel in an image to a class label. Among deep models, fully convolutional networks (FCNs) proved useful in this task. However, the first SS models produced low-resolution maps with a loss in spatial precision. Here we discuss different models created to deal with these problems. Some researchers~\cite{Zheng2015} used FCNs with conditional random fields (CRFs) as a post-processing step, but this is computationally expensive. Consequently, embedding the post-processing steps within a network~\cite{Chen2017a, Li2019} was a viable solution. In contrast, we improve the hourglass (encoder-decoder) models without the need for post-processing steps by introducing additional tasks that refine the latent space. Other models~\cite{He2017a, Liu2018, Chen2018} adjusted the bounding boxes. The intuition was to do object detection first and then refine the instances' contours. Mask R-CNN~\cite{He2017a} used a feature pyramid network~\cite{Lin2017b} to extract a feature hierarchy in-network and an FCN to get a segmentation mask in each region of interest. This region-based approach produced proper segmentation but depended on the accurate detection of objects (bounding boxes). Hourglass models, on the other hand, do not have this constraint. Other models~\cite{Pinheiro2016} require more delimited boundaries for the segmentation as masks instead of just a box. They also use sliding operations to obtain a better adjustment~\cite{Liu2016a} to the final targets.
Instead of an abrupt prediction in the last layer, the hourglass approach~\cite{Liu2015,Oliveira2016,AmirulIslam2017,Valada2019} (i.e., models with encoder-decoder stages, such as U-Net~\cite{Ronneberger2015}, DeconvNet~\cite{Noh2015}, and SegNet~\cite{Badrinarayanan2017}) uses a decoder stage to gradually recover the spatial information by combining multi-level feature maps from the encoder. Thus, the flow of information from a lower scale to a higher one is done by an upsampling operation, i.e., bilinear interpolation~\cite{Gonzalez2006}, unpooling~\cite{Noh2015}, or DUpsampling~\cite{Tian2019}. We consider hourglass models to have a robust decoding stage for the reconstruction of the pixel-wise predicted image. Although hourglass models proved efficient in SS, they still need a more significant transfer of information between their stages, e.g., FC-DenseNet~\cite{Jegou2017} or UPSNet~\cite{Xiong2019}. For this reason, we add complementary information to the models through the MTL approach (i.e., auxiliary tasks). Though the previous models improved the objects' boundaries, we need models that observe larger regions. Thus, multi-scale models emerged~\cite{Lin2017a, Li2019, Tao2020}. They obtain a full semantic map in low resolution (a coarse prediction map), then refine it with different fusion operations, e.g., fusion cascades~\cite{Zhao2018}, attention blocks~\cite{Yu2018, Huang2019}, layer aggregation~\cite{Yu2018b}, residual units~\cite{Paszke2016}, and gated fusion~\cite{Li2020}. These models are unnecessarily complex for extracting robust features. Instead, we use auxiliary tasks to reinforce the gradient and achieve better information extraction.
Current models (e.g., HRNetv2~\cite{Wang2020}, HRNet+OCR~\cite{Yuan2020}) perform multi-scale feature extraction by sharing feature maps across their different levels (scales), i.e., broadcasting context information at various resolutions. In contrast to multi-scale models, and to capture high-resolution feature maps, PSP-Net~\cite{Zhao2017} performs pooling operations at multiple grid scales. Meanwhile, DeepLabv3+~\cite{Chen2018a} and CasiNet~\cite{Jin2021} use Atrous Spatial Pyramid Pooling (ASPP)~\cite{Chen2017a, Chen2017} (i.e., several sparse filters) to modify the size of the filters instead of the size of the images~\cite{Ziegler2019}. Later experiments showed that there are still limitations in capturing global features~\cite{Wang2018}. Moreover, the introduced dilated convolutions bring heavy computational complexity and a large memory footprint, thus limiting many applications. The first attempt to address the high resource consumption of ASPP was FastFCN~\cite{Wu2019}, which performs a new method of ascending pyramidal sampling. Besides, AdapNet++~\cite{Valada2019,Valada2017} proposed cascaded and parallel Atrous convolutions to capture long-range context using fewer parameters. However, the problem of spatial precision loss persisted. These results lead us to believe that we need models that make use of inductive biases, i.e., more specific features from prior information. We address this problem by using well-behaved hourglass models paired with multi-task learning to improve the learned features. \subsection{Multi-Task Learning} \label{sec:MTL} In machine learning, we generally train a single model to perform a specific task. By focusing on a single task, we risk ignoring additional information that could help us learn a better representation of the desired task. Instead, MTL~\cite{Caruana1997} aims to solve multiple related tasks simultaneously.
Thus, it facilitates the transfer of shared knowledge across relevant tasks~\cite{Long2017, Thung2018}. In this literature review, we focus on supervised learning tasks due to their similarity with our work. Currently, machine learning models share knowledge in two ways~\cite{Thung2018,Zhang2018}: (i)~feature-based MTL, which distributes knowledge across trained representative features, and (ii)~parameter-based MTL, which uses the model parameters trained on a specific task to fit the related tasks. In this work, we are interested in studying the latent space shared among all tasks while focusing on SS as the main task, restricted to hourglass models. Earlier models~\cite{Obozinski2006, Jebara2011, Zhang2010} used handcrafted features and assumed that the data-to-target relationship is direct. Often the data exhibit a complex data-to-target relationship~\cite{Thung2018}, so this assumption can restrict the models' performance. For this reason, deep learning with MTL is used, owing to its capacity to learn nonlinear, complex latent representations. Deep MTL is grouped into two types~\cite{Thung2018}: hard parameter sharing (i.e., sharing parameters between all tasks) and soft parameter sharing (i.e., each task has its own model, hidden layers, and parameters). Previous models~\cite{Misra2016, Fang2017} used two separate architectures with soft parameter sharing; they used cross-stitch units or task transfer connections to leverage the knowledge of the task-specific networks. In contrast, MRN~\cite{Long2017} learned a Bayesian transfer relationship (between the last layers). Because these models keep their own parameters for each task, the required resources grow easily. Unlike soft models, the hard ones do not need any assumption about the tasks' relation; they learn it internally. Thus, some MTL models~\cite{Dai2016,Pinheiro2016} use a cascade-based approach to learn a task from the previous one.
However, this approach restricts the feature space. Accordingly, models~\cite{Liao2016,Teichmann2018} (with independent tasks) focus on merging multiple loss functions (depending on the task) to ensure the convergence of the models and robustness to noise~\cite{Klingner2020}. Some models~\cite{Hayder2017, Tan2018, Bischke2019} combined semantic segmentation with geometric information and others~\cite{Kendall2018, Kong2018} with depth. Other models~\cite{Kendall2017, Kendall2018, Takikawa2019} measured and adjusted the degree of uncertainty of the samples along with the segmentation. We note that the uncertainty (i.e., either due to noise at capture time or due to the prediction's degree of confidence) is related to the edges of the segmented objects. Similarly, previous works~\cite{Cheng2017,Liu2020} followed this line of reasoning by merging semantic contours with edge detection, using multi-scale features in the work of Acuna et al.~\cite{Acuna2019} or adjusting the contours across the entire network as done by Cheng et al.~\cite{Cheng2017} (i.e., multiple losses for detection). Finally, the combination of SS with edge detection (in models such as~\cite{Ding2019}) showed significant improvements for the segmentation task. These models (CCL~\cite{Ding2018}, CGBNet~\cite{Ding2020}) generally present an hourglass architecture with skip connections for the edge detection map. Despite the useful latent space for related tasks obtained by deep MTL approaches~\cite{Ding2020}, it is not yet explored or understood how this latent space behaves to improve the target task. In other words, understanding which parts of the latent space improve SS is an open problem. Moreover, information is even scarcer when the target task is SS on images (i.e., multi-label pixel-wise classification).
\section{Overview} \label{sec:overview} The idea of using CNNs as feature extractors and generators is not new; it has been widely used and has achieved better results than traditional methods~\cite{Thoma2016}. Previous works (see Section~\ref{sec:sem_seg}) use CNNs for SS tasks and bring up challenging problems, with the loss of spatial precision as the main one. Besides, we discussed in Section~\ref{sec:MTL} that deep MTL models obtain additional information from related tasks and learn, at some level, a new feature space shared across all tasks, specifically in hourglass models. However, there is still no analysis of deep MTL models specifically for the SS task. In particular, there is no indication of what happens with the shared features, how they behave, or which related tasks are most relevant for enhancing SS. In this work, we are interested in a particular type of behavior in the latent space, i.e., clustering, which improves the SS results in hourglass (encoder-decoder) architectures. We empirically analyze how clustering in the latent space is influenced by different contour-based auxiliary tasks. We highlight that the clustering behavior is only observed in hourglass architectures. These models (e.g., UNet, SegNet, ParseNet) depend largely on the latent space to perform the reconstruction (a single stream, or up-down-up route). In contrast, other models (e.g., HRNet) perform feature extraction across multiple resolutions (i.e., multi-stream). They distribute more contextual information but deprive the intermediate representation spaces of internal interpretability (i.e., they decrease the interpretability of the latent representation). In this section, we introduce our learning framework, the datasets, and the metrics used in this work. Recall that our study is entirely empirical.
Our objectives are (i)~to propose and evaluate the use of contour-based auxiliary tasks to address the problem of loss of spatial precision, and (ii)~to show how the addition or deletion of these contour-based auxiliary tasks helps improve the SS results for hourglass models. \subsection{Learning a Multi-Task Approach} \label{sec:learning_MTL} Deep MTL approaches learn features that might not be easy or possible to learn within the original task. We want to know whether we can leverage the information in the training signals of other related SS tasks during the learning phase. An effective way to achieve this is by giving cues to the model from other related tasks, i.e., predicting the features with an auxiliary task. \makeatletter \begin{figure}[tb] \centering \resizebox{\linewidth}{!}{% \begin{tikzpicture}[ pics/named code/.style={code={\tikz@fig@mustbenamed% \begin{scope}[local bounding box/.expanded=\tikz@fig@name]#1\end{scope}% }}, coder long height/.store in=\longheight, coder long height=1.5, coder short height/.store in=\shortheight, coder short height=.75, coder width/.store in=\width, coder width=1.3, coder fill/.store in=\coderfill, coder text/.store in=\codertext, coder label/.store in=\coderlabel, coder style hidden/.style={#1}, coder style hidden/.default={coder label=, coder text=black, coder fill=white,}, pics/encoder/.style = {named code={% \tikzset{coder style hidden, #1}% \coordinate (-center) at (0, 0); \coordinate (-east) at (\width/2, 0); \coordinate (-west) at (-\width/2,0); \coordinate (-north east) at (\width/2, -\shortheight/2); \coordinate (-south east) at (\width/2, \shortheight/2); \coordinate (-north west) at (-\width/2, -\longheight/2); \coordinate (-south west) at (-\width/2, \longheight/2); \draw[fill=\coderfill] (-west) -- (-north west) -- (-north east) -- (-south east) -- (-south west) -- (-west); \node[text=\codertext, anchor=center] at (-center) {\coderlabel};% }}, pics/decoder/.style = {named code={%
\tikzset{coder style hidden, #1}% \coordinate (-center) at (0, 0); \coordinate (-east) at (\width/2, 0); \coordinate (-west) at (-\width/2,0); \coordinate (-north east) at (\width/2, -\longheight/2); \coordinate (-south east) at (\width/2, \longheight/2); \coordinate (-north west) at (-\width/2, -\shortheight/2); \coordinate (-south west) at (-\width/2, \shortheight/2); \draw[fill=\coderfill] (-west) -- (-north west) -- (-north east) -- (-south east) -- (-south west) -- (-west); \node[text=\codertext, anchor=center] at (-center) {\coderlabel};% }}, node distance=1cm, edge/.style={ ->, >=Latex, shorten <= 2pt, shorten >= 2pt, rounded corners, }, ] \node (input) {\includegraphics[width=2cm]{ss-input}}; \pic[right=1.5cm of input] (enc) {encoder={coder label={}}}; \pic[right=2cm of enc.east] (dec) {decoder={coder label={}}}; \node[draw, rectangle, rounded corners, minimum height=1cm, fill=black!50] at ($(enc)!.5!(dec)$) (lat) {}; \node[below=5pt of lat, circle, draw,minimum size=1cm, path picture ={ \foreach \i in {1,...,100} \path let \p1=(path picture bounding box.south west), \p2=(path picture bounding box.north east), \n1={rnd}, \n2={rnd} in ({\n1*\x1+(1-\n1)*\x2},{\n2*\y1+(1-\n2)*\y2}) node[circle, draw, fill, minimum size=1pt, inner sep=0pt] {}; }] (spc) {}; \pic[above right=.125cm and 1.5cm of dec] (dec-s) {decoder={coder label={$\mathcal{T}_S$}, coder width=.75}}; \pic[below right=.125cm and 1.5cm of dec] (dec-c) {decoder={coder label={$\mathcal{T}_C$}, coder width=.75}}; \pic[above=1cm of dec-s] (dec-e) {decoder={coder label={$\mathcal{T}_E$}, coder width=.75}}; \pic[below=1cm of dec-c] (dec-d) {decoder={coder label={$\mathcal{T}_D$}, coder width=.75}}; \node[right=1.cm of dec-s] (o-s) {\includegraphics[width=2cm]{ss-seg}}; \node[right=1.cm of dec-e] (o-e) {\includegraphics[width=2cm]{ss-edge}}; \node[right=1.cm of dec-c] (o-c) {\includegraphics[width=2cm]{ss-contours}}; \node[right=1.cm of dec-d] (o-d) {\includegraphics[width=2cm]{ss-distance}}; \draw[edge] 
(input) -- (enc); \draw[edge] (enc) -- (lat); \draw[edge] (lat) -- (dec); \draw[edge] (dec) -| ($(dec)!.5!(dec-e)$) |- (dec-e); \draw[edge] (dec) -| ($(dec)!.5!(dec-s)$) |- (dec-s); \draw[edge] (dec) -| ($(dec)!.5!(dec-c)$) |- (dec-c); \draw[edge] (dec) -| ($(dec)!.5!(dec-d)$) |- (dec-d); \draw[edge] (dec-e) -- (o-e); \draw[edge] (dec-s) -- (o-s); \draw[edge] (dec-c) -- (o-c); \draw[edge] (dec-d) -- (o-d); \draw (lat.south west) -- (spc.north west); \draw (lat.south east) -- (spc.north east); \node[below=5pt of spc] {What is happening here?}; \node[above=5pt of dec-e] {Task-dependent}; \end{tikzpicture} } \caption{Illustration of a multi-task hourglass model for the tasks of edge detection~(E), semantic segmentation~(S), semantic contours~(C), and distance transform~(D), from top to bottom. Note that the model shares weights in the first layers (encoder and decoder), and the specific features for each task are obtained in the last layers (task-specific decoders $\mathcal{T}_\cdot$).} \label{fig:fig_our_model} \end{figure} \makeatother The goal of an auxiliary task in MTL is to learn shared representations that are useful for the main task (i.e., to add a regularizing factor~\cite{CheolSong2019}). Auxiliary tasks are closely related to the main task, so adding them allows the model to learn beneficial representations. However, finding an auxiliary task that helps improve the SS task is not trivial~\cite{Guo2020}. At first glance, tasks that seem different can use similar representations, and tasks that seem related can adjust different internal functions~\cite{Caruana1997}. We still do not know which auxiliary tasks will help in practice for SS\@. Finding an auxiliary task largely rests on the assumption that it should be related to the main task in some way. We perceive, from Fig.~\ref{fig:SS_loss_spatial_precision}, that spatial precision loss is generally produced at the edges of segmented objects, so we use tasks related to the gradient or edge regions.
That is, we give more attention to the contours of the objects. With this in mind, we propose to employ three types of contour-based auxiliary tasks to improve the boundaries of segmented objects and, therefore, the SS task. We choose auxiliary tasks that reinforce and complement the information obtained from the edges of objects. Thereby, we address the problem of spatial precision loss, generally reflected in the segmented objects' contours, cf.\ Fig.~\ref{fig:SS_loss_spatial_precision}. Together with the main task of semantic segmentation~(S), we use the additional tasks of edge detection~(E), semantic contours~(C), and distance transform~(D), cf.\ Fig.~\ref{fig:fig_our_model}. Edge detection~\cite{Gonzalez2006} aims to extract object boundaries. Distance transform~\cite{Borgefors1986}, in our case, is a distance function to the objects' edges. Semantic contours~\cite{Hariharan2011} produce a dense pixel-wise classification on the objects' contours. Initially, we tried to use a continuous distance transform (i.e., without quantization). However, we discarded it because of the longer training time to convergence, while the results were comparable with the quantized distance transform. We intuit that this behavior is due to the higher degrees of freedom when fitting a regressor. In Appendix~\ref{sec:apx_final_representation}, we detail how to quantize the distance transform. Although these auxiliary tasks were previously used in MTL models~\cite{Cheng2017, Yu2018, Bischke2019, Liu2020}, they were not used together in the same model. Besides, the impact produced by adding each of the auxiliary tasks has not yet been studied. Consequently, we show how each of them improves the latent space separation (Section~\ref{sec:visualization_LS}), and we evaluate each auxiliary task's contribution (quantitative results) to the SS task (see our ablation study in Section~\ref{sec:ablation_studies}).
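To make the construction of the contour-based targets concrete, the following sketch derives the three auxiliary targets (E, C, D) from a semantic label map. This is an illustrative construction under our own assumptions (4-neighbor boundary test, class ids shifted by one for contours, clipped and binned distances); the paper's exact quantization scheme is the one described in its appendix, and the function and parameter names here are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def auxiliary_targets(labels, n_bins=4, max_dist=16):
    """Derive contour-based targets from a (H, W) integer label map.

    Returns a binary edge map (E), semantic contours (C), and a
    quantized distance transform (D). Illustrative only.
    """
    # Edge target: a pixel is an edge if any 4-neighbour has a different class.
    edges = np.zeros_like(labels, dtype=bool)
    edges[:-1, :] |= labels[:-1, :] != labels[1:, :]
    edges[1:, :] |= labels[1:, :] != labels[:-1, :]
    edges[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    edges[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    # Semantic contour target: edge pixels keep their class label
    # (shifted by one so that 0 remains "no contour").
    contours = np.where(edges, labels + 1, 0)
    # Distance-transform target: Euclidean distance to the nearest edge
    # pixel, clipped and quantized into n_bins levels (in the paper, a
    # classifier on quantized distances converged faster than a regressor).
    dist = distance_transform_edt(~edges)
    dist_q = np.digitize(np.clip(dist, 0, max_dist),
                         np.linspace(0, max_dist, n_bins + 1)[1:-1])
    return edges.astype(np.uint8), contours, dist_q

# Toy label map: two classes split down the middle.
lab = np.zeros((8, 8), dtype=np.int64)
lab[:, 4:] = 1
E, C, D = auxiliary_targets(lab)
```

With this toy input, the edge map flags the two columns adjacent to the class boundary, the contour map keeps the (shifted) class id on those columns, and all distances fall into the first quantization bin.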
Unlike previous hourglass (encoder-decoder) models, the hourglass with MTL adds specific heads for each task (i.e., deconvolution layers in the last decoder stage). Our hourglass model with MTL (Fig.~\ref{fig:fig_our_model}) has two types of hidden layers: shared layers and task-specific layers. The shared layers learn a low-level representation of the data, influenced by all tasks, while the task-specific layers learn the parameters for the pixel-wise classification network. These specific layers map the latent representations learned by the previously shared layers to the task-specific output layers (i.e., the target for each task). Consider that the hourglass models with MTL work over a set of images~$X$ and have a corresponding ground truth per task $\{Y^t\}_{t \in T}$, i.e., a set of pixel-wise labeled images per task. Each $i$th sample has a corresponding ground truth image $y_i^t$ for the corresponding task~$t$. Thus, the hourglass model is represented by $f(x; \theta^h, \theta^t) = y^t$ such that some parameters, $\theta^h$, are shared between the contour-based tasks, and some, $\theta^t \in \{\theta^E, \theta^S, \theta^C, \theta^D\}$, are particular to each specific task. The hourglass parameters are learned by solving an optimization problem that minimizes a weighted sum of the losses for each task.
It is defined by \begin{equation} \label{eq:opt_MTL} \mathcal{L}_\mathit{final} = \min\limits_{\theta^h,\theta^E,\theta^S,\theta^C,\theta^D} \frac{1}{|T|N}\sum_{t\in T} \sum_{i=1}^N \lambda_t \mathcal{L}_t \left(\theta^h, \theta^t \right), \end{equation} where we use four tasks $T=\{E, S, C, D\}$, $N$ is the number of samples, and the loss of task $t$ is defined as \begin{equation} \label{eq:task-loss} \mathcal{L}_t(\theta^h, \theta^t) = \mathcal{L}_t( f^t(x_i; \theta^h, \theta^t), y_i^t ), \end{equation} where $\mathcal{L}_t \in \{\mathcal{L}_E, \mathcal{L}_S, \mathcal{L}_C, \mathcal{L}_D\}$ represents the loss function of each task. To moderate each task's importance in the model loss~\eqref{eq:opt_MTL}, we use a scalar $\lambda_t$ to weigh each task loss~\eqref{eq:task-loss}. Each loss function helps to adjust the latent space into a useful representation for its task. In this work, we use the cross-entropy and soft IoU loss functions for each task's loss (see details in Appendix~\ref{sec:apx_training}).
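As an illustration of the weighted objective in Eq.~\eqref{eq:opt_MTL}, the following NumPy sketch evaluates a per-task sum of cross-entropy and a soft IoU surrogate for two of the four heads on a single toy sample. The exact loss definitions and task weights $\lambda_t$ used in the experiments are those of the appendix; the helper names and toy data here are our own illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, onehot, eps=1e-8):
    # Pixel-wise cross-entropy, averaged over all pixels.
    return float(-(onehot * np.log(probs + eps)).sum(axis=-1).mean())

def soft_iou_loss(probs, onehot, eps=1e-8):
    # Differentiable IoU surrogate: 1 - mean_c |P*Y| / (|P| + |Y| - |P*Y|).
    inter = (probs * onehot).sum(axis=(0, 1))
    union = probs.sum(axis=(0, 1)) + onehot.sum(axis=(0, 1)) - inter
    return float(1.0 - ((inter + eps) / (union + eps)).mean())

def multitask_loss(task_logits, task_targets, task_weights):
    """Weighted sum over tasks t of lambda_t * (CE_t + softIoU_t),
    mirroring Eq. (1) for a single sample; targets are one-hot arrays."""
    total = 0.0
    for t, logits in task_logits.items():
        probs = softmax(logits)
        y = task_targets[t]
        total += task_weights[t] * (cross_entropy(probs, y)
                                    + soft_iou_loss(probs, y))
    return total / len(task_logits)

# Toy example with two of the four heads (S and E) on 4x4 "images".
rng = np.random.default_rng(0)
y_s = np.eye(3)[rng.integers(0, 3, size=(4, 4))]   # 3-class segmentation
y_e = np.eye(2)[rng.integers(0, 2, size=(4, 4))]   # binary edge map
logits = {"S": rng.normal(size=(4, 4, 3)), "E": rng.normal(size=(4, 4, 2))}
loss = multitask_loss(logits, {"S": y_s, "E": y_e}, {"S": 1.0, "E": 0.5})
```

A near-perfect prediction (logits sharply peaked on the true class) drives both loss terms toward zero, which is the sanity check one would run before training.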
\begin{figure}[tb] \centering \begin{tikzpicture}[ >=Latex, ] \node (latent_space) [inner sep=0pt] at (0,0) {\includegraphics[width=2cm]{latent_space}}; \node (circle_LS) [circle, draw=black, minimum size=55pt, inner sep=0pt, line width=0.2pt] at (0,0) {}; \node (S) [inner sep=0pt] at (2,1.5) {\includegraphics[width=1.5cm]{ss-seg}}; \node (E) [inner sep=0pt] at (2,-1.5) {\includegraphics[width=1.5cm]{ss-edge}}; \node (C) [inner sep=0pt] at (-2,1.5) {\includegraphics[width=1.5cm]{ss-contours}}; \node (D) [inner sep=0pt] at (-2,-1.5) {\includegraphics[width=1.5cm]{ss-distance}}; \coordinate (bbsw) at (current bounding box.south west); \coordinate (bbne) at (current bounding box.north east); \path[->] (S) edge [bend left, font=\scriptsize, pos=0.3] node[below right] {$\nabla_\theta \mathcal{L}_S\left(\theta^{h},\theta^S\right)$} (circle_LS) (E) edge [bend left, font=\scriptsize, pos=0.3] node[below left] {$\nabla_\theta \mathcal{L}_E\left(\theta^{h},\theta^E\right)$} (circle_LS) (C) edge [bend left, font=\scriptsize, pos=0.3] node[above right] {$\nabla_\theta \mathcal{L}_C\left(\theta^{h},\theta^C\right)$} (circle_LS) (D) edge [bend left, font=\scriptsize, pos=0.3] node[above left] {$\nabla_\theta \mathcal{L}_D\left(\theta^{h},\theta^D\right)$} (circle_LS); \pgfresetboundingbox \path[use as bounding box] (bbsw) rectangle (bbne); \end{tikzpicture} \caption{% The contour-based auxiliary tasks influence the latent space through their gradients (by backpropagation). Because the tasks are contour-based, their corresponding losses penalize errors on the objects' contours. } \label{fig:influence_aux_task} \vspace{-10pt} \end{figure} In Fig.~\ref{fig:fig_our_model}, the $\mathcal{T}_E$, $\mathcal{T}_S$, $\mathcal{T}_C$, and $\mathcal{T}_D$ blocks represent the layers that extract specific information to discriminate each task. In other words, each task's distinct decoding stage has independent parameters.
Since we work with the latent space, we analyze how the auxiliary tasks influence it (the feature representation). We do not need to use large networks for the specific tasks. Thus, each task-specific block contains two convolutional layers, ensuring that the enhancement is performed on the shared parameters (i.e., a more robust feature extraction). The influence of the additional tasks on the latent space is carried by the gradients of the separate tasks (see Fig.~\ref{fig:influence_aux_task}). Due to the chosen tasks, the gradients are prone to bring more significant changes to the edges of objects; that is, they provide further attention to the edges of the objects when performing the segmentation. We are aware of the existence of non-deep-learning-based methods for edge detection (e.g., Canny~\cite{Canny1986}, Sobel~\cite{Gonzalez2006}, hierarchical methods~\cite{Arbelaez2010}) or the distance transform (e.g., mathematical morphology~\cite{Gonzalez2006}). We use these auxiliary tasks only in the training phase, not as final tasks that would replace the existing methods. The multi-task setup provides complementary information to the latent space and adjusts the SS task. Additionally, we evaluate the impact of each auxiliary task. \subsection{Experiments Description} \label{sec:exp_descrip} We describe a set of empirical studies that show how the addition or removal of contour-based auxiliary tasks helps improve the semantic segmentation task. We also demonstrate that the use of auxiliary tasks diminishes the loss of spatial precision in the segmented objects. \textit{Ablative Studies} (Section~\ref{sec:ablation_studies}): We performed two ablation studies. The first study is on the loss functions: we want to know which of the loss functions (cross-entropy and soft IoU) trains a better model for the SS task.
We determined the best results according to both loss functions and a data augmentation technique explained in Section~\ref{sec:datasets}. We did a second study to determine which contour-based tasks help improve the prediction. In other words, we evaluated quantitatively how the addition or removal of related tasks impacts the final segmentation result. We obtained the best results by training the models using all tasks together. \textit{Visualization of Latent Space Behavior} (Section~\ref{sec:visualization_LS}): Here, we show the latent space behavior in the well-known SegNet model using complementary information from contour-based auxiliary tasks. To plot the samples in this study, we used the multidimensional projection method t-SNE~\cite{Maaten2008}. Experiments show that the latent space exhibits clustering behavior, improving dissimilarity and segmentation results as auxiliary tasks are added. \textit{Showing Activation Maps} (Section~\ref{sec:activation_maps}): The previous experiments obtained the best results using all the auxiliary tasks. In this study, we plot the activation maps used by the hourglass models to predict the segmentation. In this work, the activation maps are the regions that the network uses for dense classification at the pixel-wise level. We observe that, when trained with contour-based auxiliary tasks, the hourglass models employ activation maps not previously used, improving the contours of the segmentation. \textit{Reducing the Over-Fitting} (Section~\ref{sec:reduce_overfitting}): In this study, we investigated whether there is a segmentation improvement at the edges of the segmented objects. To do so, we evaluated the classification errors at those edges. We performed these experiments on various hourglass models for binary and multi-label segmentation. The empirical results show an improvement at the edges of the segmented objects.
This improvement appears due to having a more robust latent space that better defines the objects. By using complementary information, we learn models that generalize better than the traditional ones. Thus, we address the problem of spatial precision loss. \textit{Comparing Results} (Section~\ref{sec:comparing_results}): Previously, we carried out extensive studies on the CamVid dataset due to the shorter required training time. We present final results on a set of hourglass models with and without MTL for the SS task, and comparison tables for the CamVid, Cityscapes, and Freiburg Forest datasets. We improve the final segmentation results when using MTL with contour-based tasks. The improvement may seem modest; however, the number of pixels at the objects' edges is small compared with the total number of pixels in the image. \subsection{Datasets} \label{sec:datasets} We evaluated our proposed methodology on the Cityscapes~\cite{Cordts2016}, CamVid~\cite{Brostow2009}, and Freiburg Forest~\cite{Valada2016} datasets. They contain several types of urban/forest scenarios. \textbf{Cityscapes:} The dataset has \num{5000} samples with $2048 \times 1024$ images and pixel-level labels for \num{19} semantic classes. There are \num{2979}, \num{500}, and \num{1525} images in the training, validation, and test sets, respectively. We do not use the coarse data in our experiments. For this work, we required a wide variety of samples; for this reason, we employ data augmentation. We applied a random crop of $300 \times 500$ and random transformations of contrast, brightness, and horizontal flip; thus, we generated \num{17500} training samples. We use the original validation set to compare the MTL models at a resolution of $768 \times 384$ pixels (resized). For this, we employ bilinear interpolation for the RGB images and nearest-neighbor interpolation for the labels.
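The resizing convention above (bilinear interpolation for RGB, nearest-neighbor for label maps so that no spurious class ids are interpolated) together with a joint random crop and flip can be sketched as follows. The crop sizes and helper names are illustrative, not the exact training pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def resize_pair(rgb, labels, out_hw):
    """Resize an (H, W, 3) image bilinearly and its (H, W) label map
    with nearest-neighbour interpolation."""
    h, w = labels.shape
    sy, sx = out_hw[0] / h, out_hw[1] / w
    rgb_r = zoom(rgb, (sy, sx, 1), order=1)   # order=1: bilinear, for intensities
    lab_r = zoom(labels, (sy, sx), order=0)   # order=0: nearest, for class ids
    return rgb_r, lab_r

def random_crop_flip(rgb, labels, crop_hw, rng):
    """Random crop plus horizontal flip, applied jointly to both arrays
    so image and label map stay aligned."""
    h, w = labels.shape
    top = rng.integers(0, h - crop_hw[0] + 1)
    left = rng.integers(0, w - crop_hw[1] + 1)
    r = rgb[top:top + crop_hw[0], left:left + crop_hw[1]]
    l = labels[top:top + crop_hw[0], left:left + crop_hw[1]]
    if rng.random() < 0.5:
        r, l = r[:, ::-1], l[:, ::-1]
    return r, l

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 128, 3)).astype(np.float32)
lab = rng.integers(0, 11, size=(64, 128))
img_c, lab_c = random_crop_flip(img, lab, (30, 50), rng)
img_r, lab_r = resize_pair(img_c, lab_c, (48, 96))
```

The key property to verify is that the resized label map contains no class ids beyond those in the crop, which is exactly what nearest-neighbor interpolation guarantees.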
To facilitate comparison with previous approaches, we report results on the reduced \num{11}-class label set consisting of: \textit{sky, building, road, sidewalk, fence, vegetation, pole, car/truck/bus, traffic sign, person, rider/bicycle/motorbike, and background}. \textbf{CamVid:} It is a road-scene understanding dataset for SS with \num{11} classes: \textit{building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and cyclist}. The dataset has \num{367}, \num{101}, and \num{233} samples for the training, validation, and test sets, respectively, with an image size of $360 \times 480$. We apply the same transformations as for Cityscapes for the data augmentation, with a random crop size of $260 \times 346$, generating \num{5616} samples for the training set. We report results on the original test set. \textbf{Freiburg Forest:} It is a dataset of forest scenes with six classes: \textit{sky, trail, grass, vegetation, obstacle, and void}. Note that forested environments are unstructured (e.g., trails), unlike urban scenes, which are highly structured (rigid and geometric objects, e.g., buildings). We perform data augmentation using the same transformations as for Cityscapes, generating \num{1840} samples for the training set. We keep the original test set (\num{136} images). Note that all images are resized to $768 \times 384$ pixels. \subsection{Evaluation Metrics} \label{sec:metrics} Ultimately, the success of SS methods should be measured by the success of the final applications. However, such applications are generally difficult to evaluate because they often require an expert user. For this reason, it is necessary to use application-independent measures of accuracy.
Thus, to evaluate our segmentation results, we chose the accuracy, intersection-over-union, precision, and recall metrics as validation measures (from Csurka et al.~\cite{Csurka2013}). The intersection-over-union (IoU) is defined by \begin{equation} \label{eq:metric_iou} \text{IoU} = \sum_{i}^N \frac{P_i \cap Y_i}{P_i \cup Y_i} = \sum_{i}^N \frac{\mathit{TP}_i}{\mathit{TP}_i + \mathit{FP}_i + \mathit{FN}_i}, \end{equation} the accuracy (Acc), i.e., the pixel-wise accuracy, is \begin{equation} \label{eq:metric_acc} \text{Acc} = \sum_{i}^N \frac{\mathit{TP}_i + \mathit{TN}_i}{\mathit{TP}_i + \mathit{TN}_i + \mathit{FP}_i + \mathit{FN}_i}, \end{equation} the precision (Prec) is \begin{equation} \label{eq:metric_prec} \text{Prec} = \sum_{i}^N \frac{\mathit{TP}_i}{\mathit{TP}_i + \mathit{FP}_i}, \end{equation} and the recall (Rec) is \begin{equation} \label{eq:metric_rec} \text{Rec} = \sum_{i}^N \frac{\mathit{TP}_i}{\mathit{TP}_i + \mathit{FN}_i}. \end{equation} Here, $P_i$ is the set of pixels predicted as the $i$th class, $Y_i$ is the set of pixels belonging to the $i$th class, and $N$ is the number of classes. Besides, $\mathit{TP}_i$, $\mathit{FP}_i$, $\mathit{TN}_i$, and $\mathit{FN}_i$ represent the True/False Positives and True/False Negatives, respectively, for a given class $i$. Note that these metrics are widely used in SS\@. Furthermore, to measure the behavior of the latent space, we use clustering metrics. First, the Silhouette Coefficient (SSI)~\cite{Rousseeuw1987} is defined by \begin{equation} \label{eq:metric_ssi} \text{SSI} = \sum_{i}^N \frac{b_i - a_i}{\max\{a_i, b_i\}}, \end{equation} where $a_i$ is the mean intra-cluster distance from sample $i$, and $b_i$ is the mean nearest-cluster distance from $i$ to each sample. Note that a higher value indicates better-defined clusters.
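The four segmentation metrics above reduce to per-class confusion counts. A small self-contained sketch (our own helper; it sums over classes exactly as the equations do, and dividing each sum by $N$ would give the usual mean metrics such as mIoU):

```python
import numpy as np

def segmentation_metrics(pred, gt, n_classes):
    """Compute IoU, Acc, Prec, and Rec from per-class TP/FP/TN/FN
    counts, summed over classes as in the equations above."""
    iou = acc = prec = rec = 0.0
    total = gt.size
    for c in range(n_classes):
        p, y = pred == c, gt == c
        tp = np.logical_and(p, y).sum()
        fp = np.logical_and(p, ~y).sum()
        fn = np.logical_and(~p, y).sum()
        tn = total - tp - fp - fn
        iou += tp / max(tp + fp + fn, 1)
        acc += (tp + tn) / total
        prec += tp / max(tp + fp, 1)
        rec += tp / max(tp + fn, 1)
    return iou, acc, prec, rec

# Toy 2x4 prediction with one mislabeled pixel at position (0, 3).
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 0],
                 [0, 0, 1, 1]])
iou, acc, prec, rec = segmentation_metrics(pred, gt, n_classes=2)
```

For this toy case, class 0 has TP=4, FP=1, FN=0 and class 1 has TP=3, FP=0, FN=1, so the summed IoU is 4/5 + 3/4 = 1.55.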
The Calinski-Harabasz Index (CHI)~\cite{Calinski1974} is given by \begin{equation} \label{eq:metric_chi} \text{CHI} = \frac{\text{SS}_M}{\text{SS}_W} \frac{N-k}{k-1}, \end{equation} where $k$ is the number of clusters, $N$ is the total number of observations ({i.e}\onedot, data points), $\text{SS}_W$ is the overall within-cluster variance, and $\text{SS}_M$ is the overall between-cluster variance. Note that a higher value is associated with dense and well-distributed clusters. Finally, we employ the Davies-Bouldin Index (DBI)~\cite{Davies1979} denoted by \begin{equation} \label{eq:metric_dbi} \text{DBI} = \frac{1}{k}\sum_i^k \max_{j \ne i}\left(\frac{s_i + s_j}{d_{ij}}\right), \end{equation} where $s_i$ is the average distance between each point of cluster $i$ and its centroid, and $d_{ij}$ is the distance between the centroids of clusters $i$ and $j$. Note that a lower value is related to better separation between the clusters. \begin{figure*}[tb] \centering \newlength{\wsz} \setlength{\wsz}{.2\linewidth} \newlength{\hsz} \setlength{\hsz}{1.0in} \captionsetup[subfloat]{justification=centering} \subfloat[S \protect\\ $\text{SSI}=0.384$, $\text{DBI}=1.360$\label{fig:latent-S}]{\includegraphics[width=\wsz,height=\hsz]{latent0-S}} \subfloat[S+E \protect\\ $\text{SSI}=0.391$, $\text{DBI}=1.141$\label{fig:latent0-SB}]{\includegraphics[width=\wsz,height=\hsz]{latent0-BS}} \subfloat[S+D \protect\\ $\text{SSI}=0.394$, $\text{DBI}=1.275$\label{fig:latent0-SE}]{\includegraphics[width=\wsz,height=\hsz]{latent0-SE}} \subfloat[S+E+C \protect\\$\text{SSI}=0.437$, $\text{DBI}=1.150$\label{fig:latent-SBC}]{\includegraphics[width=\wsz,height=\hsz]{latent0-BCS}} \subfloat[S+E+C+D \protect\\ $\text{SSI}=0.636$, $\text{DBI}=0.774$\label{fig:latent-SBCE}]{\includegraphics[width=\wsz,height=\hsz]{latent0-BCSE}} \caption{ We show the shared latent space on the CamVid test set.
We combine the different tasks of edge detection~(E), semantic segmentation~(S), semantic contour~(C), and distance transform~(D). When adding tasks related to semantic segmentation, {i.e}\onedot, by providing complementary information, maps of similar features (within a unimodal multi-task hourglass model) are clustered together in a similar latent space, and they are not arbitrarily placed. We confirm this behavior using the set of clustering metrics shown in Table~\ref{tab:ablation2}. } \label{fig:latent-space-tasks} \end{figure*} \section{Experiments} \label{sec:Experiments} This section presents a set of empirical studies of the latent space in hourglass models based on the MTL approach. First, we perform a series of ablation experiments on the CamVid dataset using the well-known SegNet model (see Fig.~\ref{fig:fig_our_model}). Then, we present comparisons between the different hourglass models with and without multi-task learning (contour-based tasks) on the CamVid, Cityscapes, and Freiburg Forest datasets. \subsection{Ablative Studies} \label{sec:ablation_studies} We present two types of ablative analysis on the CamVid dataset, using the well-known hourglass model, SegNet, for the semantic segmentation task. The first study, presented in Table~\ref{tab:ablation1}, shows the behavior of the SegNet model under the different loss functions (cross-entropy and loss-IoU) and data augmentation. The reported results focus on a single semantic segmentation task on the CamVid test set. The best performance in Table~\ref{tab:ablation1} was achieved using both loss functions and data augmentation, so we adopt this configuration for the following experiments.
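For concreteness, a minimal NumPy sketch of the two loss terms combined in Table~\ref{tab:ablation1}: cross-entropy plus a soft, differentiable IoU surrogate. The equal weighting and all names below are our assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def cross_entropy(probs, onehot, eps=1e-6):
    """Mean pixel-wise cross-entropy between predicted class probabilities
    (H, W, C) and one-hot ground truth of the same shape."""
    return -np.mean(np.sum(onehot * np.log(probs + eps), axis=-1))

def soft_iou_loss(probs, onehot, eps=1e-6):
    """1 - soft IoU, averaged over classes; a common differentiable
    surrogate for the IoU metric (assumed form of the loss-IoU term)."""
    inter = np.sum(probs * onehot, axis=(0, 1))
    union = np.sum(probs + onehot - probs * onehot, axis=(0, 1))
    return 1.0 - np.mean(inter / (union + eps))

def total_loss(probs, onehot):
    # Equal weighting of both terms (an assumption for this sketch).
    return cross_entropy(probs, onehot) + soft_iou_loss(probs, onehot)
```

A perfect prediction drives both terms toward zero, while the IoU term penalizes class-level overlap errors that the pixel-averaged cross-entropy can underweight.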
\begin{table} [tb] \centering \sisetup{ table-format = 1.4, } \newrobustcmd{\B}{\bfseries} \caption[Ablative study on objective functions]{Ablative study on loss functions of the SegNet~\cite{Badrinarayanan2017} model with a multi-task approach on the CamVid test set. } \label{tab:ablation1} \scriptsize \newlength{\colsep} \setlength{\colsep}{7pt} \begin{tabular}{% @{ }S @{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{5pt}}% S@{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{ }% } \toprule \multicolumn{3}{c}{\textbf{Configuration}} & & \multicolumn{4}{c}{\textbf{Metrics Segmentation}} \\ \cmidrule{1-3} \cmidrule{5-8} \textbf{Cross} & \textbf{IoU} & \textbf{Aug} & & \textbf{Acc}$\uparrow$ & \textbf{IoU}$\uparrow$ & \textbf{Prec}$\uparrow$ & \textbf{Rec}$\uparrow$\\ \midrule {\checkmark} & {--} & {--} & & 0.6241 & 0.5206 & 0.6794 & 0.6241\\ {--} & {\checkmark} & {--} & & 0.6475 & 0.5402 & 0.7049 & 0.6475\\ {\checkmark} & {\checkmark} & {--} & & 0.6582 & 0.5491 & 0.7165 & 0.6582\\ {\checkmark} & {--} & {\checkmark} & & 0.6665 & 0.5569 & 0.7256 & 0.6665\\ {--} & {\checkmark} & {\checkmark} & & 0.6922 & 0.5774 & 0.7535 & 0.6922\\ {\checkmark} & {\checkmark} & {\checkmark} & & \B 0.7067 & \B 0.5895 & \B 0.7693 & \B 0.7067\\ \bottomrule \end{tabular} \end{table} \begin{table} [tb] \centering \sisetup{ table-format = 1.4, } \newrobustcmd{\B}{\bfseries} \caption[Ablative study on tasks]{Ablative study on tasks of edge detection~(E), semantic segmentation~(S), semantic contours~(C), and distance transform~(D) of the SegNet~\cite{Badrinarayanan2017} model on the CamVid test set.
} \label{tab:ablation2} \scriptsize \setlength{\colsep}{6pt} \begin{tabular}{% @{ }S @{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{5pt}}% S@{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{\colsep}}% S@{\hspace{5pt}}% S@{\hspace{13pt}}% S[table-format=4.2]@{\hspace{3pt}}% S@{ }% } \toprule \multicolumn{4}{c}{\textbf{ Task }} & & \multicolumn{4}{c}{\textbf{Metrics Segmentation}} & \multicolumn{4}{c}{\textbf{Metrics Clustering}} \\ \cmidrule{1-4} \cmidrule{6-9} \cmidrule{11-13} \textbf{S} & \textbf{E} & \textbf{C} & \textbf{D} & & \textbf{Acc}$\uparrow$ & \textbf{IoU}$\uparrow$ & \textbf{Prec}$\uparrow$ & \textbf{Rec}$\uparrow$ & & \textbf{SSI}$\uparrow$ & \textbf{CHI}$\uparrow$ & \textbf{DBI}$\downarrow$\\ \midrule {\checkmark} & {--} & {--} & {--} & & 0.7067 & 0.5895 & 0.7693 & 0.7067 & & 0.3843 & 1847.40 & 1.3602\\ {\checkmark} & {\checkmark} & {--} & {--} & & 0.7087 & 0.5912 & 0.7715 & 0.7087 & & 0.3906 & 2262.23 & 1.1411\\ {\checkmark} & {--} & {--} & {\checkmark} & & 0.7117 & 0.5938 & 0.7748 & 0.7117 & & 0.3940 & 2009.04 & 1.2752\\ {\checkmark} & {\checkmark} & {\checkmark} & {--} & & 0.7329 & 0.6114 & 0.7979 & 0.7329 & & 0.4369 & 2507.33 & 1.1495\\ {\checkmark} & {\checkmark} & {--} & {\checkmark} & & 0.7306 & 0.6095 & 0.7954 & 0.7306 & & 0.4437 & 3023.76 & 0.9200\\ {\checkmark} & {\checkmark} & {\checkmark} & {\checkmark} & & \B 0.7503 & \B 0.6259 & \B 0.8168 & \B 0.7503 & & \B 0.6360 & \B 4060.19 & \B 0.7743\\ \bottomrule \end{tabular} \end{table} \begin{figure*}[tb] \centering
\newlength{\colfig} \setlength{\colfig}{1pt} \setlength{\hsz}{0.15\linewidth} {\renewcommand{\arraystretch}{0} \resizebox{\textwidth}{!}{% \begin{tabular}{% @{}% c@{\hspace{\colfig}} c@{\hspace{\colfig}} c@{\hspace{\colfig}} c@{\hspace{2pt}} c@{\hspace{\colfig}} c@{\hspace{\colfig}} c@{} } \rotatebox{90}{\scriptsize \ SegNet+MTL} & \includegraphics[width=\hsz, height=0.08\linewidth]{img_Seq05VD_f01350} & \includegraphics[width=\hsz, height=0.08\linewidth]{grad_SBCE_Seq05VD_f01350} & \includegraphics[width=\hsz, height=0.08\linewidth]{pred_SBCE_Seq05VD_f01350} & \includegraphics[width=\hsz, height=0.08\linewidth]{img_Seq05VD_f03840} & \includegraphics[width=\hsz, height=0.08\linewidth]{grad_SBCE_Seq05VD_f03840} & \includegraphics[width=\hsz, height=0.08\linewidth]{pred_SBCE_Seq05VD_f03840} \\[.2mm] {\tiny \rotatebox{90}{\scriptsize \ \ \ \ \ SegNet}} & \includegraphics[width=\hsz, height=0.08\linewidth]{diff_Seq05VD_f01350} & \includegraphics[width=\hsz, height=0.08\linewidth]{grad_S_Seq05VD_f01350} & \includegraphics[width=\hsz, height=0.08\linewidth]{pred_S_Seq05VD_f01350} & \includegraphics[width=\hsz, height=0.08\linewidth]{diff_Seq05VD_f03840} & \includegraphics[width=\hsz, height=0.08\linewidth]{grad_S_Seq05VD_f03840} & \includegraphics[width=\hsz, height=0.08\linewidth]{pred_S_Seq05VD_f03840} \\[.2mm] \rotatebox{90}{\scriptsize \ \ UNet+MTL} & \includegraphics[width=\hsz, height=0.08\linewidth]{img_Seq05VD_f04530} & \includegraphics[width=\hsz, height=0.08\linewidth]{grad_SBCE_Seq05VD_f04530} & \includegraphics[width=\hsz, height=0.08\linewidth]{pred_SBCE_Seq05VD_f04530} & \includegraphics[width=\hsz, height=0.08\linewidth]{img_Seq05VD_f04890} & \includegraphics[width=\hsz, height=0.08\linewidth]{grad_SBCE_Seq05VD_f04890} & \includegraphics[width=\hsz, height=0.08\linewidth]{pred_SBCE_Seq05VD_f04890} \\[.2mm] {\tiny \rotatebox{90}{\scriptsize \ \ \ \ \ UNet}} & \includegraphics[width=\hsz, height=0.08\linewidth]{diff_Seq05VD_f04530} & 
\includegraphics[width=\hsz, height=0.08\linewidth]{grad_S_Seq05VD_f04530} & \includegraphics[width=\hsz, height=0.08\linewidth]{pred_S_Seq05VD_f04530} & \includegraphics[width=\hsz, height=0.08\linewidth]{diff_Seq05VD_f04890} & \includegraphics[width=\hsz, height=0.08\linewidth]{grad_S_Seq05VD_f04890} & \includegraphics[width=\hsz, height=0.08\linewidth]{pred_S_Seq05VD_f04890} \\[1.2mm] & \scriptsize (a) Comparison & \scriptsize (b) Activation map & \scriptsize (c) Prediction & \scriptsize (d) Comparison & \scriptsize (e) Activation map & \scriptsize (f) Prediction\\ \end{tabular} } } \caption{% A comparison of activation maps ({i.e}\onedot, the regions responsible for the CNN's prediction) produced by the SegNet~\cite{Badrinarayanan2017} and UNet~\cite{Ronneberger2015} models with and without a multi-task approach. By using related tasks ({i.e}\onedot, by adding complementary information), the activation maps are better delimited (white bounding box). In the comparison images, correctly segmented regions are green; regions segmented incorrectly by the original models (SegNet and UNet) are red, those segmented incorrectly by the multi-task models (SegNet+MTL and UNet+MTL) are blue, and those segmented incorrectly by both are purple. } \label{fig:discriminative-regions} \end{figure*} The second study, presented in Table~\ref{tab:ablation2}, focuses on the internal behavior of the SegNet model's latent space when adding contour-based auxiliary tasks. These tasks are edge detection (E)~\cite{Gonzalez2006}, semantic contours (C)~\cite{Hariharan2011}, and quantized distance transform (D)~\cite{Hayder2017}, with semantic segmentation (S)~\cite{Arnab2016} as the main task. Here, we use the segmentation metrics to evaluate the predicted regions. To evaluate the latent-space behavior (distribution), we use several clustering metrics. We notice a direct correlation between clustering behavior and segmentation results.
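To make the clustering evaluation concrete, here is a self-contained NumPy sketch of the Davies-Bouldin index defined in Section~\ref{sec:metrics}, operating on per-sample latent features and their class labels (the helper itself is ours, not the paper's code):

```python
import numpy as np

def davies_bouldin(feats, labels):
    """Davies-Bouldin index over latent features; lower values indicate
    better-separated clusters (a direct transcription of the DBI formula)."""
    classes = np.unique(labels)
    cents = np.array([feats[labels == c].mean(axis=0) for c in classes])
    # s_i: mean distance of each cluster's points to its centroid
    scatter = np.array([
        np.mean(np.linalg.norm(feats[labels == c] - cents[i], axis=1))
        for i, c in enumerate(classes)
    ])
    k = len(classes)
    ratios = [
        max((scatter[i] + scatter[j]) / np.linalg.norm(cents[i] - cents[j])
            for j in range(k) if j != i)
        for i in range(k)
    ]
    return float(np.mean(ratios))
```

The Silhouette and Calinski-Harabasz scores can be computed analogously (or via scikit-learn's metrics module) over the same feature/label pairs.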
These quantitative results on CamVid complement those presented in Section~\ref{sec:visualization_LS}. In Table~\ref{tab:ablation2}, our best results are produced by using both loss functions (cross-entropy and loss-IoU), data augmentation, and all tasks. We replicate this setting on the Cityscapes and Freiburg Forest datasets. \subsection{Visualization of Latent Space Behavior} \label{sec:visualization_LS} One way to understand the latent space of an hourglass model is to look at how it behaves and how it influences the segmentation predictions. Thus, we plot the latent space in which all the tasks are involved. We illustrate this space for the segmentation task using t-SNE in Fig.~\ref{fig:latent-space-tasks}. Note that by using more related tasks, the space is better delimited. We see that the SS task by itself presents a poorly distributed latent space; see Fig.~\ref{fig:latent-S}. By adding the edge detection task, the latent space improves its clusters per class, although there is still room for improvement; see Fig.~\ref{fig:latent0-SB}. A particular task that supports segmentation is the quantized distance transform, which adds geometric information to the feature maps; see Fig.~\ref{fig:latent0-SE}. On the other hand, with the semantic contours task, the semantic information is reinforced, achieving a better distribution in the latent space; see Fig.~\ref{fig:latent-SBC}. Thus, by adding geometric information and higher-quality semantic information, the latent space presents a better delimitation and, therefore, better quantitative results; see Fig.~\ref{fig:latent-SBCE}. We deduce that by adding complementary information ({i.e}\onedot, auxiliary tasks) in the MTL stage, the features that stimulate the activation of the same neurons of the network are reinforced across tasks. Moreover, these features are clustered together.
Based on the set of clustering metrics (SSI, CHI, and DBI) shown in Table~\ref{tab:ablation2}, we can say that maps of similar features within an MTL hourglass model are correctly grouped, and they are not placed arbitrarily. \subsection{Visualization of Activation Maps} \label{sec:activation_maps} Another way to understand CNNs is to look at the important image regions that influence their SS predictions. In this study, we analyze the regions used by the hourglass models to make the best segmentation prediction (activation map) when applying the latent space adjusted by the multiple tasks. The proposed visualization of activation maps is typically performed during inference (testing) to provide visual explanations for the network's prediction. We present, in Fig.~\ref{fig:discriminative-regions}, a comparison between the image regions responsible for the CNNs' predictions ({i.e}\onedot, activation maps) of the SegNet~\cite{Badrinarayanan2017} and UNet~\cite{Ronneberger2015} models with and without MTL\@. We notice that the activation maps are better adjusted to the objects' contours (white bounding box). This behavior happens due to the better-distributed latent space ({i.e}\onedot, decoder stage) of the models trained with MTL; see Fig.~\ref{fig:latent-SBCE}. The latent space's clustering behavior gives the networks the ability to use regions they did not use before to make the segmentation prediction. The prediction columns in Fig.~\ref{fig:discriminative-regions} show the predictions made by both models; in addition, the rows show the models with and without MTL\@. In the comparison columns, we show the correctly and incorrectly segmented regions color-coded. The green regions are correctly segmented. Red and blue regions indicate the regions incorrectly segmented by the models without and with MTL, respectively. Lastly, the purple regions are the incorrectly segmented ones produced by both models.
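As a toy illustration of how such an activation map can be derived from decoder features, consider a simple channel-mean, CAM-style reduction (a sketch under our own assumptions; the paper's exact visualization procedure may differ):

```python
import numpy as np

def activation_map(features, out_h, out_w):
    """Collapse a (C, h, w) feature tensor into a normalized activation map
    and upsample it (nearest neighbour) to the output resolution."""
    amap = np.abs(features).mean(axis=0)         # (h, w) channel-mean response
    span = np.ptp(amap)                          # max - min, for normalization
    amap = (amap - amap.min()) / (span + 1e-8)   # scale to [0, 1]
    ry, rx = out_h // amap.shape[0], out_w // amap.shape[1]
    return np.kron(amap, np.ones((ry, rx)))      # block-wise upsampling
```

Overlaying such maps on the input image reveals which regions drive the dense prediction, which is how the comparisons in Fig.~\ref{fig:discriminative-regions} are read.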
We show the different activation maps for SegNet and UNet with and without MTL in Fig.~\ref{fig:discriminative-regions}. First, we reaffirm the loss of spatial precision as the main problem of SS, due to the incorrect segmentation at the objects' boundaries. Second, adding tasks focused on object contours helps hourglass models highlight regions that were not adequately delimited (white bounding box). In conclusion, we can say that the improvement of SegNet+MTL and UNet+MTL (quantitative results in Section~\ref{sec:ablation_studies}) happens mainly due to the higher flow of information provided by the contour-based auxiliary tasks, which better delimits the activation maps used for dense pixel prediction. \subsection{Reducing the Over-Fitting} \label{sec:reduce_overfitting} \begin{figure*}[tb] \centering \begin{tikzpicture}% \begin{groupplot}[ group style={ group size=3 by 1, xlabels at=edge bottom, ylabels at=edge left, x descriptions at=edge bottom, y descriptions at=edge left, horizontal sep=.1cm, }, footnotesize, height=4cm, width=6.8cm, cycle list/Paired, cycle multiindex* list={% [2 of]mark list\nextlist solid, solid, dashed, dashed\nextlist black!75, Dark2-B!75, black, Dark2-B\nextlist }, /tikz/mark repeat=5, /tikz/mark phase=2, ymajorgrids, major grid style={dashed}, ylabel={Pixelwise Class.\ Error ($\%$)}, xtick={1,10,20,30}, xmax=31, xmin=0, ytick={40,50,60,70, 80, 90}, ymax=90, ymin=40, x tick label style={/pgf/number format/precision=0}, legend pos=outer north center, legend columns=2, legend style={ cells={anchor=west}, font=\scriptsize, draw=none, }, ] \nextgroupplot[% legend style={at={(0.49,1.02)},anchor=south}, ]% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_SegNet_S.csv};% \addlegendentry{SegNet}% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_SegNet_SBCE.csv};% \addlegendentry{SegNet+MTL}% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_SegNet_S_svm_car.csv};% \addlegendentry{SegNet+SVM}%
\addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_SegNet_SBCE_svm_car.csv};% \addlegendentry{SegNet+MTL+SVM}% \nextgroupplot[% xlabel={Trimap Width (in pixels)}, % ]% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_DeconvNet_S.csv};% \addlegendentry{DeconvNet}% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_DeconvNet_SBCE.csv};% \addlegendentry{DeconvNet+MTL}% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_DeconvNet_S_svm_car.csv};% \addlegendentry{DeconvNet+SVM}% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_DeconvNet_SBCE_svm_car.csv};% \addlegendentry{DeconvNet+MTL+SVM}% \nextgroupplot[% ]% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_UNet_S.csv};% \addlegendentry{UNet}% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_UNet_SBCE.csv};% \addlegendentry{UNet+MTL}% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_UNet_S_svm_car.csv};% \addlegendentry{UNet+SVM}% \addplot table[x=width, y=trimap, col sep=comma, header=true]{img/trimap_UNet_SBCE_svm_car.csv};% \addlegendentry{UNet+MTL+SVM}% \end{groupplot}% \end{tikzpicture}% \caption[Pixelwise classification error \vs trimap width]{% Pixelwise classification error \vs trimap width for the hourglass models focused on semantic segmentation (SegNet~\cite{Badrinarayanan2017}, DeconvNet~\cite{Noh2015}, and UNet~\cite{Ronneberger2015}) on the CamVid dataset. Circle marks represent the base network, and square ones represent the addition of MTL\@. The lighter-tone dashed lines denote multi-label segmentation, while the darker-tone solid lines represent binary segmentation (also denoted with +SVM). } \label{fig:study_trimap} \vspace{-10pt} \end{figure*} In this study, we analyze the contours of the segmented objects.
We evaluate whether there is an improvement in the objects' edges, and whether the proposal addresses the problem of spatial precision loss. Our study (in Fig.~\ref{fig:study_trimap}) shows that there is an improvement at the segmented objects' boundaries. We compare the popular SegNet~\cite{Badrinarayanan2017}, DeconvNet~\cite{Noh2015}, and UNet~\cite{Ronneberger2015} hourglass models for SS on the CamVid dataset. Note, these models share similar operations; we avoid models with additional operations ({e.g}\onedot, atrous convolutions) to ensure that the improvement is not due to the use of such operations. For comparison, we report experiments employing Trimap~\cite{Csurka2013, Kraehenbuehl2011}, which focuses on the boundary regions of the segmentation; see the evaluation region in Fig.~\ref{fig:trimap_app}. The Trimap is a rough image segmentation into foreground, background, and unknown regions, shown in Fig.~\ref{fig:trimap_8px} with white, black, and gray regions, respectively. The idea is to define a narrow band (the gray region, of a given pixel width) around each contour and compute the pixel-wise accuracy within that band. The error curve comparison (plots in Fig.~\ref{fig:study_trimap}) shows that learning the contour-based auxiliary tasks did not allow the networks to overfit, enabling them to generalize better. \begin{figure}[tb] \centering \subfloat[Ground truth]{\includegraphics[width=0.33\linewidth,height=0.7in]{0001TP_008760_seg}% \label{fig:trimap_gt}}% \subfloat[Trimap (8\,px)]{\includegraphics[width=0.33\linewidth,height=0.7in]{0001TP_008760_trimap_width_8}% \label{fig:trimap_8px}}% \subfloat[Evaluation region]{\includegraphics[width=0.33\linewidth,height=0.7in]{0001TP_008760_trimap_width_8_rgb}% \label{fig:trimap_app}}% \caption{Illustration of boundary accuracy evaluation using Trimap~\cite{Csurka2013, Kraehenbuehl2011}. (a)~The ground-truth image from the CamVid dataset.
(b)~The Trimap used for measuring the pixel boundary labeling accuracy (gray region) with a width of $8$ pixels. And (c)~an example of the evaluation region. } \label{fig:trimap_camvid} \end{figure} From the previous analyses, we conclude that the IoU improvement is especially due to better performance near the objects' boundaries. Qualitatively (overlap of segmented regions in Fig.~\ref{fig:discriminative-regions}) and quantitatively (error curve comparison in Fig.~\ref{fig:study_trimap}), we find improved performance near boundaries on the SS task by adding a multi-task approach to the hourglass models. Besides, auxiliary tasks in hourglass models encourage clustering behavior in similar feature maps ({i.e}\onedot, the latent space). This behavior is reflected in Fig.~\ref{fig:latent-space-tasks}, where we visualize that the latent space influenced by the multi-task approach (contour-based tasks) is not spaced arbitrarily. \subsection{Comparing Results} \label{sec:comparing_results} Finally, we report comparative results (models with and without the MTL approach) for several hourglass models from the literature. We present our quantitative results for the CamVid, Cityscapes, and Freiburg Forest datasets in Tables~\ref{tab:result_camvid}, \ref{tab:result_cityscape}, and~\ref{tab:result_forest}, respectively. We use the IoU metric (higher is better) for each class on all datasets. Note that the last two columns (in all tables) show the mean IoU over the classes for each model, with (w/) and without (w/o) MTL\@. \begin{table}[tb] \centering \sisetup{ table-format = 2.2, round-mode = places, round-precision = 2, } \caption[Results on CamVid test]{IoU results on the CamVid test set for semantic segmentation.
} \label{tab:result_camvid} \definecolor{cv-sky}{RGB}{128,128,128} \definecolor{cv-build}{RGB}{128,0,0} \definecolor{cv-pole}{RGB}{192,192,128} \definecolor{cv-road}{RGB}{128,64,128} \definecolor{cv-sidewalk}{RGB}{60,40,222} \definecolor{cv-tree}{RGB}{128,128,0} \definecolor{cv-sign}{RGB}{192,128,128} \definecolor{cv-fence}{RGB}{64,64,128} \definecolor{cv-car}{RGB}{64,0,128} \definecolor{cv-pedestrian}{RGB}{64,64,0} \definecolor{cv-cyclist}{RGB}{0,128,192} \newrobustcmd{\B}{\bfseries} \setlength{\colsep}{10pt} \scriptsize \renewcommand{\arraystretch}{1.} \resizebox{\linewidth}{!}{% \begin{tabular}{@{\hspace{5pt}} l@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{5pt}} % @{\hspace{8pt}} S@{\hspace{\colsep}} % S@{\hspace{5pt}}} % \toprule \B{Model} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-build}{} Building}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-tree}{} Tree}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-sky}{} Sky}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-car}{} Car}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-sign}{} Sign}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-road}{} Road}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-pedestrian}{} Pedestrian}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-fence}{} Fence}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-pole}{} Pole}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-sidewalk}{} Sidewalk}} & \B{\rotatebox[origin=l]{90}{\colorbox{cv-cyclist}{} Cyclist}} & \B{\rotatebox[origin=l]{90}{mIoU w/ MTL}} & \B{\rotatebox[origin=l]{90}{mIoU w/o MTL}}\\ \midrule \text{ENet~\cite{Paszke2016}} & 72.95334 & 63.58063 & 83.16372 & 75.57105 & 31.03089 & 92.93005 & 41.83943 & 15.35347 & 24.19076 & 76.28467 & 42.67589 & 56.32489 & 51.35547\\ \text{DeconvNet~\cite{Noh2015}} & 76.88499 & 67.98886 & 86.90582 & 78.66608 & 27.73828 
& 93.54952 & 41.79856 & 26.5194 & 25.82796 & 78.21863 & 46.56049 & 59.15078 & 48.93486\\ \text{SegNet~\cite{Badrinarayanan2017}} & 78.16914 & 71.04894 & 88.41527 & 80.65418 & 39.38215 & 93.74948 & 46.88451 & 34.4612 & 28.14128 & 78.94282 & 48.68494 & 62.59399 & 55.69418\\ \text{UNet~\cite{Ronneberger2015}} & \B 79.60943 & \B 73.21305 & \B 89.1677 & 81.37496 & \B 42.41067 & 93.8112 & 58.00092 & 32.64772 & 31.34379 & 79.94234 & 47.98074 & 64.50022 & 56.12073\\ \text{FCN8~\cite{Long2016}} & 78.84189 & 71.81983 & 85.13701 & \B 84.60183 & 40.69326 & 94.11377 & 54.19309 & \B 40.47746 & 29.34844 & 80.60897 & \B 52.19043 & 64.72963 & 57.09838\\ \text{CGBNet~\cite{Ding2020}} & 79.34564 & 72.01548 & 85.96541 & 82.43298 & 40.86345 & 94.29751 & 56.48227 & 38.48245 & 31.10543 & 80.72065 & 50.84530 & 64.77787 & 58.86452\\ \text{FC-DenseNet67~\cite{Jegou2017}} & 79.06759 & 71.38042 & 86.47661 & 84.59334 & 40.4429 & \B 94.4125 & \B 58.09791 & 39.8477 & \B 36.74752 & \B 82.62042 & 50.44985 & \B 65.83061 & \B 65.81933\\ \bottomrule \end{tabular}} \end{table} \begin{table}[tb] \centering \sisetup{ table-format = 2.2, round-mode = places, round-precision = 2, } \caption[Results on Cityscapes validation]{IoU results on the Cityscapes validation set for semantic segmentation, using $11$ classes and a crop size of $384 {\mkern1mu\oldtimes\mkern1mu} 768$.
} \label{tab:result_cityscape} \definecolor{cs-sky}{RGB}{70,130,180} \definecolor{cs-build}{RGB}{70,70,70} \definecolor{cs-road}{RGB}{128,64,128} \definecolor{cs-sidewalk}{RGB}{244,35,232} \definecolor{cs-fence}{RGB}{190,153,153} \definecolor{cs-vege}{RGB}{107,142, 35} \definecolor{cs-pole}{RGB}{153,153,153} \definecolor{cs-car}{RGB}{0,0,142} \definecolor{cs-sign}{RGB}{220,220,0} \definecolor{cs-person}{RGB}{220,20,60} \definecolor{cs-cyclist}{RGB}{119,11,32} \newrobustcmd{\B}{\bfseries} \setlength{\colsep}{10pt} \scriptsize \renewcommand{\arraystretch}{1.} \resizebox{\linewidth}{!}{% \begin{tabular}{@{\hspace{5pt}} l@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{5pt}} % @{\hspace{8pt}} S@{\hspace{\colsep}} % S@{\hspace{5pt}}} % \toprule \B{Model} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-sky}{} Sky}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-build}{} Building}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-road}{} Road}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-sidewalk}{} Sidewalk}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-fence}{} Fence}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-vege}{} Vegetation}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-pole}{} Pole}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-car}{} Car}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-sign}{} Sign}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-person}{} Person}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-cyclist}{} Cyclist}} & \B{\rotatebox[origin=l]{90}{mIoU w/ MTL}} & \B{\rotatebox[origin=l]{90}{mIoU w/o MTL}}\\ \midrule \text{ParseNet~\cite{Liu2015}} & 92.68293827404352 & 89.16041572953266 & 96.64929344202108 & 78.68159572042823 & 38.8946520805525 & 90.31107497240238 & 51.25962939829274 & 92.22531410130854 & 69.6257540181677 & 72.5078099997721 & 71.13040897806083 & 
76.64808061041657 & 71.02214 \\ \text{DeconvNet}~\cite{Noh2015} & 93.38068506558454 & 89.30000689030125 & 96.88428559437489 & 77.74536506815247 & 47.09711705555088 & 90.94015731128758 & 53.386026294237446 & 92.32749112042848 & 62.895704180888536 & 70.13062867767482 & 69.2939112131625 & 76.67103440651304 & 62.0281\\ \text{FCN8~\cite{Long2016}} & 92.48669736904394 & 89.23579597502945 & 96.9382594309139 & 77.98272166561223 & 49.744548065731294 & 90.23699581309369 & 49.0955098719256 & 91.90855149250817 & 65.96284903753138 & 70.76164399020723 & 70.78263914402699 & 76.83056471414763 & 59.9772\\ \text{FastNet~\cite{Oliveira2016}} & 93.0441852916049 & 89.37268355237003 & 96.95447663381437 & 78.78835404554599 & 48.76638364912157 & 90.3099611595817 & 53.635451996403006 & 92.0867166047084 & 68.71895401365256 & 71.24763462321243 & 69.61882485850919 & 77.50396603895675 & 68.5236\\ \text{AdapNet++~\cite{Valada2019}} & 93.06606033717834 & 89.46445452619719 & 97.05622901868213 & 80.02680655465022 & 49.46480016553393 & 90.57898007902187 & 52.099869078274374 & 92.22088910324935 & 66.2616099406663 & 72.88264400519273 & 70.6234837933447 & 77.61325696381738 & 72.78740\\ \text{CGBNet~\cite{Ding2020}} & 92.97664103982117 & 89.39958276438676 & 96.66423105831508 & 77.59557837050379 & 42.80119490412156 & \B 91.88485470080798 & 57.52385700042729 & 91.13894547133451 & \B 73.28850836068722 & \B 75.2576623028372 & 71.1815062265886 & 78.15568747271192 & \B 73.2542485117273\\ \text{FC-DenseNet67~\cite{Jegou2017}} & \B 93.88313998276449 & 89.73299438760934 & 96.81290253288601 & 77.80647258331066 & 49.13706449171094 & 89.94146975843269 & \B 58.74235499720015 & 92.33820177878768 & 66.76975260475419 & 75.19007740288495 & 69.6320658658321 & 78.18059058056122 & 72.4968\\ \text{SegNet}~\cite{Badrinarayanan2017} & 93.74626719289523 & \B 90.08717473914852 & \B 97.37211631563157 & \B 81.57706328313445 & \B 51.8361304877961 & 91.75380542403174 & 56.85244795193234 & \B 92.81677417451446 &
67.52481330639286 & 72.61162786243042 & \B 72.12037519847499 & \B 78.9362359942166 & 52.1732\\ \bottomrule \end{tabular}} \end{table} \begin{table}[tb] \centering \sisetup{ table-format = 2.2, round-mode = places, round-precision = 2, } \caption[Results on Freiburg Forest test]{IoU results on the Freiburg Forest test set for semantic segmentation, using $5$ classes and a crop size of $384 {\mkern1mu\oldtimes\mkern1mu} 768$. } \label{tab:result_forest} \definecolor{cs-trail}{RGB}{170,170,170} \definecolor{cs-grass}{RGB}{0,255,0} \definecolor{cs-vegetation}{RGB}{102,102,51} \definecolor{cs-sky}{RGB}{0,120,255} \definecolor{cs-obstacle}{RGB}{255,255,0} \newrobustcmd{\B}{\bfseries} \setlength{\colsep}{10pt} \scriptsize \renewcommand{\arraystretch}{1.} \begin{tabular}{@{\hspace{5pt}} l@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{\colsep}} % S@{\hspace{5pt}} % @{\hspace{8pt}} S@{\hspace{\colsep}} % S@{\hspace{5pt}}} % \toprule \B{Model} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-trail}{} Trail}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-grass}{} Grass}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-vegetation}{} Vegetation}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-sky}{} Sky}} & \B{\rotatebox[origin=l]{90}{\colorbox{cs-obstacle}{} Obstacle}} & \B{\rotatebox[origin=l]{90}{mIoU w/ MTL}} & \B{\rotatebox[origin=l]{90}{mIoU w/o MTL}}\\ \midrule \text{FC-DenseNet67~\cite{Jegou2017}} & 79.89327392696255 & 80.37944863230267 & 85.41303105992351 & 92.04258556910936 & 34.61129583135417 & 74.46792700393046 & 73.75705\\ \text{FCN8~\cite{Long2016}} & 85.1216742415122 & 87.4328769258108 & 89.82046206254512 & 91.89093359920273 & 45.97794104995092 & 80.04877757580434 & 77.49172\\ \text{ParseNet~\cite{Liu2015}} & 86.29782003254194 & 87.72805334136773 & 90.196326801931 & 91.97294852349603 & \B 47.4098525049095 & 80.72100024084924 & 78.97639\\ \text{FastNet~\cite{Oliveira2016}} & 86.9035795041595 & \B 88.07845006053368 & \B 90.77433819417125 & \B 92.81743713075622 &
45.88196991314835 & 80.89115496055379 & \B 79.67255\\ \text{DeconvNet}~\cite{Noh2015} & 87.15929784327722 & 87.41532108960926 & 90.47793020911796 & 92.78637767896278 & 47.16089479254092 & 80.99996432270163 & 78.04483\\ \text{CGBNet~\cite{Ding2020}} & 87.58712369625532 & 87.61782649884515 & 90.63366521452878 & 92.78478998953157 & 46.58378462516348 & 81.03574632156925 & 77.89452\\ \text{SegNet}~\cite{Badrinarayanan2017} & \B 88.04198964046398 & 88.04034292874486 & 90.60757490231275 & 92.68127623818356 & 46.215345994324345 & \B 81.1173059408059 & 74.58112\\ \bottomrule \end{tabular} \end{table} In Fig.~\ref{fig:qualitative_cityscape_camvid}, we show qualitative results using the AdapNet++, UNet, and FastNet models. The columns from left to right show the input image, the ground truth, the predictions without and with MTL, and a comparison (overlap) of both prediction maps. In the comparison image, the green regions are correctly segmented. The red color represents regions erroneously segmented by the original models (AdapNet++, UNet, or FastNet), while the blue regions are erroneously segmented by the MTL models. The regions incorrectly segmented by both predictions are purple.
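Such a comparison overlay can be reproduced directly from the ground truth and the two prediction maps. A minimal numpy sketch (the function name and RGB values are our own; the color coding follows the description above):

```python
import numpy as np

def comparison_overlay(gt, pred_base, pred_mtl):
    """Color-code agreement of two segmentation maps with the ground truth.

    green = both correct, red = only the baseline wrong,
    blue = only the MTL model wrong, purple = both wrong.
    """
    base_ok = pred_base == gt
    mtl_ok = pred_mtl == gt
    overlay = np.zeros(gt.shape + (3,), dtype=np.uint8)
    overlay[base_ok & mtl_ok] = (0, 255, 0)      # green: both correct
    overlay[~base_ok & mtl_ok] = (255, 0, 0)     # red: baseline wrong
    overlay[base_ok & ~mtl_ok] = (0, 0, 255)     # blue: MTL wrong
    overlay[~base_ok & ~mtl_ok] = (128, 0, 128)  # purple: both wrong
    return overlay
```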
\begin{figure*}[tb] \centering \setlength{\hsz}{.12\linewidth} \setlength{\colfig}{1pt} {\renewcommand{\arraystretch}{0} \begin{tabular}{% @{}% c@{\hspace{\colfig}} c@{\hspace{\colfig}} c@{\hspace{\colfig}} c@{\hspace{\colfig}} c@{\hspace{\colfig}} c@{} } \footnotesize\multirow{3}{*}{\raisebox{-1.75\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{Cityscapes dataset}}} &\includegraphics[width=\hsz]{img_frankfurt_000000_005898_leftImg8bit} & \includegraphics[width=\hsz]{gt_frankfurt_000000_005898_leftImg8bit} & \includegraphics[width=\hsz]{predS_frankfurt_000000_005898_leftImg8bit} & \includegraphics[width=\hsz]{predSBCE_frankfurt_000000_005898_leftImg8bit} & \includegraphics[width=\hsz]{diff_frankfurt_000000_005898_leftImg8bit}\\%[.2mm] &\includegraphics[width=\hsz]{img_frankfurt_000001_010830_leftImg8bit} & \includegraphics[width=\hsz]{gt_frankfurt_000001_010830_leftImg8bit} & \includegraphics[width=\hsz]{predS_frankfurt_000001_010830_leftImg8bit} & \includegraphics[width=\hsz]{predSBCE_frankfurt_000001_010830_leftImg8bit} & \includegraphics[width=\hsz]{diff_frankfurt_000001_010830_leftImg8bit}\\%[.2mm] &\includegraphics[width=\hsz]{img_frankfurt_000001_063045_leftImg8bit} & \includegraphics[width=\hsz]{gt_frankfurt_000001_063045_leftImg8bit} & \includegraphics[width=\hsz]{predS_frankfurt_000001_063045_leftImg8bit} & \includegraphics[width=\hsz]{predSBCE_frankfurt_000001_063045_leftImg8bit} & \includegraphics[width=\hsz]{diff_frankfurt_000001_063045_leftImg8bit}\\[.9mm] \footnotesize\multirow{3}{*}{\raisebox{-2.2\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{CamVid dataset}}} &\includegraphics[width=\hsz]{img_0001TP_008730} & \includegraphics[width=\hsz]{gt_0001TP_008730} & \includegraphics[width=\hsz]{predS_0001TP_008730} & \includegraphics[width=\hsz]{predSBCE_0001TP_008730} & \includegraphics[width=\hsz]{diff_0001TP_008730}\\%[.2mm] &\includegraphics[width=\hsz]{img_Seq05VD_f01080} & \includegraphics[width=\hsz]{gt_Seq05VD_f01080} & 
\includegraphics[width=\hsz]{predS_Seq05VD_f01080} & \includegraphics[width=\hsz]{predSBCE_Seq05VD_f01080} & \includegraphics[width=\hsz]{diff_Seq05VD_f01080}\\%[.2mm] &\includegraphics[width=\hsz]{img_Seq05VD_f01800} & \includegraphics[width=\hsz]{gt_Seq05VD_f01800} & \includegraphics[width=\hsz]{predS_Seq05VD_f01800} & \includegraphics[width=\hsz]{predSBCE_Seq05VD_f01800} & \includegraphics[width=\hsz]{diff_Seq05VD_f01800}\\[.9mm] \footnotesize\multirow{3}{*}{\raisebox{-1.5\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{Freiburg Forest dataset}}}% &\includegraphics[width=\hsz]{img_b41-302_Clipped} & \includegraphics[width=\hsz]{gt_b41-302_Clipped} & \includegraphics[width=\hsz]{predS_b41-302_Clipped} & \includegraphics[width=\hsz]{predSBCE_b41-302_Clipped} & \includegraphics[width=\hsz]{diff_b41-302_Clipped}\\%[.2mm] &\includegraphics[width=\hsz]{img_b144-307_Clipped} & \includegraphics[width=\hsz]{gt_b144-307_Clipped} & \includegraphics[width=\hsz]{predS_b144-307_Clipped} & \includegraphics[width=\hsz]{predSBCE_b144-307_Clipped} & \includegraphics[width=\hsz]{diff_b144-307_Clipped}\\%[.2mm] &\includegraphics[width=\hsz]{img_b191-493_Clipped} & \includegraphics[width=\hsz]{gt_b191-493_Clipped} & \includegraphics[width=\hsz]{predS_b191-493_Clipped} & \includegraphics[width=\hsz]{predSBCE_b191-493_Clipped} & \includegraphics[width=\hsz]{diff_b191-493_Clipped}\\[1.2mm] &\scriptsize (a) Image & \scriptsize (b) Ground truth & \scriptsize (c) Prediction & \scriptsize (d) Prediction+MTL & \scriptsize (e) Comparison\\ \end{tabular} } \caption{Qualitative comparison of the predictions of the AdapNet++~\cite{Valada2019}, UNet~\cite{Ronneberger2015} and FastNet~\cite{Oliveira2016} models (without and with multi-task) against the ground-truth. The comparison column shows an overlap of the predictions against the ground-truth. Note, the green regions are correctly segmented. 
The red color represents regions erroneously segmented by the original models (AdapNet++, UNet, and FastNet), and the blue ones are erroneously segmented by the models with multi-task learning. The regions incorrectly segmented by both predictions are purple.} \label{fig:qualitative_cityscape_camvid} \end{figure*} Our experimental results show that adding a multi-task approach to the already defined hourglass models improves the SS task's performance. It is important to note that we add contour-based auxiliary tasks because the original models still exhibit the problem of spatial precision loss. As we saw, this problem is reflected in the boundaries of the segmented objects (see Fig.~\ref{fig:SS_loss_spatial_precision}). \makeatletter \begin{figure}[tb] \includegraphics[width=\linewidth]{kiviat} \caption{% Comparison of the execution time of hourglass models (for a growing combination of tasks, i.e., S, SE, SEC, SECD) in the training phase (left) and in the testing phase (right), both in minutes. We use the Cityscapes dataset with \num{17500} and \num{500} samples for training and testing, respectively. Note that the auxiliary tasks are only used in the training phase. In the testing phase, we only use the time shown in blue (right). } \label{fig:time_exec} \end{figure} \makeatother Also, we present an efficiency comparison of the hourglass models (see Fig.~\ref{fig:time_exec}) in the training and testing phases. We used the Cityscapes dataset with \num{17500} training samples and \num{500} testing samples (in our case, the validation samples) for these experiments. The results show the execution time of one epoch (a forward pass over the entire dataset) on a single GPU\@. Note that we only do this for an increasing sequence of combinations, S, SE, SEC, and SECD, due to our computational limitations. We executed the process five times and report the averages of the training (left) and testing phase (right), both in minutes.
Our training plots show that multi-task learning increases the time required to train the hourglass models. This increment is directly related to the complexity of the contour-based auxiliary tasks, which proved challenging enough to shape the latent space. Remember that, for testing, we only use a single task: semantic segmentation. In other words, for \num{500} samples, the models need the time presented by the segmentation plot~(S) on the right of Fig.~\ref{fig:time_exec}. Finally, the previous experiments were conducted on NVIDIA GTX Titan~X GPUs with \SI{12}{\giga\byte} of memory each, using four GPUs (multi-GPU). \subsection{Discussion} \label{sec:discussions} In the previous experiments, we showed that learning multiple contour-based tasks on the hourglass models improves the models' learning through the redundant information needed to solve the tasks. We noticed that the tasks increase the models' ability to accommodate noise during the training phase. Consequently, the tasks reduce the model's overfitting risk by providing a gradient that tends to keep the latent space from overfitting. This advantage in the feature space is largely due to the clustering behavior of the latent space, which we achieve through a more robust feature extraction. The addition of auxiliary tasks changes the weight-updating dynamics (i.e., gradient updating) such that the model learns robust features that work across all the used tasks. The robustness of the space comes from adding tasks related to the blob-prediction one (i.e., the SS task). Hence, by restricting the latent space through more tasks in an unsupervised manner, we robustify the latent space, as shown by our experiments. For example, one of the main problems with SS methods is to correctly predict the boundaries of the objects, since most of the accuracy comes from correctly detecting the main blob.
By adding a contour-based auxiliary task, we increase the learning rate's effectiveness for this case. This increment happens since the same model is forced to understand the boundaries of the objects to predict the contours while being asked to predict the blobs as well (through the other task). We found that, by simultaneously learning to solve related tasks, the learned features improve with each task added (cf.\ Fig.~\ref{fig:latent-space-tasks}). This result is intuitive if we assume that the different tasks have a common latent space, as shown in Fig.~\ref{fig:latent_representation_space}. Then, by simultaneously optimizing in this shared space, our multi-task learning problem finds solutions in the intersection of the tasks, which in turn improves each of them. \begin{figure}[tb] \centering \begin{tikzpicture} \path[name path=tsp, draw] ( 45:.7) circle (1) node[above right] (ts) {$\mathcal{T}_S$}; \draw (135:.7) circle (1) node[above left] (te) {$\mathcal{T}_E$}; \draw (225:.7) circle (1) node[below left] (td) {$\mathcal{T}_D$}; \draw (315:.7) circle (1) node[below right] (tc) {$\mathcal{T}_C$}; \node[below left=20pt and 10pt of td, font=\footnotesize, anchor=west] (lbl) {Learned feature space}; \coordinate (bbsw) at (current bounding box.south west); \coordinate (bbne) at (current bounding box.north east); \begin{scope}% \clip ( 45:.7) circle (1); \clip (135:.7) circle (1); \clip (225:.7) circle (1); \fill[gray] (315:.7) circle (1); \node (int) {}; \end{scope} \draw (int) -- ($(ts)!.5!(tc)+(1,0)$) node[font=\footnotesize, anchor=west] {Best features}; \pgfresetboundingbox \path[use as bounding box] (bbsw) rectangle (bbne); \end{tikzpicture} \caption{Example of the latent space (i.e., feature representations) learned by
backpropagation when using several auxiliary tasks. The representations at the intersection of all the tasks are better since they satisfy several tasks simultaneously. } \label{fig:latent_representation_space} \vspace{-10pt} \end{figure} When training multiple tasks simultaneously, from the model's point of view, the hidden units in the hourglass models improve in two ways: (i)~a larger number of parameters is involved in updating the weights (i.e., backpropagation) when using MTL, and (ii)~the most relevant parameters (i.e., those most frequently influenced by all tasks) achieve a better (more robust) feature extraction. This adjustment of the parameters of the MTL hourglass models has a regularization effect, in addition to yielding more stable training (i.e., the reduced variance in the training curves shown in Fig.~\ref{fig:stable_training}). \begin{figure}[tb] \centering \begin{tikzpicture}% \begin{groupplot}[ group style={ group size=3 by 1, xlabels at=edge bottom, ylabels at=edge left, x descriptions at=edge bottom, y descriptions at=edge left, horizontal sep=.1cm, }, footnotesize, height=4cm, width=6.8cm, cycle list/Paired, cycle multiindex* list={% [2 of]mark list\nextlist black!75, Dark2-B\nextlist }, /tikz/mark repeat=10, /tikz/mark phase=1, ymajorgrids, major grid style={dashed}, ylabel={IoU ($\%$)}, xlabel={Iterations}, xtick={100,300,600,900}, xmax=1000, xmin=0, ytick={55,65,75,85}, ymax=90, ymin=55, x tick label style={/pgf/number format/precision=0}, legend pos=outer north center, legend columns=2, legend style={ cells={anchor=west}, font=\scriptsize, draw=none, }, ] \nextgroupplot[% ]% \addplot table[x=iter, y=iou, col sep=comma, header=true]{img/stable_SegNet_S.csv};% \addlegendentry{SegNet}% \addplot table[x=iter, y=iou, col sep=comma, header=true]{img/stable_SegNet_SBCE.csv};% \addlegendentry{SegNet+MTL}% \end{groupplot}% \end{tikzpicture}%
\caption[stable training]{% IoU over the iterations in the training phase of the hourglass model SegNet~\cite{Badrinarayanan2017} focused on the semantic segmentation task on the CamVid dataset. By adding contour-based auxiliary tasks in the training phase, we achieve a robust feature extraction that increases the ability to handle noise and reduces the risk of overfitting. Consequently, the model that uses MTL achieves more stable learning than its counterpart. } \label{fig:stable_training} \vspace{-10pt} \end{figure} Unlike other architectures (e.g., DeepLabV3), hourglass models have a decoding stage that is complex enough (i.e., a set of reconstruction operations) to highlight the changes produced in the latent space. For this reason, our research focuses on hourglass models. Note that each added auxiliary task has a different effect on what is learned in the latent space. Changes in the architecture can alter, in different ways, how backpropagation benefits from contour-based auxiliary tasks. \section{Future Work} \label{sec:Future_work} In our work, we observed the clustering behavior of the latent space. Future work may focus on using a clustering framework to impose particular biases when learning the latent representations. By forcing the latent space into clusters, we expect to need fewer tasks in the training phase. Another avenue to explore is the influence of the auxiliary contour-based tasks on architectures other than hourglass-based ones. Regarding extending our work to videos, on the one hand, we need to extend the architectures to work with 3D data. Thus, we will need bigger models that rely on 3D convolutions to perform segmentation on volumetric data. On the other hand, we will need to maintain not only spatial consistency but also temporal consistency.
This new constraint will be akin to the problems we face today in trying to reduce the instabilities at the segmentation boundaries. Hence, future work will need to find relevant tasks that help to stabilize the temporal consistency as well. Perhaps we could explore optical flow as a first approach, since it is closely related to the spatial boundaries. \section{Conclusions} \label{sec:Conclusions} In this paper, we incorporated auxiliary contour-based tasks to address the loss of spatial precision. This problem commonly appears at the boundaries of segmented objects. Thus, we propose to use edge detection and semantic contour tasks to reinforce the semantic information at the object boundaries. We also proposed using a quantized distance transform to add geometric information into the internal representation of deep neural networks (i.e., hourglass models). We observed (through empirical experiments) that the latent space of hourglass models exhibits clustering behavior when complementary information is added (due to the auxiliary tasks). Note that the latent space does not present a random distribution. Instead, better-distributed clusters produce, in turn, better segmentation results. We also showed that when using all the tasks, the activation maps (the regions used by the networks to perform the segmentation prediction) adjust better to the edges of objects. Although the activation maps vary depending on the input image, the latent space's behavior produces an improvement in the quality of segmentation. Additionally, we verified (empirically) that the improvement produced by using multiple tasks addresses the problem of loss of spatial precision in segmentation. In other words, we verified (by using trimaps) that the clustered latent space improves the edges of the segmented objects and, consequently, the final segmentation. We also interpret that, by adding contour-based auxiliary tasks, the models achieve a more powerful generalization.
In order not to limit our study (to three models), we compared the results of the different hourglass models (with and without MTL) existing in the literature on other datasets (Cityscapes and Freiburg Forest). Finally, the empirical exploration showed that it is possible to better fit the models (obtain a latent space with clustering behavior) for the semantic segmentation task when we use complementary information (by adding contour-based auxiliary tasks) in the training phase. \input{appendix} \bibliographystyle{IEEEtran} \footnotesize \section{Final Representations} \label{sec:apx_final_representation} The methodology proposes to combine information on similar tasks using supervised learning. Then, we need to know the operations used to obtain the comparison masks for the different tasks. Keep in mind that all datasets use the same preprocessing to obtain the edges and the quantized distance transform. Thus, to obtain the objects' edges, we take the instance masks and look for differences in the instance labeling. For this, we used D-4 connectivity (up, down, right, left)~\cite{Gonzalez2006}, and to better highlight the boundaries, we use the morphological operation of dilation~\cite{Gonzalez2006} with a disk-shaped structuring element of size $2$. On the other hand, to add geometric information, we extract the distance of each pixel to the objects' boundaries (i.e., the distance transform~\cite{Gonzalez2006}). Using this distance transform as a learning task gives us the following advantages: i)~we can easily extract it from the instance masks, and ii)~the quantized distance transform can be easily trained with the existing loss functions. Note that this representation, based on the distance transform, allows us to infer the complete shape of an object instance even with incomplete information (i.e., when only part of the object is shown).
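A minimal numpy sketch of this edge-extraction preprocessing (D-4 neighbor comparison followed by a naive dilation; the function name and the `radius` parameter are our own, standing in for the disk-shaped structuring element of size $2$):

```python
import numpy as np

def instance_edges(inst, radius=2):
    """Mark pixels whose D-4 neighbor carries a different instance label,
    then thicken the result by dilating with a disk-shaped element."""
    edges = np.zeros(inst.shape, dtype=bool)
    # D-4 connectivity: compare each pixel with its up/down/left/right neighbor.
    edges[:-1, :] |= inst[:-1, :] != inst[1:, :]
    edges[1:, :] |= inst[1:, :] != inst[:-1, :]
    edges[:, :-1] |= inst[:, :-1] != inst[:, 1:]
    edges[:, 1:] |= inst[:, 1:] != inst[:, :-1]
    # Naive morphological dilation with a disk structuring element.
    ys, xs = np.nonzero(edges)
    out = np.zeros_like(edges)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy * dy + dx * dx <= radius * radius:
                yy = np.clip(ys + dy, 0, edges.shape[0] - 1)
                xx = np.clip(xs + dx, 0, edges.shape[1] - 1)
                out[yy, xx] = True
    return out
```

In practice, library routines such as `scipy.ndimage.binary_dilation` would replace the explicit loops.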
The distance transform produces a wide range of values when objects have different shapes and sizes, see Fig.~\ref{fig:dist_tranf}. For this reason, we truncate the transformation given a threshold $R$, thus guaranteeing a limited range of values, see Fig.~\ref{fig:dist_trunc}. Therefore, similar to the models of~\cite{Hayder2017,Bischke2019}, we define $Q$ as the set of pixels on the boundary of the object and $\text{IS}_i$ as the set of pixels belonging to instance mask $i$. For every pixel $p$, we compute a truncated distance $D_t(p)$ to $Q$ as \begin{equation} \label{eq:dist_transf} D_t(p) = \gamma_p \min \big( \min_{q \in Q} \lceil d(p,q) \rceil, R \big), \end{equation} where $d(p, q)$ represents the Euclidean distance between pixels $p$ and $q$, $\lceil z \rceil$ gives the nearest integer larger than $z$, and $R$ is the truncation threshold. Finally, the function $\gamma_p$ denotes whether the pixel $p$ is inside or outside an instance mask $\text{IS}_i$: \begin{equation} \gamma_p =\begin{cases} 1, & \text{if} \ p \in \text{IS}_i,\\ 0, & \text{otherwise}.\\ \end{cases} \end{equation} To facilitate the energy labeling (i.e., of the continuous distance values), we quantize these values into $K$ uniform bins by one-hot encoding the distance map into a binary vector representation $b(p)$ as~\cite{Hayder2017} \begin{equation} \label{eq:dist_quantized} D_q(p) = \sum_{k=1}^K r_k b_k(p), \qquad \sum_{k=1}^K b_k(p) = 1, \end{equation} where $r_k$ is the distance value corresponding to bin $k$. The $K$ binary maps are the classification maps for the $k$-th edge distance. We can see an example in Fig.~\ref{fig:dist_quant}.
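The truncation and quantization steps above can be sketched in a few lines of numpy (brute force and for illustration only; the uniform binning scheme and the background-label-$0$ convention for $\gamma_p$ are our assumptions):

```python
import numpy as np

def truncated_quantized_dt(inst, R=5, K=5):
    """Per-pixel truncated distance to the instance boundary, one-hot
    quantized into K uniform bins (brute force, illustration only)."""
    h, w = inst.shape
    # Boundary set Q: pixels with a D-4 neighbor of a different label.
    pad = np.pad(inst, 1, mode="edge")
    boundary = ((pad[:-2, 1:-1] != inst) | (pad[2:, 1:-1] != inst) |
                (pad[1:-1, :-2] != inst) | (pad[1:-1, 2:] != inst))
    qy, qx = np.nonzero(boundary)
    yy, xx = np.mgrid[:h, :w]
    d = np.sqrt((yy[..., None] - qy) ** 2 + (xx[..., None] - qx) ** 2)
    # D_t(p) = gamma_p * min(ceil(min_q d(p, q)), R); gamma is 1 inside a mask.
    dt = np.minimum(np.ceil(d.min(axis=-1)), R) * (inst > 0)
    # One-hot encode the distances into K uniform bins b_k(p).
    bins = np.minimum((dt * K / (R + 1)).astype(int), K - 1)
    onehot = np.eye(K, dtype=np.float32)[bins]
    return dt, onehot
```

A real implementation would use an efficient routine such as `scipy.ndimage.distance_transform_edt` instead of the pairwise distances.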
\begin{figure*}[tb] \centering \subfloat[Image]{\includegraphics[width=1.7in, height=2.5cm]{img_000125_10_crop9}% \label{fig:energy_img}} \hfil \subfloat[Distance Transform]{\includegraphics[width=1.7in, height=2.5cm]{lb_dist_000125_10_crop9}% \label{fig:dist_tranf}} \hfil \subfloat[Truncated Distance]{\includegraphics[width=1.7in, height=2.5cm]{lb_dist_trunc_000125_10_crop9}% \label{fig:dist_trunc}} \hfil \subfloat[Quantized Distance]{\includegraphics[width=1.7in, height=2.5cm]{lb_dist_quant_000125_10_crop9}% \label{fig:dist_quant}} \caption{% Intending to merge semantic and geometric information, we use a (d)~quantized distance obtained from a (b)~Euclidean distance transform. To enhance the distance transform energy, (c)~we use truncation and normalization. Note that this quantized distance is easy to combine with the multi-label cross-entropy loss function.} \label{fig:energy_level} \end{figure*} This quantized distance transform operation is not new, having been explored in previous models~\cite{Hayder2017,Bischke2019}. However, contrary to others that use this transformation inside a bounding box~\cite{Hayder2017} or for a single class (i.e., buildings)~\cite{Bischke2019}, our technique applies the transformation to instances that belong to different classes. \section{Class Imbalance} \label{sec:apx_imbalance} In this work, we have class imbalance during training. The dataset imbalance causes (i)~inefficient training, because classes with few samples may not be sufficiently observed by the network during the training stage; and (ii)~a higher risk of overfitting, since a small number of samples can degenerate the model.
To address this problem, we use median frequency (counting) balancing~\cite{Eigen2016}, defined by \begin{equation} \label{eq:class_balancing} \tau_c = \frac{\bar{f}}{f(c)}, \end{equation} where $f(c)$ is the number of pixels of class $c$ divided by the total number of pixels in images where $c$ is present, and $\bar{f}$ is the median of these frequencies (counts). Finally, we use these class weights in Sections~\ref{sec:apx_train_ss} (as $\alpha_i$) and~\ref{sec:apx_train_el} (as $\mu_i$). \section{Learning Multi-Task Framework} \label{sec:apx_training} We used several hourglass networks based on the MTL approach, where tasks help each other adjust their parameters. With this approach, we get a good delimitation of the objects' edges by sharing the information extracted from all the tasks (i.e., by sharing the parameters). In the last two layers of the decoding stage, we extract specific information to discriminate each task. Next, we explain the details of the output learning using MTL for edge detection, semantic segmentation, semantic contours, and the truncated distance transform (energy level). \subsection{Edge Detection Training} \label{sec:apx_train_bound} In the first specific decoding stage, we learn to detect the edges of each instance object. In order to handle the imbalance between the two binary classes (edge, no edge), we used the HED loss~\cite{Xie2015}, a class-balanced cross-entropy function. Then we consider the edge-class objective function \begin{equation} \label{eq:hed_loss} \begin{aligned} \mathcal{L}_{\mathit{c}} =& -\beta \sum_{i \in Y_+} \log P \left( y_i=1 \given X; \theta \right) \\ & - (1-\beta) \sum_{j \in Y_-} \log P \left( y_j=0 \given X; \theta \right), \end{aligned} \end{equation} where $y_i$ and $y_j$ are the indexed predicted edges (outputs) for the $i$-th and $j$-th pixel, respectively. Here, $\theta$ represents the network parameters to be optimized in the edge stage.
The proportions of the positive (edge) and negative (no edge) classes on the ground-truth edges $Y$ are $\beta = |Y_+|/|Y|$ and $1-\beta = |Y_-|/|Y|$, where $Y = Y_+ \cup Y_-$. Moreover, $P$ is the probability that a pixel contains an edge (the output of the edge stage), defined by a sigmoid function, such that \begin{equation} \label{eq:prob_output} P = P \left( y_i \given X; \theta \right) = \sigma(y_i) \in [0,1]. \end{equation} Although the HED loss proved useful for training edge detection, training time can be reduced and edges further penalized by maximizing the intersection-over-union~\cite{Csurka2013}. Then, we consider the objective function \begin{equation}\label{eq:iou_loss} \mathcal{L}_{\mathit{iou}} = 1 - \frac{P \cap Y}{P \cup Y} = 1 - \frac{\sum_{v \in Y} P_v Y_v}{\sum_{v \in Y} P_v + Y_v - P_v Y_v}. \end{equation} Finally, for edge detection, we combine both loss functions to obtain our final objective function, defined by \begin{equation} \label{eq:edge_loss} \mathcal{L}_E = \psi_1 \mathcal{L}_{\mathit{c}} + \psi_2 \mathcal{L}_{\mathit{iou}}, \end{equation} where $\psi_1$ and $\psi_2$ are hyper-parameters that define the contribution of each loss to the learning process. \subsection{Semantic Segmentation Training} \label{sec:apx_train_ss} In the second specific decoding stage, we learn to classify each object at the pixel level (i.e., semantic segmentation). We use a multi-label balanced cross-entropy loss function to address the imbalance problem. Thus, we define this function as \begin{equation} \label{eq:cross_loss_s} \mathcal{L_\mathit{cross\ ss}} = -\frac{1}{N}\sum_{i=1}^N \alpha_i \log P(s = s_i \given X; \phi), \end{equation} where $s_i$ is the indexed predicted classification (output) for the $i$-th class from the set of semantic segmentation ground truths $S$. Additionally, $N$ is the number of classes, and $\phi$ denotes the network parameters to optimize in the semantic segmentation stage.
Also, $P(\cdot)$ is the probability that a pixel belongs to the $i$-th class. Similar to~\eqref{eq:prob_output}, this function is defined by a sigmoid activation function. Besides, similar to the previous section, we use an intersection-over-union objective function to penalize the segmentation boundaries. Contrary to $\mathcal{L_\mathit{iou}}$~\eqref{eq:iou_loss}, at this stage, we use a multi-label function, defined by \begin{equation}\label{eq:multi_iou_s} \mathcal{L}_{\mathit{iou\ ss}} = 1 - \sum_{i=1}^N\frac{P_i \cap S_i}{P_i \cup S_i}. \end{equation} Subsequently, we combine both loss functions into our final objective function for semantic segmentation. Thus, we define the function as \begin{equation} \label{eq:ss_loss} \mathcal{L}_S = \psi_3 \mathcal{L}_{\mathit{cross\ ss}} + \psi_4 \mathcal{L}_{\mathit{iou\ ss}}, \end{equation} where $\psi_3$ and $\psi_4$ are hyper-parameters used to control the influence of each part of the function (i.e., a weighted sum). Keep in mind that the semantic contour task uses the same loss function as $\mathcal{L}_S$, named $\mathcal{L}_C$, but with hyper-parameters $\omega$. \subsection{Energy Level Training} \label{sec:apx_train_el} In the last specific decoding stage, we learn to classify the bins of each level of the truncated distance transform. In other words, we train the network (with parameters $\varphi$) to classify each level of the discretized distance transform (i.e., a $K$-bin classifier).
Thus, similar to the previous section, we use a multi-label balanced cross-entropy loss function, \begin{equation} \label{eq:cross_loss_e} \mathcal{L_\mathit{cross\ e}} = -\frac{1}{K}\sum_{i=1}^K \mu_i \log P(k = k_i \given X; \varphi), \end{equation} and a multi-label intersection-over-union loss function, \begin{equation}\label{eq:multi_iou_e} \mathcal{L}_{\mathit{iou\ e}} = 1 - \sum_{i=1}^K\frac{P_i \cap K_i}{P_i \cup K_i}, \end{equation} on a set of $K$ bins. Finally, we merge our loss functions for the energy level as \begin{equation} \label{eq:e_loss} \mathcal{L}_D = \psi_5 \mathcal{L}_{\mathit{cross\ e}} + \psi_6 \mathcal{L}_{\mathit{iou\ e}}, \end{equation} where $\psi_5$ and $\psi_6$ are hyper-parameters that define the contribution of each loss to the learning process. \section{More Visualizations of Latent Space Behavior} \label{sec:apx_visualization_LS} In this section, we show visualizations (Fig.~\ref{fig:latent0-space-tasks}) complementary to Fig.~\ref{fig:latent-space-tasks} of the main document. We also present additional plots of the latent-space behavior on a subset of the CamVid dataset (Fig.~\ref{fig:latent1-space-tasks}). We build the subset with $10$ random image samples. From these images, we selected $446$ random labeled pixels for each class (all from the testing set).
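As an illustration of the clustering metrics reported in the figure captions, the Davies-Bouldin index (DBI, lower is better) can be computed from the latent features and their class labels as follows (a plain numpy sketch; the SSI and CHI scores follow analogous formulas, and library implementations exist in scikit-learn):

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: mean over clusters of the worst ratio of
    within-cluster scatter to between-centroid distance (lower is better)."""
    classes = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in classes])
    # Mean distance of each cluster's points to its centroid (scatter S_i).
    scatter = np.array([
        np.linalg.norm(X[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)])
    k = len(classes)
    worst = np.zeros(k)
    for i in range(k):
        ratios = [(scatter[i] + scatter[j]) /
                  np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(k) if j != i]
        worst[i] = max(ratios)
    return worst.mean()
```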
\begin{figure*}[tb] \centering \setlength{\wsz}{2.2in} \setlength{\hsz}{1.1in} {\captionsetup{justification=centering} \subfloat[S \protect\\ $\text{SSI}=0.384$, $\text{DBI}=1.360$, $\text{DBI}=1.360$]{\includegraphics[width=\wsz,height=\hsz]{latent0-S}% \label{fig:latent0-S}} \hfil \subfloat[S+E \protect\\ $\text{SSI}=0.391$, $\text{DBI}=1.141$, $\text{DBI}=1.141$]{\includegraphics[width=\wsz,height=\hsz]{latent0-BS}% \label{fig:latent0-SB}} \hfil \subfloat[S+D \protect\\ $\text{SSI}=0.394$, $\text{DBI}=1.275$, $\text{DBI}=1.275$]{\includegraphics[width=\wsz,height=\hsz]{latent0-SE}% \label{fig:latent0-SE}} \vfil \subfloat[S+E+C \protect\\ $\text{SSI}=0.437$, $\text{DBI}=1.150$, $\text{DBI}=1.149$\label{fig:latent0-SBC}]{\includegraphics[width=\wsz,height=\hsz]{latent0-BCS}} \hfil \subfloat[S+E+D \protect\\ $\text{SSI}=0.444$, $\text{DBI}=0.920$, $\text{DBI}=0.920$]{\includegraphics[width=\wsz,height=\hsz]{latent0-BSE}% \label{fig:latent0-SBE}} \hfil \subfloat[S+E+C+D \protect\\ $\text{SSI}=0.636$, $\text{DBI}=0.774$, $\text{DBI}=0.774$]{\includegraphics[width=\wsz,height=\hsz]{latent0-BCSE}% \label{fig:latent0-SBCE}} } \caption[Apx plotting the latent space]{ Plot of the shared latent space on the CamVid testing dataset. Here we combine the different tasks of edge detection~(E), semantic segmentation~(S), semantic contour~(C), and distance transform~(D). Note that when adding tasks related to semantic segmentation, i.e., by providing complementary information, maps of similar features (within a multi-task hourglass model) are clustered together in a similar latent space, and they are not spaced arbitrarily. We confirm this behavior by using a set of clustering metrics, as shown in Table~\ref{tab:ablation2}.
} \label{fig:latent0-space-tasks} \vspace*{-10pt} \end{figure*} \begin{figure*}[tb] \centering \setlength{\wsz}{2.2in} \setlength{\hsz}{1.1in} {\captionsetup{justification=centering} \subfloat[S\protect\\ $\text{SSI}=0.325$, $\text{CHI}=1468.3$, $\text{DBI}=1.572$]{\includegraphics[width=\wsz,height=\hsz]{latent1-S}% \label{fig:latent1-S}} \hfil \subfloat[S+E \protect\\ $\text{SSI}=0.342$, $\text{CHI}=1678.9$, $\text{DBI}=1.114$]{\includegraphics[width=\wsz,height=\hsz]{latent1-BS}% \label{fig:latent1-SB}} \hfil \subfloat[S+D \protect\\ $\text{SSI}=0.365$, $\text{CHI}=1660.9$, $\text{DBI}=1.275$]{\includegraphics[width=\wsz,height=\hsz]{latent1-SE}% \label{fig:latent1-SE}} \vfil \subfloat[S+E+C \protect\\ $\text{SSI}=0.413$, $\text{CHI}=1806.5$, $\text{DBI}=0.966$]{\includegraphics[width=\wsz,height=\hsz]{latent1-BCS}% \label{fig:latent1-SBC}} \hfil \subfloat[S+E+D \protect\\ $\text{SSI}=0.508$, $\text{CHI}=2578.5$, $\text{DBI}=0.922$]{\includegraphics[width=\wsz,height=\hsz]{latent1-BSE}% \label{fig:latent1-SBE}} \hfil \subfloat[S+E+C+D \protect\\ $\text{SSI}=0.510$, $\text{CHI}=2795.3$, $\text{DBI}=0.776$]{\includegraphics[width=\wsz,height=\hsz]{latent1-BCSE}% \label{fig:latent1-SBCE}} } \caption[Additional plotting the latent space ]{ Additional results of the shared latent space for the dataset subset (random labeled-pixels sample on CamVid testing dataset). Merging tasks of edge detection~(E), semantic segmentation~(S), semantic contour~(C), and distance transform~(D). } \label{fig:latent1-space-tasks} \vspace*{-10pt} \end{figure*} \section{Architectures} \label{sec:apx_architectures} The architectures for the semantic segmentation models used in this paper are the same as those used in the original papers. We keep the same number of convolution and deconvolution layers for the encoding and decoding stages. 
We maintain the same number of hidden units for each, that is, channels per layer, and we maintain the same non-linear activation functions and hyperparameters. The setup of the hourglass models is defined in the respective papers of the architectures we used, namely, FCN8~\cite{Long2016}, ParseNet~\cite{Liu2015}, SegNet~\cite{Badrinarayanan2017}, FastNet~\cite{Oliveira2016}, UNet~\cite{Ronneberger2015}, DeconvNet~\cite{Noh2015}, AdapNet++~\cite{Valada2019}, CGBNet~\cite{Ding2020}, FC-DenseNet67~\cite{Jegou2017}, and ENet~\cite{Paszke2016}. Finally, for each specific task block we use two convolutional layers, with kernels of $1\times1$ and $8\times8$, respectively; both have a depth (number of channels) equal to the number of classes in the dataset for tasks $S$ and $C$, a depth of $1$ for task $B$, and a depth of $6$ for task $D$.
2021 Action Drama History War Bhuj: The Pride of India 1080P-HDRIP
2019 Action Adventure Comedy Total Dhamaal
2019 Drama History Mission Mangal
2019 Action Drama Laal Kaptaan 720P-HDRIP
Khandaani Shafakhana 2019 Drama Romance
2019 Action Dabangg 3
Yamla Pagla Deewana: Phir Se
Welcome to New York 2018 Comedy Drama Romance
Happy Phirr Bhag Jayegi 2017 Adventure Comedy Drama Romance
2017 Crime Mystery Thriller Ittefaq 1080P-HDTV
2010 Action Comedy Crime Dabangg
2016 Action Crime Thriller Tevar
Action Jackson
2013 Action Romance R... Rajkumar
2013 Action Crime Drama Once Upon ay Time in Mumbai Dobaara!
Lootera
2013 Action Comedy Bullett Raja

Set in the backdrop of the 1971 Indo-Pakistan War, the film tells the story of the IAF Squadron Leader Vijay Karnik, and his bravery, patriotism and determination.

Con-artists Guddu and Johnny, a bickering couple Avinash and Bindu who are on the verge of divorce, a fireman Lallan and his side-kick siblings Adi and Manav, and a cunning police commissioner set out on a mad chase for a hidden booty of Rs 50 crores in a zoo in Janakpur.

A team of Indian scientists at ISRO (Indian Space Research Organisation) takes on the extraordinary task of successfully sending a satellite into the orbit of planet Mars in the country's maiden attempt.

When Baby Bedi is entrusted with the job of running the controversial sex clinic 'Khandaani Shafakhana' in a small town of Punjab, she faces severe backlash from all quarters. Can she find a cure for the widespread social stigma against important issues like sex education and sexual health?

Set in 1945, in pre-Independence India, the elite, opulent and solemn world of the Chaudhry family, and the wild, mysterious and musical underbelly of the town, Hira Mandi, clash when Roop Chaudhry encounters Zafar, a daredevil from Hira Mandi, unleashing deep-buried truths, secrets of betrayal and affairs that threaten to bring both worlds crashing down.
Chulbul this time has to take on a criminal named Balli Singh, who has disrupted other people's lives with his annoying antics.

An upright Ayurveda practitioner is hounded by big pharma giants for his age-old formula called Vajra Kawach that cures everything from pimples to impotency. But how long can he keep this magic pill only for the poor?

Recovery agent Teji and fashion designer Jinal dream of making it big in showbiz. Both win a ticket to a popular awards show in New York where they get a chance to showcase their talent. However, they soon realise that they are mere pawns in the hands of event manager Sophie, who wants to teach her boss a lesson.

Horticulture professor Happy arrives in Shanghai, and the other Happy, along with husband Guddu, also lands up in the Chinese city at the same time. Gangsters who've come to kidnap Happy and her husband pick up the wrong Happy, while Guddu and his wife Happy are escorted to a university to deliver a lecture.

The jumbled up, crazy and happening life of journalist Noor takes a dramatic turn when she comes across a breaking news cover story.

Police officer Dev investigates a double murder case that has only two witnesses - an acclaimed writer Vikram and a young homemaker Maya, who also happen to be the prime suspects in the case. He finds himself torn between their own versions of what happened on the fateful night, and takes it upon himself to figure out the real story and capture the real murderer.

ACP Yashvardhan teams up with RAW Agent KK to bring down the mastermind terrorist, Shiv.

A troubled relationship with his younger half-brother and stepfather compels Chulbul Pandey to become a corrupt but fearless cop whose life changes when he locks horns with a corrupt political leader.

Akira Sharma is your average Jane from Jodhpur. Early in life she sees an atrocity committed on a neighbour and learns to defend herself. And, a spitfire is born.
A Kabaddi player rescues a young woman from an unwanted marriage and hides her in his home.

A soldier on vacation finds himself hunting down a terrorist.

A military officer attempts to hunt down a terrorist, destroy a terrorist gang and deactivate the sleeper cells under its command.

A man meets his lookalike, who's not just a killer of evil, but also a kind-hearted man. Together, they team up to fight against a dreaded gangster.

Rajkumar, an aimless youth, works for a drug baron and is sent to kill a rival dealer. His life is changed forever when he meets Chanda, and has no idea she is the adopted daughter of the man he is supposed to kill.

Once Upon A Time In Mumbai Dobaara is a sequel to Once Upon A Time In Mumbai.

In a village, a young archaeologist falls in love with a landlord's daughter. Their union seems doomed. But destiny brings them together a year later. Will they live happily ever after?

When a temple priest commits suicide after being dishonored by an evil landlord, his son returns to his native village on a mission of vengeance.

A common man who transforms into a gangster revolts against the very system he once obediently followed by declaring war on the police, the government, and the industrialists.
Elísio de Figueiredo served as the first ambassador of Angola to the United Nations, from 1976 to 1988, as well as the Minister of Industry. On 16 March 1979, de Figueiredo, in his letter to the United Nations, requested an urgent meeting of the United Nations Security Council on the question of South Africa's continuous acts of aggression in Angola.

References

External links
UN.int
Q: Two arguments from one input. JavaScript I'm doing JS exercises. The task is: Write a JavaScript program to check whether two given integer values are in the range 50..99 (inclusive). Return true if either of them is in the said range. The task is easy; I took the two numbers from two inputs. Is there a way to take two numbers from only one input, and start the function like this: function task28(fnum, snum){}? Below is my solution with two inputs.
<input type="text" id="task28a" class="form-control" placeholder="write number" aria-label="" aria-describedby="basic-addon2">
<input type="text" id="task28b" class="form-control" placeholder="write number" aria-label="" aria-describedby="basic-addon2">
</br>
<button type="button" class="btn btn-dark btn-sm" onclick="task28()">Check</button>
<p class="answer" id="task28ans"></p>
<script>
function task28() {
  let fnum = document.getElementById("task28a").value;
  let snum = document.getElementById("task28b").value;
  if ( (fnum >= 50 && fnum <= 99) && (snum >= 50 && snum <= 99) ) {
    document.getElementById("task28ans").innerHTML = "true";
  } else {
    document.getElementById("task28ans").innerHTML = "false";
  }
}
</script>

A: I'd suggest using a form and running a function on submit. This also simplifies retrieving your values:
function DoSubmit(){
  var a = document.myform.myinput.value;
  var b = document.myform.message.value;
  console.log(a,b)
}
<form name="myform" onsubmit="DoSubmit();">
  <input type="text" name="myinput" value="" />
  <input type="text" name="message" value="" />
  <input type="submit" name="submit" />
</form>

A: Check the code below:
function task28() {
  let tempVar = document.getElementById("task28a").value;
  let splitArray = tempVar.split(" ");
  if (splitArray.length == 2) {
    let fnum = parseInt(splitArray[0], 10);
    let snum = parseInt(splitArray[1], 10);
    // The task asks for true if either number is in range, so use || here
    if ((fnum >= 50 && fnum <= 99) || (snum >= 50 && snum <= 99)) {
      document.getElementById("task28ans").innerHTML = "true";
    } else {
      document.getElementById("task28ans").innerHTML = "false";
    }
  }
}

<input type="text" id="task28a" class="form-control" placeholder="Enter number" aria-label="" aria-describedby="basic-addon2">
<br/>
<button type="button" class="btn btn-dark btn-sm" onclick="task28()">Check</button>
<p class="answer" id="task28ans"></p>
AFP to PDF - Adobe Acrobat Format (Text and AFPDS Reports can be converted to PDF) AFP to TIFF - TIFF Image Format (Text and AFPDS Reports can be converted to... Is it possible to print a JPG/PDF of a scanned image from the IFS to an IPDS printer? I have been investigating AFPRSC and the comments on here seem mixed on whether this will work. Does anyone know of a freeware solution to convert AFP to PDF on the AS400? We have several applications that need to send printouts to customers via fax and email. The best format to send to them is to convert the document to PDF and then send it. This is not an issue. We are using CVT2PDF to convert the documents and SNDDST to send them out. This works fine for text printouts. We ... Intelligent Printer Data Stream (IPDS) is technically not a Print Description Language (PDL), but instead a communication protocol. More specifically, it is a bidirectional communication protocol and object-oriented print stream between computer systems directly connecting with the print device. Re: AFPDS to PDF conversion problems -- Steven, The document *prints* just fine. The problem is converting to a PDF. I do not know if the font is the problem or something else. Thanks, Jeff Young jyoung@xxxxxxxxxxxx Sr. Programmer Analyst Dynax Solutions, Inc. A ... Use the iSeries command SPLTOPDF to convert spooled files to PDF files using Java. Check out the preparation steps and a list of the commands, step-by-step, needed to convert files. How To Print an Overlay Image Overview You want to eliminate pre-printed forms and reduce costs by sending data to laser printers using AFPDS and plain paper.
Apple Store, Lakeside is a shop forming part of the Apple Store brand. It is open on: Monday 10:00 am - 10:00 pm, Tuesday 10:00 am - 10:00 pm, Wednesday 10:00 am - 10:00 pm, Thursday 10:00 am - 10:00 pm, Friday 10:00 am - 10:00 pm. At weekends its opening hours are: on Saturday 9:00 am - 9:00 pm, on Sunday 11:00 am - 5:00 pm. This store's address is: Lakeside Shopping Centre, Grays, RM20 2ZP. To reach customer service directly, please dial 0170871 7500. Apple Store, Lakeside is frequented by many people living in nearby towns like West Thurrock and South Stifford.
Diving at the 2009 Summer Universiade was contested at the Sports Institute of the Republic of Serbia (which was also the training venue) in Belgrade, Serbia, between 4 and 10 July 2009.

Schedule

Medalists

Men

Women

Medal table

See also
Diving
2009 Summer Universiade
International University Sports Federation (FISU)
International Swimming Federation (FINA)

External links
Conditions are NetBox's mechanism for evaluating whether a set of data meets a prescribed set of conditions. It allows the author to convey simple logic by declaring an arbitrary number of attribute-value-operation tuples nested within a hierarchy of logical AND and OR statements.

## Conditions

A condition is expressed as a JSON object with the following keys:

| Key name | Required | Default | Description |
|----------|----------|---------|-------------|
| attr | Yes | - | Name of the key within the data being evaluated |
| value | Yes | - | The reference value to which the given data will be compared |
| op | No | `eq` | The logical operation to be performed |
| negate | No | False | Negate (invert) the result of the condition's evaluation |

### Available Operations

* `eq`: Equals
* `gt`: Greater than
* `gte`: Greater than or equal to
* `lt`: Less than
* `lte`: Less than or equal to
* `in`: Is present within a list of values
* `contains`: Contains the specified value

### Accessing Nested Keys

To access nested keys, use dots to denote the path to the desired attribute. For example, assume the following data:

```json
{
    "a": {
        "b": {
            "c": 123
        }
    }
}
```

The following condition will evaluate as true:

```json
{
    "attr": "a.b.c",
    "value": 123
}
```

### Examples

`name` equals "foo":

```json
{
    "attr": "name",
    "value": "foo"
}
```

`name` does not equal "foo":

```json
{
    "attr": "name",
    "value": "foo",
    "negate": true
}
```

`asn` is greater than 65000:

```json
{
    "attr": "asn",
    "value": 65000,
    "op": "gt"
}
```

`status` is not "planned" or "staging":

```json
{
    "attr": "status.value",
    "value": ["planned", "staging"],
    "op": "in",
    "negate": true
}
```

!!! note "Evaluating static choice fields"
    Pay close attention when evaluating static choice fields, such as the `status` field above. These fields typically render as a dictionary specifying both the field's raw value (`value`) and its human-friendly label (`label`). Be sure to specify which of these you want to match on.
## Condition Sets Multiple conditions can be combined into nested sets using AND or OR logic. This is done by declaring a JSON object with a single key (`and` or `or`) containing a list of condition objects and/or child condition sets. ### Examples `status` is "active" and `primary_ip` is defined _or_ the "exempt" tag is applied. ```json { "or": [ { "and": [ { "attr": "status.value", "value": "active" }, { "attr": "primary_ip", "value": "", "negate": true } ] }, { "attr": "tags", "value": "exempt", "op": "contains" } ] } ```
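Although NetBox evaluates these objects internally, the semantics described above can be illustrated with a short, hypothetical Python sketch. The function and variable names here are illustrative only and are not NetBox's actual API:

```python
import operator

# Map each documented operation name to a two-argument predicate.
OPS = {
    "eq": operator.eq,
    "gt": operator.gt,
    "gte": operator.ge,
    "lt": operator.lt,
    "lte": operator.le,
    "in": lambda value, ref: value in ref,
    "contains": lambda value, ref: ref in value,
}

def resolve(data, path):
    # Walk a dotted path, e.g. "a.b.c" -> data["a"]["b"]["c"]
    for key in path.split("."):
        data = data[key]
    return data

def evaluate(condition, data):
    value = resolve(data, condition["attr"])
    result = OPS[condition.get("op", "eq")](value, condition["value"])
    return not result if condition.get("negate", False) else result

data = {"status": {"value": "active"}, "asn": 65010, "tags": ["exempt"]}
print(evaluate({"attr": "asn", "value": 65000, "op": "gt"}, data))            # True
print(evaluate({"attr": "status.value", "value": "active"}, data))            # True
print(evaluate({"attr": "tags", "value": "exempt", "op": "contains"}, data))  # True
```

Note how `op` defaults to `eq` when omitted, and `negate` simply inverts the final result, matching the key table above.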
var XHRReadyStates = { UNSENT: 0, OPENED: 1, HEADERS_RECEIVED: 2, LOADING: 3, DONE: 4 }; function setupThrottledXhr(xhr, xhrProxy) { let { shaper } = xhrProxy; let openedTs, headersTs, loadingTs, doneTs; let loaded = 0; let total = 0; let currentBitrateKpbs; let progressEvents = []; let progressTimer = null; let lastProgressEvent = false; let loadEndEvent = null; let loadEvent = null; let done = false; xhr.onloadend = function(event) { let { _onloadend } = xhrProxy; loadEndEvent = event; if (done) { _onloadend && _onloadend(event); xhrProxy._dispatchWrappedEventType('loadend'); } }; xhr.onload = function(event) { let { _onload } = xhrProxy; //console.log('native load'); loadEvent = event; if (done && xhr.readyState === XHRReadyStates.DONE) { xhrProxy._setupWrappedResponseData(); _onload && _onload(event); xhrProxy._dispatchWrappedEventType('load'); } }; xhr.onreadystatechange = function(event) { const now = Date.now(); const { _onreadystatechange, _onprogress, _onload, _onloadend } = xhrProxy; const triggerStateChange = function(e, readyState) { if (typeof readyState !== 'number') { throw new Error('readyState should be a number'); } xhrProxy._readyState = readyState; _onreadystatechange && _onreadystatechange(e); xhrProxy._dispatchWrappedEventType('readystatechange'); } let latency; let delay1 = 0; let delay2 = 0; switch (xhr.readyState) { case 0: // UNSENT triggerStateChange(event, XHRReadyStates.UNSENT); break; case 1: // OPENED openedTs = now; triggerStateChange(event, XHRReadyStates.OPENED); break; case 2: // HEADERS_RECEIVED headersTs = now; xhrProxy._setupWrappedHeaders(); triggerStateChange(event, XHRReadyStates.HEADERS_RECEIVED); break; case 3: // LOADING loadingTs = now; triggerStateChange(event, XHRReadyStates.LOADING); break; case 4: // DONE doneTs = now; latency = doneTs - openedTs; if (latency < shaper.minLatency) { delay1 = shaper.minLatency - latency; } if (currentBitrateKpbs > shaper.maxBandwidth) { delay2 = (currentBitrateKpbs / 
shaper.maxBandwidth) * latency - latency; } if (delay1 || delay2) { setTimeout(function() { if (loaded === total && !lastProgressEvent) { clearTimeout(progressTimer); _onprogress && _onprogress(progressEvents[progressEvents.length - 1]); xhrProxy._dispatchWrappedEventType('progress'); } triggerStateChange(event, XHRReadyStates.DONE); done = true; if (loadEvent) { xhrProxy._setupWrappedResponseData(); _onload && _onload(loadEvent); xhrProxy._dispatchWrappedEventType('load'); loadEvent = null; } if (loadEndEvent) { _onloadend && _onloadend(loadEndEvent); xhrProxy._dispatchWrappedEventType('loadend'); loadEndEvent = null; } }, Math.max(delay1, delay2)); } else { //console.log('done, not delaying'); done = true; xhrProxy._setupWrappedResponseData(); triggerStateChange(event, XHRReadyStates.DONE); } break; } }; xhr.onprogress = function(event) { const now = Date.now(); const { _onprogress } = xhrProxy; const triggerProgress = function(e) { if (loaded === total) { lastProgressEvent = true; } _onprogress && _onprogress(e); xhrProxy._dispatchWrappedEventType('progress'); } let duration = now - openedTs; let delay; loaded = event.loaded; total = event.total; currentBitrateKpbs = 8 * loaded / duration; // kbps if (currentBitrateKpbs > shaper.maxBandwidth) { delay = (currentBitrateKpbs / shaper.maxBandwidth) * duration - duration; progressEvents.push(event); progressTimer = setTimeout(function() { triggerProgress(event); }, delay); return; } triggerProgress(event); }; } export default setupThrottledXhr;
Orius is a genus of heteropteran hemipterans in the family Anthocoridae. Adults are between 2 and 5 mm long; they are predators, feeding preferentially on the mite Tetranychus urticae, psyllids, whiteflies, and thrips. They are common in gardens and fields. In humans they can inflict a painful, but not venomous, bite. Some species are reared commercially and sold to farmers for use in biological pest control programs. Under laboratory conditions the nymph of Orius niger develops in 14 days at a temperature of 25°; females have a longevity of 60 days and can lay up to 150 eggs.

Species
Orius candiope Herring, 1966
Orius diespeter Herring, 1966
Orius harpocrates Herring, 1966
Orius insidiosus Say, 1832
Orius laevigatus
Orius minutus (Linnaeus, 1758)
Orius nigra (Wolff, 1811)
Orius pumilio (Champion, 1900)
Orius thyestes Herring, 1966
Orius tristicolor (White, 1879)

References

External links
Iowa State University Department of Entomology Iowa Insect Information Notes, minute pirate bug entry (with photo)
\section{Introduction} Recently the view has been advocated that the Gor'kov expansion describing type II superconductors close to the upper critical field $H_{c2}$ may be invalid~\cite{Bahcall}. Even within the mean-field approximation there is a possibility of non-perturbative effects arising from the degeneracy of the Landau levels for electrons moving in a magnetic field. This degeneracy means that even a small perturbation (e.g. superconducting order) can change the quasiparticle levels significantly as compared to the normal state. This effect has been proposed as a mechanism for the breakdown of the standard perturbation theory describing type II superconductors even close to $H_{c2}$. This non-perturbative effect should give rise to effects such as tails of residual superconductivity above the usual $H_{c2}$, the possibility of the superconducting transition being first order, and unusual behavior of the heat capacity, magnetisation etc.\ close to the phase boundary~\cite{Bahcall}. In this work we have examined this possibility. We show that for $T=0$ there are indeed terms not contained in the Gor'kov expansion for the difference between the ground state energy of the mixed state and the normal state. This is in agreement with the results obtained by Bahcall~\cite{Bahcall} and is of no surprise since the Gor'kov expansion is essentially a high temperature series. However, for $T\neq 0$ we show that the non-perturbative terms in the difference $\Omega_S-\Omega_N$ between the thermodynamic potential in the mixed state and the normal state vanish and indeed that the Gor'kov expansion is a convergent series for the superconducting order $\Delta({\mathbf{r}})$ not too large. We thus prove incorrect the claim of Bahcall that there is a non-perturbative third order term in the expression for $\Omega_S-\Omega_N$. We have derived some criteria for the convergence radius of the Gor'kov expansion in the order parameter. 
A comparison between the results of the Gor'kov expansion and a numerical solution of the corresponding Bogoliubov-de Gennes (BdG) equations~\cite{de Gennes} confirms our conclusions. In this paper we work in two dimensions. We do not consider any fluctuation effects relevant to high $T_c$ superconductors~\cite{Blatter}. \section{Mean field theory} Within the mean-field approximation the mixed state of a type II superconductor is described by the solutions to the BdG-equations. In a constant magnetic field the order parameter forms an Abrikosov vortex lattice and it is convenient to use a set of single particle states $\phi_{N,{\mathbf{k}}}$ characterized by the Landau level index $N$ and a wavevector ${\mathbf{k}}$ in the Brillouin zone of the vortex lattice. In this basis the BdG-equations split up into a $2N \times 2N$ secular matrix equation for each ${\mathbf{k}}$, where $N$ is the number of Landau levels participating in the pairing. Explicitly, the BdG equations are \cite{Big Mac1}: \begin{eqnarray}\label{BdG} (\xi_N-E_{\mathbf{k}}^{\eta})u_{N\mathbf{k}}^{\eta}+\sum_{M}F_{ {\mathbf{k}}NM} v_{M\mathbf{k}}^{\eta}=0 \nonumber \\(-\xi_N-E_{\mathbf{k}}^{\eta})v_{N\mathbf{k}}^{\eta} +\sum_{M}F_{{\mathbf{k}}MN}^*u_{M\mathbf{k}}^{\eta}=0 \end{eqnarray} where $u_{N\mathbf{k}}^{\eta}$ is the coefficient of $\phi_{N\mathbf{k}}$ for the Bogoliubov function $u^{\eta}_{{\mathbf{k}}}(\mathbf{r})$ and $v_{N{\mathbf{k}}}^{\eta}$ is the coefficient of $\phi_{N-\mathbf{k}}^*$ for the function $v^{\eta}_{{\mathbf{k}}}(\mathbf{r})$, and $\xi_{N} =(N+1/2 )\hbar \omega_c-\mu$. $\mu$ is the chemical potential.
The off-diagonal elements $F_{{\mathbf{k}}NM}$ are: \begin{equation} F_{{\mathbf{k}}NM}=\int d{\mathbf{r}}\Delta({\mathbf{r}})\phi({\mathbf{r}})_{N,{\mathbf{k}}}^* \phi({\mathbf{r}})_{M,{\mathbf{-k}}}^* \end{equation} and the order parameter is determined self-consistently as: \begin{equation} \Delta({\mathbf{r}})=g\sum_{{\mathbf{k}}\eta}u^{\eta}_{{\mathbf{k}}}({\mathbf{r}}) v^{\eta}_{{\mathbf{k}}}({\mathbf{r}})^*(1-2f^{\eta}_{{\mathbf{k}}}) \end{equation} where $g$ is the coupling strength and $f^{\eta}_{{\mathbf{k}}}=(1+\exp (E_{\mathbf{k}}^{\eta}/k_BT))^{-1}$ is the Fermi function. We neglect any finite Zeeman splitting for simplicity. Due to the translational symmetry the order parameter $\Delta({\mathbf{r}})$ is completely characterised by a finite set of parameters $\Delta_j$~\cite{Big Mac1}. For notational simplicity we work in the lowest Landau level approximation (LLL) (i.e.\ $\Delta_{J\neq0}=0$), in which the center-of-mass motion of the Cooper-pairs has the kinetic energy $\hbar\omega_c/2$, where $\omega_c$ is the cyclotron frequency. None of the conclusions in this paper are altered when this restriction is relaxed. When the chemical potential $\mu$ is at a Landau level (i.e.\ $n_f\equiv \mu/\hbar\omega_c-1/2=$ integer) we have, in the normal state, exact degeneracy between an electron state in the Landau level $n_f+m$ and a hole state in the Landau level $n_f-m$. Likewise, when the chemical potential is exactly in between two Landau levels such that $n_f=n+1/2$, there is degeneracy between an electron in a level $n_f+m+1/2$ and a hole in a level $n_f-m-1/2$. When this is the case we expect the possible non-perturbative effects of a finite order parameter to be strongest. We will show that the convergence radius for the Gor'kov equations is indeed smallest when $n_f$ is an integer.
To examine the validity of the Gor'kov expansion it is convenient to use the following expression~\cite{Bardeen} for the difference $\Omega_S - \Omega_N$ in the thermodynamic potential between the mixed state and the normal state: \begin{equation}\label{thermo} \Omega_S-\Omega_N=\frac{1}{g}\int d {\mathbf{r}} |\Delta({\mathbf{r}})|^2-2k_BT \sum_{N{\mathbf{k}}} \ln(\cosh(\beta E_{N {\mathbf{k}}}/2))+2k_BTD\sum_N \ln(\cosh(\beta \xi_N/2)) \end{equation} Here $D=\frac{VeB}{2\pi \hbar c}$ is the number of ${\mathbf{k}}$-vectors in the Brillouin zone, $V$ is the volume, $B$ is the magnetic field, and $\beta=1/k_BT$. \section{Zero temperature} To illustrate the origin of the non-perturbative effect for $T=0$ it is sufficient to examine the case when only one Landau level participates in the pairing and $n_f$ is an integer. In this case the positive energy solution to equation\ (\ref{BdG}) is $E_{n_f {\mathbf{k}}}= |F_{{\mathbf{k}}n_fn_f}|$. Equation\ (\ref{thermo}) reduces to: \begin{equation} E_{gS}-E_{gN}=\frac{1}{g}\int d {\mathbf{r}}|\Delta({\mathbf{r}})|^2-\sum_{{\mathbf{k}}}E_{n_f{\mathbf{k}}} \end{equation} Since $|F_{{\mathbf{k}}n_fn_f}|\propto \Delta_0 \propto |\Delta({\mathbf{r}})|$ we see that we obtain a linear term in $|\Delta({\mathbf{r}})|$ in equation\ (\ref{thermo}). This is a non-perturbative term since the Gor'kov expansion only contains even powers of the order parameter. This $T=0$ result is unaltered when we have many Landau levels participating in the pairing, and it agrees with the result obtained by Bahcall~\cite{Bahcall}. It is a trivial consequence of the fact that we have to take the $T\rightarrow 0$ limit $k_BT\ln(2\cosh(\beta E_{N {\mathbf{k}}}/2))\rightarrow E_{N {\mathbf{k}}}/2$ before we perturbatively expand the result in the size of the order parameter. \section{Finite temperature} \subsection{Quantum limit} For finite temperature the situation is different.
It is now possible to expand $\ln(2\cosh(\beta E_{N {\mathbf{k}}}/2))$ in powers of the order parameter and then check if we obtain any non-perturbative terms, as proposed by Bahcall~\cite{Bahcall}. For notational simplicity we will again do the calculation in the quantum limit when only one Landau level participates in the pairing. In section~\ref{several} we will treat the slight modifications in our result when more than one Landau level is within the pairing width. The quasiparticle energy is now $E_{{n\mathbf{k}}}=\sqrt{\xi_n^2+ |F_{{\mathbf{k}}nn}|^2}$. We need to expand $\ln(\cosh(\beta E_{{n\mathbf{k}}}/2))$ in $|F_{{\mathbf{k}}nn}|^2$. Writing $\beta E_{{n\mathbf{k}}}/2=\sqrt{\epsilon^2+z^2}$ where $\epsilon\equiv\beta\xi_n/2$ and $z=\beta|F_{{\mathbf{k}}nn}|/2$, we are led to consider the analytic properties of the function $\ln(\cosh(\sqrt{\epsilon^2+z^2}))$. The poles and branch cuts in the complex plane $z \in \mathcal{C}$ determine the convergence radius $r_0$ for a power series in $z$. A simple analysis gives: $r_0=\sqrt{\epsilon^2+\pi^2/4}$. The requirement for the convergence of a perturbation series for $\ln(2\cosh(\beta E_{N {\mathbf{k}}}/2))$ is then \begin{equation} \label{criterium} |F_{{\mathbf{k}}nn}| \leq \sqrt{\xi_n^2+\pi^2(k_BT)^2}\end{equation} This requirement is most restrictive when the Landau level is at the chemical potential ($\xi_n=0$). We then have \begin{equation} E_{{\mathbf{k}}}\leq k_BT\pi \end{equation} Furthermore, we see that there will appear only even powers of $|F_{{\mathbf{k}}nn}|$ in the series. This is true for general $\mu$ (i.e.\ also when $\xi_n=0$). So we have ruled out any non-perturbative cubic term in the expression for $\Omega_S-\Omega_N$, thereby disproving earlier predictions based on a numerical analysis~\cite{Bahcall}.
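To make the origin of this convergence radius explicit, note that $\cosh w$ vanishes at $w=i\pi(2m+1)/2$ for integer $m$, so that $\ln(\cosh(\sqrt{\epsilon^2+z^2}))$ has branch points where \begin{equation} \epsilon^2+z^2=-\frac{\pi^2(2m+1)^2}{4} \end{equation} The branch points closest to the origin ($m=0$) therefore lie at $|z|=\sqrt{\epsilon^2+\pi^2/4}$, which is precisely the convergence radius $r_0$ given above.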
Doing the expansion and comparing with a standard expression for the thermodynamic potential based on Gor'kov's equations~\cite{Bruun}, we find (not surprisingly) that it reproduces the Gor'kov series term by term. The convergence of the Gor'kov expansion is determined by equation (\ref{criterium}). It is now clear that the Gor'kov expansion is a high temperature series. So the breakdown of the theory for $T=0$ is no surprise. For finite $T$ we expect the Gor'kov series to first become unreliable when there is a Landau level at the chemical potential. This is because the requirement in equation (\ref{criterium}) is most restrictive when $\xi_n=0$, and because the superconductivity, and thereby the change in the quasiparticle energies (obtained by a self-consistent solution of equation\ (\ref{BdG})), is enhanced when there is a Landau level at the chemical potential~\cite{Bruun}. \subsection{Several Landau levels} \label{several} The above conclusions are essentially unaltered when there is more than one Landau level participating in the pairing. We calculate the quasiparticle energies from equation\ (\ref{BdG}) perturbatively in $F_{{\mathbf{k}}nm}$ using degenerate and non-degenerate perturbation theory. Then we expand $\ln(2\cosh(\beta E_{N {\mathbf{k}}}/2))$ in powers of the order parameter. The convergence radius for the series is again smallest when the chemical potential is at a Landau level. The only complication is that we obtain both even and odd powers of $F_{{\mathbf{k}}nm}$ in the expression for the quasiparticle energies. But the odd terms cancel in the expression for $\Omega_S-\Omega_N$ due to the fact that there are two quasiparticle levels when $\xi_n \neq 0$ for which the odd powers in the expression for the energy have opposite signs. There is only one positive energy solution for the case $\xi_{n_f}=0$ though.
However, the odd terms from this solution vanish in the expression for $\Omega_S-\Omega_N$ due to the fact that $\partial^{2l+1}_x \ln(\cosh(x))|_{x=0}=0$ where $l$ is an integer. A long, tedious calculation shows that we recover the standard terms in the Gor'kov series. A sufficient condition for the convergence of the Gor'kov series is \begin{equation} \label{criterium2} E_{n{\mathbf{k}}}-\xi_n \leq \min \left[ 2k_BT\pi,k_BT \sqrt{\beta^2\xi^2_n+\pi^2} \right]\end{equation} which has to hold for each quasiparticle level within the pairing region. So we expect that the Gor'kov theory breaks down when significant portions of the quasiparticle bands lie outside the regions defined in equation (\ref{criterium2}). It should be noted that one cannot ignore the contribution from higher quasiparticle levels to $\Omega_S-\Omega_N$ ($\xi_n \neq 0$). This is easily seen from equation\ (\ref{thermo}) since \begin{equation} \ln(\cosh(\beta(\xi+\delta E)/2))-\ln(\cosh(\beta\xi/2))>\ln(\cosh(\beta(\delta E)/2)) \ \ \ \xi,\delta E>0\end{equation} So any treatment which focuses only on the quasiparticle level at the chemical potential will ignore important contributions to $\Omega_S$. Equation\ (\ref{criterium2}) can be transformed into the requirement: \begin{equation} \langle|\Delta({\mathbf{r}})|^2\rangle\equiv\frac{1}{V}\int d{\mathbf{r}}|\Delta({\mathbf{r}})|^2 \leq 2\sqrt{\pi n_f}\pi^2(k_BT)^2\end{equation} Based on an extensive numerical analysis, Norman \textit{et al.}~\cite{Norman} have suggested a similar condition. Since the Ginzburg-Landau equations are derived from the microscopic BCS theory using the Gor'kov expansion~\cite{Parks}, it would be of interest to restate the above criterion in terms of Ginzburg-Landau parameters.
Doing this we obtain: \begin{equation}\frac{H_{c2}-H}{H_c(0)}\leq \sqrt{n_f}\pi^{1/2}(7\zeta(3))^{1/2}\beta_Ae^{\gamma} (\kappa-\frac{1}{2\kappa})\left(\frac{T}{T_c}\right)^2 \end{equation} Here $\zeta(x)$ is the Riemann zeta function, $\kappa$ is the Ginzburg-Landau parameter, $\beta_A$ is the Abrikosov parameter, and $\gamma$ is Euler's constant. This restriction is always fulfilled for type II superconductors within the normal range of validity of the Ginzburg-Landau equations (i.e.\ $|T-T_c|/T_c \ll 1$). As an example of the breakdown of the perturbation series we have plotted the order parameter $\Delta_0$ as a function of $n_f=\frac{\mu}{\hbar\omega_c}-1/2$. In figure\ 1 we have plotted both a numerically exact solution of the BdG-equations and the fourth order perturbative result using a method developed earlier~\cite{Bruun}. We have chosen the parameters such that $\omega_d/\omega_c=5$, $g/\hbar\omega_cl^2=8.2$ and $k_BT/\hbar\omega_c=0.3$ when $n_f=12$. As can be seen, the perturbation theory agrees fairly well with the exact solution. The perturbative result tends to differ the most from the exact solution when the chemical potential is at a Landau level ($n_f$ integer). This is in agreement with the above remarks. To illustrate the temperature dependence of the convergence radius of the Gor'kov series we have in figure\ 2 again plotted $\Delta_0$ as a function of $n_f$ for a very low temperature. As can be seen, the perturbation series breaks down much earlier, at $\Delta_0\simeq 1500$, for this low temperature, in agreement with equation (\ref{criterium2}). In figure\ 3 we have plotted the lowest quasiparticle level along a high symmetry direction in ${\mathbf{k}}$-space when $n_f=12$ for the parameters used in figure\ 2. The horizontal line gives the boundaries for $E_{{\mathbf{k}}}-\xi$ calculated from equation (\ref{criterium2}).
There are large parts of the quasiparticle band in ${\mathbf{k}}$-space lying outside the region of convergence of the perturbation expansion, thereby explaining the observed discrepancies between perturbation theory and the exact numerical result. \section{Conclusion} In conclusion, we have examined the recently debated validity of the Gor'kov expansion. The conclusion is that although the degeneracy of the normal-state levels strongly affects the quasiparticle wavefunctions even for weak superconducting order, these effects cancel in the expression for the thermodynamic potential, and the Gor'kov expansion is correct at finite temperature. We have therefore ruled out the possibility of a non-perturbative third-order term in the expression for the thermodynamic potential. The range of validity of the Gor'kov expansion is given by equation\ (\ref{criterium2}), which shows that it is essentially a high-temperature expansion. This requirement is always fulfilled within the range of validity of the Ginzburg-Landau equations, leading to no inconsistencies. Furthermore, we have the usual requirement $|E_{n{\mathbf{k}}}-\xi_n|\ll \hbar \omega_c$ for perturbation theory to work. We expect the Gor'kov expansion to break down first when the chemical potential is at a Landau level. Our results are confirmed by a comparison between the results of an exact numerical solution to the BdG-equations and a Gor'kov series to fourth order in the order parameter. \section{Acknowledgements} The authors would like to acknowledge EPSRC grant GR/K 15619 (VNN) and The Danish Research Academy (GMB) for financial support. \section*{References}
import os
import sys

import distutils.unixccompiler
import distutils.ccompiler
import distutils.sysconfig
import distutils.command.build_ext as build_ext
import distutils.dist as dist
from distutils import errors
from distutils import sysconfig

DEFAULT_COMPILERS = {
    "win32": [None, "mingw32"],
    "default": [None]
}

def _mingw32_cc():
    compiler_type = "mingw32"
    compiler = distutils.ccompiler.new_compiler(compiler=compiler_type)
    return compiler.compiler_so

def detect_distutils_cc(ctx):
    if sys.platform not in DEFAULT_COMPILERS:
        plat = "default"
    else:
        plat = sys.platform
    sys.stderr.write("Detecting distutils C compiler... ")
    compiler_type = distutils.ccompiler.get_default_compiler()
    if sys.platform == "win32":
        if compiler_type == "msvc":
            try:
                compiler = distutils.ccompiler.new_compiler(compiler="msvc")
                compiler.initialize()
                cc = [compiler.cc]
            except errors.DistutilsPlatformError:
                cc = _mingw32_cc()
        else:
            cc = _mingw32_cc()
    else:
        cc = distutils.sysconfig.get_config_var("CC")
        # FIXME: use shlex for proper escaping handling
        cc = cc.split()
    sys.stderr.write("%s\n" % compiler_type)
    return cc

# XXX: unixccompiler instances are the only classes where we can hope
# to get semi-sensical data. Reusing them also makes transition easier
# for packagers, as their compilation options will be reused.
# OTOH, having a sane tooling system will make customization much
# easier.

def get_configuration(compiler_type=None):
    plat = os.name
    if compiler_type is None:
        compiler_type = distutils.ccompiler.get_default_compiler(plat)
    if compiler_type not in distutils.ccompiler.compiler_class:
        raise ValueError("compiler type %s is not recognized" % compiler_type)
    env = {"CC": [], "CPPPATH": [], "BASE_CFLAGS": [], "OPT": [],
           "SHARED": [], "CFLAGS": [], "SHLINK": [], "LDFLAGS": [],
           "LIBDIR": [], "LIBS": [], "SO": ""}
    env["CPPPATH"].append(sysconfig.get_python_inc())
    if compiler_type == "unix":
        env["CC"].extend(sysconfig.get_config_var("CC").split(" "))
        env["BASE_CFLAGS"].extend(sysconfig.get_config_var("BASECFLAGS").split(" "))
        env["OPT"].extend(sysconfig.get_config_var("OPT").split(" "))
        env["SHARED"].extend(sysconfig.get_config_var("CCSHARED").split(" "))
        env["SHLINK"] = sysconfig.get_config_var("LDSHARED").split(" ")
        env["SO"] = sysconfig.get_config_var("SO")
        env["LDFLAGS"] = sysconfig.get_config_var("LDFLAGS").split()
        if "-pthread" in sysconfig.get_config_var("LDFLAGS"):
            env["LDFLAGS"].insert(0, "-pthread")
        env["CFLAGS"].extend(sysconfig.get_config_var("CFLAGS").split(" "))
        env["FRAMEWORKS"] = []
        setup_unix(env)
    elif compiler_type == "msvc":
        setup_msvc(env)
    elif compiler_type == "mingw32":
        setup_mingw32(env)
    else:
        raise ValueError("unsupported compiler type %s" % compiler_type)
    return env

def _get_ext_library_dirs():
    binst = build_ext.build_ext(dist.Distribution())
    binst.initialize_options()
    binst.finalize_options()
    return binst.library_dirs

def _get_ext_libraries(compiler):
    binst = build_ext.build_ext(dist.Distribution())
    binst.compiler = compiler
    binst.initialize_options()
    binst.finalize_options()
    class _FakeExt(object):
        def __init__(self):
            self.libraries = []
    return binst.get_libraries(_FakeExt())

def setup_unix(env):
    if sys.platform == "darwin":
        env["LDFLAGS"].extend(["-bundle", "-undefined", "dynamic_lookup"])

        def _strip_arch(flag):
            value = env[flag]
            while "-arch" in value:
                id = value.index("-arch")
                value.pop(id)
                value.pop(id)
            return value

        for flag in ["BASE_CFLAGS", "LDFLAGS"]:
            env[flag] = _strip_arch(flag)

def setup_msvc(env):
    compiler = distutils.ccompiler.new_compiler(compiler="msvc")
    compiler.initialize()
    env["CC"] = compiler.cc
    env["BASE_CFLAGS"].extend(compiler.compile_options)
    env["SHLINK"] = compiler.linker
    env["SO"] = ".pyd"
    env["LDFLAGS"] = compiler.ldflags_shared
    env["LIBDIR"].extend(_get_ext_library_dirs())

def setup_mingw32(env):
    compiler = distutils.ccompiler.new_compiler(compiler="mingw32")
    env["CC"] = ["gcc"]
    env["BASE_CFLAGS"].extend(["-mno-cygwin"])
    env["SHLINK"] = ["gcc", "-mno-cygwin", "-shared"]
    env["SO"] = ".pyd"
    #env["LDFLAGS"] = compiler.ldflags_shared
    env["LIBDIR"].extend(_get_ext_library_dirs())
    libs = _get_ext_libraries(compiler)
    libs += compiler.dll_libraries
    env["LIBS"].extend(libs)
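To make the role of the env dictionary concrete, here is a hypothetical helper (not part of the original module) showing how a dictionary shaped like the one returned by get_configuration() could be assembled into a compile command line; the key names match those above, but the flag values are made up:

```python
# Hypothetical helper (not in the original module): turn an env dictionary
# shaped like the one built by get_configuration() into a compile command.
def build_compile_command(env, source, obj):
    cmd = list(env["CC"]) + env["BASE_CFLAGS"] + env["OPT"] + env["SHARED"]
    cmd += ["-I" + p for p in env["CPPPATH"]]
    cmd += ["-c", source, "-o", obj]
    return cmd

# Made-up values, roughly what a unix configuration might contain.
example_env = {
    "CC": ["gcc"], "BASE_CFLAGS": ["-fno-strict-aliasing"], "OPT": ["-O2"],
    "SHARED": ["-fPIC"], "CPPPATH": ["/usr/include/python2.6"],
}
print(" ".join(build_compile_command(example_env, "foo.c", "foo.o")))
```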
import mongoose from 'mongoose';

let MarkedSchema = new mongoose.Schema({
    userId: String,
    name: String,
    markedUserId: String,
    markedUserName: String,
    markedUserAvatar: String
});

module.exports = MarkedSchema;
\section{Introduction} Most ways of measuring the Hubble constant involve a form of distance ladder, which utilizes a number of astrophysical standard candle and standard ruler relations, and is calibrated locally by a geometrical technique such as parallax (e.g., Madore et al.\ 1999, Madore et al.\ 1998, Kennicutt 1995). A recent exciting development in this field is to extend the reach of the geometrical rung of the distance ladder by using masers in orbit around galaxy centers to get distances to nearby galaxies, thus bypassing Cepheids (Herrnstein et al.\ 1999). A few methods involve no distance ladder: good examples are (i)~inferring the distance of Type~II supernovae from their light curves and spectra by modeling their expanding photospheres (Schmidt et al.\ 1992), and (ii)~comparing the $H_0$-independent angular extent of galaxy clusters to their $H_0$-dependent depth as deduced from the X-ray emission and the Sunyaev-Zeldovich microwave background decrement due to the cluster (Hughes \& Birkinshaw 1998). But the most `one-step' method of all was proposed by S. Refsdal in 1964, though it has only recently become feasible. The principle of Refsdal's method is simple. In a system where a high-redshift QSO is split into multiple images by an intervening galaxy lens, the difference in light travel time between different images (observable as time delays if the QSO is variable) is proportional to the scale factor of the universe. The time delay is given by the schematic formula \begin{eqnarray} \<Time delay> & = & h^{-1} \times \<1 month> \times \<image separation in arcsec>^2 \nonumber\\ & & \times z_{\rm lens}\times\<weak dependence on $z_{\rm lens}$, $z_{\rm QSO}$, and cosmology> \\ & & \times \<lens-mass-distribution dependent factor> \nonumber \label{schem_eq} \end{eqnarray} where the last two factors are of order unity.
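As a back-of-the-envelope illustration (ours, not from the text), the schematic formula can be sketched in a few lines once the two order-unity factors are dropped; the numbers below are illustrative only:

```python
# Rough sketch of the schematic time-delay formula above, dropping the two
# order-unity factors (cosmology and lens mass distribution).
def schematic_delay_months(h, separation_arcsec, z_lens):
    return (1.0 / h) * (separation_arcsec ** 2) * z_lens

# e.g. h = 0.6, a 2-arcsec system with the lens at z_lens = 0.3:
dt = schematic_delay_months(0.6, 2.0, 0.3)  # ~ 2 months
```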
To obtain $H_0$ using this method one requires three types of input: (i)~the observed time delay(s) between QSO images, (ii)~knowledge of the cosmology, and (iii)~the mass distribution in the lensing galaxy. The first can be, and has been, measured with increasing precision for about eight systems so far. The second is not a serious problem, because the dependence on cosmology is weak and the uncertainty due to it is easy to quantify; in this paper we will refer all results to the Einstein-de Sitter cosmology. The uncertainty in $H_0$ is dominated by the third item; the usable constraints on the mass distribution in the galaxy are few, while the range of possible distributions is huge. Thus, mass distribution is the major source of uncertainty. Two different paths can be taken to compensate for our lack of knowledge about the galaxy. One is to assume an exact parametric form for the galaxy mass distribution and fit the observed lensing properties as well as possible; the other is to take the image properties as exact, and try to reconstruct the galaxy mass map as well as possible. Single parametric models fix the last term in (\ref{schem_eq}) and thus cannot account for the uncertainty resulting from it. Blandford \& Kundi\'c (1996) advise that even if one finds a parametric galaxy model which is dynamically possible and which reproduces the image properties with acceptably low $\chi^2$, one still has to `aggressively explore all other classes of models' to get the true uncertainty in $H_0$. To explore the model space in a systematic fashion one needs to use a representation of the galaxy mass distribution that is general and not restricted to a particular form. One way would be to expand the mass distribution using a set of basis functions, another is to pixelate the galaxy and take each pixel as an independent mass element.
We introduced pixelated models in Saha \& Williams (1997, hereafter SW97) but at that time did not have any strategy for searching model space. We have now extended that work to explore the model space with the goal of estimating the uncertainty in the derived value of $H_0$. The plan of this paper is as follows. In Section~\ref{obs} we summarize the observational situation with regard to strongly lensed QSOs. In Section~\ref{formalism} we present the general lensing formalism and point out a few properties of the lensing equations that are useful in interpreting the results of modeling. We also explain the reasons for confining our analysis to PG1115+080 and B1608+656 for now. Sections~\ref{method} and ~\ref{blind} describe our method for deriving $H_0$ and test it on a synthetic sample via a blind test. Application to the real systems can be found in Section~\ref{real}. Section~\ref{summary} discusses our results. \section{Observed Time-Delay Lenses}\label{obs} The first piece of input for $H_0$ determination is the measurement of time delays between the various QSO images. At the present time, ten multiply-imaged QSOs already have measured time delays or are being monitored: Q0957+561 (Kundi\'c et al. 1997a), PG1115+080 (Schechter et al. 1997, Barkana 1997), B1608+656 (Fassnacht et al. 1999), B0218+257 (Biggs et al. 1999), PKS 1830-211 (Lovell et al. 1998), HE 1104-1805 (Wisotzki et al. 1998), B1030+074, B1600+434 (Burud et al. 1999), J1933+503, and RXJ0911+0551 (Hjorth et al. 1999). In this work we limit ourselves to 4-image lenses with known source and lens redshifts and accurate time delay measurements; PG1115+080 and B1608+656 fit the description. PG1115 (Weymann et al. 1980) was the second lens to be discovered. The source is a radio-quiet QSO at $z_s=1.722$. Accurate positions for the images were measured by Kristian et al. (1993); lightcurves were analyzed by Schechter et al. (1997), and time delays derived by Schechter et al. and Barkana (1997). 
The main lensing galaxy is an outlying member of a small galaxy group, at $z_l=0.311$, with an estimated line-of-sight velocity dispersion of $270\pm70$km~s$^{-1}$ (Kundi\'c et al. 1997b). A summary of observational results on this system can be found in SW97. B1608 was discovered in the Cosmic Lens All-Sky Survey (Myers et al. 1995, Myers et al. 1999). The lens is either a perturbed single galaxy or a merging/interacting pair of galaxies superimposed in the plane of the sky. The source and lens redshifts are 1.394 and 0.630, respectively. The time delays were recently reported by Fassnacht et al. (1999) based on VLA observations spanning 7 months. The time delays we use in this work are an earlier determination (Fassnacht, private communication), and are less than 0.5$\sigma$ away from the values quoted in Fassnacht et al. (1999); $\Delta t_{BA}=28.5$, $\Delta t_{BC}=32$, and $\Delta t_{BD}=77$ days. \section{Lensing formalism}\label{formalism} A photon traveling through a galaxy will take longer to arrive at the observer than an unimpeded photon. Part of the time delay occurs because the path of the ray bundle makes a detour rather than going straight; the time delay is further increased because the photon travels through the gravitational potential well of the galaxy.
The total time delay is given by \begin{equation} \tau(\hbox{\bmit\char"12},\hbox{\bmit\char"12}_{\rm s})=(1+z_{\rm l}) {D_{\rm ol}D_{\rm os}\over D_{\rm ls}} \left[\hbox{$1\over2$}(\hbox{\bmit\char"12}-\hbox{\bmit\char"12}_{\rm s})^2- {1\over \pi}\int\!d^2\hbox{\bmit\char"12}'\,\kappa(\hbox{\bmit\char"12}') \ln|\hbox{\bmit\char"12}-\hbox{\bmit\char"12}'| \right] \label{tau_eq} \end{equation} where $\hbox{\bmit\char"12}$ is the position on the sky, $\hbox{\bmit\char"12}_{\rm s}$ is the source position, the $D$'s are the angular diameter distances between the source, the lens and the observer, $z_{\rm l}$ is the redshift of the lens galaxy, and $\kappa(\hbox{\bmit\char"12})$ is the projected mass density in the galaxy in units of $\Sigma_{\rm crit}=(c^2/4\pi G)(D_{\rm os}/D_{\rm ls}D_{\rm ol})$. If the lens mass distribution $\kappa(\hbox{\bmit\char"12})$ is known then the arrival time surface, Eq.\ (\ref{tau_eq}), provides us with all the necessary information about the images. The time delay between any two images is just the difference between $\tau$ at the relevant locations. According to Fermat's Principle the images appear at stationary points of the arrival time surface, \begin{equation} {\partial\tau\over\partial\hbox{\bmit\char"12}} = 0 = \hbox{\bmit\char"12}-\hbox{\bmit\char"12}_{\rm s}-{1\over\pi}\int\!d^2\hbox{\bmit\char"12}'\kappa(\hbox{\bmit\char"12}') {\hbox{\bmit\char"12}-\hbox{\bmit\char"12}' \over |\hbox{\bmit\char"12}-\hbox{\bmit\char"12}'|^2} \label{alpha_eq} \end{equation} Image distortion and magnification are given by the inverse of the curvature matrix of the arrival time surface \begin{equation} \left[{\partial^2\tau\over\partial\theta_i\partial\theta_j}\right]^{-1} \end{equation} A few things can be learned by looking at the arrival time and lens equations: (1) The time ordering of the images can be deduced from the image configuration using the morphological properties of the arrival time surface.
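To make Fermat's Principle concrete, here is a toy one-dimensional point-mass version of the arrival-time surface (an illustration we assume for this sketch, not the pixelated model of the paper): images sit where $d\tau/d\theta=0$, and the time delay is the difference of $\tau$ between the two stationary points.

```python
import math

# Toy 1-D point-mass lens in dimensionless units:
#   tau(theta) = 0.5*(theta - theta_s)**2 - m*ln|theta|
# Stationary points (images) solve theta - theta_s - m/theta = 0,
# i.e. theta**2 - theta_s*theta - m = 0 (one image on each side of the lens).
def images(theta_s, m=1.0):
    disc = math.sqrt(theta_s ** 2 + 4.0 * m)
    return (theta_s + disc) / 2.0, (theta_s - disc) / 2.0

def tau(theta, theta_s, m=1.0):
    return 0.5 * (theta - theta_s) ** 2 - m * math.log(abs(theta))

def toy_time_delay(theta_s, m=1.0):
    t1, t2 = images(theta_s, m)
    return tau(t2, theta_s, m) - tau(t1, theta_s, m)
```

In this toy model the image on the far side of the lens center lags the one on the source side, in line with the time-ordering remarks in the text.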
The image furthest from the lensing galaxy is always first, and the one nearest the galaxy the last. In four-image QSOs the second image is the one opposite the first. Figure \ref{morph_fig} illustrates. (2) When four images are formed by an isolated galaxy of typical ellipticity the images are located nearly at the same galactocentric distance. This is easy to see by considering the two pieces of the arrival time surface. If the source and the center of the galaxy are not well aligned, i.e., if the `bump' due to the gravitational potential contribution is away from the `well' of the geometrical contribution, then the steepness of the geometrical part allows only two images to form, one roughly on either side of the galaxy center. To get four images, the bump of the gravitational contribution must be centered close to the source location. In such a situation the total arrival time configuration is centrally symmetric and the resulting images are approximately equidistant from the galaxy center. (3) If the four images of a single source are located at different galactocentric distances the simplest explanation is the presence of external shear. External shear effectively raises the gravitational part of the arrival time surface closest to itself (see Fig.\ \ref{morph_fig} and Eq.\ \ref{tau_eq}). The effect is to push the locations of the stationary points away from the source of external shear, hence increasing the radial spread of images. It follows that the direction of external shear can be determined by examining image locations with respect to the galaxy center. PG1115 is a good example; the image closest to the galaxy center, image B, is located between the galaxy-lens and the galaxy group, which is the source of external shear in this case. (4) Position angles (PA) of images are determined by the ellipticity PA of the galaxy roughly at the radius of the images. 
When images are spread over a range of radial distances their PA provide information on galaxy ellipticity PA over a range of galactocentric distances. Thus detailed modeling can reveal the twisting of the isodensity contours. (5) Not all types of information about images are equally useful for modeling purposes. The arrival time surface integrates over $\kappa(\hbox{\bmit\char"12})$ twice, making time delays most sensitive to the overall mass distribution in the galaxy, and least dependent on the local small-scale perturbations in the mass distribution. Image positions are determined from the lensing equation which integrates over $\kappa(\hbox{\bmit\char"12})$ once. Finally, image magnifications are very dependent on the local behavior of mass, making them the least useful for modeling. This means, unfortunately, that a double like Q0957, though it has well-measured substructure in the images and near-perfect time-delay measurements, provides too few constraints on the lensing mass to usefully estimate $H_0$ unless drastic assumptions about the mass distribution are made. In that case, the derived errors will tend to be underestimated as was noted by Bernstein and Fischer (1999) who constructed many types of parametric models for Q0957: `The bounds on $H_0$ are strongly dependent on our assumptions about a ``reasonable'' galaxy profile'. (6) A linear rescaling of the arrival time and lens equations, i.e., multiplying both by a constant factor $\epsilon$ will not alter the observable properties of images, image separations and relative magnification tensors. Physically the transformation amounts to rescaling the mass density of the lens by $\epsilon$ and adding a constant mass density sheet. This transformation was first discussed by Gorenstein et al.\ (1988) with regard to modeling of Q0957, and later became known as the mass sheet degeneracy. 
Note that a mass sheet extending to infinity is not needed; a mass disk larger than the observed field is enough, because an external monopole has no observable effect. \section{The method}\label{method} The first step is to pixelate the lens plane mass distribution of the main lensing galaxy. In practice we use $\sim0.1''$ pixels, and limit the galaxy to a circular window of radius about twice that of the image-ring. Pixelated versions of Eqs.\ (\ref{tau_eq}) and (\ref{alpha_eq}) are: \begin{equation} \tau(\hbox{\bmit\char"12},\hbox{\bmit\char"12}_{\rm s}) = (1+z_{\rm l}){D_{\rm ol}D_{\rm os}\over D_{\rm ls}} \left[ \hbox{$1\over2$}|\hbox{\bmit\char"12}|^2 - \hbox{\bmit\char"12}\cdot\hbox{\bmit\char"12}_{\rm s} -\sum_n \kappa_n \psi_n(\hbox{\bmit\char"12}) \right] \label{pixtau_eq} \end{equation} and \begin{equation} \hbox{\bmit\char"12} - \hbox{\bmit\char"12}_{\rm s} - \sum_n \kappa_n\vec\alpha_n(\hbox{\bmit\char"12}) = 0, \label{pixalpha_eq} \end{equation} where the summation is over mass pixels and $\psi_n$ and $\alpha_n$ are integrals over individual pixels and can be evaluated analytically (see Appendix of SW97). A term $|\hbox{\bmit\char"12}_{\rm s}|^2$ has been omitted from Eq.\ (\ref{pixtau_eq}) because a constant additive factor in the arrival time cannot be measured. Image properties translate into linear constraints in the $(N\!+\!2)$-dimensional model space, where $N$ dimensions represent a pixel each and 2 represent source coordinates. We call these primary constraints. The images can provide us with only a few constraints: in a 4-image system we have $2\times 4$ coordinates and 3 time delay ratios: 11 in all. On the other hand, the unknowns are numerous, $\sim20^2$ mass pixels plus 2 source coordinates. This results in a plethora of galaxy models each of which reproduces the image properties exactly. Luckily, the bulk of these models can be discarded because they do not look anything like galaxies.
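The linearity of the pixelated lens equation in $(\kappa_1,\ldots,\kappa_N,\hbox{\bmit\char"12}_{\rm s})$ is what makes the machinery tractable. A minimal sketch of how one observed image position yields two linear constraint rows (with a made-up deflection table, not the analytic pixel integrals of the Appendix of SW97):

```python
# Minimal sketch: the pixelated lens equation
#   theta - theta_s - sum_n kappa_n * alpha_n(theta) = 0
# rearranged as  sum_n kappa_n * alpha_n + theta_s = theta,
# i.e. two rows linear in the unknowns (kappa_1..kappa_N, theta_s_x, theta_s_y).
def lens_constraint_rows(theta, alpha):
    """theta: (x, y) image position; alpha: list of (ax, ay), the deflection
    of each unit-mass pixel evaluated at theta (made-up numbers here)."""
    row_x = [a[0] for a in alpha] + [1.0, 0.0]
    row_y = [a[1] for a in alpha] + [0.0, 1.0]
    return [row_x, row_y], [theta[0], theta[1]]

def dot(row, unknowns):
    return sum(r * u for r, u in zip(row, unknowns))
```

Each image adds two such rows; the time-delay ratios add rows of the same linear form built from Eq.\ (\ref{pixtau_eq}).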
In fact, we consider only those models which satisfy the following further (linear) constraints, which we call secondary. These pertain to the main lensing galaxy: \begin{enumerate} \item mass pixel values $\kappa_n$ must be non-negative; \item the location of the galaxy center is assumed to be coincident with that of the optical/IR image; \item the density gradient of the lens must point no more than $45^\circ$ away from the center of the galaxy; \item the lens must have inversion symmetry, i.e., look the same if rotated by $180^\circ$ [enforced only if the main lensing galaxy appears unperturbed and has no companions close to the QSO images]; \item the logarithmic projected density gradient in the vicinity of the images, $d\ln\kappa/d\ln\theta={\rm ind}(r)$, should be no shallower than $-0.5$. For a power-law projected density profile, radial magnification at an image is equal to $-1/{\rm ind}(r)$, therefore a statement that ${\rm ind}(r)<-0.5$ means that images are magnified radially by less than a factor of 2, which is probably a reasonable limit given the appearance of optical Einstein rings seen in some systems, for example, PG1115 and B0218; \item external shear, i.e., the influence of mass other than the main lensing galaxy, is restricted to be constant across the image region, and is represented by adding a term $\hbox{$1\over2$}\gamma_1(\theta_1^2-\theta_2^2)+\gamma_2\theta_1\theta_2$ to the lensing potential, Eq.\ (\ref{tau_eq}). \end{enumerate} All these constraints are non-restrictive and are obeyed by the vast majority of galaxies, thus our analysis explores the widest possible range of galaxy mass distributions. Obviously, the primary and secondary constraints are not enough to isolate a unique galaxy mass solution. A unique solution can be singled out by further specifying galaxy properties.
For example, in SW97 particular galaxy models were found by taking a trial value of $H_0$ as one of the primary constraints, and then picking the model that followed the observed light distribution as closely as possible given the rigid primary and secondary constraints; see Figures 2--5 of SW97. Here our aim is different. Any of the infinitely many models remaining after the primary and secondary constraints have been applied could be the real lens, as all of them reproduce the image properties exactly and all look reasonably like galaxies, therefore any one of the corresponding derived $H_0$'s could be the real $H_0$. We want to produce an ensemble that samples this model space, and our procedure is as follows. The allowed models form a simplex in the $(N\!+\!2)$-dimensional space of mass pixels and source positions, because the constraints are all linear. We start with a random point in the allowed simplex (i.e., an allowed model). Next we choose a random vertex of that simplex, which is easily done by linear programming. Then we consider the line joining the current point with the vertex, and move to a random point on it, taking care to remain inside the simplex. The process is repeated until a sample of 100 models, and hence 100 $H_0$ values, is assembled. This procedure is a trivial case of the Metropolis algorithm (see e.g., Binney et al.\ 1992) for sampling density functions in high-dimensional spaces. The resulting ensemble of $H_0$ values has a straightforward interpretation in terms of Bayesian probabilities. The part of model space allowed by the secondary constraints is the prior (i.e., possibilities allowed before considering the data). Our prior is uniform, which is to say that we have not incorporated any prior preferences between different models allowed by the secondary constraints.
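The vertex-walk sampler described above can be sketched on the standard simplex, where the vertices are simply the unit basis vectors (in the paper the simplex is carved out by the linear primary and secondary constraints, and the vertices come from linear programming):

```python
import random

# Toy version of the sampling walk: start inside the simplex, repeatedly pick
# a random vertex and move a random fraction of the way toward it. Every step
# is a convex combination, so the walk never leaves the simplex.
def sample_simplex_walk(dim, n_samples, seed=0):
    rng = random.Random(seed)
    x = [1.0 / dim] * dim               # centroid: an interior starting point
    samples = []
    for _ in range(n_samples):
        v = rng.randrange(dim)          # random vertex e_v
        t = rng.random()                # random point on the joining line
        x = [(1.0 - t) * xi + (t if i == v else 0.0) for i, xi in enumerate(x)]
        samples.append(list(x))
    return samples
```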
Since the unknowns $\kappa_n$ occur linearly in Eqs.~(\ref{pixtau_eq}) and (\ref{pixalpha_eq}), a uniform prior means that any linear interval in $\kappa_n$ is a priori as probable as any other interval of equal length. The primary constraints come from data, and the 100 models that satisfy both primary and secondary constraints sample the posterior probability distribution. At the present time there is no clear motivation to use any other but a uniform prior; however, a non-uniform prior, if desired, would modify the method only slightly: one could either keep the same 100 models but weight them according to the prior, or take the prior into account while choosing the models through the Metropolis prescription. \section{Blind tests of the method}\label{blind} Before applying the method to real systems we try it on a synthetic situation designed to resemble the real world as closely as possible. One of us, ``Person A'', picked an $h$ value and created a set of four galaxies and the corresponding images of a single background source in each case. Exact values of image positions with respect to the galaxy center and time delays (but not $h$, nor information as to whether the galaxy was inversion symmetric or if there was any external shear) were conveyed to the other one of us, ``Person B'', who used this information to construct an ensemble of galaxy models and derive $h$ distributions for each case separately. We ran the whole experiment several times to remove bugs, and did not want to fall into the trap of simply publishing the results of the best run. So once we were confident that the experiment worked, we decided that the next four galaxies, whatever the results, would go into the published paper. Figure~\ref{cartoon_fig} pictorially illustrates the three stages of our blind test.
Person B applied the reconstruction method to each system twice, once with the assumption of inversion symmetry (i.e., symmetric galaxies, see item 4 in Section~\ref{method}), and once without. Based on the appearance of the reconstructed mass distribution, Person B decided whether the inversion-symmetry constraint was right in each case. Figures~\ref{mass19}--\ref{h22} present the results for each of the four galaxies. For galaxies \#1, 3 and 4 Person B picked the symmetric option, and the asymmetric option for galaxy \#2. Panels (a) and (b) of Figures~\ref{mass19}, \ref{mass20}, \ref{mass21}, and \ref{mass22} show the actual projected density distribution and the average of the 100 reconstructed galaxies, for galaxies \#1, 2, 3, and 4 respectively. In a map which is an {\it average} of many reconstructions, persistent features of individual maps are enhanced while peculiarities are washed out, so the average is a reasonable guess as to what the real galaxy looks like, in a probabilistic sense. Panels (a) of Figures~\ref{h19}, \ref{h20}, \ref{h21}, and \ref{h22} plot the slope of the density profile, ${\rm ind}(r)$, vs. derived $h$. The `real' value of $h$ is 0.025. In all the cases the slope of the density profile ${\rm ind}(r)$ in the vicinity of the images correlates with the derived $h$ value, though the degree of correlation and its slope is not universal. Qualitatively, the reason for the correlation is easily understood. A relatively flat galaxy density profile, i.e., $|{\rm ind}(r)|$ small, translates into a flat gravitational contribution to the arrival time surface, and `fills' the well of the geometrical time delay contribution evenly, resulting in small fluctuations in the amplitude of the total arrival time surface. Thus the predicted time delays between images will be small, and to keep the observed time delays fixed the derived $h$ has to be small as well.
Panels (b) of Figures~\ref{h19}, \ref{h20}, \ref{h21}, and \ref{h22} show the derived $h$ probability distribution. These distributions look different for all galaxies, because galaxy morphologies are different. Since the four probability distributions are independent, each based on its own galaxy, the overall distribution is just the product of the four; see Figure~\ref{phcakes_fig}. The solid histogram is the product of the four distributions presented in panels (b) of Figures~\ref{h19}, \ref{h20}, \ref{h21}, and \ref{h22}. The dashed histogram is similar, but results from Person B excluding what appeared to be the best-constrained galaxy (\# 3), and the dotted histogram represents the case where inversion symmetry was not applied to any of the systems. All three resultant distributions recover $h$ fairly well, with $90\%$ of the models contained within $20\%$ of the true $h$. However, the distributions are not the same; the most probable values are different by $\sim 10\%$. This illustrates how a relatively minor feature in modeling constraints, namely exclusion or inclusion of inversion symmetry, can make a considerable difference in the estimated $h$ value when the goal is to achieve precision of $10\%$. Based on this observation we conclude that assumed galaxy shape in parametric reconstructions plays a major role in determining the outcome of the $H_0$ determination. How robust are the results to changes in other modeling assumptions? Changing the pixel size by a factor of $\sim 1.5$, and relaxing the mass-gradient angle constraint (item 3 in Section~\ref{method}), do not change our results considerably. \section{Application to real systems}\label{real} \subsection{PG1115}\label{PG1115} Figure~\ref{four_1115} shows the results of the reconstruction for PG1115. Since the main lensing galaxy has no close companions and its light profile is smooth, we have included inversion symmetry as one of the modeling constraints.
The average of 100 arrival time surfaces is shown in Figure~\ref{four_1115}(a); Figure~\ref{four_1115}(b) shows the corresponding caustics and critical lines. The latter are not as smooth as the former because locations of caustics and critical lines are derived using the gradients of the arrival time surface, which are always noisier than the original function. Panels (c) and (d) plot the quantity $\hbox{\bmit\char"12}\cdot\hbox{\bmit\char"12}_{\rm s}-\sum_n \kappa_n \psi_n(\hbox{\bmit\char"12})$, and the total arrival time surface, respectively. The plot of the modified gravitational potential, (c), illustrates the effect of external shear, which is due to a galaxy group to the lower right of the main galaxy. Because ${\rm ind}(r)$ has been measured for the main lensing galaxy in PG1115, the relation between profile slope and derived $H_0$, Figure~\ref{two_1115}(a), can be used to derive an upper limit on $H_0$. Impey et al. (1998) fit the galaxy light with a de Vaucouleurs profile of an effective radius $r_e=0.59''$. At the location of the images, about $1.3''$ from the galaxy center, the double-logarithmic density slope is ${\rm ind}(r)=-2.3$. Assuming that the mass profile can only be shallower than the light profile, and consulting Figure~\ref{two_1115}(a), we place an upper limit on $H_0$ of 75$\rm\,km\,sec^{-1}\,Mpc^{-1}$. If the true mass density profile slope is isothermal the corresponding $H_0$ is 30$\rm\,km\,sec^{-1}\,Mpc^{-1}$. A low value of $H_0$ was also obtained by parametric models that assumed isothermal models for the galaxy (Schechter et al.\ 1997). In the blind test, Section~\ref{blind}, we assumed that all time delays are known precisely, which is not currently the case for any of the systems except Q0957. What effect does an error in time delay determination have on the derived $H_0$? Figure~\ref{two_1115}(b) shows two distributions derived using two different $\Delta t$ determinations based on the same lightcurves.
There is a $20\%$ difference in the most probable value of $H_0$ in the two histograms, but overall they are not very different. Both distributions are very broad; $90\%$ of the models span the range between 30 and 75$\rm\,km\,sec^{-1}\,Mpc^{-1}$. Figure~\ref{ring_1115_fig} shows a densely contoured version of the arrival time surface. The regions of the plot where the contours are sparse are the flattest, i.e., the most `stationary' regions of the lens plane. This is where one would expect to find images of sources placed close to the main source. For example, if the point-like QSO is surrounded by a host galaxy, the image of that galaxy will be well delineated by these `empty' regions. In fact, the observed optical ring in the case of PG1115 is well reproduced by the ring in Figure~\ref{ring_1115_fig}. \subsection{B1608} The light distribution of the lensing system is rather messy, possibly representing a merging/interacting galaxy pair; therefore, inversion symmetry was not used in the following reconstructions. Figures~\ref{four_1608} and \ref{two_1608} are the same as Figures~\ref{four_1115} and \ref{two_1115}, but for B1608. The range 50 to 100$\rm\,km\,sec^{-1}\,Mpc^{-1}$ in Figure~\ref{two_1608}(b) encompasses about 90$\%$ of the reconstructions. \subsection{Combined p(h) plot} As in the blind test, we now multiply the probability distributions from PG1115 and B1608 to get the combined distribution, Figure~\ref{phreal_fig}. $90\%$ of all points lie within the range 43--79$\rm\,km\,sec^{-1}\,Mpc^{-1}$, while the median of the distribution is 61$\rm\,km\,sec^{-1}\,Mpc^{-1}$. Note that the errorbars obtained using our method are substantially larger than those usually quoted in other studies. We ascribe this increase to the more systematic sampling of the whole image-defined model space unrestricted by the confines of parametric models.
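The bin-by-bin multiplication of independent per-system distributions can be sketched in a few lines. The two input histograms below are synthetic Gaussian stand-ins, not the measured PG1115 and B1608 distributions, so only the procedure, not the numbers, carries over:

```python
import numpy as np

# Sketch of the multiplicative combination of independent p(h) distributions.
# The two input histograms are synthetic Gaussian stand-ins, NOT the measured
# PG1115 and B1608 distributions; only the procedure carries over.
h = np.linspace(0.3, 1.0, 71)                      # h = H0 / (100 km/s/Mpc)
p_sys1 = np.exp(-0.5 * ((h - 0.55) / 0.15) ** 2)   # broad, peaked low
p_sys2 = np.exp(-0.5 * ((h - 0.70) / 0.12) ** 2)   # broad, peaked high

p_comb = p_sys1 * p_sys2    # independent constraints multiply
p_comb /= p_comb.sum()      # renormalize

# Median and central 90% interval from the cumulative distribution.
cdf = np.cumsum(p_comb)
median = h[np.searchsorted(cdf, 0.5)]
lo, hi = h[np.searchsorted(cdf, 0.05)], h[np.searchsorted(cdf, 0.95)]
print(median, (lo, hi))
```

The combined histogram is narrower than either input, which is why multiplying systems tightens the derived $H_0$ interval.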
\section{Discussion and Conclusions}\label{summary} Multiply-imaged QSO systems provide us with an elegant way of measuring $H_0$, and a lot of observational and modeling effort has been invested in this enterprise. As the quality of the observational data improves, most of the uncertainty in $H_0$ is contributed by the mass distribution in the lens. How to treat this problem is a matter of some debate. Should one use a single, physically motivated mass model, or should one approach the problem with no preconceptions about the galaxy form? In general, ad hoc restrictions on the allowed mass models translate into an overly optimistic and probably biased estimated distribution of $H_0$'s. To avoid this trap one has to allow as much freedom for the lens models as possible. On the other hand, to yield a useful estimate of $H_0$ one has to restrict the amount of freedom allowed for the models using physically motivated criteria. Ideally one wants to balance these two opposing tendencies and impose just the correct quantity and quality of model constraints. Based on our experience from the present work we conclude that parametric approaches, or any other approaches that severely restrict the freedom of the galaxy lens, over-constrain their models and thus end up with unrealistically small errorbars and biased $H_0$'s. As a result, different models of the same systems can yield discrepant results. For example, Romanowsky \& Kochanek (1998) use dynamical methods to model the galaxy in Q0957, and further constrain the galaxy to be similar to nearby ellipticals; they quote $61^{+13}_{-15}\rm\,km\,sec^{-1}\,Mpc^{-1}$ at the $2\sigma$ level. Bernstein \& Fischer (1999) analyzed the same system but used a {\it range} of astrophysically reasonable parametric models. Their estimate, $77^{+29}_{-24}\rm\,km\,sec^{-1}\,Mpc^{-1}$, also at the $2\sigma$ level, does not agree with that of Romanowsky \& Kochanek.
Our approach is different in that it does not presuppose a galaxy shape, but instead allows us to impose as many or as few constraints as is deemed appropriate. The most unrestricted models would be constrained solely by what we call the primary constraints, i.e., image observables. By definition these would yield unbiased estimates of $H_0$ {\it based on lensing data alone}. We chose to go somewhat beyond this and apply what we call secondary constraints, which describe realistic galaxies in the most general terms. The derived $H_0$ distributions are narrower; the price we pay is a small amount of bias. It can be argued that we are still too generous with our mass models, i.e., other galaxy characteristics can be safely assumed, and hence tighter constraints can be applied to the models without sacrificing the unbiased nature of the results. This avenue can be taken in future work if additional constraints become available. A potential source of additional modeling constraints is optical rings, lensed images of the QSO host galaxy, which are seen in some cases, for example, in PG1115 and B0218. The orientations and elongations of individual images of the QSO host galaxy can be used as linear inequality constraints to narrow down the range of possible galaxy mass distributions. If two or more sources with known redshifts were lensed by the same foreground galaxy, these could be used to break the mass sheet degeneracy and thus further constrain the galaxy. However, in practice, cases of two sources at different redshifts lensed by the same galaxy are expected to be very rare because of the small galaxy cross-sections. Probably the most promising potential constraint is based on the relation between the slope of the projected density profile around the images and the derived $H_0$.
If the slope can be estimated by means other than lensing, or at least a limit placed on its value, as we did in Section~\ref{PG1115} using the observed slope of the light distribution, then $H_0$ can be constrained much better than is currently possible. Using the two systems studied in the present work, PG1115+080 and B1608+656, and imposing primary constraints from the image properties and secondary constraints describing a few general properties of lensing galaxies, we conclude that $H_0$ is between 43 and 79$\rm\,km\,sec^{-1}\,Mpc^{-1}$ at the $90\%$ confidence level, with the best estimate being 61$\rm\,km\,sec^{-1}\,Mpc^{-1}$. \newpage
Botelloides glomerosus is a species of sea snail, a marine gastropod mollusk in the family Trochidae, the top snails.

Description

The size of the shell varies between 2.9 mm and 6 mm. The shell is small, solid, glossy, columnar, and blunt at either end. Its colour is milk-white to pale ochre, yellow at the summit. The shell consists of five whorls. The first three are turbinate; the last two comprise two-thirds of the shell's total length. They are slightly inflated, contracted at the sutures, and wound obliquely. Sculpture: the top whorls are smooth, the last two ornamented by fine flat-topped spiral riblets parted by shallow grooves of slightly greater breadth. The riblets are more crowded on the centre of the whorl. There are 20 on the last whorl and 10 on the antepenultimate whorl. Faint growth-striae cross the riblets and grooves obliquely. The aperture is round, bevelled at the edge, and thickened within but not externally.

Distribution

This marine species is endemic to Australia and occurs off Northern Territory, Western Australia and Queensland.

References

Ponder, W.F. 1985. A revision of the genus Botelloides (Mollusca: Gastropoda: Trochacea). Department of Mines and Energy, South Australia, Special Publication 5: 301-327, p. 305, pl. 1, fig. 6, pl. 5, figs 1-4, pl. 7, fig. 1
Wilson, B. 1993. Australian Marine Shells. Prosobranch Gastropods. Kallaroo, Western Australia: Odyssey Publishing Vol. 1, 408 pp.

External links

To World Register of Marine Species
\section{Introduction} The resolution limit of optical microscopes is one of the most important problems in science and engineering. The Abbe-Rayleigh criterion with respect to the free-space wavelength $\lambda_0$ has been a widely used resolution limit \cite{born_wolf}, but it is now well known that the criterion is heuristic in the context of microscopy and superresolution microscopy is possible. An important class of superresolution microscopy, including stimulated-emission-depletion microscopy \cite{hell} and photoactivated-localization microscopy \cite{betzig}, relies on the accurate localization of point sources from the far field \cite{moerner}. The localization accuracy, which represents an important measure of the microscope resolution, is then limited by the statistics of the optical measurement \cite{bobroff,thompson02,ober,ram}. Prior analyses of point-source localization accuracy assume classical, scalar, and paraxial optics with statistics specific to the measurement methods \cite{bobroff,thompson02,ober,ram}. On a more fundamental level, however, optics is governed by the quantum theory of the electromagnetic field \cite{mandel}, and the existence of more accurate measurement methods \cite{sheppard07} and more fundamental quantum limits remains an open question. For example, the superoscillation phenomenon \cite{berry_popescu} suggests that superresolution diffraction patterns can be obtained at the expense of signal power; can it be exploited to improve the resolution of microscopes \cite{zheludev08,huang_zheludev,hyvarinen12}? Using a quantum Cram\'er-Rao bound (QCRB) \cite{helstrom,holevo} and the full quantum theory of the electromagnetic field \cite{mandel}, here I derive quantum limits to the accuracy of locating point sources.
These quantum resolution limits are more general and fundamental than prior classical analyses in the sense that they apply to any measurement method and take full account of the quantum, nonparaxial, and vectorial nature of photons. To arrive at analytic results, I focus mainly on the cases of one and two monochromatic classical sources and an initial vacuum optical state. The possibility of using squeezed light to further enhance the accuracy of locating one point source will also be discussed. To study partially coherent sources, I model incoherence using the concept of nuisance parameters, which are unknown parameters of no primary interest in the context of statistical inference. Quantum bounds for partially coherent sources are then derived by introducing a new generalized QCRB that accounts for nuisance parameters in a special way. In quantum optics, there is a substantial literature on quantum imaging; see, for example, Refs.~\cite{helstrom,helstrom70,kolobov,fabre,treps,barnett,boto,li08, boyd2012,centroid,shin,rozema,glm_imaging,nair_yen,pirandola,perez12,hemmer12,humphreys,schwartz_oron,schwartz13,cui13,monticone,taylor2013}, but most of those works assume certain quantum optical states without considering how they may be generated by objects relevant to microscopy or simply consider the estimation of mirror displacement. Helstrom's derivation of the QCRB for one point source \cite{helstrom,helstrom70} is the most relevant prior work, although he used the paraxial approximation, did not consider the use of squeezed light, and studied two sources only in the context of binary hypothesis testing \cite{helstrom}.
There have also been intriguing claims of superresolution using the nonclassical photon statistics from single-photon sources \cite{schwartz_oron,schwartz13,cui13,monticone}, but their protocols have not been analyzed using statistical inference, so even though their images appear sharper, the accuracies of their methods in estimating object parameters remain unclear. To investigate their claims, I also derive a quantum bound for locating a single-photon source. \section{Quantum parameter estimation} \label{sec_qcrb} Let the initial quantum state of a system be $\ket{\psi}$. After unitary evolution $U(X,T)$ with respect to Hamiltonian $H(X,t)$ as a function of parameters $X = (X_1,X_2,\dots)$, the quantum system is measured with outcome $Y$. The probability distribution of $Y$ according to Born's rule can be expressed as \cite{helstrom,holevo,wiseman_milburn} \begin{align} P(Y|X) &= \operatorname{tr}\Bk{E(Y)U(X,T) \ket{\psi}\bra{\psi} U^\dagger(X,T)}, \end{align} where $E(Y)$ is the positive operator-valued measure (POVM) that characterizes the quantum measurement and $\operatorname{tr}$ denotes the operator trace. Denote the estimator of $X$ using $Y$ as $\tilde X(Y)$. The estimation error matrix is defined as \begin{align} \Sigma_{\mu\nu}(X) &\equiv \int dY P(Y|X) \Bk{\tilde X_\mu(Y)-X_\mu}\Bk{\tilde X_\nu(Y)-X_\nu}. \label{Sigma} \end{align} For unbiased estimators, the classical Cram\'er-Rao bound states that \cite{vantrees} \begin{align} \Sigma(X) &\ge j^{-1}(X), \end{align} which means that $\Sigma-j^{-1}$ is positive semidefinite. $j(X)$ is the Fisher information matrix given by \begin{align} j_{\mu\nu}(X) &\equiv \int dY P(Y|X)\Bk{\parti{}{X_\mu}\ln P(Y|X)} \Bk{\parti{}{X_\nu}\ln P(Y|X)}. \end{align} The bound has been used, for example, in Refs.~\cite{ober,ram} to evaluate the point-source localization accuracy for a microscope.
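As a concrete illustration of the classical bound, the following Monte Carlo sketch assumes a one-dimensional Gaussian point-spread function of width $\sigma$ sampled by $N$ independent photon detections (an illustrative assumption, not the cited analyses), for which the Cram\'er-Rao bound on the centroid is $\sigma/\sqrt{N}$ and the sample mean attains it:

```python
import numpy as np

# Monte Carlo check that the sample mean attains the classical Cramer-Rao
# bound sigma/sqrt(N) for localizing a 1-D Gaussian point-spread function
# under shot-noise-limited photon counting.  All numbers are illustrative.
rng = np.random.default_rng(0)
sigma, n_photons, n_trials = 1.0, 100, 2000
x_true = 0.3

# Each trial: n_photons arrival positions drawn from the PSF; the estimator
# is the sample mean, which is the maximum-likelihood estimator here.
estimates = [rng.normal(x_true, sigma, n_photons).mean() for _ in range(n_trials)]
rms_error = np.std(estimates)

crb = sigma / np.sqrt(n_photons)
print(rms_error, crb)   # nearly equal: the bound is attained
```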
It turns out that, for any POVM and thus any measurement in quantum mechanics, another lower bound exists in the form of the QCRB \cite{helstrom,braunstein}: \begin{align} \Sigma(X) &\ge j^{-1}(X) \ge J^{-1}(X), \end{align} which means that $j^{-1} - J^{-1}$ and $\Sigma-J^{-1}$ are positive-semidefinite. $J$ is the quantum Fisher information (QFI) matrix; it can be obtained by expressing the fidelity $|\bra{\psi}U^\dagger(X,T)U(X+\delta X,T)\ket{\psi}|^2$ in the interaction picture \cite{tsang_nair} and expanding it to the second order of $\delta X$ \cite{paris,pasquale}. The result is \begin{align} J_{\mu\nu}(X) &= 4\operatorname{Re} \Avg{\Delta g_\mu(X)\Delta g_\nu(X)}, \end{align} where $\operatorname{Re}$ denotes the real part, $\avg{A} \equiv \bra{\psi}A\ket{\psi}$, $\Delta A \equiv A-\avg{A}$, and \begin{align} g_\mu(X) &\equiv \frac{1}{\hbar}\int_0^T dt U^\dagger(X,t) \parti{H(X,t)}{X_\mu} U(X,t) \end{align} is the generator of the parameter shift in the Heisenberg picture. For $M$ trials, the QFI is simply multiplied by $M$, and at least one component of the QCRB can be attained in an asymptotic $M\to\infty$ sense \cite{fujiwara2006}. If one wishes to consider a new set of parameters $\theta$ related to the original set $X$ and $X$ can be expressed as a function of $\theta$, the new QFI matrix is simply given by \begin{align} J_{ab}'(\theta) = \left.\sum_{\mu,\nu}\parti{X_\mu}{\theta_a}J_{\mu\nu}(X)\parti{X_\nu}{\theta_b} \right|_{X = X(\theta)}. \end{align} Various generalizations of the QCRB and alternatives are available \cite{helstrom,yuen_lax,twc,qzzb,qbzzb}, but the presented version suffices to illustrate the pertinent physics. In Sec.~\ref{sec_qcrb_nuisance}, the QCRB will be generalized to a Bayesian version that treats nuisance parameters separately and is used to study partially coherent sources. 
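The fidelity route to the QFI can be checked on a toy system. The sketch below assumes a qubit with generator $g = \sigma_z/2$ and initial state $\ket{+}$ (an illustration only, not the optical model of the following sections), for which $J = 4\avg{\Delta g^2} = 1$:

```python
import numpy as np

# Toy check of J = 4 Re<(dg)^2> against the fidelity expansion
# |<psi|U^dag(X) U(X+delta)|psi>|^2 ~ 1 - J delta^2 / 4, for a qubit with
# generator g = sigma_z / 2 and initial state |+> (an assumed toy model).
sz = np.diag([1.0, -1.0])
psi = np.array([1.0, 1.0]) / np.sqrt(2)

g = sz / 2
dg = g - (psi @ g @ psi) * np.eye(2)          # Delta g = g - <g>
J_direct = 4 * np.real(psi @ dg @ dg @ psi)   # = 4 <Delta g^2> = 1

delta = 1e-3
U = lambda x: np.diag(np.exp(-1j * x * np.diag(sz) / 2))   # U = exp(-i x g)
fid = abs(psi @ U(0).conj().T @ U(delta) @ psi) ** 2
J_fid = 4 * (1 - fid) / delta**2
print(J_direct, J_fid)   # both close to 1
```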
\begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{1source}} \caption{A classical point source with dipole moment $\bs p(t)$ radiating in free space. Its position $\bs r$ is estimated by measuring the quantum optical field, with $a(\bs k,s)$ denoting its annihilation operator.} \label{1source} \end{figure} \section{One classical point source} \label{sec_onesource} Consider first a classical point source, as depicted in Fig.~\ref{1source}. The Hamiltonian is \cite{mandel} \begin{align} H(\bs r,t) &= H_F + H_I(\bs r,t), \\ H_F &= \sum_s \int d^3k\, \hbar\omega(\bs k) a^\dagger(\bs k,s) a(\bs k,s), \\ H_I(\bs r,t) &= -\bs p(t)\cdot \bs E(\bs r), \\ \bs E(\bs r) &= \sum_s \int d^3k \sqrt{\frac{\hbar\omega}{2(2\pi)^3\epsilon_0}} \big[i\bs\varepsilon(\bs k,s)a(\bs k,s)e^{i\bs k\cdot\bs r} \nonumber\\&\quad +\textrm{H.c.}\big], \end{align} where $\bs k = k_x\hat{\bs x}+k_y\hat{\bs y} + k_z\hat{\bs z}$ is a wavevector, $(\hat{\bs x},\hat{\bs y},\hat{\bs z})$ denote unit vectors in the Cartesian coordinate system, $\int d^3k \equiv \int_{-\infty}^{\infty} dk_x \int_{-\infty}^{\infty} dk_y \int_{-\infty}^{\infty} dk_z$, $s$ is an index for the two polarizations, $\bs\varepsilon(\bs k,s)$ is a unit polarization vector, $\omega(\bs k) = c|\bs k|$, $c$ is the speed of light, $\bs p(t)$ is the c-number dipole moment of the source, $\bs r =x\hat{\bs x} + y\hat{\bs y}+z\hat{\bs z}$ is its position, $\epsilon_0$ is the free-space permittivity, $a(\bs k,s)$ is an annihilation operator obeying the commutation relation $\Bk{a(\bs k,s),a^\dagger(\bs k',s')} = \delta_{ss'}\delta^3(\bs k-\bs k')$, and $\textrm{H.c.}$ denotes the Hermitian conjugate. Since $\bs p(t)$ is a c-number, $H_I$ implements a field displacement operation \cite{mandel}. The Heisenberg picture of $a(\bs k,s)$ is \begin{align} a(\bs k,s,t) &\equiv U^\dagger(X,t)a(\bs k,s)U(X,t) \\ &= e^{-i\omega t}\Bk{a(\bs k,s) + \alpha(\bs k,s,\bs r,t)}, \end{align} where $\alpha$ is the radiated field.
Assuming $\bs p(t)= \bs p_0 e^{-i\omega_0 t} + \textrm{c.c.}$, where $\textrm{c.c.}$ denotes the complex conjugate, $\alpha$ becomes \begin{align} \alpha(\bs k,s,\bs r,T) &= \sqrt{\frac{\omega_0}{2(2\pi)^3\hbar\epsilon_0}} e^{-i\bs k\cdot\bs r}\bs\varepsilon^*(\bs k,s) \cdot \nonumber\\&\quad \left[\bs p_0 e^{i(\omega-\omega_0)T/2}\frac{\sin(\omega-\omega_0)T/2}{(\omega-\omega_0)/2} \right. \nonumber\\&\quad \left. +\bs p_0^*e^{i(\omega+\omega_0)T/2}\frac{\sin(\omega+\omega_0)T/2}{(\omega+\omega_0)/2} \right], \label{alpha} \end{align} which indicates that only the optical modes with $\omega(\bs k) = \omega_0$ grow in time, corresponding to the far field, while all the other near-field modes with $\omega(\bs k) \neq \omega_0$ oscillate. The $\omega(\bs k) = \omega_0$ relation specifies the spatial frequencies available to the far optical fields \cite{goodman,heintzmann09}. Assuming $T\gg 2\pi/\omega_0$, such that $\sin^2[(\omega\pm\omega_0)T/2]/[(\omega\pm\omega_0)/2]^2\approx 2\pi T\delta(\omega\pm\omega_0)$, using the identity $\sum_s \varepsilon_\mu(\bs k,s)\varepsilon_\nu^*(\bs k,s) = \delta_{\mu\nu} - k_\mu k_\nu/|\bs k|^2$ \cite{mandel}, and switching to the spherical coordinate system for $\bs k$, it can be shown that the average number of radiated photons for an initial vacuum state is \begin{align} N &\equiv \sum_s \int d^3k \abs{\alpha(\bs k,s,\bs r,T)}^2 \approx \frac{|\bs p_0|^2\omega_0^3 T}{3\pi \hbar\epsilon_0c^3}. \label{N} \end{align} The far-field limit ($\omega_0T \to\infty$) will be assumed hereafter. I now focus on two representative cases: a linearly polarized dipole with $\bs p_0 = p_0\hat{\bs z}$ and a circularly polarized dipole with $\bs p_0 = p_0(\hat{\bs x}+i\hat{\bs y})/\sqrt{2}$. 
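Eq.~(\ref{N}) is equivalent to dividing the time-averaged Larmor power of the oscillating dipole, $P = \omega_0^4|\bs p_0|^2/(3\pi\epsilon_0 c^3)$, by the photon energy $\hbar\omega_0$. A quick numeric consistency check, with purely hypothetical source parameters:

```python
import numpy as np

hbar, eps0, c = 1.054571817e-34, 8.8541878128e-12, 2.99792458e8

# Hypothetical source parameters (illustrative only).
lam0 = 500e-9                      # free-space wavelength
w0 = 2 * np.pi * c / lam0          # angular frequency
p0 = 1e-32                         # dipole amplitude |p0| in C*m
T = 1e-6                           # radiation time

# Eq. (N): average radiated photon number.
N = p0**2 * w0**3 * T / (3 * np.pi * hbar * eps0 * c**3)

# Cross-check: time-averaged Larmor power of p(t) = p0 e^{-i w0 t} + c.c.
# is P = w0^4 |p0|^2 / (3 pi eps0 c^3); then N = P T / (hbar w0).
P = w0**4 * p0**2 / (3 * np.pi * eps0 * c**3)
print(N, P * T / (hbar * w0))      # identical by construction
```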
Taking the unknown parameters to be $\bs r$, the generators for $X = (x,y,z)$ can be expressed, after some algebra, as \begin{align} \Delta g_\mu(\bs r) &= -\frac{\sqrt{2}}{W_\mu}\Delta P_\mu(\bs r), \quad \mu \in \{x,y,z\}, \label{dg} \\ \Delta P_\mu(\bs r) &\equiv \frac{1}{\sqrt{2}i}\Bk{\Delta b_\mu(\bs r) -\Delta b_\mu^\dagger(\bs r)}, \end{align} where $\Delta b_\mu$ is a normalized annihilation operator defined as \begin{align} \Delta b_\mu(\bs r) \equiv W_\mu\sum_s \int d^3k\, \Bk{-ik_\mu \alpha^*(\bs k,s,\bs r,T)} \Delta a(\bs k,s), \end{align} such that $[\Delta b_\mu(\bs r),\Delta b_\nu^\dagger(\bs r)]= \delta_{\mu\nu}$, and the normalization constants $W_\mu$ are \begin{align} W_\mu &\equiv \Bk{\sum_s \int d^3k\, k_\mu^2 \abs{\alpha(\bs k,s,\bs r,T)}^2}^{-1/2}. \label{Wmu} \end{align} The $d^3k$ integrals can again be computed with the help of the far-field assumption and spherical coordinates. The results depend on $\bs p_0$; for $\bs p_0 = p_0\hat{\bs z}$, \begin{align} W_x &= W_y = \sqrt{\frac{5}{2}}\frac{\lambda_0}{2\pi\sqrt{N}}, & W_z &= \frac{\sqrt{5}\lambda_0}{2\pi\sqrt{N}}, \label{Wxyz1} \end{align} and for $\bs p_0 = p_0(\hat{\bs x}+i\hat{\bs y})/\sqrt{2}$, \begin{align} W_x &= W_y = \sqrt{\frac{10}{3}}\frac{\lambda_0}{2\pi\sqrt{N}}, & W_z &= \sqrt{\frac{5}{2}}\frac{\lambda_0}{2\pi\sqrt{N}}, \label{Wxyz2} \end{align} but the important point here is that they are all on the order of $\lambda_0/\sqrt{N}$, where $\lambda_0 \equiv 2\pi c/\omega_0$ is the free-space wavelength. The QFI becomes \begin{align} J_{\mu\nu}(\bs r) &= \frac{8}{W_\mu^2}\Avg{\Delta P_\mu(\bs r)\Delta P_\nu(\bs r)}.
\end{align} For an initial vacuum state (or any coherent state), $\avg{\Delta P_\mu(\bs r)\Delta P_\nu(\bs r)}=\delta_{\mu\nu}/2$, and the QCRB is hence \begin{align} J_{\mu\nu}(\bs r) &= \frac{4}{W_\mu^2}\delta_{\mu\nu}, & \Sigma_{\mu\mu}(\bs r) &\ge \frac{W_\mu^2}{4}, \label{qsnl} \end{align} meaning that the quantum resolution limit in terms of the root-mean-square error $\sqrt{\Sigma_{\mu\mu}}$ is on the order of $\lambda_0/\sqrt{N}$. I call this limit the quantum shot-noise limit. Generalization to lossless media is straightforward and results simply in $\lambda_0$ being replaced by the wavelength in the medium. Sec.~\ref{sec_single1} shows that a single-photon source also obeys this limit with repeated trials. Assuming uncorrelated photons, Refs.~\cite{bobroff,thompson02,ober,centroid} derived a similar limit in the form of $\sigma/\sqrt{N}$, where $\sigma$ is the width of the imaging point-spread function. While their limit also scales as $1/\sqrt{N}$, all those analyses assume the paraxial approximation and measurement by a photon-counting camera, whereas the quantum shot-noise limit here is valid for any numerical aperture and any measurement, including common methods such as photon counting, homodyne/heterodyne detection, and digital holography. The limit in Refs.~\cite{bobroff,thompson02,ober,centroid} also implies that the quantum shot-noise limit is reasonably tight, as the camera measurement with suitable postprocessing can at least follow the quantum-optimal shot-noise scaling. Sec.~\ref{sec_enhance} shows that homodyne measurement with a special local-oscillator field can also approach the quantum limit if the radiation is coherent. 
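The constants in Eqs.~(\ref{Wxyz1}) and (\ref{Wxyz2}) can be verified by integrating Eq.~(\ref{Wmu}) numerically over the far-field sphere; the sketch below assumes only that the polarization-summed dipole pattern on the shell $|\bs k| = k_0$ is proportional to $1 - |\hat{\bs k}\cdot\hat{\bs p}|^2$:

```python
import numpy as np

# Numerical check of Eqs. (Wxyz1)-(Wxyz2).  On the far-field shell |k| = k0
# the polarization-summed dipole pattern is proportional to 1 - |khat.phat|^2,
# so W_mu^2 k0^2 N = [int dOmega pattern] / [int dOmega khat_mu^2 pattern].
th = np.linspace(0.0, np.pi, 801)
ph = np.linspace(0.0, 2 * np.pi, 801)
TH, PH = np.meshgrid(th, ph, indexing="ij")
w = np.sin(TH)                                   # solid-angle weight
kx, kz = np.sin(TH) * np.cos(PH), np.cos(TH)

def w2_k0sq_N(pattern, kmu):
    """Return the dimensionless combination W_mu^2 k0^2 N."""
    return np.sum(pattern * w) / np.sum(kmu**2 * pattern * w)

lin = 1 - kz**2              # p0 along z (linear dipole)
circ = 1 - (1 - kz**2) / 2   # p0 along (x + iy)/sqrt(2) (circular dipole)

print(w2_k0sq_N(lin, kx), w2_k0sq_N(lin, kz))    # ~ 5/2 and 5
print(w2_k0sq_N(circ, kx), w2_k0sq_N(circ, kz))  # ~ 10/3 and 5/2
```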
For a concrete numerical example, consider the semiclassical paraxial analysis of conventional single-molecule microscopy by Ober \textit{et al.}~\cite{ober}, who used the classical Cram\'er-Rao bound and a shot-noise assumption to derive a limit of $2.301$~nm on the root-mean-square localization error for a free-space wavelength of $520~$nm, a numerical aperture of $1.4$, a photon collection efficiency of 0.033, a photon flux of $2\times 10^6~$s$^{-1}$, and an acquisition time of $0.01$~s. If the efficiency were $1$, their limit would become $0.418$~nm. In comparison, if I take the refractive index of the immersion oil to be $1.52$, $\lambda_0 = 520$~nm$/1.52 = 342~$nm to be the wavelength in the medium, and the photon number to be $N = 2\times 10^4$, the quantum shot-noise limit according to Eqs.~(\ref{Wxyz1}), (\ref{Wxyz2}), and (\ref{qsnl}) is $\lambda_0/(2\pi\sqrt{N}) = 0.385$~nm times a constant factor close to 1. It remains to be seen whether superoscillation techniques are similarly efficient, but the key point here is that, since the quantum bound is valid for any measurement and conventional methods can already get close to it, no other measurement technique is able to offer any significant advantage in resolution enhancement over the conventional methods. \section{Quantum enhancement} \label{sec_enhance} Even though the source is classical, quantum enhancement is possible if the initial state $\ket{\psi}$ is nonclassical, as I now show. If $\Delta g_\mu$ were independent of the parameter, the accuracy could be enhanced by squeezing and measuring the conjugate quadrature \cite{braunstein96}. Although $\Delta g_\mu(\bs r)$ depends on the unknown $\bs r$ here, the radiated field can be approximated as $\alpha(\bs k,s,\bs r,T) \approx \alpha(\bs k,s,\bs r_0,T)$, resulting in $\Delta g_\mu(\bs r) \approx \Delta g_\mu(\bs r_0)$, provided that \begin{align} \abs{\bs r-\bs r_0} \ll \lambda_0 \end{align} with respect to a known reference position $\bs r_0$.
The acquisition of such prior information will require a fixed amount of overhead resource, but once it is done, one can squeeze the quadrature \begin{align} \Delta Q_\mu(\bs r_0) &\equiv \frac{1}{\sqrt{2}}\Bk{\Delta b_\mu(\bs r_0)+\Delta b_\mu^\dagger(\bs r_0)} \end{align} in the initial state and perform a homodyne measurement of $\Delta Q_\mu(\bs r_0)$ to estimate $\bs r$ much more accurately. Since $[\Delta Q_\mu(\bs r_0), \Delta Q_\nu(\bs r_0)] = 0$, all three quadratures can be squeezed and measured simultaneously in principle. The estimation error becomes \begin{align} \Sigma_{\mu\mu}(\bs r) &\approx \frac{W_\mu^2}{2}\Avg{\Delta Q_\mu^2(\bs r_0)}, \end{align} and the error reduction below the shot-noise limit is determined by the squeezing factor, which is limited by the average photon number $N_0$ in the initial state (not to be confused with $N$). Using $\Avg{\Delta Q_\mu^2(\bs r_0)}+\Avg{\Delta P_\mu^2(\bs r_0)} \le 2N_0+1$ and the uncertainty relation $\avg{\Delta Q_\mu^2}\avg{\Delta P_\mu^2} \ge 1/4$, it can be shown that \begin{align} \Avg{\Delta Q_\mu^2(\bs r_0)} &\ge \frac{f(N_0)}{2}, \quad \Avg{\Delta P_\mu^2(\bs r_0)} \le \frac{1}{2f(N_0)}, \\ f(N_0) &\equiv (2N_0+1)\Bk{1-\sqrt{1-(2N_0+1)^{-2}}}, \end{align} where \begin{align} f(0) &= 1, & f(N_0)&\approx \frac{1}{4N_0} \textrm{ for } N_0\gg 1. \end{align} With a zero-mean minimum-uncertainty state and all initial photons in the $\Delta b_\mu(\bs r_0)$ mode, the estimation error becomes \begin{align} \Sigma_{\mu\mu}(\bs r) &\approx \frac{W_\mu^2}{2}\Avg{\Delta Q_\mu^2(\bs r_0)} =\frac{W_\mu^2}{4}f(N_0). \end{align} The enhancement factor $f(N_0)$ is optimal, as the QCRB can be further bounded by \begin{align} \Sigma_{\mu\mu}(\bs r) &\ge J_{\mu\mu}^{-1}(\bs r) = \frac{W_\mu^2}{8\avg{\Delta P_\mu^2(\bs r)}} \ge \frac{W_\mu^2}{4}f(N_0). 
\end{align} This means that squeezed light with average photon number $N_0$ can beat the quantum shot-noise limit to the mean-square error by roughly a factor of $N_0$. The optical mode to be squeezed has a profile $ik_\mu\alpha(\bs k,s,\bs r_0,T)$. This means that, in real space, the electric field profile of the mode should be the spatial derivative of the radiated field. This kind of squeezing and measurement has actually been demonstrated experimentally, albeit in the paraxial regime, by Taylor \textit{et al.}\ in the context of particle tracking \cite{taylor2013}, where the weak scatterer under a strong pump can be modeled as a classical source, similar to the implementation of field displacement by a beam splitter \cite{paris96}, and the spatial mode profile of the squeezed light and the local oscillator is a spatial derivative of the scattered field. To realize an enhancement in practice, accurate phase locking of the squeezed light and the local oscillator to the radiated field and a high measurement efficiency are crucial. Phase locking cannot be achieved with incoherent point sources such as fluorescent markers, but can be done with dielectric particles via Rayleigh scattering \cite{taylor2013} or second-harmonic nanoparticles \cite{pu08,hsieh09}; the latter are especially promising for biological imaging applications \cite{dempsey}. \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{2source}} \caption{Two classical point sources with dipole moments $\bs p(t)$ and $\bs p'(t)$ at $\bs r$ and $\bs r'$ with quantum optical radiation.} \label{2source} \end{figure} \section{Two classical point sources} \label{sec_twosources} Next, consider two classical point sources at $\bs r$ and $\bs r'$, as shown in Fig.~\ref{2source}. The Hamiltonian is now \begin{align} H(\bs r,\bs r',t) &= H_F + H_I(\bs r,\bs r',t), \\ H_I(\bs r,\bs r',t) &= -\bs p(t)\cdot\bs E(\bs r)-\bs p'(t)\cdot\bs E(\bs r'). 
\end{align} The Heisenberg picture of $a(\bs k,s)$ becomes $a(\bs k,s,t) = e^{-i\omega t}[a(\bs k,s) + \alpha(\bs k,s,\bs r,t) +\alpha'(\bs k,s,\bs r',t)]$, where $\alpha$ and $\alpha'$ are the radiated fields from the two sources, $\alpha$ is the same as before, and $\alpha'$ has the same expression as $\alpha$ except that $\bs p$ is replaced by $\bs p'$ and $\bs r$ by $\bs r'$. One can then follow the preceding procedure to obtain the QCRB for estimating $\bs r$ and $\bs r'$. To highlight the important physics, however, consider here the estimation of just two parameters $X = (x,x')$. The generators $\Delta g_x$ and $\Delta g_{x'}$ may not commute, and the QFI matrix for an initial vacuum or any coherent state now has off-diagonal components: \begin{align} J_{xx'}(X) &= J_{x'x}(X) \nonumber\\ &= 4\operatorname{Re} \sum_s \int d^3k\, k_x^2 \alpha^*(\bs k,s,\bs r,T)\alpha'(\bs k,s,\bs r',T), \end{align} while $J_{xx}$ remains the same and $J_{x'x'}$ has a similar expression to $J_{xx}$. $J_{xx}$ and $J_{x'x'}$ still obey a shot-noise scaling with the average photon number, but the nonzero off-diagonal components mean that the parameters act as nuisance parameters to each other, and the QCRB with respect to, say, $x$ is always raised: \begin{align} \Sigma_{xx}(X) &\ge \frac{1}{J_{xx}[1-\kappa(X)]}, \end{align} where the resolution degradation factor, defined as \begin{align} \kappa(X) &\equiv \frac{J_{xx'}^2(X)}{J_{xx}J_{x'x'}} = \frac{(\operatorname{Re} \sum_s \int d^3k\, k_x^2 \alpha^*\alpha')^2} {\sum_s \int d^3k\, k_x^2 |\alpha|^2\sum_s \int d^3k\, k_x^2 |\alpha'|^2}, \end{align} is within the range $0\le \kappa\le 1$ and determined by the overlap between the two differential mode profiles. The nuisance-parameter effect generalizes the Rayleigh criterion and other classical results \cite{ram} by revealing a fundamental measurement-independent degradation of resolution for two point sources with overlapping radiation. 
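The overlap integral defining $\kappa$ can be evaluated numerically. The sketch below assumes two identical in-phase dipoles polarized along $\hat{\bs x}$ and separated by $\Delta x$ along $x$ (an illustrative special case), so that $\operatorname{Re}\,\alpha^*\alpha'$ contributes a factor $\cos(k_x\Delta x)$ and the polarization sum gives $1-\hat k_x^2$:

```python
import numpy as np

# Numerical sketch of the degradation factor kappa for two identical in-phase
# dipoles polarized along x and separated by dx along x (an assumed special
# case).  On the far-field shell, Re alpha* alpha' contributes cos(k_x dx),
# and the polarization sum gives 1 - khat_x^2.
th = np.linspace(0.0, np.pi, 801)
ph = np.linspace(0.0, 2 * np.pi, 801)
TH, PH = np.meshgrid(th, ph, indexing="ij")
w = np.sin(TH)                       # solid-angle weight
kxhat = np.sin(TH) * np.cos(PH)

def kappa(dx_over_lam0):
    f = kxhat**2 * (1 - kxhat**2) * w            # k_x^2 |alpha|^2 weight
    c = np.cos(2 * np.pi * dx_over_lam0 * kxhat) # relative-phase factor
    return np.sum(f * c) ** 2 / np.sum(f) ** 2

print(kappa(0.01), kappa(0.5), kappa(2.0))   # near 1, intermediate, near 0
```

As expected, $\kappa$ approaches 1 for sub-wavelength separations and falls off with oscillations once the separation exceeds $\lambda_0$.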
For example, Fig.~\ref{resolution_dz} plots $\kappa$ against $|x-x'|/\lambda_0$, assuming $\bs p = \bs p' = \bs p_0 e^{-i\omega_0t}+\textrm{c.c.}$, $T\gg 2\pi/\omega_0$, $\bs p_0 = p_0\hat{\bs x}$, $y=y'$, and $z=z'$. $\kappa\approx 0$ for $|x-x'| \gg \lambda_0$, as expected, but it approaches $1$ and leads to a diverging QCRB when $|x-x'|\ll \lambda_0$. Sec.~\ref{sec_partial2} shows that the degradation effect should still exist for two partially coherent sources. The degradation effect can be avoided by minimizing the overlap before each source is located independently. The overlap can be reduced by making the radiated fields separate in space, time, frequency, quadrature, or polarization; time multiplexing of point sources has especially been the key driver in current superresolution microscopy \cite{hell,betzig,moerner}. Note that $\kappa$ also depends on the relative phase between $\alpha$ and $\alpha'$. For example, under the assumptions in the caption of Fig.~\ref{resolution_dz}, it can be shown that the QFI matrix transformed with respect to the average position $(x+x')/2$ and the separation $x-x'$ is diagonal. The QFI component with respect to the average position still obeys a shot-noise scaling, while the increase in $\kappa$ can be traced to the increased error in estimating the separation. Similarly, when $\alpha$ and $\alpha'$ are $180^\circ$ out of phase but otherwise obey the same assumptions, $\kappa$ remains the same but its increase is now due to the increased error for the average position. An interesting scenario occurs when $\alpha$ and $\alpha'$ are $90^\circ$ out of phase, the two fields are in orthogonal quadratures, $\kappa$ becomes zero, and they can be measured separately using homodyne or heterodyne detection. 
This phenomenon suggests that structured illumination \cite{gustafsson99,gustafsson,heintzmann,heintzmann09} can be used to excite the sources, such that their relative phase and amplitude can be controlled to some degree and the overlap can be reduced for certain ranges of parameters. When the overlap is unavoidable or when the generators do not commute, heterodyne measurements can still be used to measure both quadratures of $a(\bs k,s,T)$ and should have a classical Fisher information within a factor of $1/2$ of the QFI. Quantum enhancement may also be possible using entangled squeezed states \cite{genoni}; the specific experimental design will be left for future studies. \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{resolution_dz}} \caption{Plot of the resolution degradation factor $\kappa$ versus the true separation $|x-x'|/\lambda_0$ between two in-phase point sources, assuming $\bs p = \bs p' = \bs p_0 e^{-i\omega_0t}+\textrm{c.c.}$, $T\gg 2\pi/\omega_0$, $\bs p_0 = p_0\hat{\bs x}$, $y=y'$, and $z=z'$. At $|x-x'| = 0$, the Fisher information matrix is singular \cite{stoica,rotnitzky}. $\kappa$ remains the same for two out-of-phase sources with otherwise the same assumptions.} \label{resolution_dz} \end{figure} \section{Bayesian quantum Cram\'er-Rao bound with nuisance parameters} \label{sec_qcrb_nuisance} Incoherent sources are characterized by the statistical fluctuations of the fields \cite{mandel}. For point-source radiation, incoherence originates from the randomness of the amplitude, phase, and direction of the point dipole. In the context of statistical inference, these random variables are most suitably modeled as nuisance parameters. There are many ways to generalize the Cram\'er-Rao bounds when nuisance parameters are present \cite{bell}. The previous sections show one way, which includes the nuisance parameters as part of the wanted parameters $X$. 
To derive tighter bounds for other types of nuisance parameters, here I start with a Bayesian QCRB and generalize a classical approach by Miller and Chang \cite{bell,miller_chang}. Let $Z$ be a set of nuisance parameters, and suppose first that $Z$ is given. The Bayesian estimation error matrix is \begin{align} \bar\Sigma_{\mu\nu}(Z) &\equiv \int dX dY P(Y|X,Z)P_{X|Z}(X|Z) \Bk{\tilde X_\mu(Y,Z)-X_\mu} \nonumber\\&\quad\times \Bk{\tilde X_\nu(Y,Z)-X_\nu}, \end{align} where $P_{X|Z}(X|Z)$ is the prior distribution of $X$ conditioned on $Z$. Note that this Bayesian definition of error regards $X$ as a random parameter by averaging over its prior and is different from the frequentist definition in Eq.~(\ref{Sigma}) \cite{vantrees}. A Bayesian quantum Cram\'er-Rao bound valid for any estimator is given by \cite{yuen_lax,twc} \begin{align} \bar\Sigma(Z) &\ge \bar J^{-1}(Z), \label{bayes_qcrb} \\ \bar J(Z) &= \mathbb E_{X|Z}\Bk{J(X|Z)} + j(Z), \end{align} where $J(X|Z)$ is the same QFI as before, except that it is now conditioned on $Z$, $\mathbb E_{X|Z}$ denotes expectation over $P_{X|Z}(X|Z)$, and $j(Z)$ is a prior Fisher information defined as \begin{align} j_{\mu\nu}(Z) &\equiv \int dX P_{X|Z}(X|Z)\Bk{\parti{}{X_\mu} \ln P_{X|Z}(X|Z)} \nonumber\\&\quad\times \Bk{\parti{}{X_\nu} \ln P_{X|Z}(X|Z)}. \end{align} If $Z$ is a random parameter with prior distribution given by $P_Z(Z)$, the estimation error is \begin{align} \Pi_{\mu\nu} &\equiv \mathbb E_Z\Bk{\bar{\Sigma}'(Z)}, \\ \bar{\Sigma}'(Z) &\equiv \int dX dY P(Y|X,Z)P_{X|Z}(X|Z) \Bk{\tilde X_\mu(Y)-X_\mu} \nonumber\\&\quad\times \Bk{\tilde X_\nu(Y)-X_\nu}, \end{align} where $\mathbb E_Z$ denotes expectation over $P_Z$ and the estimator $\tilde X_\mu(Y)$ can no longer depend on $Z$. The lower bound in Eq.~(\ref{bayes_qcrb}) still holds for $\bar{\Sigma}'(Z)$, so one can obtain a lower bound on $\Pi$ given by \begin{align} \Pi &\ge \mathbb E_Z\Bk{\bar J^{-1}(Z)}. 
\label{newbound} \end{align} The important mathematical feature of this bound is that the expectation with respect to the nuisance parameter $Z$ is taken after the inverse of the conditional QFI matrix. This can sometimes lead to a tighter bound than a QCRB that includes $Z$ as part of $X$. Note also that this Bayesian bound is valid for any estimator, not just the unbiased ones, unlike the claim in Ref.~\cite{miller_chang}. The tightness of the bound should depend on whether the nuisance parameters can be accurately estimated from the measurements. \section{One partially coherent source} \label{sec_partial1} I now use the new bound to study partially coherent sources. First, consider the example of one point source in Sec.~\ref{sec_onesource}, but suppose that the complex dipole amplitude $p_0$ is unknown. Assuming $Z = p_0$, the quantum state before measurement is \begin{align} \rho &= \int d^2p_0 P_Z(p_0) U(X,p_0,T)\rho_0U^\dagger(X,p_0,T). \end{align} If $\rho_0$ is a vacuum state, $U\rho_0U^\dagger$ is a coherent state, and $\rho$ is a classical mixed state of light with $P_Z(p_0)$ determining the Sudarshan-Glauber P function \cite{mandel}. The random $p_0$ therefore gives rise to a classical partially coherent source model. For an initial vacuum, $\bar J(p_0)$ is given by \begin{align} \bar J_{\mu\nu}(p_0) &= \frac{4}{W_\mu^2(p_0)}\delta_{\mu\nu} +j_{\mu\nu} = \frac{N(p_0)}{C_\mu\lambda_0^2}\delta_{\mu\nu} + j_{\mu\nu}, \end{align} where $W_\mu$ and $N$ now depend on the unknown dipole moment and $C_\mu$ is a constant on the order of 1 that can be determined from Eqs.~(\ref{Wxyz1}) or (\ref{Wxyz2}). Assuming that $j$ is diagonal and independent of $p_0$ and taking the inverse and then the expectation according to Eq.~(\ref{newbound}), I obtain \begin{align} \Pi_{\mu\mu} &\ge \mathbb E_{p_0}\Bk{\frac{1}{4/W_\mu^2(p_0) + j_{\mu\mu}}} = \mathbb E_{p_0} \Bk{\frac{1}{ N(p_0)/(C_\mu\lambda_0^2) + j_{\mu\mu}}}. 
\end{align} For example, if $P_Z(p_0)$ corresponds to the P function of a thermal source, the bound can be written in terms of the average radiated photon number $\bar N$ as \begin{align} &\quad \mathbb E_{p_0} \Bk{\frac{1}{N(p_0)/(C_\mu \lambda_0^2) + j_{\mu\mu}}} \nonumber\\ &= \frac{C_\mu \lambda_0^2}{\bar N} \int_0^\infty dN \exp\bk{-\frac{N}{\bar N}}\frac{1}{N+C_\mu\lambda_0^2 j_{\mu\mu}} \\ &\approx \frac{C_\mu\lambda_0^2}{\bar N} \ln \frac{\bar N}{C_\mu \lambda_0^2 j_{\mu\mu}}, \quad \frac{\bar N}{C_\mu\lambda_0^2} \gg j_{\mu\mu}. \end{align} An alternative Bayesian QCRB can be obtained by including $p_0$ as part of $X$. In that case, the off-diagonal QFI elements between $p_0$ and $\bs r$ turn out to be zero, and the expectation with respect to $p_0$ is taken before the inverse, leading to a bound given by $C_\mu\lambda_0^2/\bar N$. The separate treatment of $p_0$ as nuisance parameter here involves taking the expectation after the inverse, giving rise to an additional factor $\sim \ln\bar N$ and thus a tighter bound for large $\bar N$. In general, Jensen's inequality can be used to show that the shot-noise scaling with respect to the average photon number $\mathbb E_{p_0}[N(p_0)]$ cannot be beat for any nonnegative P function. \section{Two partially coherent sources} \label{sec_partial2} Next, consider the example of two point sources in Sec.~\ref{sec_twosources}, and $Z = (\bs p_0,\bs p_0')$ is now assumed to be unknown to model partially coherent sources. Assuming again that $j$ is diagonal and $X = (x,x')$ is independent of $Z$, \begin{align} \bar J_{xx}(\bs p_0) &= \frac{4}{W_x^2(\bs p_0)} + j_{xx}, \\ \bar J_{x'x'}(\bs p_0') &= \frac{4}{W_{x'}^2(\bs p_0')} + j_{x'x'}, \\ \bar J_{xx'}(\bs p_0,\bs p_0') &= \mathbb E_X\Bk{J_{xx'}(\bs p_0,\bs p_0')}. \end{align} $\bar J_{xx'}$ is now the expectation of $J_{xx'}$ over $X = (x,x')$, conditioned on the dipole moments. 
If the two point sources are \textit{a priori} known to be close relative to $\lambda_0$, $\bar J_{xx'}(\bs p_0,\bs p_0')$ can still have a significant magnitude for certain $(\bs p_0,\bs p_0')$. The bound given by Eq.~(\ref{newbound}) becomes \begin{align} \Pi_{xx} &\ge \mathbb E_{(\bs p_0,\bs p_0')} \BK{\frac{1}{\bar J_{xx}(\bs p_0)[1-\bar\kappa(\bs p_0,\bs p_0')]}}, \label{Pi_xx} \end{align} where a new resolution degradation factor is defined as \begin{align} \bar\kappa(\bs p_0,\bs p_0') &\equiv \frac{\bar J_{xx'}^2(\bs p_0,\bs p_0')}{\bar J_{xx}(\bs p_0)\bar J_{x'x'}(\bs p_0')}. \end{align} The important point here is that $\bar\kappa(\bs p_0,\bs p_0')$ can still be close to $1$ for certain values of $(\bs p_0,\bs p_0')$ if the two sources are \textit{a priori} known to be close to each other relative to $\lambda_0$, $1/[1-\bar\kappa(\bs p_0,\bs p_0')]\gg 1$ is possible, and the expectation over $(\bs p_0,\bs p_0')$ will then be dominated by those large values. In other words, the resolution degradation effect derived for coherent sources must still exist for partially coherent sources if their radiated fields may have significant overlap. An alternative Bayesian QCRB that includes $(\bs p_0,\bs p_0')$ as part of $X$ can again be computed, but at some stage it involves taking the expectation of $\bar J_{xx'}$ with respect to $(\bs p_0,\bs p_0')$ before the inverse is taken. For incoherent sources, this can reduce the off-diagonal components significantly; the resulting bound, while still valid, would be less tight and no longer demonstrate the resolution degradation effect. \section{Single-photon source} \label{sec_single1} Consider now an initially excited two-level atom in free space. 
A detailed analysis of atom-photon interaction is formidable \cite{mandel,atom-photon}, but when the initial optical state is vacuum, spontaneous emission can be treated more easily, as the atom must decay to ground state in the long-time limit and the final optical state must contain exactly one photon. Using the continuous Fock space \cite{mandel}, the final optical state in the Schr\"odinger picture can be written with respect to the vacuum $\ket{0}$ as \begin{align} \ket{\Psi} &= c^\dagger\ket{0}, & c^\dagger &\equiv \sum_s \int d^3k \phi(\bs k,s)a^\dagger(\bs k,s), \end{align} where \begin{align} \phi(\bs k,s) &= \Avg{\bs k,s|\Psi} = \bra{0}a(\bs k,s)\ket{\Psi} \end{align} is the one-photon configuration-space amplitude. Following Chap.~III.C of Ref.~\cite{atom-photon}, it can be expressed as \begin{align} \phi(\bs k,s) &= \frac{\bra{\bs k,s}\otimes \bra{g}H_I\ket{e}\otimes\ket{0}} {\hbar[(\omega-\tilde{\omega}_0)+i/(2T_1)]}e^{-i\omega T}, \\ H_I &= i\omega_0\bk{\bs\mu_{12}\sigma-\bs\mu_{12}^*\sigma^\dagger} \cdot \bs A(\bs r), \\ \bs A(\bs r) &= \sum_s \int d^3k\sqrt{\frac{\hbar}{2(2\pi)^3\omega\epsilon_0}} \Bk{a(\bs k,s)\bs\varepsilon(\bs k,s) e^{i\bs k\cdot \bs r}+\textrm{h.c.}}, \\ T_1 &= \frac{3\pi\hbar\epsilon_0 c^3}{|\bs\mu_{12}|^2\omega_0^3}, \end{align} where $\ket{e}$ and $\ket{g}$ are the excited and ground atomic states, $\omega_0$ is the atomic resonance frequency, $\tilde\omega_0$ is the Lamb-shifted atomic frequency, $T_1$ is the decay time, $\bs\mu_{12}$ is the off-diagonal atomic dipole moment, and $\sigma\equiv \ket{g}\bra{e}$ is the atomic lowering operator. The result is \begin{align} \phi(\bs k,s) &= \frac{1}{i(\omega-\tilde\omega_0)+1/(2T_1)} \sqrt{\frac{\omega_0^2}{2(2\pi)^3\hbar\omega\epsilon_0}} \bs\mu_{12}\cdot\bs\varepsilon^*(\bs k,s) \nonumber\\&\quad\times e^{-i\bs k\cdot\bs r-i\omega T}. 
\end{align} Consider the fidelity \begin{align} F &= \abs{\bra{0}c(X)c^\dagger(X+\delta X)\ket{0}}^2 \\ &= \abs{\Bk{c(X),c^\dagger(X+\delta X)}}^2 \\ &\approx 1+\sum_{\mu,\nu}\delta X_\mu\delta X_\nu \left\{\operatorname{Re} \Bk{c,\parti{^2c^\dagger}{X_\mu\partial X_\nu}} \right. \nonumber\\&\quad \left.+\operatorname{Im} \Bk{c,\parti{c^\dagger}{X_\mu}}\operatorname{Im} \Bk{c,\parti{c^\dagger}{X_\nu}}\right\}, \end{align} where the fact \begin{align} \operatorname{Re} \bra{0}c \parti{c^\dagger}{X_\mu}\ket{0} &= \operatorname{Re} \Bk{c,\parti{c^\dagger}{X_\mu}} = 0 \end{align} due to $\bra{0}c(X+\delta X)c^\dagger(X+\delta X)\ket{0} = \bra{0}c(X)c^\dagger(X)\ket{0} = 1$ has been used. It can further be shown that \begin{align} \Bk{c,\parti{c^\dagger}{X_\mu}} &= \sum_s \int d^3k \phi^*(\bs k,s)\parti{\phi(\bs k,s)}{X_\mu} = 0, \end{align} leading to a QFI given by \begin{align} J_{\mu\nu} &= -4\operatorname{Re} \Bk{c,\parti{^2c^\dagger}{X_\mu\partial X_\nu}} \\ &= -4\operatorname{Re} \sum_s \int d^3k \phi^*(\bs k,s)\parti{^2\phi(\bs k,s)}{X_\mu\partial X_\nu} \\ &= 4\operatorname{Re}\sum_s \int d^3k \frac{k_\mu k_\nu}{(\omega-\tilde\omega_0)^2+1/(4T_1^2)} \frac{\omega_0^2}{2(2\pi)^3\hbar\omega\epsilon_0} \nonumber\\&\quad\times \abs{\bs\mu_{12}\cdot\bs\varepsilon^*(\bs k,s)}^2. \end{align} Assuming that the decay time is much longer than the optical period ($T_1 \gg 2\pi/\tilde\omega_0$) and the Lamb shift is much smaller than the optical frequency $\omega_0$ ($\tilde\omega_0 \approx \omega_0$), the QFI becomes \begin{align} J_{\mu\nu} &\approx 4\operatorname{Re}\sum_s \int d^3k k_\mu k_\nu 2\pi T_1\delta(\omega-\omega_0) \frac{\omega_0^2}{2(2\pi)^3\hbar\omega\epsilon_0} \nonumber\\&\quad\times \abs{\bs\mu_{12}\cdot\bs\varepsilon^*(\bs k,s)}^2.
\end{align} This turns out to be identical to the QFI derived in Sec.~\ref{sec_onesource} for an $N=1$ classical source: \begin{align} J_{\mu\nu} &=\frac{4\delta_{\mu\nu}}{N W_\mu^2} \sim \frac{\delta_{\mu\nu}}{\lambda_0^2}, & \Sigma_{\mu\mu} &\ge J_{\mu\mu}^{-1} = \frac{NW_\mu^2}{4} \sim \lambda_0^2, \end{align} where $N$ is defined in Eq.~(\ref{N}) and $W_\mu$ is defined in Eq.~(\ref{Wmu}) and given by Eqs.~(\ref{Wxyz1}) or Eqs.~(\ref{Wxyz2}), such that $NW_\mu^2$ is on the order of $\lambda_0^2$. This result shows that a single-photon source offers no fundamental advantage over a classical source that emits one photon on average. Superresolution beyond the classical Abbe-Rayleigh limit can still be obtained, however, if the experiment can be repeated. The QFI is then multiplied by the number of trials $M$, which is also the total number of emitted photons, and the resulting QCRB is identical to that for a classical source with $M$ replacing $N$. The experiments reported by Refs.~\cite{schwartz13,cui13,monticone} certainly involved a large number of measurements of many photons in total, which can explain the apparent superresolution, but it remains to be seen whether their methods are accurate or efficient in estimating object parameters. For two atoms with large separation ($|\bs r-\bs r'|\gg \lambda_0$), the one-atom analysis is expected to be applicable to each atom independently. The analysis of two close atoms is much more challenging because of atomic cooperative effects such as the Dicke superradiance \cite{mandel} and the F\"orster resonance energy transfer \cite{moerner}. Beyond the current assumption of spontaneous emission, it will also be interesting, though highly nontrivial, to analyze the interaction between two-level atoms and other states of light, such as coherent states or squeezed states, and investigate their quantum localization limits and the possibility of quantum enhancement. 
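Passing from the Lorentzian line shape to $2\pi T_1\delta(\omega-\omega_0)$ in the narrowband limit above rests on the normalization $\int_{-\infty}^{\infty}d\omega\,[(\omega-\tilde\omega_0)^2+1/(4T_1^2)]^{-1}=2\pi T_1$. As a quick numerical sanity check (not part of the original analysis; pure-Python quadrature with arbitrarily chosen widths):

```python
import math

def lorentzian_area(T1, span=2000.0, steps=400_000):
    """Trapezoidal quadrature of 1/[(w - w0)^2 + 1/(4 T1^2)] for
    w - w0 in [-span, span]; the exact integral over the whole real
    line is 2*pi*T1 (Cauchy integral with half-width 1/(2 T1))."""
    gamma2 = 1.0 / (4.0 * T1 * T1)
    h = 2.0 * span / steps
    total = 0.0
    for k in range(steps + 1):
        x = -span + k * h
        weight = 0.5 if k in (0, steps) else 1.0
        total += weight / (x * x + gamma2)
    return total * h

# the numerical area matches 2*pi*T1 to ~0.1% for several decay times
for T1 in (0.5, 1.0, 2.0):
    assert abs(lorentzian_area(T1) / (2.0 * math.pi * T1) - 1.0) < 1e-3
```

The residual discrepancy is the truncated tail, of order $2/\mathrm{span}$, consistent with the delta-function replacement being exact only in the $T_1\omega_0\to\infty$ limit.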
\section{Conclusion} I have derived quantum limits to point-source localization using quantum estimation theory and the quantum theory of electromagnetic fields. These results not only provide general no-go theorems concerning the microscope resolution, they should also motivate further progress in microscopy through classical or quantum techniques beyond the current assumptions. For example, the presented theory may be applied to other more exotic quantum states of light interacting with quantum sources, such as multilevel atoms \cite{mandel,scully}, quantum dots \cite{michalet,schwartz13}, diamond defects \cite{hall_diamond,cui13,monticone}, and second-harmonic nanoparticles \cite{pu08,hsieh09,dempsey}. It is also possible to generalize the current formalism for open quantum systems \cite{escher,demkowicz,tsang_open,knysh14} to account for mixed states, decoherence, optical losses, background noises, and imperfect measurement efficiency. The phenomenon of fluorescence intermittency or blinking represents another interesting challenge to the statistical analysis, as a fluorescent source can turn itself on and off randomly and the localization of two blinking sources offers another route to superresolution \cite{lidke}. If done with care, the application of quantum information science to microscopy is destined to yield sound insights and opportunities in both fields. \section*{Funding information} Singapore National Research Foundation (NRF-NRFF2011-07).
\section{Introduction} The Grothendieck polynomials $\mathfrak{G}_w$ of Lascoux and Sch\"utzenberger \cite{LS:Groth} are explicit polynomial representatives of the $K$-classes of structure sheaves of Schubert varieties in flag varieties. Reiner and Yong \cite{ReY} conjectured an explicit combinatorial expansion of Grothendieck polynomials into the basis of Lascoux polynomials $\mathfrak{L}_\alpha$ \cite{Las}. Our first main theorem (Theorem \ref{T:first main theorem}) gives a new\footnote{meaning, ``not stated explicitly in the literature". See Remark \ref{R:Lascoux}.} combinatorial formula for the Lascoux polynomials. This is used to prove our second main theorem (Theorem \ref{T:second main theorem}) which establishes the Reiner-Yong conjecture. \subsection{Various expansions} The Grothendieck-to-Lascoux expansion fits into a family of related expansions. The polynomials to be expanded are the cohomological and K-theoretic Schubert bases given by the Schubert polynomials $\mathfrak{S}_w$ and the Grothendieck polynomials $\mathfrak{G}_w$ respectively, and their symmetrized or stable versions, known as the Stanley symmetric functions $F_w$ and Grothendieck symmetric functions (also known as stable Grothendieck polynomials) $G_w$. These are respectively expanded into type A Demazure characters (also called key polynomials) $\kappa_\alpha$, Lascoux polynomials $\mathfrak{L}_\alpha$, Schur functions $s_\lambda$, and Grassmannian Grothendieck symmetric functions $G_\lambda$. 
\[ \begin{matrix} \begin{tikzcd} \mathfrak{S}_w \arrow[rr,"\mathrm{symmetrize}"] \arrow[d,"\mathrm{expand}"] \arrow[d,swap,"(a)"] && F_w \arrow[d,swap,"\mathrm{expand}"] \arrow[d,"(b)"] \\ \kappa_\alpha\arrow[rr,swap,"\mathrm{symmetrize}"] && s_\lambda \end{tikzcd} \\ \\ \text{cohomology} \end{matrix} \qquad \begin{matrix} \begin{tikzcd} \mathfrak{G}_w \arrow[rr,"\mathrm{symmetrize}"] \arrow[d,"\mathrm{expand}"]\arrow[d,swap,"(c)"] && G_w \arrow[d,swap,"\mathrm{expand}"] \arrow[d,"(d)"]\\ \mathfrak{L}_\alpha\arrow[rr,swap,"\mathrm{symmetrize}"] && G_\lambda \end{tikzcd} \\ \\ \text{$K$-theory} \end{matrix} \] Using the formalism of connective $K$-theory (equivalently, introducing a harmless grading parameter $\beta$ into the Grothendieck polynomial), as we do in this article, all expansions specialize to their $K$-theoretic or cohomological counterparts by setting $\beta$ to $-1$ or $0$ respectively. In chronological order, expansion (b) was established by Edelman and Greene \cite{EG} via a Schensted-type insertion algorithm for reduced words. The expansion (a) was found by Lascoux and Sch\"utzenberger and proved in \cite{RS}. Expansion (d) was established by Buch, Kresch, Shimozono, Tamvakis, and Yong \cite{BKSTY} via Hecke insertion, which takes Hecke words as input. Expansion (c) is the topic of this article. The expansion coefficients have geometric significance. The Stanley-to-Schur coefficients of the expansion (b) coincide with large rank affine Stanley to affine Schur coefficients \cite[Prop. 9.17]{LLS}, which in turn coincide with Gromov-Witten invariants for the flag variety via Peterson's Quantum Equals Affine Theorem \cite{Pet} \cite{LS:QHGr} \cite[Part 3, \S 10]{LLMSSZ}. 
Specializing $w$ to a Zelevinsky permutation, (a) and (b) give the expansion of cohomology classes of equioriented type A quiver loci \cite[Theorem 7.14]{KMS}, the latter being shown by Buch and Fulton \cite{BF} to specialize to virtually all known variants of type A Schubert polynomials. Expansions (c) and (d) give analogous expansions in K-theory \cite{B} \cite{Mi}. The nonsymmetric expansions are subtle refinements of their symmetric counterparts. In the symmetric expansions there is a set of tableaux in which each tableau $T$ in the set gives a copy of $s_\lambda$ or $G_\lambda$ where $\lambda$ is the shape of $T$. There is a corresponding term $\kappa_\alpha$ or $\mathfrak{L}_\alpha$ in the nonsymmetric expansion, but an additional datum must be supplied: a composition or extremal weight $\alpha$ in the symmetric group orbit of $\lambda$; see \eqref{E:L sym} through \eqref{E:S sym}. Such constructions assigning a composition to a tableau go by the general name of \emph{key}. In the crystal graph of semistandard Young tableaux of shape $\lambda$, the left and right keys of the tableau $T$ of shape $\lambda$ are given by the final and initial directions of the corresponding Littelmann path whose highest weight vector is the directed line segment from the origin to $\lambda$. The initial direction indicates the smallest Demazure crystal containing the given tableau. \subsection{Grothendieck and Lascoux polynomials} The group $S_+=\bigcup_{n\ge1} S_n$ acts on $R=\mathbb{Z}[\beta][x_1,x_2,\dotsc]$ by permuting the variables: for $i\ge1$ let $s_i$ exchange $x_i$ and $x_{i+1}$. We define the following operators on $R$, where an element $f\in R$ (or its fraction field) denotes the operator of left multiplication by $f$. \begin{align} \partial_i &= (x_i-x_{i+1})^{-1}(1 - s_i) \\ \pi_i &= \partial_i x_i \\ \partial^{(\beta)}_i &= \partial_i (1+\beta x_{i+1}) \\ \pi^{(\beta)}_i &= \partial^{(\beta)}_i x_i. \end{align} All satisfy the braid relations for $S_+$.
We have the operator identity \begin{equation}\label{E:partial product rule} \partial_i f = \partial_i(f) + s_i(f) \partial_i\qquad\text{for all $f\in R$.} \end{equation} The operators satisfy the quadratic relations \begin{align} \partial_i^2 &= 0 \\ \pi_i^2 &= \pi_i \\ (\partial^{(\beta)}_i)^2 &= - \beta \partial^{(\beta)}_i \\ (\pi^{(\beta)}_i)^2 &= \pi^{(\beta)}_i. \end{align} Let $w_0^{(n)}\in S_n$ be the long element and $\rho^{(n)}=(n-1,n-2,\dotsc,1,0)$. For $w\in S_n$ the $\beta$-Grothendieck polynomial is defined by \cite{LS:Groth} \begin{align} \fG^{(\beta)}_w &= \begin{cases} x^{\rho^{(n)}} &\text{if $w=w_0^{(n)}$} \\ \partial^{(\beta)}_i \fG^{(\beta)}_{ws_i} &\text{if $ws_i>w$.} \end{cases} \end{align} Since the $\partial^{(\beta)}_i$ satisfy the braid relations, $\fG^{(\beta)}_w$ is well-defined for $w\in S_n$. It is also well-defined for $w\in S_+$, that is, unchanged under the standard embedding $S_n\to S_{n+1}$ for all $n\ge1$. The Schubert $\mathfrak{S}_w$ and Grothendieck polynomials $\mathfrak{G}_w$ are defined by \begin{align} \mathfrak{S}_w &= \fG^{(\beta)}_w|_{\beta=0}\\ \mathfrak{G}_w &= \fG^{(\beta)}_w|_{\beta=-1}. \end{align} Let $\alpha=(\alpha_1,\alpha_2,\dotsc)$ be a composition (sequence of nonnegative integers, almost all $0$). The Lascoux polynomial $\fL^{(\beta)}_\alpha$ is defined by \cite{Las} \begin{align} \fL^{(\beta)}_\alpha = \begin{cases} x^\alpha & \text{if $\alpha$ is a partition} \\ \pi^{(\beta)}_i \fL^{(\beta)}_{s_i\alpha} &\text{if $\alpha_i<\alpha_{i+1}$.} \end{cases} \end{align} The Demazure character $\kappa_\alpha$ is defined by \begin{align} \kappa_\alpha = \fL^{(\beta)}_{\alpha}|_{\beta=0}. \end{align} Given a composition $\alpha=(\alpha_1,\dotsc,\alpha_n)\in\mathbb{Z}_{\ge0}^n$ let $\alpha^+$ be the unique partition in the $S_n$-orbit of $\alpha$.
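Both recursions are directly computable. The following pure-Python sketch (an illustration, not the paper's code) realizes $\partial^{(\beta)}_i$ and $\pi^{(\beta)}_i$ on $\mathbb{Z}[\beta][x_1,x_2,x_3]$, encoding a monomial by the exponent tuple of $(\beta,x_1,x_2,x_3)$ and applying $\partial_i$ term by term via the telescoping identity $(x^ay^d-x^dy^a)/(x-y)=\sum_{j=0}^{a-d-1}x^{d+j}y^{a-1-j}$ for $a>d$; it then evaluates $\fG^{(\beta)}_w$ for $w\in S_3$ and the Lascoux polynomial $\fL^{(\beta)}_{(0,2)}=x_1^2+x_1x_2+x_2^2+\beta(x_1^2x_2+x_1x_2^2)$:

```python
def padd(f, g):
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, 0) + c
        if h[m] == 0:
            del h[m]
    return h

def shift(f, db, dx):
    # multiply f by beta^db * x1^dx[0] * x2^dx[1] * x3^dx[2]
    return {(m[0] + db, m[1] + dx[0], m[2] + dx[1], m[3] + dx[2]): c
            for m, c in f.items()}

def divdiff(f, i):
    # partial_i f = (f - s_i f)/(x_i - x_{i+1}), monomial by monomial
    h = {}
    for m, c in f.items():
        a, d = m[i], m[i + 1]
        if a == d:
            continue
        sgn = 1 if a > d else -1
        lo, hi = min(a, d), max(a, d)
        for j in range(hi - lo):
            m2 = list(m)
            m2[i], m2[i + 1] = lo + j, hi - 1 - j
            k = tuple(m2)
            h[k] = h.get(k, 0) + sgn * c
    return {k: c for k, c in h.items() if c != 0}

def divdiff_beta(f, i):
    # partial_i^{(beta)} = partial_i (1 + beta x_{i+1})
    dx = [0, 0, 0]
    dx[i] = 1
    return divdiff(padd(f, shift(f, 1, dx)), i)

def pi_beta(f, i):
    # pi_i^{(beta)} = partial_i^{(beta)} x_i
    dx = [0, 0, 0]
    dx[i - 1] = 1
    return divdiff_beta(shift(f, 0, dx), i)

def grothendieck(w):
    # beta-Grothendieck polynomial for w in S_3, descending from w_0 = 321;
    # at an ascent w(i) < w(i+1) we have w s_i > w and apply partial_i^{(beta)}
    if w == (3, 2, 1):
        return {(0, 2, 1, 0): 1}  # x1^2 x2
    i = next(i for i in (1, 2) if w[i - 1] < w[i])
    ws = list(w)
    ws[i - 1], ws[i] = ws[i], ws[i - 1]
    return divdiff_beta(grothendieck(tuple(ws)), i)

# G_{213} = x1 and G_{132} = x1 + x2 + beta x1 x2:
assert grothendieck((2, 1, 3)) == {(0, 1, 0, 0): 1}
assert grothendieck((1, 3, 2)) == {(0, 1, 0, 0): 1, (0, 0, 1, 0): 1,
                                   (1, 1, 1, 0): 1}
# Lascoux polynomial L_{(0,2)} = pi_1^{(beta)} x1^2:
assert pi_beta({(0, 2, 0, 0): 1}, 1) == {
    (0, 2, 0, 0): 1, (0, 1, 1, 0): 1, (0, 0, 2, 0): 1,
    (1, 2, 1, 0): 1, (1, 1, 2, 0): 1}
```

The monomial-wise divided difference works because $f - s_if$ is always divisible by $x_i-x_{i+1}$; the same machinery verifies the quadratic relation $(\partial^{(\beta)}_i)^2=-\beta\partial^{(\beta)}_i$ and the braid relation on examples.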
For $w\in S_n$ and $w_0\in S_n$ the long element we have the symmetrizations \begin{align} \label{E:L sym} \pi^{(\beta)}_{w_0}(\mathfrak{L}_\alpha) &= G^{(\beta)}_{\alpha^+}(x_1,\dotsc,x_n) \\ \label{E:kappa sym} \pi_{w_0}(\kappa_\alpha) &= s_{\alpha^+}(x_1,\dotsc,x_n) \\ \label{E:G sym} \pi^{(\beta)}_{w_0}(\mathfrak{G}_w(x)) &= G_w(x_1,\dotsc,x_n) \\ \label{E:S sym} \pi_{w_0}(\mathfrak{S}_w(x)) &= F_w(x_1,\dotsc,x_n). \end{align} \subsection{New tableau formula for Lascoux polynomials} \label{SS:new Lascoux formula} Given the definition of a certain kind of tableau which involves entries in a totally ordered set, we say ``reverse" to mean the same definition but with the total order reversed. So a \emph{reverse semistandard Young tableau} (RSSYT) is a tableau in which the entries weakly decrease along rows from left to right and strictly decrease along columns from top to bottom. For a partition $\lambda$ a \emph{reverse set-valued tableau} (RSVT) $T$ of shape $\lambda$ is a filling of the boxes of $\lambda$ by finite subsets of $\mathbb{Z}_{>0}$ satisfying the following. For the box $s\in \lambda$ let $T(s)$ be the set which occupies the box $s$ in $T$. \begin{enumerate} \item $\min(T(s)) \ge \max(T(t))$ if the box $t$ is immediately right of the box $s$ in $\lambda$. \item $\min(T(s))>\max(T(t))$ if the box $t$ is immediately below the box $s$ in $\lambda$. \end{enumerate} This is the reverse of Buch's set-valued tableaux \cite{B:Gr}. Given a RSVT $T$, let $L(T)$ be the RSSYT obtained from $T$ by replacing every entry $T(s)$ by its largest value $\max(T(s))$. The \emph{weight} $\mathrm{wt}(T)$ of a tableau $T$ is the composition whose $i$-th part is the total number of times $i$ appears in $T$. The \emph{left key} $K_-(T)$ of a RSSYT $T$ is a composition computed by the algorithm in \S \ref{SS:families and left keys}. 
Up to an order-reversal bijection this is equivalent to the right key of a SSYT defined by Lascoux and Sch\"utzenberger \cite{LS:keys}; our algorithm is a variant of that of Willis \cite{Willis}. Let $|\alpha|=\sum_{i\ge1} \alpha_i$. Let $\mathrm{RSVT}_\lambda$ be the set of reverse set-valued tableaux of shape $\lambda$. For $T\in\mathrm{RSVT}_\lambda$ let $\mathrm{ex}(T)=|\mathrm{wt}(T)|-|\lambda|$. Our first main theorem is: \begin{thm} \label{T:first main theorem} For any composition $\alpha$ \begin{align}\label{E:Lascoux RSVT} \fL^{(\beta)}_\alpha = \sum_{\substack{T\in\mathrm{RSVT}_{\alpha^+} \\ K_-(L(T))\le \alpha}} \beta^{\mathrm{ex}(T)} x^{\mathrm{wt}(T)} \end{align} Here $\le$ indicates the quotient of Bruhat order on the orbit $S_+\alpha$. \end{thm} Another way to state the $\le$ relation on compositions is the following. A \emph{key tableau} (or just key) is a SSYT\footnote{Sometimes we will write a key tableau as a RSSYT depending on context.} of partition shape such that the $j$-th column, viewed as a set, contains the $(j+1)$-th for all $j$. There is a bijection $\alpha\mapsto \mathrm{key}(\alpha)$ from compositions to key tableaux where $\mathrm{key}(\alpha)$ is the unique SSYT of shape $\alpha^+$ and weight $\alpha$. Its $j$-th column consists of the numbers $\{i\mid \alpha_i \ge j\}$ for all $j$. Then for compositions $\alpha$ and $\beta$, $\alpha\le\beta$ if and only if $\alpha^+=\beta^+$ and $\mathrm{key}(\alpha)$ is entrywise less than or equal to $\mathrm{key}(\beta)$. Theorem \ref{T:first main theorem} is proved in \S \ref{S:formula for Lascoux polynomials}. \begin{rem} \label{R:Lascoux} There have been a number of conjectural combinatorial formulas for Lascoux polynomials, such as the K-Kohnert move rule of Ross and Yong \cite{RoY} (\cite[Footnote on p. 
19]{Kir} for the general $\beta$ version), the set-valued skyline filling formula of Monical \cite{Mon}, and a set-valued tableau (SVT) rule of Pechenik and Scrimshaw \cite{PS} which requires the fairly involved Lusztig involution on a crystal structure on SVTs in addition to an entrywise minimum and the usual right key of a SSYT. Buciumas, Scrimshaw and Weber \cite{BSW} proved the last two of these rules using solvable lattice models. In response to a previous version of this article, Travis Scrimshaw kindly informed us that Theorems \ref{T:first main theorem} and \ref{T:first main theorem restated} are implicit in \cite{BSW}: see the proof of \cite[Theorem 4.4]{BSW}. We feel it is worthwhile to state these theorems in their simplest and most explicit form. We note that the naive nonreversed SVT analogue of the RSVT formula does not yield the Lascoux polynomial. \end{rem} Equation \eqref{E:Lascoux RSVT} can be restated using only RSSYTs, avoiding SVT altogether. Let $\mathrm{RSSYT}_\lambda$ be the set of reverse semistandard tableaux of shape $\lambda$. \begin{thm} \label{T:first main theorem restated} For any composition $\alpha$ \begin{align} \fL^{(\beta)}_\alpha = \sum_{\substack{T\in\mathrm{RSSYT}_{\alpha^+} \\ K_-(T)\le \alpha}} x^{\mathrm{wt}(T)} \prod_{(s,k)} (1+\beta x_k) \end{align} where the product runs over pairs $(s,k)$ where $s$ is a box of $\alpha^+$, $k<T(s)$, and replacing the $s$-th entry of $T$ by $k$ results in a RSSYT. \end{thm} \begin{rem} The naive nonreversed SSYT analogue of Theorem \ref{T:first main theorem restated} also does not give the Lascoux polynomial. \end{rem} \begin{rem} The condition on the pairs $(s,k)$ should be compared with the formula for Grothendieck polynomials indexed by vexillary permutations in part 3 of the second Corollary in section 1.2 of \cite{KMY}. 
\end{rem} \subsection{Stable limit of Lascoux polynomials} For a fixed composition $\alpha$ consider the limit $\lim_{N\to\infty} \fL^{(\beta)}_{(0^N,\alpha)}$ in which more zero parts are prepended to $\alpha$. In Theorem \ref{T:first main theorem} it is evident that the limit depends only on $\lambda=\alpha^+$ and it is given by removing the left key condition. By the definition of Lascoux polynomial, this limit can be computed by $\pi^{(\beta)}_i$ operators on $x^\lambda$. One may show that $$\fL^{(\beta)}_{w_0^{(n)}\lambda} = G^{(\beta)}_{\lambda}(x_1,\dotsc,x_n)\qquad\text{for $\lambda=(\lambda_1,\dotsc,\lambda_n)$}$$ where $G^{(\beta)}_\lambda$ is the Grassmannian Grothendieck symmetric function, which will be defined in \S \ref{SS:Grass Groth}. We deduce that the above limit coincides with the symmetric series $G^{(\beta)}_\lambda$ on one hand and the generating function of RSVT of shape $\lambda$ on the other. This is the reversed version of Buch's SVT formula for $G^{(\beta)}_\lambda$ \cite{B:Gr}; see \eqref{E:Grass Groth tableau}. \subsection{Fomin-Kirillov monomial formula} \label{SS:FK} Our point of departure is the explicit monomial expansion of Grothendieck polynomials due to Fomin and Kirillov \cite{FK}. The \emph{$0$-Hecke monoid} $\mathcal{H}$ is the quotient of the free monoid of words on the alphabet $\mathbb{Z}_{>0}$ by the relations \begin{align} i (i+1) i &\equiv_H (i+1) i (i+1) \\ ii &\equiv_H i \\ ij &\equiv_H ji \qquad\text{for $|i-j| \ge 2$.} \end{align} $\mathcal{H}$ acts on $S_+$ by \begin{align*} i * w = \begin{cases} s_i w & \text{if $s_i w>w$} \\ w &\text{if $s_iw<w$.} \end{cases} \end{align*} Given a word $u\in \mathcal{H}$ define its \emph{associated permutation} by $u * \mathrm{id}\in S_+$. For $w\in S_+$ let $\mathcal{H}_w$ be the words in $\mathcal{H}$ with associated permutation $w$. The subsets $\mathcal{H}_w\subset \mathcal{H}$ are the $\equiv_H$-equivalence classes. 
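The $0$-Hecke action is easy to experiment with. Below is a minimal Python sketch (not from the text) acting on one-line permutations, with the defining relations checked on examples; the last check illustrates that reversing a word inverts the associated permutation (Lemma \ref{L:reverse Hecke word}):

```python
def hecke_act(i, w):
    # 0-Hecke action i * w: s_i w if that is longer, else w; left
    # multiplication by s_i swaps the values i and i+1 in one-line notation
    p, q = w.index(i), w.index(i + 1)
    if p < q:  # value i occurs before i+1, so s_i w > w
        w = list(w)
        w[p], w[q] = w[q], w[p]
        return tuple(w)
    return w

def assoc_perm(word, n):
    # associated permutation u * id = u_1 * (u_2 * (... * (u_k * id)))
    w = tuple(range(1, n + 1))
    for i in reversed(word):
        w = hecke_act(i, w)
    return w

def inverse(w):
    v = [0] * len(w)
    for pos, val in enumerate(w, start=1):
        v[val - 1] = pos
    return tuple(v)

# the defining relations of the 0-Hecke monoid, on examples:
assert assoc_perm((1, 2, 1), 3) == assoc_perm((2, 1, 2), 3) == (3, 2, 1)
assert assoc_perm((1, 1), 3) == assoc_perm((1,), 3)
assert assoc_perm((1, 3), 4) == assoc_perm((3, 1), 4)
# reversal inverts the associated permutation:
assert assoc_perm((2, 1), 3) == inverse(assoc_perm((1, 2), 3))
```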
\begin{lem} \label{L:reverse Hecke word} $u\in \mathcal{H}_w$ if and only if $\mathrm{rev}(u)\in\mathcal{H}_{w^{-1}}$. \end{lem} For $a\in\mathcal{H}_w$ let $\mathrm{ex}(a) = \text{length}(a) - \ell(w)$, the \emph{excess} of the length of $a$ above the minimum possible, the Coxeter length $\ell(w)$ of $w$. The following is merely the definition in \cite{BJS} but with both words reversed, which is better suited to our use of decreasing tableaux. \begin{defn} \label{D:compatible pair} \cite{BJS} A pair of words $(a,i)$ is \emph{compatible} if they satisfy \begin{enumerate} \item $a,i$ are words of positive numbers with the same length. \item $i$ is weakly decreasing \item $i_j = i_{j+1}$ implies $a_j < a_{j+1}$. \end{enumerate} A compatible pair $(a,i)$ is \emph{bounded} if \begin{align} \label{E:bounded} i_j \le a_j\qquad\text{for all $j$.} \end{align} \end{defn} Let $\mathcal{C}$ be the set of all compatible pairs, $\mathcal{C}^b$ those that are bounded, $\mathcal{C}_w$ the pairs $(a,i)\in \mathcal{C}$ such that $a\in \mathcal{H}_w$, and $\mathcal{C}_w^b = \mathcal{C}^b \cap \mathcal{C}_w$. The following monomial expansion of $\beta$-Grothendieck polynomials is due to Fomin and Kirillov \cite{FK}. \begin{align}\label{E:Groth compatible pairs} \fG^{(\beta)}_w = \sum_{(a,i)\in \mathcal{C}_{w^{-1}}^b} \beta^{\mathrm{ex}(a)} x^{\mathrm{wt}(i)} \end{align} When $\beta=0$ this is the Billey-Jockusch-Stanley formula for Schubert polynomials \cite{BJS}. For $w\in S_n$ and a positive integer $N$ let $1^N\times w$ be the permutation of $S_{n+N}$ obtained by adding $N$ fixed points before $w$. The $\beta$-Grothendieck symmetric function is defined by \begin{align} G^{(\beta)}_w = \lim_{N\to\infty} \mathfrak{G}_{1^N\times w} \end{align} It lives in a completion of the ring of symmetric functions over $\mathbb{Z}[\beta]$. The Stanley and Grothendieck symmetric functions are defined by \begin{align} F_w &= G^{(\beta)}_w|_{\beta=0} \\ G_w &= G^{(\beta)}_w|_{\beta=-1}. 
\end{align} It follows from \eqref{E:Groth compatible pairs} and the definitions that \begin{align}\label{E:GSF compatible pairs} G^{(\beta)}_w = \sum_{(a,i)\in \mathcal{C}_{w^{-1}}} \beta^{\mathrm{ex}(a)} x^{\mathrm{wt}(i)}. \end{align} Comparing with \eqref{E:Groth compatible pairs} just the boundedness condition has been dropped. \subsection{$G^{(\beta)}_w$ to $G^{(\beta)}_\lambda$ via Hecke insertion: restriction of compatible pairs according to $w$} \label{SS:Grass Groth} The code $c(w)$ of a permutation is the sequence $(c_1,c_2,\dotsc)$ such that $c_i = | \{j \mid \text{$1 \le j < w(i)$ and $w^{-1}(j)>i$} \}|$. For a partition $\lambda=(\lambda_1,\lambda_2,\dotsc,\lambda_k)$ the \emph{Grassmannian Grothendieck symmetric function} $G^{(\beta)}_\lambda$ is by definition equal to $G^{(\beta)}_w$ where $w$ is the permutation with code $(\lambda_k,\dotsc,\lambda_2,\lambda_1,0,0,\dotsc)$. Buch \cite{Bu} showed that the $\mathbb{Z}[\beta]$-span of the $G^{(\beta)}_w$ for $w\in S_+$, has basis given by the $G^{(\beta)}_\lambda$ and proved the increasing version of the following: \begin{align}\label{E:Grass Groth tableau} G^{(\beta)}_\lambda = \sum_{T\in\mathrm{RSVT}_\lambda} \beta^{\mathrm{ex}(T)} x^{\mathrm{wt}(T)} \end{align} For $\beta=0$ this becomes the RSSYT formula for the Schur function $s_\lambda$. To find the coefficients of the $G^{(\beta)}_w$ to $G^{(\beta)}_\lambda$ expansion, the \emph{Hecke insertion} algorithm was developed in \cite{BKSTY} in the language of \emph{increasing tableaux}, which strictly increase along rows from left to right and strictly increase along columns from top to bottom. In \S \ref{SS:Hecke insertion} we recall these definitions but use the variant of Hecke insertion for \emph{decreasing tableaux}, which are the ``reverse" of increasing: they strictly decrease along rows from left to right and strictly decrease along columns from top to bottom. 
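As a small brute-force sanity check of \eqref{E:Groth compatible pairs} and \eqref{E:GSF compatible pairs} (not part of the text; enumeration over short words with assumed cutoffs), one can take $w^{-1}=s_1$: the bounded pairs reproduce $\fG^{(\beta)}_{s_1}=x_1$, while dropping boundedness and truncating to the values $\{1,2\}$ gives $x_1+x_2+\beta x_1x_2=G^{(\beta)}_{(1)}(x_1,x_2)$, which also matches the single-box case of \eqref{E:Grass Groth tableau}:

```python
from itertools import product

def assoc_perm(word, n):
    # 0-Hecke product u * id: apply s_i on the left only when it lengthens
    w = list(range(1, n + 1))
    for i in reversed(word):
        p, q = w.index(i), w.index(i + 1)
        if p < q:
            w[p], w[q] = w[q], w[p]
    return tuple(w)

def compatible_seqs(a, maxval):
    # all i: same length as a, weakly decreasing, i_j = i_{j+1} => a_j < a_{j+1}
    good = []
    for i in product(range(1, maxval + 1), repeat=len(a)):
        if all(i[j] >= i[j + 1] for j in range(len(a) - 1)) and \
           all(a[j] < a[j + 1] for j in range(len(a) - 1) if i[j] == i[j + 1]):
            good.append(i)
    return good

def pairs_poly(winv, n, maxval, maxlen, bounded):
    # sum of beta^{ex(a)} x^{wt(i)} over (a, i) in C_{winv}, with i-values
    # <= maxval, as a dict {(beta exponent, e_1, ..., e_maxval): coefficient}
    ell = sum(1 for r in range(n) for s in range(r + 1, n) if winv[r] > winv[s])
    poly = {}
    for L in range(1, maxlen + 1):
        for a in product(range(1, n), repeat=L):
            if assoc_perm(a, n) != winv:
                continue
            for i in compatible_seqs(a, maxval):
                if bounded and any(i[j] > a[j] for j in range(L)):
                    continue
                wt = [0] * maxval
                for v in i:
                    wt[v - 1] += 1
                key = (L - ell, *wt)
                poly[key] = poly.get(key, 0) + 1
    return poly

def g_one_box(maxval):
    # RSVT of shape (1): a single box holding any nonempty subset of {1..maxval}
    poly = {}
    for S in product((0, 1), repeat=maxval):
        if any(S):
            poly[(sum(S) - 1, *S)] = 1
    return poly

s1 = (2, 1, 3)  # w^{-1} = s_1
assert pairs_poly(s1, 3, 2, 4, bounded=True) == {(0, 1, 0): 1}   # x1
G = pairs_poly(s1, 3, 2, 4, bounded=False)
assert G == {(0, 1, 0): 1, (0, 0, 1): 1, (1, 1, 1): 1} == g_one_box(2)
```

Here the length and value cutoffs (`maxlen`, `maxval`) are ad hoc but suffice: every Hecke word for $s_1$ uses only the letter $1$, and longer words admit no compatible $i$ with values at most $2$.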
It was not explicitly stated in \cite{BKSTY}, but all the ingredients are there to define a Hecke Robinson-Schensted-Knuth (RSK) bijection called Insert (and its inverse bijection RevInsert)
\tikzstyle{start}=[to path={(\tikztostart.#1) -- (\tikztotarget)}]
\[
\begin{tikzcd}[every arrow/.append style={shift left}]
\mathcal{C} \arrow[rr,"\textrm{Insert}"] && \bigsqcup_\lambda (\mathrm{Dec}_\lambda \times \mathrm{RSVT}_\lambda)=:\mathcal{T} \arrow[ll,"\textrm{RevInsert}"] \\
(a,i) \arrow[rr] && (P,Q) \arrow[ll]
\end{tikzcd}
\]
where $\mathrm{Dec}_\lambda$ is the set of decreasing tableaux of shape $\lambda$. Note that the set $\mathcal{T}$, as defined by the above diagram, consists of pairs $(P,Q)$ of tableaux of the same partition shape with $P$ decreasing and $Q$ reverse set-valued.

Let $(P,Q)=\mathrm{Insert}(a,i)$. By Proposition \ref{P:Hecke insertion Kequiv and weight} the bijection satisfies
\begin{align}
\label{E:insert K equiv} \mathrm{rev}(a) &\equiv_K P \\
\label{E:insert wt preserving} \mathrm{wt}(Q)&=\mathrm{wt}(i)
\end{align}
where the relation $\equiv_K$ is defined in \S \ref{SS:right key}. Note that $\equiv_K$ refines the relation $\equiv_H$ of \S \ref{SS:FK}. By Lemma \ref{L:reverse Hecke word} and \eqref{E:insert K equiv} the bijection $\mathrm{Insert}$ restricts to a bijection
\begin{align}\label{E:w bijection}
\mathcal{C}_{w^{-1}}\leftrightarrow \bigsqcup_\lambda \left(\mathrm{Dec}_\lambda^w \times \mathrm{RSVT}_\lambda\right) =: \mathcal{T}_w
\end{align}
where $\mathrm{Dec}_\lambda^w = \{ T\in \mathrm{Dec}_\lambda\mid \mathrm{word}(T)\in \mathcal{H}_w\}$ and $\mathrm{word}(T)$ is defined in \S \ref{SS:background}; see \cite{BKSTY} for the increasing tableau version. Taking the generating function of both sides we obtain
\begin{align}
G^{(\beta)}_w = \sum_\lambda |\mathrm{Dec}_\lambda^w| G^{(\beta)}_\lambda.
\end{align}

\subsection{$\fG^{(\beta)}_w$ to $\fL^{(\beta)}_\alpha$ by Hecke insertion and keys: restriction to bounded compatible pairs}
\label{SS:bounded bijection}

We give an algorithm to compute the right key $K_+(T)$ of a RSSYT $T$ in Definition \ref{D:right key of RSSYT}. It is essentially equivalent to that of Willis \cite{Willis} in the context of usual RSK and Knuth equivalence. However we use it for a new purpose, applying it to decreasing tableaux in the context of Hecke insertion and the K-Knuth equivalence $\equiv_K$. The \emph{K-jeu-de-taquin} (Kjdt) of Thomas and Yong \cite{TY} may be used (following \cite{ReY} for increasing tableaux) to give another definition of the right key of a decreasing tableau, and the two definitions are shown to coincide (Proposition \ref{P:Kjdt right key}). For our amusement we give a third formulation of the right key of a decreasing tableau in \S \ref{S:alternative right key} using the ``transpose" of Hecke reverse insertion; in the context of semistandard tableaux and Knuth equivalence, this formulation is analogous to an original definition of Lascoux and Sch\"utzenberger \cite{LS:keys}.

Let $\mathcal{T}^b$ be the subset of pairs $(P,Q)\in \mathcal{T}$ such that $K_+(P) \ge K_-(L(Q))$. With $\mathcal{T}_w$ defined as in \eqref{E:w bijection} let $\mathcal{T}_w^b = \mathcal{T}^b \cap \mathcal{T}_w$. In \S \ref{S:compatible} we show the following (Theorem \ref{restricting_Insert}):

\begin{thm} \label{T:compatible bounded}
$\mathrm{Insert}$ restricts to a bijection $\mathcal{C}^b \cong \mathcal{T}^b$.
\end{thm}

Intersecting with the bijection \eqref{E:w bijection}, $\mathrm{Insert}$ restricts to a bijection
\begin{align}\label{E:w bounded bijection}
\mathcal{C}^b_{w^{-1}}\cong \mathcal{T}^b_w \qquad\text{for every $w\in S_+$.}
\end{align}
Using Theorem \ref{T:first main theorem} we obtain our second main theorem, the Grothendieck-to-Lascoux expansion via decreasing tableaux.
\begin{thm} \label{T:second main theorem} \begin{align}\label{E:Groth to Lascoux} \fG^{(\beta)}_w = \sum_\lambda \sum_{P\in \mathrm{Dec}_\lambda^w} \fL^{(\beta)}_{\mathrm{wt}(K_+(P))}. \end{align} \end{thm} \subsection{Connecting with the Reiner-Yong conjecture} The Reiner-Yong conjecture asserts: \begin{conj} \cite{ReY} \begin{align} \fG^{(\beta)}_w = \sum_\lambda \sum_{P\in \mathrm{Inc}_\lambda^{w^{-1}}} \fL^{(\beta)}_{\mathrm{wt}(K_-(P))} \end{align} where $\mathrm{Inc}_\lambda^w$ is similar to $\mathrm{Dec}_\lambda^w$ except that the tableaux are increasing and $K_-(P)$ is the left key construction on the increasing tableau $P$ using the Kjdt. \end{conj} \begin{proof} By Propositions \ref{P:decreasing to increasing} and \ref{reverse_complement_fixes_keys} the map $T\mapsto T^\sharp$ defines a bijection $\mathrm{Dec}_\lambda\to\mathrm{Inc}_\lambda$ such that $K_+(T)=K_-(T^\sharp)$ and $\mathrm{word}(T^\sharp)\equiv_K \mathrm{rev}(\mathrm{word}(T))$. Using Lemma \ref{L:reverse Hecke word} we see that $T\in \mathcal{H}_w$ if and only if $T^\sharp\in\mathcal{H}_{w^{-1}}$. Thus the bijection restricts to a bijection $\mathrm{Dec}_\lambda^w\cong \mathrm{Inc}_\lambda^{w^{-1}}$ as required. \end{proof} \subsection{Cohomological case} Say that a word is \emph{reduced} if it is a reduced word for some permutation in $S_+$. Say that a tableau is reduced if its word is. Let $\mathcal{C}^b(0)$ denote the set of compatible pairs $(a,i)$ such that $a$ is reduced. Let $\mathcal{T}^b(0)$ be the set of pairs $(P,Q)$ of tableaux of the same partition shape such that $P$ is a reduced decreasing tableau and $Q$ is a RSSYT with $K_+(P) \ge K_-(Q)$. By setting $\beta=0$ in Theorem \ref{T:compatible bounded} we obtain the following. 
\begin{thm} \label{T:compatible bounded reduced} \begin{enumerate} \item The decreasing analogue of Lascoux and Sch\"utzenberger's right nil key of a reduced decreasing tableau $P$ (see left nil key for reduced increasing tableaux in \cite{RS}) agrees with $K_+(P)$. \item The restriction of $\mathrm{Insert}$ gives a bijection $\mathcal{C}^b(0)\cong \mathcal{T}^b(0)$. In this case $\mathrm{Insert}$ is computed by Edelman-Greene column insertion into reduced decreasing tableaux, recorded by RSSYT. \end{enumerate} \end{thm} This recovers the Schubert to Demazure expansion \cite{LS:Schub} \cite{RS}. \begin{rem}\label{R:Schubert Demazure crystal} The Demazure crystal structure on SSYT was essentially known to Lascoux and Sch\"utzenberger \cite{LS:keys}. Theorem \ref{T:compatible bounded reduced} clarifies the Demazure crystal structure in the Schubert expansion: it is pulled back via Edelman-Greene column insertion from the Lascoux-Sch\"utzenberger Demazure crystal structure on the semistandard $Q$-tableau. See also \cite{AS}. \end{rem} \subsection{Plactic variant} \label{SS:plactic variant} Let $\tilde{\mathcal{C}}$ be the set of Knuth biwords, which are defined by changing the third condition of Definition \ref{D:compatible pair} to: $i_j=i_{j+1}$ implies $a_j \le a_{j+1}$. Let $\tilde{\mathcal{T}}$ be the set of pairs $(P,Q)$ of RSSYT of the same partition shape. Let $\tilde{\mathcal{C}}^b$ be the bounded compatible pairs in $\tilde{\mathcal{C}}$ and $\tilde{\mathcal{T}}^b$ the pairs $(P,Q)\in \tilde{\mathcal{T}}$ such that $K_+(P) \ge K_-(Q)$ using the classical definition of left and right key in \cite{LS:keys} but adapted to RSSYT. \begin{thm} The value-reversed version of column insertion RSK yields a bijection $\tilde{\mathcal{C}}^b \cong \tilde{\mathcal{T}}^b$. \end{thm} See Remark \ref{R:semistandard compatible bounded} for more details. This result applies to the representation theory of the general linear group over a non-archimedean local field \cite{GLS}. 
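The reducedness condition entering the cohomological case is easy to test via the $0$-Hecke (Demazure) product: right multiplication by $s_j$ swaps an ascent at position $j$ and is absorbed at a descent, so a word is reduced exactly when its excess vanishes. A sketch under these standard conventions (helper names are ours):

```python
def hecke_product(word):
    """One-line notation of the Hecke product of s_{a_1}, ..., s_{a_m}."""
    n = (max(word) if word else 0) + 1
    w = list(range(1, n + 2))        # identity permutation on enough letters
    for j in word:
        if w[j - 1] < w[j]:          # ascent at position j: multiply by s_j
            w[j - 1], w[j] = w[j], w[j - 1]
        # descent: the letter is absorbed and the word acquires excess
    return w

def coxeter_length(w):
    # number of inversions of the one-line notation
    return sum(1 for p in range(len(w)) for q in range(p + 1, len(w))
               if w[p] > w[q])

def is_reduced(word):
    # a Hecke word a lies in ex(a) = 0 exactly when no letter is absorbed
    return coxeter_length(hecke_product(word)) == len(word)
```

For instance the braid words $s_1s_2s_1$ and $s_2s_1s_2$ give the same Hecke product and are reduced, while $s_1s_1$ is not.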
\subsection{Flagged Grothendieck to Lascoux}

In this subsection we extend our expansion to flagged Grothendieck polynomials. In the literature there is a definition of flagged Grothendieck polynomial whose generality extends to the case of 321-avoiding permutations \cite{Mat}; see \cite{KMY2} for the case of vexillary permutations. For 321-avoiding permutations there is a monomial tableau formula and a determinantal formula. We use a divided difference definition of flagged Grothendieck polynomial from \cite{LLS:back stable Groth} which is valid for any permutation. This flagged Grothendieck polynomial has an explicit monomial expansion given in Proposition \ref{P:flagged G}. The main result of this subsection is a Lascoux polynomial expansion of flagged Grothendieck polynomials.

A \emph{flag} is a sequence of integers $f=(f_1,f_2,\dotsc,f_n)$ which is weakly increasing, satisfies $f_i \ge i$ for all $i$, and $f_n=n$. Let $f_{\mathrm{min}}=(1,2,\dotsc,n)$ and $f_{\mathrm{max}}=(n,n,\dotsc,n)$ be the minimum and maximum flags respectively. Given a flag $f$, define the permutation $\sigma_f\in S_n$ as follows. For the minimum flag $f_{\mathrm{min}}$ let $\sigma_{f_{\mathrm{min}}} = \mathrm{id}$. For $f\ne f_{\mathrm{min}}$ there is an index $j$ such that $f_j > j$; take the minimum such. Define $\sigma_f = s_i \sigma_{f'}$ where $i+1=f_j$ and $f'$ is obtained from $f$ by replacing the entry $f_j=i+1$ by $i$. The \emph{flagged Grothendieck polynomial} is defined by $\fG^{(\beta)}_{w,f}=\pi^{(\beta)}_{\sigma_f}(\fG^{(\beta)}_w)$. The flagged Grothendieck polynomials have the following explicit monomial expansion.

\begin{prop}\label{P:flagged G}\cite{LLS:back stable Groth}
\begin{align}\label{E:flagged G}
\fG^{(\beta)}_{w,f} &= \sum_{\substack{(a,i)\in \mathcal{C}_{w^{-1}} \\ i_k \le f_{a_k}}} \beta^{\mathrm{ex}(a)} x^{\mathrm{wt}(i)}
\end{align}
\end{prop}

\begin{rem} Note that only the bound $i_k \le a_k$ in \eqref{E:Groth compatible pairs} has been changed to $i_k \le f_{a_k}$.
\end{rem}

The flagged Grothendieck polynomials interpolate between Grothendieck polynomials and their symmetric counterparts.

\begin{cor} \label{C:symmetrize G}
For $w\in S_n$
\begin{align}\label{E:symmetrize G}
\pi^{(\beta)}_{w_0}(\fG^{(\beta)}_w) = G^{(\beta)}_w(x_1,\dotsc,x_n).
\end{align}
\end{cor}
\begin{proof}
We have
\begin{align*}
\pi^{(\beta)}_{w_0}(\fG^{(\beta)}_w) &= \pi^{(\beta)}_{w_0}(\fG^{(\beta)}_{w,f_{\mathrm{min}}}) \\
&= \fG^{(\beta)}_{w,f_{\mathrm{max}}} \\
&= G^{(\beta)}_w(x_1,\dotsc,x_n)
\end{align*}
where the first equality holds since $\sigma_{f_{\mathrm{min}}}=\mathrm{id}$, the second since $\sigma_{f_{\mathrm{max}}}=w_0$, and the last by the equality of \eqref{E:flagged G} with \eqref{E:GSF compatible pairs} with $x_i$ set to $0$ for $i>n$.
\end{proof}

Define the Demazure action $\circ$ of $S_+$ on compositions by
\begin{align}
s_i \circ \alpha = \begin{cases} s_i(\alpha) & \text{if $\alpha_i > \alpha_{i+1}$} \\ \alpha & \text{otherwise.} \end{cases}
\end{align}
Theorem \ref{T:second main theorem} implies the following.

\begin{cor}
\begin{align}
\fG^{(\beta)}_{w,f} = \sum_\lambda \sum_{P\in \mathrm{Dec}_\lambda^w} \fL^{(\beta)}_{\sigma_f \circ \mathrm{wt}(K_+(P))}.
\end{align}
\end{cor}

\subsection*{Acknowledgements}
The authors thank Alex Yong for helpful conversations and especially for sharing with us the details of his conjecture with Vic Reiner. M. S. thanks Tomoo Matsumura for help related to flagged Grothendieck polynomials. T. Y. thanks Brendon Rhoades for helpful conversations.

\section{Background}
\subsection{Partitions, tableaux, words}
\label{SS:background}

A skew shape is \emph{normal} (resp. \emph{antinormal}) if it is empty or has a unique northwestmost (resp. southeastmost) corner. Given a skew shape $D$ let $D^*$ be the skew shape obtained by 180 degree rotation of $D$. For a tableau $T$ let $\mathrm{word}(T)$ be the \emph{column-reading word} of $T$, obtained by reading the first column of $T$ from bottom to top, then the second column of $T$ from bottom to top, and so on. Thus the word of a column of an increasing (resp.
decreasing) tableau is a decreasing (resp. increasing) word. We shall often make no distinction between a column, its word, and the underlying set. Similarly we may write $T$ and mean $\mathrm{word}(T)$.

\subsection{Thomas and Yong's $K$-theoretic jeu-de-taquin (Kjdt)}
\subsubsection{Kjdt}

We will define the Kjdt using the language of increasing tableaux as it is more natural. But the same definitions apply to decreasing tableaux in the same way; one is just labeling the chains of partitions differently.

Say that the skew shapes $D$ and $E$ are \emph{successive} if there are partitions $\nu\subset\mu\subset\lambda$ such that $D=\mu/\nu$ and $E=\lambda/\mu$. In that case let $D+E=\lambda/\nu$. We also say that $E$ \emph{extends} $D$. A \emph{rook strip} is a skew shape that is both a \emph{horizontal strip} (has at most one box in each column) and a \emph{vertical strip} (has at most one box in each row). Say that a skew shape is \emph{thin} if it is the sum of two successive rook strips. Note that a skew shape is thin if and only if it has no $2\times 2$ subdiagram and has at most two boxes in any row and column. When decomposing a thin shape into a sum of successive rook strips the only choice is for isolated boxes, which can be in either the inner or outer rook strip.

Let $D$ and $E$ be successive rook strips. The \emph{switch} of $(D,E)$ is the unique successive pair of rook strips $(E',D')$ such that $E'+D'=D+E$, every isolated box of $D$ is in $D'$ and every isolated box of $E$ is in $E'$.

\begin{ex}\label{X:switch}
In the following the boxes of $D$ and $D'$ are filled with $\circ$ and those of $E$ and $E'$ are filled with $\bullet$.
\ytableausetup{aligntableaux=center} \begin{align*} D+E= \ytableaushort{\none\none\none\none\bullet,\none\none\none\circ,\none\none\circ\bullet,\none\circ\bullet,\circ} \leftrightarrow \ytableaushort{\none\none\none\none\bullet,\none\none\none\bullet,\none\none\bullet\circ,\none\bullet\circ,\circ}=E'+D' \end{align*} \end{ex} An increasing skew tableau $T$ of shape $D$ can be viewed as a sequence of successive rook strips $D_1,D_2,\dotsc,D_n$ with $D=D_1+D_2+\dotsm+D_n$: the boxes of $D_i$ are filled with $i$. Let $E$ be a rook strip which extends $D$. The \emph{forward Kjdt} $J^\searrow_E(T)$ is the increasing tableau defined by switching $E$ past all the $D_i$. More precisely let $E^{(n)} = E$ and define $\mathrm{switch}(D_i,E^{(i)}) = (E^{(i-1)},D'_i)$ for all $i$ going from $n$ down to $1$. Then $J^\searrow_E(T)$ is the increasing tableau defined by the successive rook strips $D_1',D_2',\dotsc,D_n'$. The \emph{vacated rook strip} $V^\searrow_E(T)$ is by definition $E^{(0)}$. \begin{ex} \label{X:JSE one strip} Here is an example of $J^\searrow_E(T)$ where the boxes of $E^{(i)}$ are filled with $\bullet$ and $D_i$ and $D_i'$ are filled with $i$'s. $E$ is filled with $\bullet$ in the first diagram. $T$ and $T'$ are the skew increasing tableaux in the first and last diagrams respectively, where one ignores the $\bullet$'s. \begin{align*} &\ytableaushort{\none\none12\bullet,\none12\bullet,23,3\bullet} \qquad 1 < 2 < 3 < \bullet \\ &\ytableaushort{\none\none12\bullet,\none12\bullet,2\bullet,\bullet3} \qquad 1 < 2 < \bullet < 3 \\ &\ytableaushort{\none\none1\bullet2,\none1\bullet2,\bullet2,23} \qquad 1 < \bullet < 2 < 3 \\ &\ytableaushort{\none\none\bullet12,\none\bullet12,\bullet2,23} \qquad \bullet < 1 < 2 < 3 \\ \end{align*} Note that we can think of all the intermediate ``tableaux" as being increasing with respect to the total orders indicated to the right. \end{ex} Suppose $T$ and $U$ are increasing tableaux whose shapes are successive. 
Let $T$ have rook strip decomposition $D_1+D_2+\dotsm+D_n$ and let $U$ be $E_1+E_2+\dotsm+E_m$. Then define $J^\searrow_U(T)$ to be the increasing tableau given by switching $T$ past $E_1$ using $J^\searrow_{E_1}$, then past $E_2$ using $J^\searrow_{E_2}$, and so on. Let $V^\searrow_U(T)$ be the sequence of vacated rook strips. Both are increasing tableaux.

\begin{ex} \label{X:JSE}
Using $T$ from Example \ref{X:JSE one strip} let $U$ be the increasing skew tableau filled with $a$'s and $b$'s. $T$ is first switched past the $a$'s as in Example \ref{X:JSE one strip} and then past the $b$'s.
\begin{align*}
&\ytableaushort{\none\none12a,\none12ab,23b,3a} \qquad 1 < 2 < 3 < a < b \\
&\ytableaushort{\none{\none}a12,{\none}a12b,a2b,23} \qquad a < 1 < 2 < 3 < b\\
&\ytableaushort{\none{\none}a12,{\none}a12b,a2b,23} \qquad a < 1 < 2 < b < 3\\
&\ytableaushort{\none{\none}a1b,{\none}a1b2,ab2,23} \qquad a < 1 < b < 2 < 3\\
&\ytableaushort{\none{\none}ab1,{\none}ab12,ab2,23} \qquad a < b < 1 < 2 < 3\\
\end{align*}
The final resting place of the numbers is $J^\searrow_U(T)$ and that of the letters is $V^\searrow_U(T)$.
\end{ex}

Instead of switching into rook strips that are to the outside of our skew increasing tableau, we can switch into rook strips lying on the inside. Given a rook strip $D$ and increasing tableau $U$ of a shape $E$ which extends $D$, let $J^\nwarrow_D(U)$ be the increasing skew tableau obtained by switching $D$ with the first rook strip of $U$, then switching with the second rook strip of $U$, and so on. Let $V^\nwarrow_D(U)$ be the rook strip vacated by this process. Here $U$ moves to the northwest. Similarly if $T$ and $U$ are skew increasing tableaux whose shapes are successive, we may define $J^\nwarrow_T(U)$ by switching $U$ to the northwest, past the last strip of $T$, then the next to last, and so on. Let $V^\nwarrow_T(U)$ be the increasing tableau defined by the sequence of rook strips vacated by this process.
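For a single switch, the basic step in all of the Kjdt operations above, one concrete reading of the definition is that an isolated box of the thin shape $D+E$ keeps its strip, while every other box trades strips with the neighbor it shares a row or column with. A sketch under this reading, with boxes encoded as (row, column) pairs (the encoding is ours, not the paper's):

```python
def switch(D, E):
    """Switch of a successive pair of rook strips (D, E) -> (E', D').

    Isolated boxes of the thin shape D + E keep their strip; every
    other box swaps strips.  D and E are sets of (row, col) pairs.
    """
    boxes = D | E

    def isolated(b):
        r, c = b
        # no other box of D + E in the same row or column
        return all(b2 == b or (b2[0] != r and b2[1] != c) for b2 in boxes)

    E_new = {b for b in E if isolated(b)} | {b for b in D if not isolated(b)}
    D_new = {b for b in D if isolated(b)} | {b for b in E if not isolated(b)}
    return E_new, D_new
```

Applied to the strips of Example \ref{X:switch} (rows numbered from the top), this reproduces the pictured pair $(E',D')$.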
It is equivalent to think of the inner tableau as a sequence of switching instructions to apply to the outer, as it is to think of the outer tableau as a sequence of switching instructions to apply to the inner. Both give the same result.

\begin{thm} \cite[Theorem 3.1]{TY}
Let $T$ and $U$ be increasing tableaux whose shapes are successive. Then $J^\searrow_U(T)=V^\nwarrow_T(U)$ and $V^\searrow_U(T)=J^\nwarrow_T(U)$.
\end{thm}

Let $T$ and $U$ be increasing tableaux whose shapes are successive. The \emph{infusion} of the pair $(T,U)$ is the pair $(V^\searrow_U(T),J^\searrow_U(T)) = (J^\nwarrow_T(U),V^\nwarrow_T(U))$.

\begin{rem}
\begin{itemize}
\item More generally it is known (see \cite{BSS} \cite{H} for jeu-de-taquin for semistandard tableaux) that infusion can be computed by any sequence of switches which ``shuffles" the alphabets of the inner and outer tableaux. In Example \ref{X:JSE} this means starting with $1<2<3<a<b$, ending with $a<b<1<2<3$ and always using a total order on these 5 values with $1<2<3$ and $a<b$.
\item Since a single switch is an involution it follows that infusion is involutive.
\end{itemize}
\end{rem}

\subsubsection{$K$-Pieri property of Kjdt}

A \emph{horizontal $K$-Pieri $t$-strip} (called a $t$-Pieri filling in \cite{TY}) is a sequence of $t$ nonempty successive rook strips $D_1,D_2,\dotsc,D_t$ such that $D_1+D_2+\dotsm+D_t$ is a horizontal strip and the boxes of $D_{i+1}$ are to the right of the boxes of $D_i$ for all $1\le i< t$. This can be depicted by an increasing tableau in which the boxes of $D_i$ are filled with $i$. A \emph{vertical $K$-Pieri strip} is the transpose analogue.

\begin{ex}\label{X:K Pieri strip}
A horizontal $K$-Pieri $4$-strip is pictured below as an increasing tableau.
\begin{align*}
\ytableaushort{ \none\none\none\none\none\none34,\none\none\none\none\none2,\none\none\none\none,\none\none\none2,\none\none2,12}
\end{align*}
\end{ex}

\begin{prop}\label{P:Kjdt Pieri} \cite{TY}
The Kjdt operations $J^\nwarrow_T$ and $J^\searrow_U$ send horizontal (resp. vertical) $K$-Pieri $t$-strips to horizontal (resp. vertical) $K$-Pieri $t$-strips.
\end{prop}

\subsubsection{Two special rectification orders}

In the infusion of the pair $(T,U)$, as computed by $J^\nwarrow_T(U)$ (resp. $J^\searrow_U(T)$), the tableau $T$ (resp. $U$) is called a \emph{rectification order}; the tableau $T$ (resp. $U$) is viewed as the sequence of instructions for the moving of $U$ (resp. $T$) to the northwest (resp. southeast). We require two special kinds of tableaux for this purpose. Given a partition $\lambda$ let $\mathrm{CSS}(\lambda)$ be the \emph{column superstandard tableau}, the tableau of shape $\lambda$ whose first column is filled with $1$ through $\lambda_1'$ from top to bottom, whose second column is filled with $\lambda_1'+1$ through $\lambda_1'+\lambda_2'$, and so on. For $\lambda'=(\lambda_1',\dotsc,\lambda_k')$ let $\mathrm{CR}(\lambda)$ be the \emph{column reading tableau}, the unique tableau of shape $\lambda$ built from the empty tableau by placing the numbers $1$ through $\lambda_k'$ at the ends of the first $\lambda_k'$ rows, then placing the next $\lambda_{k-1}'$ numbers at the ends of the first $\lambda_{k-1}'$ rows, and so on. We make similar definitions of column superstandard and column reading antitableaux $\mathrm{CSS}(\lambda^*)$ and $\mathrm{CR}(\lambda^*)$, which can also be obtained respectively from $\mathrm{CSS}(\lambda)$ and $\mathrm{CR}(\lambda)$ by rotation by 180 degrees and complementing. We shall sometimes use decreasing versions of these special tableaux.
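For normal shapes both special tableaux can be built mechanically from the conjugate partition. A short sketch (helper names are ours) that reproduces the example below:

```python
def conjugate(la):
    # column lengths of the Young diagram of the partition la
    return [sum(1 for part in la if part > j) for j in range(la[0])]

def css(la):
    # CSS(la): fill the columns left to right, each from top to bottom
    cols = conjugate(la)
    rows = [[] for _ in range(cols[0])]
    v = 1
    for h in cols:
        for r in range(h):
            rows[r].append(v)
            v += 1
    return rows

def cr(la):
    # CR(la): repeatedly place the next h numbers at the ends of the
    # first h rows, for h running over the column lengths in reverse
    cols = conjugate(la)
    rows = [[] for _ in range(cols[0])]
    v = 1
    for h in reversed(cols):
        for r in range(h):
            rows[r].append(v)
            v += 1
    return rows
```

For $\lambda=(4,3,2)$ these return the row fillings of $\mathrm{CSS}(\lambda)$ and $\mathrm{CR}(\lambda)$ shown in the example.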
\begin{ex} For $\lambda=(4,3,2)$ we have $\lambda'=(3,3,2,1)$ and \begin{align*} \mathrm{CSS}(\lambda) &= \ytableaushort{1479,258,36} &\qquad \mathrm{CR}(\lambda) &= \ytableaushort{1247,358,69} \\ \mathrm{CSS}(\lambda^*) &= \ytableaushort{\none\none47,\none258,1369} & \mathrm{CR}(\lambda^*) &= \ytableaushort{\none\none14,\none257,3689} \end{align*} \end{ex} \subsubsection{$K$-rectification and anti-rectification} In studying the Kjdt for most situations it does not matter whether a tableau is increasing or decreasing because the tableaux are specified by sequences of rook strips and only the labeling of the strips is different. For technical reasons we will use different rectification orders in the following definitions for increasing versus decreasing tableaux. \begin{defn} \begin{enumerate} \item Given an increasing (resp. decreasing) tableau $T$ of shape $D=\lambda/\mu$ define its \emph{$K$-rectification} by $J^\nwarrow(T) = J^\nwarrow_S(T)$ where $S=\mathrm{CR}(\mu)$ (resp. $S=\mathrm{CSS}(\mu)$). \item Let $R$ be a tight ($\ell(\lambda) \times \lambda_1$) rectangle placed around $\lambda$. Then $R/\lambda$ is an antinormal skew shape. Define the \emph{$K$-anti-rectification}\footnote{We feel this name is more descriptive as the result has antinormal shape. The original name is ``reverse K-rectification" \cite{TY}.} of the increasing (resp. decreasing) tableau $T$ by $J^\searrow(T) = J^\searrow_U(T)$ where $U=\mathrm{CSS}(R/\lambda)$ (resp. $U=\mathrm{CR}(R/\lambda)$). \item Given a decreasing tableau $T$ of partition shape define $T^\sharp = J^\searrow(T)^*$. \item Given an increasing tableau $T$ of partition shape define $T^\flat = J^\nwarrow(T^*)$. \end{enumerate} \end{defn} \begin{prop} \label{P:decreasing to increasing} The map $T\mapsto T^\sharp$ is a bijection $\mathrm{Dec}_\lambda\to\mathrm{Inc}_\lambda$ with inverse $T\mapsto T^\flat$. Moreover $T^\sharp=J^\nwarrow(T^*)$ and $T^\flat=J^\searrow(T)^*$. 
\end{prop} \begin{proof} It follows from Proposition \ref{P:Kjdt Pieri} that infusion sends a CSS tableau to a CR tableau whose shape is the 180-degree rotation, and vice versa. This implies that $T\mapsto T^\sharp$ and $T\mapsto T^\flat$ are shape-preserving. They are mutually inverse since $*$ is involutive and infusion is involutive. The alternate descriptions of the maps hold since switching commutes with 180 degree rotation. \end{proof} \begin{ex} \label{X:Krect} Let $T$ be the following decreasing tableau $T$. We illustrate how to compute $T^\sharp$. The first step is to anti-rectify $T$ with respect to the rectification order $U$. $J^\searrow(T)$ is the result of the anti-rectification. \[ T = \begin{ytableau} 9 & 8 & 7 & 5 & 3\\ 7 & 5 & 4 & 3\\ 4 & 2 & 1 \\ 3\\ 1\\ \end{ytableau} \quad U = \ytableaushort{\none\none\none\none,\none\none\none1,\none\none25,368a,479b}\quad J^\searrow(T) = \begin{ytableau} \none & \none & \none & \none & 7\\ \none & \none & \none & \none & 5\\ \none & \none & 8 & 7 & 3 \\ \none & 9 & 5 & 4 & 2\\ 9 & 7 & 3 & 2 & 1\\ \end{ytableau} \] Then we rotate $J^\searrow(T)$ and obtain $T^\sharp$: $$ T^\sharp = \begin{ytableau} 1 & 2 & 3 & 7 & 9\\ 2 & 4 & 5 & 9\\ 3 & 7 & 8 \\ 5\\ 7\\ \end{ytableau} $$ Next, we illustrate how to compute $(T^\sharp)^\flat$. We rotate $T^\sharp$ and get $(T^\sharp)^*$. Then we rectify $(T^\sharp)^*$ using $U'$. The result $J^\nwarrow((T^\sharp)^*)$ is $(T^\sharp)^\flat$. \[ (T^\sharp)^* = \begin{ytableau} \none & \none & \none & \none & 7\\ \none & \none & \none & \none & 5\\ \none & \none & 8 & 7 & 3 \\ \none & 9 & 5 & 4 & 2\\ 9 & 7 & 3 & 2 & 1\\ \end{ytableau} \quad U' = \begin{ytableau} 1 & 5 & 8 & a\\ 2 & 6 & 9 & b\\ 3 & 7\\ 4\\ \end{ytableau} \quad J^\nwarrow((T^\sharp)^*) = \begin{ytableau} 9 & 8 & 7 & 5 & 3\\ 7 & 5 & 4 & 3\\ 4 & 2 & 1 \\ 3\\ 1\\ \end{ytableau} \] Readers can check $(T^\sharp)^\flat = T$ in this case. 
\end{ex} \begin{rem} \begin{enumerate} \item Unlike jeu-de-taquin for semistandard tableaux, $J^\nwarrow_S(T)$ may depend on the rectification order $S$ \cite[Ex. 1.3]{TY}. Thus for the well-definedness of $J^\nwarrow(T)$, it is necessary to specify $S$. \item The $\lambda$ and $\mu$ defining a skew shape $\lambda/\mu$ are not unique; one may add several rows and columns to the top and left of the diagrams of $\mu$ and $\lambda$ simultaneously and get the same difference of partition diagrams. Using Proposition \ref{P:Kjdt Pieri} it can be shown that $J^\nwarrow(T)$ depends only on the set of boxes in $\lambda/\mu$ and not on the pair $(\lambda,\mu)$. \item This definition of $J^\searrow(T)$ was used in \cite{ReY}. \end{enumerate} \end{rem} \newcommand{\da}{ \begin{tikzpicture}[scale=.5] \draw[black,thick] (0,0)--(8,0)--(8,5)--(0,5)--(0,0); \draw[black,thick] (1,0)--(1,1)--(3,1)--(3,3)--(6,3)--(6,4)--(8,4); \node (T) at (2,4) {$T^\sharp$ incr.}; \node (CSS) at (6,2) {CSS}; \end{tikzpicture} } \newcommand{\db}{ \begin{tikzpicture}[scale=.5] \draw[black,thick] (0,0)--(8,0)--(8,5)--(0,5)--(0,0); \draw[black,thick] (0,1)--(2,1)--(2,2)--(5,2)--(5,4)--(7,4)--(7,5)--(8,5); \node (TJSE) at (6,1) {$T^*$ incr.}; \node (CSS) at (2,4) {CR}; \end{tikzpicture} } \newcommand{\dc}{ \begin{tikzpicture}[scale=.5] \draw[black,thick] (0,0)--(8,0)--(8,5)--(0,5)--(0,0); \draw[black,thick] (0,1)--(2,1)--(2,2)--(5,2)--(5,4)--(7,4)--(7,5)--(8,5); \node (Tstar) at (6,1) {$J^\searrow(T)$ decr.}; \node (CSS) at (2,4) {CSS}; \end{tikzpicture} } \newcommand{\dd}{ \begin{tikzpicture}[scale=.5] \draw[black,thick] (0,0)--(8,0)--(8,5)--(0,5)--(0,0); \draw[black,thick] (1,0)--(1,1)--(3,1)--(3,3)--(6,3)--(6,4)--(8,4); \node (Tflat) at (2,4) {$T$ decr.}; \node (CSS) at (6,2) {CR}; \end{tikzpicture} } \newsavebox{\myboxa} \newsavebox{\myboxb} \newsavebox{\myboxc} \newsavebox{\myboxd} \tikzstyle{arrow} = [thick,<->,>=stealth] \newcommand{\twobytwo}[4]{ % \sbox{\myboxa}{#1} \sbox{\myboxb}{#2} 
\sbox{\myboxc}{#3} \sbox{\myboxd}{#4} \begin{tikzpicture} \node (a) {\usebox{\myboxd}}; \node (b) [right = 1.5cm of a] {\usebox{\myboxc}}; \node (c) [below = 1.5cm of a] {\usebox{\myboxb}}; \node (d) [right = 1.5cm of c] {\usebox{\myboxa}}; \draw [arrow] (a) -- (b) node[midway,yshift=10pt] {\text{infusion}}; \draw [arrow] (a) -- (c) node[midway,xshift=-10pt] {*}; \draw [arrow] (b) -- (d) node[midway,xshift=10pt] {*}; \draw [arrow] (c) -- (d) node[midway,yshift=-10pt] {\text{infusion}}; \end{tikzpicture}} \begin{figure} \twobytwo{\da}{\db}{\dc}{\dd} \caption{From decreasing to increasing: $T\mapsto T^\sharp$} \end{figure} \subsubsection{Keys defined via Kjdt for increasing/decreasing tableaux} \label{SS:Kjdt keys} For a tableau $T$ let $T_{\le j}$ be the tableau consisting of the first $j$ columns of $T$. $T_{\ge j}$ is defined similarly. Let $T$ be an increasing (resp. decreasing) tableau of partition shape. The \emph{left key} $K_-(T)$ of the increasing tableau $T$ is defined by the condition that its $j$-th column is equal to the first column of $J^\searrow(T_{\le j})$ for all $j$. The \emph{right key} $K_+(T)$ of the decreasing tableau $T$ is defined by the condition that its $j$-th column is equal to the last column of $J^\searrow(T_{\ge j})$ for all $j$. We note that $J^\searrow$ is defined using a different reverse rectification order for increasing versus decreasing tableaux. However, it is ultimately shown that these keys are independent of the rectification order; see Proposition \ref{P:Kjdt right key} for decreasing tableaux. \subsection{Hecke insertion} \label{SS:Hecke insertion} Let $T$ be a decreasing tableau of partition shape $\lambda$ and $x$ a positive integer. The (column) \emph{Hecke insertion} of $x$ into $T$ is defined as follows. It maps the pair $(x,T)$ to a triple $(P,c, \alpha)$. 
$P$ will be a decreasing tableau which either has shape $\lambda$, in which case we set $\alpha=0$, or differs from it by adding a single box, in which case we set $\alpha=1$. We write $P = (x\xrightarrow{H} T)$. $c$ will be the box of $P$ where the algorithm ends. The algorithm first inserts $x$ into column 1 of $T$. This may output a number. If so, the output number is then inserted into the next column. The algorithm repeats until an insertion into a column has no output. To describe the insertion of $x$ into a column of $T$, we consider two cases:
\begin{enumerate}
\item[Case 1:] $x$ is less than or equal to all entries in this column. Then the algorithm makes no output. In addition, it appends $x$ to the bottom of this column as long as the result is a decreasing tableau, and $c$ is set to be this newly appended box. Otherwise the column is unchanged and $c$ is set to be the rightmost box in the row that contains the bottom entry of this column.
\item[Case 2:] Otherwise let $y$ be the largest value in this column such that $y < x$; since the column is strictly decreasing, $y$ is its topmost entry less than $x$. Then the algorithm outputs $y$ from the column. In addition it replaces $y$ by $x$ as long as the result is a decreasing tableau. Otherwise the column remains unchanged.
\end{enumerate}
We use the word ``contraction" when $\alpha=0$.

This algorithm has an inverse called reverse (column) Hecke insertion, which maps a triple $(P,c, \alpha)$ to a pair $(T,x)$. Here, $P$ is a decreasing tableau of partition shape, $c$ is a box of $P$ that is at the end of its row and its column, and $\alpha$ is 0 or 1. The algorithm behaves in the following way: First let $y$ be the number in the box $c$. If $\alpha = 1$, we remove this box. If $\alpha = 0$, we do not remove it. In either case, the algorithm ``reverse inserts'' $y$ into the previous column.

When a value $y$ is ``reverse inserted'' into a column, the algorithm finds the smallest $y'$ in that column such that $y' > y$ (since the column is strictly decreasing, $y'$ is its bottommost entry greater than $y$). It replaces $y'$ by $y$ as long as the result is a decreasing tableau.
Otherwise it does nothing to the column. In either case, $y'$ is reverse inserted into the previous column. If there is no column on the left, the algorithm lets $x = y'$ and $T$ is the resulting tableau. Then it terminates.

\begin{lemma} \cite{BKSTY} \label{Hecke insertion are inverses}
Hecke insertion and reverse Hecke insertion are inverses of each other.
\end{lemma}

\subsubsection{Pieri property of Hecke insertion}

Hecke insertion has the following Pieri property:

\begin{lemma}\cite[Lemma 2]{BKSTY} \label{Pieri property of Hecke insertion}
Let $T$ be a decreasing tableau. Let $x_1, x_2$ be two positive integers. Hecke insert $x_1$ into $T$ with result $(T_1,c_1,\alpha_1)$ and then Hecke insert $x_2$ into $T_1$ with result $(T_2,c_2,\alpha_2)$. Then $c_2$ is strictly to the right of $c_1$ if and only if $x_1 < x_2$.
\end{lemma}

\subsubsection{Set-valued recording tableaux for Hecke insertion and Hecke RSK}
\label{SS:SV recording}

Let $\mathcal{T} := \bigsqcup_\lambda (\mathrm{Dec}_\lambda \times \mathrm{RSVT}_\lambda)$. Now we describe the bijections between $\mathcal{C}$ and $\mathcal{T}$. First, we recursively define $\mathrm{Insert}: \mathcal{C} \to \mathcal{T}$.

\begin{defn}
Take $(a,i) \in \mathcal{C}$. If $a$ is the empty word, then $\mathrm{Insert}(a,i)$ is the pair of two empty tableaux. Now assume $a$ has positive length. Let $a = a'x$, $i = i'y$, where $x, y$ are positive numbers. Now let $(P', Q') = \mathrm{Insert}(a', i')$. We Hecke insert $x$ into $P'$ and get $(P,c,\alpha)$. If $\alpha = 1$, we append $y$ to $Q'$ at the corresponding position of $c$. Otherwise, we add $y$ to the entry in $Q'$ that corresponds to $c$. We let $Q$ be the resulting RSVT. Then $\mathrm{Insert}(a,i) := (P,Q)$.
\end{defn}

$\mathrm{Insert}$ is well-defined by Lemma \ref{Pieri property of Hecke insertion}. Now we recursively define the map $\mathrm{RevInsert}: \mathcal{T} \to \mathcal{C}$.

\begin{defn}
Take $(P,Q) \in \mathcal{T}$.
If $P$ is the empty tableau, $\mathrm{RevInsert}(P,Q)$ is the pair of two empty words. Now assume $P$ is non-empty. Let $y$ be the smallest number in $Q$. We pick the rightmost $y$, and remove this number from $Q$. Let $Q'$ be the resulting RSVT. If this $y$ is the only number in its entry, we set $\alpha = 1$. Otherwise, $\alpha = 0$. Then we invoke reverse Hecke insertion on the corresponding entry in $P$ with $\alpha$. Let $(P', x)$ be the output. Finally, we let $(a', i') = \mathrm{RevInsert}(P', Q')$. Then $\mathrm{RevInsert}(P, Q) := (a'x, i'y)$. \end{defn} $\mathrm{RevInsert}$ is similarly well-defined. \begin{lemma} $\mathrm{Insert}$ and $\mathrm{RevInsert}$ are inverses. \end{lemma} \begin{proof} Follows from Lemmas \ref{Hecke insertion are inverses} and \ref{Pieri property of Hecke insertion}. \end{proof} Also, these two maps have the following property: \begin{prop} \label{P:Hecke insertion Kequiv and weight} Take $(a,i) \in \mathcal{C}$ and let $(P,Q) = \mathrm{Insert}(a,i)$. \begin{align} \label{E:insert respects permutations} \mathrm{rev}(a) &\equiv_K P \\ \label{E:insert is weight preserving} \mathrm{wt}(Q)&=\mathrm{wt}(i) \end{align} \end{prop} \begin{proof} Equation \eqref{E:insert is weight preserving} holds by definition. The relation \eqref{E:insert respects permutations} holds by \cite[Thm. 6.2]{BS}. \end{proof} \section{RSVT rule for Lascoux Polynomials} \label{S:formula for Lascoux polynomials} In this section, we give a combinatorial rule for Lascoux polynomials involving tableaux. Let $T$ be a RSVT. Let $L(T)$ be the RSSYT obtained by picking the largest number in each entry. Then we have \begin{thm} For any composition $\alpha$ \label{set-valued_reverse_SSYT_rule} \begin{align} \fL^{(\beta)}_\alpha = \sum_{K_-(L(T)) \leq \alpha} \beta^{\mathrm{ex}(T)} x^{\mathrm{wt}(T)} \end{align} where $T$ runs over the RSVTs of shape $\alpha^+$.
\end{thm} \begin{ex} The following RSVTs contribute to $\fL^{(\beta)}_{(1,0,2)}$:\\ \begin{ytableau} 2 & 1\\ 1 \end{ytableau} \\[1mm] \begin{ytableau} 2 & 2\\ 1 \end{ytableau} \quad \begin{ytableau} 2 & 21\\ 1 \end{ytableau} \\[1mm] \begin{ytableau} 3 & 1\\ 1 \end{ytableau} \quad \begin{ytableau} 32 & 1\\ 1 \end{ytableau} \\[1mm] \begin{ytableau} 3 & 2\\ 1 \end{ytableau} \quad \begin{ytableau} 3 & 21\\ 1 \end{ytableau} \quad \begin{ytableau} 32 & 2\\ 1 \end{ytableau} \quad \begin{ytableau} 32 & 21\\ 1 \end{ytableau} \\[1mm] \begin{ytableau} 3 & 3\\ 1 \end{ytableau} \quad \begin{ytableau} 3 & 31\\ 1 \end{ytableau} \quad \begin{ytableau} 3 & 32\\ 1 \end{ytableau} \quad \begin{ytableau} 3 & 321\\ 1 \end{ytableau} \vskip2mm Thus, we may write $\fL^{(\beta)}_{(1,0,2)}$ as \begin{equation*} \begin{split} &x_1^2x_2 + x_1x_2^2 + x_1^2x_3 + x_1x_2x_3 + x_1x_3^2 \\ + & \beta(x_1^2x_2^2 + 2x_1^2x_2x_3 + x_1x_2^2x_3 + x_1^2x_3^2 + x_1x_2x_3^2)\\ + & \beta^2(x_1^2x_2^2x_3 + x_1^2x_2x_3^2) \end{split} \end{equation*} \end{ex} Next, we want to rewrite Theorem \ref{set-valued_reverse_SSYT_rule} as a rule involving RSSYT, instead of RSVT. We need a definition: \begin{defn} Fix a RSSYT $T$ of shape $\lambda$. Define $WT(T)$ by $$ x^{WT(T)} = \sum_{\substack{T' \in \mathrm{RSVT}_\lambda \\ L(T') = T}} \beta^{\mathrm{ex}(T')} x^{\mathrm{wt}(T')}. $$ \end{defn} Then Theorem \ref{set-valued_reverse_SSYT_rule} can be rewritten as: \begin{thm} For any composition $\alpha$ \label{reverse_SSYT_rule} $$ \fL^{(\beta)}_\alpha = \sum_{\substack{T\in\mathrm{RSSYT}_{\alpha^+} \\ K_-(T) \leq \alpha}} x^{WT(T)} $$ \end{thm} It is clear that Theorem $\ref{reverse_SSYT_rule}$ and Theorem $\ref{set-valued_reverse_SSYT_rule}$ are equivalent. Readers may still insist that Theorem $\ref{reverse_SSYT_rule}$ involves RSVTs, since they appear in how we defined $x^{WT(T)}$.
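As a machine check of the example above, the displayed RSVTs can be tabulated directly. The short script below (nested Python sets are an ad hoc encoding chosen for this check, not notation from this paper) recomputes $\sum_T \beta^{\mathrm{ex}(T)} x^{\mathrm{wt}(T)}$ over the thirteen tableaux and recovers the displayed polynomial.

```python
from collections import Counter

# The RSVTs of shape (2, 1) listed in the example above, encoded as
# nested lists of sets: [[top-left cell, top-right cell], [bottom cell]].
tableaux = [
    [[{2}, {1}], [{1}]],
    [[{2}, {2}], [{1}]], [[{2}, {2, 1}], [{1}]],
    [[{3}, {1}], [{1}]], [[{3, 2}, {1}], [{1}]],
    [[{3}, {2}], [{1}]], [[{3}, {2, 1}], [{1}]],
    [[{3, 2}, {2}], [{1}]], [[{3, 2}, {2, 1}], [{1}]],
    [[{3}, {3}], [{1}]], [[{3}, {3, 1}], [{1}]],
    [[{3}, {3, 2}], [{1}]], [[{3}, {3, 2, 1}], [{1}]],
]

def term(T):
    """Return (ex(T), (a1, a2, a3)) where x^{wt(T)} = x1^a1 x2^a2 x3^a3
    and ex(T) = (total number of letters) - (number of boxes)."""
    letters = [v for row in T for cell in row for v in cell]
    boxes = sum(len(row) for row in T)
    cnt = Counter(letters)
    return len(letters) - boxes, (cnt[1], cnt[2], cnt[3])

poly = Counter(term(T) for T in tableaux)   # (ex, wt) -> coefficient
```

For instance, `poly` assigns coefficient $2$ to the pair $(\mathrm{ex},\mathrm{wt}) = (1,(2,1,1))$, matching the term $2\beta x_1^2x_2x_3$ of the displayed expansion.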
The following lemma resolves this issue by computing $x^{WT(T)}$ directly from $T$: \begin{lem} \label{L:WT} For any RSSYT $T$ of shape $\lambda$ \begin{align} x^{WT(T)} = x^{\mathrm{wt}(T)}\prod_{(s,k)}(1 +\beta x_k) \end{align} where $(s,k)$ runs over pairs such that $s$ is a box in $\lambda$, $k$ is less than the value of $T$ in that box, and replacing the $s$-th entry of $T$ by $k$ results in a RSSYT. \end{lem} \begin{proof}[Proof of Lemma \ref{L:WT}] Consider the following way of turning $T$ into a RSVT in $L^{-1}(T)$. Let $a$ be an entry in $T$. Let $b$ be the entry on its right and $b = 1$ if such an entry does not exist. Let $c$ be the entry below $a$ and $c = 0$ if such an entry does not exist. We turn $a$ into $\{ a \}$, and then add some numbers to this set. We may add any $k$ such that $a > k$, $k > c$ and $k \geq b$. Not adding this $k$ will contribute $1$ and adding this $k$ will contribute a $\beta x_k$. Thus, each such $k$ contributes $(1 +\beta x_k)$. Clearly, the choices are independent and any element in $L^{-1}(T)$ can be obtained this way. \end{proof} In the rest of this section, we show Theorem $\ref{reverse_SSYT_rule}$. We will only use RSSYTs. We only need to prove that the sum in Theorem $\ref{reverse_SSYT_rule}$ satisfies the recursion of Lascoux polynomials. Now, we fix an $i$ throughout the rest of this section. \subsection{Partitioning RSSYTs} Let $T$ be a RSSYT. We classify its $i$ and $i+1$ into three categories: ``ignorable'', ``frozen'', and ``free''. First, we find all pairs of $i+1$ and $i$ that appear in the same column. We pair them and say they are ``ignorable''. Next, we find non-ignorable $i$ and $i+1$ such that: \begin{enumerate} \item $i$ is on the left of $i+1$. \item Any column between them must have an ignorable pair. \end{enumerate} We pair them and say they are ``frozen''. Other non-ignorable $i$ and $i+1$ are called ``free''.
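The three categories can also be computed mechanically. The sketch below (Python; the list-of-rows encoding and the greedy left-to-right pairing are our own devices for illustration, not constructions from this paper) labels the $i$'s and $(i+1)$'s of a RSSYT.

```python
def classify(T, i):
    """Label each entry of the RSSYT T (a list of rows, of partition
    shape) equal to i or i+1 as 'ignorable', 'frozen' or 'free'.
    Returns a dict {(row, col): label}.  Each column holds at most one
    i and one i+1 since columns strictly decrease downwards."""
    ncols = len(T[0])
    row_of = {}                              # (value, col) -> row index
    for r, row in enumerate(T):
        for c, v in enumerate(row):
            if v in (i, i + 1):
                row_of[(v, c)] = r
    label, ignorable = {}, set()
    # Step 1: an i and an i+1 sharing a column form an ignorable pair.
    for c in range(ncols):
        if (i, c) in row_of and (i + 1, c) in row_of:
            ignorable.add(c)
            label[(row_of[(i, c)], c)] = 'ignorable'
            label[(row_of[(i + 1, c)], c)] = 'ignorable'
    # Step 2: a non-ignorable i, then only ignorable columns, then a
    # non-ignorable i+1 to its right: this pair becomes frozen.
    pending = None                           # column of a still-visible i
    for c in range(ncols):
        if c in ignorable:
            continue                         # transparent: keep pending
        if (i + 1, c) in row_of and pending is not None:
            label[(row_of[(i, pending)], pending)] = 'frozen'
            label[(row_of[(i + 1, c)], c)] = 'frozen'
        # any non-ignorable column blocks visibility past it
        pending = c if (i, c) in row_of else None
    # Step 3: everything else is free.
    for (v, c), r in row_of.items():
        label.setdefault((r, c), 'free')
    return label
```

On the RSSYT of the example that follows (with $i = 3$), this returns ``ignorable'' for the two red entries, ``frozen'' for the two blue entries, and ``free'' for the remaining 3's and 4's.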
\begin{ex} When $i = 3$, consider the following RSSYT: \begin{align*} \begin{ytableau} 6 & 6 & 6 & 6 & \textcolor{blue}{4} & 4\\ 5 & \textcolor{red}{4} & 3 & \textcolor{blue}{3}\\ 4 & \textcolor{red}{3} \end{ytableau} \end{align*} The red entries are ignorable and the blue entries are frozen. All other 3's and 4's are free. \end{ex} Based on this labelling, we may partition RSSYTs into families. \begin{defn} A \emph{family} is an equivalence class under the transitive closure of the following relation: two RSSYTs are related if they differ by changing a single $i$ into an $i+1$ (or vice versa) where the changed letters are free in both tableaux. \end{defn} \begin{ex} Consider the RSSYT in the previous example. Its family also includes: \begin{align*} \begin{ytableau} 6 & 6 & 6 & 6 & \textcolor{blue}{4} & 3\\ 5 & \textcolor{red}{4} & 3 & \textcolor{blue}{3}\\ 3 & \textcolor{red}{3} \end{ytableau} \quad \begin{ytableau} 6 & 6 & 6 & 6 & \textcolor{blue}{4} & 3\\ 5 & \textcolor{red}{4} & 3 & \textcolor{blue}{3}\\ 4 & \textcolor{red}{3} \end{ytableau} \quad \begin{ytableau} 6 & 6 & 6 & 6 & \textcolor{blue}{4} & 3\\ 5 & \textcolor{red}{4} & 4 & \textcolor{blue}{3}\\ 4 & \textcolor{red}{3} \end{ytableau} \end{align*} \begin{align*} \begin{ytableau} 6 & 6 & 6 & 6 & \textcolor{blue}{4} & 4\\ 5 & \textcolor{red}{4} & 3 & \textcolor{blue}{3}\\ 3 & \textcolor{red}{3} \end{ytableau} \quad \begin{ytableau} 6 & 6 & 6 & 6 & \textcolor{blue}{4} & 4\\ 5 & \textcolor{red}{4} & 4 & \textcolor{blue}{3}\\ 4 & \textcolor{red}{3} \end{ytableau} \end{align*} However, the following is in another family: \begin{align*} \begin{ytableau} 6 & 6 & 6 & 6 & \textcolor{blue}{4} & 3\\ 5 & \textcolor{red}{4} & \textcolor{blue}{4} & \textcolor{blue}{3}\\ \textcolor{blue}{3} & \textcolor{red}{3} \end{ytableau} \end{align*} \end{ex} Given an RSSYT, how can we enumerate its family? Clearly, we can only change its free entries. We also need to make sure they are still free after our changes.
In other words, assume $a$ and $b$ are two free entries. If $a$ is on the left of $b$ and all columns between them have ignorable pairs, then we cannot change $a$ into $i$ and $b$ into $i + 1$. This criterion leads to the following definition. \begin{defn} Let $T$ be a RSSYT. We partition its free $i$ and $i+1$ into ``blocks''. Two entries are in the same block iff all columns between them have ignorable pairs. \end{defn} Thus, to enumerate the family of a RSSYT $T$, we just replace the entries in each block by a weakly decreasing (from left to right) sequence of $i$ and $i+1$. The reader may check the enumeration of the family in the previous example. \subsection{Families and left keys} \label{SS:families and left keys} This subsection aims to describe the left keys of a family. This idea is formalized in the following lemma: \begin{lem} \label{family_and_left_key} Let $\mathcal{F}$ be a family. Then its elements can have at most two different left keys. If they all have the same left key $\gamma$, then $\gamma_i \geq \gamma_{i+1}$. If they have two different left keys, then they must be $\gamma$ and $s_i \gamma$, where $\gamma_i > \gamma_{i+1}$. In this case, we also have: \begin{enumerate} \item $T \in \mathcal{F}$ has left key $\gamma$ iff $T$'s leftmost block only has $i$. \item All columns before the first block must have ignorable pairs. \end{enumerate} \end{lem} Before proving the lemma, we need to introduce an algorithm that computes the left key. The algorithm is introduced in Section 5 of \cite{Willis}. Here we describe this algorithm in a slightly different way. \begin{defn} Given two columns $C_1, C_2$ such that $C_1C_2$ is a RSSYT, we define the column $C_1 \triangleleft C_2$ as follows. Assume $C_2 = \{ a_1 < a_2 < \dots < a_m\}$. We find the smallest $b_1$ in $C_1$ such that $b_1 \geq a_1$. Then we find the smallest $b_2$ in $C_1$ such that $b_2 \geq a_2$ and $b_2 > b_1$. Similarly, we find $b_3, \dots, b_m$.
Let $C_1 \triangleleft C_2=\{b_1<b_2<\dotsm<b_m\}$. More generally suppose $C_1,C_2,\dotsc,C_k$ are the columns of a RSSYT. Observe that the following expression is well-defined when $j \le k$ $$ C_j \triangleleft \dotsm \triangleleft C_k := C_j \triangleleft (C_{j+1} \triangleleft \dotsm \triangleleft C_k) $$ where the base case is $$ C_k \triangleleft \dotsm \triangleleft C_k := C_k $$ \end{defn} With this definition, we may compute column $k$ of $K_-(T)$, where $T$ is a RSSYT. Let the first $k$ columns of $T$ be $C_1, \dots, C_k$. Then column $k$ of $K_-(T)$ is by definition $C_1 \triangleleft \dots \triangleleft C_k$. To study this algorithm, we need to classify columns of $T$. Each column can be labeled as follows: \begin{enumerate} \item Type 1 column: It has neither $i$ nor $i+1$. \item Type 2 column: It has $i$ but no $i+1$. \item Type 3 column: It has $i+1$ but no $i$. \item Type 4 column: It has both $i$ and $i+1$. \end{enumerate} Now we make several observations. \begin{lemma} \label{left_key_algorithm_property_1} If $C_1$ has type 4 and $C_2$ does not have type 3, then $C_1 \triangleleft C_2$ cannot have type 3. \end{lemma} \begin{proof} Assume $C_1 \triangleleft C_2$ has type 3. Then we must pick $i+1$ in $C_1$ for some $m$ in $C_2$. Moreover, $i$ in $C_1$ is never picked. Thus, $m$ must be $i+1$ and $C_2$ cannot have $i$. $C_2$ has type 3, contradiction. \end{proof} \begin{lemma} \label{left_key_algorithm_property_2} Let $T$ be a RSSYT with no free $i+1$. Assume $\gamma = K_-(T)$. Then $\gamma_i \geq \gamma_{i+1}$. \end{lemma} \begin{proof} Let $C_1, C_2, \dots$ be columns of $T$. Consider column $k$ of $K_-(T)$. We only need to prove it cannot have type 3. Suppose $C_1, \dots, C_k$ all have type 4. Then Lemma \ref{left_key_algorithm_property_1} guarantees $C_1 \triangleleft \dotsm \triangleleft C_{k}$ cannot have type 3. Otherwise, we can find $j \leq k$ such that $C_1, \dots, C_{j-1}$ have type 4 and $C_j$ does not have type 4. 
Since $T$ has no free $i+1$, $C_j$ must have type 1 or 2. Then $C_j \triangleleft \dotsm \triangleleft C_{k}$ must also have type 1 or 2. By Lemma \ref{left_key_algorithm_property_1}, $C_1, \dots, C_{j-1}$ cannot turn it into type 3. \end{proof} \begin{lemma} \label{left_key_algorithm_property_3} Assume $C_2$ has type 2. We change its $i$ into $i+1$ and obtain $C_2'$. Assume $C_1C_2'$ is a RSSYT. Then, \begin{enumerate} \item If $C_1$ has type 4, then $C_1 \triangleleft C_2 = C_1 \triangleleft C_2'$, or $C_1 \triangleleft C_2'$ is obtained from $C_1 \triangleleft C_2$ by changing an $i$ into $i+1$. \item If $C_1$ has type 1 or 3, then $C_1 \triangleleft C_2 = C_1 \triangleleft C_2'$. \end{enumerate} \end{lemma} \begin{proof} We do a case analysis based on the type of $C_1$. \begin{enumerate} \item Assume $C_1$ has type 4. When we consider $i$ in $C_2$, there are three possibilities: the $i$ in $C_1$ has already been picked; a number larger than it has already been picked; or the $i$ is still available. In the first two cases, clearly this $i$ in $C_2$ behaves as if it is an $i+1$. Then $C_1 \triangleleft C_2 = C_1 \triangleleft C_2'$. In the last case, $i$ in $C_2$ picks $i$, and $i+1$ in $C_2'$ picks $i+1$. Our claim is clear. \item Assume $C_1$ has type 1 or 3. Clearly the $i$ in $C_2$ behaves as if it is an $i+1$, so $C_1 \triangleleft C_2 = C_1 \triangleleft C_2'$. \end{enumerate} \end{proof} \begin{lemma} \label{left_key_algorithm_property_4} Let $T$ be a RSSYT. Assume column $j$ of $T$ has a free $i$, which is the leftmost free $i$ in its block. We change this $i$ into $i+1$ and get $T'$. If $\gamma = K_-(T)$, then $K_-(T') = \gamma$ or $s_i \gamma$. Moreover, if the latter case happens, we must have: \begin{enumerate} \item The $i$ we changed is in the leftmost block of $T$. \item Each of columns $1, \dots, j-1$ of $T$ has an ignorable pair. \end{enumerate} \end{lemma} \begin{proof} Let $C_1, C_2, \dots$ be the columns of $T$. Let $D_1, D_2, \dots$ be the columns of $T'$.
Consider column $k$ of $K_-(T)$ and $K_-(T')$. If $k < j$, then clearly they are the same. Now assume $k > j$. Let $C = C_{j+1} \triangleleft \dotsm \triangleleft C_{k}$. Because the $i$ in column $j$ is free, we know that $C_{j+1}, \dots, C_k$ all have type 4, or the leftmost non-type-4 column among them has type 1 or 2. As in the proof of Lemma \ref{left_key_algorithm_property_2}, $C$ cannot have type 3. Next, we compare $C_j \triangleleft C$ and $D_j \triangleleft C$. If $i$ in $C_j$ is picked by $x$ in $C$, then this $x$ will pick $i+1$ in $D_j$. Thus, $D_j \triangleleft C$ is obtained by changing $i$ in $C_j \triangleleft C$ into $i+1$. If $i$ in $C_j$ is not picked, the $i+1$ in $D_j$ will not be picked. Then $C_j \triangleleft C = D_j \triangleleft C$. Consequently, if $k \geq j$, $C_j \triangleleft \dotsm \triangleleft C_k$ agrees with $D_j \triangleleft \dotsm \triangleleft D_k$, or the latter differs from the former by changing an $i$ into $i+1$. In Lemma \ref{left_key_algorithm_property_3}, we showed this difference might be preserved or corrected by type 4 columns. If $C_1, \dots, C_{j-1}$ all have type 4, then we know column $k$ of $K_-(T)$ agrees with column $k$ of $K_-(T')$, or the latter differs from the former by changing an $i$ into $i+1$. Otherwise, we let $l$ be the largest index such that $l < j$ and $C_l$ does not have type 4. Since the $i$ in column $j$ of $T$ is the leftmost free $i$ in its block, $C_l$ must have type 1 or 3. By Lemma \ref{left_key_algorithm_property_3}, $$ C_l \triangleleft \dotsm \triangleleft C_k = D_l \triangleleft \dotsm \triangleleft D_k $$ Thus, each column of $K_-(T')$ either agrees with the corresponding column in $K_-(T)$, or differs by changing an $i$ into $i+1$. Since $K_-(T')$ is a key, we have $K_-(T') = \gamma$ or $s_i \gamma$. In the latter case we know $C_1, \dots, C_{j-1}$ have type 4. Our claims are immediate. \end{proof} Now we may prove Lemma \ref{family_and_left_key}.
\begin{proof} First pick $T$ from $\mathcal{F}$ that has no free $i+1$. Assume $\gamma = K_-(T)$. By Lemma \ref{left_key_algorithm_property_2}, $\gamma_i \geq \gamma_{i+1}$. Then we enumerate other elements in $\mathcal{F}$ by changing free $i$'s in $T$ into $i+1$'s. As long as we do not change the first block, the left key will still be $\gamma$. Once we change the first $i$ in the first block, the left key might stay the same or turn into $s_i \gamma$. The latter case is possible only when all columns before the first block have ignorable pairs. After that, no matter which $i$ we change, the left key will not change. \end{proof} \subsection{$\pi_i$ and $\pi_i^K$} In this subsection, we derive some basic facts about $\pi_i$ and $\pi_i^K$. Define $X_i = x_i(1 + \beta x_{i+1})$ and $X_{i+1} = x_{i+1}(1 + \beta x_i)$. Then we have \begin{enumerate} \item $s_i (X_i) = X_{i+1}$ \item $\pi_i(f) = \partial_i(x_if)$ and $\pi_i^K(f) = \partial_i(X_if)$ \item $\partial_i(X_i) = \partial_i(x_i) = 1$. \end{enumerate} The following lemma describes how $\partial_i$ acts on a product of several $x_i$ and $X_i$: \begin{lemma} Assume we have $u_1, \dots, u_n$, where each $u_j$ is either $x_i$ or $X_i$. Then $$ \partial_i(u_1 \dots u_n) = \sum_{j = 1}^n s_i(u_1 \dots u_{j-1})u_{j+1}\dots u_n $$ \end{lemma} For instance, $$ \partial_i(x_iX_ix_iX_i) = X_ix_iX_i + x_{i+1}x_iX_i + x_{i+1}X_{i+1}X_i + x_{i+1}X_{i+1}x_{i+1} $$ \begin{proof} Notice: \begin{equation*} \begin{split} \partial_i(u_1 \dots u_n) & = \partial_i(u_1) u_2 \dots u_n + s_i(u_1)\partial_i(u_2 \dots u_n) \\ & = u_2 \dots u_n + s_i(u_1)\partial_i(u_2 \dots u_n) \end{split} \end{equation*} Then the proof is finished by induction. \end{proof} \begin{cor} \label{pi_comb} Assume we have $u_1, \dots, u_n$, where each $u_j$ is either $x_i$ or $X_i$.
Then \begin{align} \pi_i(u_1 \dots u_n) = u_1 \dots u_n + x_{i+1}\sum_{j = 1}^n s_i(u_1 \dots u_{j-1})u_{j+1}\dots u_n \end{align} \begin{align} \pi_i^K(u_1 \dots u_n) = u_1 \dots u_n + X_{i+1}\sum_{j = 1}^n s_i(u_1 \dots u_{j-1})u_{j+1}\dots u_n \end{align} \end{cor} \subsection{$x^{WT(T)}$ and Family} In this subsection, we investigate how $x^{WT(T)}$ works and how it changes within a family. More explicitly, the goal is to understand: $\sum_{T \in \mathcal{F}} x^{WT(T)}$ where $\mathcal{F}$ is a family. The first step is to understand what governs the power of $(1 + \beta x_j)$ in $x^{WT(T)}$. Based on our definition, each row can have at most one entry that contributes $(1 + \beta x_j)$ for a fixed $j$. How is it determined whether a row has such a contributor? The following lemma answers this question. To make it concise, we adopt the following convention throughout the rest of this section: a 0 is appended below each column in a RSSYT. \begin{lemma} A row has an entry that contributes $(1 + \beta x_j)$ iff we can find an entry $j'$ on this row such that: \begin{enumerate} \item $j' > j$ \item The entry below $j'$ is less than $j$. \end{enumerate} \end{lemma} \begin{proof} Assume an entry $m$ contributes $(1 + \beta x_j)$. Then clearly $m > j$ and the entry below $m$ is less than $j$. The row of $m$ clearly satisfies the requirement. Conversely, assume a row has an entry $j'$ that satisfies the two requirements. Moreover, we pick the rightmost $j'$ among all such $j'$ on this row. Then the entry to the right of $j'$ either does not exist or is at most $j$. Changing this $j'$ to $j$ yields a valid RSSYT. Thus, this entry contributes $(1 + \beta x_j)$. \end{proof} With this lemma, we may ascribe contributions of $(1 + \beta x_j)$ to rows, instead of entries. However, we would like to ascribe contributions of $(1 + \beta x_i)$ and $(1 + \beta x_{i+1})$ to specific entries, but the rule is different from our previous criterion.
If a row contributes $(1 + \beta x_i)$, then we may find the leftmost entry on this row satisfying: \begin{enumerate} \item It is larger than $i$. \item The entry below it is less than $i$. \end{enumerate} We say this entry contributes a $(1 + \beta x_i)$. Similarly, if a row contributes $(1 + \beta x_{i+1})$, then we may find the rightmost entry on the row below satisfying: \begin{enumerate} \item It is less than $i+1$. \item The entry above it is larger than $i+1$. \end{enumerate} We say this entry contributes a $(1 + \beta x_{i+1})$. To illustrate our new ``contribution system'', consider the following example: \begin{ex} $$ \begin{ytableau} 6 & 6 & 6 & 6 & \textcolor{blue}{4} & 4\\ 5 & 4 & \textcolor{blue}{4} & \textcolor{red}{3} & 0 & 0\\ \textcolor{blue}{4} & 3 & 0 & 0\\ 0 & 0 \end{ytableau} $$ When $i = 3$, each blue 4 contributes $x_4(1 + \beta x_3)$. The red 3 contributes $x_3(1 + \beta x_4)$. \end{ex} Now we fix an arbitrary family $\mathcal{F}$ throughout this subsection. Take any $T \in \mathcal{F}$. Let $m$ be the number of blocks in $T$. Then we may break $x^{WT(T)}$ into a product: $$ x^{WT(T)} = g^Tf_1^T\dots f_m^T $$ Here, $f_j^T$ is the contribution of the $j^{th}$ block in $T$ from left to right. $g^T$ contains the contribution of $x_i$, $x_{i+1}$, $(1 + \beta x_i)$ and $(1 + \beta x_{i+1})$ from all other entries. It also contains powers of $x_j$ and $(1 + \beta x_j)$ with $j \neq i, i+1$. Next, we analyze these polynomials. Let us start with $g^T$: \begin{lemma} $g^T$ is invariant within the family. Moreover, $s_ig^T = g^T$. \end{lemma} \begin{proof} Clearly, changing free entries will not affect powers of $x_j$ and $(1 +\beta x_j)$ with $j \neq i, i+1$. Let us focus on powers of $x_i$, $x_{i+1}$, $(1 + \beta x_i)$ and $(1 + \beta x_{i+1})$. Each ignorable pair contributes $x_ix_{i+1}$. Now, consider a frozen $i$. The column on its right must have an ignorable pair or a frozen $i+1$.
In either case, we look at the entry above it and the entry on its top right: $$ \begin{ytableau} a & b\\ i & \none \end{ytableau} $$ We must have $a > i + 1 \geq b$. Thus, a frozen $i$ always contributes $x_i(1 + \beta x_{i+1}) = X_i$. Similarly, a frozen $i+1$ always contributes $X_{i+1}$. Thus, each frozen pair contributes $X_iX_{i+1}$. Now, we still need to look at contributions of $(1 + \beta x_i)$ and $(1 + \beta x_{i+1})$ by entries that are not $i$ or $i+1$. Assume $j$ is an entry that contributes $(1 + \beta x_{i+1})$ and $j$ is not $i$ or $i+1$. Let $j'$ be the entry above $j$. Then $j < i$ and $j' > i+1$. There is a $k'$ on the row of $j'$ such that $k'$ contributes $(1 +\beta x_i)$. Also, $k'$ is weakly left of $j'$. The diagram looks like: $$ \begin{ytableau} k' & \dots & j'\\ k & \dots & j \\ \end{ytableau} $$ with $k' > i + 1$ and $k < i$. We pair this $j$ with $k'$. Similarly, given such $k'$, we can find its corresponding $j$. In other words, we pair $(1 + \beta x_i)$ contributors with $(1 + \beta x_{i+1})$ contributors that are not $i$ or $i+1$. This pairing is clearly invariant under changing free entries. \end{proof} Due to this result, we may change our notation $g^T$ into $g^\mathcal{F}$, since it only depends on $\mathcal{F}$. The next step is to study each $f_j^T$. Clearly, a free $i$ contributes either $x_i$ or $X_i$. How can we determine its contribution? Consider the following lemma: \ytableausetup{boxsize=10pt,aligntableaux=bottom} \begin{lemma} \label{free_i_contribution} Choose a free $i$ in $T$. If it is not the last entry in its block, then it contributes $x_i$ iff it is contiguous to the next free $i$. If it is the last entry in its block, then it contributes $x_i$ iff one of the following happens: \begin{enumerate} \item It is in the highest row. \item There is a $b$ on its top right: \begin{ytableau} \none & b\\ i \end{ytableau} with $b > i + 1$.
\end{enumerate} \end{lemma} \begin{proof} First, assume $i$ is not the last entry in its block. We study the entry on its right: \begin{enumerate} \item The column on its right has an ignorable pair. Then we look at \begin{ytableau} a & b\\ \textcolor{red}{i} & \none \end{ytableau} where our chosen $i$ is red. We must have $a > i + 1 \geq b$. This $i$ contributes $X_i$. \item The column on its right has a free $i$ and this free $i$ is in the same row. Then we have \begin{ytableau} \none & a\\ \textcolor{red}{i} & i \end{ytableau} with $a > i+1$, or our chosen $i$ is in the top row. In either case, it contributes $x_i$. \item The column on its right has a free $i$ and this free $i$ is not on the same row as our chosen $i$. Then we have \begin{ytableau} a & b\\ \textcolor{red}{i} \end{ytableau} with $a > i + 1$ and $b \leq i$. Our chosen $i$ contributes $X_i$. \end{enumerate} Now assume $i$ is the last entry in its block. If it is in the top row, then it clearly contributes $x_i$. Otherwise, we look at: \begin{ytableau} a & b\\ \textcolor{red}{i} \end{ytableau} We know $a > i+1$. If $b$ exists and $b > i+1$, then clearly our $i$ contributes $x_i$. Otherwise, our $i$ contributes $X_i$. \end{proof} \ytableausetup{boxsize=normal} Similarly, for $i+1$, we have: \begin{lemma} Choose a free $i + 1$ in $T$. If it is not the first entry in its block, then it contributes $x_{i+1}$ iff it is contiguous to the previous free $i + 1$. If it is the first entry in its block, then it contributes $x_{i+1}$ iff there is an $a$ on its lower left with $a < i$. \begin{align*} \ytableaushort{\none{i\!\!+\!\!1},a} \end{align*} \end{lemma} \ytableausetup{aligntableaux=center} We omit the proof since it is basically the same as the previous one. Now we understand how the free entries contribute. Clearly, the contribution of one block is independent of the other blocks. This observation allows us to simplify $\sum_{T \in \mathcal{F}} x^{WT(T)}$.
In this family $\mathcal{F}$, there are $a_j + 1$ ways to fill block $j$, where $a_j$ is the number of entries in block $j$. Let $f_j^l$ be the contribution of this block when the number of $(i+1)$'s is $l$; here $l$ ranges between $0$ and $a_j$. Then we have the following: $$ \sum_{T \in \mathcal{F}} x^{WT(T)} = g^\mathcal{F} \prod_{j = 1}^m\left(\sum_{l = 0}^{a_j}f_j^l\right) $$ Moreover, we have \begin{lemma} $$ \sum_{l = 0}^{a_j} f_j^l = \pi_i(f_j^0) \textrm{ or } \pi_i^K(f_j^0) $$ Moreover, take any $T \in \mathcal{F}$ such that its $j^{th}$ block has an $i+1$. Then we are in the second case iff the first $i+1$ in the $j^{th}$ block of $T$ contributes $X_{i+1}$. \end{lemma} \begin{proof} First, assume block $j$ only has $i$'s. Let $u_p$ be the contribution of the $p^{th}$ free entry. Then $f_j^0 = u_1 \dots u_{a_j}$ and each $u_p = X_i$ or $x_i$. We change the first free $i$ into $i+1$. By Lemma \ref{free_i_contribution}, this change only affects the first entry's contribution. Then $f_j^1 = vu_2 \dots u_{a_j}$ with $v = x_{i+1}$ or $X_{i+1}$. If $a_j = 1$, we are done by Corollary \ref{pi_comb}. Otherwise, we change the second free $i$ into $i+1$. The second $i+1$ contributes $x_{i+1}$ iff it is contiguous to the first free entry. Also, $u_1 = x_i$ iff the first entry is contiguous to the second entry. Thus, the second entry contributes $s_i u_1$, and $f_j^2 = v s_i(u_1) u_3 \dots u_{a_j}$. Continuing this argument, we have $f_j^l = vs_i(u_1 \dots u_{l-1}) u_{l+1} \dots u_{a_j}$. The proof is finished by invoking Corollary \ref{pi_comb}. \end{proof} By this result, $\sum_{l = 0}^{a_j}f_j^l$ must be symmetric in $x_i$ and $x_{i+1}$. Recall that we have shown $g^\mathcal{F}$ is symmetric in $x_i$ and $x_{i+1}$. Thus, $\sum_{T \in \mathcal{F}} x^{WT(T)}$ is symmetric in $x_i$ and $x_{i+1}$. Finally, we have enough results to prove Theorem \ref{reverse_SSYT_rule}. \begin{proof} Let $\alpha$ be a weak composition with $\alpha_i > \alpha_{i+1}$.
Let $A := \{ T \in \mathcal{F}: K_-(T) \leq \alpha\}$ and $B := \{ T \in \mathcal{F}: K_-(T) \leq s_i\alpha\}$. We only need to show \begin{align} \label{main_equation} \pi_i^K \left(\sum_{T \in A} x^{WT(T)}\right) = \sum_{T \in B} x^{WT(T)} \end{align} This is clearly true when $B = \emptyset$. Now assume $B \neq \emptyset$. If $A = B$, then $A = B = \mathcal{F}$, and (\ref{main_equation}) holds since $\sum_{T \in \mathcal{F}} x^{WT(T)}$ is symmetric in $x_i$ and $x_{i+1}$. Finally, assume $A$ is a proper subset of $B$. We can find $\gamma$ with $\gamma_i > \gamma_{i+1}$ such that elements of $A$ have left key $\gamma$ and elements of $B \setminus A$ have left key $s_i\gamma$. Then $s_i\gamma \leq s_i\alpha$ and $\gamma \leq \alpha$. By Lemma \ref{family_and_left_key}, $A$ consists exactly of the elements of $\mathcal{F}$ whose first block only has $i$. We have: $$ \sum_{T \in A} x^{WT(T)} =\left( g^\mathcal{F} \prod_{j = 2}^m\left(\sum_{l = 0}^{a_j}f_j^l\right)\right) f_1^0 $$ Take $T \in B$ whose first block contains an $i+1$, and consider the first $i+1$ in this block. There are two possibilities: it is in the first column, or the column on its left has an ignorable pair. In either case, this $i+1$ contributes $X_{i+1}$, so $$ \pi_i^K(f_1^0) = \sum_{l = 0}^{a_1}f_1^l $$ Finally, letting $f = g^\mathcal{F} \prod_{j = 2}^m (\sum_{l = 0}^{a_j}f_j^l)$, we have \begin{equation*} \begin{split} \pi_i^K\left(\sum_{T \in A} x^{WT(T)}\right) = & \pi_i^K(f \: f_1^0)\\ = & f \: \pi_i^K(f_1^0)\\ = & f \sum_{l = 0}^{a_1}f_1^l\\ = & \sum_{T \in B} x^{WT(T)} \end{split} \end{equation*} \end{proof} \section{Compatible Word rule for Lascoux Polynomials} \label{S:compatible} In this section we give another rule for Lascoux polynomials involving compatible pairs. Recall the set $\mathcal{C}$ from \S \ref{SS:FK} and $\mathcal{T}$ from \S \ref{SS:SV recording}. We would like to focus on smaller subsets of them. \begin{defn} Let $P$ be a decreasing tableau.
Let $\mathcal{C}_P$ be the set consisting of all $(a,i) \in \mathcal{C}$ such that: \begin{enumerate} \item $a_j \geq i_j$ for all $j$, and \item When we insert $a$ into an empty decreasing tableau using Hecke insertion, we get $P$. \end{enumerate} Correspondingly, we define $\mathcal{T}_P$ to be the set consisting of all $(P,Q) \in \mathcal{T}$ such that $K_-(L(Q)) \leq K_+(P)$. \end{defn} Then we can introduce our main result of this section: \begin{thm} \label{restricting_Insert} The restriction of $\mathrm{Insert}$ to $\mathcal{C}_P$ and the restriction of $\mathrm{RevInsert}$ to $\mathcal{T}_P$ give inverse bijections between $\mathcal{C}_P$ and $\mathcal{T}_P$. \end{thm} Using this result, we have: \begin{thm} \label{compatible_word_rule} $$ \fL^{(\beta)}_{K_+(P)} = \sum_{(a,i)\in \mathcal{C}_P} \beta^{\mathrm{ex}(a)} x^{\mathrm{wt}(i)} $$ \end{thm} \begin{proof} By Theorem \ref{set-valued_reverse_SSYT_rule}, we have $$ \fL^{(\beta)}_{K_+(P)} = \sum_{(P,Q)\in \mathcal{T}_P} \beta^{\mathrm{ex}(Q)}x^{\mathrm{wt}(Q)} $$ Then the proof is finished by applying $\mathrm{RevInsert}$ on the summands and invoking Theorem \ref{restricting_Insert}. \end{proof} The rest of this section aims to prove Theorem \ref{restricting_Insert}. More specifically, we only need to show $\mathrm{Insert}(\mathcal{C}_P) \subseteq \mathcal{T}_P$ and $\mathrm{RevInsert}(\mathcal{T}_P) \subseteq \mathcal{C}_P$. \subsection{Right key of decreasing tableau} \label{SS:right key} In this subsection, we investigate the right key of a decreasing tableau. First, we introduce an efficient algorithm that computes the right key. \subsubsection{Right key via $\star$-action} We start with a definition: \begin{defn} Let $\star$ denote the following right action of the monoid of words with letters in the set $\mathbb{Z}_{>0}$, on the set of subsets of $\mathbb{Z}_{>0}$. Let $S \subseteq \mathbb{Z}_{>0}$ and let $m\in \mathbb{Z}_{>0}$. Let $m'$ be the smallest number in $S$ of value at least $m$.
If $m'$ does not exist, we let $S \star m = S \sqcup \{m \}$. Otherwise, we define $S \star m = (S - \{ m'\}) \sqcup \{ m \}$. More generally, if $w = w_1 \dots w_n$ is a word of positive integers, we define $S \star w = (S \star w_1) \star (w_2 \dots w_n)$, and $S \star w = S$ if $w$ is the empty word. \end{defn} \begin{ex} We have: \begin{equation*} \begin{split} \emptyset \star 3414 & = \{1,4\}\\ \{3,4,7\} \star 3414 & = \{1,4,7\}\\ \{3,4,7\} \star 3141 & = \{1,4,7\}\\ \{3,4,7\} \star 3411 & = \{1,4,7\}\\ \end{split} \end{equation*} \end{ex} We will use this action to introduce our right key algorithm. Before that, we need to define a relation on words called K-Knuth equivalence, which is first introduced in \cite{BS}. \begin{defn} The K-Knuth relations are: \begin{align} \label{E:K Knuth idempotent} u\:aa\:v & \equiv_K u\:a\:v \\ \label{E:K Knuth braid} u\:aba\:v & \equiv_K u\:bab\:v \\ \label{E:Knuth left witness} u\:bac\:v & \equiv_K u\:bca\:v \:\:\: ( a < b < c)\\ \label{E:Knuth right witness} u\:acb\:v & \equiv_K u\:cab\:v \:\:\: ( a < b < c) \end{align} where $u, v$ are words and $a<b<c$ are positive numbers. The K-Knuth equivalence $\equiv_K$ is the transitive and symmetric closure of these four relations. The Knuth equivalence relation $\equiv$ differs from $\equiv_K$ by removing the relation \eqref{E:K Knuth idempotent} and replacing the braid-like relation \eqref{E:K Knuth braid} by the relations \begin{align} \label{E:Knuth equal left witness} u\:bba\:v & \equiv u\:bab\:v \\ \label{E:Knuth equal right witness} u\:baa\:v & \equiv u\:aba\:v. \end{align} \end{defn} \begin{rem}\label{R:Knuth and star} It follows from the definitions that $S\star w$ may be obtained by taking the single column semistandard tableau defined by $S$, applying the usual Schensted column insertion of $w_1$, then $w_2$, up to $w_n$, and keeping only the first column. 
In particular, applying the well-known fact \cite{Kn} that Knuth-equivalent words have the same Schensted insertion tableau, letting $\mathrm{rev}(w)$ be the reverse of the word $w$, if $\mathrm{rev}(w)\equiv \mathrm{rev}(w')$ then $S\star w = S\star w'$. \end{rem} Words in the same K-Knuth class have the same $\star$ action. \begin{lemma} \label{star_and_K-Knuth} $S \star w = S \star w'$ if $w \equiv_K w'$. \end{lemma} \begin{proof} We may assume that $u$ and $v$ are empty. The result is obvious when $w=aa$ and $w'=a$. If $w$ and $w'$ are related by \eqref{E:Knuth left witness} or \eqref{E:Knuth right witness}, then the claim holds by Remark \ref{R:Knuth and star}. For \eqref{E:K Knuth braid} we have \begin{align*} S \star aba = S \star aab = S \star ab = S \star abb = S \star bab \end{align*} where the middle two equalities hold by the case of \eqref{E:K Knuth idempotent} and the first and last hold by the reverses of the Knuth relations \eqref{E:Knuth equal right witness} and \eqref{E:Knuth equal left witness}. \end{proof} The $\star$ product is monotonic under left multiplication. \begin{lem}\label{L:star left multiplication} For any words $w$ and $v$, $\emptyset \star v \subseteq \emptyset \star wv$ as subsets. \end{lem} \begin{proof} We perform an induction on the length of $v$. When $v$ has length 0, the claim is trivial. Now assume $v = v'x$ where $x \in \mathbb{Z}_{>0}$. By the induction hypothesis, $\emptyset \star v' \subseteq \emptyset \star wv'$. Next we consider the action of $x$. If $\emptyset \star v'x\not\subseteq \emptyset \star wv'x$, the only possibility is that some number $y$ in the latter set is replaced by $x$ while $y$ in the former set is not. If this happens, $x$ must replace a number smaller than $y$ in the former set, say $z$. However, $z$ is also in the latter set, so $x$ cannot replace $y$ there, a contradiction. Thus, $\emptyset \star v'x \subseteq \emptyset \star wv'x$.
\end{proof} We use the $\star$ action to define the right key of a RSSYT and in particular a decreasing tableau. \begin{defn}\label{D:right key of RSSYT} For a RSSYT $P$ of partition shape, we define its right key $K_+(P)$ to be the RSSYT whose $j$-th column is the column given by $\emptyset \star \mathrm{word}(P_{\ge j})$ where $P_{\ge j}$ is the decreasing tableau obtained by removing the first $j-1$ columns of $P$. \end{defn} \begin{rem} \label{R:right key is a key} By Lemma \ref{L:star left multiplication} $K_+(P)$ is a key. \end{rem} \subsubsection{Right key via Kjdt} One may also define the right key of a decreasing tableau using Kjdt. This is the decreasing analogue of the definition of left key of increasing tableau given in \cite{ReY}. We prove the implicit suggestion in \cite{ReY} that the rectification order is irrelevant. \begin{prop}\label{P:Kjdt right key} \begin{enumerate} \item \label{it:1} For any decreasing tableau $T$ of partition shape, the rightmost column of the Kjdt anti-rectification of $T$ with respect to an \textbf{arbitrary} rectification order, is equal to $\emptyset \star \mathrm{word}(T)$. In particular this column does not depend on the rectification order. \item \label{it:2} For any decreasing tableau $P$, the right key of $P$ is the key tableau whose $j$-th column is the rightmost column of any Kjdt anti-rectification of $P_{\ge j}$. \end{enumerate} \end{prop} \begin{proof} We only prove part \eqref{it:1} as it immediately implies part \eqref{it:2}. Let $T'$ be any Kjdt anti-rectification of $T$. By Theorem 6.2 of \cite{BS}, we know $\mathrm{word}(T') \equiv_K \mathrm{word}(T)$. By Lemma \ref{star_and_K-Knuth}, $\emptyset \star \mathrm{word}(T') = \emptyset\star\mathrm{word}(T)$. Since $T'$ is a decreasing tableau of antinormal shape, $\emptyset \star \mathrm{word}(T')$ agrees with the rightmost column of $T'$. 
\end{proof} \subsubsection{Right key and Hecke insertion} We determine the precise change in the right key of a decreasing tableau under the operation of Hecke insertion of a single value. We know the following lemma from Theorem 6.2 of \cite{BS}. \begin{lemma} Let $P'$ be a decreasing tableau, $x$ a value, and $P=(x \xrightarrow{H} P')$. Then $x\,\mathrm{word}(P') \equiv_K \mathrm{word}(P)$. \end{lemma} Then we have: \begin{lemma} \label{right_key_change} Let $P'$ be a decreasing tableau. Say the insertion $P = (x \xrightarrow{H} P')$ ends at column $c$. Then $K_+(P)$ and $K_+(P')$ agree everywhere except at column $c$. Moreover, if the insertion causes a contraction, then $K_+(P) = K_+(P')$. \end{lemma} \begin{proof} Clearly, $P$ and $P'$ agree on column $k$ if $k > c$. Thus, $K_+(P)$ and $K_+(P')$ agree on column $k$. Now assume $k \leq c$. Let $w$ (resp. $w'$) be the column word of $P$ (resp. $P'$) starting at the bottom of column $k$. Thus, either $w = w'$ or $w \equiv_K yw'$ for some number $y$, where $y$ is the number inserted to column $k$. The former case directly implies $K_+(P)$ and $K_+(P')$ agree on column $k$. Now we consider the latter case. Column $k$ of $K_+(P)$ (resp. $K_+(P')$) consists of numbers $ \emptyset \star w$ (resp. $\emptyset \star w'$). If $k < c$, then we know the first character in $w'$ is less than $y$. Thus, $\emptyset \star w = \emptyset \star yw' = \emptyset \star w'$. Finally, assume $k = c$ and a contraction occurs. Then clearly $w = w'$. Thus, $K_+(P)$ and $K_+(P')$ agree on column $c$. \end{proof} What can we say about the changes at column $c$ of $K_+(P')$ if no contraction occurs? This is answered by the following lemma: \begin{lemma} \label{right_key_column_c_change} Keep the notation from the previous lemma and assume contraction does not occur. Let $C$ and $C'$ be column $c$ of $K_+(P)$ and $K_+(P')$ respectively. Let $D$ be column $c+1$ of $K_+(P)$. 
Then as sets $C = C' \sqcup \{e\}$ where $e$ is the smallest number in $C$ that is not in $D$. \end{lemma} \begin{ex} In the following examples we do not distinguish between a column and its underlying set. \begin{enumerate} \item If $C=\{1,3,4,6,7\}$ and $D=\{1,3,7\}$ then $C'=\{1,3,6,7\}$. \item If $C=\{1,3,4,6,7\}$ and $D$ is empty then $C'=\{3,4,6,7\}$. \end{enumerate} \end{ex} \begin{proof} We prove the lemma by induction on the number of entries to the right of $C$. If there are no such entries, then column $c$ is the rightmost column. Thus, $C$ (resp. $C'$) agrees with column $c$ of $P$ (resp. $P'$). $C'$ has all but the smallest number in $C$, so the claim holds. Now assume there are entries to the right of $C$. Let $m$ be the last number in $\mathrm{word}(P)$. We may pretend that $m$ does not exist in $P$ and $P'$ and compute $C, C'$ and $D$. By the inductive hypothesis, $C' = \{s_1 < \dots < s_n \}$ and $C = \{s_1 < \dots < s_i < e < s_{i+1} < \dots < s_n\}$, where $e$ is the extra number. Moreover, $s_1, \dots, s_{i}$ are the $i$ smallest numbers in $D$. Now we consider the effect of $m$. There are two cases: \begin{enumerate} \item This ignored $m$ is in column $c+1$. Then column $c+1$ is the last column of $P$. Now we let $m$ act on $C$, $C'$ and $D$. $m$ simply adds itself to $D$. If $m$ changes $s_j$ into $m$ in $C$, then $s_j$ is also changed into $m$ in $C'$. Our claim clearly holds. Otherwise, $e$ in $C$ is changed into $m$. Then $s_{i+1}$ in $C'$ is changed to $m$, and $s_{i+1}$ becomes the ``extra number''. Numbers less than it in $C$ are $s_1, \dots, s_i$ and $m$. They are all in $D$. \item This ignored $m$ is not in column $c+1$. Now we let $m$ act on $C$, $C'$ and $D$. Assume $m$ changes $s_j$ into $m$ in $C$ with $j > i$. Then $s_j$ is also changed into $m$ in $C'$. For $D$, $m$ will not change $s_1, \dots, s_i$. Our claim still holds.
Now if $m$ changes $s_j$ into $m$ in $C$ with $j \leq i$, then $m$ also changes $s_j$ in $C'$ and $D$ into $m$. Our claim still holds. Finally, assume $m$ changes $e$ into $m$ in $C$. Then $m$ will change $s_{i+1}$ in $C'$. For $D$, $m$ will change a number other than $s_1, \dots, s_i$. Then $s_{i+1}$ becomes the ``extra number''. Numbers less than it in $C$ are $s_1, \dots, s_i$ and $m$. They are all in $D$. \end{enumerate} \end{proof} This lemma leads to the following. \begin{cor} Entries in $C'$ are entrywise weakly less than corresponding entries in $C$. \end{cor} \subsection{Left key of RSSYT} In this subsection, we derive some results about the left key, analogous to the results in the previous subsection. We start with a result similar to Lemma \ref{right_key_change}. \begin{lemma} \label{left_key_change} Take $(a,i) \in \mathcal{C}$. Assume $a = a'x$ and $i = i'y$, where $x, y \in \mathbb{Z}_{>0}$. Let $(P,Q) = \mathrm{Insert}((a,i))$ and $(P',Q') = \mathrm{Insert}((a',i'))$. Assume the insertion of $x$ ends at column $c$. Then $K_-(L(Q))$ and $K_-(L(Q'))$ agree everywhere except at column $c$. If the insertion of $x$ causes a contraction, then $K_-(L(Q)) = K_-(L(Q'))$. \end{lemma} \begin{proof} When a contraction occurs, clearly $L(Q) = L(Q')$ and the conclusion is trivial. Now assume there is no contraction. Then $L(Q)$ is obtained by appending a number $y$ beneath column $c$ of $L(Q')$. Then clearly the first $c-1$ columns of $K_-(L(Q))$ and $K_-(L(Q'))$ agree. Consider column $c'$ where $c' > c$. We know any number in column $c'$ of $L(Q)$ is strictly larger than $y$. Thus, when we compute column $c'$ of $K_-(L(Q))$, this $y$ will be ignored. \end{proof} As in the previous subsection, we would like to know what happens at column $c$ of $K_-(L(Q'))$ and $K_-(L(Q))$ when a contraction does not occur.
The following lemma is an analogue of Lemma \ref{right_key_column_c_change}: \begin{lemma} \label{left_key_column_c_change} Keep the notation from the previous lemma and assume contraction does not occur. Let $C$ and $C'$ be column $c$ of $K_-(L(Q))$ and $K_-(L(Q'))$ respectively. Then $C = C' \sqcup \{e\}$ where for $c=1$, $e=y$ and for $c\ge2$, $e$ is the smallest number in $D$ that is not in $C'$, where $D$ is column $c-1$ of $K_-(L(Q))$. \end{lemma} \begin{ex} We have the following examples: \begin{enumerate} \item If $C'=\{1,3,6,7\}$ and $D=\{1,3,4,6,7,8\}$ then $C=\{1,3,4,6,7\}$. \item If $C'$ is empty and $D=\{1,3,4,6,7,8\}$ then $C=\{1\}$. \end{enumerate} \end{ex} \begin{proof} If $c = 1$, our claim is immediate. Thus, we assume $c \geq 2$. Let $C_1, C_2, \dots$ be the columns of $L(Q)$ and let $C_1', C_2', \dots$ be the columns of $L(Q')$. Then $C = C_1 \triangleleft \dots \triangleleft C_c$, $C' = C_1' \triangleleft \dots \triangleleft C_c'$ and $D = C_1 \triangleleft \dots \triangleleft C_{c-1}$. We prove our claim by induction on $c$. For the base case, we assume $c = 2$. Then $D = C_1$. Assume $C_2'=\{s_1 < \dots < s_m\}$. Then $C_2=\{y < s_1 < \dots < s_m\}$. Assume $C_1=\{t_1 < t_2 < \dots\}$. Then we know $y \leq t_1$. When we compute $C_1' \triangleleft C_2'$, consider two cases: \begin{enumerate} \item Case 1: $s_j$ picks $t_j$ for all $j \in [m]$. When we compute $C_1 \triangleleft C_2$, $y$ picks $t_1$. Since $s_1 < s_2 \leq t_2$, $s_1$ picks $t_2$. Consequently, $s_j$ picks $t_{j+1}$ for all $j \in [m]$. Thus, $C_1' \triangleleft C_2'$ contains $t_1, \dots, t_m$ while $C_1 \triangleleft C_2$ contains $t_1, \dots, t_m, t_{m+1}$. Our claim is immediate. \item Case 2: Otherwise let $j$ be the smallest index such that $s_j$ in $C_2'$ does not pick $t_j$. Thus, $C_1' \triangleleft C_2'$ has $t_1, \dots, t_{j-1}$ but does not have $t_j$. When we compute $C_1 \triangleleft C_2$, similar to the previous case, $y, s_1, \dots, s_{j-1}$ will pick $t_1, \dots, t_j$.
Then $s_j$ will make the same choice as in $C_2'$. From then on, numbers in $C_2$ will make the same choices as in $C_2'$. Thus, $C_1 \triangleleft C_2$ has all numbers in $C_1' \triangleleft C_2'$ together with $t_j$. Our claim is proved. \end{enumerate} Now we do the inductive step. We first ignore $C_1$ and compute $C$, $C'$ and $D$. By the inductive hypothesis, we may assume $C' = \{s_1 < \dots < s_n \}$ and $C = \{s_1 < \dots < s_i < e < s_{i+1} < \dots < s_n\}$, where $e$ is the extra number. Assume $D = \{t_1 < t_2 < \dots\}$. Then we know $t_j = s_j$ for all $j \leq i$ and $e = t_{i+1}$. Now we consider the effect of $C_1$. We need to study $C_1 \triangleleft C$, $C_1 \triangleleft C'$, and $C_1 \triangleleft D$. $s_1, \dots, s_i$ make the same choices in all three scenarios. Notice that since $K_-(L(Q))$ is a key, numbers in $C_1 \triangleleft C$ must appear in $C_1 \triangleleft D$. Thus, when we study $C_1 \triangleleft C$, we can ignore numbers in $C_1$ not picked by $D$. The same is true for $C_1 \triangleleft C'$. Now we study two cases: \begin{enumerate} \item Case 1: $s_j$ in $C'$ and $t_j$ in $D$ make the same choices for all $i < j \leq n$. In $C_1 \triangleleft C$, $e$ picks what $t_{i+1}$ picks. Consequently, $s_j$ picks what $t_{j+1}$ picks for all $i < j \leq n$. Our claim is immediate. \item Case 2: Otherwise let $j$ be the smallest index such that $s_j$ in $C'$ does not pick what $t_j$ picks. Thus, $s_{i+1}, \dots, s_{j-1}$ in $C'$ make the same choices as $t_{i+1}, \dots, t_{j-1}$. When we compute $C_1 \triangleleft C$, similar to the previous case, $e, s_{i+1}, \dots, s_{j-1}$ will make the same choices as $t_{i+1}, \dots, t_j$. Then $s_j$ will make the same choice as in $C'$. From then on, numbers in $C$ will make the same choices as in $C'$. Our claim is proved. \end{enumerate} \end{proof} \begin{cor} Values in $C'$ are less than or equal to corresponding values in $C$.
\end{cor} \subsection{Proof of Theorem \ref{compatible_word_rule}} Now we can prove Theorem \ref{compatible_word_rule}. It is enough to prove the following: \begin{lemma} \label{compatible_word_mainlemma} \begin{enumerate} \item Take $(a,i) \in \mathcal{C}$. Assume $a = a'x$ and $i = i'y$, where $x, y \in \mathbb{Z}_{>0}$. Let $(P,Q) = \mathrm{Insert}((a,i))$ and $(P',Q') = \mathrm{Insert}((a',i'))$. Assume $K_+(P') \geq K_-(L(Q'))$. Assume the insertion of $x$ ends at column $c$ and does not cause a contraction. Then column $c$ of $K_+(P)$ is entrywise greater than or equal to column $c$ of $K_-(L(Q))$. \item Take $(P,Q) \in \mathcal{T}_P$. Find the smallest number in $Q$, breaking ties by picking the rightmost. Suppose it is $y$ in column $c$, living in an entry that contains only it. We remove that entry in $Q$ and invoke Hecke reverse insertion on the corresponding entry of $P$ with $\alpha = 1$. Let $x$ be the output. Then we must have $x \geq y$. Moreover, assume we get $(P', Q')$ after the process. Then column $c$ of $K_+(P')$ is entrywise greater than or equal to column $c$ of $K_-(L(Q'))$. \end{enumerate} \end{lemma} Why are these two statements enough? We can use induction to prove $\mathrm{Insert}(\mathcal{C}_P) \subseteq \mathcal{T}_P$ and $\mathrm{RevInsert}(\mathcal{T}_P) \subseteq \mathcal{C}_P$. For the former, we keep all notation from the first part of Lemma \ref{compatible_word_mainlemma}. Then clearly $(a',i') \in \mathcal{C}_{P'}$. By induction on the length of $a$, we may assume $K_+(P') \geq K_-(L(Q'))$ and need to show $K_+(P) \geq K_-(L(Q))$. The only place where things can go wrong is at column $c$ when no contraction occurs. Thus, studying this case is enough. The latter is similar. Part 2 of Lemma \ref{compatible_word_mainlemma} guarantees $(P', Q') \in \mathcal{T}_{P'}$. Then by the inductive hypothesis, $\mathrm{RevInsert}(P', Q') \in \mathcal{C}_{P'}$. Appending $x$ and $y$ respectively makes the pair an element of $\mathcal{C}_P$.
Now, we prove Lemma \ref{compatible_word_mainlemma}: \begin{proof} We begin with the first part. Keep all notation and assumptions in the first part of the lemma. We proceed by considering two cases. First assume $c = 1$. Then column 1 of $K_-(L(Q))$ (resp. $K_-(L(Q'))$) agrees with column 1 of $L(Q)$ (resp. $L(Q')$). Thus, column 1 of $K_-(L(Q))$ is obtained by appending $y$ on the bottom of column 1 of $K_-(L(Q'))$. Since column 1 of $K_+(P')$ is entrywise less than or equal to column 1 of $K_+(P)$, we only need to worry about the new entry on the bottom. Clearly, the bottom entry in column 1 of $K_+(P)$ equals a number in $P$, which is a number in $a$. It cannot be smaller than $y$, which is the smallest number in $i$. Thus, we are done. Now assume $c > 1$. Let $t_1 < \dots < t_m$ be the numbers in column $c$ of $K_+(P')$. Let $s_1 < \dots < s_m$ be the numbers in column $c$ of $K_-(L(Q'))$. Then we know $t_j \geq s_j$. Let $e$ be the extra number in column $c$ of $K_-(L(Q))$ and assume it is the $i^{th}$ smallest number in this column. By Lemma \ref{left_key_column_c_change}, if $j \leq i$, we have the following: \begin{equation} \begin{split} & \: j^{th} \textrm{ smallest number in column } c \textrm{ of } K_-(L(Q)) \\ = & \: j^{th} \textrm{ smallest number in column } c - 1 \textrm{ of } K_-(L(Q)) \\ \leq & \: j^{th} \textrm{ smallest number in column } c - 1 \textrm{ of } K_+(P)\\ \leq & \: j^{th} \textrm{ smallest number in column } c \textrm{ of } K_+(P)\\ \end{split} \end{equation} Now if $j > i$, the $j^{th}$ smallest number in column $c$ of $K_-(L(Q))$ is $s_{j-1}$. The $j^{th}$ smallest number in column $c$ of $K_+(P)$ is at least $t_{j-1}$. Clearly, our inequality still holds. Now we prove the second part. Keep all notation and assumptions in the second part of the lemma. First, $y \leq x$ is immediate: $y$ is the smallest number in $Q$.
If an entry in $P$ is less than $y$, then the right key entry at that position is also less than $y$, which is a contradiction. Now we only need to prove the bound on the keys. Let $t_1 < \dots < t_m$ be the numbers in column $c$ of $K_+(P)$. Let $s_1 < \dots < s_m$ be the numbers in column $c$ of $K_-(L(Q))$. Then we know $t_j \geq s_j$. Assume $t_i$ is not in column $c$ of $K_+(P')$. For any $j < i$, we have: \begin{equation} \begin{split} & \: j^{th} \textrm{ smallest number in column } c \textrm{ of } K_+(P') \\ = & \: j^{th} \textrm{ smallest number in column } c + 1 \textrm{ of } K_+(P') \\ \geq & \: j^{th} \textrm{ smallest number in column } c + 1 \textrm{ of } K_-(L(Q'))\\ \geq & \: j^{th} \textrm{ smallest number in column } c \textrm{ of } K_-(L(Q'))\\ \end{split} \end{equation} Now, if $j \geq i$, then the $j^{th}$ smallest number in column $c$ of $K_+(P')$ is $t_{j+1}$. The $j^{th}$ smallest number in column $c$ of $K_-(L(Q'))$ is $s_{j+1}$ or $s_j$. Our inequality holds in either case. \end{proof} \begin{rem} \label{R:semistandard compatible bounded} There is a semistandard analogue of Theorem \ref{T:compatible bounded}. To state this result, we need to modify a few definitions: \begin{enumerate} \item In the definition of $\mathrm{Insert}$, $\mathrm{RevInsert}$ and $\mathcal{C}_P$, ``Hecke column insertion'' is replaced by ``RSK column insertion''. \item We change $\mathcal{T}$ to be the set of pairs $(P, Q)$, where $P$ and $Q$ are RSSYTs of the same shape. \item In the definition of $\mathcal{T}_P$, $L(Q)$ is replaced by $Q$ and $K_+(P)$ is the classical right key of a RSSYT. \end{enumerate} With these definitions, Theorem \ref{restricting_Insert} is the semistandard analogue. It can be proved by exactly the same argument. \end{rem} \section{K-theoretic analogue of the reverse complement map} \label{S:reverse complement} In this section we investigate the map $T \mapsto T^\sharp$ of Proposition \ref{P:decreasing to increasing}.
It is a bijection from decreasing tableaux to increasing tableaux. Recall that the map is defined as follows. Take a decreasing tableau $T$. Find the smallest rectangle that contains $T$. Then we do several ``iterations'' of Kjdt to anti-rectify $T$. In each iteration, we perform Kjdt at the leftmost empty space of each row, from top to bottom. Then we rotate the result by 180 degrees and obtain $T^\sharp$. \begin{ex} \label{E:anti-rectify decreasing} We recompute $J^\searrow(T)$ from Example \ref{X:Krect} using the above iterations. We start with the decreasing tableau $T$, along with the rectification order (a tableau giving the order in which boxes are occupied during the computation of $J^\searrow(T)$), and another tableau whose entry $i$ means that the given box is occupied during the $i$-th iteration. \[ T = \begin{ytableau} 9 & 8 & 7 & 5 & 3\\ 7 & 5 & 4 & 3\\ 4 & 2 & 1 \\ 3\\ 1\\ \end{ytableau} \quad \ytableaushort{\none\none\none\none\none,\none\none\none\none1,\none\none\none25,\none368a,\none479b} \quad \ytableaushort{\none\none\none\none\none,\none\none\none\none1,\none\none\none12,\none1234,\none1234} \] Let $T^{(i)}$ be the skew tableau just before the $i$-th iteration. So $T=T^{(1)}$ and $T^{(m)}=J^\searrow(T)$ where $m$ is the number of columns of $T$.
$T^{(2)}$, $T^{(3)}$, $T^{(4)}$ and $T^{(5)}$ are listed below: $$ \begin{ytableau} \none & 9 & 8 & 7 & 5\\ \none & 7 & 5 & 4 & 3\\ \none & 4 & 2 & 1 \\ \none & 3\\ 9 & 1\\ \end{ytableau} \:\:\:\: \begin{ytableau} \none & \none & 8 & 7 & 5\\ \none & \none & 5 & 4 & 3\\ \none & \none & 4 & 2 & 1 \\ \none & 9 & 3\\ 9 & 7 & 1\\ \end{ytableau} \:\:\:\: \begin{ytableau} \none & \none & \none & 7 & 5\\ \none & \none & \none & 5 & 3\\ \none & \none & 8 & 4 & 1 \\ \none & 9 & 5 & 2\\ 9 & 7 & 3 & 1\\ \end{ytableau} \:\:\:\: \begin{ytableau} \none & \none & \none & \none & 7\\ \none & \none & \none & \none & 5\\ \none & \none & 8 & 7 & 3 \\ \none & 9 & 5 & 4 & 2\\ 9 & 7 & 3 & 2 & 1\\ \end{ytableau} $$ Once we rotate $T^{(5)}$ by 180 degrees, we get: $$ T^\sharp = \begin{ytableau} 1 & 2 & 3 & 7 & 9\\ 2 & 4 & 5 & 9\\ 3 & 7 & 8 \\ 5\\ 7\\ \end{ytableau} $$ Consider the rightmost columns of the tableaux $T^{(5)},T^{(4)},\dotsc,T^{(1)}$ in that order. We get \[ \ytableaushort{75553,5333,311,2,1} \] Lemma \ref{skew-right-key-shifts} implies that this is $K_+(T)$. \end{ex} Now, we have: \begin{prop} \label{reverse_complement_fixes_keys} Let $T$ be a decreasing tableau. Then $$ K_+(T) = K_-(T^\sharp) $$ \end{prop} \begin{ex} In Example \ref{E:anti-rectify decreasing}, $K_+(T)$ and $K_-(T^\sharp)$ are both $(3,1,5,0,4,0,1)$. \end{ex} The rest of this section aims to prove it. We start with a few definitions. \begin{defn} Let $T$ be a decreasing skew tableau. We use $T_{\leq c, \leq r}$ to denote the decreasing skew tableau obtained by keeping the first $c$ columns and first $r$ rows of $T$. Analogously, we define similar notations such as $T_{\geq c, \leq r}$. \end{defn} \begin{defn} We generalize $K_+(T)$ to decreasing skew tableaux $T$. Let $K_+(T)$ be the normal shape decreasing tableau whose $c$-th column equals $\emptyset \star w(T_{\geq c, < \infty})$ for all $c$. 
\end{defn} \begin{ex} \label{generalized-right-key-example} Let $T^{(2)}$, \dots, $T^{(5)}$ be the tableaux in Example \ref{E:anti-rectify decreasing}. Their right keys are: $$ \begin{ytableau} 7 & 7 & 5 & 5 & 5\\ 5 & 5 & 3 & 3 & 3\\ 3 & 3 & 1 & 1 \\ 2 & 2\\ 1 & 1\\ \end{ytableau} \:\:\:\: \begin{ytableau} 7 & 7 & 7 & 5 & 5 \\ 5 & 5 & 5 & 3 & 3 \\ 3 & 3 & 3 & 1 & 1 \\ 2 & 2 & 2\\ 1 & 1 & 1\\ \end{ytableau} \:\:\:\: \begin{ytableau} 7 & 7 & 7 & 7 & 5\\ 5 & 5 & 5 & 5 & 3\\ 3 & 3 & 3 & 3 & 1\\ 2 & 2 & 2 & 2\\ 1 & 1 & 1 & 1\\ \end{ytableau} \:\:\:\: \begin{ytableau} 7 & 7 & 7 & 7 & 7\\ 5 & 5 & 5 & 5 & 5\\ 3 & 3 & 3 & 3 & 3\\ 2 & 2 & 2 & 2 & 2\\ 1 & 1 & 1 & 1 & 1\\ \end{ytableau} $$ \end{ex} \begin{rem} We make a few observations about this generalization. \begin{enumerate} \item $K_+(T)$ is a key. \item The rightmost column of $K_+(T)$ and $T$ must agree. \item When $T$ is of normal shape, this definition agrees with our previous definition. \end{enumerate} \end{rem} Let us look at Example \ref{generalized-right-key-example} carefully. $T^{(4)}$ is obtained from $T^{(3)}$ by applying Kjdt on column 4 twice. Their right keys are the same except at column 4. Moreover, column 4 of $T^{(4)}$ agrees with column 3 of $T^{(3)}$. This phenomenon is captured in the next result: \begin{lemma} \label{skew_right_key_changes} Let $T$ be a decreasing skew tableau. Assume its $c$-th column ends at row $r_1$, while its $(c+1)$-th column ends at row $r_2$. Assume $r_1 > r_2$. We do Kjdt at the leftmost position of row $r_2 + 1, \dots, r_1$ and let $T'$ be the result. Then $K_+(T)$ and $K_+(T')$ agree everywhere except at column $c+1$. Moreover, column $c+1$ of $K_+(T')$ agrees with column $c$ of $K_+(T)$. \end{lemma} \begin{proof} Notice: $T_{>c+1, < \infty} = T'_{>c+1, < \infty}$. Thus, $K_+(T)_{>c+1, < \infty} = K_+(T')_{>c+1, < \infty}$. Now assume $i \leq c + 1$ and column $i$ of $T$ ends at row $j$. 
Column $i$ of $K_+(T)$ has numbers $\emptyset \star w(T_{\geq i, < \infty}) = \emptyset \star w(T_{\geq i, \leq j})$. Now we look at $w(T_{\geq i, \leq j})$. It is also $w(T_{= i, \leq j})w(T_{> i, \leq j})$. Because $T_{\leq i, \leq j}$ is reverse rectified, $\emptyset \star w(T_{\leq i, \leq j}) = \emptyset \star w(T_{=i, \leq j})$. Thus, column $i$ of $K_+(T)$ has numbers: $\emptyset \star w(T_{\leq i, \leq j}) w(T_{> i, \leq j}) = \emptyset \star w(T_{< \infty, \leq j})$ Next, if $i \leq c$, then we know column $i$ of $T'$ also ends at row $j$. Similar to above, column $i$ of $K_+(T')$ has numbers $\emptyset \star w(T'_{< \infty, \leq j})$. Notice that $T'_{< \infty, \leq j}$ is obtained from $T_{< \infty, \leq j}$ by a sequence of Kjdt moves. Thus, their column words are K-Knuth equivalent, so they yield the same results when acting on $\emptyset$. Finally, we study column $c+1$. Column $c+1$ of $K_+(T')$ is $\emptyset \star w(T'_{< \infty, \leq r_1})$. Column $c$ of $K_+(T)$ is $\emptyset \star w(T_{< \infty, \leq r_1})$. Clearly $T'_{< \infty, \leq r_1}$ is obtained from $T_{< \infty, \leq r_1}$ by K-jdt moves, so we are done. \end{proof} Then we can describe how the right key changes during the map $T \mapsto T^\sharp$. In Example \ref{generalized-right-key-example}, each iteration changes the right key in the following way: It copies column $c$ to column $c+1$, where $c$ goes from 4 to 1. This pattern holds in general: \begin{lemma} \label{skew-right-key-shifts} Let $T$ be a decreasing tableau. Let $C_1, \dots, C_m$ be columns of $K_+(T)$. Assume we finished $t$ iterations while computing $T^\sharp$. Then the right key of the current skew-shape tableau has columns: $C_1, C_1, \dots, C_1, C_2, \dots, C_{m-t}$ \end{lemma} \begin{proof} We prove by induction. When $t = 0$, the claim is trivial. Now assume the claim holds after $t$ iterations. Assume now column $i$ ends at $r_i$. 
During the $(t+1)^{th}$ iteration, the algorithm performs Kjdt at the leftmost position of rows $r_m + 1, r_m + 2, \dots, r_1$. How do these moves affect the right key? By Lemma \ref{skew_right_key_changes}, they copy column $c$ of the right key to column $c+1$, where $c$ goes from $m - 1$ to 1. Thus, after this iteration, the right key becomes: $C_1, C_1, \dots, C_1, C_2, \dots, C_{m-t-1}$. \end{proof} \begin{proof}[Proof of Proposition \ref{reverse_complement_fixes_keys}] Let $C_1, \dots, C_m$ be the columns of $K_+(T)$. By Lemma \ref{skew-right-key-shifts}, after $t$ iterations of the computation of $T^\sharp$, the rightmost column of the resulting skew tableau (call it $T^{(t+1)}$) is $C_{m - t}$. By definition we also have $T^{(m)} = J^\searrow(T)$. On the other hand, consider the computation of the $(m-t)$-th column $C$ of $K_-(T^\sharp)$. By definition we take the first $m-t$ columns of $T^\sharp$ within a tight rectangle, moving to the southeast via a Kjdt which infuses with a column superstandard tableau having $m-t-1$ columns. $C$ is the leftmost column of the result. Rotating by 180 degrees, this is the same as taking $(T^\sharp)^*=J^\searrow(T)$, using the last $m-t-1$ columns in a column superstandard Kjdt to move it to the northwest, and taking the rightmost column $C$. However, the $m-t-1$ iterations which slide into these columns are precisely undoing the last $m-t-1$ iterations in the computation of $J^\searrow(T)$. Therefore $C$ equals the rightmost column of $T^{(t+1)}$, which is $C_{m-t}$, as required. \end{proof} \begin{ex} We illustrate this argument on column 3 of $K_+(T)$, where $T$ is the tableau in Example \ref{E:anti-rectify decreasing}. By definition, here is how we compute column 3 of $K_-(T^\sharp)$. We start with $T^\sharp$. Then we apply Kjdt to anti-rectify $T^\sharp$.
We stop once the first 3 columns are anti-rectified: $$ \begin{ytableau} \none & \none & 1 & 7 & 9\\ \none & \none & 3 & 9\\ \textcolor{red}{1} & 2 & 4 \\ \textcolor{red}{3} & 4 & 5\\ \textcolor{red}{5} & 7 & 8\\ \end{ytableau} $$ Then the red numbers form column 3 of $K_-(T^\sharp)$. Notice that when we rotate this tableau, we get $T^{(3)}$ in Example \ref{E:anti-rectify decreasing}, whose rightmost column is column $3$ of $K_+(T)$. \end{ex} \section{Alternative descriptions of $K_+(T)$} \label{S:alternative right key} This section provides some alternative descriptions of the right key $K_+(T)$, where $T$ is a decreasing tableau. \begin{prop} \label{P: Alternative definitions of the right key} Let $T$ be a decreasing tableau. Let $T_{\ge j}$ be the decreasing tableau obtained by removing the first $j-1$ columns of $T$. Then the column $C$ computed by each of the following procedures agrees with column $j$ of $K_+(T)$. \begin{enumerate} \item $C =\emptyset \star \mathrm{word}(T_{\ge j})$. \item $C$ is the rightmost column of an \textbf{arbitrary} anti-rectification of $T_{\ge j}$. \item Assume column $j$ of $T$ has $m$ numbers. We conjugate $T_{\ge j}$ and obtain $T_{\ge j}'$. Then we invoke Hecke reverse column insertion at the end of columns $m, \dots, 1$ with $\alpha = 1$. Then $C$ is the set of output numbers. \end{enumerate} \end{prop} \begin{proof} Let $T$ be a decreasing tableau. Proposition \ref{P:Kjdt right key} shows that the first two methods for computing $K_+(T)$ agree. Next, we show procedure 3 agrees with procedure 1. We may assume $j = 1$. Thus we need to prove the following: if we conjugate $T$ to get $T'$ and invoke reverse insertion on each column of $T'$ from right to left, then the output agrees with $\emptyset \star \mathrm{word}(T)$. We prove this by induction on $m$, the number of rows in $T$. When $m = 1$, our claim is trivial. Now we do the inductive step. Assume both column 1 and column 2 of $T$ have $m$ entries.
Notice that the result of the reverse insertions and $\emptyset \star \mathrm{word}(T)$ are not affected if we ignore column 1 of $T$. Thus, by repeatedly removing columns on the left, we may assume $T$ has only one column with $m$ entries. Now, we invoke reverse Hecke insertion at the end of column $m$ of $T'$; let $x_m$ be the output and let $P'$ be the resulting tableau. We conjugate $P'$ and get $P$. Then $\mathrm{word}(T') \equiv_K x_m \mathrm{word}(P')$. Define the row word of $T$ to be the following: read the entries of $T$ from bottom to top, within each row from left to right. Then $\mathrm{rev}(\mathrm{word}(T'))$ is the row word of $T$. By Theorem 5.4 of \cite{BS}, the row word of $T$ is K-Knuth equivalent to $\mathrm{word}(T)$. Thus, we know $$\emptyset \star \mathrm{word}(T) = \emptyset \star \mathrm{rev}(\mathrm{word}(T')) = \emptyset \star \mathrm{rev}(\mathrm{word}(P'))x_m$$ Similarly, $\mathrm{rev}(\mathrm{word}(P'))$ is the row word of $P$, so $$ \emptyset \star \mathrm{word}(P) = \emptyset \star \mathrm{rev}(\mathrm{word}(P')) $$ Next, invoke reverse insertion at columns $m-1, m-2, \dots, 1$ of $P'$ and get numbers $x_{m-1}, \dots, x_1$. Since $P$ has $m-1$ rows, our inductive hypothesis says $$ \emptyset \star \mathrm{word}(P) = \{x_{m-1}, \dots, x_1\} $$ By Lemma \ref{Pieri property of Hecke insertion}, $x_m > x_{m-1} > \dots > x_1$. Thus, \begin{equation*} \begin{split} \emptyset \star \mathrm{word}(T) = & \emptyset \star \mathrm{rev}(\mathrm{word}(P'))x_m \\ = & \emptyset \star \mathrm{word}(P)x_m \\ = & \{x_{m-1}, \dots, x_1\} \star x_m \\ = & \{x_{m}, \dots, x_1\} \end{split} \end{equation*} \end{proof}
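The descriptions above are readily machine-checked. The following Python sketch (the function and variable names are ours, not from the text) implements the $\star$-action and procedure (1), assuming the column word reads the columns of a tableau left to right, each column bottom to top; with this assumed convention it reproduces $K_+(T)$ for the tableau $T$ of Example \ref{E:anti-rectify decreasing}.

```python
def star(S, word):
    """Right action S * w: each letter m of w replaces the smallest
    element of S of value >= m, or is adjoined if no such element exists."""
    S = set(S)
    for m in word:
        at_least_m = [x for x in S if x >= m]
        if at_least_m:
            S.discard(min(at_least_m))  # remove m', the smallest element >= m
        S.add(m)
    return S

def column_word(columns):
    # Assumed reading convention: columns left to right, each bottom to top.
    return [entry for col in columns for entry in reversed(col)]

def right_key(columns):
    # Column j of K_+(T) is (empty set) * word(T_{>= j}); columns of a
    # decreasing tableau are listed top to bottom, so we sort decreasingly.
    return [sorted(star(set(), column_word(columns[j:])), reverse=True)
            for j in range(len(columns))]

# The tableau T of the example, given by its columns (top to bottom).
T = [[9, 7, 4, 3, 1], [8, 5, 2], [7, 4, 1], [5, 3], [3]]
print(right_key(T))  # [[7, 5, 3, 2, 1], [5, 3, 1], [5, 3, 1], [5, 3], [3]]
```

The printed columns agree with the key displayed at the end of Example \ref{E:anti-rectify decreasing}.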
Great condo in a convenient location in Eastman, just past the Eastman Visitor's Center! This condo is clean, comfortable, and has everything that you need: 2 Queen-sized beds and 2 twins. Easy deck for grilling and chillin', too! Winter rate: $1450/month plus utilities. Summer rates: $1300/week or $3500/month plus utilities. Short term rentals are subject to NH Rooms & Meals tax. NH Meals & Rooms Tax License #055984.
Q: Python Equivalent for Deno's ensureDir What's the Python equivalent to Deno's ensureDir? Usage example: import { ensureDir, ensureDirSync } from "https://deno.land/std/fs/mod.ts"; ensureDir("./logs").then( () => console.log("Success Created"), ).catch((err) => console.log(err)); A: You can use pathlib.Path.mkdir. Example: import pathlib pathlib.Path.home().joinpath(".local", "extra", "path").mkdir( parents=True, exist_ok=True )
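For the common case where you also want missing parent directories created, `os.makedirs` with `exist_ok=True` behaves like Deno's `ensureDir` as well (the directory names below are just for illustration):

```python
import os
import tempfile

def ensure_dir(path: str) -> None:
    # Create the directory and any missing parents; do nothing if it
    # already exists (mirrors Deno's ensureDir semantics).
    os.makedirs(path, exist_ok=True)

# Demo inside a throwaway temp directory.
root = tempfile.mkdtemp()
target = os.path.join(root, "logs", "2024")
ensure_dir(target)
ensure_dir(target)  # idempotent: the second call raises nothing
```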
Q: Symfony2 is running extremely slow on php7-fpm on multi docker I am trying to setup symfony2 environment on multi-docker using php7-fpm and nginx. The whole thing works. However, for some inexplicable reason, every request takes 20-30 seconds to complete (the whole page loads in about 5 minutes). And during the load, the CPU spikes. I tried playing with php fpm config, nginx config. Even copied the exact ones that I am using for another app, which works perfectly fine. At this point, I am stuck, and I have no idea even where to start (continue). My docker-compose.yml looks like this: web: build: ./ container_name: indago_web external_links: - mysql:mysql volumes: - ./php-fpm/www.conf:/usr/local/etc/php-fpm.d/www.conf - ~/Dev/indago/indago:/var/www/indago environment: ING_DB_USER: priz_wp ING_DB_PASSWORD: Shurik_0(8 ING_DB_NAME: priz_wp ING_DB_HOST: mysql proxy: image: nginx container_name: indago_proxy links: - web:web ports: - "80:80" volumes: - ./web/nginx.conf:/etc/nginx/nginx.conf - ./web/default.conf:/etc/nginx/conf.d/default.conf Dockerfile of the webapp: FROM php:7-fpm RUN apt-get update && apt-get install -y vim \ libxml2-dev libicu-dev \ && rm -rf /var/lib/apt/lists/* && apt-get autoremove # Install Composer RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer RUN composer --version # Set timezone RUN rm /etc/localtime RUN ln -s /usr/share/zoneinfo/America/Vancouver /etc/localtime RUN "date" RUN docker-php-ext-install pdo pdo_mysql opcache ctype json xml tokenizer mbstring iconv posix intl RUN { \ echo 'opcache.memory_consumption=128'; \ echo 'opcache.interned_strings_buffer=8'; \ echo 'opcache.max_accelerated_files=4000'; \ echo 'opcache.revalidate_freq=2'; \ echo 'opcache.fast_shutdown=1'; \ echo 'opcache.enable_cli=1'; \ } > /usr/local/etc/php/conf.d/opcache-recommended.ini RUN { \ echo 'upload_max_filesize = 100M'; \ echo 'post_max_size = 100M'; \ } > /usr/local/etc/php/conf.d/cutom.ini CMD mkdir 
-p /var/www/indago VOLUME /var/www/indago CMD chown -R www-data:www-data /var/www/indago WORKDIR /var/www/indago #RUN usermod -u 1000 www-data #USER www-data CMD ["php-fpm"] And default.conf server { server_name indago-local.com www.indago-local.com; root /var/www/indago/web; location / { # try to serve file directly, fallback to app.php try_files $uri /app_dev.php$is_args$args; } # DEV # This rule should only be placed on your development environment # In production, don't include this and don't deploy app_dev.php or config.php location ~ ^/(app_dev|config)\.php(/|$) { fastcgi_pass web:9000; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; # When you are using symlinks to link the document root to the # current version of your application, you should pass the real # application path instead of the path to the symlink to PHP # FPM. # Otherwise, PHP's OPcache may not properly detect changes to # your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126 # for more information). fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # PROD location ~ ^/app\.php(/|$) { fastcgi_pass web:9000; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; # When you are using symlinks to link the document root to the # current version of your application, you should pass the real # application path instead of the path to the symlink to PHP # FPM. # Otherwise, PHP's OPcache may not properly detect changes to # your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126 # for more information). fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; fastcgi_param DOCUMENT_ROOT $realpath_root; # Prevents URIs that include the front controller. 
This will 404: # http://domain.tld/app.php/some-path # Remove the internal directive to allow URIs like this internal; } # return 404 for all other php files not matching the front controller # this prevents access to other php files you don't want to be accessible. location ~ \.php$ { return 404; } error_log /var/log/nginx/project_error.log; access_log /var/log/nginx/project_access.log; } In www.conf I updated pm.max_children = 20 nginx.conf is pretty simple (and, in fact, does not change anything) user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 2048; multi_accept on; use epoll; } http { server_tokens off; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 15; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; access_log on; error_log on; gzip on; gzip_disable "msie6"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-available/*; open_file_cache max=100; } As some additional information: memory on the fpm container does not swap, and I am not getting any errors anywhere (before I bumped the number of children, I was maxing out on them). Thank you in advance for any ideas on how to resolve this.
\section{Introduction} \input{sections/intro.tex} \section{Related Work} \input{sections/rw.tex} \section{Action Anticipation with \textsc{Anticipatr}} \input{sections/model.tex} \section{Experiments} \input{sections/experiments.tex} \section{Conclusion} \input{sections/conclusion.tex} \clearpage \bibliographystyle{splncs04} \subsection{Stage 1: Segment-level Training} \label{sec:pretraining} In this stage, the segment encoder is trained on a segment-level prediction task to learn representations for individual segments. See Fig.~\ref{fig:model} (\textit{left}) for an overview. \noindent \textbf{Segment Encoder.} We design the segment encoder network $E_s$ as a sequence of $\ell_s$ transformer blocks containing a multi-head self-attention module followed by layernorm and a feed forward network~\cite{vaswani2017attention}. This network is trained on the task of segment-level action anticipation. \noindent \textbf{Training.} During training, the segment encoder receives a segment (sequence of frames from a video) as input and predicts the set of action labels that would occur at any time in the future (starting from the temporal boundary, \textit{i.e.}, the end of the segment, until the end of that video) without inferring when they would occur. Depending on the segment, there could be multiple actions occurring between the end of the segment and the end of the video. Thus, we formulate this training task as a multi-label classification problem. The training data for the segment encoder is derived from the training set in the original video dataset containing videos with action annotations. These input segments are obtained using the action boundaries provided in the training set. We do not require any additional annotations. Formally, given a video $\mathbf{v}$ containing $T$ frames, a segment $\mathbf{v}_{s}^{(t',t'')}$, spanning time indices $t'$ to $t''$ where $0 \leq t' < t'' < T$, is taken as input.
For this segment, the target is a binary vector $\mathbf{c}_{s}$ (dimension $|\mathcal{C}|$) corresponding to the action labels that occur after the temporal boundary of the segment until the end of the video ($[\mathbf{v}^{t''+1},\ldots,\mathbf{v}^{T}]$). The segment encoder $E_s$ receives the segment $\mathbf{v}_{s}^{(t',t'')}$ along with positional encodings $\mathbf{p}_{s}^{(t',t'')}$ (details in supplementary). The output of the encoder is an embedding $\mathbf{h} = [\mathbf{h}^{1},\ldots,\mathbf{h}^{t''-t'+1}]$ of dimension $(t''-t'+1) \times d_s$ where $d_s$ is the channel dimension. The output embeddings are then averaged along the time dimension and fed into a linear layer $F$ followed by a sigmoid activation $\sigma$ to obtain future action probabilities $\mathbf{\hat c}_{s}$ of dimension $|\mathcal{C}|$, expressed as: \begin{equation} \begin{split} \mathbf{h} &= E_s\big(\mathbf{v}_{s}^{(t',t'')}, \mathbf{p}_{s}^{(t',t'')}\big)\\ \mathbf{\hat{c}}_{s} &= \sigma\left(F\Bigg(\frac{1}{t''-t'+1}\sum_{i=1}^{t''-t'+1}\mathbf{h}^{i}\Bigg)\right). \end{split} \end{equation} Here, $\mathbf{\hat{c}}_{s}$ is the output of a multi-label classifier, where each element $\hat c^{j}_{s}$ of $\mathbf{\hat c}_{s}$ denotes the probability of the corresponding action category $j \in \mathcal{C}$. This network is trained using a binary cross-entropy loss between the prediction vector $\mathbf{\hat{c}}_{s}$ and the target vector $\mathbf{c}_{s}$. Once trained, the linear layer $F$ is discarded and the segment encoder $E_s$ is used to obtain segment-level representations for the action anticipation stage. \subsection{Stage 2: Action Anticipation} \label{sec:anticipation} In the second stage of our approach, we use an encoder-decoder model that contains two encoders: (i) the segment encoder from the first stage, and (ii) a video encoder that encodes the observed video as a whole.
The outputs of these two encoders along with an anticipation duration are fed into an anticipation decoder which uses the representations from the two encoders to predict a set of future action instances over the given anticipation duration. See Fig.~\ref{fig:model} (\textit{right}). \noindent \textbf{Video Encoder.} The video encoder receives an observed video containing $T_o$ frames. We denote the input as $\mathbf{v}_{o}= [\mathbf{v}^{1},\ldots,\mathbf{v}^{T_o}]$. We design the encoder network $E_v$ as a sequence of $\ell_v$ transformer blocks~\cite{vaswani2017attention} containing a multi-head self-attention module followed by layernorm and feed forward network. The encoder receives the features corresponding to the observed video $\mathbf{v}_{o}$ as input. As the self-attention module is permutation-invariant, we provide additional information about the sequence in the form of sinusoidal positional encodings~\cite{vaswani2017attention} $\mathbf{p}_{o} = [\mathbf{p}^{1},\ldots,\mathbf{p}^{T_o}]$ (see supplementary for additional explanation). Here, each element in the positional encoding sequence is added to the corresponding element in the video features and then fed into the encoder block. The encoder models temporal relationships in the observed video and transforms the input sequence to a contextual representation $\mathbf{h}_{v} = [\mathbf{h}_{v}^{1},\ldots,\mathbf{h}_{v}^{T_o}]$, expressed as: \begin{equation} \mathbf{h}_{v} = E_v(\mathbf{v}_{o}, \mathbf{p}_{o}). \end{equation} \noindent \textbf{Encoding Video Segments.} In parallel with the video encoder, the input video is divided into a sequence of segments using temporal sliding windows. Specifically, a temporal window of size $k$ starting from frame index $i$ obtains a segment $[\mathbf{v}^{i},\ldots, \mathbf{v}^{i+k-1}]$, which is fed to the segment encoder to obtain the outputs $\mathbf{h}_{s}^{i},\ldots,\mathbf{h}_{s}^{i+k-1}$.
The starting index $i$ slides across time with $i \in \{1,k+1,2k+1,\ldots,(T_o-k+1)\}$ generating the temporal windows, where the window size $k$ is a hyperparameter. The outputs of the segment encoder for all temporal windows are concatenated to obtain $\mathbf{h}_{s} = [\mathbf{h}_{s}^{1},\ldots,\mathbf{h}_{s}^{T_o}]$. During implementation, the representations can still be obtained in one forward pass of the segment encoder by stacking segments along the batch dimension of the input. This segment-level representation of the video is complementary to the video-level representation that encodes the ongoing activity in the video. \noindent \textbf{Anticipation Decoder.} Given the video-level and the segment-level representations, the decoder aims to predict a set of future action instances over a given anticipation duration. The predicted set contains action instances of the form $\textrm{(label, start time, end time)}$. The anticipation decoder receives the following inputs: (i) \textit{anticipation queries} $\mathbf{q}_0$, (ii) anticipation duration $T_a$ over which actions are to be predicted, (iii) encoded representation $\mathbf{h}_{v}$ from video encoder $E_v$, and (iv) encoded representation $\mathbf{h}_{s}$ from segment encoder $E_s$. The anticipation queries contain $N_a$ elements, \textit{i.e.}, $\mathbf{q}_0 = [\mathbf{q}_0^{1},\ldots,\mathbf{q}_0^{N_a}]$, wherein each query is a learnable positional encoding (more details in supplementary). We consider $N_a$ as a hyperparameter that is constant for a dataset and is chosen sufficiently large, \textit{i.e.}, larger than the maximum number of action instances to be anticipated per video in the overall dataset. Each query $\mathbf{q}_0^{i}$ is then fed into a linear layer (weights shared for all values of $i$) along with the anticipation duration $T_a$ to obtain time-conditioned anticipation queries $\mathbf{q}_a^{i}$ for $i=1,\ldots,N_a$.
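As an illustrative sketch (not the authors' code), the time conditioning of the queries can be written as follows; the exact shapes and the choice to append $T_a$ as one extra input feature to a shared linear layer are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_a, d = 8, 16  # number of anticipation queries, model dimension

# Learnable anticipation queries (an embedding table, one row per query).
q0 = rng.standard_normal((N_a, d))

# Shared linear layer producing time-conditioned queries; here we append
# the anticipation duration T_a to each query as one extra input feature.
W = rng.standard_normal((d + 1, d)) / np.sqrt(d + 1)
b = np.zeros(d)

def time_conditioned_queries(T_a: float) -> np.ndarray:
    t = np.full((N_a, 1), T_a)
    return np.concatenate([q0, t], axis=1) @ W + b  # shape (N_a, d)

# The same learned queries, conditioned on two different horizons.
qa_short = time_conditioned_queries(5.0)
qa_long = time_conditioned_queries(50.0)
```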
This time conditioning enables the anticipation decoder to predict actions over any specified anticipation duration. The decoder network $D$ consists of $\ell_d$ blocks, wherein each block contains a cascade of attention layers. The first attention layer is the multi-head self-attention block which models relations among the anticipation queries. The second attention layer is a multi-head encoder-decoder attention layer that maps the queries and the segment-level representations from the segment encoder. The third attention layer is another multi-head encoder-decoder attention layer that maps the output of the previous layer to the video-level representation corresponding to the input. This third attention layer is followed by a feedforward network. The output of the decoder $\mathbf{y} = [\mathbf{y}^{1},\ldots,\mathbf{y}^{N_a}]$ serves as a latent representation of the action instances in the videos, expressed as: \begin{equation} \mathbf{y} = D(\mathbf{q}_a,\mathbf{h}_v,\mathbf{h}_s) \end{equation} The decoder output is used to predict the set of action instances $\hat{\mathcal{A}}= \{ \hat a^{i} = ( \hat c^{i}, \hat t_s^{i}, \hat t_e^{i})\}_{i=1}^{N_a}$. Each element $\mathbf{y}^{i}$ of the decoder output is fed into a linear layer followed by softmax to obtain prediction probabilities $\hat p^i(c)$ where $c=1,\ldots,|\mathcal{C}|+1$, and $\hat c^{i}$ is the class with the maximum probability. The number of queries $N_a$ is larger than the maximum number of action instances per video in the dataset. Thus, we introduce an additional class label $\varnothing$ indicating no action. $\mathbf{y}^i$ is also fed into another feedforward network with ReLU to obtain the corresponding start timestamps $\hat {t}_{s}^{i}$ and end timestamps $\hat t_{e}^{i}$. \noindent \textbf{Training.} To compute the loss, we first align the predictions with the groundtruth set of action instances.
This alignment is necessary as there is no fixed prior correspondence between the predicted and the groundtruth set of action instances. Here, the predicted set for any video contains $N_a$ action instances, but the size of the groundtruth set $\mathcal{A}$ varies based on the video and is smaller than the predicted set. Thus, we first pad the groundtruth set to make it the same size as the predicted set by adding $N_a - |\mathcal{A}|$ elements with label $\varnothing$ indicating no action. Then, we use a pair-wise greedy correspondence algorithm to align the groundtruth and predicted sets. Starting with the groundtruth instance having the longest duration, we match each groundtruth instance with the unmatched predicted instance that has the maximum temporal overlap with the groundtruth instance. This results in a one-to-one mapping for loss computation (more details in supplementary). Denote the output of the set correspondence module by $\gamma$, the permutation of the predicted set of instances, \textit{i.e.}, the groundtruth action instance $a^{i}$ is matched to predicted instance $\hat a^{\gamma(i)}$ for $i = 1,\ldots,N_a$.
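A minimal sketch of this greedy correspondence, with instances reduced to $(\textrm{start}, \textrm{end})$ intervals (pure Python; the function and variable names are ours, not from the released implementation, and we assume at least as many predictions as groundtruth instances):

```python
def overlap(a, b):
    """Temporal intersection length of two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def greedy_match(gt, pred):
    """Match each groundtruth instance (longest first) to the unmatched
    prediction with maximum temporal overlap; returns {gt_idx: pred_idx}."""
    unmatched = set(range(len(pred)))
    gamma = {}
    # process groundtruth instances in descending order of duration
    for i in sorted(range(len(gt)),
                    key=lambda i: gt[i][1] - gt[i][0], reverse=True):
        j = max(unmatched, key=lambda j: overlap(gt[i], pred[j]))
        gamma[i] = j
        unmatched.discard(j)
    return gamma  # leftover predictions are treated as "no action"

gt = [(0.0, 4.0), (5.0, 6.0)]
pred = [(5.2, 6.1), (0.5, 3.5), (7.0, 9.0)]
gamma = greedy_match(gt, pred)
```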
Given this alignment, we compute the loss $\mathcal{L}$ over all the matched pairs as a weighted combination of cross-entropy loss for classification, and two temporal losses: $L1$ loss and IoU loss ($\mathcal{L}_{iou}$) for prediction of segment timestamps, defined as: \begin{equation} \label{eq:loss} \begin{split} \mathcal{L} = \sum_{i=1}^{N_a} \Big[ & -\log (\hat p^{\gamma(i)}(c^{i})) + \mathbbm{1}_{\{c^{i} \neq \varnothing\}} \lambda_{L1} ||s^{i} - \hat s^{\gamma(i)}||_{1}\\ & + \mathbbm{1}_{\{c^{i} \neq \varnothing\}} \lambda_{iou} \mathcal{L}_{iou}(s^{i},\hat s^{\gamma(i)}) \Big], \end{split} \end{equation} where $\lambda_{iou}, \lambda_{L1} \in \mathbb{R}^{+}$ are hyperparameters, $s^{i} = [t^{i}_s, t^{i}_e]$, $\hat s^{\gamma(i)} = [\hat t_s^{\gamma(i)}, \hat t_e^{\gamma(i)}]$ and $\hat p^{\gamma(i)}(c^{i})$ is the probability of the groundtruth class $c^{i}$ for prediction $\gamma(i)$. The video encoder and anticipation decoder are jointly trained to minimize this loss. We do not fine-tune the segment encoder in this stage. \noindent \textbf{Inference.} During inference, the video encoder takes the observed video as input and the segment encoder takes the chunked video (\textit{i.e.}, non-overlapping segments of fixed length) as input. The inputs to the decoder are: (i) anticipation queries $\mathbf{q}_0^{i}$ for $i = 1,\ldots,N_a$ (constant, regardless of the input), (ii) anticipation duration $T_a$ (varies based on the input video and the anticipation requirement), (iii) output representation from the video encoder, and (iv) output representation from the segment encoder. The decoder predicts a set of action instances. Thus, our approach allows us to build a model that can anticipate actions over any future duration in a single pass by simply controlling the input $T_a$ to the decoder, as shown by the results in Table~\ref{tab:ltaa_bf_salads}.
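To make the single-pass inference concrete, the sketch below converts per-query decoder outputs into a set of (label, start, end) instances for a requested horizon $T_a$; the normalized-timestamp convention and all shapes and names here are our illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
N_a, C = 6, 4  # queries, real action classes (index C is the "no action" class)

# Stand-ins for the decoder heads' outputs on one video.
logits = rng.standard_normal((N_a, C + 1))                  # class scores incl. ∅
times = np.sort(rng.uniform(0.0, 1.0, (N_a, 2)), axis=1)   # normalized (start, end)

def decode_instances(logits, times, T_a):
    """Turn per-query outputs into a set of (label, start, end) instances,
    discarding queries whose argmax is the ∅ (no-action) class."""
    labels = logits.argmax(axis=1)
    keep = labels != logits.shape[1] - 1  # ∅ is the last class index
    # scale normalized timestamps to the requested anticipation duration
    return [(int(c), float(s) * T_a, float(e) * T_a)
            for c, (s, e) in zip(labels[keep], times[keep])]

instances = decode_instances(logits, times, T_a=30.0)
```

The same trained model serves any horizon at inference: only the `T_a` argument (and hence the time-conditioned queries) changes between calls.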
In summary, \textsc{Anticipatr}\ uses a two-stage learning approach to train a transformer-based model (consisting of two encoders and one decoder) to predict a set of future action instances over any given anticipation duration. Our approach aims to perform action anticipation with segment-level representations learned using individual video segments in conjunction with video-level representations learned by encoding the input video as a whole. Our model anticipates actions at all time instants over a given anticipation duration in a single forward pass by directly predicting a set of future action instances. \subsection{Results} \noindent \textbf{Comparison with state-of-the-art.} Table~\ref{tab:ltaa_bf_salads} shows the results for the Breakfast and 50Salads datasets in the \textit{`no groundtruth labels'} setting~\cite{ke2019time,sener2020temporal}. The results show that our approach outperforms existing methods by a considerable margin for different observation/anticipation durations. For these benchmarks, the most similar approach to ours is Sener \textit{et al.}~\cite{sener2020temporal}, where they propose self-attention methods for temporal aggregation for long-term video modeling. In a setting similar to ours, where they use only visual features as input, our approach outperforms~\cite{sener2020temporal} by up to 13\%. Moreover, when they also use action labels from a segmentation algorithm as input, our approach is still competitive despite not using such additional inputs. In addition, the benefit of our approach is more apparent when the anticipation duration is longer. Table~\ref{tab:ltaa_epic_egtea} shows results on the long-term action anticipation benchmarks for the EK-55 and EGTEA+ datasets, as defined by~\cite{nagarajan2020ego}. The results show that our model achieves results competitive with the state-of-the-art method~\cite{nagarajan2020ego}.
While this benchmark only considers prediction of future action labels, our results demonstrate that the segment prediction in our model acts as a beneficial auxiliary task for label prediction. \noindent \textbf{Impact of Segment-level Training.} Our two-stage learning approach separately learns video-level representations and segment-level representations. To analyze the impact of such two-stage training, we design the following experiments. (i) \textbf{Fine-tuned Segment Encoder.} In this experiment, we also fine-tune the segment encoder while training the video encoder and decoder during the anticipation stage (Sec.~\ref{sec:anticipation}). The results in Fig.~\ref{fig:ablation} (`Fine-tuned SE') indicate that fine-tuning the segment encoder hurts the anticipation performance. We believe fine-tuning the segment encoder with the anticipation loss (Eq.~\ref{eq:loss}) perturbs the segment-level representation learned during the first stage of training. (ii) \textbf{No Segment-level Training.} In this experiment, we do not train the segment encoder network in a separate stage. Instead, we train all three networks (\textit{i.e.}, segment encoder, video encoder and anticipation decoder) jointly for the task of long-term action anticipation using the anticipation loss function (Eq.~\ref{eq:loss}). Here, the segment encoder receives videos chunked into short segments (same as the proposed two-stage training). However, it is directly tasked with solving the more difficult problem of simultaneously encoding segment-level representations and inferring their usage for long-term anticipation. The results for all datasets presented in Fig.~\ref{fig:ablation} (`No Segment-level Training') illustrate that eliminating training of the segment encoder worsens the anticipation performance. This shows the value of learning the segment-level representations independently without being influenced by the overall activity in the input video.
In summary, these experiments demonstrate the importance of the two-stage learning approach and suggest that the two representations should be learned separately to serve their individual purposes during anticipation. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{figures/ltaa_ablation_combined.pdf} \caption{\textbf{Analysis.} Quantitative evaluation of the anticipation performance of ablated versions of \textsc{Anticipatr}. [SE: segment encoder; VE: video encoder].} \label{fig:ablation} \end{figure} \noindent \textbf{Impact of Segment Encoder.} To evaluate the impact of learning segment-level representations, we conducted experiments without the segment encoder network. This ablated version only contains the video encoder and the anticipation decoder and is trained in a single stage using the anticipation loss (Eq.~\ref{eq:loss}). The results in Fig.~\ref{fig:ablation} (`No SE') show that removing the segment-level representations considerably hurts the anticipation performance. This performance degradation is worse than just removing the segment-level training stage (`No segment-level training' in Fig.~\ref{fig:ablation}). Thus, this experiment validates the benefit of the segment-level stream of information for action anticipation. \noindent \textbf{Impact of Set-based Output Representation.} In our approach, we model the anticipation output as a set of action instances. We empirically validate this design by comparing with an alternative approach where the output is a sequence of action labels corresponding to the individual future time instants. We implement this by changing the anticipation queries (decoder input) during the anticipation stage -- we provide positional encodings corresponding to each time instant over the anticipation duration and directly predict the labels corresponding to these time instants.
While the prediction for all time instants still happens in a single pass, the decoder is required to transform a large number of anticipation queries. The results in Fig.~\ref{fig:ablation} (`No Set Output') show poor performance that worsens further as the anticipation duration increases. This is largely because the number of queries is too high for the decoder to model effectively. \noindent \textbf{Fusion of Encoder Outputs.} To combine the representations from the segment encoder and the video encoder, our model uses two encoder-decoder attention layers in the decoder blocks. We tested an alternative approach wherein we fused the representations using a simple addition along the temporal dimension before feeding into the decoder. Here, we modify the decoder blocks to contain a single encoder-decoder attention layer. The results in Fig.~\ref{fig:ablation} (`Adding SE \& VE before decoder') indicate that this fusion approach leads to a slight decrease in anticipation performance. We believe adding the representations before the decoder forces the computation of encoder-decoder attention weights by considering both information streams at once. In contrast, our \textsc{Anticipatr}\ approach of computing attention one-by-one enables it to first filter out the relevant information from segment-level representations learned across different activities and then contextualize them into the specific context of the input video. \noindent \textbf{Visualizations.} The examples in Fig.~\ref{fig:qual_examples} show that our model effectively anticipates future actions. Please refer to the supplementary material for additional visualizations and analysis of failure cases.
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=0.48\textwidth]{figures/bf_prediction.pdf} & \includegraphics[width=0.48\textwidth]{figures/salads_prediction.pdf} \end{tabular} \caption{\textbf{Visualizations} from Breakfast (left) and 50salads (right) where 20\% of the video is observed and actions are anticipated over 50\% of the remaining video. } \label{fig:qual_examples} \end{figure} \section{Appendix} In this document, we provide additional quantitative and qualitative analyses, and additional details of the implementation of our approach. Specifically, this document contains the following items. \begin{itemize} \item Sec.~\ref{sec:imp}: Technical details of the implementation and evaluation of our proposed approach \begin{itemize} \item Sec.~\ref{subsec:imparch}: Architecture details (network architectures and loss function) \item Sec.~\ref{subsec:imp_tech}: Implementation details (input representations and hyperparameters) \item Sec.~\ref{subsec:imp_eval}: Evaluation details \end{itemize} \item Sec.~\ref{sec:ablation}: Additional ablation analysis \item Sec.~\ref{sec:qual}: Additional visualizations and qualitative analysis \item Sec.~\ref{sec:discussion} Additional discussion \end{itemize} \begin{figure*} \centering \begin{tabular}{c} \begin{tabular}{cc} \includegraphics[width=0.35\textwidth]{supp_figures/segment_encoder.pdf}& \includegraphics[width=0.35\textwidth]{supp_figures/video_encoder.pdf} \\ (a) & (b)\\ \end{tabular} \\ \\ \includegraphics[width=0.75\textwidth]{supp_figures/decoder.pdf} \\ (c)\\ \end{tabular} \vspace{0.3cm} \caption{\textbf{Detailed Architecture.} Architecture overview of (a) Segment encoder, (b) Video encoder, and (c) Anticipation Decoder. Refer to Sec.~\ref{subsec:imparch} for details. 
`Q',`K',`V' are query, key and value to the self-attention layer as described in~\cite{vaswani2017attention}.} \label{fig:model_detailed} \end{figure*} \subsection{Technical Details} \label{sec:imp} In this section, we provide additional details for the implementation of our proposed approach \textsc{Anticipatr}\ to supplement Sec. 3 in the main paper. \subsubsection{Architecture Details}\hfill \break \label{subsec:imparch} We propose \textsc{Anticipatr}\ that uses a two-stage learning approach to train a transformer-based model for the task of long-term action anticipation. The model comprises three networks: \textit{segment encoder}, \textit{video encoder} and \textit{anticipation decoder}. Fig.~\ref{fig:model_detailed} shows the architecture of the three networks. In the first stage, we train a \textit{segment encoder} that receives a segment (sequence of frames from a video) as input and predicts the set of action labels that would occur at any future time instant after the occurrence of the segment in the video. In the second stage, we train a \textit{video encoder} and an \textit{anticipation decoder} to be used along with the segment encoder for long-term action anticipation. The video encoder encodes the observed video to a video-level representation. The segment encoder (trained in the first stage) is fed with a sequence of segments from the observed video as input to obtain a segment-level representation of the video. The anticipation decoder receives the two representations along with the anticipation duration to predict a set of future action instances over the given anticipation duration in a single pass. The video encoder and anticipation decoder are trained using classification losses on the action labels and two temporal losses ($L_1$ loss and temporal IoU loss) on the timestamps, while the segment encoder is kept unchanged. \vspace{0.05in} \noindent \textbf{Positional Encoding for Segment Encoder.} The input to the segment encoder is a video segment.
We represent the segment as a sequence of features. As the encoder is permutation-invariant, we provide temporal information in the segment using the sinusoidal positional encodings (c.f. Vaswani \textit{et al.}~\cite{vaswani2017attention}) based on timestamps corresponding to the features of the input segment. Specifically, for each input feature of each embedding we independently use sine and cosine functions with different frequencies. We then concatenate them along the channel dimension to get the final positional encoding. In our implementation, the embedding size is the same as that of the segment feature so that they can be combined by simple addition of the positional encodings and segment features. \vspace{0.05in} \noindent \textbf{Positional Encoding for Video Encoder.} The input to the video encoder is a video. We represent the video as a sequence of features. As the transformer encoder is permutation-invariant, we provide temporal information in the input video using the sinusoidal positional encodings (c.f. Vaswani \textit{et al.}~\cite{vaswani2017attention}) based on timestamps corresponding to the features of the input video. Specifically, for each input feature of each embedding we independently use sine and cosine functions with different frequencies. We then concatenate them along the channel dimension to get the final positional encoding. In our implementation, the embedding size is the same as that of the video feature so that they can be combined by simple addition of the positional encodings and video features. \vspace{0.05in} \noindent \textbf{Anticipation Queries (Anticipation Decoder).} The anticipation queries are learnable positional encodings, designed as a learnable embedding layer. The positional encoding layer receives an integer index $i$ as input, corresponding to the $i$-th anticipation query, and provides an embedding $\mathbf{q}_0^{i}$ where $i \in \{1,\ldots,N_a\}$. In our implementation, we use \texttt{torch.nn.Embedding} in PyTorch to implement this.
The weights of the layer are learnable during training; thus, the positional encoding layer is also learnable. The initialization of this layer requires the maximum possible value of the index, \textit{i.e.}, $N_a$ in our case. The anticipation queries $\mathbf{q}_0$ are then combined with the anticipation duration $T_a$ using a simple neural network to create time-conditioned anticipation queries $\mathbf{q}_a$. These time-conditioned queries enable the model to predict actions over any specified anticipation duration. \vspace{0.05in} \noindent \textbf{Training.} We provide supplemental details about the computation of the loss function used to train the networks in the second stage (\textit{i.e.}, the action anticipation stage) of our \textsc{Anticipatr}\ approach. The training involves aligning the groundtruth and predicted sets of action instances, followed by computing the anticipation loss over all aligned pairs. \textbf{Greedy Set Correspondence.} Given an observed video, the groundtruth set of future action instances varies based on the input, whereas our anticipation decoder predicts a set of fixed size (larger than the maximum size of the groundtruth sets in the dataset). Therefore, there is no prior correspondence between the groundtruth and predicted set. We derive this correspondence using a greedy algorithm based on temporal overlap among instances. Intuitively, the objective is to correctly align actions at as many future time instants as possible. We first sort the groundtruth set of action instances in descending order of duration. We begin the alignment process with the groundtruth instance having the maximum duration. We look up the predicted set to find the predicted instance that has the maximum temporal overlap with this groundtruth instance. Since each element of the predicted set is designed to represent a single action instance, the alignment between the groundtruth and predicted set is one-to-one.
Thus, to continue the alignment process, the matched groundtruth instance and predicted instance are removed from their corresponding sets. This process is repeated until the groundtruth set is empty. As the predicted set is larger than the groundtruth set, the remaining predicted instances are mapped to $\varnothing$, denoting no action. In Sec.~\ref{sec:ablation}, we also evaluate anticipation results of models trained using another set correspondence algorithm, namely, the Hungarian matcher (see Table~\ref{tab:ablation_matcher} and Table~\ref{tab:ablation_matcher_ek}). \textbf{Loss function.} We compute the loss $\mathcal{L}$ (defined in Eq. (4) in the main paper) over all the matched pairs as a weighted combination of a cross-entropy loss for classification and two temporal losses ($L_1$ loss and IoU loss $\mathcal{L}_{iou}$) for the prediction of segment timestamps. Here, we provide our motivation behind the temporal losses and additional description. The $L_1$ temporal loss is sensitive to the absolute value of the duration of the segments. The IoU loss $\mathcal{L}_{iou}$ is invariant to the duration of the segments. Thus, these two losses together are designed to incorporate different aspects of segment prediction. For completeness, we describe $\mathcal{L}_{iou}$ as follows. \begin{equation} \mathcal{L}_{iou}(s^{i}, \hat s^{\gamma(i)}) = 1 - \frac{|s^{i} \cap \hat s^{\gamma(i)}|}{|s^{i} \cup \hat s^{\gamma(i)}|}, \end{equation} where $|.|$ is the duration of the instance, \textit{i.e.}, the difference between the end and start timestamps. \subsubsection{Training Details}\hfill \break \label{subsec:imp_tech} For training the first stage, we use a dropout probability of $0.1$. For the segment encoder, we use a base model dimension of 2048 and set the number of encoder layers to 3 with 8 attention heads. We use an effective batch size of 64 for training the segment encoder.
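For concreteness, the greedy set correspondence and the temporal IoU used in the loss above can be sketched as follows. This is a simplified illustration: segments are plain `(start, end)` tuples, and the function names and tie-breaking details are assumptions, not the paper's actual code:

```python
def t_iou(a, b):
    """Temporal IoU of two (start, end) segments; the IoU loss is 1 - t_iou."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def greedy_match(gt, preds):
    """Align each groundtruth instance to one prediction by temporal overlap.

    Groundtruth instances are visited in descending order of duration; each is
    matched to the still-free prediction with maximum overlap, which yields a
    one-to-one alignment. Predictions left unmatched at the end are assigned
    the no-action class by the caller.
    """
    order = sorted(range(len(gt)), key=lambda i: gt[i][1] - gt[i][0], reverse=True)
    free = set(range(len(preds)))
    matches = []
    for i in order:
        # raw temporal intersection with groundtruth instance i
        overlap = lambda j: max(0.0, min(gt[i][1], preds[j][1]) - max(gt[i][0], preds[j][0]))
        j = max(free, key=overlap)
        matches.append((i, j))
        free.remove(j)  # enforce one-to-one alignment
    return matches
```

The per-pair loss would then combine cross-entropy on the labels with the $L_1$ and `1 - t_iou` terms over the matched timestamps.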
For training in the second stage, we use a base model dimension of 2048 in the video encoder and anticipation decoder and set the number of encoder and decoder layers to 3 with 8 heads. We use four datasets -- Breakfast, 50Salads, EPIC-Kitchens-55, EGTEA Gaze+ -- to evaluate our model on long-term action anticipation. We provide dataset-specific hyperparameters as follows. We train all our models using the AdamW~\cite{loshchilov2017decoupled} optimizer on 4 Nvidia V100 32GB GPUs. We initialize all the learnable weights using Xavier initialization. \textbf{Breakfast.} We represent input videos as I3D features provided by \cite{mstcn}. We choose $N_a$ (anticipation queries) to be 150. We use an effective batch size of 16 for training the video encoder and anticipation decoder on this dataset on the long-term anticipation task. We train our models with a learning rate of 1e-4 and a weight decay of 0. The model is trained for 4000k steps. We use a dropout probability of 0.1. We set $\lambda_{L1} = 3$ and $\lambda_{iou} = 5$. To obtain the segment-level representation of the observed video during action anticipation, we use a temporal window of length $k = 16$. \textbf{50Salads.} We represent input videos as Fisher vectors computed using~\cite{fv}. We choose $N_a$ (anticipation queries) to be 80. We use an effective batch size of 16 for training the video encoder and anticipation decoder on this dataset on the long-term anticipation task. We use a learning rate of 1e-5 and a weight decay of 1e-5. We train the model for 3000k steps and reduce the learning rate by a factor of 10 after 1500k steps. We do not use dropout for this dataset. We set $\lambda_{L1} = 3$ and $\lambda_{iou} = 5$. To obtain the segment-level representation of the observed video during action anticipation, we use a temporal window of length $k = 48$. \textbf{EPIC-Kitchens-55.} We represent input videos as I3D features provided by~\cite{nagarajan2020ego,egotopo}.
We use an effective batch size of 16 for training the video encoder and anticipation decoder in the second stage. We choose $N_a$ (anticipation queries) to be 900. We use a learning rate of 1e-4 and a weight decay of 1e-5. We train the model for 6000k steps and reduce the learning rate by a factor of 10 after 4000k steps. We use a dropout probability of 0.1. We set $\lambda_{L1} = 5$ and $\lambda_{iou} = 8$. To obtain the segment-level representation of the observed video during action anticipation, we use a temporal window of length $k = 32$. \textbf{EGTEA Gaze+.} We represent input videos as I3D features provided by~\cite{nagarajan2020ego,egotopo}. We use an effective batch size of 16 for training the video encoder and anticipation decoder in the second stage. We choose $N_a$ to be 600. We use a learning rate of 1e-5 and a weight decay of 1e-5. We train the model for 4000k steps and reduce the learning rate by a factor of 10 after 3000k steps. We use a dropout probability of 0.1. We set $\lambda_{L1} = 3$ and $\lambda_{iou} = 5$. To obtain the segment-level representation of the observed video during action anticipation, we use a temporal window of length $k = 24$. \subsubsection{Evaluation Details}\hfill \break \label{subsec:imp_eval} Note that our model predicts a set of action instances, wherein each action instance is of the form $\text{(label, start time, end time)}$. To evaluate the model outputs as per the benchmarks, we apply the following postprocessing. For Breakfast and 50Salads, following the benchmark~\cite{sener2020temporal}, we evaluate the action anticipation outputs over a dense timeline. Our proposed \textsc{Anticipatr}\ predicts a set of action instances. During evaluation, we process this set of action instances to construct a timeline corresponding to the anticipation duration. The timeline is a sequence of action labels for the time instants in the anticipation duration, \textit{i.e.}, between $T_o+1,\ldots,T_o+T_a$.
In the benchmarks, the timeline contains a single action class corresponding to each time instant. We iterate over the predicted set to assign class labels to this timeline. Specifically, for each action instance in the predicted set, we assign the predicted action class to the time instants that are within the predicted segment (determined by the predicted start and end timestamps). When predicted action instances overlap at certain time instants, we assign the action class with the highest probability score among the overlapping predictions. Once the timeline is constructed, we compute the mean-over-classes accuracy~\cite{sener2020temporal} to evaluate the model performance. Note that we construct this timeline only during evaluation, to follow the benchmark evaluation protocols. For EPIC-Kitchens-55 and EGTEA Gaze+, we perform a union over the action classes in the predicted set of instances to obtain a set of future action classes. We remove the $\varnothing$ class from this set and use this set to compute mAP as described in the benchmark \cite{nagarajan2020ego}. \subsection{Additional Ablation Analysis} \label{sec:ablation} In this section, we report our findings from additional ablation experiments. \input{supp_results/supp_ablation_loss} \noindent \textbf{Ablation: Loss function.} The training loss function defined in Eq. (4) in the main paper contains three components (a cross-entropy loss and two temporal losses). We conduct ablation experiments by removing one of the temporal losses. Note that we always need the cross-entropy loss for the classification task. Results in Table~\ref{tab:ablation_loss} and Table~\ref{tab:ablation_loss_ek} show that models trained with the overall loss perform better than the ones trained with the ablated versions. Moreover, the models trained with only the $L_1$ temporal loss perform better than the ones trained with only $\mathcal{L}_{iou}$.
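The evaluation-time timeline construction described above can be sketched as follows. This is a simplified illustration with integer time instants; the function and field names are assumptions, not the benchmark's reference code:

```python
def build_timeline(instances, T_o, T_a, background=-1):
    """Rasterize a predicted set of instances onto a per-instant timeline.

    instances: list of (label, start, end, score) tuples in integer time
    units. Returns the labels for instants T_o+1 .. T_o+T_a; where predicted
    instances overlap, the class with the highest probability score wins.
    """
    # (label, best score so far) for every instant in the anticipation window
    timeline = {t: (background, float("-inf")) for t in range(T_o + 1, T_o + T_a + 1)}
    for label, start, end, score in instances:
        for t in range(max(start, T_o + 1), min(end, T_o + T_a) + 1):
            if score > timeline[t][1]:
                timeline[t] = (label, score)
    return [timeline[t][0] for t in range(T_o + 1, T_o + T_a + 1)]
```

The mean-over-classes accuracy is then computed between this rasterized sequence and the groundtruth timeline.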
\noindent \textbf{Ablation: Anticipation queries.} The number of anticipation queries determines the maximum number of action instances the model is supposed to predict. Results in Table~\ref{tab:ablation_nqueries} and Table~\ref{tab:ablation_nqueries_ek} show the performance of our model with different numbers of anticipation queries. The results suggest a minor improvement with a higher number of anticipation queries; however, models with more queries require longer training times. Intuitively, a very large number of anticipation queries implies that the model requires more time to learn the non-maximal suppression of irrelevant predictions. On the other hand, when the number of anticipation queries is reduced, the anticipation performance of our model degrades. A very small number of anticipation queries implies that fewer actions are anticipated. Thus, for a very complex video with many future action instances, the model would miss several action instances, resulting in poor anticipation performance. Additionally, as shown in Table~\ref{tab:ablation_nqueries}, the anticipation error increases over time. This is because there are more actions to be anticipated and the model is limited by the number of anticipation queries. \input{supp_results/supp_ablation_nqueries} \noindent \textbf{Ablation: Segment window length.} Results in Table~\ref{tab:ablation_windowsize} and Table~\ref{tab:ablation_windowsize_ek} show the performance of our model with different values of the temporal window length used to extract segment-level representations during action anticipation. The results suggest that neither a very small window length nor a very large one is helpful. The segment encoder is trained to predict future actions given a video segment depicting a single action.
During the action anticipation stage, when the segment encoder is used to extract segment-level representations, the observed video is divided into a series of non-overlapping segments using temporal sliding windows, as the action boundaries are not known. Intuitively, when the temporal sliding window is very small, the individual segments do not have enough information to yield effective representations. On the other hand, when the window is very large, the segments contain more than one action, potentially resulting in segment-level representations with overlapping semantic content. We observe that the drop in performance for models that use smaller window lengths is larger than for the ones with larger window lengths. \input{supp_results/supp_ablation_windowsize} \noindent \textbf{Ablation: Sliding Windows for Segment Encoder Training.} Instead of using action boundaries, we used sliding temporal windows of length $k$ (the same as used during stage 2) to obtain segments for segment-level training. Results in Table~\ref{tab:ablation_sw} and Table~\ref{tab:ablation_sw_ek} show that this approach results in slightly lower performance than our proposed training approach. This is possibly due to increased noise in the segment-level representations from this training approach. \input{supp_results/supp_ablation_sliding_window} \noindent \textbf{Ablation: Set correspondence.} To compute the anticipation loss, we use a greedy algorithm to align the groundtruth and predicted sets of action instances. Another commonly employed set correspondence algorithm is the Hungarian matcher used in prior works~\cite{carion2020end,kim2021hotr}. For completeness, we also conducted experiments with the Hungarian matcher optimized over the cost function with all three terms (classification loss and two temporal losses), following~\cite{nawhal2021activity}.
We did not observe any significant difference in the performance of the models trained using either of the two matchers, as shown in Table~\ref{tab:ablation_matcher} and Table~\ref{tab:ablation_matcher_ek}. \input{supp_results/supp_ablation_matcher} \subsection{Additional Qualitative Analysis} \label{sec:qual} Visualizations in Fig.~\ref{fig:qual_ltaa_bf} and Fig.~\ref{fig:qual_ltaa_salads} show that our model is generally able to anticipate correct actions at future time instants over long anticipation durations for the Breakfast and 50Salads benchmarks respectively. Visualizations in Fig.~\ref{fig:qual_ltaa_ek} and Fig.~\ref{fig:qual_ltaa_egtea} show that our model is able to effectively predict future action classes for the EK-55 and EGTEA benchmarks respectively. \noindent \textbf{Failure Cases.} We observe that the action boundaries in some cases are not exactly aligned with the groundtruth even though the class labels are predicted accurately (see Fig.~\ref{fig:qual_ltaa_bf} and Fig.~\ref{fig:qual_ltaa_salads}). We believe this could be because the visual information pertaining to the action is limited or negligible towards the beginning and end of the action instance. Most classification errors result from the model getting confused among semantically similar classes. Some such cases from our examples are \textit{`take ladle'} and \textit{`pick-up ladle'} in Fig.~\ref{fig:qual_ltaa_ek}(b); \textit{`close sandwich'} and \textit{`close hamburger'} in Fig.~\ref{fig:qual_ltaa_ek}(d); \textit{`put seasoning'} and \textit{`pour seasoning'} in Fig.~\ref{fig:qual_ltaa_egtea}(a). Moreover, our model sometimes misses rare actions during prediction, such as \textit{`pour oil'} in Fig.~\ref{fig:qual_ltaa_ek}(a) and \textit{`close fridge'} in Fig.~\ref{fig:qual_ltaa_egtea}(b). Additionally, we also observe that, having seen certain objects in the observed video, the model predicts objects that are likely to co-occur with the seen objects.
See the scenario in Fig.~\ref{fig:qual_ltaa_salads}(d). The model does not predict \textit{`cut\_cheese'} and \textit{`place\_cheese\_into\_bowl'} after the action \textit{`place\_cucumber\_into\_bowl'} and instead predicts \textit{`cut\_tomato'} and \textit{`place\_tomato\_into\_bowl'}. While the prediction is not correct for this specific activity, it is still a reasonable sequence of actions, as there are several other salad recipe videos in the dataset that only use \textit{cucumber} and \textit{tomato}. In another scenario, in Fig.~\ref{fig:qual_ltaa_egtea}(b), having seen \textit{`pasta'} in the observed video, the model anticipates action classes with the \textit{`cheese'} noun. While \textit{`cheese'} does not appear in this particular video, it is a reasonable prediction since the nouns \textit{`pasta'} and \textit{`cheese'} often appear together in activity videos in this dataset. \subsection{Additional Discussion} \label{sec:discussion} In this work, we demonstrate the effectiveness of our model on minutes-long activity videos. Handling longer videos with durations in hours or days (common in surveillance or monitoring scenarios) would be interesting future work. Furthermore, our approach assumes that the videos have an overall context provided by the ongoing long-term activity. We show that modeling interactions among segments (and, in turn, segment-level representations) is an effective technique for such activity videos, as the video segments are indeed related. However, such approaches cannot tackle videos that are just a montage of unrelated content, such as videos containing clips from different movies. Our approach focuses on activity videos where contextual information is present and relevant for action anticipation. \input{supp_results/supp_qual}
Ring! Ring! Hot News, 8th June, 2009 In Today's Issue: Verizon launches cloud computing offering; APIs coming up this year; the problems of being cloudy; beware potential plutonium privacy problems; hackers claim to steal entire T-Mobile USA billing database; SingTel on Bharti/MTN: "In for a billion!"; Apple App Store to start subscriptions, volume pricing; Google's cash for developers scheme; MIDs fail; WhyMAX femtocell; satellite TV for your car; problems of network-based DVRs; Carphone Warehouse demerger on the way; "Twitter phone" = SMS; Pre SDK coming right up - eerily similar to JIL; has the Pre chased O2 off the N97?; DARPA mesh networks; Sarin's exes are a gas; wave of missed call fraud; Pirate Party gets elected; Angola lays fibre, but not that sort yet; BSNL reissues WiMAX tenders after corruption panic Verizon launches the first telco cloud-computing service; at the moment their "Computing as a Service" offering is enterprise-focused, letting you run your applications or their applications in the cloud, but what's this? Open APIs are promised some time this year, to match the existing device-centred ODI program. "The API is available today, and we're using it for our own user interface. But we're looking for the right use cases; we're still doing due diligence to make sure we understand what customers are looking for when they want to interface with the environment in that type of manner. I'd say by the end of this year we'd have a published API that customers could leverage." Speaking of clouds, here's an interesting discussion of the problems of working in the cloud at Ned Batchelder's blog.
We've said before that Verizon is proving to be remarkably permeable to Telco 2.0 ideas; with the cloud computing, open APIs, and open device projects pulling away from the station, the P4P research project in the works, and significant fibre deployment going on, that only leaves the frontier of subscriber data to check off the list. Wired has a good cautionary tale about increasing regulatory interest in behavioural ad programs at Google and elsewhere; remember not to mix up the potatoes and the plutonium. And look what just happened; hackers claim to have stolen "everything" from T-Mobile USA. This could be the biggest carrier data loss in history, right up there with the Vodafone Greece incident in the annals of telco security disasters. Or it could be a hoax; read the blackmailers' note here and make your own mind up. Of course, subscribers can take heart from the thought that it's very unlikely that the data is well enough organised for them to do anything criminal with it without lots of time and effort. The Bharti-MTN deal has become more likely; SingTel says it's in for 30 per cent of Bharti, having gone down to 19 per cent in the past. Meanwhile, in the Apple world, they're gathering for a shindig; but the serious people want to know what is happening with the new pricing options for the App Store. Apple is expected to announce that they will soon provide the ability to charge subscriptions or usage-based fees as well as one-off sales; it's almost like a telco billing system, in a way. Google, meanwhile, is taking the shortest way to attract developers to code for Android; give them some money, as Milton Friedman said about the poor. Specifically, a rich set of prizes are offered for the best applications. 
Relatedly, it's not looking good for so-called MIDs (Mobile Internet Devices); these are a device class that Intel essentially invented a few years ago, at least in part as a target market for WiMAX connectivity, which fit in somewhere between a smartphone and a laptop. Unfortunately, fitting in between a 2005 smartphone and a laptop of the same vintage is a strategy that makes less sense in 2009, with much less space between new smartphones and netbooks. So we're more than a little sceptical of a plan to make a WiMAX femtocell. Well, we suppose it might come in handy…perhaps. Similarly, is there really a demand for this new product from AT&T? "CruiseCast" is essentially satellite TV for your car, and although we have to recognise that nothing could be more American than the combination of a car and a television, it all sounds far too much like something that escaped from the bubble years. Especially as it costs $1300 to install and $28 a month. A lot of carriers are interested in implementing DVR/STB/whatever functions in the network, thus saving on all those boxes and the problem of deploying them to the users. Telephony Online makes the very good point that, although this is a quick win and relatively cheap, it doesn't do anything to solve the video problem; if anything it means even more video hammering the wires. And you don't even get to offer the subscribers a shiny gadget. Speaking of video and the broadband incentive problem, Carphone Warehouse held its dividend for this year, but it can't get the TalkTalk DSL and multi-MVNO business demerged fast enough. No wonder. Meanwhile, INQ plans to launch a "Twitter phone"; the obvious comment on this is that it's not going to be hard, as Twitter messages are essentially SMS that gets logged on a Web site, so anything with basic GSM functionality and a Web browser can do it, which means essentially every device on the market. 
Sounds like a great way to sell really cheap devices; probably better than SUV TV in today's economic climate. According to Palm's VP of sales, the developers won't have to wait much longer for the Pre SDK. It sounds remarkably similar to Vodafone, China Mobile, and Softbank's JIL; the whole user interface is a Web browser and everything is an HTML/CSS/Javascript entity. Everyone seems to like the gadget. Rumours suggest that the lack of Nokia N97s at O2 is a signal that they have signed up the GSM/UMTS version of it. DARPA is investing in a new approach to military radio - they want to use large numbers of cheap radio nodes working together in a mesh network, rather than a few heavily engineered ones. This may well tell us quite a bit about future radio networks - especially as Mapesbury's UKO1 is apparently using a mesh GSM network. Special mention for Arun Sarin; in his last year at Vodafone he made £7.5m, but still needed £500,000 of expenses to relocate back to the US. There's been a wave of phone-related fraud in the UK, says regulator PhonePayPlus; apparently deliberately generating missed calls from a super-premium rate number is a common practice. This weekend saw the European elections; and the Pirate Party responded to the Pirate Bay convictions by getting elected. Apparently they took over one-fifth of the youth vote; they are planning to join the Green caucus in Brussels (and of course Strasbourg). Angola is laying fibre; international, submarine fibre, that is. And the story of the week: Indian state carrier BSNL is re-tendering for its planned WiMAX network, after the first lot of contracts went to "family members and confidants" of the telecoms minister. Whoops. Posted at 12:15 PM
\section{Introduction} A Higgs boson ($h$), consistent with the predictions of the Standard Model (SM), was discovered in 2012 \cite{ATLAS:2012yve,CMS:2012qbp,CMS:2013btf}. The addition of a second Higgs doublet to the SM yields a straightforward, and perhaps the simplest, extension of the SM known as the two Higgs doublet model (2HDM)~\cite{Lee:1973iz}, where four more physical scalars are predicted: a CP-even neutral scalar ($H$), a CP-odd neutral scalar ($A$), and two charged scalars ($H^\pm$). In this article, we consider a general 2HDM (of Type-III~\cite{Hou:1991un}) which contains flavor changing neutral couplings (FCNC) already at the Lagrangian level (see Ref.~\cite{Branco:2011iw} for a comprehensive review). In the so-called Higgs basis \cite{Georgi:1978ri,Lavoura:1994fv,Botella:1994cs}, where only one of the scalar doublets develops a vacuum expectation value, the interactions between the physical scalars and the SM fermions are described by the following Lagrangian \cite{Davidson:2005cw,Hou:2017hiw}, \begin{align} \mathcal{L}_Y = - \frac{1}{\sqrt{2}} \sum_{f = u, d, \ell} \bar f_{i} \Big[\big(\lambda^f_i \delta_{ij} s_\gamma + \rho^f_{ij} c_\gamma\big) h + \big(\lambda^f_i \delta_{ij} c_\gamma - \rho^f_{ij} s_\gamma\big) H - i\,{\rm sgn}(Q_f) \rho^f_{ij} A\Big] P_R\, f_{j} \nonumber\\ - \bar{u}_i\left[(V\rho^d)_{ij}P_R-(\rho^{u\dagger}V)_{ij}P_L\right]d_j H^+ - \bar{\nu}_i\rho^\ell_{ij} P_R \, \ell_j H^+ +{h.c.}, \label{eq: Lag} \end{align} where $i, j$ are generation indices, $\lambda_i\,(= \sqrt{2} m_i/v)$ are the Yukawa couplings in the SM, and $\rho^f_{ij}$ denote the NP couplings; $V$ is the Cabibbo-Kobayashi-Maskawa (CKM) matrix and $P_{R/L} = (1 \pm \gamma_5)/2$ are chirality projection operators. The shorthand notation $c_\gamma$ ($s_\gamma$) denotes the cosine (sine) of the mixing angle $\gamma$ between $h$ and $H$. It is worth mentioning that the \emph{alignment limit} (i.e.
$c_\gamma \to 0$) helps in suppressing the flavor violating decays of the $h$ boson, $h\to f_i f_j$ ($i\ne j$). The NP couplings $\rho_{ij}$ are generic in size and need to be constrained by experimental data. One of the strongest constraints on these couplings comes from precision measurements of B physics observables; data on neutral $B$ meson mixing and rare decays such as $B_s\to \mu^+\mu^-$ and $B\to X_s\gamma$ provide severe constraints on the parameter space of the 2HDM (for example, see \cite{Crivellin:2013wna}). In this work, we discuss the important role of the kaon sector in probing the general 2HDM, and show that in many cases kaon processes can provide far better constraints. To keep our numerical analysis concise, we focus only on certain top-related NP Yukawa couplings, and assume the matrices $\rho^{d, \ell}$ to vanish. More explicitly, we use the following ansatz \begin{align}\label{eq: rhos} \rho^u \equiv \begin{pmatrix} 0 & 0 & 0\\0 & 0 & \rho_{ct} \\0 & \rho_{tc} & \rho_{tt}\\ \end{pmatrix}, \quad \rho^{d} =\rho^\ell = 0, \end{align} while for the masses of the exotic scalars, we consider $400$ GeV and $1000$ GeV as two reference values. In our analysis, we consider the following kaon observables: (i) the $K^0$-$\bar K^0$ mixing parameter $\varepsilon_K$, (ii) the direct CPV parameter $\varepsilon^\prime/\varepsilon$ of $K\to\pi\pi$, (iii) the rare semileptonic decays $K^+\to \pi^+\nu\bar\nu$ and $K_L\to \pi^0\nu\bar\nu$, (iv) the rare decays $K_L\to \mu^+\mu^-$ and $K_S\to \mu^+\mu^-$. With the NP couplings as defined in Eq.~\eqref{eq: rhos}, the NP contributions to the above processes arise from $H^+$-mediated loop diagrams, as shown in Fig.~\ref{fig: Feyn}. The corresponding theoretical expressions can be found in our paper~\cite{Hou:2022qvx}.
\begin{figure}[h] \begin{center} \subfloat[]{\includegraphics[width=0.2\textwidth, height=0.092\textheight]{K_mixing_1.pdf} \includegraphics[width=0.2\textwidth, height=0.092\textheight]{K_mixing_2.pdf}\label{fig: Feyn_a}} \quad \quad \subfloat[]{\includegraphics[width=0.2\textwidth]{s2dqq_gluon.pdf}\label{fig: Feyn_b}} \quad \quad \subfloat[]{\includegraphics[width=0.2\textwidth]{s2dff_Zgam.pdf}\label{fig: Feyn_c}} \end{center} \caption{Representative Feynman diagrams for $K^0$-$\bar K^0$ mixing (a); $s \to d q\bar q$ (b and c); and $s \to d \nu\bar\nu$ and $s \to d \ell\bar\ell$ (c), where for $f= \nu$, only the $Z$-penguin contributes. } \label{fig: Feyn} \end{figure} \section{Results} \subsection{Constraints from B physics and $\varepsilon_K$} \begin{figure}[b] \begin{center} \includegraphics[width=5.cm,height=4.cm]{B_epsK_400.pdf} ~~\includegraphics[width=6.7cm,height=4.cm]{B_epsK_1000.pdf} \end{center} \caption{Flavor constraints from B physics and $\varepsilon_K$.} \label{fig: B-epsK} \end{figure} We first discuss the constraints on the $\rho_{ij}$ couplings from B physics. In Fig.~\ref{fig: B-epsK}, we show constraints from the neutral $B_q$ ($q=s, d$) mass difference $\Delta M_{q}$, the mixing-induced CP asymmetry $S_{\psi K_S}$ in $B_d\to \psi K_S$, and the branching ratios of $B_s\to \mu^+\mu^-$ and $B\to X_s\gamma$ in the plane of the couplings $\rho_{tt}$ and $\rho_{ct}$ for $m_{H^+}= 400$ and $1000$ GeV. We note that the B physics observables give strong bounds on $\rho_{tt}$ and $\rho_{ct}$, but still allow them to be large, especially when one of the couplings is vanishing. In passing, we mention that the constraints on the coupling $\rho_{tc}$ are relatively weak, as this coupling is associated with the (small) $H^+$-charm quark loop. Next we include constraints from $\varepsilon_K$. Defining $\varepsilon_K^{\rm NP} \equiv \kappa \times 10^{-3}$ as the NP contribution to $\varepsilon_K$, the current data allows for $-0.2 \le \kappa \le 0.2$~\cite{Aebischer:2020mkv}.
This constraint is shown as the yellow region in Fig.~\ref{fig: B-epsK}, which highlights the remarkable ability of $\varepsilon_K$, compared to B physics observables, in constraining $\rho_{ct}$. However, it is worth noting that for the heavy $H^+$ case, the B physics constraints become weak and $\varepsilon_K$ admits a sizable $\rho_{ct}$, as can be seen from Fig.~\ref{fig: B-epsK} (right). \subsection{$K^+\to \pi^+\nu\bar\nu$ as a sensitive probe of a heavy $H^+$} We now discuss results for the rare decays $K^+\to \pi^+\nu\bar\nu$ and $K_L\to \pi^0\nu\bar\nu$. The former has been measured by the NA62 experiment \cite{NA62:2021zjw}, while for the latter the KOTO experiment has set a $90\%$ C.L. bound~\cite{KOTO:2020prk}. The corresponding results are: ${\cal B}(K^+\to \pi^+\nu\bar\nu)_{\rm NA62} = (10.6^{+4.0}_{-3.4} \pm 0.9)\times 10^{-11}$, ${\cal B}(K_L\to \pi^0\nu\bar\nu)_{\rm KOTO} < 4.9 \times 10^{-9}$. For the corresponding values in the SM, we find ${\cal B}(K^+\to \pi^+\nu\bar\nu)_{\rm SM} = (9.07 \pm 0.82)\times 10^{-11}$, ${\cal B}(K_L\to \pi^0\nu\bar\nu)_{\rm SM} = (3.24 \pm 0.36) \times 10^{-11}$.
To obtain these results, we define $\rho_{ij} \equiv |\rho_{ij}| \exp(i \phi_{ij})$, and scan over $|\rho_{tt}|,\,|\rho_{tc}| \in [0, 1]$, $\phi_{tt},\,\phi_{tc} \in [-\pi, \pi ]$; $|\rho_{ct}| \in [0, 0.3],\, \phi_{ct} \in [-\pi, \pi ]$. The allowed points are obtained after imposing B physics and $\varepsilon_K$ constraints. We note that for light $H^+$ case (left), ${\cal R}_{\nu}^{+}$ gets enhanced up to $\sim 20\%$ while ratio ${\cal R}_{\nu}^{0}$ is suppressed up to $\sim 10\%$ compared to the SM value. For the heavy $H^+$ case (right), the NP effects are relatively larger: ${\cal R}_{\nu}^{+}$ can easily saturate the NA62 limit, while ${\cal R}_{\nu}^{0}$ can be up to $\sim 20\%$ suppressed. At first glance, the larger NP effects in $K\to \pi\nu\bar\nu$ for heavy $H^+$ may appear surprising, but this can be understood from the following. In general 2HDM, these decays receive dominant NP contribution from $Z$-penguin diagram with $H^+$-top loop (shown in Fig.~\ref{fig: Feyn_c}). The corresponding NP Wilson coefficient, normalized by the SM one, is proportional to the following combination~\cite{Hou:2022qvx}, \begin{align}\label{eq: s2dnunu} \left(\rho_{tt} + \frac{V_{cs}^*}{V_{ts}^*}\rho_{ct}\right) \left(\rho_{tt}^* + \frac{V_{cd}}{V_{td}}\rho_{ct}^*\right) G_Z({m_t^2/m_{H^+}^2)}, \end{align} where $G_Z(x)$ is the loop function given in Ref.~\cite{Hou:2022qvx}. From Eq.~\eqref{eq: s2dnunu}, we note that $\rho_{ct}$ terms are enhanced by CKM factors, $V_{cs}^\ast/V_{ts}^\ast\simeq -23.5 - 0.46\, i$, and $V_{cd}/V_{td}\simeq-22.8 - 9.4\, i$, respectively. Furthermore, it can be shown that dominating part of total $H^+$ contribution to $s\to d \nu\bar\nu$ processes is CP conserving (for details, see ref.~\cite{Hou:2022qvx}). 
The CKM-enhanced sensitivity of $K^+\to \pi^+\nu\bar\nu$ to $\rho_{ct}$, coupled with the fact that flavor constraints on $\rho_{ct}$ become weak for heavy $H^+$ (see Fig.~\ref{fig: B-epsK}), explains why $K^+\to \pi^+\nu\bar\nu$ is an excellent probe of heavy $H^+$. \subsection{Correlation of $K^+\to \pi^+\nu\bar\nu$ with $B_s\to\mu^+\mu^-$} Let us now discuss the correlation of NP effects in ${\cal R}_\nu^{+}$ and ${\cal B}(B_s \to \mu^+\mu^-)$, which provide another crucial probe of a heavy $H^+$. In Fig.~\ref{fig: Rnu-Bsmm}, we show the scatter of allowed points in the plane of $B_s\to\mu^+\mu^-$ and $K^+\to \pi^+\nu\bar\nu$ as function of $(\varepsilon^\prime/\varepsilon)_{\rm NP}$, where we note that enhancement of ${\cal R}_\nu^{+}$ is correlated with the suppression of ${\cal B}(B_s \to \mu^+\mu^-)$. For light $H^+$ case (left), the combined constraints from B physics and $\varepsilon_K$ restrict $B_s\to\mu^+\mu^-$ within $2\sigma$ range of SM value, $(3.66 \pm 0.14) \times 10^{-9}$~\cite{Beneke:2019slt}, while ${\cal R}_\nu^{+}$ can be up to $\sim 20\%$ enhanced. But in the heavy $H^+$ case (right), the anti-correlation of $K^+\to \pi^+\nu\bar\nu$ with $B_s\to\mu^+\mu^-$ is clearly noticeable: the higher values of ${\cal R}_\nu^{+}$ are correlated with more suppressed values of ${\cal B}(B_s \to \mu^+\mu^-)$. As indicated in Fig.~\ref{fig: Rnu-Bsmm}, contribution to $(\varepsilon^\prime/\varepsilon)_{\rm NP}$ lie in the range of $-4\times 10^{-4}$ to $3\times 10^{-4}$, where larger values of $(\varepsilon^\prime/\varepsilon)_{\rm NP}$ are correlated with enhanced ${\cal R}_\nu^{+}$ and suppressed $B_s\to\mu^+\mu^-$, and vice versa. Note that above values of $(\varepsilon^\prime/\varepsilon)_{\rm NP}$ are compatible with the current data \cite{Aebischer:2020jto}. 
\begin{figure}[t] \centering \includegraphics[width=5.14cm, height=4.cm]{kpnn_Bsmumu_epsp_400.pdf}~~~\includegraphics[width=6.3cm, height=4.cm]{kpnn_Bsmumu_epsp_1000.pdf} \caption{\label{fig: Rnu-Bsmm} Correlation between $K^+\to \pi^+\nu\bar\nu$, $B_s\to \mu^+\mu^-$, and $(\varepsilon^\prime/\varepsilon)_{\rm NP}$. The dashed line indicates the central value of ${\cal B}(B_s \to \mu^+\mu^-)$ in the SM.} \end{figure} Before concluding, we comment on the rare decays $K_{L, S}\to \mu^+\mu^-$. We find that the large uncertainties associated with the theoretical determination of ${\cal B}(K_{L}\to \mu^+\mu^-)$ make it a less effective probe of $H^+$ than $K^+\to \pi^+\nu\bar\nu$, while for $K_S\to \mu^+\mu^-$ we find the NP effects to be less than $2\%$~\cite{Hou:2022qvx}. \section{Conclusions} In this work, we investigated the contributions of the top-related NP Yukawa couplings of the general 2HDM to $K^0$-$\bar K^0$ mixing, $\varepsilon^\prime/\varepsilon$, and the rare decays $K^+\to \pi^+\nu\bar\nu$, $K_L\to \pi^0\nu\bar\nu$, and $K_{L, S}\to \mu\mu$. We found that $\varepsilon_K$, in comparison to B physics observables, provides a better constraint on the off-diagonal coupling $\rho_{ct}$. For sub-TeV values of $m_{H^+}$, we found that current B physics and $\varepsilon_K$ data allow only mild NP effects in rare kaon decays. However, if $H^+$ is TeV-scale heavy, then the flavor constraints become weak and, thanks to the large CKM enhancement of the $\rho_{ct}$ terms, substantial NP effects in $K^+\to \pi^+\nu\bar\nu$ are possible. Another important result is that the NP effects in $K^+\to \pi^+\nu\bar\nu$ and $B_s\to\mu^+\mu^-$ are found to be anti-correlated, which can be exploited to probe the scale of $H^+$. \ack It is a pleasure to thank the organizers of the Kaon 2022 conference for the kind invitation to give this talk. I also wish to thank Prof. George W.-S. Hou for the collaboration on Ref.~\cite{Hou:2022qvx}, on which this article is based. 
This work is supported by NSTC 111-2639-M-002-002-ASP of Taiwan. \section*{References}
import React from 'react'
import './Home.css'

const Home = () => (
  <section className='home'>
    <h2>Features</h2>
    <ul>
      <li>
        <b>Multiple :</b> You can use SuperSelectField as a simple dropdown (default),<br />
        or as a multi-selections select.
      </li>
      <li>
        <b>Autocomplete :</b> Past a configurable threshold (<i>showAutocompleteThreshold</i>, default: 10),
        an Autocomplete input will help you find your selection more efficiently.<br />
        SuperSelectField exposes the <i>autocompleteFilter</i> property to let you provide
        your own filtering logic (default: case insensitive).
      </li>
      <li>
        <b>Options grouping :</b> You can use <i>&lt;optgroup/&gt;</i> HTML tags as children,
        they will be automatically detected and integrated into the menu.
      </li>
      <li>
        <b>Infinite loading :</b> To enhance UX when dealing with a huge children list,
        SuperSelectField will render only displayable children (<i>nb2show</i>, default: 5).
      </li>
      <li>
        <b>Styling and composability :</b> Along with the ability to use any HTML tags as children,
        most of <i>SuperSelectField</i>'s inner components expose a styling prop.<br />
        Selected options can also be displayed in the main input following your own provided styling,
        thanks to <i>selectionsRenderer</i>.
      </li>
    </ul>
  </section>
)

export default Home
\section{Introduction} Single-plate terrestrial planets such as Venus and Mars, which lack Earth-like plate tectonics, are covered by a thick immobile lithosphere, or cold stiff lid. It is inferred from the geodetic observations (topography and gravity) of Venus [{\it Rappaport et al.}, 1999; {\it Konopliv et al.}, 1999] and Mars [{\it Smith et al.}, 1999a; 1999b] that the spatial structure of the thermal convection under the lid has a relatively long wavelength, dominated by spherical harmonic degrees $\ell=$ 2--3 or lower [e.g., {\it Schubert et al.}, 1990; 1997]. In particular, for Mars, it is generally accepted that the Martian crustal dichotomy was caused by a convection system dominated by $\ell = 1$ [e.g., {\it Sleep}, 1994; {\it Zhong and Zuber}, 2001]. In numerical simulations of mantle convection in the three-dimensional (3-D) Cartesian box geometry with wide aspect ratios [{\it Tackley}, 1996a; {\it Ratcliff et al.}, 1997; {\it Trompert and Hansen}, 1998] and in the spherical shell geometry [{\it Ratcliff et al.}, 1996; 1997], it has been shown that a highly viscous lid is formed when temperature-dependent viscosity is included in the models with the stress-free boundary condition on the top surface. As the viscosity contrast goes up to $10^4$--$10^5$, an immobile, highly viscous layer (stagnant-lid) is formed. The convection under the stagnant-lid is characterized by numerous small-scale cylindrical plumes surrounded by sheet-like downwellings [{\it Ratcliff et al.}, 1996; 1997; {\it Reese et al.}, 1999]. These convection patterns with high-degree modes are apparently inconsistent with the observations. Here we explore the possibility that low-degree convection under a stagnant-lid is induced by the depth-dependent viscosity due to a more viscous lower mantle. 
The dynamical effects of a stratified viscosity profile on mantle convection without lateral viscosity variations have been studied with two-dimensional (2-D) and 3-D Cartesian [e.g., {\it Hansen et al.}, 1993; {\it Tackley}, 1996b] and spherical shell [{\it Zhang and Yuen}, 1995; {\it Bunge et al.}, 1996; {\it Zhong et al}., 2000b] models. {\it Bunge et al.} [1996] have shown that a modest increase in the mantle viscosity with depth has a remarkable effect on the convection pattern, resulting in a long-wavelength structure. However, another important factor for the mantle viscosity, i.e., the strong dependence on temperature, was absent in their models. The purpose of this paper is to investigate the combined effects of (i) depth-dependence and (ii) strong temperature-dependence of the viscosity on the resulting convection pattern. \section{Simulation Model} Mantle convection is numerically treated as thermal convection in a 3-D spherical shell of a Boussinesq fluid with infinite Prandtl number heated from the bottom boundary. The aspect ratio of the spherical shell $\hat{r}_0/\hat{r}_1$ is 0.55, which is a characteristic value for the terrestrial planets, where $\hat{r}_0$ and $\hat{r}_1$ are the radii of the inner and outer spheres, respectively. The equations of mass, momentum, and energy conservation governing the mantle convection are scaled to a non-dimensional form as follows [e.g., {\it Schubert et al.}, 2001], \begin{equation} \mathbf{\nabla} \cdot \mathbf{v} = 0, \end{equation} \begin{equation} - \mathbf{\nabla} p + \mathbf{\nabla} \cdot \left\{ \eta \left( \mathbf{\nabla} \mathbf{v} + \mathbf{\nabla} \mathbf{v}^{tr} \right) \right\} + Ra_r T \mathbf{e}_r = 0, \end{equation} \begin{equation} \frac{\partial T}{\partial t} + \mathbf{v} \cdot \mathbf{\nabla} T = \mathbf{\nabla}^2 T + H_r, \end{equation} where $\mathbf{v}$ is the velocity vector, $p$ pressure, $T$ temperature, $t$ time, and $\mathbf{e}_r$ is the unit vector in the $r$-direction. 
The superscript $tr$ indicates the tensor transpose. The Rayleigh number $Ra$ scaled by the thickness of the spherical shell $\hat{D}$ is given by \begin{equation} Ra \equiv Ra_r \left( \frac{\hat{D}}{\hat{r_1}} \right)^3 = \frac{\hat{\rho} \hat{g} \hat{\alpha} \Delta \hat{T} \hat{D}^3}{\hat{\kappa} \hat{\eta}_{ref}}, \end{equation} where $\hat{\rho}$ is the density, $\hat{g}$ gravitational acceleration, $\hat{\alpha}$ thermal expansivity, $\Delta \hat{T} (= \hat{T}_{bot} - \hat{T}_{top}$) the temperature difference between the bottom temperature $\hat{T}_{bot}$ on the inner sphere and the top temperature $\hat{T}_{top}$ on the outer sphere, $\hat{\kappa}$ thermal diffusivity, and $\hat{\eta}_{ref}$ is the reference viscosity (see equation~(6) below). The hats stand for dimensional quantities. The internal heating rate $H$ scaled by the thickness of the spherical shell $\hat{D}$ is given by \begin{equation} H \equiv H_r \left( \frac{\hat{D}}{\hat{r_1}} \right)^2 = \frac{\hat{Q} \hat{D}^2}{\hat{\kappa} \hat{c}_p \Delta \hat{T}}, \end{equation} where $\hat{Q}$ is the internal heating rate per unit mass, and $\hat{c}_p$ is the specific heat at constant pressure. In this study, in order to focus on the effects of the temperature- and depth-dependent viscosity, all the material properties other than viscosity (such as thermal expansivity and thermal diffusivity) are assumed to be constant. The viscosity $\eta$ depends on the temperature $T$ and depth $d$ as \begin{equation} \eta(T, d) = \eta_{ref} (d) \exp \left[ -E \left( T - T_{ref} \right) \right], \end{equation} where $\eta_{ref} (d)$ is the viscosity at the reference temperature $T = T_{ref}$. The non-dimensional ``activation parameter'' $E$ represents the degree of viscosity contrast between the top and bottom surfaces. The velocity boundary conditions at the top and bottom surfaces of the spherical shell are impermeable and stress-free. 
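As a small numerical illustration of the viscosity law in equation (6), the sketch below evaluates the top-to-bottom viscosity contrast $\exp(E)$ for a depth-independent $\eta_{ref}$ with the reference temperature fixed at the bottom (the convention adopted later, in Section 3); the activation parameter is the value used for Case r7e6r.

```python
import math

def viscosity(T, E, eta_ref=1.0, T_ref=1.0):
    """Non-dimensional temperature-dependent viscosity, equation (6), with a
    depth-independent eta_ref and the reference temperature fixed at the
    bottom (T_ref = T_bot = 1)."""
    return eta_ref * math.exp(-E * (T - T_ref))

E = math.log(1.0e6)   # activation parameter used for Case r7e6r
# Viscosity contrast across the shell: gamma = eta(T_top)/eta(T_bot) = exp(E)
gamma_eta = viscosity(0.0, E) / viscosity(1.0, E)
print(f"E = {E:.4f}")              # 13.8155, as quoted in the text
print(f"gamma_eta = {gamma_eta:.3g}")   # 1e+06
```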
The boundary conditions for $T$ at the top and bottom surfaces are given by $T_{bot} = 1$ and $T_{top} = 0$. The basic equations (1)--(3) are solved by a second-order finite difference discretization. A kind of overset (Chimera) grid system, the Yin-Yang grid [{\it Kageyama and Sato}, 2004], is used for the computational grid (Figure~1). The Yin-Yang grid is composed of two component grids (the Yin grid and the Yang grid) that have exactly the same shape and size (Figure~1a). A component grid of the Yin-Yang grid is the low-latitude part of the usual latitude-longitude grid in spherical polar coordinates. The Yin-Yang grid is suitable for solving mantle convection problems because it automatically avoids the pole problems, i.e., the coordinate singularity and grid convergence that are inevitable in the usual latitude-longitude grid (Figure~1b). Following the general overset grid method, the data on the border of each component grid are matched by mutual interpolation. All the basic quantities---$\mathbf{v}$, $p$, $T$, and $\eta$---are spatially discretized and located at the same grid points (collocation grid method). The details of the Yin-Yang grid can be found in {\it Kageyama and Sato} [2004]. See our previous paper [{\it Yoshida and Kageyama}, 2004] for its application to mantle convection, with detailed benchmark and validation tests. The number of grid points in each component grid is $102~\times~54~\times~158$ (in the $r$-, $\theta$-, and $\phi$-directions). Thus the total grid size for the whole spherical shell is $102~\times~54~\times~158~\times~2$ (for the Yin and Yang grids). The convergence of the solutions was confirmed by changing the numerical resolution to $66~\times~33~\times~104~\times~2$. The time development of the convection is calculated until averaged quantities, such as the Nusselt number and the root-mean-square velocity, become stationary. \section{Results} The calculations carried out in this paper are summarized in Table~1. 
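As an aside on the grid geometry described above, the relation between the two identical component grids is simple enough to sketch in a few lines: in Cartesian coordinates the Yang frame is the Yin frame mapped by $(x, y, z) \to (-x, z, y)$ (the standard Yin-Yang relation of Kageyama and Sato [2004]), and the map is its own inverse. A minimal illustration:

```python
import math

def yin_to_yang(x, y, z):
    """Cartesian mapping between the Yin and Yang frames, (x, y, z) -> (-x, z, y).
    Applying it twice returns the original point (the map is an involution)."""
    return (-x, z, y)

def sph_to_cart(r, theta, phi):
    """Spherical polar (r, colatitude theta, longitude phi) to Cartesian."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# A point in the Yin patch (pi/4 <= theta <= 3pi/4, -3pi/4 <= phi <= 3pi/4):
p_yin = sph_to_cart(1.0, math.pi / 2, math.pi / 4)
p_yang = yin_to_yang(*p_yin)
assert yin_to_yang(*p_yang) == p_yin   # involution: Yang -> Yin recovers the point
```

Since each component grid covers only low latitudes in its own frame, the polar regions of one frame fall in the equatorial belt of the other, which is what removes the pole problem.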
\subsection{Constant viscosity and only temperature-dependent viscosity convections} Before we go into the details of the combined effects of the temperature- and depth-dependent viscosity on the convection, we study the phenomenology of convection pattern changes caused only by the temperature-dependent viscosity. Figure~2a shows a snapshot of the residual temperature of Case~{\tt r7e0} in Table~1, in which the viscosity is constant, i.e., $E=0$ in equation~(6). The Rayleigh number $Ra$ is $10^7$, which is about one order of magnitude smaller than the value for terrestrial planets. (Later, we will show that the convective pattern is unchanged even when the Rayleigh number is increased to 10$^8$ in Case~{\tt r8e6w2}.) The thermal structure of Figure~2a shows a typical pattern observed in the 3-D spherical shell geometry. The convective flow is composed of narrow, cylindrical upwelling (hot) plumes surrounded by a network of long downwelling (cold) sheets. This structure is common for purely bottom-heated convection. To analyze the spatial structure, the power spectrum in spherical harmonics $Y_{\ell}^m$ of the temperature field is plotted in the right panels of Figure~2. The small-scale structure ($\ell \geq 10$) is dominant in the middle depth, and the large-scale structure ($\ell \leq 6$) is dominant near the top and bottom surfaces, associated with the thermal boundary layers. The radial profile of the horizontally averaged temperature is shown in Figure~3a. The volume-averaged temperature is 0.26 in this case. As expected, compared with convections with a high Rayleigh number and strong internal heating [{\it Bunge et al.}, 1996; {\it Yoshida et al.}, 1999; {\it Zhong et al.}, 2000b], the thermal structure of this purely bottom-heated convection is dominated by a considerably long-wavelength structure. Figure~2b shows the results of Case~{\tt r7e6r}, where the reference temperature $T_{ref} = 0.5$. 
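The degree-wise power spectra referred to above follow the usual convention: the power at degree $\ell$ is the sum over orders $m$ of the squared harmonic coefficients at a given depth. A minimal sketch (the coefficient values below are invented purely for illustration):

```python
def degree_power(coeffs):
    """Degree-wise power of a spherical-harmonic expansion.
    coeffs maps (l, m) -> complex coefficient a_lm;
    returns a dict mapping l -> sum over m of |a_lm|^2."""
    power = {}
    for (l, m), a in coeffs.items():
        power[l] = power.get(l, 0.0) + abs(a) ** 2
    return power

# Hypothetical coefficients with most power at l = 2 (a long-wavelength field):
a_lm = {(1, 0): 0.10, (2, 0): 0.50, (2, 1): 0.40 + 0.20j, (6, 3): 0.05}
spectrum = degree_power(a_lm)
peak_degree = max(spectrum, key=spectrum.get)
print(peak_degree)   # -> 2
```

The "peak mode at each depth" tracked later in Figure 7b is exactly this kind of arg-max over $\ell$ of the degree-wise power.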
The activation parameter is taken to be $E = \ln (10^6) = 13.8155$. The spectrum of the temperature field of Case~{\tt r7e6r} (right panel of Figure~2b) shows that the power is concentrated around $\ell = 6$--$10$ throughout the depth, which is associated with convecting cells under the stagnant-lid. For this case, the volume-averaged temperature is 0.72, which is larger than that of the constant viscosity convection (Figure~3b). In our previous paper [{\it Yoshida and Kageyama}, 2004], we did not report cases in which the viscosity of the mantle materials has a strong temperature dependence. The regime of the flow state under strongly temperature-dependent viscosity in spherical shell convection was examined by {\it Ratcliff et al.} [1996; 1997]. For comparison with the previous works, the reference temperature $T_{ref}$ in equation~(6) is fixed to the bottom temperature $T_{bot}$ in the following. Therefore, the viscosity $\eta_{ref}$ is now the viscosity at the bottom. The viscosity contrast across the spherical shell is defined by $\gamma_{\eta} \equiv \eta(T_{top}) / \eta(T_{bot}) = \exp(E)$. Shown in Figure~4a is a regime diagram for the convective flow pattern, with approximate regime boundaries drawn. Our simulation results for $Ra_{bot} = 10^{6}$--$10^{7}$ are shown in this diagram. The previous results by {\it Ratcliff et al.} [1997] (3-D Cartesian and spherical shell models) and {\it Trompert and Hansen} [1998] (3-D Cartesian model) are also included in the diagram. Our results basically support the previous results by {\it Ratcliff et al.} [1996; 1997]: the convecting pattern is classified into three regimes defined by {\it Solomatov} [1995] in order of increasing $\gamma_{\eta}$; the ``mobile-lid'' regime (Figure~4b); the ``sluggish-lid'' regime (Figure~4c); and the ``stagnant-lid'' regime (Figure~4d). A moderate viscosity contrast ($\gamma_{\eta} = 10^3$--$10^4$) produces large-scale convection, or the sluggish-lid regime. 
In our previous paper [{\it Yoshida and Kageyama}, 2004], we showed that the convection at $Ra_{bot} = 10^6$ and $\gamma_{\eta} = 10^4$ (Case~{\tt r6e4}) has a two-cell pattern that consists of one downwelling and two cylindrical upwellings (Figure~4b) [{\it Ratcliff et al.}, 1995; 1996; 1997; {\it Zhong et al}., 2000b; {\it Yoshida and Kageyama}, 2004; {\it Stemmer et al.}, 2004; {\it McNamara and Zhong}, 2005a]. In contrast, at $Ra_{bot} = 10^7$ and $\gamma_{\eta} = 10^4$ (Case~{\tt r7e4}), the convection comes to have a degree-one pattern: a one-cell structure that consists of a pair of cylindrical downwelling and upwelling plumes (Figure~4c). This indicates that the convecting structure in the sluggish-lid regime is sensitive to the Rayleigh number. The convective flow pattern that belongs to the stagnant-lid regime emerges when $\gamma_{\eta} \geq 10^5$. The stagnant-lid inhibits the heat flux through the top boundary and leads to a small temperature difference in the mantle below the lid. The characteristic horizontal thermal structure has short wavelengths comparable to the thickness of the mantle (Figures~4d and 4e). This convective pattern in the stagnant-lid regime is also observed in previous results in a 3-D spherical shell geometry [e.g., {\it Reese et al.}, 1999]. This convective feature would be caused by secondary downwelling plumes detaching from the base of the stagnant-lid. At $\gamma_{\eta} \geq 10^6$ (Case~{\tt r7e6}), the connected network of sheet-like downwellings reaches the mid-depth of the convecting layer (Figure~4d). When $\gamma_{\eta}$ is further increased to $10^8$ (Cases~{\tt r7e8} and {\tt r7eA}), the stagnant-lid becomes rather thick, and we clearly observe large, mushroom-shaped upwelling plumes (Figure~4e). 
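The regime boundaries discussed above can be summarized in a few lines of code. The thresholds below are the approximate values read off this paper's regime diagram for $Ra_{bot} = 10^6$--$10^7$; they are illustrative and shift with the Rayleigh number, so this is a sketch rather than a universal classifier.

```python
def convection_regime(gamma_eta):
    """Rough classification by the top-to-bottom viscosity contrast gamma_eta,
    following the approximate regime boundaries of Figure 4a (illustrative
    thresholds for Ra_bot ~ 1e6-1e7; they are Ra-dependent)."""
    if gamma_eta < 1e3:
        return "mobile-lid"
    if gamma_eta < 1e5:
        return "sluggish-lid"
    return "stagnant-lid"

for g in (1e2, 1e4, 1e6):
    print(f"gamma_eta = {g:g}: {convection_regime(g)}")
```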
\subsection{Both temperature- and depth-dependent viscosity convections} To investigate the transition in the convective pattern caused by adding the depth-dependent viscosity (a viscosity stratification), we examine two kinds of viscosity profiles. First, we examine cases in which the viscosity jumps at the phase transition boundary between the upper and lower mantle. Second, we examine cases in which the viscosity smoothly increases with depth in the lower mantle. The ratio of the thicknesses of the lower and upper mantle, $d_L/d_U$, is 3.39, comparable to that in Earth's mantle. Since the actual viscosity contrast with depth in the terrestrial planets is not fully constrained, we take it as a parameter within a plausible range, $10^{1.5}$ ($\approx 30$) $\leq$ $\eta_L/ \eta_{ref}$ $\leq$ $10^{2.5}$ ($\approx 300$) [e.g., {\it Davies and Richards}, 1992; {\it Karato}, 2003], where $\eta_L$ is the viscosity of the lower mantle. In six cases (Cases~{\tt r7e6v1} to {\tt r7e6v3}, and Cases~{\tt r7e6w1} to {\tt r7e6w3}), the initial condition is taken from the stationary state of Case~{\tt r7e6r}, shown in Figure~2b. The reference temperature $T_{ref}$ in equation~(6) is fixed at 0.5. The Rayleigh number defined by $\eta_{ref}$ is fixed at 10$^{7}$. Shown in Figure~5 are the results of the three cases (Cases~{\tt r7e6v1}, {\tt r7e6v2}, and {\tt r7e6v3}) in which the viscosity jumps at the upper/lower mantle boundary. Figure~5a shows a snapshot of the residual temperature of Case~{\tt r7e6v1} with $\eta_L/\eta_{ref} = 10^{1.5}$. Compared with the convection in which the viscosity depends only on the temperature (Figure~2b), we find that the convective flow pattern obviously has a longer length scale. The thermal spectrum indicates a shift to smaller degrees, and the peak is located between $\ell = 2$ and $\approx 10$. 
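The two kinds of reference-viscosity profiles $\eta_{ref}(d)$ introduced in this subsection can be sketched as follows. The step profile corresponds to Cases r7e6v1--v3; for the smooth profile (Cases r7e6w1--w3) the text specifies only the total contrast $\Delta\eta_L$ between the upper/lower mantle boundary and the bottom, so the exponential form used here is our assumption for illustration.

```python
import math

D_RATIO = 3.39                  # d_L / d_U from the text
D_UM = 1.0 / (1.0 + D_RATIO)    # non-dimensional depth of the boundary (~0.23)

def eta_ref_step(d, jump):
    """Viscosity jump at the upper/lower mantle boundary (Cases r7e6v*).
    d runs from 0 (top) to 1 (bottom); jump = eta_L / eta_ref."""
    return 1.0 if d < D_UM else jump

def eta_ref_smooth(d, contrast):
    """Assumed exponential increase through the lower mantle (Cases r7e6w*),
    from 1 at the boundary to `contrast` at the bottom (d = 1)."""
    if d < D_UM:
        return 1.0
    return math.exp(math.log(contrast) * (d - D_UM) / (1.0 - D_UM))

print(eta_ref_step(0.5, 10**2.5))     # mid lower mantle: ~316
print(eta_ref_smooth(1.0, 10**2.5))   # bottom: ~316
```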
As $\eta_L/\eta_{ref}$ is further increased (Case~{\tt r7e6v2}, shown in Figure~5b, and Case~{\tt r7e6v3}, in Figure~5c), the thermal structure significantly shifts to lower modes. The power spectrum shows a concentration in $\ell \le 6$, with a peak at $\ell = 2$--$4$ for $\eta_L/\eta_{ref} = 10^{2.0}$ (Figure~5b) and $\ell = 2$--$3$ for $\eta_L/\eta_{ref} = 10^{2.5}$ (Figure~5c), throughout the depth. As the amount of the viscosity jump $\eta_L/\eta_{ref}$ increases, the temperature drop in the bottom thermal boundary layer grows, which leads to a lower internal temperature of the mantle (Figure~6). To see this spatial-scale change of the convection caused by the depth-dependent viscosity in more detail, we analyzed the time sequences of the Nusselt number, the root-mean-square velocity averaged over the entire mantle, and the peak mode at each depth for Case~{\tt r7e6v3} with $\eta_L/\eta_{ref} = 10^{2.5}$. At the initial stage of the simulation run, the convection is dominated by $\ell = 7$--$9$ modes throughout the depth, which reflects the initial condition (Figure~2b). As time goes on, the convective flow reaches a saturated state (Figure~7a), and the low-degree component develops from the upper part to the middle part of the mantle; the peak mode shifts from $\ell = 9$ to $3$ there (Figure~7b). This indicates that the stagnant-lid is broken, and the convection cells are then re-organized into a convection state with low modes. To compare with the observations, we have calculated the geoid anomaly for Cases~{\tt r7e6v1} to {\tt r7e6v3}. We followed the method for calculating the geoid anomaly described in {\it Hager and Clayton} [1989]. The physical parameters used in the calculation are set to values possibly relevant to Venus (Table~2). Figure~8 shows the distribution of the calculated geoid anomaly where $\eta_{L}/\eta_{ref}$ is (a) 1 (i.e., no viscosity stratification), (b) $10^{1.5}$, (c) $10^{2.0}$, and (d) $10^{2.5}$. 
The results are shown in spherical harmonic modes up to $\ell=24$. Figure~8e shows the power spectrum for each case. The mode amplitude with the viscosity stratification peaks at $\ell = 2$--$4$. When the stratified viscosity is absent (Figure~8a), the $\ell = 5$--$10$ modes are strong (see the arrow in Figure~8e). On the other hand, as $\eta_{L}/\eta_{ref}$ increases (Figures~8c and 8d), the power spectrum peaks at $\ell = 2$ and the higher-degree components ($\ell \ge 10$) are remarkably decreased. This is consistent with the spectrum constructed from the observed geoid anomaly of Venus [{\it Konopliv et al.}, 1999] (Figure~8e). Next, we investigate the cases in which the viscosity increases smoothly with depth rather than as a jump. In the three cases (Cases~{\tt r7e6w1}, {\tt r7e6w2}, and {\tt r7e6w3}) shown in Figure~9, the viscosity contrast between the upper/lower mantle boundary and the bottom of the mantle is $\Delta \eta_L =$ (a) $10^{1.5}$, (b) $10^{2}$, and (c) $10^{2.5}$. The initial condition is again the state shown in Figure~2b. We see from both the residual temperature and the spectrum that the dominant power is concentrated at the smaller degrees in all the cases. At $\Delta \eta_L = 10^{1.5}$--$10^{2}$, the peak is located at $\ell = 1$, or one-cell convection (Figures~9a and 9b). On the other hand, at $\Delta \eta_L = 10^{2.5}$, the peak is located at $\ell = 2$, or two-cell convection (Figure~9c). The horizontally averaged temperature and viscosity profiles are shown in Figure~10. Note that the viscosity contrast in the lid is almost identical among the three cases (see the arrow in the right panel of Figure~10). This suggests that the transition between degree-one and degree-two convection is sensitive to the magnitude of the viscosity stratification. We found that the patterns (degree-one or degree-two) are not affected by an increase of $E$ up to 8 (Case~{\tt r7e8w2}) or to 10 (Case~{\tt r7eAw2}). 
This pattern is also unchanged when internal heating is included ($H=20$) (Case~{\tt r7e6w2h}), or when the Rayleigh number is increased to 10$^{8}$ (Case~{\tt r8e6w2}). The patterns $\ell = 1$ or $2$ are mainly controlled by the viscosity contrast $\Delta \eta_L$. \section{Conclusions and Discussion} Convection with strongly temperature-dependent viscosity under the stress-free boundary condition has a short-wavelength structure when the depth-dependent viscosity is ignored. This feature is inconsistent with the convection inferred from the geodetic observations of single-plate planets like Venus and Mars. We have found that the combination of temperature- and depth-dependent viscosity produces convection with spherical harmonic degrees $\ell = 1$--$4$. The geoid anomaly calculated from these simulation data also has a large scale length, which is consistent with the observations. {\it Schubert et al.} [1990; 1997] have shown that convection with a rigid boundary condition on the top surface can lead to $\ell = 1$--$3$ structures. In their models, however, the viscosity of the fluid is spatially constant. Our finding is that, by considering more realistic viscosity profiles, the low-degree pattern can be reproduced in a convection model with a stress-free boundary condition on the top surface. Previous convection models without temperature-dependent viscosity [e.g., {\it Hansen et al.}, 1993; {\it Zhang and Yuen}, 1995; {\it Bunge et al.}, 1996] have already produced large-scale flow patterns by considering the viscosity stratification. This could be explained by the enhanced value of the viscosity in the lower mantle. In our model with strongly temperature-dependent viscosity, the large-scale convection seems to be realized by the change of the convecting regime, from the stagnant-lid regime into the sluggish-lid regime, which is caused by the viscosity stratification. 
A major difference between their results and ours is that a highly viscous lid is naturally formed on the top owing to the inclusion of the temperature-dependent viscosity effect in this study. To date, several mechanisms have been proposed for the degree-one convection of the Martian mantle: for example, the endothermic phase transition just above the core-mantle boundary in the Martian mantle with the rigid boundary condition [{\it Harder and Christensen}, 1996; {\it Breuer et al.}, 1998; {\it Harder}, 1998], and an a priori highly viscous lid [{\it Harder}, 2000] on the top surface boundary without any phase transitions. A small core, in other words a thicker convecting mantle shell, may lead to the degree-one convection in ancient Mars [{\it Schubert et al.}, 1990] and the Moon [{\it Zhong et al.}, 2000a]. {\it McNamara and Zhong} [2005a] have recently found that internal heating plays a role in increasing the flow wavelength and forming degree-one convection in convection in which the viscosity moderately depends on temperature. One of our findings in this paper is that degree-one convection can be relatively easily reproduced when both the temperature- and depth-dependence of the viscosity are taken into account. Although degree-one convection appears even when the depth-dependence is absent (Figure~4c), the parameter range for this pattern is rather narrow; it is sensitive to the Rayleigh number. On the other hand, when the viscosity in the lower mantle is continuously increased with depth, degree-one ($\ell = 1$) convection like that of the Martian mantle is realized in a wide range of viscosity contrasts, from 30 to 100. It is an interesting possibility that a transition of the convecting pattern between low-degree and degree-one modes took place in the planets. We have not directly observed such a transition of the convecting mode in our simulations. 
The physical parameters ($Ra$ and/or $E$ in this study) that characterize the convective pattern are fixed in our simulations. However, we would like to point out again the drastic difference between the convection patterns under relatively close conditions: the convection of degree-two at $Ra = 10^6$ (Figure~4b) [{\it Yoshida and Kageyama}, 2004], and the convection of degree-one at $Ra = 10^7$ (Figure~4c). This sensitive change has not been reported so far. Our simulation results cannot be directly applied to the Earth's mantle, because the effects of plate tectonics would be comparable to the effects of the depth-dependent viscosity, as proposed by {\it Bunge and Richards} [1996] and {\it Bunge et al.} [1996; 1998] from their models without temperature-dependent viscosity. The existence of a stationary continental lithosphere [{\it Yoshida et al.}, 1999], a drifting continental lithosphere [{\it Phillips and Bunge}, 2005], or plate motion on the top surface boundary [{\it Zhong et al}., 2000b] also transforms the small-scale convection patterns of high-Rayleigh-number convection into large-scale patterns. \begin{acknowledgments} The authors are grateful to two anonymous reviewers for helpful comments. All the simulations were carried out on the Earth Simulator at the Japan Agency for Marine-Earth Science and Technology. A part of the figures in this paper was produced using the Generic Mapping Tools (GMT) released by P. Wessel and W. H. F. Smith (1998). \end{acknowledgments} \section*{References} \begin{description} \item[] Breuer, D., D. A. Yuen, T. Spohn, and S. Zhang (1998), Three dimensional models of Martian convection with phase transitions, Geophys. Res. Lett. \textit{25}(3), 229--232. \item[] Bunge, H. -P., and M. A. Richards (1996), The origin of long-wavelength structure in mantle convection: effects of plate motions and viscosity stratification, Geophys. Res. Lett. \textit{23}(21), 2987--2990. \item[] Bunge, H. -P., M. A. Richards, and J. R. 
Baumgardner (1996), Effect of depth-dependent viscosity on the planform of mantle convection, \textit{Nature}, \textit{379}, 436--438. \item[] Bunge, H. -P., M. A. Richards, C. Lithgow-Bertelloni, J. R. Baumgardner, S. Grand, and B. Romanowicz (1998), Time scales and heterogeneous structure in geodynamic earth models, \textit{Science}, \textit{280}, 91--95. \item[] Davies, G. F., and M. A. Richards (1992), Mantle convection, \textit{J. Geol.}, \textit{100}, 151--206. \item[] Hager, B. H., and R. W. Clayton (1989), Constraints on the structure of mantle convection using seismic observations, flow models and the geoid. In: Peltier, W. R. (Ed.), \textit{Mantle Convection}. Gordon and Breach, New York, pp. 657--763. \item[] Hansen, U., D. A. Yuen, S. E. Kroening, and T. B. Larsen (1993), Dynamical consequences of depth-dependent thermal expansivity and viscosity on mantle circulations and thermal structure, Phys. Earth Planet. Inter. \textit{77}, 205--223. \item[] Harder, H. (1998), Phase transitions and the three-dimensional planform of thermal convection in the Martian mantle, J. Geophys. Res. \textit{103}(E7), 16775--16797. \item[] Harder, H. (2000), Mantle convection and the dynamic geoid of Mars, Geophys. Res. Lett. \textit{27}(3), 301--304. \item[] Harder, H., and U. R. Christensen (1996), A one-plume model of Martian mantle convection, \textit{Nature}, \textit{380}, 507--509. \item[] Kageyama, A., and T. Sato (2004), The ``Yin-Yang grid'': An overset grid in spherical geometry, \textit{Geochem.\ Geophys.\ Geosyst.}, \textit{5}(9), Q09005, doi:10.1029/2004GC000734. \item[] Karato, S. (2003), \textit{The Dynamic Structure of the Deep Earth: An Interdisciplinary Approach}, 241 pp., Princeton University Press. \item[] Konopliv, A. S., W. B. Banerdt, and W. L. Sjogren (1999), Venus gravity: 180th degree and order model, \textit{Icarus}, \textit{139}, 3--18. \item[] McNamara, A. K., and S. 
Zhong (2005a), Degree-one mantle convection: Dependence on internal heating and temperature-dependent rheology, Geophys. Res. Lett. \textit{32}, L01301, doi:10.1029/2004GL021082. \item[] McNamara, A. K., and S. Zhong (2005b), Thermochemical structures beneath Africa and the Pacific Ocean, \textit{Nature}, \textit{437}, 1136--1139. \item[] Phillips, B. R., and H. -P. Bunge (2005), Heterogeneity and time dependence in 3D spherical mantle convection models with continental drift, Earth Planet. Sci. Lett. \textit{233}, 121--135. \item[] Rappaport, N. J., A. S. Konopliv, and A. B. Kucinskas (1999), An improved 360 degree and order model of Venus topography, \textit{Icarus}, \textit{139}, 19--31. \item[] Ratcliff, J. T., G. Schubert, and A. Zebib (1995), Three-dimensional variable viscosity convection of an infinite Prandtl number Boussinesq fluid in a spherical shell, Geophys. Res. Lett. \textit{22}(16), 2227--2230. \item[] Ratcliff, J. T., G. Schubert, and A. Zebib (1996), Effects of temperature-dependent viscosity on thermal convection in a spherical shell, \textit{Physica D}, \textit{97}, 242--252. \item[] Ratcliff, J. T., P. J. Tackley, G. Schubert, and A. Zebib (1997), Transitions in thermal convection with strongly variable viscosity, Phys. Earth Planet. Inter. \textit{102}, 201--212. \item[] Reese, C. C., V. S. Solomatov, J. R. Baumgardner, and W. -S. Yang (1999), Stagnant lid convection in a spherical shell, Phys. Earth Planet. Inter. \textit{116}, 1--7. \item[] Schubert, G., D. Bercovici, and G. A. Glatzmaier (1990), Mantle dynamics in Mars and Venus: Influence of an immobile lithosphere on three dimensional mantle convection, J. Geophys. Res. \textit{95}(B9), 14105--14129. \item[] Schubert, G., D. L. Turcotte, and P. Olson (2001), \textit{Mantle Convection in the Earth and Planets}, 940 pp., Cambridge Univ. Press, New York. \item[] Schubert, G., V. S. Solomatov, P. J. Tackley, and D. L. 
Turcotte (1997), Mantle convection and the thermal evolution of Venus, in \textit{Venus II - Geology, Geophysics, Atmosphere, and Solar Wind Environment}, edited by S. W. Bougher, D. M. Hunten, and R. J. Phillips, pp. 1245--1288, University of Arizona Press, Tucson, Arizona.
\item[] Sleep, N. H. (1994), Martian plate tectonics, \textit{J. Geophys. Res.}, \textit{99}(25), 5639--5655.
\item[] Smith, D. E., W. L. Sjogren, G. L. Tyler, G. Balmino, F. G. Lemoine, and A. S. Konopliv (1999a), The gravity field of Mars: Results from Mars Global Surveyor, \textit{Science}, \textit{286}, 94--97.
\item[] Smith, D. E., M. T. Zuber, S. C. Solomon, R. J. Phillips, J. W. Head, J. B. Garvin, W. B. Banerdt, D. O. Muhleman, G. H. Pettengill, G. A. Neumann, F. G. Lemoine, J. B. Abshire, O. Aharonson, C. D. Brown, S. A. Hauck, A. B. Ivanov, P. J. McGovern, H. J. Zwally, and T. C. Duxbury (1999b), The global topography of Mars and implications for surface evolution, \textit{Science}, \textit{284}, 1495--1503.
\item[] Solomatov, V. S. (1995), Scaling of temperature- and stress-dependent viscosity convection, \textit{Phys. Fluids}, \textit{7}(2), 266--274.
\item[] Solomatov, V. S., and L.-N. Moresi (1996), Stagnant lid convection on Venus, \textit{J. Geophys. Res.}, \textit{101}(E2), 4737--4753.
\item[] Stemmer, K., H. Harder, and U. Hansen (2004), Thermal convection in a 3D spherical shell with strongly temperature- and pressure-dependent viscosity, \textit{Eos Trans. AGU}, \textit{85}(47), Fall Meet. Suppl., Abstract T11E-1331.
\item[] Tackley, P. J. (1996a), Effects of strongly variable viscosity on three-dimensional compressible convection in planetary mantles, \textit{J. Geophys. Res.}, \textit{101}(B2), 3311--3332.
\item[] Tackley, P. J. (1996b), On the ability of phase transitions and viscosity layering to induce long wavelength heterogeneity in the mantle, \textit{Geophys. Res. Lett.}, \textit{23}(15), 1985--1988.
\item[] Turcotte, D. L., and G. Schubert (2002), \textit{Geodynamics}, 2nd ed., 456 pp., Cambridge Univ. Press, U.K.
\item[] Trompert, R. A., and U. Hansen (1998), On the Rayleigh number dependence of convection with a strongly temperature-dependent viscosity, \textit{Phys.\ Fluids}, \textit{10}, 351--360.
\item[] Wessel, P., and W. H. F. Smith (1998), New, improved version of the Generic Mapping Tools released, \textit{Eos Trans. AGU}, \textit{79}, 579.
\item[] Yoshida, M., and A. Kageyama (2004), Application of the Yin-Yang grid to a thermal convection of a Boussinesq fluid with infinite Prandtl number in a three-dimensional spherical shell, \textit{Geophys. Res. Lett.}, \textit{31}(12), L12609, doi:10.1029/2004GL019970.
\item[] Yoshida, M., Y. Iwase, and S. Honda (1999), Generation of plumes under a localized high viscosity lid on 3-D spherical shell convection, \textit{Geophys. Res. Lett.}, \textit{26}(7), 947--950.
\item[] Zhang, S., and D. A. Yuen (1995), The influences of lower mantle viscosity stratification on 3D spherical-shell mantle convection, \textit{Earth Planet. Sci. Lett.}, \textit{132}, 157--166.
\item[] Zhong, S., and M. T. Zuber (2001), Degree-1 mantle convection and Martian crustal dichotomy, \textit{Earth Planet. Sci. Lett.}, \textit{189}, 75--84.
\item[] Zhong, S., E. M. Parmentier, and M. T. Zuber (2000a), A dynamic origin for the global asymmetry of lunar mare basalts, \textit{Earth Planet. Sci. Lett.}, \textit{177}, 131--140.
\item[] Zhong, S., M. T. Zuber, L. Moresi, and M. Gurnis (2000b), Role of temperature-dependent viscosity and surface plates in spherical shell models of mantle convection, \textit{J. Geophys. Res.}, \textit{105}(B5), 11063--11082.
\end{description}

\begin{figure}
\noindent\includegraphics[width=15pc]{fig01.eps}
\caption{
The Yin-Yang grid. The two component grids of the Yin-Yang grid are identical (the same shape and size): (a) the low-latitude part $(\pi/4 \le \theta \le 3\pi/4, -3\pi/4 \le \phi \le 3\pi/4)$ of the latitude-longitude grid; (b) the two grids partially overlap each other on their borders to cover a spherical surface as a pair.
As is apparent, the Yin-Yang grid has neither a coordinate singularity nor grid convergence; the grid spacings are quasi-uniform on the sphere.
}
\end{figure}

\begin{figure*}
\noindent\includegraphics[width=20pc]{fig02.eps}
\caption{
Iso-surfaces of the residual temperature $\delta T$ (i.e., the deviation from the horizontally averaged temperature at each depth), and the power spectra of the spherical harmonics of the temperature field at each depth, for (a) Case~{\tt r7e0} with constant viscosity (i.e., $E = 0$) and (b) Case~{\tt r7e6r} with strongly temperature-dependent viscosity ($E = \ln 10^6$). Blue iso-surfaces indicate $\delta T = $ (a) $-0.10$ and (b) $-0.15$; yellow iso-surfaces indicate $\delta T = $ (a) $+0.10$ and (b) $+0.15$. The logarithmic power spectra are normalized by the maximum value at each depth. White regions in the maps indicate values lower than $10^{-2}$ (see color bars).
}
\end{figure*}

\begin{figure}
\noindent\includegraphics[width=20pc]{fig03.eps}
\caption{
Radial profiles of the horizontally averaged temperature at each depth. Panels (a) and (b) correspond to the cases shown in Figure 2 (Cases~{\tt r7e0} and {\tt r7e6r}, respectively).
}
\end{figure}

\begin{figure*}
\noindent\includegraphics[width=30pc]{fig04.eps}
\caption{
(a) The three convection regimes with varying Rayleigh number ($Ra_{bot}$) and viscosity contrast across the shell ($\gamma_{\eta}$): the mobile-lid regime (circles), the sluggish-lid regime (triangles), and the stagnant-lid regime (squares). Solid marks show our calculations. Open marks show results from the 3-D Cartesian box and spherical shell models of \textit{Ratcliff et al.} [1997]. Gray marks show results from the 3-D Cartesian box models of \textit{Trompert and Hansen} [1998]. The regime boundary (dashed curve) between the convecting and non-convecting regimes follows the review by \textit{Schubert et al.} [2001].
The dashed lines show the approximate boundaries that separate the three convection regimes. (b)--(d) Iso-surface renderings of the residual temperature for the cases marked in (a): (b) $Ra_{bot} = 10^6$ and $\gamma_{\eta} = 10^4$ (Case~{\tt r6e4}), (c) $Ra_{bot} = 10^7$ and $\gamma_{\eta} = 10^4$ (Case~{\tt r7e4}), and (d) $Ra_{bot} = 10^7$ and $\gamma_{\eta} = 10^6$ (Case~{\tt r7e6}). Blue iso-surfaces indicate (b) $\delta T = -0.20$, (c) $-0.25$, and (d) $-0.10$; yellow iso-surfaces indicate (b) $\delta T = +0.40$, (c) $+0.25$, and (d) $+0.10$. The red spheres show the bottom boundary of the mantle. (e) The temperature distribution on a cross section for the case where $Ra_{bot} = 10^7$ and $\gamma_{\eta} = 10^8$ (Case~{\tt r7e8}).
}
\end{figure*}

\begin{figure}
\noindent\includegraphics[width=20pc]{fig05.eps}
\caption{
Iso-surfaces of the residual temperature ($\delta T$) and the power spectra of the spherical harmonics of the temperature field at each depth for the cases where $\eta_{L}/\eta_{ref} =$ (a) $10^{1.5}$, (b) $10^{2.0}$, and (c) $10^{2.5}$ (Cases~{\tt r7e6v1}, {\tt r7e6v2}, and {\tt r7e6v3}, respectively). Blue iso-surfaces indicate (a) $\delta T = -0.20$, (b) $-0.25$, and (c) $-0.25$; yellow iso-surfaces indicate (a) $\delta T = +0.20$, (b) $+0.25$, and (c) $+0.25$. The logarithmic power spectra are normalized by the maximum value at each depth. White regions in the maps indicate values lower than $10^{-2}$ (see color bars).
}
\end{figure}

\begin{figure}
\noindent\includegraphics[width=20pc]{fig06.eps}
\caption{
Radial profiles of the horizontally averaged temperature (left) and the horizontally averaged viscosity (right) at each depth. The four cases (a)--(d) correspond to $\eta_{L}/\eta_{ref} =$ (a) $1$ (i.e., no viscosity stratification), (b) $10^{1.5}$, (c) $10^{2.0}$, and (d) $10^{2.5}$ (Cases~{\tt r7e6v1}, {\tt r7e6v2}, and {\tt r7e6v3} for (b)--(d), respectively).
}
\end{figure}

\begin{figure*}
\noindent\includegraphics[width=30pc]{fig07.eps}
\caption{
Time sequences of (a) the Nusselt number (dashed line) and the root-mean-square velocity averaged over the entire mantle (solid line), and (b) the maximum power spectrum at each depth. The spherical harmonic degrees ($\ell$) are analyzed up to $\ell = 10$.
}
\end{figure*}

\begin{figure}
\noindent\includegraphics[width=18pc]{fig08.eps}
\caption{
Contour plots of the distribution of the geoid anomaly for the cases where $\eta_{L}/\eta_{ref} =$ (a) 1 (i.e., no viscosity stratification), (b) $10^{1.5}$, (c) $10^{2.0}$, and (d) $10^{2.5}$. The results are shown by the spherical harmonic expansion up to $\ell = 24$. The spectra are normalized by the maximum value at each degree. (e) The power spectra of the calculated geoid anomaly for each case (thin colored lines) and of the observed geoid anomaly from the data of \textit{Konopliv et al.} [1999] (thick black line). The spectra are normalized by the maximum value over all degrees.
}
\end{figure}

\begin{figure}
\noindent\includegraphics[width=20pc]{fig09.eps}
\caption{
Iso-surfaces of the residual temperature ($\delta T$) and the power spectra of the spherical harmonics of the temperature field at each depth for the cases where $\Delta \eta_L = $ (a) $10^{1.5}$, (b) $10^{2.0}$, and (c) $10^{2.5}$ (Cases~{\tt r7e6w1}, {\tt r7e6w2}, and {\tt r7e6w3}, respectively). Blue iso-surfaces indicate (a) $\delta T = -0.30$, (b) $-0.30$, and (c) $-0.20$; yellow iso-surfaces indicate (a) $\delta T = +0.30$, (b) $+0.30$, and (c) $+0.20$. The logarithmic power spectra are normalized by the maximum value at each depth. White regions in the maps indicate values lower than $10^{-2}$ (see color bars).
}
\end{figure}

\begin{figure}
\noindent\includegraphics[width=20pc]{fig10.eps}
\caption{
Radial profiles of the horizontally averaged temperature (left) and the horizontally averaged viscosity (right) at each depth for the cases where $\Delta \eta_L =$ (a) $1$ (i.e., no viscosity stratification), (b) $10^{1.5}$, (c) $10^{2.0}$, and (d) $10^{2.5}$ (Cases~{\tt r7e6w1}, {\tt r7e6w2}, and {\tt r7e6w3} for (b)--(d), respectively).
}
\end{figure}

\begin{table*}
\caption{List of runs employed in this study}
\label{symbols}
\begin{tabular}{llllllll}
\hline
Case Name & $Ra$ & $T_{ref}$ & $E$ & $\eta_L/\eta_U$ & $\Delta \eta_L$ & I.C.$^{*1}$ & Corresponding figures \\
\hline
{\tt r6e0} & $10^6$ & -- & $\ln 10^0$ & -- & -- & -- & -- \\
{\tt r6e1} & $10^6$ & 1.0 & $\ln 10^1$ & -- & -- & {\tt r6e0} & -- \\
{\tt r6e2} & $10^6$ & 1.0 & $\ln 10^2$ & -- & -- & {\tt r6e1} & -- \\
{\tt r6e3} & $10^6$ & 1.0 & $\ln 10^3$ & -- & -- & {\tt r6e2} & -- \\
{\tt r6e4} & $10^6$ & 1.0 & $\ln 10^4$ & -- & -- & {\tt r6e3} & Fig. 4b \\
{\tt r7e0} & $10^7$ & -- & $\ln 10^0$ & -- & -- & {\tt r6e0} & Figs. 2a and 3 \\
{\tt r7e1} & $10^7$ & 1.0 & $\ln 10^1$ & -- & -- & {\tt r7e0} & -- \\
{\tt r7e2} & $10^7$ & 1.0 & $\ln 10^2$ & -- & -- & {\tt r7e1} & -- \\
{\tt r7e3} & $10^7$ & 1.0 & $\ln 10^3$ & -- & -- & {\tt r7e2} & -- \\
{\tt r7e4} & $10^7$ & 1.0 & $\ln 10^4$ & -- & -- & {\tt r7e3} & Fig. 4c \\
{\tt r7e5} & $10^7$ & 1.0 & $\ln 10^5$ & -- & -- & {\tt r7e4} & -- \\
{\tt r7e6} & $10^7$ & 1.0 & $\ln 10^6$ & -- & -- & {\tt r7e5} & Fig. 4d \\
{\tt r7e8} & $10^7$ & 1.0 & $\ln 10^8$ & -- & -- & {\tt r7e6} & Fig. 4e \\
{\tt r7eA} & $10^7$ & 1.0 & $\ln 10^{10}$ & -- & -- & {\tt r7e6} & -- \\
{\tt r7e6r} & $10^7$ & 0.5 & $\ln 10^6$ & -- & -- & {\tt r7e6} & Figs. 2b and 3 \\
{\tt r7e6v1} & $10^7$ & 0.5 & $\ln 10^6$ & $\ln 10^{1.5}$ & -- & {\tt r7e6r} & Figs. 5a and 6 \\
{\tt r7e6v2} & $10^7$ & 0.5 & $\ln 10^6$ & $\ln 10^{2.0}$ & -- & {\tt r7e6r} & Figs.
5b and 6 \\
{\tt r7e6v3} & $10^7$ & 0.5 & $\ln 10^6$ & $\ln 10^{2.5}$ & -- & {\tt r7e6r} & Figs. 5c and 6 \\
{\tt r7e6w1} & $10^7$ & 0.5 & $\ln 10^6$ & -- & $\ln 10^{1.5}$ & {\tt r7e6r} & Figs. 9a and 10 \\
{\tt r7e6w2} & $10^7$ & 0.5 & $\ln 10^6$ & -- & $\ln 10^{2.0}$ & {\tt r7e6r} & Figs. 9b and 10 \\
{\tt r7e6w3} & $10^7$ & 0.5 & $\ln 10^6$ & -- & $\ln 10^{2.5}$ & {\tt r7e6r} & Figs. 9c and 10 \\
{\tt r7e8w2} & $10^7$ & 0.5 & $\ln 10^8$ & -- & $\ln 10^{2.0}$ & {\tt r7e6w2} & -- \\
{\tt r7eAw2} & $10^7$ & 0.5 & $\ln 10^{10}$ & -- & $\ln 10^{2.0}$ & {\tt r7e6w2} & -- \\
{\tt r8e6w2} & $10^8$ & 0.5 & $\ln 10^6$ & -- & $\ln 10^{2.0}$ & {\tt r7e6w2} & -- \\
{\tt r7e6w2h}$^{*2}$ & $10^7$ & 0.5 & $\ln 10^6$ & -- & $\ln 10^{2.0}$ & {\tt r7e6w2} & -- \\
\hline
\end{tabular} \\
\tablenotetext{}{
(*1) ``I.C.'' indicates the initial condition.
(*2) Case {\tt r7e6w2h} is a case with internal heating (see text).
}
\end{table*}

\begin{table*}
\caption{List of parameters used in the calculation of the geoid anomaly}
\label{symbols}
\begin{tabular}{lll}
\hline
Parameter & Symbol & Value \\
\hline
outer radius & $r_1$ & $6.052 \times 10^6$ m \\
inner radius & $r_0$ & $0.55 r_1$ \\
thickness of the mantle & $D$ & $0.45 r_1$ \\
density & $\rho$ & $3.3 \times 10^3$ kg m$^{-3}$ \\
density contrast at the top surface & $\Delta \rho_{top}$ & $2.3 \times 10^3$ kg m$^{-3}$ \\
density contrast at the bottom surface & $\Delta \rho_{bot}$ & $4.3 \times 10^3$ kg m$^{-3}$ \\
gravity acceleration & $g$ & $8.9$ m s$^{-2}$ \\
thermal expansivity & $\alpha$ & $1.0 \times 10^{-5}$ K$^{-1}$ \\
temperature difference across the mantle & $\Delta T$ & $2.0 \times 10^{3}$ K \\
specific heat at constant pressure & $c_p$ & $1.2 \times 10^3$ J kg$^{-1}$ K$^{-1}$ \\
thermal diffusivity & $\kappa = k / \rho c_p$ & $8.1 \times 10^{-7}$ m$^2$ s$^{-1}$ \\
thermal conductivity & $k$ & $3.2$ W m$^{-1}$ K$^{-1}$ \\
reference viscosity & $\eta$ & $1.5 \times 10^{21}$ Pa s \\
gas constant & $R$ & $8.3145$ J
mol$^{-1}$ K$^{-1}$ \\
gravitational constant & $G$ & $6.6726 \times 10^{-11}$ N m$^2$ kg$^{-2}$ \\
\hline
\end{tabular} \\
\tablenotetext{}{
The values are taken from \textit{Schubert et al.} [1990], \textit{Solomatov and Moresi} [1996], and \textit{Turcotte and Schubert} [2002].
}
\end{table*}

\end{document}