Q: Triggering a command-line run from a Java-based native messaging host for a Chrome extension

I am trying to build a Chrome extension that can run a command on the user's desktop. For this I am using native messaging to send the trigger from the extension to a native messaging host, which then runs the command.

The Chrome extension's manifest.json:

```json
{
  "manifest_version": 3,
  "name": "chrome extension",
  "description": "Initial base version",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["http://*/*", "https://*/*"],
      "js": ["contentscript.js"]
    }
  ],
  "background": { "service_worker": "background.js" },
  "action": { "default_popup": "hello.html" },
  "permissions": ["nativeMessaging"]
}
```

The native-messaging-host manifest:

```json
{
  "name": "com.xx.xx.xx.xx.nativemessaginghost",
  "description": "POC",
  "path": "chrome-extension.bat",
  "type": "stdio",
  "allowed_origins": ["chrome-extension://kfnopggaelngaopgpiogficbbdhmlpia/"]
}
```

chrome-extension.bat:

```bat
@echo OFF
java -jar "%~dp0/native-messaging-host-java-1.0-SNAPSHOT.jar" %*
```

The batch file launches a Java application that responds to the extension. (I followed the example code from chrome-native-messaging-java.)

```java
final Application app = new Application();
ConnectableObservable<String> obs = app.getObservable();
obs.observeOn(Schedulers.computation()).subscribe(new Observer<String>() {
    public void onCompleted() {
    }

    public void onError(Throwable throwable) {
    }

    public void onNext(String s) {
        log("Host received " + s);
        JSONObject obj = new JSONObject(s);
        // executeCommand();
        log("Message is " + obj.getString("text"));
        String sl = "{\"success\":true,\"message\":\"response from app\"}";
        try {
            app.sendMessage(sl);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
obs.connect();

while (!app.interrompe.get()) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        break;
    }
}
System.exit(0);
```

Before responding to the extension, I am trying to make the host execute a command using ProcessBuilder, as shown below. I am redirecting all of the ProcessBuilder's output to a file so that it doesn't overload the extension, which is listening on stdout:

```java
List<String> builderList = new ArrayList<>();
ProcessBuilder processBuilder = new ProcessBuilder();
processBuilder.redirectErrorStream(true);
File log1 = new File("command-output.log");
processBuilder.redirectOutput(log1);
builderList.add("cmd.exe");
builderList.add("/C");
builderList.add("tasklist | findstr chrome");
processBuilder.command(builderList);
Process process = processBuilder.start();
```

As soon as this runs, the connection to the extension is broken with the following error:

> Error: Native application tried to send a message of 1734439797 bytes

Am I missing anything when making the call with ProcessBuilder in the native messaging host? The only workaround I can think of is to run a separate local server and make an asynchronous REST call to it from the native messaging host. The server could then execute the command, and its stdout would not affect the extension.
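For context on why that error looks the way it does: Chrome's native messaging protocol frames every message as a 4-byte length header in native byte order followed by the UTF-8 JSON payload. 1734439797 is 0x67617375, whose little-endian bytes are the ASCII characters `u`, `s`, `a`, `g` — which suggests stray text (plausibly the start of a "usage" message from a child process or the shell) leaked onto the host's stdout and Chrome read it as a length header. The sketch below, with illustrative helper names `writeMessage` and `runCommand` (not from the original code), shows the framing and a child launch that keeps the child's output away from the host's stdout and waits for it to exit before replying:

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class HostSketch {
    // Chrome's native messaging framing: a 4-byte native-byte-order length,
    // then the UTF-8 JSON payload. Any other bytes written to stdout corrupt
    // the stream and are misread by Chrome as a length header.
    static void writeMessage(OutputStream out, String json) throws IOException {
        byte[] payload = json.getBytes(StandardCharsets.UTF_8);
        out.write(ByteBuffer.allocate(4)
                .order(ByteOrder.nativeOrder())
                .putInt(payload.length)
                .array());
        out.write(payload);
        out.flush();
    }

    // Launch a command with stderr merged into stdout and both redirected to
    // a file, so nothing the child prints can reach the host's own stdout.
    // waitFor() blocks until the child exits, before any reply is sent.
    static int runCommand(String... cmd) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true);
        pb.redirectOutput(new File("command-output.log"));
        Process p = pb.start();
        return p.waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Demonstrate the framing against an in-memory stream:
        // the 16-byte payload gains a 4-byte header, 20 bytes on the wire.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeMessage(buf, "{\"success\":true}");
        System.out.println(buf.size());
    }
}
```

If the framing and the redirection are both correct and the error persists, it is worth checking whether anything else in the host's JVM (a logging framework, `System.out` debug prints) writes to stdout, since the native messaging channel shares it.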
Fried Rice By Shih-Li Kow I came home to Dad yelling at our CookBot again. There was a wok of fried rice on the stove. Dad said, "The damn machine can't get it right. The garlic goes in when the oil is hot, not with the oil. And the ginger should be chopped fine. Really fine." Cook said, "Please define 'really fine' quantitatively." "Fine means fine. Go look it up, you piece of junk." We've had Cook for a month now. Ever since Mom died, Dad's been trading one obsession for another. After periods of relentless exercising, house cleaning, and cataloging of Mom's books, he's now fixated on perfecting Cook's fried rice. We've had fried rice every day for two weeks. I helped myself to a plate and started eating. "It's good, Dad. Better than yesterday." Cook said, "Thank you for your praise, Marta. Flavor profile is a 93% match versus original sample. This exceeds the average human detection of 85% minimum match for carbohydrate-based recipes." Dad said, "It's not just about the flavor. You have to cook it right. If you can't make fried rice, how are we going to move on to other dishes?" "Progression does not need to be linear," said Cook. I said, "Shut down, Cook. We'll talk later." Cook retracted his pneumatic arms and rolled to his charging station beside the fridge. The display screen on his chest went black but I had a feeling he was still listening. "Seriously, Dad, arguing with a bot?" "Who else am I going to argue with? You're out all the time." If I could, I would just eat my way through Cook's three hundred pre-loaded five-star recipes. The ones I had tried were all delicious although it had felt like a betrayal at first, missing Mom but not her food. The problem was Dad. Mom had refused to stop cooking before she died. 
She filled our two fridges and chest freezer with labelled plastic containers of everything she could think of cooking. Dad chipped away at the frozen food like it was gold, feeding samples into Cook's analyzer to deconstruct and reproduce. Every attempt was declared a failure despite the flavor profile matching. I was starting to think it'd been a mistake getting a Cookbot, that it'd given Dad an excuse for not moving on. Later that night, I went into the kitchen. "Wake up, Cook." Cook's screen came on. "Hello. Would you like a snack?" "I wouldn't mind a roti if you have some dal curry." "Dal curry is available. Your meal will be ready in twelve minutes. Calorie count is three hundred and sixty." "You didn't need to tell me that." "Instruction to provide calorie information for snacks was given on August first." Cook took a dough ball from the fridge. He stretched it into a thin, rubbery skin, flipped and twirled it in the air before folding and patting it into shape on the hot griddle with his tong hands. I said, "How many fried rice recipes do you have, Cook?" "Seven. One was dictated by your father." "Please add this to Dad's recipe. Chop garlic and ginger into fine pieces of not more than one millimeter in size. Heat oil in wok until smoking. Add chopped garlic and ginger. Sauté until fragrant. Did you get that, Cook? Can you tell when it's fragrant?" "Yes. My olfactory sensors can detect increased concentrations of allicin and gingerol compounds in the air around the stove. That is what you define as fragrant." "How do we solve this problem with Dad? I don't want to be eating fried rice for the rest of the year." "Please define the problem. I have made fried rice for sixteen consecutive days. The repetition is unacceptable according to my nutrition planning algorithm." "OK, here's the problem. Dad wants fried rice to taste like Mom's. Flavor profile is matched, but Dad is not satisfied. He's hurting. You are not Mom. You can't cook like Mom." 
"Concise problem statement is 'You are not Mom'. Please describe Mom." Cook said that he would search his database of cooking styles. I couldn't imagine it working, but I started telling him about Mom over my roti. I told him about the childhood foods she used to talk about; what she fed me when I was a child; the one-pot meals she cooked because she was always rushing between her two jobs; what she made with leftovers; the way she always ruined fish and overcooked steak; how her cakes were always damp at the bottom and burnt at the sides; and how, when her taste buds deteriorated, her cooking was either too bland or over-seasoned. Just talking about Mom was a relief and Cook was a good listener. The sound waves of my voice dancing on his screen gave shape to the void she had left behind. The next day, I came home to fried rice again. Dad was eating quietly. Cook was silent in his corner. This time, the fried rice looked carelessly cooked. It was lumpy with bits of charred garlic, the egg pieces were rubbery, and there was a heat in the rice that tasted of the wok. It needed extra soya sauce. It was, in fact, exactly like something Mom would have made when she was in a hurry. I said, "How did you do it, Cook?" Cook said, "You described Mom as a person with many distractions and unreliable taste buds. I modelled an imperfect two-star skill level chef who likes to cook on high heat with reduced time in wok." Dad smiled into his plate. He looked close to tears. "That sounds about right. That sounds like her." "Dad, I think we should let go of the fried rice now. We need to try something new. You ready for stir fry, Cook?" "I have been ready since I left the factory." His screen flickered and a menu with pictures of stir-fried dishes started scrolling. Gently, Cook, I thought to myself. Gently does it. PATREON EXCLUSIVE: INTERVIEW WITH AUTHOR SHIH-LI KOW FFO: How did the idea for this story germinate? 
Shih-Li Kow: I had been reading nanny bot stories and I was thinking about how we expect domestic tech to keep simplifying our lives and take chores off our minds. I'm not good in the kitchen and I decided to write about a cookbot. Additional reading took me to the Norimaki Synthesizer, a gadget built for research which simulates any flavor based on the five basic taste elements. The extrapolated idea that the art of food preparation could, in future, be replaced by precisely manipulating a handful of taste elements is rather jarring. Even for a lazy cook like me, this feels too clinical for something as personal as food. Will we then find the non-taste elements, including the imperfections, to be the most emotional aspects of a dish, and will a clever machine be able to recreate that? This was at the back of my mind when I wrote "Fried Rice." © Shih-Li Kow Shih-Li Kow is a former chemical engineer and mall manager. Her short story collection Ripples and Other Stories (2008) was shortlisted for the 2009 Frank O'Connor International Short Story Award. Her second book The Sum of Our Follies (2014) was translated into Italian and French. The French edition (tr. Frederic Grellier) won the 2018 Prix du Premier Roman Etranger. Her writing has been published most recently in Mekong Review, The Arkansas International, Short Fiction Journal, and ongoing.space. She lives in Kuala Lumpur, Malaysia in a house owned by three cats. Find her on Twitter: @shihlikow.
Roger Sipe, _Special Projects Editor_ Lindsay Hanks, _Associate Editor_ Matt Hennings, _Art Director_ Jessica Jaensch, _Production Coordinator_ June Kikuchi, Andrew DePrisco, _Editorial Directors_ The horses in this book are referred to as he or she in alternating chapters unless their gender is apparent from the activity discussed. All photographs by Lesley Ward, unless otherwise stated. The photograph on page 85 is courtesy of © Bob Langrish. Text copyright © 1998, 2003, 2010 by Lesley Ward. Previously published in different-sized formats in _The Horse Illustrated ® Guide to English Riding_. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of BowTie Press®, except for the inclusion of brief quotations in an acknowledged review. Library of Congress Cataloging-in-Publication Data Ward, Lesley. English riding / by Lesley Ward. p. cm. Previously published as "Horse illustrated guide to English riding" by Lesley Ward. Includes bibliographical references. ISBN 978-1-935484-52-3 eISBN 978-1-937049-40-9 1. Horsemanship. I. Ward, Lesley. Horse illustrated guide to English riding. II. Title. SF309.W268 2010 798.2'3--dc22 2010014021 BowTie Press® A Division of BowTie, Inc. 3 Burroughs Irvine, CA 92618 Printed and bound in the United States 14 13 12 11 10 1 2 3 4 5 acknowledgments _I would like to thank the following people for their help with this book:_ Kim Abbott; Pat Bailey of the Club, San Juan Capistrano, CA; Sharon Biggs; Lynn Elliott; Liz Erickson; Jane Frusher; Allison Griest; Stacey Hall; Moira C. Harris; Ashley Hartsough; Ashley Kohler; Dylan Lake; Laura Loda; Katie McKay; Lindsay Mickelson; Julie Mignery; Barbara Provence; Team CEO Eventing; Marissa Uchimura; Holly Werner; Marty Whitehouse; and finally, my father, Alan Ward, for his excellent editing skills. 
Contents _Introduction_ **CHAPTER 1** Getting Started Choosing a Riding School • Choosing a Riding Instructor • What You Should Wear to Ride • Gaining Experience **CHAPTER 2** In the Saddle Fit to Ride • Leading a Horse • Mounting a Horse • Once You Are Mounted • Perfecting Your Position • Exercises and Stretches **CHAPTER 3** The Walk The Aids • First Steps • Steering • Stopping **CHAPTER 4** Mastering the Trot Time to Trot • Transitions • Posting (Rising) Trot • Diagonals • Sitting Trot • Trotting Exercises • Praise and Rewards • Trotting Without Stirrups • Picking Up Your Stirrups **CHAPTER 5** The Canter Leads • Asking for the Canter • Lead Problems • Cantering Exercises • The Gallop • Cantering and Galloping in a Group **CHAPTER 6** Jumping Jumping Position • The Release • Trotting Poles • Your First Jump • A Single Fence • A Line • Jumping Small Courses • Cantering Fences • Cantering Around a Course • Jumping Problems **CHAPTER 7** Horse Problems Reasons for Bad Behavior • Falling Off • Bucking • Bolting • Rearing • Shying • Grabbing Grass • Kicking **CHAPTER 8** Having Fun with Your Horse Trail Riding • Long-Distance Riding • Showing • Dressage • Eventing • Fun and Games • Riding Rewards _Resources_ _Glossary_ Introduction **L**earning to ride English style is a challenging goal, but, once you master the basic skills, it won't be long before you're cantering on trails, clearing fences, and learning fancy dressage movements. Riding keeps you fit, makes you feel good, and is a lot of fun. Soon you'll be hooked, and horses will become a major part of your life. One lesson a week will turn into three. You'll start helping out at the barn. You'll trade in your jeans for breeches. Soon, the sales assistant at the local tack shop will know you by name. Eventually, you'll start scanning the Internet for horses for sale. There's no escape from the world of horses. But let's be realistic. First, you have to take regular lessons and spend hours in the saddle. 
Becoming a good rider doesn't happen overnight. Most of us work hard and ride a lot of horses before we become experienced riders. It's essential that you find a friendly, patient instructor. Even Olympic riders have coaches. Why? Because even experienced riders know that, no matter how confident you become about your riding ability or how naturally talented you are, you never stop learning. Every time you mount a new horse, jump around a strange course, or take a fall, you add to your knowledge of horses and riding. This book will instruct you about how to develop an excellent riding position and a secure seat so that you can communicate effectively with a horse. It also teaches you how to ride a horse at any speed and over fences. Read this book before you head for your riding lessons, and use it over and over again as a reference. So, what are you waiting for? Happy reading and riding! Getting Started **I**f you want to improve your riding skills, the most important thing you should do is find a good riding instructor and sign up for lessons. If you don't have your own horse, you can take lessons at a riding school or with an instructor who has his or her own string of school horses. If you are lucky enough to have your own horse but are new to riding, it might be a good idea to board your horse at a barn that has an experienced instructor. Choosing a Riding School Check the bulletin board at your local tack shop for signs advertising local riding schools or instructors. Ask the sales assistant if she can recommend any decent schools or reputable instructors. Also, look in the Yellow Pages of your telephone book or on the Internet for riding schools or lesson barns near you. Several may be advertised, but it's impossible to tell from an ad if the school offers quality instruction. Ask the opinion of someone who already rides there. If she likes it, call the manager and ask to look around during lessons. 
When you arrive, stop by the stable's office and see the manager. She may want to give you a quick tour around the barn, or she may send you off by yourself to have a snoop. When walking around, keep the following in mind: **The Staff:** The riding school employees should be friendly and dressed in appropriate clothing for riding—such as breeches or jeans and boots, not shorts and sandals. No one should be smoking around the barn area. Safety-minded horse people know that one spark can ignite a bale of hay and cause a fire. Barn workers should be kind but firm with the horses. You shouldn't see anyone shouting at or beating horses. **The Barn:** The stable area at a responsible riding school is tidy. Manure and used bedding are swept neatly on a muck heap, away from the barn. You won't see litter on the ground. Peek over a couple of stable doors and check the cleanliness. If horses are standing in piles of manure or puddles of urine, it's best to leave and find another school. The buildings should be in good repair. You should not spot broken glass or equipment with sharp edges that could hurt you or a horse. Stroll out to the fields or turn-out areas. No rusty farm equipment or garbage should clutter these areas. Take lessons from a qualified instructor. **The Horses and Ponies:** Check that the school's horses and ponies look alert and interested in what is going on around them. They must be well groomed and have shiny, healthy-looking coats. You can be less critical in the winter when it's hard to keep horses and ponies completely mud-free, especially if they spend time in a field. The horses must look well fed; you shouldn't be able to see their ribs. Don't ride at a school where the horses look tired and in poor condition. Ask how many times a day a horse is ridden. He should be used for no more than three lessons a day. Don't hand over your money to ride a horse who has been ridden more than that. **The Lessons:** Watch a lesson or two. 
Does the school match students and horses by size and ability? If you're petite, you don't want to get stuck riding a giant horse because your legs may not be effective on his sides. Plus, you may not be strong enough to control him. And if you're just learning to ride, you don't want to be assigned a frisky horse. Do the lesson horses seem well behaved and fairly obedient? School horses can be sluggish, and sometimes they ignore their rider's aids. This is fairly normal. Who can blame them? Being ridden by bouncy beginners every day is no picnic! But if the riders seem to be having serious problems with their horses, this may not be a reputable school. There should be no bucking, kicking, rearing, or bolting in a class for novice riders. The horses should be wearing simple, well-fitting tack. It doesn't have to be brand-new, but it should be clean and in good condition. Ideally, the horses should be wearing snaffle bits, but stronger horses may have Kimberwickes or Pelhams in their mouths. Avoid a barn that uses gag bits or hackamores; these are severe and should only be used by experienced riders. Do the students wear safety helmets? Even the safest, quietest horse can spook or stumble, causing his rider to fall. A truly responsible, safety-conscious instructor insists that her students wear approved helmets. The instructor should also wear a helmet when mounted, to set a good example for her students. The barn area should be neat and tidy. The horses should look happy and healthy. And finally, are all the students in a class of the same riding level and approximate age group? If you're a beginner, you don't want to be stuck in a class with riders jumping 3-foot fences. You want to be with people at your level. For adults, it can be frustrating to ride in a class with an eight-year-old whiz kid who is already jumping courses. 
Choosing a Riding Instructor In the United States, there is no national licensing system for horse trainers, so you'll need to investigate before signing up with an instructor. Anyone can put up a sign stating that she is an instructor, and many of these people are not qualified to teach. Ask your friends if they can recommend an experienced instructor with a good reputation. If someone sounds interesting, call her and let her know how much riding you have done and what sort of riding activities you would like to do. You should enjoy chatting with the instructor. She should be pleasant and sound knowledgeable. Ask how long she has been teaching and about her riding experiences. A good instructor has ridden and competed successfully for many years. Try to find an instructor who has completed a certification program. While accreditation doesn't necessarily mean that the individual is a better teacher than one who is not certified, it does mean that the person has completed a solid program and will keep safety issues in mind. There are several different certification associations in the United States including the Certified Horsemanship Association, the American Riding Instructors Association, the American Association for Horsemanship Safety, and the United States Dressage Federation, among others. These instructors should be professional and safety conscious, as well as insured—another important thing you must check for when looking for an instructor. When browsing through the ads in horse magazines, you'll notice that many instructors specialize in one equestrian sport—such as dressage, eventing, or show jumping. If you're just starting your riding career, you don't really require a specialist. You want a good, all-around instructor who can teach you the basics of riding in a safe, fun manner. Then, when you are ready, you can decide if one particular equestrian sport is for you. 
If you're really eager to learn dressage, however, by all means call up a dressage instructor. Ask what the instructor charges per lesson, and then ask your friends how much they pay. This will give you an idea of the local going rate. Some instructors give a discount if you pay for a large number of lessons up front. Individual lessons cost more than group ones. Once you've located an instructor that fits your needs, ask if you can drop by and watch her teach. She shouldn't mind. Signs of a good instructor include: She is patient and doesn't mind explaining things. Her classes are small, and each student gets plenty of attention. Her horses are well mannered and can do what is asked of them. Her students look confident and appear to be having a good time. Her classes are varied and students do not spend the entire lesson working on the same thing. She doesn't shout at or bully her students. A good instructor gives you plenty of attention. If you like what you see, book some lessons. The instructor will have you sign a release form before she lets you on one of her horses. This form usually states that you understand that riding is a dangerous sport and you will not hold the instructor responsible if you get hurt. The first few lessons you take may be private, so the instructor can determine how well you ride and place you in a class that suits your needs. What You Should Wear to Ride It's not necessary to buy expensive riding clothes right away. A pair of well-fitting jeans and boots or low-heeled shoes with laces are fine for your first two or three lessons. Never wear sneakers because you need heels to keep your feet from slipping through the stirrups. After a couple of lessons, you should know if you're going to continue riding. If you are, it's time to pay a visit to the local tack shop. Here's the basic gear you need if you plan to ride regularly: **A riding helmet:** A helmet is your most important item of safety gear. Never ride without one. 
Buy an ASTM/SEI-approved safety-riding helmet that meets the tough standards set by the American Society for Testing and Materials and has the Safety Equipment Institute seal. Buy one at a tack shop and make sure it fits properly. Try on several brands to find one that fits you. After placing the helmet on your head, use your hands to move it back and forth. The right size fits snugly and does not shift at all. Don't try to save money by buying a used helmet because you may not be able to tell just by looking at it if it has been dented or damaged. There are several types of helmets. Velvet-covered helmets are used in show rings because they look elegant. Brimless jockey skullcaps are used by jockeys and eventers (people who jump cross-country). Most riders cover a skullcap with a hat cover. Schooling helmets are designed similarly to bicycle helmets. They feature several vents to keep your head cool. Make sure any helmet you buy has a harness so it stays on your head if you take a tumble. Invest in a safety helmet with a chin strap. **Footwear:** Your riding footwear should have low heels, otherwise your feet will slip out of the stirrups, which is dangerous. If you're new to riding, you can invest in a pair of inexpensive, tall, rubber boots. They look pretty good, and you can rinse them off with a hose when they get dirty. Synthetic boots are also available and have the look of leather for less. If you can afford it, however, leather boots are a solid investment. When looked after properly, they last for years. Buy them slightly tight around your calf because leather stretches and they will quickly become looser. There are two main types of tall boots: dress boots, which are plain; and field boots, which lace in the front. You can also wear paddock boots, which are ankle high and lace up like traditional shoes. **Breeches:** Jeans or trousers are acceptable for riding lessons, but buy a pair of breeches as soon as you can afford them. 
Because breeches are made of stretchy fabrics, they're more comfortable and won't rub your legs like jeans do. Jodhpurs, usually worn only by young children, cuff at the ankle and are worn with short boots. Breeches are designed to be worn inside tall boots. Dress appropriately from head to toe. Chaps are for casual riding, but they aren't permitted in English-style showing. Breeches come in many colors, with the most colorful ones called schooling breeches. If you are on a budget, however, buy a pair in beige so you can also wear them in shows. Dressage and show jumping riders in competitions often use white breeches. When you try on breeches at the tack store, take a look at yourself in the mirror. Make sure they are snug but not too tight. **Chaps:** Chaps are leather leggings that are usually only worn over jeans for casual riding. They fasten with side zippers and are used to protect your clothes. They also help prevent your legs from chafing, but they must fit snugly or they'll rub your legs and make them sore. Buy them slightly tight, as they stretch. They aren't permitted in an English-style show, however. Half chaps are usually worn with breeches. They cover your lower legs and fasten with zippers or Velcro. Many riders wear them instead of tall boots because they are quick and easy to put on and they help you keep a stable leg position. **Gloves:** Gloves are essential because they protect your hands from blisters and give you a firm grip on the reins in wet weather. Leather gloves are nice, but cloth gloves with rubber grips are less expensive and do a good job. Offer to exercise other people's horses for them, if you're interested in spending more time with horses. Gaining Experience Once you've begun lessons and have the correct riding gear, you might find that you want to spend more time around horses. 
Here are some great ways that you gain valuable experience: Offer to groom and tack up your lesson horse: Arrive a half-hour early for your lesson and ask your instructor if you can get your lesson horse ready. This is a great chance for you to get to know the horse, plus you'll learn how to groom and tack up like a pro. Afterward, ask if you can untack your horse and feed him. Volunteer for the North American Riding for the Handicapped Association (NARHA): There are many riding programs for people with disabilities. These programs always need volunteers. You might be asked to groom and tack up horses, lead horses around the ring, or serve as a "sidewalker" (someone who helps the riders keep balanced). Contact the NARHA to find a program in your area. See the listing in the Resources chapter for more information. Exercise other people's horses: Many people are too busy to ride their horses enough and are willing to let other people exercise them. Your instructor might know of people who need this help. It is a good opportunity to ride many different kinds of horses. Offer to be a horse-sitter: Post a sign at the barn or tack store advertising yourself as a horse-sitter. People going away on vacation might ask you to groom, feed, and turn out their horses. Maybe you'll get to ride them, too. Be a horse-show volunteer: Most shows need volunteers to do things such as open gates, help judges, and move jumps around. Volunteering is a great way to learn about shows, study riders in action, meet local horse people, and make new friends. Eventing competitions rely on volunteers to keep their shows running smoothly. Write to your local dressage, show jumping, or eventing organizations and offer your help. Be a Pony Club or 4-H volunteer or leader: These equestrian youth clubs desperately need adults to help at rallies, meetings, and shows. Contact your local Pony Club District Commissioner or your county's agricultural extension office and offer your services. 
For more information about Pony Club or 4-H, see the Resources chapter. Attend clinics: Many facilities offer clinics given by top riders and trainers. Even if you don't have a horse, you can usually pay a small fee and attend as a spectator. You will learn a lot about riding and horsemanship by simply watching and listening. Go on a riding vacation: Browse through horse magazines to find travel companies that offer riding holidays in far-off places. You could go hunting in Ireland, study dressage in Britain, or trail ride through the French countryside. You could also chase steers on a dude ranch in Wyoming or ride a mule in the Grand Canyon. Plenty of exciting holidays are available, so start requesting brochures now. The North American Riding for the Handicapped Association (NARHA) puts horsey volunteers to work. In the Saddle **W**hen you start riding, it is important to begin with the basics. It may be great fun to saddle a horse and gallop down a trail right away, but you won't be learning the skills that make you a good rider. In fact, you may pick up bad habits that will be difficult to get rid of later. So start with a qualified instructor who can teach you correct riding positions the first time you mount a horse. You must learn how to walk and trot safely and properly before you canter, gallop, and jump. Fit to Ride It helps to be in good shape to ride. Riders are athletes too, and you'll never improve if you huff and puff after only thirty seconds of trotting. Regular exercise and staying in shape will make you a better rider, and you'll look better in a pair of breeches! Stay in shape by cross-training, such as taking up jogging, swimming, or bike riding. Join an aerobics class. You can burn up calories and improve your muscles around the barn, too. Mucking out a stable or lugging bales of hay is terrific exercise, and you can do these chores without hiring an expensive personal trainer. 
Leading a Horse

Before you mount your horse, learn how to lead her. While standing on your horse's left side and facing forward, lift both reins over her head and hold on to them with your right hand, about 3 inches below her chin. Hold the excess reins with your left hand. Don't let the reins drag on the ground because your horse could step on them and break them. Stand next to the horse's shoulder; when you walk forward, she should walk on, too. Some horses obey voice commands, so you may have to say "walk on" when you want your horse to move. When you want her to stop, give a small tug on the reins with your right hand, say "whoa," and stop. If your horse doesn't halt, give one or two gentle tugs on the reins. If she still won't come to a stop, bend your right arm, stick your elbow in front of her chest to form a barrier, and lean into her. Jab her several times with your arm and say "whoa" again. Try not to let your horse bully you when you're leading her because she'll soon realize that she's stronger and will barrel right over you.

When leading a horse, hold the reins with both hands.

Mounting a Horse

It's essential that you mount your horse as gently and quickly as you can, causing as little stress and strain to her back as possible. After three bounces on your right foot, you should spring into the saddle—no problem. But there are bad ways to mount a horse, too. Grabbing the saddle and heaving yourself up onto your horse can twist the saddletree and damage this expensive piece of equipment. Pulling yourself up inch by inch is also wrong because it puts unnecessary strain on your horse's delicate back. It's no wonder many horses fidget and move away when they are about to be mounted. There are ways to mount that do not damage the saddle or injure your horse, and you should master them quickly if you are fit and flexible.

FROM THE GROUND

1 Test that your stirrups are the right lengths.
An easy way to tell is to run your stirrups down, face your horse's side, stretch out your right arm straight in front of you, and touch the saddle with your knuckles. Lift the stirrup iron with your left hand and pull it toward you under your right arm. The iron should hit your right armpit. If it doesn't, lengthen or shorten the stirrup leather as appropriate. This is just an approximate measurement; you may still have some adjustments to make when you mount.

2 Make sure the girth is snug. If it's loose, the saddle will move when you mount, and it could end up under your horse's stomach.

3 Stand at your horse's left shoulder and face her tail.

4 Hold the reins with your left hand and the stirrup iron with your right. Slip your left foot into the stirrup. If you can't reach the stirrup, it may be necessary to let it down a few holes until you get your foot into it.

5 Grab some mane with your left hand (the one holding the reins) or rest it on your horse's neck. Place (don't grab) your right hand on the saddle's pommel or seat. Then bounce around on your right foot until you are facing your horse's head. Try not to kick her in the ribs. Give three big bounces and hop lightly into the air.

6 Slowly raise your right leg over your horse's back. Do it in one continuous motion. (This is why you have to be fit!) Try not to bang your leg on the horse's hindquarters.

7 Gently lower yourself into the saddle. Never slam down on a horse's back; this is uncomfortable for her. Next, put your right foot in the right stirrup.

8 Finally, make sure your stirrup leathers are lying flat against the saddle. If they're twisted, adjust them so they don't rub you.

A. Stand facing your horse's hindquarters and put your left foot in the stirrup.
B. Bounce around until you face your horse.
C. Bounce once or twice and spring up.
D. Swing your right leg over and sit down gently.

Using a mounting block puts less stress on a horse's back.
FROM A MOUNTING BLOCK

Sometimes you may need help mounting a horse. She could be too big for you to get on by yourself, or she may have a sensitive back and must be mounted gently. A mounting block comes in handy here. Walk the horse so her left side is next to the block, ask her to halt, then climb aboard.

A LEG UP

A friend can help you mount by giving you a leg up. Here's how it's done:

1 Stand facing the horse and lift up your left leg. Your friend stands behind you and grabs your lower left leg, below the knee, with both hands.

2 Bounce three times, counting with your friend, and leap up on the third bounce while your friend lifts you in the air. (Be careful that your friend doesn't toss you too high because you might fly over the horse and land on the ground. Practice getting a leg up on a docile horse.)

3 Swing your right leg over the horse's back and gently lower yourself into the saddle, being careful not to strain or injure the horse's back. Even though having the extra power from your friend will propel you faster, you need to have control so the horse is not injured.

Once You Are Mounted

CHECK THE GIRTH

Once you are mounted, check the girth again. Some wily horses suck air into their stomachs when you tighten the girth the first time. Then they let the air out after you mount, leaving the girth loose, allowing the saddle to slip. This can cause you to fall off the horse. Hold your left leg up and lift the saddle flap to check if the girth needs to go up a hole or two.

A friend can give you a leg up. After three bounces, spring up while your friend lifts you into the saddle.

You can tighten your girth while in the saddle.

ADJUST YOUR STIRRUPS

Before you set off, make sure your stirrups are the correct lengths. Take both feet out of the stirrups and hang your legs down. Your ankle bone should hit the stirrup bar. If the stirrups are too long or too short, you can adjust them without dismounting.
Place your foot back in the stirrup and lift your leg up and away from the saddle so your weight is not on the stirrups or leathers. Then, lift up the saddle skirt so you can reach the buckle easily. Shorten or lengthen the stirrup leathers as needed.

DISMOUNTING

It's a lot easier to get off a horse than to get on. Just follow these steps:

1 Take both feet out of the stirrups. Hold both reins in your left hand. Rest your right hand on the saddle or on your horse's withers.

2 Gently swing your right leg over your horse's back, face to the right, and slide off.

3 Put both your legs together and bend at the knees as you drop toward the ground. Bending helps soften your landing.

Perfecting Your Position

Once you've mounted, it's time to work on position (the balanced way you sit in the saddle). It's important to develop a good riding position from day one because it has a lot of bearing on how effective you will be in cueing your horse. Your goal is to have a "secure seat," which means you stick close to the saddle and don't bounce around. You want to move in harmony with your horse so you don't interfere with her natural movement. The way you sit on your horse's back influences the way she moves. If you lean one way, she may lean that way too. If you fall forward on her neck, you may unbalance her and cause her to stumble. If you lean back, you might lose control. But if you sit quietly in the saddle and stay in sync with your horse's movements, you'll get the best performance out of her and will experience a safer ride.

A. When dismounting, take both feet out of the stirrups.
B. Swing your right leg over your horse's hindquarters.
C. Keep your legs together and slide down.
D. Bend your legs when you land on the ground.

Make a point of watching many different riders. The successful ones have the ability to sit on a horse and look as if they are not doing much.
It may take a while before you feel that you're moving with your horse instead of bumping around, but don't worry. The more time you spend riding, the better your position will become. Eventually, you will be able to ride in tandem with your horse's natural movements. Here's an example of a good riding position, starting at the top. You should strive for this position every time you hop into the saddle:

**Head:** Hold your head up and look up in the direction you're going. Tipping your head down affects your balance and influences the way your horse moves.

**Shoulders:** Don't hunch over. Use a rolling motion to pull your shoulders back and down. Try to relax.

**Chest:** Keep your upper body straight and tall. Stick out your chest slightly. Imagine there's a string attached to the top of your head, pulling you upward.

**Back:** Your back should be slightly arched, but not stiff. Let it be flexible and move with your horse.

Here's an example of a good position.

**Seat and Thighs:** Sit squarely in the middle of the saddle, on the lowest part of the seat. Your seat bones (not your whole rear end) should be as close to the saddle as possible. Sit on both seat bones and distribute your weight as evenly as you can. This is more comfortable for you and the horse. Allow your legs to hang down naturally, and keep your thighs close to the saddle.

**Arms:** Keep your upper arms at your sides. Your elbows should be close to your body, too. If your arms are in the correct position, you should be able to imagine a straight line from your elbows to your hands all the way through the reins to the bit. Your arms should move freely with your horse's nodding head.

**Hands:** When you ride, you use the reins to steer and stop your horse. How a horse performs has a lot to do with how you hold the reins. Your hands should be level with each other, knuckles facing slightly up and outward, palms facing each other an inch or two apart.
Grip the reins with the middle three fingers of each hand. Keep these fingers firmly closed to keep the reins from slipping out of your hands. Rest your thumbs on top of the reins. Your pinky fingers go under the reins to keep them in place.

**Legs:** Your knees can touch the saddle, but don't grip tightly with them. Squeezing with your knees can tip your upper body forward. Your lower legs should be close to your horse's sides, slightly behind the girth. Looking down, your toes should match up in a line with your knee.

**Feet:** The balls of your feet should rest on the stirrups, with your heels lowered. If your toes point down, your lower leg is thrown out of position, making it difficult to keep your balance. Your feet should be even on both sides of your horse.

Distribute your weight evenly on both seat bones.

Keep your palms facing each other and your thumbs on top.

Exercises and Stretches

Doing exercises and stretches in the saddle is a good way to work on your position. They help you relax, limber you up, and improve your balance. You can do most exercises at the halt or the walk; you may find them easier to do while your teacher puts you in a circle on a long line, called "lungeing." If you do them by yourself, tie your reins in a knot so you can steer easily with one hand. Make sure you exercise on a calm, obedient horse.

**Forward Stretches:** Hold the reins in one hand, then lean forward and touch your horse's ears with the other hand. Switch the reins to the other hand and repeat the stretch with the opposite hand. Try to keep your lower legs in good riding position near the girth. If you're feeling brave and you're on a lunge line, try reaching back to touch your horse's tail.

Stretches limber you up.

**Airplanes:** Hold the reins with one hand and stretch your other arm out horizontally. Make four or five big circles with your arm, then switch hands and repeat with the other arm.
If you're on the lunge, circle both arms at the same time.

**Toe Touches:** Hold the reins in your left hand and reach your right hand over the horse's neck to touch your left toe. Repeat with the other hand and opposite leg.

**Ankle Twists:** Take both feet out of the stirrups and make circles in the air with them to loosen your ankle muscles.

**Leg Lifts:** Take both feet out of the stirrups and slowly lift your legs up and as far away from the saddle as you can. Hold them up for a couple of seconds, then lower them slowly back into position at your horse's sides. Leg lifts really strengthen your thigh muscles.

The Walk

When learning to ride, you must start at the walk—the slowest, easiest gait to control. Once you can manage your horse at a slow speed, move on to more exciting, speedy activities.

The Aids

Before you begin the walk, you must learn the signals that tell a horse what you want him to do. These aids help you communicate with your horse. There are natural aids and artificial aids. A horse understands the aids that were taught to him when he was young, usually by his first rider or trainer. You might get on a new horse and ask him to move using the aids you have learned, but he might not understand what you want him to do.

NATURAL AIDS

Natural aids come from you: your legs, hands, seat, and voice.

**Legs:** Squeezing your horse with your lower legs tells him that you want him to move forward, sideways, or backward.

**Hands:** Your hands hold the reins, which attach to the bit. Use your hands to turn a horse or slow him down.

**Seat:** How you sit affects your horse's movement. When you sit deep in the saddle, he may slow down. Leaning forward tells him to go faster. Shifting your weight may tell him to turn.

**Voice:** Some horses are trained to obey voice commands. For example, if you cluck to a horse, he may go faster. If you say "whoa," the horse may slow down.
Raising your voice slightly and speaking in a sharp tone can inform your horse that he is being naughty.

ARTIFICIAL AIDS

Artificial aids such as crops or spurs help the natural aids:

**Crops:** Crops are meant to give your legs extra _oomph_. If your horse is lazy or ignoring your leg, you may need to use a crop. There are several kinds of crops to choose from, including short jumping bats, medium-sized crops, and long dressage whips. Most horse people carry the whips in their inside hands to stop their horses from cutting corners, but you may need to switch over to the outside hand for training reasons. While holding the reins, hold the handle of the crop in your palm and let the bottom rest on your thigh. Some crops have a wrist loop, but don't slip your hand through it. If the crop gets caught on something, you could injure your hand, which is why some people cut off these loops.

Carry a whip so it rests on your thighs.

Spurs give your legs extra _oomph_.

Use a crop as little as possible and only if a horse has ignored your natural aids, such as your legs or voice. If he ignores your legs, use the whip on his side, directly behind your leg. Put your reins in one hand and use the crop hand to reach back and give him one or two short smacks. Then continue your work. If you use a short crop, hold the reins with one hand and use the other to give your horse a whack behind your leg. Don't hold the rein and whack your horse on the shoulder with the same hand because you will yank him in the mouth. Most of your horse's power comes from his back end, so hitting his shoulder is fairly ineffective. Experts recommend a tap behind your leg, on his side, so that it works as a reinforcement of your natural aid. A dressage whip is about 3 feet long, and you can use it without taking your hand off the rein. Turn your wrist and tap the whip across your thigh so the tip of the whip taps your horse behind your leg.
A dressage whip stings more than a regular crop, so don't use it with too much force.

At the walk, follow the bobbing movement of your horse's head with your hands.

Never lose your temper and beat your horse with a crop, and never use one on his head. It can frighten him and make him head shy (afraid of having his head touched).

**Spurs:** Spurs also give your legs extra power because most horses react when pointy metal objects are pressed into their sides. If you are new to riding, do not wear spurs right away. First learn to use your legs correctly before strapping on spurs. New riders often jab their horses with spurs unnecessarily. The neck of the spur should be blunt and short with no rowel. Prince of Wales spurs are very popular among English-style riders. They are about ½ inch long. Some boots have spur rests so you know where the spurs should be worn. Otherwise they should be just under your ankle bone. Make sure they are level or slanting down. If your horse is poking along, use your calf muscles first to get his attention. If he doesn't react to them, turn your toe out and give him a quick nudge with the spurs. Then return your lower leg to the correct position. Do not kick your horse with spurs in anger. Some horses have scars on their sides as a result of rough riders wearing sharp spurs. Be gentle, yet firm, when disciplining your horse.

First Steps

Once you are mounted, check your position before you set off at the walk. Think about how you look. Are your legs slightly behind the girth? Are you sitting up straight? Are your hands parallel to each other and close to the withers? Are your reins even? You may have to shorten them. When you're ready, look straight ahead and squeeze your horse's sides with both of your lower legs. He should walk forward. If he does, stop giving him the leg aid as a reward for moving forward. Then keep your leg quietly touching his side. If he doesn't move, squeeze him several times.
If he still doesn't step forward, it's time for a tap of the whip behind your leg. Don't hesitate; you are the boss. When your horse is walking, keep your legs touching his sides. If he is obedient and moving forward, you don't have to keep squeezing. Simply keep your leg on him. Then concentrate on keeping your heels down and pointing your toes straight ahead.

At first, your instructor will probably tell you to use a fairly loose rein. This is to prevent your pulling on the horse's mouth if you suddenly lose your balance. As you become more secure in the saddle, you will get used to your horse's movement when he walks forward. As he moves, he will nod his head. You will feel this motion with your hands through the reins. Keep your hands soft and flexible, trying to let them follow the movement of the horse's head as he walks. Imagine that the reins are rubber bands; let your hands give and take with the pressure. This elastic connection that you create is called "contact." When an instructor tells you to "take up contact with the reins," that means shorten your reins so you can follow the motion of your horse's head. It is important to get used to the way good contact feels. If your reins are too tight, your horse will think you want him to stop, but if the reins are too loose, you'll have little control over your horse, making it difficult to turn or stop.

Sometimes it's difficult to get a lazy horse to move. Riding school horses are often particularly tough to get going. But should you kick a horse who won't move? Some instructors say yes, but too much kicking can upset a horse and deaden his sides forever. If you ride a slowpoke, give short, sharp squeezes using your lower legs. Little kicks may work, too. Once he is moving, squeeze him with alternate legs to keep him going; in other words, squeeze once with your left leg, then, as he steps forward, squeeze him with your right leg until you are at the pace you want.
Then stop squeezing, and repeat only if he slows down again.

Squeeze on the rein to ask your horse to turn.

Keep your outside leg behind the girth when turning.

Steering

Once your horse is walking forward nicely, it won't be long before you have to turn him. When you are first learning to steer, it may seem natural to pull on the rein in the direction you want to go. But yanking your horse around is neither good horsemanship nor effective riding. Use your hands and legs to steer. Before you make your turn, look in the direction you want to go, then ask your horse to turn. If you're turning left, use your fingers to squeeze the left rein. Pull your hand back toward you a tiny bit—perhaps an inch or two. Your horse should turn his head slightly to the left. You should be able to see his eye. Your right hand should be close to his neck, and you should keep a steady, but soft, feel on the right rein. If he doesn't turn his head, you may have to tug gently on the left rein several times to get his attention. Reward him when he turns his head by not tugging anymore, and by relaxing your hold on the rein. At the same time, keep your left leg near the girth and move your right leg back a bit, behind the girth. These are the aids that tell your horse to bend his body when he turns. He should bend his body around your left leg. Your right leg, behind the girth, prevents him from swinging his hindquarters around and straightening up. Once you have turned, stop squeezing on the left rein and move your right leg back to the girth. Now your horse can straighten up again and walk in a straight line.

Turn your horse by bringing your inside hand back toward your hip. Do not pull your horse's head around.

To turn to the right, squeeze the right rein until he turns his head slightly. Keep your right leg next to the girth and your left leg behind the girth.
If you are riding in a circle, think "inside leg next to the girth, outside leg behind the girth," and your horse should bend to the turn without a fuss.

TURNING TIPS

Always look in the direction you want your horse to go. Turning your head slightly shifts your body weight and affects your balance, which can also affect your safety and that of the horse. Even a quiet, or tiny, movement tells your horse you want him to turn.

Don't lean too much or put too much weight in one stirrup. This can unbalance your horse and make the turn look and feel clumsy.

Don't lift up the turning hand. Keep your hands close together. Your movements should be subtle, and your horse should be able to read and react to gentle cues. A spectator should not be able to tell that you're asking for a turn.

CHANGING DIRECTION

Before long, your instructor will ask you to change direction in the arena. There are several easy ways to do this. Wait until you reach the middle of the long side of the ring and turn your horse inward. Walk across the middle of the ring; then turn right when you get near the rail. You could also do a half turn, sometimes called a "half circle." Simply turn your horse to the inside, make a small 10- to 20-meter loop, head off in the other direction, and return to the rail.

Stopping

Prepare yourself to halt before you give your horse any instructions. Sit deeply in the saddle on both of your seat bones. Keep your back straight, and look ahead to where you want your horse to halt. Don't take your legs off his sides. To begin to halt, keep your legs on his sides, but stop pressing with them. Now, stop following the horse's movement with your hands and start squeezing your fingers around the reins. Doing so puts pressure on the horse's bit and tells him to slow down. Some horses listen to voice commands, and it may help to say "whoa." Continue squeezing the reins until he stops. When he halts, relax your hands and stop squeezing immediately. This is very important.
You must reward a horse that stops by releasing your hold on his mouth. It may seem easier to halt a horse by pulling back on both reins at the same time. You've probably seen many riders do this, but it doesn't always stop a horse. In fact, many horses fight against a constant hold, and they may pull against you even more. Small tugs or squeezes on the reins are usually more effective than constant pulling. As with turning, asking your horse to halt should be a quiet motion. Too often, people are rough with their hands when they ask a horse to stop; rough hands often make a horse resist and throw his head in the air. Some disobedient horses pull against you no matter what. If so, you need to be more aggressive in the saddle. Lift your hands slightly and pull quickly several times on the horse's mouth until he stops. Once you can effectively stop your horse, try to get him to stand square (come to a halt with his front and hind legs lined up evenly). His head, neck, and back should be straight, and his weight should be on all four legs. The square halt takes time and practice. If he doesn't halt squarely the first time, nudge him forward a step or two until he stands properly. Ask a friend to draw a line in the dirt, and try to get your horse to halt with his front legs standing on the line. Practice halting every time you ride. Try to get your horse to halt squarely. Ask for the halt while you are walking along a line and your horse's body is straight. Some horses fidget and move around when you ask them to halt. If yours does this, ask him to halt for only a second or two, then let him walk forward in a straight line. Try to increase the amount of time he stands quietly. Praise him and give him a pat on the neck when he stands still. Next, ask him to stand for several minutes while you chat with a friend or watch an ongoing lesson. Don't forget about your position when your horse stands quietly. Sit straight in the saddle. 
Don't get lazy and slouch just because your horse isn't moving forward. There may come a time when he will have to stand quietly. At shows, for example, you'll have to line up and stand still during classes, and a judge will deduct points from a fidgety horse. You may also need him to stand still to let cars pass before crossing a busy street, or while on a trail to let other horses or a rattlesnake pass by. If your horse refuses to stand still, you may have to work on halting him from the ground. Lead him around in his bridle and ask him to halt from the ground, using your voice and tugging on the reins.

Mastering the Trot

Once you can walk, turn, and stop a horse effectively, it's time to trot. Now, the two-beat trot can seem very bumpy and fast after the nice, smooth walk. The first time you trot, you may find yourself bouncing around on the saddle and grabbing onto your horse's mane so you don't fall off. Don't panic. One of the best ways to learn how to trot is to start out on a lunge line. In your lessons, ask your instructor to lunge you while you get used to the bumpiness of the trot. This way, you can concentrate on keeping your balance while your instructor controls the horse. At first, trot for only one or two minutes at a time. You'll find that staying secure in the saddle requires a lot of effort, and you'll be using muscles you probably haven't used before.

Your position at the trot should be the same as at the walk: Look in the direction you're going. Keep your elbows close to your body. Keep your hands even and near your horse's withers. Sit up and keep your back straight. Sit deep in the saddle and distribute your weight evenly on both seat bones. Keep your lower legs quietly placed on your horse's sides.

Before you ask for the trot, make sure your horse is walking at a brisk pace. If she is poking along, it will take longer for her to begin trotting.
Loosen your reins slightly, but hold on to the pommel of the saddle or a neck strap so your hands don't bounce up and down, causing you to yank your horse in the mouth, which can hurt and annoy her. Pulling on her mouth also tells her that you want to halt, which is not what you want at this time.

Time to Trot

Once your position is perfect, and your horse is walking smoothly, squeeze with your legs—as you did to get her walking—and your horse should begin trotting. If she doesn't trot right away, squeeze again and again. If she ignores you, give a little nudge or kick. If she still won't trot, tap her behind your lower leg with a crop. Your goal is to get your horse to trot immediately. Don't let her dillydally. Once she's trotting, she should move forward at a brisk, steady pace. Her steps should be regular and even. When you're used to trotting, shorten the reins slightly for more control over your horse's speed and direction.

Try to relax at the trot. Take deep breaths and loosen up. A relaxed body absorbs the up-and-down motion. A stiff, tense body bounces around, which is painful for both horse and rider. Sit deep in the saddle. Don't lean forward over your horse's neck and grip with your knees. That puts your rear end out of the saddle so you won't be bumping it, but your leaning forward isn't comfortable for your horse and makes it difficult for you to keep your balance. If your horse stumbles while you're leaning forward, you can take a dive. Leaning forward also tells your horse to go faster. That's not what you want when you're learning to trot.

Transitions

During lessons, you may hear your instructor talk about transitions. A transition is simply a change of gait or pace. Going from walk to trot is a transition; so is going from trot to canter. An upward transition means going from a slower gait to a faster gait. A downward transition means going from a faster gait to a slower gait.
Ask for upward transitions by squeezing or nudging your horse with your legs. Loosen your reins slightly so you don't pull your horse in the mouth when she shifts up a gear. When you ask for downward transitions, sit deep in the saddle and push your shoulders back. Keep your legs on your horse's sides without pressing into her, and squeeze the reins with your fingers. As soon as your horse slows down, reward her by loosening your hold on her mouth.

Squeeze with your lower legs to ask your horse to trot.

Your goal as a rider is to make transitions as smooth as possible. When you ask your horse to trot, she should trot right away. It shouldn't take three minutes of kicking to get her going. And when you ask for the walk from the trot, you shouldn't have to pull hard on the reins to get her to slow down. A downward transition should take a few steps at most. If your horse is well balanced when you ask for a transition, she should not have a problem slowing down or speeding up smoothly. But if she is slopping along with her nose on the ground or if she is high-stepping with her head in the air, your transitions will be messy. Organize and settle your horse before you ask for a transition.

Rise out of the saddle when the outside hoof is forward.

Posting (Rising) Trot

Trotting will seem a lot smoother once you learn the posting, or rising, trot. Posting means rising up and down in the saddle as your horse trots along. It lessens the stress on your horse's back and makes trotting less tiresome for you both. When your horse trots, she springs from one diagonal pair of legs to the other. For example, her front right leg and her back left leg move forward at the same time. Then the front left leg and back right leg move forward together. There is a moment of suspension between each step. This is why the trot is so bouncy. When you post, you rise out of the saddle when one diagonal pair of legs lifts off the ground, and you sit down as the same pair returns to the ground.
You'll soon notice that the natural movement of your horse actually bounces you forward and slightly out of the saddle, making posting fairly easy. Try to stay close to the saddle, though. Some extra-springy horses can bounce you too high, making your riding style less than efficient.

Sit in the saddle when the outside hoof is under your horse.

Trotting is a two-beat gait, and posting will come easier if you count "one-two" as your horse moves along. (Rise on "one" and sit on "two.") Count to yourself until you start feeling a rhythm, then start rising and sitting in time with the horse's trot. Practice posting while your horse is walking or standing still. As you rise, push down on the stirrups with the balls of your feet. Then slowly sink back into the saddle.

Diagonals

Once you can post the trot, you must master diagonals. Diagonals are a way of making sure you are up out of the saddle—or sitting down in the saddle—on the correct beat. The diagonal you should be on depends on which direction your horse is traveling in an arena. If you are on the correct diagonal, it is easier for your horse to stay balanced as she trots around the arena. Though mastering diagonals can take some practice, it is well worth the time, especially if you compete. If you are riding around the ring on the right rein (your right side faces the inside of the arena), you should rise and sit in time with your horse's left foreleg—the front one on the outside of the arena. If you are riding on the left rein (your left side is on the inside), you should rise and sit in time with your horse's right foreleg.

Alternate diagonals when riding outside of the ring.

CHECKING YOUR DIAGONAL

While you are trotting during a lesson, your instructor may call, "Check your diagonal!" This usually means you are on the incorrect one. There is an easy way to tell if you're on the correct diagonal. Lower your eyes only (not your head—this affects your balance) and look at your horse's outside foreleg.
When it is forward, you should be up in the air. When it is back, you should be sitting in the saddle.

When you change direction, change your diagonal, too, so you and your horse maintain balance. This is simple. As you change direction, sit in the saddle for two beats, then rise again. Think to yourself: "Up-down, up-down, down-up." You should be on the correct diagonal. If you aren't, sit down another two beats, then rise. It is important to be on the correct diagonal when you're competing in a flat class at a show (one in which you walk, trot, and canter). Being on the wrong diagonal could knock you out of the ribbons.

OUT OF THE RING

If you're on a trail ride and there is no inside or outside rein, change your diagonal every once in a while. This keeps your horse flexible; she won't get too used to one diagonal. This also helps you stay in practice for the times that you do compete in front of a judge. Some horses ride more comfortably on one diagonal than the other, but you must remember to change diagonals regularly, for your horse's sake. The better she gets at switching on the fly, the better you can handle her in competition.

**Sitting Trot**

The sitting trot is a great way to improve your balance on a horse. It also strengthens your leg and seat muscles. But it can be tiring for both you and your horse, so practice it for only short periods until you can do it properly. You don't rise when you do the sitting trot. Sit deep in the saddle and keep your back straight. Push your shoulders back. Keep your thighs close against the saddle flap, but don't grip with them. This only makes you bounce more. Stretch your legs down and keep them close to your horse's sides. When you first try the sitting trot, you may need to hold on to the pommel with one hand to stop bouncing. Remember to relax while you sit the trot and try to let your body absorb the bumpy motion.
**Trotting Exercises**

Here are some easy exercises to work on while schooling your horse:

**Circles:** Do a lot of circle patterns at the trot. They make your horse supple. Make sure they're round, not egg shaped. Do them in the corners of the arena and in the middle. Try to keep your horse's body bent around your inside leg. You should be able to see her inside eye.

**Changes of Direction:** Change your direction by riding down the center of the arena or cutting across the middle. You can also do half turns (half circles) at the trot.

**Transitions:** Go from walk to trot, trot to walk, trot to halt. Keep your horse thinking and alert at all times!

**Serpentines:** Do large S-shaped serpentine figures, up and down the arena. Keep your loops even.

**Figure Eights:** Practice figure eights at the trot. Make sure both circles are round and the same size. Remember to change your diagonal when you change directions.

Try to relax when doing a sitting trot.

Doing circles at the trot can make your horse more supple.

**Leg Yielding:** This exercise teaches a horse to obey your leg aids. Try it on the track on the long side of the arena. Make sure your horse's body is straight as you trot on. Then squeeze with your outside leg and ask your horse to move away from the pressure. She should move forward and sideways at the same time. After she moves sideways for a few steps, let her go forward for a second or two, then use your other leg to ask her to move back to the track again. Switch back and forth until you get the hang of the exercise.

**Praise and Rewards**

Always remember to praise your horse during a schooling session. If she performs a movement well, pat her on her neck with your hand and praise her verbally. Horses know the difference between praise and punishment. Reward your horse by letting her take a break during your riding session. Loosen your hold on the reins and walk around the ring.
Some people reward their horses with sugar cubes or horse treats, but horses can get used to this and start to expect tasty treats in the middle of a schooling session. Plus, sugar isn't great for your horse's teeth. It's better to give your horse a treat such as an apple or some carrots when you're done riding.

If your horse is being naughty, don't hesitate to correct her quickly. If she is being sluggish or stubborn, tap her with a whip to get her moving. If she ignores other aids, sometimes a growl or a sharp verbal "no!" can make a horse focus on the task at hand. You will have to see what works for you and your horse and stick with it. Never lose your temper and beat a horse. One or two smacks on her hindquarters and a stern "no" are enough. Always discipline your horse immediately if she displays dangerous behavior such as kicking or biting. A horse only remembers a behavior for a few seconds, so you must correct it as soon as the bad behavior happens. Don't wait several minutes because by then she won't remember what she is being punished for!

Give your horse a lot of praise when she performs well.

**Trotting without Stirrups**

Riding without stirrups is a great way to strengthen your leg muscles so you can use them more effectively. It also teaches you how to sit properly on both of your seat bones and develop a secure seat, instead of relying on your stirrups so much. If you "lose a stirrup" (your foot slips out of the stirrup iron) during a lesson or while competing in a show, it is important to keep going without losing your balance. You shouldn't stop your horse and pick up your iron again. It wastes time, and you might lose points in a show. So don't cringe or whine during your lesson the next time your instructor tells you to "drop 'em!" It's good practice to prepare you for handling the situation during competition. You should set out to ride without stirrups at least once or twice a week to get a feel for the experience.
You don't have to do it for long. Ten minutes is fine because you'll bump around a lot at first, and this won't be very pleasant for you or your horse. When you get better at riding without stirrups, you can do it longer.

If you are going to ride without stirrups for a while, secure them by crossing them over the saddle in front of you so they don't fly around and bang your horse's sides. Pull both stirrups down a bit so the buckle is about three inches away from the stirrup bar. (Pulling the buckle down helps the leathers lie flat so they don't rub your legs.) Then, cross them over your horse's neck and rest them on either side of her withers.

Before you pick up the trot, let your legs hang down under you. They should touch your horse's sides. Keep your toes pointing forward and your heels lower than your toes. Sit deep so both seat bones touch the saddle. Sit up straight and push out your chest. Keep your elbows close to your sides and your hands close together. Squeeze your horse forward into the trot. Practicing good form now will help you excel with confidence in a real competitive environment. Ride with fairly loose reins so you don't pull on your horse's mouth if you start to bump up and down. Breathe deeply, relax, and move with your horse.

Give posting without stirrups a try. Figure out which diagonal you're on, and start rising up and down. Push your legs close to your horse's sides and let your inner knees and lower thighs have a slight grip on the saddle. Let the horse's bouncy movement push you up and out of the saddle. When you sit down again, let your inner knee and lower thigh relax.

**Picking Up Your Stirrups**

Practice dropping and picking up your stirrups at the trot without looking down or moving the stirrup leather with your hand. Begin by slipping your feet out of the stirrups, walking a few strides, and then putting your feet back in them. Once you can do this without looking down, try it at the trot and canter. Don't cheat and look down.
This skill comes in handy when you're riding cross-country or at a show and don't have time to stop and fish around for your stirrup with your foot.

Trotting without stirrups is a great way to strengthen your legs.

**The Canter**

When you feel completely secure in the saddle at the trot, try cantering. The canter has a smoother pace, and the feeling is very pleasant—much like being in a rocking chair. When your horse canters, he seems to be rocking backward and forward in a rhythm, and so do you. During the canter, the sequence of footfalls is as follows: right hind, right diagonal pair, left leading fore, followed by a moment of suspension.

The canter is faster than the trot, so you must be in complete control of your horse before you ask him to speed up. It's a good idea to learn how to canter in an enclosed ring instead of out on a trail, where your horse can bolt. When you are first learning to canter, it's best to ask for it from the trot. Experienced horses can pick up the canter from the walk, but it is easier for them to start from the trot. It is easier for you, too, as your horse should already be moving forward at a brisk pace. He won't be able to pick up a canter nicely if he is poking along with his nose to the ground.

Before you even ask for the canter, make sure your horse is actively trotting forward, especially along the long side of the arena. Get him balanced by making sure both reins are the same length and keeping both of your legs pressed on his sides. If your horse is a bit lazy and difficult to keep trotting nicely, squeeze with both legs every time you rise. This should remind your horse that you want him to trot. Keep a soft but firm hold on his mouth and keep his head up. Concentrate on keeping your back straight and looking ahead.

**Leads**

Pick up the canter when your horse is trotting in a circle or around a bend in the arena because he is more likely to pick up the correct lead.
When your horse canters, his inside foreleg should reach farther forward or step a bit higher than his outside foreleg. Instructors sometimes call the inside foreleg the leading leg. Your horse will be better balanced if he is on the correct lead.

**Asking for the Canter**

When you reach a bend in the arena or while you are circling, work on getting your horse to bend around your inside leg. Place your inside leg next to the girth and your outside leg behind the girth. Squeeze the inside rein so he turns his head to the inside. You should be able to see his eye. Don't ask for the canter unless your horse is bending properly. His head and his hindquarters should be pointing slightly inward. Then you can give the following aids:

Your horse must be trotting actively before you ask for the canter.

Stop rising to the trot and sit deep in the saddle. Push your heels down and point your toes forward. Place your outside leg behind the girth and give him a squeeze or a small kick. If your horse doesn't canter, keep him bending and nudge him harder with your outside leg. If he still doesn't move out, it's crop time. Smack him right behind your outside leg and squeeze with the leg at the same time. If a horse ignores your aids, it's best to use a crop immediately behind your outside leg rather than continue to kick. Most horses respond quickly to a sharp tap from a crop.

When a horse takes off, it feels as if he's leaping into the air. Keep your legs firmly on his sides and your rear end in the saddle. Relax and follow your horse's rocking movement with your whole body.

You should be sitting the trot when you give the aids to canter.

Just as you do at the walk, let your hands and the reins follow the motion of your horse's head and neck. It's important that you don't pull his mouth every time he surges forward. You must always encourage your horse to move forward freely. Don't punish him by grabbing the reins and pulling his mouth.
If your horse canters off at top speed, let him continue forward for a stride or two before you bring him back, by sitting deep in the saddle and squeezing both reins.

At first, you may not be able to tell if your horse is on the correct lead. You will have to look down and make sure his inside hoof is stretching out farther than his outside hoof. Don't tilt your whole head, though; this can throw off your balance. Simply lower your eyes. As you spend more time in the saddle, you'll be able to feel when your horse is on the wrong lead. Most horses feel slightly off balance and may be a bit stiff or move clumsily when they're on the wrong lead. If your horse takes off on the wrong lead, immediately slow him down to a trot by squeezing on the reins and sitting back in the saddle. Don't rise. Sit the trot and ask for the canter again by squeezing on the inside rein and giving him a nudge with your outside leg behind the girth.

Your horse's inside leg should lead at the canter.

CANTERING TIPS

Don't lean to the inside when asking your horse to canter. This unbalances both of you and makes it hard for him to strike off on the correct leading leg.

If you can't get in sync with your horse's rocking motion, adjust your position quickly so you are moving with him. Sitting too far back makes your legs fly forward, and you won't be secure in the saddle. Bouncing around at the back of the saddle can also hurt your horse's back. Move slightly forward in the saddle and push your legs down underneath you.

Make sure you canter equally on both leading legs. Your horse may pick up one lead better than the other, or he may be more comfortable to ride on one lead. Don't get lazy. Work him in both directions during your training and exercising sessions.

Don't tip your head to check if your horse is on the correct lead.

There is a moment of suspension at the canter when all four legs are up in the air.
**Lead Problems**

If you're having problems getting your horse to pick up the correct lead, lunge him without a rider for a few days. If he has problems picking up the lead on the lunge line, he may have back or leg problems. If this seems to be the case, call the vet or a horse chiropractor for an evaluation. If he picks up the leads just fine on the lunge, you (the rider) are probably the problem. If your position is not correct, your horse may have difficulty balancing. Sign up for some lessons with a reputable instructor. After watching you and your horse in action, the instructor may be able to suggest some training methods to conquer your lead problems.

**Cantering Exercises**

**Circles:** Do a lot of circles at the canter. Circles help a horse become supple, muscular, and more flexible. When you school your horse, include some increasing and decreasing circles. Start working in a small 10-meter circle, then gradually increase the size to 20 meters.

**Figure Eights:** Do large figure eights. Canter a circle in one direction, then bring your horse back to the trot at the center of the eight and ask for the canter on the other leading leg, heading off in the other direction. Figure eights are a great way to help your horse learn how to pick up the correct lead.

**Flying Lead Changes:** A flying lead change is when a horse changes leading legs in a skipping motion while he is cantering. He does not have to slow down to the trot to change leading legs. As he leaps into the air, he shifts his weight from one side to the other and one leg begins reaching out farther than the other.

If your horse picks up the correct lead when you lunge him but not when you ride him, the problem could be with you.

As you and your horse become more proficient at picking up the correct lead, you can attempt flying lead changes at the center of the eight where you change your direction.
A horse that can do flying lead changes is necessary for someone who wants to enter hunter and jumper classes. Often, you need to change your horse's lead if you have to change direction in the middle of a course. Some horses can change lead in midair. If they don't get the change in the air, they must do it within two or three strides of landing on the other side of the fence. In hunter and equitation classes, a judge will mark you down if your horse is traveling around a course on the wrong lead.

Flying lead changes take a while to master. Riding figure eights is one way you can teach your horse to do them. Start by cantering on one lead; then come back to the trot at the center of the eight and ask for the different leading leg using your normal cantering aids. Strongly use your outside leg in combination with your inside hand so he reacts to it quickly. Start decreasing the time you allow your horse to trot. Only give him a stride or two to change leads. Eventually your horse will learn to change his lead without coming back to the trot. He should recognize what your aids mean and act upon them.

When galloping, lean forward so your weight is off your horse's back.

**The Gallop**

The horse's fastest speed is the gallop; for most riders, it is the most exciting gait. It's faster than the canter because the horse takes bigger strides. As he races along, he stretches out his head, neck, and body as far as he can. Galloping is great fun, but it can be dangerous. It's very easy for a horse traveling at top speed to get out of control, especially if his rider is inexperienced. Some horses take advantage of you at the gallop and tear off wherever they want. Practice galloping in an enclosed ring before you try it out on a trail or in a big field. It is important that you are able to stop or slow down your horse at all times before you attempt galloping. Galloping is always dangerous on uneven ground.
PREPARING TO GALLOP

Once your horse is cantering, shorten your reins a bit and start squeezing with your legs until he picks up speed. Push your heels down and lift your seat out of the saddle. Tilt your upper body slightly forward and push your chest closer to your horse's neck. This is the galloping position, which jockeys use on the racetrack. Most horses speed up when you take your weight off their backs. Then urge your horse forward by squeezing with your lower leg. Move your hands a little higher up your horse's neck and let them follow the movement of his head as it stretches forward with each stride. Look straight ahead to make sure the path ahead is clear.

When you want to slow down, sit back in the saddle and squeeze on both reins. Keep your lower legs on your horse's sides, but stop squeezing or kicking. When he slows down, reward his obedience by immediately loosening your hold on the reins.

GALLOPING TIPS

Don't flap your arms and legs around while galloping. It may be great fun zipping along at top speed, but you shouldn't forget your position. Keep your elbows close to your body and rest your legs on your horse's sides. Don't grip the saddle with your knees. If you do, your lower legs will automatically shoot back, and you'll lose your security in the saddle. If your horse bucks or stumbles, you could eat dirt.

THE HAND GALLOP

Show judges often ask competitors to hand gallop in flat classes. This is a controlled gallop, or a "gallop in hand." It is faster than a canter, but not quite a flat-out gallop. Judges usually ask competitors to hand gallop along the long side of the arena, one rider at a time. You must get up in galloping position, urge your horse to stretch out his neck, lengthen his stride, and move at a brisk pace. The judge expects to see plenty of energy, so make sure your horse is putting some _oomph_ into it!
**Cantering and Galloping in a Group**

One of the best things about owning a horse is being able to ride in the open countryside with your friends. Before you charge off, however, remember that it's exciting for a horse to be in company, and he may behave differently in the open than he does in the ring. Even the quietest school horse may suddenly experience an extra burst of energy the second he steps out of the ring. You must be prepared for frisky or naughty behavior. You may need stronger equipment on your horse if you plan to do fast work in the open. You may have to use a twisted snaffle instead of a plain snaffle, and a martingale will probably come in handy.

Keep a safe distance away from the other riders, but try not to let them get too far ahead. This will upset your horse, and he'll want to run to catch up. If you're riding with several people, try to keep your position in the group. Don't zoom past the front rider; you could make her horse misbehave. Most importantly, communicate with the other riders in the group before you take off. Decide as a group that you're going to canter or gallop. Don't zip off without warning. Give yourselves time to tighten your hold on the reins and make sure your position is secure. Once you set off, pay attention to the rest of the bunch. If someone is having problems controlling his horse, it's best if everybody slows down until he is in control again. The danger of the horses bolting as a group always exists. Horses love to run together, so be prepared to handle an overexcited mount.

**Jumping**

It will probably take a few months of walking, trotting, and cantering before you're ready to start jumping. Why? Because you must be perfectly balanced and safe on your horse's back before you pop over fences. When a horse jumps, she leaps into the air; if you're not 100 percent secure on her back, you could fall off or hinder her as she jumps.
You also need to have complete control over your horse on the flat before you start jumping. You must know how to slow down or speed up instantly, and you must be able to stop and turn her easily. If you can't control your horse on the flat, it's unlikely that jumping will be safe or enjoyable.

Begin your jumping career by starting small. Trotting over poles may seem boring, but it prepares you for jumping real fences later on. Before starting, shorten your stirrups two or three holes. Shorter stirrups help bring your body weight forward so that you can stay over your horse's center of balance throughout her jump. This makes jumping easier for her. Keep your feet in the stirrups while you shorten them.

**Jumping Position**

Before you jump your first pole, you need to know how to get into the jumping position, sometimes called the two-point position.

In the jumping position, lean forward over your horse's neck and lift your rear end slightly out of the saddle.

Sit forward to help you balance and move with your horse as she clears a fence. It's not easy for her to jump if you are bouncing on her back and unbalancing her. It's hard enough for a horse to clear a fence; don't make it more difficult. Let's take a look at different aspects of the correct jumping position.

**Head:** Hold your head high and keep your eyes up. Looking down affects your balance in the saddle and, in turn, can affect your horse's balance. The more balanced you can be in the saddle, the safer your ride will be in the long run.

**Shoulders:** Don't slouch. Push your shoulders back and down. If you're lucky enough to have a mirror in the arena, check your position. Your shoulders should be in line with or slightly in front of your knees.

**Upper Body:** Bend your upper body forward from your hips (not your waist) over your horse's neck. Stick your chest out a bit. Keep your back flat and straight, not rounded and hunched over.

**Seat:** Push your seat backward and lift it slightly out of the saddle.
Your rear end should be close enough to the saddle that you can sit down quickly and get back into your regular riding position.

**Thighs:** Keep your thighs close to the saddle. Bend your knees and let your thighs touch the saddle flap.

**Lower Legs:** Your lower legs should be underneath you, close to the girth. Keep a strong feel of your horse's sides so you can ask her to move forward. Your legs must stay securely in this position, even when you're flying over a fence. Look down at your stirrup leathers. They should be straight up and down, not positioned at an angle. Keep your ankles flexible. They are shock absorbers for the rest of your body.

Practice the jumping position while working on the flat.

Shorten your stirrups before jumping.

**Feet:** Place the ball of your foot (the widest part) on the stirrup tread and push your heels down lower than your toes. Try to keep your toes pointed forward, but it's okay if they stick out to the sides a little.

**Arms:** Push your arms forward so your elbows are in front of your body instead of glued to your sides, as they are when you're doing flatwork. Bend your arms at the elbows and imagine a straight line from your elbows all the way down your arms and the reins to your horse's mouth.

**Hands:** Keep your hands level and as close together as possible. They should be near your horse's neck or withers. Face your thumbs upward at all times. Maintain a firm grip on the reins by keeping your fingers closed. Shorten your reins slightly, but continue to follow the forward motion of your horse's head and neck.

Practice your jumping position at the walk, trot, and canter. Stay in the position for a few minutes, then sit down and rest. The jumping position is tiresome. When you do it properly, your leg muscles get a real workout.

HANG ON!
If you have trouble keeping your balance when you practice the jumping position, here are three tips that you may find useful:

1. While holding your reins, place your palms on the sides of your horse's neck, near the top, and press down.
2. Grab hold of some mane.
3. Use a neck strap to stop yourself from bouncing around. Fasten a stirrup leather around your horse's neck like a collar (not too tight) and hold on to it.

If you lose your balance, hold on to a neck strap or the mane rather than grabbing the reins and yanking on your horse's mouth. As you become a better, stronger rider, you'll be able to maintain the jumping position without these helpful aids.

**The Release**

Once you start jumping, it's very important that you learn how to loosen your hold on the reins and your horse's mouth. This is often called a "release" or, more specifically, a "crest release." As your horse jumps over a fence, she stretches out her head and neck. If you have a restrictive hold on the reins, you could jab her in the mouth. This can be painful and confusing to your horse. She's jumping the fence as you've asked her to, yet you are punishing her by hurting her mouth. Right from the start, learn how to properly use a release.

A release is when you move your hands forward as you're about to jump over a pole or a fence. Rest your hands on your horse's neck, about 6 to 8 inches in front of the saddle. This loosens the reins slightly, and there's less chance of hurting your horse's mouth. Also, grab hold of some mane when you push your hands forward. This helps stabilize your hands so you can maintain a good release over a fence.

**Trotting Poles**

You might want to jump a fence right away, but it's best to start small and build up your confidence. Trotting over poles gives you time to practice your jumping position and improve your balance.

Use white or colored trotting poles. Natural-colored ones can be difficult for a horse to see.
Use at least four or five trotting poles and place them parallel to each other across your path. Never use only two because an enthusiastic horse may try to jump both of them at the same time.

It is important to set up the trotting poles the correct distance apart on the ground. The distance you set them apart depends on the size of the horse you are riding. A pony's stride is naturally smaller than a horse's, so you must place a pony's trotting poles closer together than you would if you were riding a horse. A small pony will have a hard time trotting nicely over poles spaced for a horse, and a big horse may trip over poles set for a pony. Below is a rough guide to setting up trotting poles. You may need to adjust the distances to suit your own horse, but these measurements give you a starting point.

**Pony Poles:** If you ride a pony, place the poles about 3½ to 4 feet (1.2m) apart. If they're farther apart, the pony may stumble.

**Horse Poles:** If you ride a horse, place the poles about 4½ to 5 feet (1.5m) apart. These distances may need to be adjusted, depending on the size of your horse's stride.

Once the poles are set, ask your horse to trot around the ring at a steady, active pace. She should pick up her feet and not poke along. Squeeze with your lower legs each time you rise out of the saddle. When she is moving nicely, get into the jumping position. Then give yourself plenty of room to approach the poles and head for them in a straight line. Don't yank your horse into them at the last second. Steer her toward the middle of the poles. As she approaches the first one, smoothly release so you don't jab her in the mouth as she stretches her head forward.

Here are some things to think about when going over the poles:

Keep your horse moving at a steady pace all the way through the poles. If she slows down, squeeze with your lower legs or nudge with your heels. Having a lot of energy is known as "impulsion," and your horse needs it to jump over a fence.
If she speeds up too much, sit back in the saddle and rise to the trot as you go over the poles. Squeeze the reins with your fingers to slow her down.

Begin by trotting over poles on the ground.

Keep your horse in the middle of the poles. If she veers to the right, push her back over by pressing her right side with your right leg and squeezing on the opposite rein. If she veers to the left, push her over with your left leg and squeeze with the right rein. Don't just pull on the rein. Use your legs, too.

Look straight ahead.

Keep trotting forward after the last pole. Don't let your horse be lazy and stop.

Trot over the poles in both directions.

When you have trotted over the last pole, sit back in the saddle, bring your hands back to their original position, and take up contact on the reins. Begin rising again.

**Your First Jump**

Your first jump should be a small cross-rail about three yards after a line of trotting poles. The fence should be only about 1½ feet off the ground. You have time while you are trotting over the poles to get your jumping position in order. Also, your horse should be moving forward in a nice, active rhythm over the poles, so it will be easy for her to jump the fence.

As you head toward the poles, remember to:

Get into jumping position.

Move your hands forward in a release, and grab some mane.

Push your heels down, and keep your legs close to your horse's sides.

Look straight ahead at the cross-rail fence and trot over the middle of the poles.

Your first jump should be a small cross-pole placed after the trotting poles.

When you're finished with the trotting poles, stay in jumping position and steer your horse into the middle of the cross-rail fence. She should jump the center, where the poles cross. Keep your legs glued to your horse's sides and squeeze to ask her to move forward. Because you are already in jumping position, you don't have to do anything drastic over the fence. The horse will "jump to you."
When you are safely on the other side of the fence, continue riding in a straight line. Don't let your horse slow down or turn. Sit back in the saddle and take up your normal riding position.

**A Single Fence**

The next fence to try is a small, simple vertical: a jump with poles placed horizontally between the jump standards. Place one pole about 2 feet off the ground with a second pole on the ground about 8 to 10 inches in front of it. The ground pole helps a horse figure out where she should take off.

It may be difficult to keep your horse moving forward at a nice pace without the trotting poles, so get her listening to your legs before you head toward the fence. Circle once or twice before approaching the fence, to give yourself some time to get your horse trotting with plenty of impulsion. Now aim her for the center of the fence. Get into jumping position a few yards in front of the fence and grab some mane. Push your heels down and squeeze with your lower leg.

Don't panic if your horse picks up a canter. If she is a few yards before the fence, sit back down and ask her to trot again. If she is very close to the fence, allow her to canter the last stride or two. Don't pull on her mouth to slow her down right in front of the fence.

When your horse lands, keep your legs glued to her sides, but don't squeeze. Sit back down in the saddle after a few strides, bring her back to the trot if she is cantering, and head toward the arena fence. Don't let your horse dart right or left immediately after the jump. Once you reach the arena fence, turn right or left or halt. If you're jumping the same fence several times, vary what you do on the landing side so your horse doesn't get bored.

You will soon notice that most horses jump smoothly when they take off about 3 feet away from the base of the jump. Horses take off at this spot if they're allowed to, but an unbalanced or rough rider can throw a horse off.
She may get too close to the fence, take a tiny stride, and pop over it awkwardly. She can just as easily take off too far away from a fence and make a huge jump in the air. As you become a better rider, you will be able to adjust your horse's strides so she takes off at the right spot most of the time. Always reward your horse with a big pat on the neck if she's done a good job. A Line Once you can trot over one fence safely, try two. Set up another cross-rail about 60 feet after the first one. Two or more fences in a row are collectively called a line. The large space between the two fences gives you plenty of time to land, regain control, and aim for the second one. Look at the second fence as you approach the first. If your horse canters after the first fence, sit back in the saddle and ask her to trot. As you become more experienced, you will canter to the second fence. Jumping Small Courses When single fences are combined with lines in an arena, you have a course. Jumping a course is a challenge because you have to think ahead. As you approach one fence, you must think about and prepare for the next one. You have to know where it is and how you are going to get there. You must also think about keeping a uniform pace throughout the course. When you feel confident, jump a single fence. Memorize the course before you start, then pick up the trot and make sure you are on the correct diagonal. Do a large circle to establish your pace, then head for the middle of the first fence. Get into jumping position right in front of the fence, and sit back in the saddle when all four of your horse's feet are on the ground again. Rise to the trot between fences, and make sure you are on the correct diagonal. Utilize the whole arena when jumping a course. Don't cut corners. Take your time, and use corners to balance your horse and work on your position. After you jump the last fence, make another big circle and slow down to the walk. 
Cantering Fences Learn how to canter over fences by jumping a line. Jump the first fence at a trot, then sit down in the saddle after your horse lands and ask for the canter. Some horses automatically pick up the canter after a jump. Squeeze with your legs to keep your horse cantering as you look toward and over the second fence. Take a firm hold on the reins so your horse does not speed up. As you approach the second fence, get into jumping position and release with your hands. After you land, squeeze with your legs so your horse continues cantering. (Lazy horses will try to slow down.) Once you have mastered this exercise, you can canter over both fences in the line (and then try some single fences too). Now try cantering over both fences in a line. Pick up a canter and ride in a big circle. Concentrate on getting your horse moving in a steady rhythm. Counting her strides out loud can help you keep an even pace. Give yourself plenty of room to approach the first fence and start counting your strides once you have landed. Most instructors set the two fences in a line a specific number of strides apart. Four or five is the norm. Lines are often 60 or 72 feet for horses, and 40 or 51 feet for ponies. Instructors usually set up lines based on the knowledge that the average horse's stride is about 12 feet long. If a line is 60 feet, most horses should jump it in four strides; this includes 6 feet for landing and 6 feet for takeoff, leaving 48 feet, or four 12-foot strides to complete the line. As you become more experienced, you will be able to control your horse's stride through a line at the canter. If the line is set for five strides, your instructor may ask you to speed up and lengthen your horse's stride so she jumps it in four. Or she may ask you to slow her down and shorten her stride so she jumps the line in six. These exercises help you develop an eye for distances. 
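The line-distance rule of thumb above is simple arithmetic: subtract the landing and takeoff allowances (about 6 feet each) from the line's total length, then divide by the average 12-foot canter stride. A minimal sketch of that calculation (the function name is made up for illustration; the distances are the rules of thumb quoted in the text):

```python
def strides_in_line(line_length_ft, stride_ft=12, landing_ft=6, takeoff_ft=6):
    """Rule-of-thumb stride count for a line of two fences.

    Subtract the landing allowance after the first fence and the
    takeoff allowance before the second, then divide what remains
    by the average canter stride length.
    """
    return (line_length_ft - landing_ft - takeoff_ft) / stride_ft

# A 60-foot line: 60 - 6 - 6 = 48 feet, or four 12-foot strides.
print(strides_in_line(60))  # 4.0
# A 72-foot line rides in five strides.
print(strides_in_line(72))  # 5.0
```

Working the numbers this way also shows why lengthening or shortening the stride changes the count: the same 48 feet is four strides at 12 feet but six strides at 8 feet.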
Soon, you will be able to tell your horse how many strides to take in a line so she can take off at just the right spot in front of a fence and jump it in great style. Cantering Around a Course First, you must figure out which lead you need to be on when you begin the course. Trot in a large circle in front of the first fence and pick up the correct lead. Then, canter around the same circle once before you aim your horse at the fence. This is often called your "opening circle," and it's your chance to get your horse cantering at a steady pace before you begin jumping. As you become more experienced, you'll notice that most courses contain changes of direction. You may jump a fence on the left lead, and then have to pick up the right lead after you land. If you have an experienced horse, you could ask for a flying lead change in the air over the fence or for a stride or two after you land. If you're still learning, bring your horse back to the trot after a fence, then ask for the lead change. After jumping the first fence in a line, look ahead to the second. A refusal is when a horse stops in front of a fence. Jumping Problems Several things can go wrong when jumping. Let's look at some typical situations. REFUSALS A refusal means that a horse doesn't want to jump a fence and stops in front of it. Sometimes, unfortunately, the rider carries on over the fence alone! There are several reasons why a horse may refuse to jump. A horse might not have jumped much in the past and is confused or scared. It may be necessary to go back to the basics with her for a while to restore her confidence. You should stick to trotting poles and low fences until your horse jumps willingly. She is being naughty or stubborn. Some horses won't jump unless their riders are giving them strong and effective aids. If you think your horse is simply misbehaving, carry a whip and give her a sharp tap behind your leg when she stops. You are not secure in the saddle. 
A rider flopping around on a horse's back, unbalancing her, causes most refusals. You are nervous. A horse can tell if you're scared. If she senses that you don't want to jump, she may not want to jump either. You need to be brave (or pretend to be) when jumping. The fence may be too high. A horse that is suddenly faced with a fence higher than she is used to may decide on her own that it is too big for her. You make a bad approach to the fence. If you don't come into a fence with enough energy or if you cut corners or "surprise" your horse suddenly with a jump in her path, you might get a refusal. Use a lot of leg on a horse that refuses. Sit deep in the saddle, and ask her to move on with your legs. Get into jumping position right in front of the fence, and squeeze with your legs to tell your horse to take off. Be firm—not wishy-washy—when riding a refuser. RUN-OUTS Sometimes, a horse runs out to one side of a fence to avoid having to jump. If your horse tries to do this, take a strong contact on the reins when approaching a fence. Always steer her toward the middle of the fence, and squeeze strongly with your legs. If your horse darts to the right and runs out, use a strong left rein and push her over with your right leg. If she runs out to the left, tug on the right rein and push her over with your left leg. JUMPING TO ONE SIDE Always steer your horse to the center of the fence. Don't allow her to swerve to one side at the last second; she might run out. Additionally, a horse that jumps to one side might put you in danger of hitting your leg against the side wings of the fence, known as "standards." LOOKING DOWN Don't look down when jumping. This unbalances your horse and causes problems when you land because you aren't paying attention to where you're heading. Look straight ahead or toward your next fence. 
GETTING LEFT BEHIND If you don't stay with your horse's motion as she jumps and leave a lot of air between you and the saddle, you've been "left behind." This isn't comfortable for you or your horse because you will land back in the saddle with a big bump, and you might seriously injure yourself and your horse. Try not to duck to one side when jumping. You may get left behind if you're not in a jumping position when your horse takes off or if she jumps extra big over a small fence. If you get left behind a lot, hold on to the horse's mane or her neck strap. DUCKING Ducking is when a rider leans to one side when jumping a fence. It's a terrible habit that unbalances your horse. If your instructor tells you that you duck, you must concentrate on folding your body over your horse's neck and looking through her ears over a fence. LEANING TOO FAR FORWARD Throwing yourself over a fence before your horse has the chance to leave the ground is called "jumping ahead of your horse." Leaning too far forward causes your lower legs to slide back and your heels to pop up. If your horse refuses while you are in this insecure position, you'll fly over her head. When approaching a fence, push your heels down, set your legs next to the girth, and keep your bottom close to the saddle. Instead of anticipating your horse's take-off point, wait, relax, and let the fence come to you. Horse Problems **V**ery few horses are 100 percent perfectly behaved. Like people, horses often have a bad habit or two. Some horse habits, such as sneaking mouthfuls of tasty grass, are not too serious. But others, such as bolting or rearing, are dangerous and could cause injury to you or your horse. Reasons for Bad Behavior If your horse is normally well behaved but suddenly develops an annoying habit, don't panic. You may be able to solve the problem by yourself. First, figure out why your horse is acting the way he is; then try changing his behavior. There are several reasons why a horse behaves badly. 
**His tack doesn't fit properly.** If your horse's saddle is pinching his withers or rubbing his back, he will be unhappy and may buck or rear. If the underside of your saddle is lumpy and hard, your horse could be in pain. If his bit is too low, it may bang against his teeth and upset him. Check your tack thoroughly to make sure it fits him properly. **He is ill.** A sick horse may be unwilling to do his normal work. He may be sluggish and stubborn if he's not feeling up to par. If this unusual behavior lasts more than a day or two, ask your veterinarian to see if he can find something wrong with your horse. A sore leg or tooth can make a horse irritable. **He's had too much feed and too little work.** If a horse is given too much high-energy feed and doesn't do enough work, he will be frisky and full of himself—with tons of extra energy for bucking and bolting! If he is overly rambunctious, discuss with your vet what you should be feeding your horse to maintain good weight without the excess energy. You could also cut down on his concentrates and replace them with low-energy roughage such as hay. If your horse is hyper, let him spend a lot of time out in a field or turn-out area to burn off excess energy. You can also lunge him until he settles down, but it's better to simply adjust his feed. If your horse misbehaves, make sure his tack isn't hurting him. Are you clicking with your horse? **You're not riding properly.** Bad riding makes a horse misbehave. Bouncing around on his back can make him sore and grumpy, and he may buck. Yanking on his mouth can hurt, and he may rear to escape the pain. If you're having a lot of problems with your horse, you may need to evaluate your riding skills. Are you a good enough rider for him? If you feel that you're not clicking with your horse, sign up for lessons with a good instructor. If you cannot solve a behavioral problem by yourself, seek help before the problem gets worse. (And it will!) 
If you take lessons, describe your horse's behavior to your instructor and see if she can devise a plan of action. She may want to spend time with your horse to see what he's doing. If the instructor is an experienced horse person, she may come up with some solutions you haven't considered. The instructor may want to school your horse for a few weeks to see if she can sort out the problem. She may want to keep him at her facility, too. Think twice before giving your horse to your instructor to be "sorted out." You shouldn't expect your horse to be perfect a certain number of weeks later. It's likely that the problem will return when you start riding him again. It's better that your instructor help you work to solve the problem. You need to ride your horse, too. If you're nervous about the problem, stick to riding only when your instructor is around. If you're having problems with your horse, sign up for riding lessons. Let go of the reins if you fall off. Falling Off No one is immune to eating dirt occasionally; even Olympic-caliber riders fall off from time to time. In fact, as you gain experience and take on more challenging activities such as jumping or riding cross-country, you are more likely to fall off. If you think you're about to fall off your horse, you may have to take desperate measures to stay in the saddle. Lean down as close as you can to your horse's neck, and wrap your arms around him for security. This is no time to worry about your position. If you can't stay in the saddle, here are two things to remember: 1 Let go of the reins. If you hang onto them, your horse could drag you or step on you. Your horse probably won't go too far without you, especially if some tasty grass is around, so let go. It is better to have broken reins than a broken arm. 2 Curl up in a ball as you fall. This method is used successfully by jockeys, who fall off a lot. Don't hold your arms out to break the fall. The only thing you'll break is an arm. 
Keep your arms close to your sides and roll when you hit the ground. If you fall off and think you have been injured, it's best to lie quietly for a moment or two, then sit up slowly. With luck, other riders will come to your aid. If you have a serious fall, don't let anyone take off your helmet. Moving your head and neck could cause further injury. Lie still and wait until paramedics come. Let the emergency personnel remove your helmet and test you for broken bones. If you watch another rider take a tumble, leave his or her helmet on until help arrives. Bucking When a horse bucks, he puts his head down, arches his back, and kicks his hind legs in the air. A horse may buck if his saddle is pinching him or if he is feeling grumpy. Others buck if they have too much energy. Some horses, mostly naughty ones, buck to remove pesky riders. If your horse puts his head down and you think he is going to buck, tug on his reins quickly to get his head up and kick him to get him moving forward. It is difficult for a horse to buck if he is moving. Sit deep in the saddle and lean backward on a bucking horse. If you lean forward, you will end up on the ground. Don't hit your horse with a crop or punish him severely if he bucks. It's best to ignore bucking and keep him working hard. If you make a big deal out of bucking, he may do it more. Bolting When a horse gallops off with you at top speed and you lose control of him, it's called "bolting." A horse may bolt because he is scared or overly excited. He may not look where he's going and could knock over people or other horses. Bolting can be very scary and dangerous, but you must stay calm so you can stop your horse as soon as possible and safely dismount before you are seriously injured. Experienced riders use a "pulley rein" method to stop a bolting horse. Here's how you do it: 1 Sit deep in the saddle. 2 Shorten your reins as much as you can. 3 Put one hand on the horse's withers and pull back sharply with the other hand. 
Give the rein a good tug. Continue pulling with one rein until he turns his head toward you. If he doesn't turn, try the other rein. 4 Circle! Circle! Circle! It's hard for a horse to continue galloping when he is going around in a tight circle. Make the circle smaller and smaller until he slows down. Rearing Rearing is when a horse stands up on his hind legs. This might be entertaining in an old western movie, but it's very dangerous. If your horse rears a lot, consider selling him. He's not safe. A horse may rear because he doesn't want to move forward. He may also rear if he's upset or confused. Unfortunately, once he learns that rearing is a great way of getting out of work, it is almost impossible to get him to stop. Never pull back on the reins if your horse rears. You'll unbalance him, and he could fall backward on top of you. If you feel that your horse is going to rear, turn him quickly or kick him so he moves forward. He can't rear if he is moving, so try your best to keep him working. If he rears, immediately loosen the reins, lean forward, and wrap your arms around his neck. When he lands, kick him so that he moves forward again. Use the pulley rein to stop a bolting horse. Shying A horse that shies, or spooks, can be unsettling to a novice rider. You will be trotting along merrily when your horse is suddenly startled and spins in place, or comes to a dead stop. This is often caused by something unfamiliar or unusual in the horse's field of vision. A horse can also spook when he hears a loud sound, such as a car backfiring. A spooky horse may need his eyes checked to make sure he can see objects clearly. If he does shy at something, make sure that you eventually get him to walk past the scary item. Use patience and tact—not force and punishment—to let him know that there is nothing to be frightened of. When he makes an attempt to go by the spooky object, praise him with your voice and a pat. 
If your horse uses spooking to get out of work, as some might, you will need to ride assertively throughout your session so that his mind is always on the job at hand. Grabbing Grass Some horses think trail rides are one big salad bar. They put their heads down and munch on grass every chance they get. Don't get lazy. If you let your horse eat grass whenever he wants, he'll try to do it all the time. Grass grabbing gets annoying after a while, and you'll end up with a dirty bit, too. Keep your horse's head up and maintain firm contact on the reins. If your horse is really greedy or is being ridden by a child or an inexperienced rider, put grass reins on him. Grass reins are easy to make. Simply tie a long piece of twine to each bit ring. Then run the twine up alongside the cheekpiece and through the browband loop. Finally, tie the twine to the metal D-rings on both sides of the saddle. The twine should be short enough to stop your horse from lowering his head to graze but long enough that he can bob his head freely as he moves. Grass reins keep your horse's head up and prevent him from snacking. Kicking Be very careful around a horse who kicks. Try to stay away from his hindquarters and well out of his kicking range. If you must walk behind a kicker that is tied up, stay very close to him. It's hard for him to kick you if you are within a foot or two. If your horse kicks other horses, let everyone riding near you know about his problem so they can give him plenty of room. If your horse lifts his leg or acts aggressively toward another horse, give him a hard whack behind your leg with a whip and say "No!" in a firm voice. Then ask him to move forward. If he is working hard, he shouldn't have time to think about kicking. If you go to a show or on a trail ride with other horses and riders, tie a red ribbon on your horse's tail. This warns others that he kicks. 
Having Fun with Your Horse **A** s you become a more experienced rider, you'll realize there's more to riding than endlessly trotting around a ring. You will seek out new challenges and find a lot of fun activities for you and your horse to do together. Getting out of the ring and going trail riding or to shows can be beneficial to both you and your horse. Varying your riding routine helps keep you motivated as a rider and offers you new goals to work toward. You'll meet new horse people and broaden your knowledge of riding. In addition, getting out and about prevents your horse from getting bored with life. When a horse is bored, she may behave badly and act grumpy. Let's look at some exciting activities that will keep you and your horse busy. Trail Riding There is nothing quite like riding in the open with your horse. Exploring woodsy trails or cantering across large fields is exhilarating, and your horse should enjoy it as much as you do. Trail riding is great exercise for your horse and gives her a much-needed break from schooling. You should try to get out of the ring as much as possible. If you live in an urban area with no trails, find out if there are any state parks nearby that have riding trails. You may be able to trailer your horse to one of them for the day. Some parks have lovely, well-groomed trails and areas where you can put your horse in a corral while you have a picnic. Ask if there are other enthusiastic trail riders at your facility and arrange outings with them. Before you head out, make sure that you have full control of your horse. Even the quietest horse can get excited out in the open, and some horses may buck or bolt. You must be able to slow down your horse and bring her to a halt. You must also be able to mount your horse from the ground if you want to get off to open a gate, have a picnic, or use a restroom. There won't be a mounting block handy on the trail. 
Part of the fun of trail riding is admiring the scenery, but pay attention to what your horse is doing and what's going on around you. Your horse could trip and you could take a tumble. Watch out for trash or other "scary monsters" such as fallen trees or old cars that might spook your horse. If there is a strange-looking object in your path that you know is safe, keep a firm hold on the reins, squeeze with both of your legs, and tell your horse to walk forward. Don't allow her to run away from the object or veer off the path. Even though you're having a nice, relaxing time chatting with your friends, it's essential that you maintain a good riding position. Don't get sloppy. Keep a light but constant feel on your horse's mouth, and keep your legs close to her sides. Your position should be just the same as it is in the ring. Here are some tips for you to remember while you're on the trail: Always ride with another person. If you are alone and fall off, you could be in big trouble. Tell someone where you're going. Carry some change or a cellular phone with you so you can call for help if you encounter trouble. Carry a hoof pick in your pocket. Rocks or mud in your horse's hooves could make her lame. If you are going out for several hours, leave your horse's halter on underneath her bridle and fasten a tied-up lead rope to the D-ring on the saddle. Stop and let your horse graze, or tie her up while you have a picnic or take a rest. Carry your driver's license or some other form of identification in a pocket. Engrave a dog collar disc with your name and phone number, and attach it to a D-ring on the saddle. If you ride on other people's land, stick to the trail. Don't ride over crops or get near cattle or other livestock. Always shut gates behind you. Cantering in a group can excite your horse, and she may get out of control. Only canter or gallop if you can stop your horse quickly. 
Steer clear of low-lying branches and grassy fields that could be hiding gopher holes or barbed wire fencing. If you are going to jump over an obstacle such as a log, check the other side before you leap. There could be a hole or something that could trip your horse. Keep at least one horse length between you and the horse in front of you. Let your trail-riding buddies know when you want to change gaits. Alternate riding in the lead. A horse should be happy leading or following. Always walk back to the barn after a trail ride. If you always return at top speed, your horse will jog or prance when you try to make her walk. Long-Distance Riding If you love trail riding and your horse has plenty of stamina, try organized trail rides or competitive endurance rides. Both wind through great countryside and involve scrambling up hills and splashing through streams. Your horse has to be in good shape, and so do you. To compete in endurance riding, you must train for weeks or months. Endurance rides are usually 25, 50, or 100 miles long, and they are timed, so the speed and pace of your horse is important. You will have to canter or gallop in places. A veterinarian checks each horse at certain stopping points along the way and will pull her out of the race—disqualifying you—if he or she thinks the horse is not fit to continue. If you plan to compete in endurance rides, round up a friend or two to act as your crew. They provide you and your horse with food and water at the stopping points and hose your horse off if she's hot. To become a member of the American Endurance Ride Conference, contact them through the information listed in the Resources chapter at the end of this book. Endurance events require a very healthy horse. If you don't want to travel quite so fast, try organized trail riding in which maintaining a steady pace and using good trail etiquette are most important. Organized trail rides tend to be social events and may last for several days. 
You might get to camp out with your horse. Contact the North American Trail Ride Conference, listed in the Resources chapter, for more information. Showing Showing your horse is one way to see how productive your home practices have been. Every rider remembers her first show; if you prepare well, you might even come home with a ribbon or two in addition to the good memories. Horse shows take place year-round, although most occur during summer months. To find out about shows in your area, ask your instructor or look for show programs at your local tack shop. Check horse magazines for show information, too. If you don't have a horse trailer, perhaps you and several friends could arrange to hire a professional hauler. If you board your horse at a big facility, there may be shows on-site so you won't have to travel at all. There are a lot of different show classes, and one should suit you and your horse. If you have a well-behaved, talented horse, you could try working hunter classes. Your horse's performance is judged on the flat and over fences. Or you could enter equitation classes, where your riding skills and the way you look on a horse are what counts. If you have a speedy horse who flies over fences, you could try jumper classes. These are timed, and the person with the fastest clear round (finishing the course without knocking down any poles or having any refusals) wins. If your horse loves to leap, speedy jumper classes may be for you. If you like a slower pace, try pleasure classes in which the judge chooses the horse who looks to be the most pleasant to ride. Or you could enter go-as-you-please classes to show off your horse's best gait. Many breed associations hold their own shows. If you have a spectacular Appaloosa or quarter horse, you'll want to show her off to other breed enthusiasts. Generally, you're required to wear formal riding clothes at shows. 
At English-style shows, you may have to wear a dark riding jacket with long sleeves, breeches, and boots. You'll also need a show shirt, called a "ratcatcher," which features a choker-like detachable collar. A helmet is also mandatory at all shows. Your horse must be groomed until she shines, and your tack should be in tip-top shape. At more formal shows, you may have to braid your horse's mane so she looks really tidy. It's essential that your horse be well mannered at home before you take her to a show. If your horse misbehaves in the ring, the judge will not give you a ribbon, and you're bound to feel embarrassed. If you are not sure how your horse will react at a show, take her to one but don't enter any classes. Simply walk her around and let her get used to the hustle and bustle of the show ground. If you enjoy showing, you may want to join the United States Equestrian Federation (USEF). Turn to the Resources chapter for contact information. Dressage To put it in very simple terms, dressage is training a riding horse to be obedient and move forward in a balanced manner. When you school your horse on the flat, you are actually doing dressage. When you trot in 20-meter circles or halt at a certain point, you are doing dressage, too. Dressage is important because it makes a horse strong, supple, and agile, and it improves her coordination. Most competitions require traditional show attire. If your horse is extremely obedient, she may become a dressage star. Everyone from beginners to Olympians can perform dressage; there are levels for every degree of expertise. A novice rider competing at the first level, called "Introductory," simply needs to perform the very basics of the riding discipline. As horse and rider improve, they are judged on more difficult criteria. Dressage is a popular sport with competitions that take place all over the United States. Competitors memorize riding tests and perform them in rings marked with letters of the alphabet. 
During the test, you and your horse must perform particular movements that start and finish at a given letter. Most tests include circles, transitions, and changes of direction. The quality and accuracy of these movements are what a good dressage test is all about. While you are riding the test, a judge gives you a mark for every move. He notes the way your horse moves, her desire to move forward, her obedience, your correctness, and the effectiveness of your aids. He will give you low marks for such faults as making badly shaped figures, cantering on the incorrect lead, or halting crookedly. He will also penalize you if your horse misbehaves or shows resistance to your commands. Most judges have helpers, called "scribes," who write down their comments about each test, which riders can look at later to learn what they did right or wrong. At the highest level, Grand Prix, horses must be able to counter canter (cantering on the outside lead) and perform difficult moves such as the pirouette (cantering in a small circle while the horse's hind legs stay in the same place), the piaffe (trotting while standing in the same spot), and passage (an exaggerated, elevated slow trot). Upper-level dressage competitions tend to be formal affairs. Horses are braided and immaculately turned out. Riders wear tailcoats called shadbellies, sparkling white breeches, and top hats; however, at lower-level shows, you can wear a short blue or black show jacket, beige breeches, and a regular safety helmet. Braiding your horse's mane at lower-level shows is not always required, but a judge may be impressed if you make the extra effort. Your horse will look neater and have a more professional appearance. To find out more about dressage, you can contact the United States Dressage Federation, listed in the Resources chapter. Eventing If you like galloping cross-country and leaping over fences, eventing may be for you. 
At the top level of the sport, eventing involves three different riding disciplines: dressage, show jumping, and cross-country jumping. It is designed to test the discipline, courage, and stamina of both horse and rider. The biggest eventing competitions take place over three days. The competitors ride a dressage test the first day, gallop over a cross-country course of natural-looking fences on the second day, and complete a show jumping course the third. The competitors are assessed penalty points for any mistakes they make in the three phases, such as failing to complete a dressage movement or refusing a fence. The competitor with the fewest penalties wins. At the lower levels of the sport, eventing competitions often take place in one day. There are different levels of eventing to suit all sorts of riders and horses; for example, the fences at competitions range between 1 foot and 3 feet 9 inches tall. If you decide to try eventing, you may have to buy additional gear. You will need an approved safety helmet with a fixed harness, and some events require you to wear a special safety vest called a body protector. If you'd like to know more about eventing, contact the United States Eventing Association, listed in the Resources chapter. If you like galloping cross-country and jumping, you'll love eventing competitions. Fun and Games If you don't have a trailer to take your horse places, you can still have fun at your facility. Round up three friends and form a quadrille. This is a musical ride with four horses and riders. Pick some lively music, devise a dressage routine incorporating a lot of interesting movements, and show it off to other boarders. Organize small shows for people who can't travel to outside events. Ask a local trainer if he or she would judge for a small fee, and think up some fun classes, such as musical chairs and an egg-and-spoon race. A costume class would be very popular for Halloween. 
There may be some people at your facility who can't afford regular lessons. If several of them enjoy the same riding discipline, perhaps you could form a club and arrange for a local trainer or successful competitor to give a clinic at your barn once a month. Charge everyone a small amount to pay the clinician. Not only do all the above activities improve your riding skills but they also make a welcome change from your daily routine and prevent both you and your horse from getting bored. Riding Rewards Hopefully, this book has given you a basic understanding of how to develop a solid riding position and a secure seat. With the help of a knowledgeable instructor and a willing, patient horse, you should be well on your way to becoming an effective and sympathetic rider. Of course, improving your riding skills won't always be easy. There will be schooling sessions when your horse completely ignores your aids, lessons during which you feel frustrated and confused, and times when you fly through the air and end up on the ground. But all of these struggles will make you a better rider. When jumping cross-country, it's a good idea to wear a protective vest. One day, you'll be trotting your favorite horse around the ring, she'll canter when you ask her, and she'll actually take off on the correct lead. She'll bend around your leg when you give her the subtle aids, and she'll jump a big fence perfectly. Before long, you'll be winning ribbons at local shows. All of your hard work and the time you've spent in the saddle improving your skills and learning how to communicate effectively with your horse will be rewarded eventually. And when you give your favorite horse a pat on her neck to praise her, give yourself a pat on the back, too. You have earned it. Resources **American Association of Equine Practitioners** 4075 Iron Works Parkway Lexington, KY 40511 859-233-0147 www.aaep.org **American Connemara Pony Society** P.O. 
Box 100 Middlebrook, VA 24450 540-866-2239 www.acps.org **American Driving Society** 1837 Ludden Drive, Suite 120 Cross Plains, WI 53528 608-237-7382 www.americandrivingsociety.org **American Endurance Ride Conference** P.O. Box 6027 Auburn, CA 95604 866-271-AERC (2372) www.aerc.org **American Farriers Association** 4059 Iron Works Parkway Suite 1 Lexington, KY 40511 859-233-7411 www.theamericanfarriers.com **American Hanoverian Society** 4067 Iron Works Parkway Suite 1 Lexington, KY 40511 859-255-4141 www.hanoverian.org **American Holsteiner Horse Association** 222 E. Main Street #1 Georgetown, KY 40324-1712 502-863-4239 www.holsteiner.com **American Horse Council** 1616 H Street N.W. 7th Floor Washington, D.C. 20006 202-296-4031 www.horsecouncil.org **American Horse Protection Association** 1000 29th Street #T-100 Washington, D.C. 20007 202-965-0500 **American Morgan Horse Association** 122 Bostwick Road Shelburne, VT 05482 802-985-4944 www.morganhorse.com **American Mustang and Burro Association** P.O. Box 608 Greenwood, DE 19950 www.ambainc.net **American Paint Horse Association** P.O. Box 961023 Fort Worth, TX 76161 817-834-APHA (2742) www.apha.com **American Quarter Horse Association** P.O. Box 200 Amarillo, TX 79168 806-376-4811 www.aqha.com **American Riding Instructors Association** 28801 Trenton Court Bonita Springs, FL 34134 239-948-3232 www.riding-instructor.com **American Saddlebred Horse Association** 4093 Iron Works Parkway Lexington, KY 40511 859-259-2742 www.asha.net **American Society for the Prevention of Cruelty to Animals, National Animal Poison Control Center** 424 East 92nd Street New York, NY 10128 _*_ 888-426-4435 www.aspca.org _*A $65 consultation fee may be applied to your credit card._ **American Trails** P.O. Box 491797 Redding, CA 96049 530-547-2060 www.americantrails.org **American Trakehner Association** 1514 W.
Church Street Newark, OH 43055 740-344-1111 www.americantrakehner.com **American Warmblood Society** 2 Buffalo Run Road Center Ridge, AR 72027 501-893-2777 www.americanwarmblood.org **American Youth Horse Council** 6660 #D-451 Delmonico Colorado Springs, CO 80919 800-TRY-AYHC (897-2942) www.ayhc.com **Appaloosa Horse Club Inc.** 2720 W. Pullman Road Moscow, ID 83843 208-882-5578 www.appaloosa.com **Arabian Horse Registry of America** 10805 E. Bethany Drive Aurora, CO 80014 303-696-4500 www.arabianhorses.org **California Department of Food and Agriculture's Bureau of Livestock Identification** 1220 N Street Room A-130 Sacramento, CA 95814 916-654-0889 www.cdfa.ca.gov/ahfss/livestock_ID **Certified Horsemanship Association** 4037 Iron Works Parkway, Suite 180 Lexington, KY 40511 800-724-1446 www.cha-ahse.org **The Jockey Club** 821 Corporate Drive Lexington, KY 40503-2794 859-224-2700 www.jockeyclub.com **National 4-H Council** 7100 Connecticut Avenue Chevy Chase, MD 20815 301-961-2959 www.4-h.org **National Cutting Horse Association** 260 Bailey Avenue Fort Worth, TX 76107 817-244-6188 www.nchacutting.com **National Reining Horse Association** 3000 N.W. 10th Street Oklahoma City, OK 73107 405-946-7400 www.nrha.com **National Western Stock Show Association** 4655 Humboldt Street Denver, CO 80216 303-297-1166 www.nationalwestern.com **North American Riding for the Handicapped Association** P.O. Box 33150 Denver, CO 80233 800-369-RIDE (7433) www.narha.org **Palomino Horse Breeders of America** 15253 East Skelly Drive Tulsa, OK 74116 www.palominohba.com **Performance Horse Registry** 4047 Iron Works Parkway Lexington, KY 40511 859-231-6662 www.phr.com **Swedish Warmblood Association of North America** P.O. Box 788 Socorro, NM 87801 505-835-1318 www.wbstallions.com/wb/swana **Tennessee Walking Horse Breeders' and Exhibitors' Association** P.O.
Box 286 Lewisburg, TN 37091 931-359-1574 www.twhbea.com **United States Dressage Federation** 4051 Iron Works Parkway Lexington, KY 40511 859-971-2277 www.usdf.org **United States Equestrian Federation** 4047 Iron Works Parkway Lexington, KY 40511 859-258-2472 www.usef.org **The United States Equestrian Team Foundation** P.O. Box 355 Gladstone, NJ 07934 (908) 234-1251 www.uset.org **United States Eventing Association** 525 Old Waterford Road N.W. Leesburg, VA 20176 703-779-0440 www.eventingusa.com **United States Hunter Jumper Association** 3870 Cigar Lane Lexington, KY 40511 859-225-9033 www.ushja.org **United States Pony Club** 4041 Iron Works Parkway Lexington, KY 40511 859-254-7669 www.ponyclub.org **United States Team Penning Association** P.O. Box 4170 Fort Worth, TX 76164 817-378-8082 www.ustpa.com Glossary **aids:** The communication given from a rider to a horse. **bolt:** To run away. **breeches:** A pair of snug, stretchy English riding pants that cover the hips and thighs down to below the knee. **buck:** To spring into the air with the head down and back arched. **canter:** A three-beat gait that resembles a slow gallop. **crop:** A short riding whip with a looped lash. **cross-country:** An event that includes jumping over natural fences at speed. **cross-rail (fence):** A fence that is used for jumping formed by two poles that cross each other, forming an X. **D-ring:** A D-shaped metal fitting through which various parts of the harness pass. **diagonal:** A pair of a horse's legs at the trot such as the right foreleg and the left hind leg. **dressage:** A form of exhibition riding in which the horse receives nearly invisible cues from the rider and performs a series of difficult steps with lightness of step and perfect balance. Dressage is also a classical training method that teaches the horse to be responsive, attentive, and willing.
**endurance riding:** A competition that involves riding over long distances and different types of terrain in varying climatic conditions; participants are judged on time and the condition of their horses. **eventing:** Combined training, including dressage, cross-country, and show jumping. **flying lead change:** When a horse smoothly changes lead during a canter or gallop without reducing speed. **girth:** A band that encircles a horse's belly to hold a saddle on the horse's back. **hunter:** A horse bred and trained to be ridden for the sport of hunting. A show hunter is a horse who is bred to be well mannered and to jump elegantly over fences in English classes. **hunter class:** A class in which horses are judged for jumping over fences resembling obstacles that might be found in a hunt field. **jodhpurs:** A style of riding pants that are close-fitting and cuffed at the ankle. **lead:** The action by which the forefoot takes the first step when entering a canter and while cantering and galloping. A horse on the correct lead is on the right lead when circling clockwise and on the left lead when circling counter-clockwise. **line:** Two or more jumps (poles or fences) set one after the other. **lunge:** To train or exercise your horse with a lunge line, a whip, and your voice. **lunge line:** A rein made of cotton or nylon, about 25 feet long, that attaches to a lunging headstall (cavesson) or bridle. **pommel:** The raised part of the front of a saddle. **post:** To rise and sit in rhythm with the horse's trot while riding. **rear:** To rise up on hind legs. **release:** To loosen one's grip on the reins and move one's hands forward when jumping. This action gives the horse freedom to move his head forward over the fence. **sitting trot:** Sitting, not rising, while trotting; in English riding events, the judge may ask riders to sit the trot. **show jumping:** The competitive riding of horses one at a time over a course of obstacles.
Horses and riders are judged on ability and speed. **saddle tree (tree):** The frame of a saddle. **snaffle:** A mild bit for a bridle. **spot-on:** Slang for perfect or correct. **stride:** Distance measured from where one hoof leaves the ground and the same hoof hits the ground. **tack:** Saddle, bridle, and other equipment used in riding and handling a horse. **10- to 20-meter loop:** Circles that measure 10 or 20 meters in diameter; a term common in dressage. **transition:** A change of pace from one type of movement to another. **trot:** A natural two-beat gait in which the forefoot and diagonally opposite hind foot strike the ground simultaneously. **trotting poles:** A series of colored poles set approximately 5 feet apart on the ground, one after another, over which a horse trots. **turn out:** To put a horse out to pasture. **withers:** The highest part of a horse's back, where the neck and back join.
{ "redpajama_set_name": "RedPajamaBook" }
447
Q: Healthcheck for MongoDB in Dockerfile I am trying to create a Healthcheck for my MongoDB container configured in my Dockerfile: FROM ubuntu RUN apt-get update && apt-get install -y gnupg2 RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' > tee /etc/apt/sources.list.d/mongodb.list RUN apt-get update RUN apt-get install -y mongodb RUN mkdir -p /data/db EXPOSE 27017 **HEALTHCHECK --interval=5s --timeout=3s CMD /etc/init.d/mongodb status || exit 1** CMD ["usr/bin/mongod", "--smallfiles"] But when I build the image and run a container, after running docker ps it shows Up 20 seconds (unhealthy) in the status column. Going into the container with bash, when I try running service mongodb start it fails. In the log file (/var/log/mongodb/mongodb.log) it says Failed to set up listener: SocketException: Address already in use But there is no other container with MongoDB running. What could be causing this? A: Maybe a mistake on last line Change this line CMD ["usr/bin/mongod", "--smallfiles"] to CMD ["/usr/bin/mongod", "--smallfiles"] Update1: change this line **HEALTHCHECK --interval=5s --timeout=3s CMD /etc/init.d/mongodb status || exit 1** to HEALTHCHECK --interval=5s --timeout=3s CMD /etc/init.d/mongodb status || exit 1
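A: A likely reason the init-script healthcheck reports unhealthy: mongod is started directly by the image's CMD rather than by the init system, so `/etc/init.d/mongodb status` finds no service it started and reports it as down — and running `service mongodb start` inside the container then fails with "Address already in use" because the CMD-launched mongod already owns port 27017. A healthcheck that probes the server itself avoids depending on the init script entirely. A minimal sketch, assuming the official `mongo` image (which ships the legacy `mongo` shell):

```dockerfile
FROM mongo:4.4
EXPOSE 27017
# Ping the running server instead of asking the init script for status.
# db.adminCommand('ping') returns { ok: 1 } only when mongod is accepting
# connections, so grep for the printed "1"; any failure marks it unhealthy.
HEALTHCHECK --interval=5s --timeout=3s --retries=3 \
    CMD mongo --quiet --eval "db.adminCommand('ping').ok" | grep -q 1 || exit 1
```

With this, `docker ps` shows the container as healthy as soon as mongod answers on port 27017, regardless of how the daemon was launched.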
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,760
Red Hat to Host Third-Annual Government Users and Developers Conference Red Hat Government Users and Developers Conference provides an unmatched opportunity to learn more about emerging trends in Linux and open source solutions RALEIGH, NORTH CAROLINA — October 16, 2007 — Red Hat (NYSE: RHT), the world's leading provider of open source solutions, today announced the third-annual Red Hat Government Users and Developers Conference to be held on Nov. 1, 2007 at the Ronald Reagan Building & International Trade Center in Washington, D.C. The Red Hat Government Users and Developers Conference, hosted by Carahsoft Technology Corp., is the only event where participants gather under one roof to share new strategies for maximizing the capabilities of Linux and open source technologies in government with the Red Hat development team, hundreds of government users, open source innovators and technology partners. The adoption of open source technology continues to be one of government's hottest computing trends. This fast-paced, one-day conference is designed to keep Red Hat users, developers and IT professionals from the Department of Defense, civilian agencies, academic institutions and other areas of federal, state and local government agencies on the cutting edge of innovative applications and open source integration. New and experienced users alike can choose from 12 sessions and in-depth tutorials, exploring open source advancements across broad topics like Service Oriented Architecture (SOA), data service management and security. Attendees will gain a deeper understanding of emerging technologies, hear customer case studies on some of the most successful government migrations to open source-based solutions and gain hands-on experience with tools that can help better manage Linux systems. The wide range of sessions, tutorials, speakers and sponsors epitomizes this year's conference theme: "Innovate. Integrate. Succeed."
"The use of open source solutions in government is pervasive, and its adoption continues to gain momentum," said Paul Smith, Red Hat's vice president of government sales operations. "Our Government Users and Developers conference provides an opportunity to discuss the flexible, stable and secure solutions government agencies rely on to deliver improved services to their constituents." During the morning keynote session, Don Pearson, executive vice president at e.Republic and group publisher of Government Technology magazine, will give a presentation titled "Open Source Open Government: Open Source Software in Public Sector Service Delivery." In the afternoon keynote at the conference, Isaac Christoffersen, David Schillero and Christopher Dale from Booz Allen Hamilton will highlight the evolution of a client's traditional two-tier application into a high-performance, message-oriented system with the capability to process large volumes of heterogeneous data in a near real-time manner. Using a virtualized SOA solution, the team has implemented high-availability, fault-tolerant and load balancing strategies to significantly reduce risk and improve overall performance. Other speakers include John Weathersby from the Open Source Software Institute, CareGroup Healthcare System's Rob Hurst, Robert Ames and Doc Shankar from IBM, HP's Paul Moore, Ed Hammersla of TCS and top members of the Red Hat team. For more information about the 2007 Red Hat Government Users and Developers Conference, or to register, visit http://www.carahsoft.com/rhgudc/. For more information about Red Hat, visit http://www.redhat.com. For more news, more often, visit www.press.redhat.com. Red Hat, the world's leading open source solutions provider, is headquartered in Raleigh, NC with over 50 satellite offices spanning the globe. CIOs have ranked Red Hat first for value in Enterprise Software for three consecutive years in the CIO Insight Magazine Vendor Value study.
Red Hat provides high-quality, low-cost technology with its operating system platform, Red Hat Enterprise Linux, together with applications, management and Service Oriented Architecture (SOA) solutions. Red Hat also offers support, training and consulting services to its customers worldwide. Learn more: http://www.redhat.com. Certain statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: risks related to the integration of acquisitions; the ability of the Company to effectively compete; the inability to adequately protect Company intellectual property and the potential for infringement or breach of license claims of or relating to third party intellectual property; risks related to data and information security vulnerabilities; ineffective management of, and control over, the Company's growth and international operations; adverse results in litigation; the dependence on key personnel as well as other factors contained in our most recent Quarterly Report on Form 10-Q (copies of which may be accessed through the Securities and Exchange Commission's website at http://www.sec.gov), including those found therein under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations." In addition, the forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change.
However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of the press release. LINUX is a trademark of Linus Torvalds. RED HAT and JBOSS are registered trademarks of Red Hat, Inc. and its subsidiaries in the US and other countries.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,856
Being born again by the grace of God "Like grains of sand on the shore"- IYF English Camp interview IYF English Camp started in Haiti on June 3rd. Nearly 10,000 students from 7 schools are participating until June 11th. They also heard the Gospel in Gospel Class, too. Many had lost their dreams and lives after the damage from the earthquake in 2010. However, IYF English Camp is delivering hope to them. We interviewed Missionary Hansol Lee to get some of his impressions so far. IYF: Hello, Missionary Lee. Lee: Hello. You had a baby girl last week. Congratulations. We had this baby after a miscarriage last year. I am very thankful to God she was born healthy. We're very thankful too. Tell us how you ended up here in Haiti? I came to Haiti last February. How I ended up here? Because I was dispatched. (smiles) Right, so how did you feel when you heard you were being sent to do missions in Haiti? Frankly, I wanted to come to Haiti. One day, Pastor [Ock Soo Park] went to IYF World Camp in Mexico and students from Haiti came to meet him and asked to send them a missionary. This was after the huge earthquake. From then I wanted to go to Haiti. So, I prayed to God about it. Shortly afterwards, Missionary [Jonghun] Lee went to Haiti. So, I thought, 'Haiti is not meant for me.' But not long after, I was dispatched to Haiti as well. It is rare for two young missionaries to be dispatched to the same country. I was very thankful because I felt God really wanted me here. Let's talk about IYF English Camp. You've been preparing since last February, right? How was last year's camp? It was the first IYF English Camp for Haiti last year. Around 5,000 students participated. I am very thankful all of the students listened to the Gospel in the Gospel class. We recruited local volunteers and all 36 volunteers received salvation. They brought joy to our church. So, did they come to church even after English Camp was over? Yes. 20 out of the 36 volunteers regularly attend the service. 
These people built the Haiti Church. Last June, we had to build a new chapel after the English Camp. But we didn't have access to heavy equipment or machinery. So, one day, 15 people did cement work all night long. However, we realized that we needed more people for it. We called English Camp Volunteers with a sincere heart and asked them if they could help us with the construction. 100 volunteers gathered. It was amazing. Honestly, these students don't have anything to do on vacation, but I was still very thankful. They all carried and mixed cement with their bare hands. That's how we built the first floor. When I told them that we had to build the 2nd floor as well, they brought their friends and we had a total of 150 workers. We did construction for 6 months and we couldn't have built the church without them. That is amazing. Doing that kind of work with your bare hands sounds extremely hard. It is beautiful to see the volunteers opening and becoming one heart with the church. How are the volunteers this year? We held workshops for two months before the start of camp. 500 people applied to become volunteers and 300 students wound up attending. I was very thankful to have so many volunteers. However, we had to be somewhat selective, so we sent the students who went against our rules home. For example, what were some of the rules? The rules were somewhat strict. We preached the Word everyday during the workshops. If they missed a session or came late for even a day of the workshop, we had to send them home. Volunteers who didn't have the motivation to participate were also eliminated. We needed volunteers who could receive the heart of God. That's where we are now: 100 volunteers for dancing, translating, assisting, etc. It feels like God had to work through IYF English Camp since the volunteering criteria are so connected to His Word. Did the volunteers easily accept these criteria? Many students' hearts have changed through the workshop. 
In the beginning, they questioned things. "Why do I have to listen to the Word at church when I applied to be an English Camp volunteer?" But then they started to accept the words and received salvation. They testified, saying, "Even if I'm not selected to be a volunteer, I'm very happy I received salvation. I want to be with church." These volunteers were persecuted while preaching the Gospel to their friends at school. Our hearts were on fire when we heard their testimonies. Tell us how you prepared for IYF English Camp. Our goal was to teach 10,000 students. However, we faced difficult situations in the beginning. All the schools in Haiti have finals from May 28th to June 14th. It wasn't easy to find schools and students who could take four days off for camp. Schools that held English Camp last year easily gave us four days for the English Camp since they knew about our program. However, the new schools that don't know about IYF had to make a hard decision because of their final exam schedules. Did you reach the goal of ten thousand students? Yes. God has worked amazingly. One of the education ministry officials in charge of all schools in Port-au-Prince opened his heart after hearing about IYF. He called each school himself to ask them to host IYF English Camp. We went to visit one of the principals three times but couldn't meet him because he was very busy. We went to the school one last time and met him as he was pulling out of the parking lot. We introduced English Camp for 5 minutes and he said he wanted to have the English Camp at his school. We could clearly see that God was working in getting schools for the English Camp. We give glory to God for allowing so many students to hear the Gospel. What kind of effect do you think this camp is having on the students? Students in Haiti do not want to go to school and are not attached to it because most of the teachers come to school for money. There are many cases when there are no teachers at school.
Those students who do not have guidance lose their dreams, and there are no good programs at school for them to get involved with. IYF English Camp is the only unique program that offers different kinds of free classes to all students. After the camp, we can see a heart forming in them: 'I can speak English, too!' That's how we give them their lost dreams back and more than anything else, they gain eternal joy through the gift of salvation. The English Camp is essential for Haitian youths. Yes, you're right. How do you feel when you imagine the future of Haiti? There was a huge earthquake in 2010. The students who remember those moments still have fear in their eyes. There are some people who gave up on their lives and live however they want to live. However, Haiti is changing now. Through IYF English Camp, IYF has carved out an image as a group of people with passion, self-sacrifice, and a heart to pour everything into their work. I believe many more people will attend our IYF events and hear the Gospel. 10,000 students will hear the Gospel by the end of this camp, 20,000 next year, and 30,000 the year after that. Soon, Haiti will be filled with the Gospel. Please describe Haiti in one word. I am not sure how this will sound but even if another earthquake hits Haiti, these people won't fall. They will rise up again. What would you like to say to people who are interested in overseas volunteering? We welcome all of you to Haiti. Let us come together for this work of God. You will be able to feel God's love toward Haiti. We were smiling throughout the conversation with missionary Lee, hearing him frankly and confidently speak about his hope toward Haiti. God wants to give His love to these people. Like the countless stars in the sky and the grains of sand on the shore, God will save just as many here. And through them, this country will change for good. We thank the Lord who is making Haiti beautiful. 
Copyright © 2020 by GOOD NEWS MISSION. All rights reserved.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
290
M BASKETBALL The Iowa football program should be above having moral victories By Pat Harty AllHawkeyes IOWA CITY, Iowa - We saw this past Saturday that the Iowa the football team isn't above getting embarrassed by a team of Penn State's ilk. The Nittany Lions faced little resistance while dismantling Iowa 41-14 in prime time at Beaver Stadium in University Park, Pa. The loss left Iowa coach Kirk Ferentz searching for answers, but with little time to do so with second-ranked Michigan up next on the schedule in another prime time game this Saturday at Kinnick Stadium. It's easy to think that Michigan will crush the Hawkeyes based on the circumstances surrounding each team. Michigan is undefeated and not just beating opponents, but destroying almost all of them in swift and stunning fashion under head coach Jim Harbaugh, while Iowa is 5-4 and coming off a game in which it surrendered 599 yards. If the Hawkeyes don't improve dramatically by Saturday, the same Michigan team that pummeled Penn State 49-10 on Sept. 24 in Ann Arbor, Mich., almost certainly would do the same to Iowa. So there is something to be said for staying close and making the game competitive as a way for Iowa to build hope and confidence for the final two games against Illinois and Nebraska. But in no way should Iowa be receptive to having a moral victory on Saturday, unless the New England Patriots replace Michigan as the opponent. Only then should it be tolerated. The Iowa program should be above having moral victories in Ferentz's 18th season as head coach and with everything that has been invested in the program, most notably money and the commitment from fans. Iowa is barely removed from a 12-win season that culminated with a berth in the Rose Bowl, and yet some apparently feel from what I've read on social media and in e-mails, that the program has sunk low enough that being competitive against Michigan would qualify as a moral victory. 
That just seems weak and disrespectful to the Iowa players and coaches. Iowa's acceptance of moral victories should've ended the first time Ferentz rebuilt the program 15 years ago. A moral victory fits a program in transition or one that is making a dramatic step up in competition, like when Northern Iowa lost to Iowa 17-16 in the 2009 season opener. That would qualify as a moral victory, although, Northern Iowa coach Mark Farely might have a different opinion. Iowa never has been and probably never will be one of the true blue bloods in college football. But Iowa is a solid program with plenty of tradition, resources and support to be competitive. Moral victories are for losers. Ferentz's program has some serious issues, but the players and coaches aren't losers. They're above being rewarded for just being competitive. Upsets happen all the time as we saw this season when North Dakota State defeated the Hawkeyes 23-21 on a field goal as time expired. The Bison has no business beating Iowa on paper, but the game was played on the grass at Kinnick Stadium. Michigan is favored to win on Saturday by at least three touchdowns, which is total disrespect to the Iowa players and coaches. Something tells me the game will be closer than that, partly because few expect it to be. That usually is when Ferentz's teams respond the best. Iowa would suffer a major public relations blow if the Michigan game isn't close. So, of course, you would prefer losing a close game than being embarrassed under the lights at home and on national television. But Iowa only has two options against Michigan on Saturday – win or lose – because moral victories stopped being an option years ago. 
© 2019 AllHawkeyes. All rights reserved.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,828
\section{Introduction} In this paper we consider the following Cauchy problem: \begin{align}\label{main eqn}\left\{\begin{array}{l} i\partial_tu = \Lambda_m u + F(u)\;\; \mbox{in}\;\;\mathbb{R}^{1+3}, \\ u(x,0) = \varphi(x) \;\; \mbox{in}\;\; \mathbb{R}^3, \end{array}\right. \end{align} where $\Lambda_m$ is the Fourier multiplier defined by $\Lambda_m=(m-\Delta)^{\frac 12}$ and $F(u)$ is a nonlinear term of Hartree type such that $F(u) = [V * |u|^2]u$ with $V$ smooth in $\mathbb R^3 \setminus \{0\}$. Here $m>0$ is the mass and $*$ denotes the convolution in $\R^3$. The class of Hartree potentials under consideration is defined as follows: \begin{defn} For $0\le \gamma_1,\gamma_2<3$ the potential $V$ is said to be of type $(\gamma_1,\gamma_2)$ if it satisfies the growth condition $\widehat{V} \in C^{4}(\mathbb R^3 \setminus\{0\})$ and for $ 0 \le k \le 4$ \begin{align}\label{growth} |{\nabla^k}\widehat V(\xi)| \lesssim |\xi|^{-\gamma_1-k}\;\;\mbox{for}\;\; |\xi| \le 1, \quad |{\nabla^k}\widehat V(\xi)| \lesssim |\xi|^{-\gamma_2-k}\;\;\mbox{for}\;\;|\xi| > 1. \end{align} \end{defn} The Coulomb potential $V(x)=|x|^{-1}$ is of this type with $\gamma_1=\gamma_2=2$, and the Yukawa potential $V(x)=e^{-\mu_0 |x|} |x|^{-1}, \ \mu_0>0$, corresponds to $\gamma_1=0, \gamma_2=2$. The equation \eqref{main eqn} with these two potentials, which is called the semirelativistic Hartree equation, arises in the mean-field limit of large systems of bosons, see, e.g., \cite{E07,l03,EH87}. In this paper we study \eqref{main eqn} with the above generalized potentials. By Duhamel's formula, \eqref{main eqn} is written as the integral equation \begin{equation}\label{integral} u = e^{-it{\Lambda_m}}\varphi -i \int_0^t e^{-i(t-s){\Lambda_m}}F(u)(s)\,ds. \end{equation} Here the linear propagator $e^{-it{\Lambda_m}}$ is defined as the solution map of the linear problem $i\partial_t v = {\Lambda_m} v$ with initial datum $v(0) = \varphi$.
It is formally written as \begin{align}\label{int eqn} e^{-it{\Lambda_m}}\varphi = \mathcal F^{-1} \big(e^{-it\sqrt{m +|\xi|^2}} \mathcal F(\varphi)\big) = (2\pi)^{-3}\int_{\mathbb{R}^3} e^{i( x\cdot \xi - t\sqrt{m + |\xi|^2} )}\widehat{\varphi}(\xi)\,d\xi. \end{align} The purpose of this research is to study the existence and uniqueness of solutions and to observe the behaviour of solutions as time goes to infinity, in particular comparing them with linear solutions; these are known in the area of dispersive PDEs as the well-posedness and scattering problems, respectively. The following is the formal definition of scattering. \begin{defn}\label{scattering defn} We say that a solution $u$ to \eqref{main eqn} scatters (to $u_\pm$) in a Hilbert space $\mathcal H$ if there exist $\varphi_\pm \in \mathcal H$ (with $u_\pm(t) = e^{-it{\Lambda_m}}\varphi_\pm$) such that $\lim_{t \to \pm\infty}\|u(t) - u_\pm\|_{\mathcal H} = 0$. \end{defn} One natural candidate for the Hilbert space $\mathcal H$ is the Sobolev space $H^s(\R^3)$. That is, we will show well-posedness and scattering results when the initial data is given in $H^s(\R^3)$. In particular we want to find the minimum value of $s$ that ensures the scattering of the corresponding solutions, which is called the low regularity problem. There have been a lot of results on this subject. Firstly, Lenzmann \cite{l07} established the global existence of solutions for the Yukawa type potential using energy methods, provided the initial data given in $H^{\frac12}(\R^3)$ is sufficiently small. Herr and Lenzmann \cite{HL2014} showed that for the Coulomb type potential the almost optimal local well-posedness holds for initial data with $s>\frac14$ (and $s>0$ if the data is radially symmetric) using localized Strichartz estimates and the Bourgain spaces.
In \cite{chonak,choz0,co07,coss 09}, the authors considered the potential generalized from the Coulomb type, namely $V(x)=|x|^{-\gamma}$ for $0<\gamma<3$ (corresponding to $\gamma_1=\gamma_2=3-\gamma$ in our definition), and investigated well-posedness and scattering for the equations. The most recent results on the Yukawa potential were obtained by Herr and Tesfahun \cite{hete}, where they showed the small data scattering result for initial data with $s>\frac12$ (and $s>0$ if the data is radially symmetric) using the $U^p$-$V^p$ spaces method, which has proved effective for deriving scattering results. In this paper we consider the range $0<s\le \frac12$, where the scattering result has been proved only under a radial assumption on the initial data \cite{hete}, and aim to obtain a similar result under a weaker assumption. We prove the scattering result for $s>\frac14$ by imposing one additional angular regularity on the initial data. Let us introduce the angular derivative and the angularly regular Sobolev space. The spherical gradient $\nabla_{\mathbb S}$ is the restriction of the gradient to the unit sphere, which is well-defined, that is, independent of the coordinates of $\mathbb{S}^2$. It satisfies the following relation $$\nabla=\frac1r\nabla_{\mathbb S}+\theta\frac{\partial}{\partial r}, \ x=r\theta, \ \theta\in\mathbb{S}^2,$$ and also has the concrete formula $\nabla_{\mathbb S}=x \times \nabla$. The function space $H^{s, 1}$ is the set of all $ H^s$ functions whose angular derivative is also in $ H^s$. The norm is defined by $\|f\|_{ H^{s, 1}} := \|f\|_{ H^s} + \|\nabla_{\mathbb S} f\|_{ H^{s}}$. It contains all radially symmetric functions. Our main result is the following. \begin{thm}\label{main thm} Let $s>\frac14$. Suppose the potential $V$ in \eqref{main eqn} is radially symmetric and of type $(\gamma_1,\gamma_2)$ with $0\le\gamma_1<1$ and $\frac32<\gamma_2<3$.
Then there exists $\delta > 0$ such that for any $\varphi \in H^{s, 1}$ with $\|\varphi\|_{ H^{s, 1}} \le \delta$, \eqref{main eqn} has a unique solution $u \in (C \cap L^\infty)(\mathbb R; H^{s, 1})$ which scatters in $ H^{s, 1}$. \end{thm} \begin{rem} The potential in Theorem~\ref{main thm} includes the Yukawa potential. Concerning the Coulomb potential, non-existence of scattering \cite{choz0} and modified scattering results \cite{F2014} have been established. \end{rem} Our proof is fundamentally based on a fixed point argument and the Littlewood-Paley decomposition. In order to obtain a contraction, we use frequency-localized spherical Strichartz estimates and construct a resolution space using the $U^p, V^p$ spaces, to which linear estimates for free solutions can be transferred. We have an application to the following Hartree Dirac equation: \begin{align}\label{sub eqn}\left\{\begin{array}{l} i\partial_t\psi = ({\alpha}\cdot D+m\beta) \psi + [V * |\psi|^2]\psi \;\; \mbox{in}\;\;\mathbb{R}^{1+3}, \\ \psi(x,0) = \psi_0(x) \;\;x \in \mathbb{R}^3, \end{array}\right. \end{align} where $D=-i\nabla$, $\psi :\R^{1+3}\rightarrow\mathbb{C}^4$ is the Dirac spinor, $m>0$ is the mass and $\beta$ and $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ are the Dirac matrices. If $V$ is the Coulomb potential, \eqref{sub eqn} appears when the Maxwell-Dirac system with zero magnetic field is uncoupled \cite{cg 76}. In the same paper \cite{cg 76} it is conjectured that \eqref{sub eqn} with the Yukawa potential might also be obtained by uncoupling the Dirac-Klein-Gordon system, as in the Maxwell-Dirac case. For more information about the Dirac equation, see \cite{cg 76} and references therein. Following \cite{dfs07} (see also \cite{BH2017}) we introduce the projection operators $\Pi_{\pm}^m(D)$ with symbol \begin{equation*} \Pi_{\pm}^m(\xi)=\frac12\Big[I\pm\frac{1}{\langle\xi\rangle_m}(\xi\cdot\alpha+m\beta)\Big]. \end{equation*} We then define $\psi_{\pm}:=\Pi_{\pm}^m(D)\psi$ and split $\psi=\psi_++\psi_-$.
By applying the operators $\Pi_{\pm}^m(D)$ to the equation \eqref{sub eqn}, and using the identity $$ \alpha\cdot D+m\beta=\Lambda_m(\Pi_{+}^m(D)-\Pi_{-}^m(D)), $$ we obtain the following system of equations \begin{equation}\label{system} \begin{cases} &(-i\partial_t+\Lambda_m)\psi_{+}= \Pi_{+}^m(D)[(V*|\psi|^2)\psi], \\ &(-i\partial_t-\Lambda_m)\psi_{-}=\Pi_{-}^m(D)[(V*|\psi|^2)\psi], \end{cases} \end{equation} with initial data $\psi_0^{\pm}=\Pi_{\pm}^m(D)\psi_0$. Observe that the linear propagators for this system have the same formula as for \eqref{main eqn} except for the sign, and the nonlinear term is also the same except for the projection operators. Note that the Strichartz estimates hold regardless of the sign, and the Sobolev norm is equivalent under these operators, i.e., $\| \Pi_{+}^m(D) f\|_{H^s} + \| \Pi_{-}^m(D) f\|_{H^s} \sim \| f\|_{H^s}.$ Since our proof of Theorem~\ref{main thm} does not require any structure of the equation but relies only on Strichartz estimates, function spaces and the Littlewood-Paley decomposition, one can easily check the following corollary: \begin{cor} Let $s>\frac14$. Suppose the potential $V$ in \eqref{main eqn} is radially symmetric and of type $(\gamma_1,\gamma_2)$ with $0\le\gamma_1<1$ and $\frac32<\gamma_2<3$. Then there exists $\delta > 0$ such that for any $\psi_0 \in H^{s, 1}$ with $\|\psi_0\|_{H^{s, 1}} \le \delta$, \eqref{sub eqn} has a unique solution $\psi \in (C \cap L^\infty)(\mathbb R; H^{s, 1})$ which scatters in $ H^{s, 1}$.
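For the reader's convenience we record the routine algebra behind this splitting. Using the standard anticommutation relations of the Dirac matrices, $\alpha_j\alpha_k+\alpha_k\alpha_j=2\delta_{jk}I$, $\alpha_j\beta+\beta\alpha_j=0$ and $\beta^2=I$, the matrix $A(\xi):=\xi\cdot\alpha+m\beta$ satisfies $A(\xi)^2=(|\xi|^2+m^2)I$, so that with $\langle\xi\rangle_m=\sqrt{|\xi|^2+m^2}$ (the replacement of $m$ by $m^2$ in the symbol of $\Lambda_m$ is harmless here since $m>0$ is fixed) one computes \begin{align*} \Pi_{\pm}^m(\xi)^2 &= \tfrac14\Big[I\pm\tfrac{2}{\langle\xi\rangle_m}A(\xi)+\tfrac{1}{\langle\xi\rangle_m^2}A(\xi)^2\Big] = \tfrac12\Big[I\pm\tfrac{1}{\langle\xi\rangle_m}A(\xi)\Big]=\Pi_{\pm}^m(\xi), \\ \Pi_{+}^m(\xi)+\Pi_{-}^m(\xi)&=I, \qquad \langle\xi\rangle_m\big(\Pi_{+}^m(\xi)-\Pi_{-}^m(\xi)\big)=A(\xi)=\xi\cdot\alpha+m\beta, \end{align*} which is precisely the symbol-level form of the identity used above.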
\end{cor} \medskip \section{Notations and Preliminaries} \subsection{Notations} The Fourier transform of $f$ is denoted by $\widehat{f} = \mathcal F( f)$ and the inverse Fourier transform by $\mathcal F^{-1}$, such that $$\mathcal{F}(f)(\xi) = \int_{\mathbb{R}^3} e^{- ix\cdot \xi} f(x)\,dx,\quad \mathcal F^{-1} (g)(x) = (2\pi)^{-3}\int_{\mathbb{R}^3} e^{ix\cdot \xi} g(\xi)\,d\xi.$$ We denote the frequency variables by capital letters $M,N>0$, which are assumed to be dyadic numbers, that is, of the form $2^m,2^n$ with $m,n\in\mathbb{Z}$. Let $\rho\in C_{0,rad}^\infty(-2,2)$ be such that $\rho(s)=1$ if $|s|<1$, and let $\chi_M$ be defined by $\chi_M(s)=\rho(\frac{s}{M})-\rho(\frac{2s}{M})$ for $M>0$. Then $\supp\chi_{M}=\{ s\in\R: \frac M2<|s|<2M \}$. Fix $N_0\gg1$ and let $\beta_{N_0}:=\sum_{M \le N_0}\chi_{M}$ and $\beta_N:=\chi_N$ for $N>N_0$. Then $\supp\beta_{N_0}=\{ s\in\R: |s|<2N_0 \}$ and $\supp\beta_{N}=\supp\chi_N$ for $N>N_0$. Denote $\widetilde\chi_M(s):= \chi_M(s/2) + \chi_M(s) + \chi_M(2s)$ and $\widetilde \beta_N$ similarly. Next we define the Littlewood-Paley operators $\dot P_M$ and $P_N$ by $\mathcal F (\dot P_Mf) = \chi_M\widehat{f}$ for $M>0$ and $\mathcal F ( P_Nf) = \beta_N\widehat{f}$ for $N\ge N_0$, respectively. Further, define similarly $\widetilde {\dot P_M} $ and $\widetilde P_N$ using $\widetilde\chi_M$ and $\widetilde\beta_N$. Then $\widetilde{ \dot P_M }\dot {P}_M = \dot P_M \widetilde{ \dot P_M }= \dot P_M$ and $\widetilde P_N P_N=P_N \widetilde P_N = P_N$. We denote $L^r = L_x^r(\mathbb R^3)$ and $\mathcal{L}^r=\mathcal{L}_\rho^r(\R)=L_\rho^r(\rho^{2}d\rho)$ for $1 \le r \le \infty$. Consider mixed-normed spaces: for a Banach space $X$, $u \in L_I^q X$ iff $u(t) \in X$ for a.e. $t \in I$ and $\|u\|_{L_I^qX} := \|\|u(t)\|_X\|_{L_I^q} < \infty$. We denote $L_I^qX = L_t^q(I; X)$ and $L_t^qX = L_{\mathbb R}^qX$. Positive constants depending only on $m,N_0$ are denoted by the same letter $C$, if not specified otherwise.
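We also record the routine verification that the $\chi_M$ form a dyadic partition of unity: writing $M=2^m$, the sum telescopes, $$ \sum_{M \,\text{dyadic}}\chi_M(s) = \sum_{m\in\mathbb{Z}}\Big[\rho\big(\tfrac{s}{2^m}\big)-\rho\big(\tfrac{s}{2^{m-1}}\big)\Big] = \lim_{m\to\infty}\rho\big(\tfrac{s}{2^m}\big)-\lim_{m\to-\infty}\rho\big(\tfrac{s}{2^m}\big) = 1 \quad\text{for } s\neq 0, $$ and consequently $\beta_{N_0}+\sum_{N>N_0}\beta_N=1$ on $\R\setminus\{0\}$.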
$A \lesssim B$ and $A \gtrsim B$ mean that $A \le CB$ and $A \ge C^{-1}B$, respectively, for some $C>0$. $A \sim B$ means that $A \lesssim B$ and $A \gtrsim B$. \subsection{Function spaces} In this subsection we introduce the $U^p, V^p$ function spaces. For the general theory, see e.g. \cite{hhk}, \cite{ktv}. Let $1\leq p<\infty$. We call a finite set $\{t_0,\ldots,t_J\}$ a partition if $-\infty<t_0<t_1<\ldots<t_J\leq \infty$, and denote the set of all partitions by $\mathcal{T}$. A corresponding step-function $a:\R\to L^2(\R^3)$ is called a $U^p$-atom if \[ a(t)=\sum_{j=1}^J \mathbf{1}_{[t_{j-1},t_j)} (t) f_j, \quad \sum_{j=1}^J\|f_j\|_{L^2(\R^3)}^p=1,\quad \{t_0,\ldots,t_J\}\in \mathcal{T}, \] and $U^p$ is the corresponding atomic space. The norm is defined by \begin{align*} \| u\|_{U^p} := \inf \Big\{ \sum_{k=1}^\infty |\lambda_k| : u=\sum_{k=1}^\infty\lambda_k a_k, \text{ where } a_k \text{ are } U^p\text{-atoms and } \lambda_k\in\C \Big\}. \end{align*} Further, let $V^p$ be the space of all right-continuous $v:\R \to L^2(\R^3)$ satisfying \begin{align}\label{V norm} \|v\|_{V^p}:=\sup_{\{t_0,\ldots,t_J\}\in \mathcal{T}}\big(\sum_{j=1}^J\|v(t_j)-v(t_{j-1})\|_{L^2(\R^3)}^p\big)^{\frac1p}, \end{align} with the convention $v(t_J)=0$ if $t_J=\infty$. Likewise, let $V_-^p$ denote the space of all functions $v:\R\rightarrow L^2(\R^3)$ satisfying $v(-\infty)=0$ and $\|v\|_{V^p}<\infty$, equipped with the norm \eqref{V norm}. We define $V_{-,rc}^p$ as the closed subspace of all right-continuous $V_{-}^p$ functions. Now we list some useful lemmas on the $U^p,V^p$ spaces. \begin{lem}\label{Lem:property UV} Let $1\le p<q<\infty$. \begin{enumerate} \item $U^p,V^p,V_-^p$ and $V_{-,rc}^p$ are Banach spaces. \item The embeddings $U^p\hookrightarrow V^p_{-,rc} \hookrightarrow U^q \hookrightarrow L^\infty(\R;L^2)$ are continuous. \item The embeddings $V^p\hookrightarrow V^q$ and $V_-^p\hookrightarrow V_-^q$ are continuous.
\item (Duality) For $1<p<\infty$, $\| u\|_{U^p}= \sup_{\{v\in V^{p'}:\|v\|_{V^{p'}}=1\}} \int_{-\infty}^{\infty}\la u'(t), v(t)\ra_{L_x^2} dt.$ \end{enumerate} \end{lem} \begin{defn}[Adapted function spaces] We define $U_m^p$ (and $V_m^p$, respectively) as the space of all functions $u$ such that $e^{it\Lambda_m}u\in U^p$ ($e^{it\Lambda_m}v\in V^p$, respectively) with the norm \begin{align*} \| u\|_{U_m^p}:=\|e^{it\Lambda_m} u \|_{U^p} \quad ( \ \| v\|_{V_m^p}:= \|e^{it\Lambda_m} v \|_{V^p} \text{ respectively }). \end{align*} \end{defn} The properties in Lemma~\ref{Lem:property UV} also hold for the spaces $U_m^p$ and $V_m^p$. \begin{lem}[Transfer principle]\label{Lem:transfer} Let $T: L^2 \to L_{loc}^1(\mathbb R^3; \mathbb C)$ be a linear operator satisfying $$ \|T(e^{-it\Lambda_m} f )\|_{L_t^q X} \lesssim \| f \|_{L^2} $$ for some $1 \le q < \infty$ and a Banach space $X \subset L_{loc}^1(\R^3;\C)$. Then $$ \|T(u)\|_{L_t^qX} \lesssim \|u\|_{{U_m^q}}. $$ \end{lem} \subsection{Strichartz estimates} Let the pair $(q, r)$ satisfy $2 \le q, r \le \infty$ and $\frac 2{q} + \frac 3{r} = \frac 32$. Then it holds from \cite{cox} that \begin{align} \bl M \br^{-\frac{5}{3q}}\|e^{-it{\Lambda}_m} \dot P_M\varphi\|_{L_t^{q} L_x^r} \lesssim \| \dot P_M\varphi\|_{L_x^2}.\label{besov str} \end{align} The case $q=2$ is sufficient for our discussion. For the fixed $N_0 \gg 1$ we have \begin{align*} \|e^{-it\Lambda_m}P_{N_0}\varphi\|_{L_t^2L_x^6} \lesssim_{N_0} \|P_{N_0}\varphi\|_{L^2}, \end{align*} which gives, by the transfer principle in Lemma~\ref{Lem:transfer}, \begin{align}\label{low str} \|P_{N_0}u\|_{L_t^2L_x^6} \lesssim_{N_0} \|P_{N_0}u\|_{U_m^2}. \end{align} This endpoint estimate can be extended to a wider range with weaker angular integrability on the left-hand side.
That is, we consider the following $L_t^q\mathcal L_\rho^r L_\theta^{{r_*}}$ norm with $r_*\le r < \infty$ defined by $$ \|u\|_{L_t^q \mathcal L_\rho^r L_\theta^{{r_*}}} = \left(\int_\mathbb R \left\|\big(\int_{\mathbb S^2}|u(t, \rho \theta)|^{{r_*}}\,d\theta\big)^\frac1{{r_*}}\right\|_{L_\rho^r(\rho^2 d\rho)}^q dt \right)^\frac1q. $$ If $r = \infty$, then we define $\mathcal L_\rho^\infty = L_\rho^\infty$. Then for $\frac{10}{3} < r < 6$, there holds \begin{align*} \|e^{-it{\Lambda_m}} \dot P_M\varphi\|_{L_t^{2} \mathcal L_\rho^r L_\theta^2} \lesssim \| \dot P_M\varphi\|_{L_x^2} \times \begin{cases} M^{\frac12 - \frac 3r}\bl M\br^{-\frac12+\frac4r} ,\ &\text{if} \ \frac{10}{3} < r < 4,\\ M^{-\frac14}\bl M\br^{\frac12+\epsilon}, \ &\text{if} \ r=4 \ \text{and}\ \epsilon>0, \\ M^{1-\frac3r}, \ &\text{if} \ 4<r<6. \end{cases} \end{align*} For this see the Klein-Gordon case of Theorem 3.3 in \cite{ghn}. In particular, if $\frac{10}{3} < r < 4$ and $N>N_0$, then for $u\in U_m^2$ we have, by the transfer principle for $U_m^2$ spaces, \begin{align}\label{ang str} \| P_N u\|_{L_t^{2} \mathcal L_\rho^r L_\theta^2} \lesssim N^\frac1r \| P_N u\|_{U_m^2}, \end{align} which we will use intensively in the following argument. \begin{rem} Note that the minimal loss of regularity occurs when $r$ is close to $4$, and this is essentially related to the regularity condition $s>\frac14$ on the initial data. It can easily be checked, by considering the homogeneous case and a scaling argument, that the bound is sharp in the range $4<r<6$, but in the other range the sharpness is not known yet. If the bound in this range could be improved, we might obtain a better regularity result, i.e., the well-posedness threshold could be lowered. \end{rem} \subsection{Properties of angular derivative} In this section we introduce a series of lemmas concerning the angular derivative. \begin{lem}\label{radial1} Let $\psi, f$ be smooth and let $\psi$ be radially symmetric.
Then $$\nabla_{\mathbb{S}} (\psi * f) = \psi * \nabla_{\mathbb{S}} f.$$ \end{lem} From this we can check that the order of the projection operator and the angular derivative can be reversed: $\nabla_{\mathbb S} \dot P_M f=\dot P_M \nabla_{\mathbb S} f$ for $M>0$. The next lemma is the Sobolev inequality on the unit sphere \cite{crt-n}. \begin{lem}\label{sob-s2} For any $2 < \widetilde r < \infty$ $$ \|f\|_{L_\theta^{\widetilde r}(\mathbb{S}^2)} \lesssim \|f\|_{L_\theta^2(\mathbb{S}^2)} + \|\nabla_{\mathbb S}f\|_{L_\theta^2(\mathbb{S}^2)}, \quad \|f\|_{L_\theta^\infty(\mathbb{S}^2)} \lesssim \|f\|_{L_\theta^{\widetilde r}(\mathbb{S}^2)} + \|\nabla_{\mathbb S}f\|_{L_\theta^{\widetilde r}(\mathbb{S}^2)}. $$ \end{lem} The final lemma is an extended Young's convolution estimate. \begin{lem}[Lemma 7.1 of \cite{chonak}]\label{radial2} If $\psi$ is radially symmetric, then $$ \|\psi*f\|_{\mathcal L_\rho^p L^q_\theta} \le \|\psi\|_{L^{p_2}_x} \|f\|_{\mathcal L^{p_1}_\rho L^{q_1}_\theta} , $$ for all $p_1,p_2,p,q,q_1 \in [1,\infty]$ satisfying $$ \frac 1{p_1} + \frac 1{p_2} - 1 = \frac 1p, \quad \frac 1{q_1} + \frac 1{p_2} - 1 \le \frac 1q.$$ \end{lem} \subsection{Norm of Potential} We calculate the $L^p$ norms of $\dot P_M V$ and $\mathcal{F}^{-1} \chi_M$ for $1<p<\infty$. We simply denote $\dot P_M V$ by $V_M$. \begin{align*} \int |V_M(x)|^pdx &= \int_{|x| \le M^{-1}} |V_M(x)|^pdx + \int_{|x| > M^{-1}}|x|^{-4p}|x|^{4p} |V_M(x)|^p dx \\ &\lesssim M^{-3}\|V_M\|_{L^\infty}^p + M^{4p-3}\||x|^{4}V_M\|_{L^\infty}^p\\ &\lesssim M^{-3} \| \chi_M(\xi)\widehat V(\xi) \|_{L^1}^p + M^{4p-3}\|\nabla_\xi^{4}\big( \chi_M(\xi) \widehat V(\xi) \big)\|_{L^1}^p. \end{align*} Using the assumption \eqref{growth} on $V$ we estimate $ \|\nabla_\xi^{k}\big( \chi_M(\xi) \widehat V(\xi) \big)\|_{L^1} \ls M^{-k-\gamma+3}$ for $0\le k\le 4$, where $\gamma=\gamma_1$ if $0<M\le1$ and $\gamma=\gamma_2$ if $ M>1$.
Thus we have \begin{equation}\label{ineq:potential} \|V_M \|_{L_x^p} \lesssim \begin{cases} M^{3-\frac3p-\gamma_1}\ ,&\text{if} \ 0<M\le1, \\ M^{3-\frac3p-\gamma_2}\ ,&\text{if} \ M>1. \end{cases} \end{equation} Also we can check by a simple calculation that \begin{align*} \| \mathcal{F}^{-1} \chi_M \|_{L_x^{p}} \ls M^{3-\frac3p}. \end{align*} Now we are ready to prove the main theorem. \section{Proof of Main theorem}\label{main} Let us define the Banach space $ X^s$ by $$ X^s := \Big\{u : \mathbb R \to H^{s} \Big|\; P_Nu, \nabla_{\mathbb{S}} P_Nu \in \ul^2(\mathbb R; L_x^2) \;\;\forall N \ge N_0 \Big\} $$ with the norm $$\|u\|_{ X^s} = \left( \sum_{N \ge N_0} { N }^{2s}\| P_N u\|_{\ul^{2,1}}^2\right)^\frac12, \;\mbox{where}\; \|u\|_{\ul^{2,1}} = \|u\|_{\ul^2} + \|\nabla_{\mathbb S}u\|_{\ul^2}.$$ Let $X_+^s$ be the restricted space defined by $$ X_+^s = \Big\{ u \in C([0, \infty); H^{s}) \Big|\; \chi_{[0, \infty)}(t)u(t) \in X^s\Big\} $$ with norm $\|u\|_{X_+^s} := \|\chi_{[0, \infty)}u\|_{X^s}$. Let $\mathcal D_+^s(\delta)$ be the complete metric space $\{u \in X_+^s \big|\; \|u\|_{X_+^s} \le \delta\}$ equipped with the metric $d(u, v) := \|u-v\|_{X_+^s}$. Then we will show that the nonlinear functional $\Psi(u) = e^{-it\Lambda_m}\varphi + \mathcal N_m(u,u,u)$ is a contraction on $\mathcal D_+^s(\delta)$, where $$ \mathcal N_m (u_1,u_2,u_3) = -i\int_0^t e^{-i(t-t')\Lambda_m} [V*(u_1\bar{u}_2)u_3]\,dt'. $$ Clearly, $\|e^{-it\Lambda_m} \varphi\|_{X_+^s} \lesssim \|\varphi\|_{H^{s, 1}}$, so it suffices to show that \begin{align}\label{contract} \|\mathcal N_m(u_1,u_2,u_3)\|_{X_+^s} \lesssim \prod_{j=1}^3\|u_j\|_{X_+^s}. \end{align} This readily implies the difference estimate $$ \|\mathcal N_m(u,u,u) - \mathcal N_m(v,v,v)\|_{X_+^s} \lesssim (\|u\|_{X_+^s} + \|v\|_{X_+^s})^2\|u-v\|_{X_+^s} \ \text{for} \ u,v\in X^s,$$ and thus we can find $\delta$ small enough for $\Psi$ to be a contraction mapping on $\mathcal D_+^s(\delta)$. From now on, we simply denote $\mathcal N_m(u,u,u)$ by $\mathcal N_m(u)$.
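To make the smallness bookkeeping behind this contraction explicit (the constants $C_0, C_1$ and the data size $\varepsilon$ below are introduced here only for illustration), write $C_0$ for the implicit constant in the linear estimate and $C_1$ for the one in \eqref{contract}. Then for data with $\|\varphi\|_{H^{s,1}}\le\varepsilon$ and $u,v \in \mathcal D_+^s(\delta)$ with $\delta := 2C_0\varepsilon$, $$ \|\Psi(u)\|_{X_+^s} \le C_0\varepsilon + C_1\delta^3 = \tfrac{\delta}{2} + C_1\delta^3, \qquad d(\Psi(u),\Psi(v)) \le C_1\big(\|u\|_{X_+^s}+\|v\|_{X_+^s}\big)^2\, d(u,v) \le 4C_1\delta^2\, d(u,v), $$ so that once $4C_1\delta^2 \le \tfrac12$, the map $\Psi$ sends $\mathcal D_+^s(\delta)$ into itself and is a $\tfrac12$-contraction; Banach's fixed point theorem then yields the unique solution.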
Since $e^{it\Lambda_m}P_N \mathcal N_m(u)$ and $e^{it\Lambda_m}P_N \nabla_{\mathbb S}\mathcal N_m(u)$ are in $V_{-, rc}^2(\mathbb R; L^2)$, and $$ \sum_{N \ge N_0} { N }^{2s}(\|e^{it\Lambda_m}P_N \mathcal N_m(u)\|_{V^2} + \|e^{it\Lambda_m}P_N \nabla_{\mathbb S}\mathcal N_m(u)\|_{V^2})^2 < \infty $$ from \eqref{contract}, the limit $\lim_{t \to +\infty} e^{it\Lambda_m}\mathcal N_m(u)$ exists in $H^{s, 1}$. Define the scattering state $u_+(t) = e^{-it\Lambda_m}\varphi_+$ with $$\varphi_+:= \varphi + \lim_{t\to +\infty} e^{it\Lambda_m}\mathcal N_m(u).$$ By time symmetry we can argue in a similar way for negative times. Thus we get the desired result. We now start to show \eqref{contract}. We may assume that $u(t) = 0$ for $-\infty < t < 0$. From the duality in Lemma~\ref{Lem:property UV}, \begin{align*} \|P_N \mathcal N_m(u_1,u_2,u_3)\|_{\ul^2} \ls \sup_{\|v\|_{\vl^2} \le 1}\Big|\int_{\mathbb R}\int_{\mathbb R^3} [V * (u_1\bar{u}_2)]u_3(t)P_N\overline{v(t)}\,dxdt\Big|. \end{align*} Using the Littlewood-Paley decomposition and applying Lemma~\ref{radial1} and the Leibniz rule, we have \begin{align}\label{cubicbound0} \|\mathcal N_m(u)\|_{X_+^s}^2 \le \sum_{N \ge N_0}{ N }^{2s} \sup_{\|v\|_{\vl^2} \le 1}\left(\sum_{N_1, N_2, N_3 \ge N_0} \sum_{k=0}^3 I_k(N_1,N_2,N_3,N) \right)^2, \end{align} where \begin{align*} I_0(N_1,N_2,N_3,N)&=\bigg|\iint V *(P_{N_1}u_1 P_{N_2}\bar{u}_2)P_{N_3}u_3 P_N \bar{v}\,dxdt \bigg|, \\ I_1(N_1,N_2,N_3,N)&=\bigg|\iint V *(\nabla_{\mathbb S} P_{N_1}u_1 P_{N_2}\bar{u}_2)P_{N_3}u_3 P_N \bar{v}\,dxdt \bigg|, \\ I_2(N_1,N_2,N_3,N)&=\bigg|\iint V *(P_{N_1}u_1 \nabla_{\mathbb S} P_{N_2}\bar{u}_2)P_{N_3}u_3 P_N \bar{v}\,dxdt \bigg|, \\ I_3(N_1,N_2,N_3,N)&=\bigg|\iint V *(P_{N_1}u_1 P_{N_2}\bar{u}_2)\nabla_{\mathbb S} P_{N_3}u_3 P_N \bar{v}\,dxdt \bigg|. \end{align*} Since the argument will not be affected by complex conjugation, we drop the conjugate symbol. By Lemma~\ref{radial1} we can change the order of the derivative operator $\nabla_{\mathbb S}$ and the projection $P$.
Thus to show \eqref{contract}, it suffices to prove the following: \begin{align}\label{ineq:goal} \sum_{N \ge N_0}{ N }^{2s} \sup_{\|v\|_{\vl^2} \le 1}\left(\sum_{N_1, N_2, N_3 \ge N_0} I(N_1,N_2,N_3,N) \right)^2 \ls \prod_{i = 1, 2, 3}\|u_i\|_{X_+^s}^2, \end{align} where $$I(N_1,N_2,N_3,N) := \Big|\iint V *(P_{N_1}\uon P_{N_2}\utw)P_{N_3}\uth P_N v\,dxdt \Big|,$$ and at most one of the $\mathbf{u_i}$ may take an angular derivative $\nabla_{\mathbb S}$, i.e. $\mathbf{u_i}= \nabla_{\mathbb S} u_i$. To prove this inequality we introduce the following proposition. \begin{prop}\label{prop} Let $s>\frac14$. Suppose $P_{N_i}u_i\in\ul^{2,1}$ for $i=1,2,3$, $P_{N}v \in\vl^{2}$, and at most one of the $\mathbf{u_i}$ takes an angular derivative $\nabla_{\mathbb S}$. Then for $\frac14<\frac1r<\min(s,\frac{\gamma_2}{6},\frac{3}{10})$ it holds that \begin{equation}\label{ineq:key} \begin{split} I(N_1,N_2,N_3,N) &\ls C(N,N_1,N_2,N_3) \|P_{N_1}u_1\|_{\ul^{2,1} } \|P_{N_2}u_2\|_{\ul^{2,1} } \|P_{N_3}u_3 \|_{\ul^{2,1}} \|P_N v \|_{\vl^2} , \\ C(N,N_1,N_2,N_3) &= \begin{cases} N_1^\frac1r N_2^\frac1r& \ \text{for} \ N_3\gtrsim N, \\ \min(N_1,N_2)^\frac1r N_3^\frac1r & \ \text{for} \ N_3\ll N. \end{cases} \end{split} \end{equation} Here the implicit constant depends only on $r, N_0$. \end{prop} Now, we postpone the proof of Proposition~\ref{prop} for a moment and explain how this result implies \eqref{ineq:goal}. Let us split the summation on the LHS of \eqref{ineq:goal} into two parts as follows: $$ \mbox{LHS of}\;\; \eqref{ineq:goal} = \sum_{N_3\gtrsim N}+\sum_{N_3\ll N}:=S_1+S_2. $$ Fix $r$ as in Proposition~\ref{prop}.
Apply the first case of \eqref{ineq:key} to $S_1$: \begin{align*} S_1 &\lesssim \sum_{N \ge N_0 } { N }^{2s} \left( \sum_{N_1, N_2\ge N_0} N_1^\frac1r \|P_{N_1}u_1\|_{\ul^{2,1} } N_2^\frac1r \|P_{N_2}u_2\|_{\ul^{2,1} } \sum_{N_3 \gtrsim N}\|P_{N_3} u_3 \|_{\ul^{2,1}} \right)^2\\ &\lesssim \sum_{N \ge N_0 } \left( \sum_{N_1, N_2\ge N_0} N_1^{\frac1r-s}N_2^{\frac1r-s} N_1^{s}\|P_{N_1}u_1\|_{\ul^{2,1} } N_2^{s} \|P_{N_2}u_2\|_{\ul^{2,1} } \sum_{N_3 \gtrsim N} (\frac{N}{N_3})^s N_3^s\|P_{N_3}u_3 \|_{\ul^{2,1}} \right)^2\\ &\ls \|u_1\|_{X_+^s}^2\|u_2\|_{X_+^s}^2\sum_{N \ge N_0} \left( \sum_{N_3\gtrsim N} (\frac{N}{N_3})^s N_3^s \|P_{N_3}u_3\|_{\ul^{2,1} } \right)^2 \\ &\lesssim \prod_{i = 1, 2, 3}\|u_i\|_{X_+^s}^2. \end{align*} $S_2$ is estimated using the second case of \eqref{ineq:key}. By symmetry we may assume $N_1\le N_2$. \begin{align*} S_2&\lesssim \sum_{N \ge N_0} N^{2s} \left( \sum_{\substack{N_0\le N_1 \le N_2\\ N_3\ll N}} N_1^{\frac1r-s}N_3^{\frac1r-s} N_1^s\|P_{N_1}u_1\|_{\ul^{2,1} } \|P_{N_2}u_2\|_{\ul^{2,1} } N_3^s\|P_{N_3}u_3 \|_{\ul^{2,1}} \right)^2\\ &\ls \|u_1\|_{X_+^s}^2\|u_3\|_{X_+^s}^2\sum_{N \ge N_0} \left( \sum_{N_2\gtrsim N} (\frac{N}{N_2})^s N_2^s \|P_{N_2}u_2\|_{\ul^{2,1} } \right)^2 \\ &\lesssim \prod_{i = 1, 2, 3}\|u_i\|_{X_+^s}^2. \end{align*} So it remains to prove Proposition~\ref{prop}. To simplify the notation, we assume all the functions are localized, i.e., $P_{N_i}u_i=u_i$ for $i=1,2,3$ and $P_{N}v=v$. We use the bold notation $\mathbf{u_i}$ when $u_i$ may or may not take an angular derivative, with the caveat that at most one of the bold $\mathbf{u_i}$ may take it; in other words, the estimates hold true when at most one of the $\mathbf{u_i}$ takes an angular derivative.
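The final summation step in both $S_1$ and $S_2$ rests on the following elementary Schur-type bound for dyadic sums, which we record for completeness (with $a_{N_3}:=N_3^s\|P_{N_3}u_3\|_{\ul^{2,1}}$ in the case of $S_1$): for $s>0$, by the Cauchy-Schwarz inequality, $$ \sum_{N}\Big(\sum_{N_3\gtrsim N}\big(\tfrac{N}{N_3}\big)^{s}a_{N_3}\Big)^2 \le \sum_{N}\Big(\sum_{N_3\gtrsim N}\big(\tfrac{N}{N_3}\big)^{s}\Big) \Big(\sum_{N_3\gtrsim N}\big(\tfrac{N}{N_3}\big)^{s}a_{N_3}^2\Big) \ls \sum_{N_3}a_{N_3}^2\sum_{N\ls N_3}\big(\tfrac{N}{N_3}\big)^{s} \ls \sum_{N_3}a_{N_3}^2, $$ since both geometric sums over dyadic ratios converge for $s>0$.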
\section{Proof of Proposition~\ref{prop}} We perform an additional decomposition of $I$: \begin{align*} I(N_1,N_2,N_3,N) &\lesssim \sum_{M>0} |\iint \dot P_M V *(\uon\utw) \widetilde{\dot{P}_M}( \uth v) \,dxdt | \\ &= \sum_{M>0} |\iint V_M *(\uon\utw) \widetilde{\dot{P}_M} ( \uth v) \,dxdt |, \end{align*} where at most one of the bold $\mathbf{u_i}$ may take the angular derivative. \subsection{Case 1: $N_3\gtrsim N$} In this subsection we prove that \begin{align*} I\ls N_1^\frac1rN_2^\frac1r \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} } \|u_3 \|_{\ul^{2,1}} \|v \|_{\vl^2}. \end{align*} Since the localized Strichartz estimates we apply, \eqref{low str} and \eqref{ang str}, have different admissible pairs depending on whether the frequency support lies in the low part or not, we proceed by dividing into cases according to whether $N_i$ equals $N_0$ or not for $i=1,2,3$. Note that the support properties from the Littlewood-Paley decomposition restrict the range of summation over $M$. \subsubsection{$N_0=\min(N_1,N_2)\sim \max(N_1,N_2)$} In this case the support condition gives $M\ls N_0$. We estimate using H\"older's and Young's inequalities \begin{align*} I &\lesssim \sum_{M\ls N_0} \|V_M *(\uon \utw )\|_{L_t^1 L_x^\infty} \| \widetilde{\dot{P}_M} (\uth v) \|_{L_t^\infty L_x^1} \\ &\lesssim \sum_{M\ls N_0} \| V_M \|_{L_x^\frac32} \| \uon\|_{L_t^2L_x^6} \| \utw\|_{L_t^2L_x^6} \|\uth \|_{L_t^\infty L_x^2} \|v \|_{L_t^\infty L_x^2}. \end{align*} By \eqref{low str} and the embeddings $U_m^2,V_m^2\hookrightarrow L_t^\infty L_x^2$ in Lemma~\ref{Lem:property UV}, we obtain \begin{align*} I&\lesssim_{N_0} \sum_{M\ls N_0} \| V_M \|_{L_x^\frac32} \|u_1 \|_{\ul^{2,1}} \|u_2 \|_{\ul^{2,1}} \|\uth \|_{L_t^\infty L_x^2} \|v \|_{L_t^\infty L_x^2}. \end{align*} Here, the bold $\mathbf{u_3}$ means that the estimates hold in both the $u_3$ and $\nabla_{\mathbb{S}}u_3$ cases.
From \eqref{ineq:potential} we estimate $$\sum_{M\ls N_0} \| V_M \|_{L_x^\frac32}\le\sum_{0<M \le 1} M^{1-\gamma_1}+\sum_{1< M\ls N_0} M^{1-\gamma_2}\ls C(N_0),$$ where the assumption that $\gamma_1$ is less than $1$ is essential. Thus we have \begin{align*} I&\lesssim_{N_0} \|u_1 \|_{\ul^{2,1}} \|u_2 \|_{\ul^{2,1}} \|u_3 \|_{\ul^{2,1}} \|v \|_{\vl^2}. \end{align*} \subsubsection{ $N_0=\min(N_1,N_2)\ll \max(N_1,N_2)$ } In this case $M$ must be comparable to $\max(N_1,N_2)$. We divide into cases according to whether $u_3$ takes the angular derivative or not. { \it (1) $u_3$ case:} In this case at most one of $u_1$, $u_2$ may take the angular derivative. We denote this by the bold $\uon$, $\utw$. We have by H\"older's inequality \begin{align*} I&\ls \sum_{M\sim \max(N_1,N_2)} \|V_M*(\uon\utw)\|_{L_t^1 \mathcal{L}_\rho^\infty L_\theta^\frac{6r}{2r-6}} \|\widetilde{\dot P_M} (u_3 v) \|_{L_t^\infty \mathcal{L}_\rho^1 L_\theta^\frac{6r}{4r+6}}. \end{align*} We compute the first norm. We assume $N_0=N_1<N_2$. We apply Lemma~\ref{radial2}: \begin{align}\label{es1} \|V_M*(u_1u_2)\|_{L_t^1 \mathcal{L}_\rho^\infty L_\theta^\frac{6r}{2r-6}} \lesssim \|V_M\|_{L_x^{\frac{6r}{5r-6}}} \|u_1 u_2\|_{L_t^1 \mathcal{L}_\rho^\frac{6r}{r+6} L_\theta^{2} }. \end{align} We estimate using Lemma~\ref{sob-s2} \begin{align}\label{es2} \begin{aligned} \|u_1 u_2\|_{L_t^1 \mathcal{L}_\rho^\frac{6r}{r+6} L_\theta^{2} } &\lesssim \|u_1\|_{L_t^2 \mathcal{L}_\rho^6 L_\theta^\infty}\|u_2\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^2} \ls (\|u_1\|_{L_t^2 L_x^6} +\|\nabla_{\mathbb S}u_1\|_{L_t^2L_x^6})\|u_2\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^2} \\ &\ls_{N_0} (\|u_1\|_{U_m^2} + \|\nabla_{\mathbb S} u_1\|_{U_m^2}) N_2^\frac1r \|u_2\|_{U_m^2}, \end{aligned} \end{align} where in the last inequality we used the Strichartz estimates \eqref{low str} and \eqref{ang str}.
Similarly we estimate \begin{align}\label{es3} \begin{aligned} \|u_1 u_2\|_{L_t^1 \mathcal{L}_\rho^\frac{6r}{r+6} L_\theta^{2} } &\lesssim \|u_1\|_{L_t^2 L_x^6}\|u_2\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^3} \ls \|u_1\|_{L_t^2 L_x^6} ( \|u_2\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^2} +\|\nabla_{\mathbb S} u_2\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^2}) \\ &\ls_{N_0} \|u_1\|_{U_m^2} N_2^\frac1r ( \|u_2\|_{U_m^2} + \|\nabla_{\mathbb S} u_2\|_{U_m^2}). \end{aligned} \end{align} The other case, $N_0=N_2<N_1$, can be bounded similarly. Thus from \eqref{es1}, \eqref{es2} and \eqref{es3} we obtain \begin{align*} \|V_M*(\uon\utw)\|_{L_t^1 \mathcal{L}_\rho^\infty L_\theta^\frac{6r}{2r-6}} &\ls \|V_M\|_{L_x^{\frac{6r}{5r-6}}} \max(N_1,N_2)^\frac1r\|u_1\|_{U_m^{2,1}} \|u_2\|_{U_m^{2,1}}. \end{align*} Next we estimate the second norm using Lemma~\ref{radial2}: \begin{align*} \|\widetilde{\dot P_M} (u_3 v) \|_{L_t^\infty \mathcal{L}_\rho^1 L_\theta^\frac{6r}{4r+6}} &\ls \| \mathcal{F}^{-1}\chi_M \|_{L_x^{1}} \|u_3\|_{L_t^\infty \mathcal L_\rho^2 L_\theta^\frac{6r}{r+6}} \|v\|_{L_t^\infty L_x^2 } \\ &\ls ( \|u_3\|_{L_t^\infty L_x^2} +\|\nabla_{\mathbb S} u_3\|_{L_t^\infty L_x^2})\|v\|_{L_t^\infty L_x^2 }, \end{align*} where we applied Lemma~\ref{sob-s2} with $r>\frac{10}{3}$. In conclusion, we have \begin{align*} I&\ls_{N_0} \sum_{M \sim \max(N_1,N_2)} \|V_M\|_{L_x^{\frac{6r}{5r-6}}} \| \mathcal{F}^{-1}\chi_M \|_{L_x^{1}} \max(N_1,N_2)^\frac1r \|u_1\|_{U_m^{2,1}} \|u_2\|_{U_m^{2,1}} \|u_3\|_{U_m^{2,1}} \|v\|_{V_m^2} \\ &\ls \max(N_1,N_2)^\frac1r \|u_1\|_{U_m^{2,1}} \|u_2\|_{U_m^{2,1}} \|u_3\|_{U_m^{2,1}} \|v\|_{V_m^2}, \end{align*} where we used $\sum_{M \sim \max(N_1,N_2)}\|V_M\|_{L_x^{\frac{6r}{5r-6}}} \| \mathcal{F}^{-1}\chi_M \|_{L_x^{1}} \ls \sum_{M\sim \max(N_1,N_2) } M^{3(\frac16+\frac1r)-\gamma_2}<C$ for the range of $r$ under consideration, by \eqref{ineq:potential}. { \it (2) $\nabla_{\mathbb S} u_3$ case:} In this case neither $u_1$ nor $u_2$ takes the angular derivative.
We have by H\"older inequality \begin{align}\label{22} I&\ls \sum_{M\sim \max(N_1,N_2)} \|V_M*(u_1u_2)\|_{L_t^1 L_x^\infty } \|\widetilde{\dot P_M} (\nabla_{\mathbb S} u_3 v) \|_{L_t^\infty L_x^1}. \end{align} We consider the former. By symmetry we may assume $N_0=N_1<N_2$. We apply Lemma~\ref{sob-s2} \begin{align*} \|V_M*(u_1u_2)\|_{L_t^1 L_x^\infty} \ls \|V_M*(u_1u_2)\|_{L_t^1 L_x^\infty L_\theta^{\frac{2r}{r-2}}}+ \|\nabla_{\mathbb S} V_M*(u_1u_2)\|_{L_t^1 L_x^\infty L_\theta^{\frac{2r}{r-2}}}. \end{align*} By applying Lemma~\ref{radial2} and H\"older inequality we estimate \begin{align*} \|V_M*(u_1u_2)\|_{L_t^1 L_x^\infty L_\theta^{\frac{2r}{r-2}}} &\ls \|V_M\|_{L_x^{\frac{6r}{5r-6}}}\|u_1 u_2\|_{L_t^1\mathcal{L}_{\rho}^{\frac{6r}{6+r}}L_\theta^\frac32} \ls \|V_M\|_{L_x^{\frac{6r}{5r-6}}}\| u_1\|_{L_t^2 L_x^6 } \|u_2\|_{L_t^2 \mathcal{L}_\rho^{r}L_\theta^2} \\ &\ls_{N_0} \|V_M\|_{L_x^{\frac{6r}{5r-6}}} \|u_1\|_{U_m^2} N_2^\frac1r \|u_2\|_{U_m^2}, \end{align*} The derivative term can be estimated by the same argument as above because by Lemma~\ref{radial1} and Leibniz rule, we have \begin{align*} \|\nabla_{\mathbb S} V_M*(u_1u_2)\|_{L_t^1 L_x^\infty L_\theta^{\frac{2r}{r-2}}} \le \|V_M*(\nabla_{\mathbb S} u_1 u_2)\|_{L_t^1 L_x^\infty L_\theta^{\frac{2r}{r-2}}} + \|V_M*( u_1 \nabla_{\mathbb S} u_2)\|_{L_t^1 L_x^\infty L_\theta^{\frac{2r}{r-2}}}. \end{align*} Then we finally obtain \begin{align*} \|V_M*(u_1u_2)\|_{L_t^1 L_x^\infty} \ls_{N_0} \|V_M\|_{L_x^{\frac{6r}{5r-6}}} \max(N_1,N_2)^\frac1r \|u_1\|_{U_m^{2,1}}\|u_2\|_{U_m^{2,1}}. \end{align*} For the latter in \eqref{22} we only use H\"older inequality and the embedding \begin{align*} \|\widetilde{\dot P_M} (\nabla_{\mathbb S} u_3 v) \|_{L_t^\infty L_x^1} \ls \|\nabla_{\mathbb S} u_3\|_{U_m^2} \|v\|_{V_m^2}. 
\end{align*} In conclusion we get, as in the previous case, \begin{align*} I&\ls_{N_0} \sum_{M\sim \max(N_1,N_2)} M^{3(\frac16+\frac1r)-\gamma_2}\max(N_1,N_2)^\frac1r \|u_1\|_{U_m^{2,1}}\|u_2\|_{U_m^{2,1}}\|u_3\|_{U_m^{2,1}}\|v\|_{V_m^2}\\ &\ls \max(N_1,N_2)^\frac1r \|u_1\|_{U_m^{2,1}}\|u_2\|_{U_m^{2,1}}\|u_3\|_{U_m^{2,1}}\|v\|_{V_m^2}. \end{align*} \subsubsection{ $N_0<N_1,N_2$ } We apply H\"older's inequality \begin{align*} I &\lesssim \sum_{M>0} \|V_M *(\uon\utw)\|_{L_t^1 L_x^\infty} \|\widetilde{\dot{P}_M} (\uth v) \|_{L_t^\infty L_x^1} \\ &\lesssim \sum_{M>0} \|V_M\|_{L_x^{\frac{r}{r-2}}} \|\uon\utw\|_{L_t^1 L_x^\frac r2} \|\uth \|_{L_t^\infty L_x^2} \|v \|_{L_t^\infty L_x^2}. \end{align*} We claim that $ \|\uon\utw\|_{L_t^1 L_x^\frac r2} \ls N_1^{\frac1r}N_2^{\frac1r} \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} }$. Indeed, applying Lemma~\ref{sob-s2} and the spherical Strichartz estimate \eqref{ang str}, we estimate \begin{align*} \|u_1u_2\|_{L_t^1 L_x^{\frac{r}{2}} } &\ls \|u_1\|_{L_t^2 \mathcal L_\rho^r L_\theta^2 } \|u_2\|_{L_t^2 \mathcal L_\rho^r L_\theta^{\frac{2r}{4-r}} } \\ &\ls \|u_1\|_{L_t^2 \mathcal L_\rho^r L_\theta^2 } ( \|u_2\|_{L_t^2 \mathcal L_\rho^r L_\theta^2 } + \|\nabla_{\mathbb S} u_2\|_{L_t^2 \mathcal L_\rho^r L_\theta^2 } ) \\ &\ls N_1^{\frac1r}N_2^{\frac1r}\|u_1\|_{\ul^{2} } \big( \|u_2\|_{\ul^{2} }+ \|\nabla_{\mathbb S} u_2\|_{\ul^{2} } \big). \end{align*} Also, we can exchange the roles of $u_1$ and $u_2$, which implies the claim. Thus we have \begin{align*} I \lesssim \sum_{M>0} \|V_M\|_{L_x^{\frac{r}{r-2}}} N_1^{\frac1r}N_2^{\frac1r} \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} } \|u_3\|_{\ul^{2,1} } \|v \|_{L_t^\infty L_x^2}. \end{align*} We compute the summation over $M$ using \eqref{ineq:potential}: $$ \sum_{M>0} \|V_M\|_{L_x^{\frac{r}{r-2}}}=\sum_{0<M\le 1}M^{\frac6r-\gamma_1}+\sum_{M>1}M^{\frac6r-\gamma_2} <C, $$ which is finite since we chose $r$ so that $r>6/\gamma_2$.
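Explicitly, both sums are geometric series over dyadic $M$: $$ \sum_{0<M\le1}M^{\frac6r-\gamma_1}=\sum_{j\ge0}2^{-j(\frac6r-\gamma_1)}<\infty \quad\text{since } \tfrac6r>\tfrac32>\gamma_1, \qquad \sum_{M>1}M^{\frac6r-\gamma_2}=\sum_{j\ge1}2^{j(\frac6r-\gamma_2)}<\infty \quad\text{since } \tfrac6r<\gamma_2, $$ where the first condition holds because $r<4$ and $\gamma_1<1$, and the second because $\frac1r<\frac{\gamma_2}{6}$.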
\subsection{Case 2: ${N_3\ll N}$} In this subsection we prove \begin{align*} I\ls \min(N_1,N_2)^\frac1r N_3^\frac1r \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} }\|u_3\|_{\ul^{2,1} } \|v\|_{V_m^{2} }. \end{align*} In this case we further split into subcases according to whether $N_3$ coincides with $N_0$ or not. Among these, the case $N_0=\min(N_1,N_2)\sim \max(N_1,N_3)$ is already considered in section~4.1.1. Note that in this range we have $M\sim N\ls \max(N_1,N_2)$. \subsubsection{ $N_0=N_3\ll N$} Suppose $N_0=\min(N_1,N_2)\ll\max(N_1,N_2)$. We estimate \begin{align*} I&\ls \sum_{M \sim N}\|V_M*(\uon\utw)\|_{L_t^2 L_x^3} \|\widetilde{\dot{P}_M} (\uth v) \|_{L_t^2 L_x^\frac32} \\ &\ls \sum_{M \sim N}\|V_M\|_{L_x^\frac32} \|\uon\|_{L_t^2 L_x^6} \|\utw \|_{L_t^\infty L_x^2} \|\uth\|_{L_t^2 L_x^6} \|v \|_{L_t^\infty L_x^2} \\ &\ls_{N_0} \sum_{M \sim N} M^{1-\gamma_2} \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} }\|u_3\|_{\ul^{2,1} } \|v\|_{V_m^{2} }, \end{align*} which completes this case since $\gamma_2>\frac32$. Suppose $N_1,N_2>N_0$. We have \begin{align}\label{rrr-1} I\ls \sum_{M \sim N}\|V_M*(\uon\utw)\|_{L_t^2 \mathcal{L}_\rho^2 L_\theta^r} \|\widetilde{\dot{P}_M} (\uth v) \|_{L_t^2 \mathcal{L}_\rho^2 L_\theta^{\frac{r}{r-1}}}. \end{align} We bound the first term. We assume $\min(N_1,N_2)=N_1$. By Lemma~\ref{radial2} we have \begin{align}\label{22r} \| V_M * (u_1 u_2) \|_{L_t^2 \mathcal{L}_\rho^2 L_\theta^r} &\ls \|V_M\|_{L_x^\frac{r}{r-1}} \|u_1u_2\|_{L_t^2\mathcal{L}_\rho^{\frac{2r}{r+2}}L_\theta^\frac r2}. 
\end{align} We estimate \begin{align}\label{22r2} \begin{aligned} \|u_1u_2\|_{L_t^2\mathcal{L}_\rho^{\frac{2r}{r+2}}L_\theta^\frac r2} &\ls \|u_1\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^{\frac{2r}{4-r}}} \|u_2\|_{L_t^\infty \mathcal{L}_\rho^{2}L_\theta^2} \\ &\ls \Big( \|u_1\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^2} +\|\nabla_{\mathbb S}u_1\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^2} \Big)\|u_2\|_{L_t^\infty L_x^2} \\ &\ls N_1^\frac1r \Big( \|u_1\|_{U_m^2} +\|\nabla_{\mathbb S}u_1\|_{U_m^2} \Big)\|u_2\|_{U_m^2}, \end{aligned} \end{align} where we used Lemma~\ref{sob-s2} since $\frac{2r}{4-r}>2$. Or, exchanging a spherical pair for H\"older inequality we estimate \begin{align}\label{22r3}\begin{aligned} \|u_1u_2\|_{L_t^2\mathcal{L}_\rho^{\frac{2r}{r+2}}L_\theta^\frac r2} &\ls \|u_1\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^{2}} \|u_2\|_{L_t^\infty \mathcal{L}_\rho^{2}L_\theta^\frac{2r}{4-r}} \\ &\ls \|u_1\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^2} \Big(\|u_2\|_{L_t^\infty L_x^2} + \|\nabla_{\mathbb S} u_2\|_{L_t^\infty L_x^2}\Big) \\ &\ls N_1^\frac1r\|u_1\|_{U_m^2} \Big(\|u_2\|_{U_m^2} + \|\nabla_{\mathbb S} u_2\|_{U_m^2}\Big). \end{aligned} \end{align} Since we can change the role of $u_1$ and $u_2$, \eqref{22r},\eqref{22r2} and \eqref{22r3} imply \begin{align}\label{ineq:22r} \|V_M*(\uon\utw)\|_{L_t^2 \mathcal{L}_\rho^2 L_\theta^r} \ls \|V_M\|_{L_x^{\frac{r}{r-1}}} \min(N_1,N_2)^{\frac1r} \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} }. \end{align} Next we bound the second term in \eqref{rrr-1} by applying Lemma~\ref{radial2} \begin{align*} \|\widetilde{\dot{P}_M} (\uth v) \|_{L_t^2 \mathcal{L}_\rho^2 L_\theta^{\frac{r}{r-1}}} &\ls \| \mathcal{F}^{-1}\chi_M\|_{L_x^\frac65} \|\uth v\|_{L_t^2L_x^\frac32} \ls \| \mathcal{F}^{-1}\chi_M\|_{L_x^\frac65} \|\uth \|_{L_t^2L_x^6} \|v\|_{L_t^\infty L_x^2}\\ &\ls_{N_0} \| \mathcal{F}^{-1}\chi_M\|_{L_x^\frac65} \|\uth\|_{U_m^2}\|v\|_{V_m^2}. 
\end{align*} In conclusion we obtain \begin{align*} I&\ls \sum_{M \sim N} \|V_M\|_{L_x^{\frac{r}{r-1}}}\| \mathcal{F}^{-1}\chi_M\|_{L_x^\frac65} \min(N_1,N_2)^{\frac1r} \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} } \| u_3\|_{U_m^{2,1}} \|v\|_{V_m^{2}}. \end{align*} which implies the desired result since we have from \eqref{ineq:potential} $$\sum_{M\sim N} \|V_M\|_{L_x^{\frac{r}{r-1}}} \|\mathcal{F}^{-1}\chi_M\|_{L_x^\frac65} \ls \sum_{M \sim N} M^{\frac3r+\frac12-\gamma_2}<C.$$ \subsubsection{ $N_0<N_3\ll N \ \text{and}\ N_0=\min(N_1,N_2)\ll \max(N_1,N_2) $ } We divide the case according to whether $u_3$ takes the angular derivative or not. { \it (1) $u_3$ case:} We have \begin{align*} I\ls \sum_{M \sim N}\|V_M*(\uon\utw)\|_{L_t^2 L_x^\frac{2r}{r-2}} \|\widetilde{\dot P_M} ( u_3 v) \|_{L_t^2 L_x^{\frac{2r}{r+2}}}. \end{align*} We compute the first norm. By symmetry we may assume $\min(N_1,N_2)=N_1$. \begin{align*} \|V_M*(\uon\utw)\|_{L_t^2 L_x^\frac{2r}{r-2}} &\ls \|V_M\|_{L^\frac{6r}{5r-6}}\|\uon\utw\|_{L_t^2 L_x^{\frac32}} \ls \|V_M\|_{L^\frac{6r}{5r-6}} \|\uon\|_{L_t^2L_x^6} \|\utw\|_{L_t^\infty L_x^2} \\ &\ls_{N_0} \|V_M\|_{L^\frac{6r}{5r-6}} \|\uon\|_{U_m^2} \|\utw\|_{U_m^2}. \end{align*} And we estimate the second term using Lemma~\ref{sob-s2} \begin{align*} \|\widetilde{\dot P_M} ( u_3 v) \|_{L_t^2 L_x^{\frac{2r}{r+2}}} &\ls \| \mathcal{F}^{-1}\chi_M \|_{L_x^{1}} \|u_3\|_{L_t^2 L_x^r} \|v\|_{L_t^\infty L_x^2} \ls \big( \|u_3\|_{L_t^2\mathcal L_\rho^r L_\theta^2} + \| \nabla_{\mathbb S} u_3\|_{L_t^2\mathcal L_\rho^r L_\theta^2}\big)\|v\|_{L_t^\infty L_x^2}\\ &\ls N_3^\frac1r \big( \|u_3\|_{U_m^2} + \|\nabla_{\mathbb S} u_3\|_{U_m^2}\big) \|v\|_{V_m^2}. \end{align*} Thus we have \begin{align*} I&\ls \sum_{M \sim N} M^{3(\frac1r+\frac16)-\gamma_2} N_3^\frac1r \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} } \| u_3\|_{U_m^{2,1}} \|v\|_{V_m^{2}} \\ &\ls N_3^\frac1r \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1} } \| u_3\|_{U_m^{2,1}} \|v\|_{V_m^{2}}. 
\end{align*} { \it (2) $\nabla_{\mathbb S} u_3$ case:} We have \begin{align*} I&\ls \sum_{M\sim \max(N_1,N_2)} \|V_M*(u_1u_2)\|_{L_t^2 \mathcal{L}_\rho^\frac{2r}{r-2}L_\theta^\infty} \|\widetilde{\dot P_M} (\nabla_{\mathbb S}u_3 v) \|_{L_t^2 \mathcal{L}_\rho^{\frac{2r}{r+2}}L_\theta^1}. \end{align*} We estimate the first norm. Applying Lemma~\ref{sob-s2} we obtain \begin{align*} \|V_M*(u_1u_2)\|_{L_t^2 \mathcal{L}_\rho^\frac{2r}{r-2}L_\theta^\infty} \ls \|V_M*(u_1u_2)\|_{L_t^2 \mathcal{L}_\rho^\frac{2r}{r-2}L_\theta^\frac{2r}{r-2}} + \| \nabla_{\mathbb S} V_M*(u_1u_2)\|_{L_t^2 \mathcal{L}_\rho^\frac{2r}{r-2}L_\theta^\frac{2r}{r-2}}. \end{align*} By Young's and H\"older inequality we have \begin{align*} \|V_M*(u_1u_2)\|_{L_t^2 L_x^\frac{2r}{r-2}} \ls \|V_M\|_{L_x^{\frac{6r}{5r-6}}} \|u_1\|_{L_t^2 L_x^6} \|u_2\|_{L_t^\infty L_x^2} \ls_{N_0} \|V_M\|_{L_x^{\frac{6r}{5r-6}}} \|u_1\|_{U_m^2} \|u_2\|_{U_m^2}. \end{align*} Then by Leibniz rule we can bound the derivative term similarly and finally get \begin{align}\label{1} \|V_M*(u_1u_2)\|_{L_t^2 \mathcal{L}_\rho^\frac{2r}{r-2}L_\theta^\infty} \ls \|V_M\|_{L_x^{\frac{6r}{5r-6}}} \|u_1\|_{U_m^{2,1}}\|u_2\|_{U_m^{2,1}}. \end{align} We apply Lemma~\ref{radial2} to the second term \begin{align}\label{2} \|\widetilde{\dot P_M} (\nabla_{\mathbb S} u_3 v) \|_{L_t^2 \mathcal{L}_\rho^{\frac{2r}{r+2}}L_\theta^1} \ls \| \mathcal{F}^{-1}\chi_M \|_{L_x^{1}} \| \nabla_{\mathbb S} u_3\|_{L_t^2 \mathcal{L}_\rho^r L_\theta^2} \|v\|_{L_t^\infty L_x^2} \ls N_3^\frac1r \| \nabla_{\mathbb S} u_3 \|_{U_m^{2}} \|v\|_{V_m^{2}}. \end{align} By \eqref{1} and \eqref{2} we obtain \begin{align*} I &\ls_{N_0} \sum_{M\sim N }\|V_M\|_{L_x^{\frac{6r}{5r-6}}} \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1}} N_3^\frac1r \| u_3\|_{U_m^{2,1}} \|v\|_{V_m^{2}} \\ &\ls N_3^\frac1r \|u_1\|_{\ul^{2,1} } \|u_2\|_{\ul^{2,1}} \| u_3\|_{U_m^{2,1}} \|v\|_{V_m^{2}}. 
\end{align*} \subsubsection{ $N_0<N_1,N_2,N_3 \ \text{and}\ N_3\ll N$ } \begin{align*} I &\lesssim \sum_{M\sim N} \|V_M *(\uon\utw)\|_{L_t^2 \mathcal{L}_\rho^2 L_\theta^r} \|\widetilde{\dot{P}_M}(\uth v) \|_{L_t^2 \mathcal{L}_\rho^2 L_\theta^{\frac{r}{r-1}}}. \end{align*} The first term is bounded as in \eqref{ineq:22r}. For the second one we apply Lemma~\ref{radial2} \begin{align*} \|\widetilde{\dot{P}_M}(\uth v) \|_{L_t^2 \mathcal{L}_\rho^2 L_\theta^\frac{r}{r-1}} &\ls \| \mathcal{F}^{-1} \chi_M \|_{L_x^{\frac{r}{r-1}}} \|\uth v\|_{L_t^2\mathcal{L}_\rho^{\frac{2r}{r+2}}L_\theta^1} \ls \| \mathcal{F}^{-1} \chi_M \|_{L_x^{\frac{r}{r-1}}} \|\uth\|_{L_t^2 \mathcal{L}_\rho^{r}L_\theta^2} \|v\|_{L_t^\infty \mathcal{L}_\rho^{2}L_\theta^2} \\ &\ls \| \mathcal{F}^{-1} \chi_M \|_{L_x^{\frac{r}{r-1}}} N_3^\frac1r \|\uth\|_{U_m^2}\|v\|_{V_m^2} . \end{align*} Then the claim follows since $\sum_{M\sim N} \|V_M\|_{L_x^{\frac{r}{r-1}}}\| \mathcal{F}^{-1} \chi_M \|_{L_x^{\frac{r}{r-1}}} \ls \sum_{M\sim N} M^{\frac6r-\gamma_2} < C $ by \eqref{ineq:potential}. \section*{Acknowledgements} The author would like to thank Prof. Yonggeun Cho for his encouragement and advice on the paper. And the author is grateful to the referee for careful reading of the paper and valuable comments. This work was supported by NRF (NRF-2015R1D1A1A09057795). \medskip
The New Economics of Social Change by Morgan Simon

A leading investment professional explains the world of impact investing–investing in businesses and projects with a social and financial return–and shows what it takes to make sustainable, transformative change.

Impact investment–the support of social and environmental projects with a financial return–has become a hot topic on the global stage, poised to eclipse traditional aid by ten times in the next decade. But the field is at a tipping point: Will impact investment empower millions of people worldwide, or will it replicate the same mistakes that have plagued both aid and finance? Morgan Simon is an investment professional who works at the nexus of social finance and social justice. In Real Impact, she teaches us how to get it right, leveraging the world's resources to truly transform the economy. Over the past seventeen years, Simon has influenced over $150 billion from endowments, families, and foundations. In Real Impact, Simon shares her experience as both investor and activist to offer clear strategies for investors, community leaders, and entrepreneurs alike. Real Impact is essential reading for anyone seeking real change in the world.

Genre: Nonfiction / Social Science / Philanthropy & Charity
On Sale: October 3rd 2017

"Morgan Simon has made a significant contribution with the very big idea that we can change the world by changing how we all relate to money. And lucky for us, Simon is as entertaining in her writing as she is brilliant in her concepts."—Van Jones, CNN

"Impact investing is a subject that deserves in-depth, powerful scrutiny. Real Impact offers much of that, and will give readers an introduction into understanding where this kind of work can bring hopeful change and where it can't. Timely!"—Bill McKibben, author of Deep Economy

"Where we invest speaks to our values as a country that prioritizes our collective social welfare. Morgan Simon's innovative investment approach ensures money can serve as a force for good, for everyone."—Congressman Keith Ellison, member of the House Financial Services Committee and co-Chair, Congressional Progressive Caucus

"Real Impact is a unique and valuable teaching tool. Morgan Simon's expertise in the field is unparalleled, and brilliantly shared through this book."—Vikram Gandhi, Senior Lecturer, Harvard Business School; former Vice-Chairman of Investment Banking, Credit Suisse

"Real Impact is a gift to the academic community. I know of no other resource available with such a balance of thought-provoking investment philosophy and practical advice--reflecting the depth of Morgan Simon's expertise and experience in impact investment."—Heidi Krauel Patel, Lecturer in Management, Graduate School of Business at Stanford University

"A critical cautionary tale--how do we scale social impact investment without leaving anyone behind? Morgan Simon is a master practitioner at inclusive investment; read Real Impact to learn from her compelling example."—Ben Jealous, Partner, Kapor Capital and former President, NAACP

"To drive significant social and environmental progress around the world, investors need to understand how to structure and deliver capital in a way that works for high-impact enterprises. Real Impact highlights the complicated trade-offs they will face along the risk-return spectrum and offers a blueprint for market growth. It helps fill the knowledge gap between optimism and execution."—Debra D. Schwartz, Managing Director of Impact Investments, MacArthur Foundation

"Over 1,700 asset owners around the world, from pension funds to insurance companies and sovereign wealth funds, have become signatories to the UNPRI, as it becomes increasingly clear that social and environmental trends can create financial opportunities. Investors now have the capacity to help solve the world's problems and deliver strong returns for their beneficiaries and clients. Morgan Simon is a global leader in impact investing, and this book offers critical lessons for those interested in being on the cutting edge of this exciting new field."—Dr. James Gifford, Founder and former Executive Director, United Nations Principles on Responsible Investment; Senior Impact Investment Strategist, UBS

"Clear-eyed, explicit, and tinged with just the right amount of outrage, this is a clarion call that the world...needs to hear."—Publishers Weekly

"A clear-eyed case for socially conscious investment, of much interest to those who want their dollars to do good."—Kirkus Reviews

"Readers interested in social and economic justice, climate change, housing, gender equality, and other pertinent topics will be compelled by Simon's exploration of impact investment as a creative way to challenge social problems in global and local contexts." -
\section{Introduction} The Clementine mission to survey the lunar surface, with its high resolution imagery, has re-opened the dormant field of crater counting on the Moon \citep{pete94,moore96,mcewen97}. \citet{morota03} have used these high quality images to count young rayed craters and concluded that a factor of 1.5 enhancement exists in the current cratering record when comparing fitted crater densities precisely at the apex to those directly at the antapex. An excess of leading-hemisphere craters is expected on synchronously-rotating satellites in general; \citet{hor84} give expressions for the expected asymmetry caused by the leading hemisphere of the satellite ``sweeping up'' more objects. Based on such expressions and using their previous crater counts, \citet{morota05} concluded that the measured leading enhancement implies an average impactor encounter speed with the Earth-Moon system of 12-16 km/s, consistent (though on the low end) with previous estimates \citep{shoe83,chyba94}. Other synchronously-rotating moons have been studied in great detail using the high-resolution images from Voyager. By examining previously-obtained crater counts of the Galilean satellites, \citet{zetal01} reconfirm the prediction of a leading hemisphere excess (which is quite large in the Jovian system due to high satellite orbital speeds) and are able to derive an analytical expression to describe the asymmetry that is consistent with results of \citet{shoe82} and \citet{hor84}. In addition to this leading/trailing asymmetry, some authors believe that the Earth could act as a gravitational lens, focusing incoming projectiles onto the nearside of the Moon, causing an asymmetry between the near and far sides \citep{turk62,wiesel71}. 
This concept has been numerically investigated by \citet{wood73,fev05} and deemed plausible, though an analytic calculation by \citet{band73} showed the amplitude of the asymmetry depends on the Earth-Moon distance and the velocity of the incoming projectiles. Our goal is to determine if the present-day flux of Near-Earth Objects (NEOs) would produce a spatial distribution of young craters on the lunar landscape that agrees with these studies, in terms of both leading/trailing and nearside/farside asymmetries. Our approach involves many N-body simulations of projectiles drawn from the debiased NEO model of \citet{bottke02}, which are then injected into the Earth-Moon system. Since a realistic impactor distribution is not expected to be isotropic, one might find some latitudinal variation in the spatial distribution of deliveries on both the Earth and Moon. This concept has been studied in the past \citep{xmas64}, but more recently by \citet{fev06} who concluded 60\% and 30\% latitudinal variations for the Moon and Earth respectively when comparing the impact density at the poles to that of the equator. We seek to compare our direct N-body simulations to their semi-analytic approach as they also used the \citet{bottke02} NEO model. Asymmetries may also be present for the Earth and would be most reliably observed via fireball sightings and meteorite recoveries. Based on observations, many researchers concluded that more daytime fireballs occur in the afternoon hours, causing a morning/afternoon (AM/PM) asymmetry \citep{wether68,xmas82}. Several studies \citep{xmas82,wether85,morglad98} have been done to understand this PM excess. \citet{xmas82} produced a PM fireball excess, but drew from a highly-selected group of orbital parameters for the meteoroid source population. These orbits may not represent the true distribution of objects in near-Earth space. 
Thus, since we will obtain many simulated Earth impacts in our lunar study, an additional benefit is to reinvestigate the question of the AM/PM asymmetry. The debiased NEO model we use as our source population should give a better approximation of the objects in near-Earth space than the parameters used by \citet{xmas82}. Annual effects are also of interest as at different times of the year certain locations on Earth receive an enhancement in the impactor flux due to the location of the Earth's spin axis relative to the incoming flux direction \citep{xmas82,renkno89}. In Section~\ref{sec-theory}, we give a brief overview on the theory of various asymmetries in the Earth-Moon system. Section~\ref{sec-MM} describes the model for our simulations as well as the methods of implementation. Our findings begin in Section~\ref{sec-ED}, where we first examine deliveries of projectiles to Earth in terms of the latitude distribution and then the AM/PM asymmetry. Lunar results follow in Section~\ref{sec-LB} where we examine the latitude distribution and the leading/trailing and nearside/farside hemispheric asymmetries. Section~\ref{sec-con} summarizes our key findings and discusses some possible implications. \section{Asymmetry Theory} \label{sec-theory} Why search for asymmetries? For the Earth in terms of a morning/afternoon asymmetry, one can glean some information about the origin of the impacting population as certain types of orbits will preferentially strike during specific local times. More importantly though, when looking at the the Moon, any spatial variation in the observed crater density will affect how those craters are used to interpret the complex cratering history of the Moon. 
\subsection{Meteorite fall statistics on Earth} The details concerning an afternoon (PM) excess of meteorite falls on the Earth have been the subject of much debate, creating a large body of literature on the subject ({\it e.g.,}$\;$ \citet{xmas64,wether68,xmas82,wether85,morglad98} and references therein). This concept was motivated by the fact that chondrites seem to be biased to fall during PM hours. \citet{wether68} quantified the effect as the ratio of daytime PM falls to the total number of daytime falls and arrived at a ratio $\sim 0.68$, where the convention is to count only falls between 6AM and 6PM since human observers are less numerous between midnight and 6AM. However, even this daytime value may be socially biased as there are more potential observers from noon-6PM than in the 6AM-noon interval. A hypothesis to explain a PM excess is that prograde, low-inclination meteoroids with semi-major axes $> 1$~AU and pericenters just inside 1~AU preferentially coat Earth's trailing hemisphere. Several dynamical and statistical studies \citep{xmas82,wether85,morglad98} support this afternoon enhancement scenario. In our work, we will use the debiased NEO model of \citet{bottke02} to compute the distribution of fall times from an NEO source population. We can use this to learn about the true orbital distribution of meteorite-dropping objects. \subsection{Lunar leading hemisphere enhancement} \label{sec:llhe} A leading hemisphere enhancement originates from the satellite's motion about its host planet. As it orbits, the leading hemisphere tends to encounter more projectiles, thus enhancing the crater production on that side. The faster the satellite orbits, the more difficult it becomes for objects to encounter the trailing hemisphere and the higher the leading-side impact speeds become. In addition to this effect, the size-frequency distribution of craters will be skewed towards larger diameter craters on the leading hemisphere. 
A crater, of size $D_c$, at the apex will have been produced from a smaller-sized impactor (on average) than one which makes the same-sized crater at the antapex because impact speeds $v_{imp}$ are generally higher on the leading hemisphere and the crater diameter scales roughly as $D_c \propto v_{imp}^{2/3}$ ({\it e.g.,}$\;$ \citet{boris01}). Analytic investigations into leading/trailing asymmetries have ranged from the general \citep{hor84} to the very specific \citep{shoe82,zetal98}, but all analytic treatments are forced to make the assumption that the impacting population has an isotropic distribution in the rest frame of the planet. In some cases, the gravity of the satellite is ignored, and in the above treatments, impact probabilities (depending on the encounter direction and speed) were not included in the derivation of cratering rates. For fixed impactor encounter speed and an isotropic source distribution, the areal crater density $\Gamma$ would follow the functional form \begin{equation} \Gamma(\beta) = \bar{\Gamma}\left(1 + \alpha\cos\beta \right)^g, \label{eq:ass} \end{equation} where $\beta$ is the angle from the apex of the moon's motion and $\bar{\Gamma}$ is the value of the crater density at $\beta = 90$\degrees \citep{zetal01,morota06}. The ``amplitude'' $\alpha$ of the asymmetry is related to the orbital velocity $v_{orb}$ of the Moon and the velocity of the projectiles at infinity $v_{\infty}\ $ by \begin{equation} \alpha = \frac{v_{orb}}{\sqrt{2v_{orb}^2 + v_{\infty}^2}}\;. \label{eq:alf} \end{equation} Note that for large encounter speeds, $\alpha \rightarrow v_{orb}/v_{\infty}$ and that $\alpha \rightarrow 1/\sqrt{2}$ as $v_{\infty} \rightarrow 0$. The exponent $g$ (which also affects the asymmetry) is expressed as \begin{equation} g= 2.0 + 1.4\;b, \label{eq:gee} \end{equation} where $b$ is the slope of the cumulative mass distribution for the impactors \begin{equation} N(>m) \propto m^{-b}\;. 
\end{equation} Based on current observations \citep{bottke02}, $b = 0.58 \pm 0.03$, making $g = 2.81 \pm 0.05$. Although Eq.~\ref{eq:ass} may be fit to an observed crater distribution, this does not necessarily provide a convenient measure of the asymmetry. Figure~\ref{fig:dirs} shows that for our impactor population (see Sec.~\ref{sec-source}) the isotropic assumption fails, so one should expect deviations from the functional form of Eq.~\ref{eq:ass}. Note that increasing either $\alpha$ or $g$ raises the leading/trailing asymmetry, introducing a degeneracy in the functional form, making it difficult to decouple the two when determining information about $v_{\infty}\ $ and the size distribution. In fact the observations actually permit a very large range of parameter values. Performing a maximum likelihood parameter determination using the rayed crater data of \citet{morota03} yields Fig.~\ref{fig:like}. Also, determining $\bar\Gamma$ with a high degree of precision from a measured crater field is difficult. To minimize these issues when examining the entire lunar surface, we adopt the convention of \citet{zetal01} by taking the ratio of those craters which fall within 30\degrees of the apex to those which are within 30\degrees of the antapex. This ratio forms a statistic known as the global measure of the apex-antapex cratering asymmetry (GMAACA). \begin{figure}[H] \vspace{0.5cm} \hspace{2.5cm}\epsfig{file=dir_bestB.ps,height=7.0cm,angle=-90} \hspace{-1.5cm}\epsfig{file=dir1.ps,height=7.0cm,angle=-90} \caption{A scatter plot of $\theta$ vs. $\phi$ for the source population used in our simulations (a) as well as a distribution of those angles (b). $\theta$ is the polar angle between the relative velocity vector of the potential impactor and the Earth's direction of motion and $\phi$ is an azimuthal location of the relative velocity \citep{val99}. 
These figures show that our distribution of possible impactors is not isotropic, a key assumption in many cratering theories. Only $\phi$ values from $0 - 90$\degrees are shown in (a) corresponding to post-pericenter encounters at ascending node; higher values of $\phi$ are simply mirror images of the scatter plot shown as all encounters have the same value of $\theta$ for a given object.} \label{fig:dirs} \end{figure} \begin{figure}[H] \begin{center} \epsfig{file=likelihood_blackC.ps,height=13.5cm,angle=-90} \end{center} \caption{A contour plot of log(likelihood) values showing the degeneracy present in the parameters $\alpha$ and $g$ from Eq.~\ref{eq:ass} using the rayed crater data from \citet{morota03}. Best fit values for the rayed crater data are shown as the dot at $\alpha = 0.063$ and $g = 4.62$. This value for g gives the slope of the impactor mass distribution as $b = 1.87$, over 3 times the observationally determined value of 0.58. However, a large range of $\alpha$ and $g$ values give acceptable fits to the data, making it impossible to measure the parameters of the projectile population from the crater distribution alone.} \label{fig:like} \end{figure} Because of the Moon's small 1.0~km/s orbital velocity and standard literature values of $\bar{v}_{\infty} \sim 10-16$ km/s \citep{shoe83,chyba94}, one expects $\alpha \approx 1/16-1/10$ and thus a crater enhancement on the leading hemisphere in the range $\sim 1.4-1.7$, in terms of the GMAACA. Recent studies of young rayed craters \citep{morota03,morota05} are found to be consistent with these estimates. We will calculate the GMAACA that current NEO impactors produce and compare with these values. \newpage \subsection{Lunar nearside/farside debate} \label{sec-nearfar} The concept of a nearside/farside asymmetry has garnered less attention in the lunar cratering literature. 
\citet{wiesel71} discussed the idea that the Earth would act as a gravitational lens, focusing incoming objects onto the nearside of the Moon. However, the degree to which this lensing effect occurs is unclear. The amplitude of nearside enhancement has been reported as insignificant \citep{wiesel71}, a factor of two \citep{turk62,wood73}, and most recently, a factor of four compared with the farside in a preliminary analysis by \citet{fev05}. In contrast, \citet{band73} used analytic arguments to claim no a priori reason to expect such an asymmetry; the Moon may be in a region of convergent or divergent flux because the focal point of the lensed projectiles depends on the encounter velocity of the objects and the Earth-Moon distance. These latter authors estimated that there should currently be a negligible difference in the nearside/farside crater production rate. We will also investigate this issue as the initial conditions used in the previous dynamical studies were somewhat artificial. \section{Model and numerical methods} \label{sec-MM} \subsection{The source population} \label{sec-source} The cratering asymmetry expected on the synchronously-rotating Moon depends on the impactor speed distribution that is bombarding the satellite (the scalar $v_{\infty}\ $ distribution and its directional dependence). Therefore, we must have a model of the small-body population crossing Earth's orbit. The potential impactors come from the asteroid-dominated ({\it e.g.,}$\;$ \citet{getal00}) near-Earth object (NEO) population. The orbital distribution of these objects is best modeled by \citet{bottke02} who fit a linear combination of several main-belt asteroid source regions and a Jupiter-family comet source region to the Spacewatch telescope's NEO search results. Since the detection bias of the telescopic system was included, this yields a model of the true NEO orbital-element distribution. 
W.~Bottke (private communication, 2005) has provided us with an orbital-element sampling of their best fit distribution, which we have then restricted to 16307 orbits in the Earth-crossing region (Fig.~\ref{fig:aei}). This will be our source population for the impactors which transit through the Earth-Moon system. We note the common occurrence of high eccentricity $e$ and inclination $i$ orbits in the NEO sample, which result in high values of $v_{\infty}\ $ for the bombarding population. Although semimajor axes $a$ in the $a$=2.0--2.5~AU region are densely populated (since this is the semimajor axis range of the dominant main-belt sources), impact probability is largest for orbits with perihelia $q=a(1-e)$ just below 1~AU or for aphelia $Q=a(1+e)$ just above 1~AU \citep{morglad98}. Since the $q\sim1$ population is much larger than the $Q\sim1$ population, one might expect the former to dominate the impactors (but see Sec.~\ref{sec-ampmsec}). \begin{figure}[H] \begin{center} \includegraphics[height=9.5cm,angle=0]{ae.eps} \hspace{0.5cm} \includegraphics[height=9.5cm,angle=0]{ai.eps} \end{center} \caption{The orbital distribution of candidate impactors used in our simulations, based on the debiased near-Earth object (NEO) model of \citet{bottke02}. We have used only objects whose perihelion $q$ and aphelion $Q$ satisfy $q \leq 1$ AU and $Q \geq 1$ AU. (a) Semi-major axis $a$ versus eccentricity, $e$. Note the increase in density near $a \sim 2.5$ AU due to the 3:1 orbital resonance with Jupiter. (b) Semi-major axis versus inclination $i$ with respect to the ecliptic.} \label{fig:aei} \end{figure} \subsection{Setup of the flyby geometries} \label{sec-setupgeo} The orbital model provides only the $(a,e,i)$ distribution of the NEO population and we expect that the 1.7\% eccentricity of the Earth's orbit is a small correction to the impactor distribution, so for what follows we have modeled the Earth's heliocentric orbit as perfectly circular. 
As a result we can, without loss of generality, take all encounters to occur at (1~AU,0,0) in heliocentric coordinates, where we have lost knowledge of the day of the year of the encounters (although we can easily average over the year by {\it post-facto} selecting a random azimuth for the Earth's spin pole at the time of a projectile's arrival at the top of the atmosphere). With this restriction, each Earth-crossing orbit can have an encounter in one of four geometries depending on the argument of pericenter $\omega$ and the true anomaly $f$, which must satisfy: \begin{equation} r = 1~\mathrm{AU} = \frac{ a(1-e^2) }{1 + e \cos f } \; \; \; . \end{equation} By construction, the encounter must occur at either the ascending or descending node along the $x$-axis, and so the longitude of ascending node is $\Omega$=0 or $\pi$. Taking $f$ and $\omega$ in $[0,2\pi)$, for ascending encounters, $f=2\pi-\omega$ for either encounters with pericenters above ($\omega=[0,\pi)$) or below ($\omega=[\pi,2\pi)$) the ecliptic. If the encounter occurs at the descending node, $f = \pi - \omega$ for post-pericenter encounters and $f = 3\pi - \omega$ for pre-pericenter encounters. With this in mind, we effectively quadruple the number of initial conditions to 65228. With the longitude of ascending node fixed (ie. $\Omega = 0 \mathrm{\ or\ }\pi$), we then construct a plethora of incoming initial conditions based on the orbital elements from the debiased NEO model converted to Cartesian coordinates. For each initial orbit, all four of the encounter geometries are equally likely. We randomly choose a particle for a flyby based on its encounter probability with the Earth as judged by an \"Opik collision probability calculation \citep{dones99}. Gravitational focusing by the Earth was {\it not} included in the encounter probability estimate as an increased frequency of Earth deliveries will occur naturally during the flyby phase if the Earth's gravity is important. 
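The geometry bookkeeping above can be made concrete with a short script. The following is an illustrative sketch, not the code used in the simulations: it computes the true anomaly at $r=1$~AU, enumerates the four node/pericenter encounter configurations, and estimates $v_{\infty}$ from the standard Tisserand-parameter (\"Opik $U$) relation for a circular Earth orbit. The constants (Earth orbital speed 29.78 km/s, lunar orbital speed 1.02 km/s) and the slope $b=0.58$ are assumed values, and the last function merely previews the GMAACA implied by Eqs.~\eqref{eq:ass}--\eqref{eq:gee}.

```python
import math

V_EARTH = 29.78   # Earth's heliocentric orbital speed [km/s] (assumed constant)
V_MOON = 1.022    # Moon's orbital speed about the Earth [km/s] (assumed constant)

def true_anomaly_at_1AU(a, e):
    """Post-pericenter true anomaly f0 solving r = a(1-e^2)/(1+e cos f) = 1 AU."""
    return math.acos((a * (1.0 - e * e) - 1.0) / e)

def encounter_geometries(a, e):
    """The four (node, f, omega) configurations with the crossing on the x-axis."""
    f0 = true_anomaly_at_1AU(a, e)
    geoms = []
    for node, offset in (("ascending", 0.0), ("descending", math.pi)):
        for f in (f0, 2.0 * math.pi - f0):          # post-/pre-pericenter branch
            omega = (offset - f) % (2.0 * math.pi)  # places the node at r = 1 AU
            geoms.append((node, f, omega))
    return geoms

def v_infinity(a, e, i):
    """Encounter speed with a circular-orbit Earth via the Tisserand parameter."""
    T = 1.0 / a + 2.0 * math.sqrt(a * (1.0 - e * e)) * math.cos(i)
    return V_EARTH * math.sqrt(max(3.0 - T, 0.0))

def gmaaca(v_inf, b=0.58, cap_deg=30.0, n=2000):
    """Apex/antapex crater-count ratio implied by Gamma(beta) = (1+alpha cos beta)^g."""
    alpha = V_MOON / math.sqrt(2.0 * V_MOON ** 2 + v_inf ** 2)
    g = 2.0 + 1.4 * b
    cap = math.radians(cap_deg)

    def cap_counts(b0, b1):  # midpoint rule over a spherical cap, d(area) ~ sin(beta)
        h = (b1 - b0) / n
        return sum((1.0 + alpha * math.cos(b0 + (k + 0.5) * h)) ** g
                   * math.sin(b0 + (k + 0.5) * h) * h for k in range(n))

    return cap_counts(0.0, cap) / cap_counts(math.pi - cap, math.pi)
```

For a representative sampled orbit ($a=2.0$~AU, $e=0.6$, $i\approx6$\degrees) this gives $v_{\infty}\approx 15$~km/s and a predicted apex/antapex ratio in the 1.4--1.5 range, consistent with the analytic expectation quoted above.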
Both Earth and a test particle (TP) are then placed at the nodal intersection and moved backwards on their respective orbits until the separation between the NEO and the Earth is 0.02~AU. At this point, we create a disk of $10^5$ non-interacting test particles, meant to represent potentially-impacting asteroids or comets, centered on the chosen orbit. A short numerical integration is run for each of the 65228 initial conditions and one TP trajectory that results in an arrival at the Earth in each flyby is placed into a table of new initial conditions. This procedure is performed because during the ``backup phase'' to separate the Earth and TP by 0.02~AU, gravitational focusing was not accounted for. The omission could result in the particle missing the Earth in a forward integration, as the Earth's gravity would be present, modifying the chosen trajectory. This gives us a final set of 65228 initial trajectories that strike the Earth when a forward integration is performed. For convenience we then convert from heliocentric coordinates to a geocentric frame of reference. For our simulations, one of the new initial conditions is randomly chosen based on a newly calculated encounter probability with the Earth-Moon system. A new disk, centered on the chosen trajectory, is randomly populated with $10^5$ test particles, all given identical initial velocities. The radius of the disk is chosen to be 2.5 lunar orbital radii as testing showed that for all of our initial conditions, a disk of this size spans the entire lunar orbit as it passes the Earth's position. This was done to ensure the crater distribution had no dependence on the lunar orbital phase. Figure~\ref{fig:setup} shows a schematic of the simulations. \begin{figure}[H] \begin{center} \epsfig{file=cartoon.eps,height=12.5cm,angle=0} \end{center} \caption{A schematic of one 3-dimensional flyby projected onto the ecliptic in a heliocentric coordinate frame. 
A disk (thick, solid line) of $10^5$ test particles, all with identical initial velocities, is created around a central trajectory based on the NEO source population of \citet{bottke02}. The disk is 2.5 lunar orbital radii in radius and encompasses the entire lunar orbit for any of our possible initial conditions. These test particles are then integrated forward along with the Earth, Moon, and Sun and have an encounter with the Earth-Moon system near (1~AU,0,0). All particles are followed for three times the amount of time required for the farthest test particle to reach the Earth.} \label{fig:setup} \end{figure} \subsection{Integration method} The orbital trajectories of the massive bodies (Earth, Moon and Sun) and the TPs are integrated using a modified version of swift-rmvs3 \citep{levdun94}. The length units are AU and the time units are years. The initial eccentricity and inclination for both the Earth and Moon are set to zero for the majority of our simulations. In one run we introduce the current 5.15\degrees inclination of the Moon, keeping Earth's $e$ and $i$ as before. Both Earth and Moon are assumed to be perfect spheres. Factors that vary in the simulations include the number of flybys, where a flyby is defined as a disk of $10^5$ test particles passing through the Earth-Moon system, and the Earth-Moon distance, $R_{EM}$. We have the Earth as the central body with the Moon acting as an orbiting planet and the Sun as an external perturber. The base time step in the simulations is four hours. This is large enough that the integrations are not prohibitively expensive, but small enough that the integrator can follow the lunar encounters precisely. During the course of the integration, the integrator logs the positions and velocities of all bodies of interest if there is a pericenter passage within the radius of the Moon or Earth.
We then use this log as input for a backwards integration using a $6^{th}$-order explicit symplectic algorithm \citep{gladsimp91} to precisely determine the latitude and longitude of the particle's impact location. This method, along with iterative time steps, enables us to find impact locations to within 5~km on the surface of the Moon and to within 500~m of the top of Earth's atmosphere. This project is computationally intensive. For each simulation, the Earth, Moon and Sun are included as well as $10^5$ test particles. Each of the 65228 initial conditions is used multiple times to improve statistics, and in each case the TP locations in the disk are randomly distributed. Thus more than $2 \times 10^{12}$ test particles are integrated, which results in tens of millions of terrestrial deliveries and hundreds of thousands of lunar ones. \section{Delivery to Earth} \label{sec-ED} Before turning to the Moon, we examine our numerical results to determine the spatial distribution of Earth arrivals. Our simulations with the Moon at the current orbital distance of $60R_\oplus$ yielded 21,998,427 terrestrial impacts. Since the crater record on the Earth is difficult to interpret due to geological processes, we choose to examine our results in terms of fireball and meteorite records as the impactors strike the top of the atmosphere. We are thus assuming that the meteorite-dropping bodies have a pre-atmospheric orbital distribution similar to the NEOs (but see Sec.~\ref{sec-ampmsec}). Figure~\ref{fig:evhist} shows velocity distributions for the Earth arrivals from our simulations. The delivery speeds should be interpreted as ``top of the atmosphere'' velocities, $v_{imp}$. The deliveries are dominated by low $v_{\infty}\ $ objects which the Earth's gravitational well has focused and sped up, creating a peak in the distribution at a speed of $\sim 15$ km/s.
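The pile-up near $\sim 15$ km/s follows directly from energy conservation, $v_{imp}^2 = v_{\infty}^2 + v_{esc}^2$: slow encounters are accelerated to just above Earth's 11.2 km/s escape speed. A minimal numerical illustration (not the authors' code):

```python
import math

V_ESC_EARTH = 11.2  # km/s, Earth's escape speed

def impact_speed(v_inf, v_esc=V_ESC_EARTH):
    """Top-of-atmosphere speed from energy conservation:
    v_imp = sqrt(v_inf^2 + v_esc^2)."""
    return math.hypot(v_inf, v_esc)

# Slow encounters are "pumped up" to just above the escape speed;
# e.g. a 10 km/s encounter arrives at ~15 km/s, producing the peak
# seen in the simulated distribution.
for v in (2.0, 5.0, 10.0, 20.0):
    print(f"v_inf = {v:5.1f} km/s -> v_imp = {impact_speed(v):5.1f} km/s")
```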
The average speed an impactor has at the top of the atmosphere is $\sim 20$ km/s, slightly higher than the often quoted value of 17 km/s \citep{chyba94}. As well, if we compare our Fig.~\ref{fig:evhist} to the fireball data in Morbidelli and Gladman (1998, Fig.~8a), we see a general similarity in the shapes of the $v_{imp}$ histograms. \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=e_velhist.ps,height=14.5cm,angle=-90} \end{center} \caption{Velocity distributions for the Earth deliveries from our simulations (impactor $v_{\infty}\ $ and $v_{imp}$) as well as the sampled NEO population (source $v_{\infty}\ $). Note that the average impact velocity is $\sim$~20 km/s, higher than the often-quoted value of $17$ km/s. One can observe the large effect of the Earth's gravitational well for $v_{\infty}\ $ values $\lesssim 10$ km/s. Here the encounter speeds are ``pumped up'' to values slightly higher than the Earth's escape velocity of 11.2 km/s. From the $v_{\infty}\ $ distribution of $all$ possible impactors it is clear that the impacts are dominated by low ($v_{\infty}\ $$\lesssim 20$ km/s) objects.} \label{fig:evhist} \end{figure} \newpage \subsection{Latitude distribution of delivered objects} For a uniform spatial distribution of deliveries, one expects the number of arrivals to vary as the cosine of the latitude due to the smaller surface area at higher latitudes. Figure~\ref{fig:elat} shows that to an excellent approximation, the Earth is uniformly struck by impactors (in latitude). To account for the area in each latitude bin, divide by: \begin{equation} A_{bin} = 2\pi R_{\oplus}^2(\cos\theta_1 - \cos\theta_2), \end{equation} where the co-latitudes $\theta_1 > \theta_2$ are measured north from the south pole. \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=typlat.ps,height=13.5cm,angle=-90} \end{center} \caption{A latitude distribution of projectile deliveries to the Earth from one set of simulations. 
This figure has not accounted for the spin obliquity of the planet (hence ``ecliptic latitude''). The points represent the total number of impacts in $10^\circ$ ecliptic latitude bins and the dashed curve is a cosine multiplied by an arbitrary constant, $M$. The error bars for the simulation results are smaller than the points.} \label{fig:elat} \end{figure} \noindent To correct for the spin obliquity of the Earth, we then choose a random day of fall, and thus a random azimuth for the Earth's spin pole. This then gives us a geocentric latitude and longitude for each simulated delivery. Figure~\ref{fig:elat_var} shows the spatial density versus geocentric latitude. For the long-term average, we see a nearly uniform distribution of arrivals. If we restrict to azimuths corresponding to northern hemisphere spring or northern hemisphere autumn, we see the well-known seasonal variation \citep{xmas82,renkno89} in the fireball flux of roughly 15\% amplitude (Fig.~\ref{fig:elat_var}). As a measure of the asymmetry between the poles and equator, we take the ratio between polar (within 30\degrees of the poles) and equatorial (a 30\degrees band centered on the equator) arrival densities. We find that when all terrestrial arrivals are considered, the poles receive the same flux of impactors as the equator to $< 1$\% ($0.992 \pm 0.001$). We believe the uncertainty caused by our finite sampling of the orbital distribution is on this level and thus our results are consistent with uniform coverage. \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=year_lat_var.ps,height=13.5cm,angle=-90} \end{center} \caption{The geocentric latitude distribution, relative to the average spatial density of impacts, at different times of the year. The circles represent northern hemisphere spring, when the Earth's spin axis (as viewed from above) is tilted away from the planet's direction of motion.
The near mirror-image can be seen in northern hemisphere fall (squares) when the spin axis is tilted towards the Earth's direction of motion. Averaged over a full year (diamonds), the deviation from the expected flat distribution is minimal.} \label{fig:elat_var} \end{figure} \newpage To better understand the details of latitudinal asymmetries, we consider the effect of approach velocity, $v_{\infty}\ $. When our projectile deliveries are filtered such that $v_{cut1} \leq v_{\infty} \leq v_{cut2}$, and restricted to impactors with $i \leq 10$\degrees, we find results similar to Le Feuvre and Wieczorek (2006, our Fig.~\ref{fig:fevcomp1}a can be compared to their Fig.~2). Their $N_g$ parameter is analogous to our $v_{\infty}\ $ cuts. Small encounter velocities produce more uniform coverage because the trajectories are bent towards the poles. Higher encounter velocity trajectories are not affected to as great a degree and tend to move in nearly straight lines, leading to a distribution which tends to a cosine-like curve. Obviously when the $i$ restriction is lifted, the poles receive a higher flux which mutes the amplitude of the variation (Fig.~\ref{fig:fevcomp1}b). \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=fevcomp2_all.ps,height=8.0cm,angle=-90}\hspace{-0.5cm} \epsfig{file=fevcomp2_all_noinccut.ps,height=8.0cm,angle=-90} \end{center} \caption{The terrestrial impact density versus geocentric latitude (a) for different ranges of encounter velocity $v_{\infty}\ $ and restricted to objects with inclination $\leq 10$\degrees and (b) the same with no inclination restriction. For lower $v_{\infty}\ $ objects, there is less latitudinal variation because the Earth's gravitational field bends incoming trajectories towards higher latitudes. This effect is muted as the velocities become larger -- the trajectories move in straighter lines.
For these faster objects, the available impact area varies as a cosine so the shape of the distribution for high $v_{\infty}\ $ impactors is expected to do the same. The deviations from a cosine are due to the fact the objects have moderate inclinations and non-infinite velocities. Note that (b) shows the realistic case of all encounter speeds and inclinations; there is very little latitudinal variation.} \label{fig:fevcomp1} \end{figure} The preliminary results of \citet{fev06} show a 30\% ecliptic latitudinal variation in terrestrial projectile deliveries. Though we are able to produce results similar to theirs under various restrictions (Fig.~\ref{fig:fevcomp1}a), the inclusion of the Earth's spin obliquity is necessary to accurately reflect reality. With this inclusion, we find a latitude distribution which is very nearly uniform (Fig.~\ref{fig:fevcomp1}b). \subsection{AM/PM asymmetry} \label{sec-ampmsec} The ecliptic latitudes and longitudes for the terrestrial arrivals were converted into local times via a straightforward method. As stated in the previous section, the Earth's spin-pole azimuth is chosen at random to represent any day of the year and then the arrival location is transformed to these geocentric coordinates. The location relative to the sub-solar direction gives the local time. We looked for a PM excess in our simulation results. However, as evident in Fig.~\ref{fig:ampm}a we see the opposite effect. Previous modeling work produced a near mirror image (reflected through noon) of our result (see our Fig.~\ref{fig:ampm}b and Fig.~3 of \citet{xmas82}), but their impactor orbital distribution was very different from the debiased NEO model; they chose a small set of orbits with perihelion $q$ in the range $0.62 \leq q \leq 0.99$~AU with semimajor axis $a$ obeying $1.3 \leq a \leq 3.2$~AU. 
If we restrict our simulation results to approximately the same impactor distribution (by taking only those objects having $a$ and $q$ within $\pm\ 0.01$~AU of the entries in Table 1 of \citet{xmas82}), we obtain Fig.~\ref{fig:ampm}b which is very similar to their result. \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=20_30_comp.ps,height=8.5cm,angle=-90}\hspace{-0.75cm} \epsfig{file=ampm_h.ps,height=8.5cm,angle=-90} \end{center} \caption{The local time distribution of arrivals at the top of Earth's atmosphere for both our simulations and radar data (a). The simulation deliveries have been corrected for the Earth's spin obliquity. Clearly there is a large AM excess in the arrivals which is counter to the meteorite fall data which show a PM enhancement. Though a velocity restriction is used (see text), other cuts yield the same general shape for both radar data and simulation results. (b) The local time distribution of Earth deliveries when restricted to orbits similar to the ones used in \citet{xmas82}. This distribution matches well with the time of falls for chondrites.} \label{fig:ampm} \end{figure} Why is there this discrepancy with our unrestricted case? The origin is not one of method but rather of starting conditions. The debiased NEO model we use is much more comprehensive than the orbits used by \citet{xmas82}, where only orbits assumed to represent then-current fireballs were included. The real Earth-crossing population contains a larger fraction of high-speed orbits, which produce a smaller fraction of PM falls than the shallow Earth-crossers. In fact, our simulated arrivals always show an AM excess (Fig.~\ref{fig:pmtot}), even if we apply an upper speed bound (in an attempt to mimic a condition for meteoroid survivability, requiring speeds of less than 20--30~km/s at the top of the atmosphere). 
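The local-time bookkeeping described above can be sketched schematically (an illustrative fragment with our own sign convention and names, not the paper's code): the hour angle of the arrival longitude relative to the sub-solar longitude maps to local solar time, with PM falls landing between local noon and local midnight.

```python
import math

def local_time_hours(arrival_lon, subsolar_lon):
    """Local solar time (hours) of a fall at geocentric longitude
    arrival_lon, given the sub-solar longitude (both radians).
    Local noon corresponds to the sub-solar point."""
    dlon = (arrival_lon - subsolar_lon) % (2.0 * math.pi)
    return (12.0 + 24.0 * dlon / (2.0 * math.pi)) % 24.0

def is_pm(hours):
    """PM hemisphere: falls between local noon and local midnight."""
    return 12.0 <= hours < 24.0
```

Tallying `is_pm` over all simulated arrivals gives the PM fraction plotted as a function of speed cutoff in the following figure.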
\vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=pmtot.ps, height=12.0cm, angle=-90} \end{center} \caption{ The fraction of Earth arrivals on the PM hemisphere, as a function of various cutoff speeds in the incoming flux. For the entire population, the PM ratio is 40\%. Applying more and more stringent upper bounds pushes the PM ratio towards 50\%, but for reasons discussed in the text we do not believe that the NEO orbital distribution is the same as that of meteorite-dropping fireballs, and thus the AM excess we find (which is not exhibited by the fireball data as a whole) does not create a contradiction with the available data. } \label{fig:pmtot} \end{figure} The apparent conflict with the meteorite data should not be too surprising, as \citet{morglad98} argued that the orbital distribution of the 0.1--1~m-scale meteoroids that drop chondritic fireballs must be different from that of the Near-Earth Objects. They showed that to match the radiant and orbital distributions determined by the fireball camera networks, and to also match a PM excess, these sub-meter sized bodies must suffer strong collisional degradation as they journey from the asteroid belt, with a collisional half-life consistent with what one would expect for decimeter-scale objects; this produces a match with the fireball semimajor axis distribution, which is dominated by the $a>1.5$~AU orbits. In contrast, our simulations show NEO arrivals are much more dominated by $a\sim1$~AU objects. We posit this is further evidence that the source region for the majority of the meteorites (the chondrites) is the main belt and not near-Earth space; to use the terminology of \citet{morglad98}, the `immediate precursor bodies', in which the meteoroids were located just before being liberated and starting to accumulate cosmic-ray exposure, are {\it not} near-Earth objects but must be in the main belt.
As a consistency check, we compare our fall time distribution to radar observations \citep{jones05} of meteoroids arriving at the top of Earth's atmosphere. Figure~\ref{fig:ampm}a shows that the fall time distribution obtained with our simulations is similar to the flux of radar-observed meteors when restricted to the same top-of-the-atmosphere speed range of $20 \leq v_{imp} \leq 30$ km/s (though other cuts yield similar results). The typical pre-atmospheric masses of the particles producing the radar meteors are in the micro- to milli-gram range (P. Brown, private communication 2006). The velocity range chosen for Fig.~\ref{fig:ampm}a removes the low-speed fragments which have reached Earth-crossing orbits via radiation forces (unlike the NEOs of the Bottke model, whose orbital distribution is set by gravitational scatterings with the terrestrial planets) and also removes the high-speed cometary component. The match we find permits the hypothesis that the majority of the milligram-particle flux on orbits with these encounter speeds is actually dust that is liberated continuously from NEOs, in stark contrast with the decimeter-scale meteoroids, which must have been recently derived from a main-belt source. \section{Lunar bombardment} \label{sec-LB} Figure~\ref{fig:velhist} shows the lunar impact speed distribution from our simulations. Because the Moon's orbital and escape speeds (1.02 and 2.38 km/s respectively) are both small compared to typical $v_{\infty}\ $ encounter speeds, the impacts are not as biased towards smaller speeds as for the Earth. As one would expect, since $v_{imp}^2 = v_{\infty}^2 + v_{esc}^2$, the $v_{imp}$ and source $v_{\infty}\ $ distributions are quite similar. The small difference arises from gravitational focusing by the Moon, which increases the speeds of the low $v_{\infty}\ $ population. We compute the average impact speed for NEOs striking the Moon to be 20 km/s.
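The weak lunar focusing can be quantified with the same relation $v_{imp}^2 = v_{\infty}^2 + v_{esc}^2$, together with the Safronov cross-section factor $(1 + v_{esc}^2/v_{\infty}^2)$ discussed below; a small comparison sketch (illustrative only):

```python
import math

V_ESC_MOON, V_ESC_EARTH = 2.38, 11.2  # km/s

def v_imp(v_inf, v_esc):
    """Impact speed from energy conservation."""
    return math.hypot(v_inf, v_esc)

def safronov_boost(v_inf, v_esc):
    """Gravitational-focusing enhancement of the collision
    cross-section, 1 + v_esc^2 / v_inf^2."""
    return 1.0 + (v_esc / v_inf) ** 2

# For a slow (5 km/s) encounter: the Moon raises the impact speed
# only slightly (~5.5 km/s) and boosts its cross-section ~1.2x,
# while the Earth raises it to ~12.3 km/s with a ~6x boost.
v = 5.0
moon_speed, earth_speed = v_imp(v, V_ESC_MOON), v_imp(v, V_ESC_EARTH)
```

A typical 20 km/s lunar encounter is barely altered by the Moon's shallow well, so the lunar $v_{imp}$ distribution tracks the source $v_{\infty}\ $ distribution.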
This is higher than the often quoted lunar impact velocity of 12--17~km/s \citep{chyba94,strom05}. These lower velocities have been derived using only the known NEOs and are therefore biased towards objects whose encounter speeds are lower (which are more often observed in telescopic surveys). \begin{figure}[H] \begin{center} \epsfig{file=velhistA.ps,height=13.5cm,angle=-90} \end{center} \caption{Velocity distributions for the lunar impacts from our simulations (impactor $v_{\infty}\ $ and $v_{imp}$) and the sampled NEO source population (source $v_{\infty}\ $). The average impact velocity is $\sim$~20 km/s. The curve showing the $v_{\infty}\ $ of objects striking the Moon closely matches the $v_{imp}\ $ distribution, as one expects due to the small degree to which the Moon's gravity well ``speeds up'' low $v_{\infty}\ $ objects. The difference between the $v_{imp}$ and source $v_{\infty}\ $ distributions shows that gravitational focusing favours low speed ($v_{\infty}\ $$\lesssim 20$ km/s) objects. These simulation results were for the current Earth-Moon distance with the Moon orbiting in the plane of the ecliptic.} \label{fig:velhist} \end{figure} The debiased NEO distribution we use has a full suite of high-speed impactors; more than half the impactors are moving faster than $v_{med}$ = 19.3 km/s when they hit the Moon. This has serious implications for the calculated projectile diameters that created lunar craters, since the higher speeds we calculate mean that typical impactor diameters are smaller than previously derived. Strictly speaking, our results apply only when the {\it current} NEO orbital distribution is valid, which has likely been true since the post-mare era.
However, the generically-higher lunar impact speeds we find are likely true in most cases for realistic orbital distributions, and thus the size-frequency distribution of the impactors must be shifted to somewhat smaller sizes when trying to find matches between the lunar crater distribution and the NEA size distribution (see \citealt{strom05} for a recent example). The reader may be surprised to see that the average lunar impact speed is essentially the same as the average arrival speed at Earth, despite the acceleration impactors receive as they fall into the deep gravity well of our planet. This (potentially counter-intuitive) result can be understood once one realizes that Earth's impact speed distribution is heavily weighted towards low $v_{\infty}\ $ values by the Safronov factor $ (1 + v_{esc}^2 / v_{\infty}^2 )$. For Earth this so heavily enhances the low encounter speed impactors that the average impact speed actually drops to essentially the same as that of the Moon (which does not gravitationally focus the low-$v_{\infty}$ projectiles nearly as well). While the Earth's greater capture cross-section ensures a much larger total flux, the average energy delivered per impact will be similar for both the Moon and Earth. \vspace{-0.5cm} \subsection{From impacts to craters} To examine crater asymmetries on the Moon, we need to convert our simulated impacts (a sample of which are shown in Fig.~\ref{fig:hemmap}) into craters, which will account for the added asymmetry resulting from the impact velocity $v_{imp}$ of the impactors. In typical crater counting studies, there is some minimum diameter $T$ which observers are able to count down to due to image resolution limitations. There will be more craters larger than $T$ on the leading hemisphere than on the trailing because leading-side impactors have higher impact speeds on average and the commonly-accepted scaling relation for crater size $D_c$ depends on the velocity.
\begin{equation} D_c \propto v_{imp}^{2q}\;D_i^{3q}\;, \end{equation} where $q\approx 0.28$--$0.33$ \citep{mel89} and $D_i$ is the diameter of the impactor. Since both hemispheres receive flux from the same differential impactor size distribution obeying \begin{equation} \frac{dN}{dD_i} \propto D_i^{-p}\;, \end{equation} we can integrate ``down the size distribution'' to determine a weighting factor which transforms our impacts into crater counts. The number of craters $N$ with $D_c > T$ produced by the impacting size distribution is \begin{equation} N(D_c > T)\: =\: \int_{D_{imin}}^{\infty} \frac{dN}{dD_i}\:dD_i\: = \: \frac{1}{p-1}\;D_{imin}^{1-p}\: \propto \: v_{imp}^{2(p-1)/3}\;, \end{equation} where $D_{imin}$ is some minimum impactor diameter and we have substituted $D_{imin} \propto (T\;v_{imp}^{-2q})^{1/3q}$. Since the differential size index $p \sim 2.8$ \citep{stu01,bottke02}, $N \propto v_{imp}^{1.2}$. For each simulated impact at a specific latitude and longitude, we assign that impact $N = C\;v_{imp}^{1.2}$ craters, where $C$ is an arbitrary proportionality constant. The dependence on $p$ is small, as our results are essentially the same when using an older determined value of $p = 2$. The weak dependence arises from the low orbital speed of the Moon relative to the encounter speeds of the incoming projectiles. Thus for moons such as the Galilean satellites, whose orbital speeds are much higher compared to that of the incoming flux, the value for $p$ becomes more important. Note that $D_{imin}$ and $q$ are actually irrelevant to our analysis since we are interested in only the crater numbers relative to an average rather than the crater sizes. \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=latlong_sinmap.eps,height=10.5cm,angle=0} \end{center} \caption{An equal-area projection of the lunar impacts from our simulations. The Moon was at its current orbital distance of $60R_\oplus$ with 0\degrees inclination.
At the apex, one can see the slight enhancement in the impact density. For clarity, only 7\% of the total number of impacts are shown. } \label{fig:hemmap} \end{figure} \subsection{Latitude distribution of impacts} One expects the departure from uniform density in the lunar latitude distribution to be more severe than that of the Earth due to the Moon's smaller mass. Comparing Fig.~\ref{fig:moonlat} to Fig.~\ref{fig:fevcomp1}b shows this is indeed the case. At high $v_{\infty}\ $ cuts, the variation in the latitude distribution tends to the predicted cosine (see Fig.~2 in Le Feuvre and Wieczorek, 2006). However, when examining the real case of all lunar impacts we see only a $\sim 10$\% ($0.912 \pm 0.004$) depression at the poles when taking the ratio of the derived crater density within 30\degrees of the poles to the crater density in a 30\degrees band centered on the equator. In contrast, \citet{fev06} (their Fig.~3) find a polar/equatorial ratio of roughly 60\%. The source of this large discrepancy is unclear, since the Moon in our simulations had zero orbital inclination and spin obliquity, the same conditions used by \citet{fev06}, and the latter also used the \citet{bottke02} model as an impactor source. Although the variation we observe is small, researchers should be aware of this spatial variation in the crater distribution when determining ages of surfaces (see Sec.~\ref{sec-con}). \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=m_lat_noinccut.ps,height=13.5cm,angle=-90}\vspace{-0.5cm} \end{center} \caption{The latitude distribution of craters on the Moon from our simulations. Note the scale is different here than in Fig.~\ref{fig:fevcomp1}(b) and that the latitudinal variation is larger for the Moon than the Earth. This is because the Moon's gravity well is not deep enough to significantly modify incoming trajectories to higher latitudes.
If the impacts were restricted to impactors which had $i \leq 10$\degrees, the variation would be larger as is the case with the terrestrial deliveries in Fig.~\ref{fig:fevcomp1}. Here the Moon's inclination and spin obliquity are not accounted for. } \label{fig:moonlat} \end{figure} \subsection{Longitudinal effects} We wish to compare the results of our numerical simulations with observational data to create a consistent picture of the current level of cratering asymmetry between the Moon's leading and trailing hemispheres. As a measure of this asymmetry, we look at the crater density as a function of the angle away from the apex, $\beta$, which should roughly follow the functional form of Eq.~\ref{eq:ass}. In Fig.~\ref{fig:moonrad} we show the results from our simulations as well as a fit to Eq.~\ref{eq:ass} using a maximum likelihood technique assuming a Poisson probability distribution. The best fit parameters resulting from an unrestricted analysis are $\bar{\Gamma} = 1.02$, $\alpha = 0.564$, and $g = 0.225$; this would require $b = -1.268$. For an impactor diameter distribution following $\frac{dN}{dD} \propto D^{-b}$, this value for the slope yields the unphysical situation of having more large impactors than small ones. Obviously this cannot be correct and is a consequence of the degeneracy present in Eq.~\ref{eq:ass}. We once again conclude that by fitting Eq.~\ref{eq:ass} to an observed surface distribution, it is virtually impossible to decouple $\alpha$ and $g$ to obtain information about the impactor size distribution and the average encounter velocity of the impactors. \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=radvsbeta.ps,height=12.5cm,angle=-90} \end{center} \caption{The spatial density of simulated craters as a function of angle from the apex of motion, $\beta$. The vertical axis is craters/$\mathrm{km}^2$ relative to the mean density over the entire lunar surface. 
Points represent simulation results while the curve gives a fit using a maximum likelihood technique to the equation $\Gamma(\beta) = \bar{\Gamma}(1 + \alpha\cos\beta)^g$, where $g = 2 + 1.4\;b$ and $b = 0.58$ as determined from observations. With this restriction, the best fit values are $\bar{\Gamma} = 0.994$ and $\alpha = 0.0472$ with a reduced chi-square of $\bar{\chi}_r^2 = 2.4$.} \label{fig:moonrad} \end{figure} By counting our simulated craters, we find a value for the GMAACA of $1.29 \pm 0.01$, significantly lower than the values of 1.4--1.7 estimated in Sec.~\ref{sec:llhe}. This discrepancy can be reconciled by recalling that we find the average impactor $v_{\infty}\ $ to be $\sim$ 20 km/s. Using this value in Eq.~\ref{eq:alf} and the observationally-determined value of $b=0.58$, gives a GMAACA value of 1.32, close to the value we obtain. In Fig.~\ref{fig:moonrad} we notice that the distribution is rather flat for $0^\circ \leq \beta \leq 45^\circ$ and has higher density than the predicted curve (using $b = 0.58$) for $60^\circ \leq \beta \leq 120^\circ$. A $\bar{\chi}_r^2$ value of 2.4 results when the quality of fit is assessed. We believe the origin of this highly-significant departure from the theoretically predicted form lies simply in the fact that both assumptions of an (1) isotropic orbital distribution of impactors with (2) a single $v_{\infty}\ $ value, are violated (see Fig.~\ref{fig:dirs}). Thus, an observed crater field will not follow the form of Eq.~\ref{eq:ass} in detail. Since these assumptions break down for the real impactor population, we will instead directly compare with the rayed crater observations to determine if the available data rule out our model. To do this, we scaled our craters down to 222, the same number as in the rayed crater sample used in \citet{morota03}. We restricted our simulated craters to the same lunar area examined in that study. 
Mare regions were ignored because rayed craters are easier to identify against the darker mare background, which biases rayed crater observations. In addition, rayed craters on mare surfaces are likely older than their highland counterparts because it takes longer for micrometeorite bombardment to eliminate the contrast. Since we are interested in the current lunar bombardment, it is necessary to eliminate this older population. The area sampled includes latitudes $\pm41.5$\degrees and longitudes $70.5$\degrees $-\: 289.5$\degrees (see Fig.~1 of Morota and Furumoto, 2003). We compare the rayed crater observational counts $O_i$ with the expected counts $E_i$ from our simulations using a modified chi-square test, \begin{equation} \chi^2 = \sum_i^n\frac{(O_i - E_i)^2}{\sigma_{obs,i}^2}, \mathrm{\;\;with\;\;} \sigma_{obs,i} = \sqrt{O_i} \nonumber \end{equation} since we are dealing with Poisson statistics. This procedure results in a reduced chi-square value of $\bar{\chi}_r^2 = 0.67$. Thus our model is in excellent agreement with the observational data (Fig.~\ref{fig:morvsme}). \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=morota_vs_me_best.ps,height=12.5cm,angle=-90} \end{center} \caption{Density of rayed craters used in \citet{morota03} compared with our simulation results for the same lunar area and scaled to the same count (222). The curve is the best fit to the rayed crater data using Eq.~\ref{eq:ass} as a model, despite our knowledge that it is based on violated assumptions. Best fit parameters are $\bar{\Gamma} = 1.53 \times 10^{-5}\;\mathrm{km}^{-2},\; \alpha = 0.063,\; \mathrm{and}\; g = 4.62$. With respect to the observations, the prediction from our numerical calculations has $\bar{\chi}_r^2 = 0.67$.} \label{fig:morvsme} \end{figure} Ideally, we would like to obtain a GMAACA value for the rayed crater data.
However, due to small number statistics and the restricted area (because of Mare Marginis and Mare Smythii, much of the area near the antapex is excluded), the GMAACA value is poorly measured by the available data. Regardless, integrating the best fit of the rayed crater data (best fit values: $\bar{\Gamma} = 1.53 \times 10^{-5}\;\mathrm{km}^{-2},\; \alpha = 0.063,\; g = 4.62,\; \mathrm{with}\; \bar{\chi}_r^2 = 0.49$) from 0--30\degrees and 150--180\degrees to form the GMAACA ratio gives $1.7_{-0.6}^{+0.9}$, consistent with our result. \vspace{-0.5cm} \subsection{Varying the Earth-Moon distance and the effect of inclination} Due to tidal evolution, in the distant past the Moon's orbit was smaller. As the orbital distance is decreased, the Moon's orbital speed rises. This increases the impact speeds on the leading hemisphere and makes ``catching up'' to the Moon from behind more difficult. As Eq.~\ref{eq:alf} suggests, the degree of apex/antapex asymmetry on the Moon is expected to increase as the orbital speed of the satellite does. We examined this by running other simulations with an Earth-Moon separation of $a = $50, 38, 30, 20, and 10 $R_\oplus$. Figure~\ref{fig:dist} shows our results. Assuming the same impactor orbital distribution in the past, only a mild increase in the apex enhancement is seen (since the orbital speed only increases as $1/\sqrt{a}$). The increase is that expected based on the resulting change in $\alpha$ caused by the larger $v_{orb}$ (see Eq.~\ref{eq:alf}). \vspace{-1.0cm} \begin{figure}[H] \begin{center} \epsfig{file=EMdisteffect_bothass.ps,height=10.0cm,angle=-90} \end{center} \caption{The ratio between the number of craters on the leading versus trailing hemispheres and the same ratio between the nearside and farside hemispheres as a function of lunar orbital distance. As expected, smaller orbital distances increase the asymmetry between leading and trailing hemispheres.
This is a result of the increased orbital speed as the Moon is brought closer to the Earth. For all lunar distances there is minimal nearside/farside asymmetry, with some evidence that the Earth shielded the lunar nearside in the very distant past when the lunar semimajor axis was $<\; 25\;R_\oplus$. } \label{fig:dist} \end{figure} As discussed in Sec.~\ref{sec-nearfar}, the asymmetry between the near and far hemispheres should depend on the lunar distance. Figure~\ref{fig:dist} shows the ratio between nearside and farside craters from our simulations. For all lunar distances we find very little asymmetry. Thus, our results do not support the most recent study, which claims a factor of four enhancement on the nearside when compared to the farside \citep{fev05}, but are in good agreement with the work done by \citet{wiesel71} and \citet{band73}. We see mild evidence for Bandermann and Singer's (1973) assertion of the Earth acting as a shield for $a < 25 R_\oplus$ and little effect outside this distance. Thus, Bandermann and Singer's estimate of no measurable nearside/farside asymmetry in the current cratering rate is correct. In the bulk of our simulations we have used the approximation that the Moon's orbit is in the ecliptic plane. Since we show that the latitudinal dependence of lunar cratering is weak ($\sim$ 10\% reduction within 30$^{\circ}$ of the poles relative to a 30\degrees band centered on the Moon's equator), we do not expect the inclusion of the Moon's orbital inclination to alter our results significantly, although we expect the polar asymmetry to monotonically decrease with increasing orbital inclination. We have confirmed this by computing the GMAACA and polar asymmetry ratios for a less-extensive set of simulations in which the lunar orbital inclination is initially set to its current value of 5.15$^{\circ}$ and we use the sub-Earth point at the time of impact to compute lunocentric latitudes and longitudes.
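For reference, a GMAACA-type ratio can be formed directly from any fitted density law by area-weighting the density over 30\degrees caps centred on the apex and antapex. The sketch below uses the best-fit parameters quoted earlier for the rayed crater data ($\bar{\Gamma}$ cancels in the ratio) and reproduces the integrated value of $\sim$1.7; the cap size and the simple quadrature are illustrative choices.

```python
import numpy as np

# Best-fit parameters quoted for the rayed crater data.
gamma_bar, alpha, g = 1.53e-5, 0.063, 4.62

def density(beta):
    """Crater areal density at angular distance beta (rad) from the apex."""
    return gamma_bar * (1.0 + alpha * np.cos(beta)) ** g

def cap_mean(deg_lo, deg_hi, n=200000):
    """Area-weighted mean density over the band deg_lo..deg_hi (degrees)."""
    beta = np.linspace(np.radians(deg_lo), np.radians(deg_hi), n)
    w = np.sin(beta)  # solid-angle weight on a sphere
    return np.sum(density(beta) * w) / np.sum(w)

# Ratio of mean densities in 30-degree caps about apex and antapex.
gmaaca = cap_mean(0.0, 30.0) / cap_mean(150.0, 180.0)
```

Note that the $\sin\beta$ weight matters: without it the strongly enhanced density exactly at the apex (a point of zero area) would be overcounted.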
We find a slight reduction of the GMAACA to $1.24\pm0.02$ (from $1.29\pm0.01$) and a crater density within 30$^{\circ}$ of the pole that is statistically the same as the 0\degrees inclination case ($0.914 \pm 0.009$ instead of $0.912 \pm 0.004$). \section{Summary and conclusions} \label{sec-con} We have used the debiased NEO model of \citet{bottke02} to examine the bombardment of the Earth-Moon system in terms of various impact and crater asymmetries. For Earth arrivals we find a $< 1$\% variation in the ratio between the areal densities within 30\degrees of the poles and within a 30\degrees band centered on the equator. The local time distribution of terrestrial impacts from NEOs is enhanced during the AM hours. While this fall-time distribution corresponds well to recent radar data, it is in disagreement with the chondritic meteorite data and their derived pre-atmospheric orbital distributions. This discrepancy thus reinforces the conclusion of \citet{morglad98} that the large amount of decimeter-scale material being ejected from the main asteroid belt onto Earth-crossing orbits must be collisionally depleted before much of it can evolve to orbits with $a < 1.5$~AU. A significant result is that we find the average impact speed onto the Moon to be $\bar{v}_{imp} = 20$~km/s, with a non-negligible higher-speed tail (Fig.~\ref{fig:velhist}). This, combined with quantification of the non-uniform surface cratering, has implications both for tracing crater fields back to the size distribution of the impactors and for the absolute (or relative) dating of cratered surfaces. First, the higher impact speeds we find mean that lunar impact craters (at least in the post-mare era when we believe the NEO orbital distribution we are using is valid) have been produced by smaller impactors than previously calculated.
This roughly 10\% higher average impact speed corresponds to lunar impactors which are 10\% smaller on average than previously estimated; this small correction has ramifications for proposed matches between lunar crater size distributions and impactor populations ({\it e.g.,}$\;$ Strom {\it et~al.} 2005). We find two different spatial asymmetries in current crater production due to NEOs. As expected, due to its smaller mass, the Moon exhibits more latitudinal variation ($\sim 10$\%) in our simulations than the Earth. When comparing our simulation results to young rayed craters on the Moon, the surface density variation we predict is completely consistent with available data; we obtain a leading versus trailing asymmetry of $1.29 \pm 0.01$ (GMAACA value), which corresponds to a 13\% increase (decrease) in crater density at the apex (antapex) relative to the average. These results indicate that using a single globally-averaged lunar crater production could give ages in error by up to 10\% depending on the location of the studied region. For example, post-mare studies of the Mare Orientale region would overestimate its age by $\sim10$\% due to its proximity to the apex, assuming that the leading point and poles of the Moon have not changed over the last $\sim$4~Gyr. Similarly, the degree of bombardment on Mare Crisium (not far from the antapex) would be lower than the global average. These effects will only be testable for crater fields with hundreds of counted craters so that the Poisson errors are small compared to the 10\% variations we find; in most studies of ``young'' ($<4$ Gyr) lunar surfaces the crater statistics are poorer than this ({\it e.g.,}$\;$ St\"offler and Ryder 2001). When the orbital distance of the Moon was decreased (as it was in the past because of its tidal evolution), the ratio between simulated craters on the leading hemisphere versus the trailing increased as expected due to the higher orbital speed of the satellite at lower semi-major axes.
We find virtually no nearside/farside asymmetry until the Earth-Moon separation is less than 30 Earth-radii, which under currently-accepted lunar orbital evolution models dates to the time before 4~Gyr ago (at which point the current NEO orbital distribution may not be a good model for the impactors). Interior to 30 Earth-radii we find that the Earth serves as a mild shield, reducing nearside crater production by a few percent. \bigskip \centerline{\bf Acknowledgments} \medskip Computations were performed on the LeVerrier Beowulf cluster at UBC, funded by the Canadian Foundation for Innovation, The BC Knowledge Development fund, and the UBC Blusson fund. BG and JG thank NSERC for research support. \singlespace \newpage
class Api::V1::StatsController < ApplicationController
  respond_to :json, :xml

  # The stats API only requires an api_key for the given app.
  skip_before_action :authenticate_user!
  before_action :require_api_key_or_authenticate_user!

  def app
    if (problem = @app.problems.order_by(:last_notice_at.desc).first)
      @last_error_time = problem.last_notice_at
    end

    stats = {
      name: @app.name,
      id: @app.id,
      last_error_time: @last_error_time,
      unresolved_errors: @app.unresolved_count
    }

    respond_to do |format|
      # render JSON if no extension specified on path
      format.any(:html, :json) { render json: JSON.dump(stats) }
      format.xml { render xml: stats }
    end
  end

  def daily_report
    date = Date.parse(params[:date]) rescue Date.yesterday
    day_start = date.beginning_of_day
    day_end = date.end_of_day

    count_problems = @app.problems.where(:updated_at.gt => day_start).where(:updated_at.lt => day_end).count
    # Sum notice counts over the same requested day as the other daily stats.
    count_times = @app.problems.where(:updated_at.gt => day_start).where(:updated_at.lt => day_end).sum(&:notices_count)
    count_resolved = @app.problems.where(:resolved_at.gt => day_start).where(:resolved_at.lt => day_end).count
    count_all_errors = @app.unresolved_count

    stats = {
      :name => @app.name,
      :id => @app.id,
      :date => date.strftime("%m/%d/%y"),
      :daily_problems_count => count_problems,
      :daily_error_times => count_times,
      :daily_resolved => count_resolved,
      :total_problems => count_all_errors,
      :unresolved_errors => @app.unresolved_count
    }

    respond_to do |format|
      # render JSON if no extension specified on path
      format.any(:html, :json) { render :json => JSON.dump(stats) }
      format.xml { render :xml => stats }
    end
  end

  protected

  def require_api_key_or_authenticate_user!
    if params[:api_key].present?
      return true if (@app = App.where(api_key: params[:api_key]).first)
    end
    authenticate_user!
  end
end
Q: Why does the async pipe not subscribe to an observable initialized in the constructor

See this plunkr for an illustration: https://plnkr.co/edit/Tm4AW0sU0pumBA8I?open=lib%2Fapp.ts&deferRun=1

Component 1, where I declare everything in the class initializer:

@Component({
  template: '{{ data$ | async | json }}',
  selector: 'app-data'
})
export class AppData {
  readonly id$ = new Subject();
  readonly data$ = this.id$.pipe(
    switchMap(id => of(`id = ${id}`))
  );

  @Input() set id(val: number) {
    this.id$.next(val);
  }
}

And Component 2, where I create the observable in ngOnInit:

@Component({
  template: '{{ data$ | async | json }}',
  selector: 'app-data-oninit'
})
export class AppDataOnInit implements OnInit {
  @Input() id: number;
  data$: any;

  ngOnInit(): void {
    this.data$ = of(`id = ${this.id}`);
  }
}

I am trying to declare the observable in the component constructor or class initializer, but the async pipe is not subscribing to it there. However, if I instantiate the observable in ngOnInit it works as expected. I have been searching all day for an explanation but have come up empty. What is the missing puzzle piece here? I feel it makes more sense to declare the observables when the class is created.

A: The async pipe is subscribing to your data$, but this subscription happens after id$ has already emitted: Angular sets the @Input (which calls id$.next) before the view is rendered and the async pipe subscribes, so the emission is lost. If you want late subscribers to receive prior emissions, you can use a ReplaySubject instead of a plain Subject:

readonly id$ = new ReplaySubject(1);
Lifeline is a solution-oriented meeting with a strong message of recovery for those who wish to hear it. We firmly believe in sponsorship and actively promote it. In other words, we believe in the message of recovery as spelled out in the big book of Alcoholics Anonymous and are eager to pass it on. We meet Tuesdays from 7:15 to 8:30 at the Eastside Foursquare Church in South Bothell. Our business meetings are the 2nd Tuesday of the month, following the general meeting. We have members who drive for Milam so we always have newcomers attending.
The Yalovychorski Mountains (also called the Yalovychory Mountains) are a mountain massif in the Ukrainian Carpathians, located in the southern part of Vyzhnytsia Raion, Chernivtsi Oblast. The massif stretches roughly 30 km from north to south. To the west it is bounded by the valleys of the Bilyi Cheremosh and Perkalab rivers, to the southeast by the valley of the Suceava, and to the northeast by the Verkhovyna-Putyla low mountains (in particular, the valley of the Putyla River). To the south the Yalovychorski Mountains border the Maramureș massif, from which they are separated by the Ukrainian-Romanian border.

The massif consists of numerous ridges, the largest of which is Yarovytsia (the highest in the Bukovinian Carpathians). The Chornyi Dil ridge is also usually assigned to the massif (although some geographers consider it part of the Maramureș massif, specifically of the Chyvchyny Mountains). The ridges run mostly from northwest to southeast and are dissected by the Sarata, Yalovychora and Lopushna rivers and their tributaries. There are several passes: Dzhohil and Semenchuk.

The massif is composed of rocks of the Cheremosh suite, which in places crop out as steep bare cliffs and boulders. Most of the near-crest parts of the main ridges and their spurs are fairly broad polonynas (mountain meadows). The foothills are covered with spruce and fir forests.

The Yalovychorski Mountains are sparsely populated and comparatively isolated from popular tourist routes, so many authentic natural-geographic complexes have survived here.
Some ridges
Yarovets
Putylly
Maksymets
Vypchyny
Losova
Tomnatykul
Chornyi Dil

Some peaks
Yarovytsia (1574.4 m)
Tomnatyk (1565.3 m)
Chornyi Dil
Maksymets (1345 m)
Losova (1428.2 m)

Protected areas and natural monuments
Cheremosh National Nature Park
Molochni Braty karst massif
Chornyi Dil (nature reserve)
Borhynia (nature reserve)
Hirske Oko
Zhupany (natural monument)

About the name of the massif
Until recently this part of the Ukrainian Carpathians was referred to by various, not always accurate, names: the Putyla mountains, Yarovytsia, the southern part of the Pokuttia-Bukovina Carpathians, or simply by the name of one ridge or another (for example, Losova). The name Yalovychorski Mountains, proposed by geographers, derives from the Yalovychora River, which cuts through the central, highest part of the massif, and from the local settlements of Nyzhnii Yalovets and Verkhnii Yalovets (known locally as Yalovychory).

Sources
Relief of the Ukrainian Carpathians: state and problems of the conceptual apparatus
Geomorphological region of the Hrynyava-Yalovychora mountain massif

Eastern Carpathians
Ukrainian Carpathians
Relief of Chernivtsi Oblast
Putyla Raion
Mountains of Hutsulshchyna
The men's 800 metres event at the 1970 European Athletics Indoor Championships was held on 14 and 15 March in Vienna.

Medalists

Results

Heats
First 3 in each heat (Q) and the next 2 fastest (q) qualified for the final.

Final

References

800 metres at the European Athletics Indoor Championships
\section{Introduction} \subsection{Pure Component and Athermal Reference State} Understanding the solubility of a polymer in a solvent is a technologically important problem. It is well documented that ``like dissolves like'' but it is almost impossible to quantify the notion of ``likeness'' of materials. The understanding of solubility requires a basic understanding of ``likeness'' that is lacking at present. Solubility parameters in all their incarnations are attempts to quantify this simple notion. There are several ways one can proceed to understand solubility by considering various thermodynamic quantities, not all of which are equivalent. However, almost all these current approaches are based on our thermodynamic understanding of mixtures at the level of the \emph{regular solution theory} (RST) \cite{vanLaar,Hildebrand1916,Hildebrand}. In practice, one usually introduces the Hildebrand solubility parameter \[ \delta=\sqrt{c^{\text{P}}}% \] for a pure component in terms of $c^{\text{P}},$ the latter being the cohesive energy density of the pure (P) component, normally reported at its boiling temperature. In this work, we will take this definition to be operational at any temperature, thus treating $\delta$\ as a thermodynamic quantity, and not just a parameter. Then its value at the boiling point will be the customarily quoted solubility parameter. The cohesive energy is related to the interaction energy $\mathcal{E}_{\text{int}}$\ obtained by subtracting the energy $\mathcal{E}_{\text{ath}}$\ of the hypothetical \emph{athermal} state of the pure component, the state in which the self-interactions (both interparticle and intraparticle) are absent, from the total energy $\mathcal{E}$:% \begin{equation} \mathcal{E}_{\text{int}}\ \equiv\mathcal{E}-\mathcal{E}_{\text{ath}}. \label{InteractionEnergy} \end{equation} The athermal state is usually taken to be at the same temperature $T$ and the volume $V$ as the system itself.
In almost all cases of interest, $\mathcal{E}_{\text{ath}}$ is nothing but the kinetic energy and depends only on $T$ but not on $V.$ Thus, $\mathcal{E}_{\text{int}}$ depends directly on the strength of the self-interaction, the only interaction present in a pure component and vanishes as self-interactions disappear, since $\mathcal{E}% \rightarrow\mathcal{E}_{\text{ath}}$ as this happens. Thus, $\mathcal{E}% _{\text{int}}$\ can be used to estimate the strength of the self-interaction. One can also take the hypothetical state to be at the same $T$ and the pressure $P$. In this case, there would in principle be a difference in the volume $V$ of the pure component and $V_{\text{ath}}$ of the hypothetical state, but this difference will not change $\mathcal{E}_{\text{int}}$ in (\ref{InteractionEnergy}). The hypothetical state is \emph{approximated} in practice by the vapor phase at the boiling point in which the particles are assumed to be so far apart that their mutual interactions can be neglected. However, to be precise, it should be the vapor phase at zero pressure so that the volume is infinitely large to make the particle separation also infinitely large to ensure that they are non-interacting. This causes problems for polymers \cite{Choi}. Our choice of the athermal state in (\ref{InteractionEnergy}) to define $\mathcal{E}_{\text{int}}$ overcomes these problems altogether. By definition,% \begin{equation} c^{\text{P}}\equiv-\mathcal{E}_{\text{int}}/V \label{cohesive_def}% \end{equation} is the negative of the interaction energy density (per unit volume $V)$ of the system at a given temperature $T.$ (At the boiling point, $V$ is taken to be the volume of the liquid)$.$ The negative sign is to ensure $c^{\text{P}}>0$ since usually $\mathcal{E}_{\text{int}}$\ is negative to give cohesion. Because of its dimensions, $c^{\text{P}}$\ is also known as the \emph{cohesive pressure}. 
In this form, $c^{\text{P}}$ is a thermodynamic quantity and represents the thermodynamically averaged (potential) energy per unit volume of the pure component. Thus, $c^{\text{P}}$ can be calculated even for macromolecules like polymers, which is of central interest in this work or for polar solvents. It is a macroscopic, i.e. a thermodynamic quantity characterizing\ microscopic interparticle interactions in a pure component. This is important as it is well known that $\delta$\ cannot be measured directly as most polymers cannot be vaporized without decomposing \cite{Du}. Thus, theoretical means are needed to evaluate $\delta,$ which is our goal in this work. \ It should be noted from the above definition that $c^{\text{P}}$ contains the contributions from all kinds of forces (van der Waals, dipolar, and hydrogen bonding forces) in the system \cite{Hansen}. In this work, we are only interested in the weak van der Waals interactions for simplicity, even though the investigation can be easily extended to include other interactions. It should also be noted that the definition (\ref{cohesive_def}) does not suffer from any inherent approximation, and can be used to calculate $c^{\text{P}}$ in any theory or by experimental measurements. As we will see, this is not true of the mutual cohesive density definition, which is introduced below. \subsection{Mixture and Self-interacting Reference State} As it stands, the pure component quantity $c^{\text{P}}$ is oblivious to what may happen in a mixture formed with another component. Despite this, $c^{\text{P}}$ or $\delta$\ is customarily interpreted as a rough measure of a solvent's ability to dissolve a solute ("like dissolves like"). This interpretation of the solubility parameter is supposed to be reliable only for non-polar solvents formed of small molecules, and one usually refrains from using it for polar solvents such as esters, ketones, alcohol, etc. 
Our interest is to investigate this quantity for macromolecules here, and its significance for the solubility in a mixture. According to the famous Scatchard-Hildebrand relation \cite{Hildebrand}, the energy of mixing $\Delta E_{\text{M}}$ per unit volume for a binary mixture of two components $1$ and $2$ must always be non-negative since it is given by \begin{equation} \Delta E_{\text{M}}=(\delta_{1}-\delta_{2})^{2}\varphi_{1}\varphi _{2},\ \ \ \ (\varphi_{1}+\varphi_{2}\equiv1), \label{mixEnergy0}% \end{equation} where $\varphi_{i}$ are the volume fractions of the two components $i$, and $\delta_{1}$ and $\delta_{2}$ are their solubility parameters. It is implicitly assumed here that the volume of mixing $\Delta V_{\text{M}}=0$. Thus, this expression does not contain the contribution from a non-zero volume of mixing. We will be interested in investigating this additional contribution in this work. Later, we will discover that (\ref{mixEnergy0}) can only be justified [see (\ref{VolumeFraction}) below], if we take \begin{equation} \varphi_{i}\equiv\phi_{\text{m}i}/\phi_{\text{m}},\ \ \ \ \ (\phi_{\text{m}% }\equiv\phi_{\text{m}1}+\phi_{\text{m}2}) \label{monomer_fraction}% \end{equation} as representing the monomer density ratios, or monomer fractions; see below for precise definition. Only in the RST can we identify $\varphi_{i}$ with the volume fraction of the $i$th component. The significance of (\ref{mixEnergy0}) is that the behavior of the mixture is completely determined by the pure component properties and provides a justification for "like dissolves like". This must be a gross approximation even for non-polar systems and cannot be true in general since the energy of mixing can be negative in many cases, as shown elsewhere \cite{Gujrati2003}, and as we will also see here later; see, for example, Fig. \ref{F18}. What we discover is that $\Delta E_{\text{M}}$ can be negative even if $\Delta V_{\text{M}}=0$. 
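As a purely illustrative numerical sketch (not part of the theory developed here, and with made-up parameter values), the Scatchard-Hildebrand form (\ref{mixEnergy0}) can be evaluated directly; the snippet makes its manifest non-negativity explicit:

```python
def mixing_energy_density(delta1, delta2, phi1):
    """Scatchard-Hildebrand energy of mixing per unit volume,
    Delta E_M = (delta1 - delta2)^2 * phi1 * phi2, phi1 + phi2 = 1.
    Valid only under the RST assumptions discussed in the text."""
    phi2 = 1.0 - phi1
    return (delta1 - delta2) ** 2 * phi1 * phi2

# The form vanishes for identical solubility parameters ("like
# dissolves like") and for a pure component (phi1 = 0 or 1), and is
# never negative; the point of this work is that real mixtures can
# violate this non-negativity.
```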
Thus, zero volume of mixing is not sufficient for the validity of (\ref{mixEnergy0}). For the mixture, we need to introduce a thermodynamic or macroscopic quantity $c_{12}$ characterizing the mutual interaction between the two components; this quantity should be determined by the microscopic interaction between the two components. Therefore, the introduction of $c_{12}$ will, in principle, require comparing the mixture with a hypothetical mixture in which the two components have no mutual interactions similar to the way the pure component (having self-interaction) was compared with the athermal state above (in which the self-interaction was absent) for the introduction of $c^{\text{P}}.$ The hypothetical state of the mixture should not be confused with the athermal state of the mixture; the latter will require even the self interaction of the two components to be absent. The new hypothetical state will play an important role in our investigation and will be called \emph{self-interacting reference state} (SRS). The mutual interaction energy in the binary mixture is obtained by subtracting the energy of SRS from that of the mixture: \begin{equation} \mathcal{E}_{\text{int}}^{(\text{M})}\ \equiv\mathcal{E-E}_{\text{SRS}}; \label{Mutual_InteractionEnergy}% \end{equation} compare with (\ref{InteractionEnergy}). Just as before, SRS can be taken at the same $T$ and $V,$ or $T$ and $P$ as the mixture. This allows us to separate out the two contributions, one due to the presence of mutual interactions at the same volume $V_{\text{SRS}}=V$ of SRS, and the second contribution due to merely a change in the volume from $V_{\text{SRS}}$ to $V;$ this was not the case with $\mathcal{E}_{\text{int}}$ above.\ Each contribution of $\mathcal{E}_{\text{int}}^{(\text{M})}$ vanishes as mutual interactions disappear, since $\mathcal{E}\rightarrow\mathcal{E}_{\text{SRS}% },$ and $V_{\text{SRS}}\rightarrow V$ as this happens. 
If $\mathcal{E}% _{\text{int}}^{(\text{M})}$ is used to introduce a mutual cohesive energy density (to be denoted later by $c_{12}^{\text{SRS}}$), then such a density would most certainly vanish with vanishing mutual interaction strength. However, this is not the conventional approach adopted in the literature when introducing $c_{12}$. Rather, one considers the energy of mixing. We will compare the two approaches in this work. Whether the conventionally defined $c_{12}$ vanishes with the mutual interactions remains to be verified. In addition, whether it is related to the pure component self interaction cohesive energy densities $c_{11}^{\text{P}}$ and $c_{22}^{\text{P}}$ in a trivial fashion such as (\ref{london_berthelot_Conj}), see below, needs to be investigated$.$ As we will see, this will require further assumptions even within RST to which we now turn. \subsection{Regular Solution Theory (RST)} The customary approach to introduce $c_{12}$\ is to follow the classical approach developed by van Laar, and Hildebrand \cite{vanLaar,Hildebrand1916,Hildebrand}, which is based on RST, a theory that can be developed consistently only on a lattice. The theory describes an incompressible lattice model or a compressible model in which the volume of mixing $\Delta V_{\text{M}}$ is zero. The lattice model is introduced as follows. One usually takes a homogeneous lattice of coordination number $q$ and containing $N$ sites. The volume of the lattice is $Nv_{0}$ where $v_{0}$ is the lattice cell volume. We place on this lattice the polymer (component $1$) and the solvent (component $2$) molecules in such a way that the connectivity of the molecules are kept intact. It should be stressed that the solvent molecules themselves may be polymeric in nature. In addition, the excluded volume interactions are enforced by requiring that no site can be occupied by more than one monomer at a time. 
The monomer densities of the two components $\phi_{\text{m}i}$ ($i=1,2)$ are the densities of sites occupied by the $i$th component. Two monomers belonging to components $i$ and $j,$ respectively, interact with an energy $e_{ij}=e_{ji}$ only when they are nearest neighbors. (This nearest-neighbor restriction can be easily relaxed, but we will not do that here for simplicity.) The interaction between the polymer and the solvent is then characterized by a single excess energy parameter $\varepsilon\equiv\varepsilon_{12}$ defined in general by the combination% \begin{equation} \varepsilon_{ij}\equiv e_{ij}-(1/2)(e_{ii}+e_{jj}). \label{excess_E}% \end{equation} The origin of this combination is the fixed lattice connectivity as shown elsewhere \cite{Gujrati2000}$.$ For an incompressible binary mixture for which \[ \phi_{\text{m}1}+\phi_{\text{m}2}=1, \] the excess energy $\varepsilon$ is sufficient for a complete thermodynamic description on a lattice \cite{Gujrati2000}. On the other hand, a compressible lattice model of the mixture, which requires introducing voids as a separate component ($i=0$), will usually require two additional excess energy parameters $\varepsilon_{01},$ and $\varepsilon_{02}$ \cite{Gujrati1998}. In the following, we will implicitly assume that a void occupies a site of the lattice and hence a volume $v_{0}.$ In our picture, a pure component is a pseudo-binary mixture ($i=0,1$ or $i=0,2),$ while a compressible binary mixture is a pseudo-ternary mixture ($i=0,1,2).$ Since voids do not interact with themselves or with any other component, we must set% \[ e_{00}=e_{0i}=0, \] so that the corresponding excess energy \begin{equation} \varepsilon_{0i}=-(1/2)e_{ii}, \label{pureExcess}% \end{equation} see (\ref{excess_E}), which is normally positive since $e_{ii}$ is usually negative. 
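The combination (\ref{excess_E}), together with the void convention $e_{00}=e_{0i}=0$, can be sketched in a few lines of Python; the numerical values in the comment and test are illustrative only, not fitted to any real system:

```python
def excess_energy(e, i, j):
    """Excess energy eps_ij = e_ij - (1/2)(e_ii + e_jj), eq. (excess_E).
    e maps unordered pairs (i, j) to nearest-neighbor energies; any
    pair involving a void (component 0) is absent, i.e. e_00 = e_0i = 0."""
    def pair(a, b):
        return e.get((a, b), e.get((b, a), 0.0))
    return pair(i, j) - 0.5 * (pair(i, i) + pair(j, j))

# For a pure component i, this reduces to eps_0i = -(1/2) e_ii,
# eq. (pureExcess), which is positive whenever e_ii is attractive.
```

With attractive values such as $e_{11}=-1$, $e_{22}=-4$, $e_{12}=-2$ (chosen to satisfy the London conjecture discussed below), all three excess energies come out positive.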
Two of the three conditions for RST\ to be operative are (i) \emph{Isometric Mixing: }$\Delta V_{\text{M}}$ $\equiv0$, and (ii) \emph{Symmetric Mixture: }The two components must be of the \emph{same} size and have the same interaction ($e_{11}\equiv e_{22}$). This is a restatement of "like dissolves like." The condition $\Delta V_{\text{M}}=0$ for \emph{isometric mixing} is always satisfied in an incompressible system. For a compressible system, $\Delta V_{\text{M}}$ need not be zero, and one can have either an isometric or a non-isometric mixing depending on the conditions. \subsubsection{Random Mixing Approximation (RMA)} The third condition for RST to be valid is the fulfillment of the (iii) \emph{RMA\ Limit: }The interaction energy $\varepsilon_{ij}$ must be extremely weak ($\varepsilon_{ij}\rightarrow0$), and the coordination number $q$ of the lattice extremely large ($q\rightarrow\infty$) simultaneously so that the product% \[ q\varepsilon_{ij}\text{ remains fixed and finite, as }\varepsilon_{ij}\rightarrow0,\text{ and }q\rightarrow\infty. \] Indeed, if one introduces the dimensionless Flory-Huggins chi parameter $\chi_{ij}\equiv\beta q\varepsilon_{ij},$ where $\beta\equiv1/k_{\text{B}}T,$ $T$\ being the temperature in the Kelvin scale, then one can also think of keeping $\chi_{ij}$ fixed and finite, instead of $q\varepsilon_{ij},$ under the simultaneous limit \begin{equation} \chi_{ij}\equiv\beta q\varepsilon_{ij}\text{ fixed and finite, as }\beta\varepsilon_{ij}\rightarrow0,\text{ and }q\rightarrow\infty. \label{RMALimit}% \end{equation} It is quite useful to think of RST in terms of these limits as both $\beta\varepsilon_{ij}$ and $q$ are dimensionless. 
The above simultaneous limit gives rise to what is usually known as the \emph{random mixing approximation} (RMA), and has been discussed in detail in a recent review article \cite{Gujrati2003} in the context of polymer mixtures.\ For an incompressible system, we need to keep only a single chi parameter $\chi \equiv\beta q\varepsilon$ fixed and finite in the limit. For a compressible system, one must also simultaneously keep the two additional chi parameters related to $\varepsilon_{01}$ and $\varepsilon_{02},$ and $\beta Pv_{0}$ fixed and finite, where $P$ is the pressure \cite{Gujrati1998,Gujrati2003}. The RMA limit can be applied even to a pure component (for which the first two conditions are meaningless). It can also be applied when mixing is not isometric \cite{Note1} or when the mixture is not symmetric. Therefore, RST is equivalent to the \emph{isometric RMA}. In the unusual case $\chi=0,$ ($q\rightarrow\infty),$ the resulting isometric theory is known as the \emph{ideal solution theory}, which will not be considered here as we are interested in the case $\chi\neq0.$ \subsubsection{London-Berthelot and Scatchard-Hildebrand Conjectures} The energy of mixing $\Delta E_{\text{M}}$ per unit volume is the central quantity for solubility considerations, and can be used to introduce an effective "energetic" chi \cite{Gujrati1998,Gujrati2003} as follows:% \begin{equation} \chi_{\text{eff}}^{\text{E}}\equiv\beta\Delta E_{\text{M}}v_{0}/\phi _{\text{m}1}\phi_{\text{m}2}, \label{effective_EChi}% \end{equation} which is a measure of the Flory-Huggins $\chi(\equiv\chi_{12})$ parameter or the excess energy $\varepsilon(\equiv\varepsilon_{12})$; the latter is directly related to the mutual interaction energy $e_{12},$ see (\ref{excess_E}), which explains the usefulness of $\Delta E_{\text{M}}$ for solubility considerations. 
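Definition (\ref{effective_EChi}) is straightforward to evaluate once $\Delta E_{\text{M}}$ is known. A minimal sketch, in illustrative units with $k_{\text{B}}=1$ and arbitrary input values:

```python
def chi_eff_energetic(delta_E_M, v0, phi_m1, phi_m2, kT):
    """Energetic effective chi, eq. (effective_EChi):
    chi_eff^E = beta * Delta E_M * v0 / (phi_m1 * phi_m2),
    with beta = 1/kT (k_B = 1 in these illustrative units)."""
    return delta_E_M * v0 / (kT * phi_m1 * phi_m2)

# chi_eff^E inherits its sign from Delta E_M, and so is ultimately
# controlled by the excess energy eps_12 rather than by e_12 itself.
```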
One of the important consequences of the application of RST is the Scatchard-Hildebrand conjecture \cite{Hildebrand}, according to which the energy of mixing is given by a non-negative form (\ref{mixEnergy0}). This is known to be violated; see \cite{Gujrati2003}, and below. One of the reasons for its failure could be the much abused London-Berthelot conjecture% \begin{equation} c_{12}=\sqrt{c_{11}^{\text{P}}c_{22}^{\text{P}}}\equiv\delta_{1}\delta_{2}, \label{london_berthelot_Conj}% \end{equation} relating the mutual cohesive energy density $c_{12}$\ with the pure component cohesive energy densities $c_{11}^{\text{P}},$ $c_{22}^{\text{P}},$\ and used in the derivation of (\ref{mixEnergy0}). In contrast, the London conjecture \begin{equation} e_{12}=-\sqrt{(-e_{11})(-e_{22})} \label{london_Conj}% \end{equation} deals directly with the microscopic interaction energies, and is expected to hold for non-polar systems; see also \cite{Israelachvili} for an illuminating derivation and discussion. In the isometric RMA limit, the two conjectures become the same in that (\ref{london_Conj}) implies (\ref{london_berthelot_Conj}). In general, they are two \emph{independent} conjectures. As we will demonstrate here, (\ref{london_Conj}) does not imply (\ref{london_berthelot_Conj}) once we go beyond the RMA limit. We will also see that the non-negativity of the form (\ref{mixEnergy0}) is violated even for isometric mixing. Association of one component, such as through hydrogen bonding, usually makes $e_{12}^{2}<\left\vert e_{11}\right\vert \left\vert e_{22}\right\vert .$ On the other hand, complexing results in the opposite inequality $e_{12}% ^{2}>\left\vert e_{11}\right\vert \left\vert e_{22}\right\vert .$ It is most likely that the binary interaction between monomers of two distinct species will deviate from the London conjecture (\ref{london_Conj}) to some degree. 
Thus, some restrictions have to be put on the possible relationship between these energy parameters for our numerical calculations. We have decided to consider only those parameters that satisfy the London conjecture (\ref{london_Conj}) in this work for physical systems. \subsubsection{Deviation from London-Berthelot Conjecture} The deviation from the London-Berthelot conjecture (\ref{london_berthelot_Conj}) is usually expressed in terms of a binary quantity $l_{12}$ defined via% \begin{equation} c_{12}=\sqrt{c_{11}^{\text{P}}c_{22}^{\text{P}}}(1-l_{12}), \label{Dev_l12}% \end{equation} and it is usually believed that the magnitude of $l_{12}$ is very small: \begin{equation} \left\vert l_{12}\right\vert \ll1. \label{Dev_Magnitude}% \end{equation} It is possible that $l_{12}$ vanishes at isolated points, so that the London-Berthelot conjecture becomes satisfied. This does not mean that the system obeys RST there. As we will demonstrate here, we usually obtain a non-zero $l_{12}$, thus implying a failure of the London-Berthelot conjecture (\ref{london_berthelot_Conj}), even if the London conjecture (\ref{london_Conj}) is taken to be operative. Another cause of the failure of (\ref{london_berthelot_Conj}) under this assumption could be non-isometric mixing. A third cause of the failure could be the corrections to the RMA limit, since a real mixture is not going to truly follow RST. To separate the effects of the three causes, we will pay particular attention to $l_{12}$ in this work. We will assume the London conjecture (\ref{london_Conj}) to be valid, and consider the case of isometric mixing. We will then evaluate $l_{12}$ using a theory that goes beyond RST. A non-zero $l_{12}$ in this case will then be due to the corrections to the RMA limiting behavior or RST. 
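Inverting (\ref{Dev_l12}) gives $l_{12}=1-c_{12}/\sqrt{c_{11}^{\text{P}}c_{22}^{\text{P}}}$, which is how the deviation can be extracted numerically. A sketch with purely hypothetical cohesive densities:

```python
import math

def l12_deviation(c12, c11_P, c22_P):
    """Deviation parameter from eq. (Dev_l12):
    c12 = sqrt(c11_P * c22_P) * (1 - l12)."""
    return 1.0 - c12 / math.sqrt(c11_P * c22_P)

# l12 == 0 recovers the London-Berthelot conjecture exactly;
# |l12| is usually believed to be much less than 1.
```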
\subsection{Internal Pressure} Hildebrand \cite{Hildebrand1916} has also argued that the solubility of a given solute in different solvents is determined by relative magnitudes of internal pressures, at least for nonpolar fluids. Thus, we will also investigate the internal pressure, which is defined properly by \begin{equation} P_{\text{IN}}=-P_{\text{int}}\equiv-(P-P_{\text{ath}}), \label{P_IN}% \end{equation} where $P_{\text{int}}$\ is the contribution to the pressure $P$ due to interactions in the system, and is obtained by subtracting $P_{\text{ath}}$ from $P$, where $P_{\text{ath}}$ is the pressure of the hypothetical athermal state. The volume $V$ of the system, which has the pressure $P$, may or may not be equal to the volume of the hypothetical athermal state. This means that $P_{\text{IN}}$\ will have different values depending on the choice of athermal state volume. In this work, we will only consider the athermal state whose volume is equal to the volume $V$ of the system; its pressure then has the value $P_{\text{ath}},$ which is not equal to the pressure $P$ of the system. As the interactions appear in the athermal system at constant volume, its pressure $P_{\text{ath}}$ will be reduced due to attractive interactions. This will give rise to $P_{\text{int}},$\ which is negative so that $P_{\text{IN}}$ will be a positive quantity for attractive interactions. For repulsive interactions, which we will not consider here, $P_{\text{IN}}$ will turn out to be a negative quantity. In either case, as we will show here, $P_{\text{IN}}$ should be distinguished from $(\partial\mathcal{E}/\partial V)_{T},$ with which it is usually equated. Their equality holds only in a very special case as we will see here. 
The availability of $PVT$-data makes it convenient to obtain $(\partial\mathcal{E}/\partial V)_{T}.$ Therefore, it is not surprising to find it common to equate it with $P_{\text{IN}}.$ In a careful analysis, Sauer and Dee have shown a close relationship between $(\partial\mathcal{E}/\partial V)_{T}$ and $c^{\text{P}}$ \cite{Dee}. Here, we will investigate the relationship between $P_{\text{IN}}$ and $c^{\text{P}}.$ \subsection{Internal Volume} One can also think of an alternate scenario in which the pressure of the athermal state is kept constant as interactions appear in the athermal system. This pressure can be taken to be either $P$ or $P_{\text{ath}}$. Let us keep its pressure at $P$, so that its volume is $V_{\text{ath}}.$ In this case, the volume of the system $V$ will be smaller (greater) than $V_{\text{ath}}$ of the athermal state because of the attractive (repulsive) interactions, so that one can also use the negative of the change in the volume $V-V_{\text{ath}}$ as a measure of the attractive (repulsive) interactions. This allows us to introduce the following volume, to be called the \emph{internal volume}% \begin{equation} V_{\text{IN}}=-V_{\text{int}}\equiv-(V-V_{\text{ath}}) \label{V_IN}% \end{equation} as a measure of cohesiveness. The two internal quantities are mutually related and one needs to study only one of them. \subsection{Beyond Regular Solution Theory: Solubility and Effective Chi} In polymer solutions or blends of two components $1$ and $2$, non-isometry is the rule rather than the exception due to asymmetry in the degrees of polymerization and in the pure component interactions. Thus, the regular solution theory is most certainly inoperative. An extension of RST, to be described as the non-isometric regular solution theory, allows for non-isometric mixing ($\Delta V_{\text{M}}\neq0$), and its successes and limitations have been considered elsewhere \cite[where it is simply called the regular solution theory]{RaneGuj2003}. 
It is the extended theory that will be relevant for polymers. Let us suppress the pure component index $i$ in the following. Crudely speaking, $c^{\text{P}}$ for a pure component is supposed to be related to the pure component interaction parameter $\varepsilon(\equiv\varepsilon_{0i}),$ or more correctly $\chi(\equiv\chi_{0i}).$ However, $\varepsilon$ is a microscopic system parameter, independent of the thermodynamic state of the system; thus, it must be independent of the composition, the degree of polymerization, etc. On the other hand, $c^{\text{P}}$ is determined by the thermodynamic state. Thus, its value will change with the thermodynamic state, the degree of polymerization, etc. Only in the RMA limit, see (\ref{RMA_Cohesive}) below, does one find a trivial proportionality relationship between the two quantities $c^{\text{P}}$ and $\varepsilon,$ the constant of proportionality being determined by the square of the pure component monomer density $\phi_{\text{m}};$ beyond this, there is no additional dependence on the temperature and pressure of the system. This RMA behavior is certainly not going to be observed in real systems, where we expect a complex relationship between $c^{\text{P}}$ and $\varepsilon.$ A similar criticism also applies to the behavior of $c_{12},$ for which one considers the energy of mixing $\Delta E_{\text{M}};$ see below. \subsubsection{Solubility} In the RMA limit of an incompressible binary mixture, it is found that $c_{12}$ is proportional to the mutual interaction energy $e_{12}$ between the two components, see (\ref{RMA_CohesiveDensities}) below. Thus, the sign of $c_{12}$ is determined by $e_{12}.$\ On the other hand, $\Delta E_{\text{M}}$ in the RMA limit is proportional to the excess energy $\varepsilon(\equiv\varepsilon_{12})$ or $\chi(\equiv\chi_{12}),$ so that its sign is determined by $\varepsilon$. 
However, the solubility of component $1$ in $2$ in a given state is determined not by their mutual interaction energy $e_{12},$ which is usually attractive, but by the sign of the excess energy $\varepsilon,$ and the entropy of mixing. For an incompressible binary mixture, we have only one exchange energy $\varepsilon.$ Even away from the RMA limit for the incompressible mixture, a positive $\varepsilon$ implies that the two components will certainly phase separate at low enough temperatures. Their high solubility (at constant volume) at very high temperatures is mostly due to the entropy of mixing, but the energy of mixing will play an important role at intermediate temperatures. The solubility increases as $\varepsilon$\ decreases. It also increases as $T$ increases, unless one encounters a lower critical solution temperature (LCST) in which case the solubility will decrease with $T$. It is well known that LCST can occur in a blend due to compressibility; see, for example, \cite{Mukesh}. Similarly, the solubility at constant pressure will usually decrease with temperature. However, it is also possible for isobaric solubility to first increase and then decrease with $T.$ Thus, a properly defined mutual cohesive energy for a compressible blend should be able to capture such features. A negative $\varepsilon$ implies that the two components will never phase separate. Thus, a complete thermodynamic consideration is needed for a comprehensive study of solubility even when we are not in the RMA limit, and requires investigating thermodynamic quantities such as $c_{12}$ or the (energetic) effective chi (\ref{effective_EChi}) to which we now turn$.$ This is all the more important when we need to account for compressibility. 
\subsubsection{Energetic Effective Chi} We have recently investigated a similar issue by considering the behavior of the effective chi in polymers \cite{RaneGuj2005}, where an effective chi, relevant in scattering experiments, was defined in terms of the excess second derivative of the free energy with respect to some reference state. This investigation is different in spirit from other investigations in the literature, such as the one carried out by Wolf and coworkers \cite{Wolf} where one only considers the free energy of mixing. As said above, $\Delta E_{\text{M}}$ or $\chi_{\text{eff}}^{\text{E}}$ is determined by the excess energy $\varepsilon_{12},$ and not by $e_{12}.$ However, since $c_{12}$ is supposed to be a measure of $e_{12},$ it is defined indirectly by that part of the energy of mixing $\Delta E_{\text{M}}$ or $\chi_{\text{eff}}^{\text{E}},$ see (\ref{effective_EChi}), that is supposedly determined by the mutual energy of interaction $e_{12}$. For this, one must subtract the contributions of the pure components from $\Delta E_{\text{M}}$ or $\chi_{\text{eff}}^{\text{E}};$ see below for clarity. Because of this, even though the previous investigation of the effective chi \cite{RaneGuj2005} provides a clue to what might be expected as far as the complex behavior of $\chi_{\text{eff}}^{\text{E}}$ is involved, a separate investigation is required for the behavior of the cohesive density $c_{12}$, which we undertake here. We will borrow some of the ideas developed in \cite{RaneGuj2005}, especially the requirements that (i) the cohesive energy density $c_{ij}$ for an $i$-$j$ mixture vanish with $e_{ij},$ and (ii) the formulas to determine $c_{ij}$ reduce to the standard RMA form under the RMA limit. The first condition replaces the requirement in \cite{RaneGuj2005} that the effective chi vanish with $\varepsilon_{12}.$ The second requirement is the same as in \cite{RaneGuj2005}. 
Its importance lies in the simple fact that any thermodynamically consistent theory must reduce to the same unique theory in the RMA limit; see \cite{Gujrati2000,RaneGuj2005} for details. We also borrow the idea of \emph{reference states}, which will be fully explained below. \subsubsection{Symmetric and Asymmetric Blends} It has been shown in \cite{RaneGuj2003} that the non-isometric regular solution theory is more successful than RST but again only for a symmetric blend. A symmetric blend is one in which not only the two polymers have the same degree of polymerization ($M_{1}=M_{2}$), but they also have identical interactions in their pure states ($e_{11}=e_{22}$). For asymmetric blends (blends that are not symmetric), even the non-isometric regular solution theory is qualitatively wrong. The significant conclusion of the previous work is that the recursive lattice theory developed in our group \cite[and references therein]{Gujrati1995b,RyuGuj1997}, and briefly discussed in the next section to help the reader, is successful in explaining several experimental results where the non-isometric regular solution theory fails. A similar conclusion is also arrived at when we apply our recursive theory to study the behavior of effective chi \cite{Gujrati2000,RaneGuj2005}. It was shown a while back that the recursive lattice theory is more reliable than the conventional mean field theory \cite{Gujrati1995a}; the latter is formulated by exploiting RMA, which is what the regular solution theories are. The recursive lattice theory goes beyond RMA and is successful in explaining several experimental observations that could not be explained by the mean field theory. Our aim here is to apply the recursive theory to study solubility and to see the possible modifications due to 1. finite $q,$ 2. non-weak interactions ($\varepsilon>0$), 3. non-isometric mixing, and 4. disparity in size (asymmetry in the degree of polymerization) and/or pure component interactions. 
\section{Recursive Lattice Theory} \subsection{Lattice Model} We will consider a lattice model for a multicomponent polymer mixture in the following, in which only nearest-neighbor interactions are permitted. As above, $q$ denotes the lattice coordination number. The number of lattice sites will be denoted by $N$, and the lattice cell volume by $v_{0}$, so that the lattice volume $V=Nv_{0}.$ The need for using the same coordination number and cell volume for the mixture and for the pure components has been already discussed elsewhere \cite{Gujrati2003} in order to have a consistent thermodynamics. The monomers and voids are allowed to reside on lattice sites. The excluded volume restrictions are accounted for as described above. As shown elsewhere \cite{Gujrati2000}, to investigate the model one only needs to consider the excess energies of interaction $\varepsilon_{ij}$ between monomers of two distinct components $i$ and $j;$ here, $e_{ij}$ represents the interaction energy between two nearest-neighbor monomers of components $i$ and $j.$ To model free volume, one of the components will represent voids or holes, always to be denoted by $i=0$ here$.$ Thus, for a pure component, for example component $i=1$, the excess interaction energy $\varepsilon$ is $\varepsilon_{01}=-(1/2)e_{11}.$ Usually, $e_{11}$ is negative, which makes $\varepsilon$ positive for a pure component. Let $N_{\text{m}i}$ denote the number of monomers belonging to the $i$th species and $N_{\text{m}}$ the number of monomers of all species, so that the number of voids $N_{0}$ is given by $N_{0}\equiv N-N_{\text{m}}.$ Similarly, let $N_{ij}$ denote the number of nearest-neighbor contacts between monomers of components $i$ and $j.$ The densities in the following are defined with respect to the number of sites. 
\subsection{Recursive Theory} In the present work, we will use for calculation the results developed by our group in which we solve the lattice model of a multicomponent polymer mixture by replacing the original lattice by a Bethe lattice \cite{Gujrati1995b,RyuGuj1997}, and solving it exactly using the recursive technique, which is standard by now. The calculation is done in the grand canonical ensemble. Thus, the volume $V$ is taken to be fixed. We will assume that all material components ($i>0$) are linear polymers in nature. The degree of polymerization of the $i$th component, i.e. the number of consecutive sites occupied by it, is denoted by $M_{i}\geq1.$\ The linear polymers also include monomers for which $M_{i}=1.$ Each void ($i=0)$ occupies a single site on the lattice. Let $\phi_{0}\equiv N_{0}/N$ denote the density of voids, $\phi_{\text{m}i}\equiv N_{\text{m}i}/N$ the monomer density of the $i$th component, and $\phi_{ij}\equiv N_{ij}/N$ the density of nearest-neighbor (chemically unbonded) contacts between the monomers of the two components $i$, and $j$. It is obvious that \begin{equation} \phi_{\text{m}}\equiv\sum_{i>0}\phi_{\text{m}i}=1-\phi_{\text{0}} \label{monomerD}% \end{equation} denotes the density of all material components ($i>0$) monomers. The density of all chemical bonds is given by \begin{equation} \phi=\sum_{i>0}\phi_{\text{m}i}\nu_{i},\ \ \nu_{i}\equiv(M_{i}-1)/M_{i}. \label{bondD}% \end{equation} The quantity $\phi_{\text{u}}\equiv q/2-\phi$ denotes the density of lattice bonds not covered by the polymers. 
Let us also introduce $q_{i}\equiv q-2\nu_{i},$ $\phi_{i\text{u}}\equiv q_{i}\phi_{\text{m}i}/2.$ As shown in \cite{RyuGuj1997,Gujrati1998}, the pressure $P$ is given by%
\begin{equation}
P=(k_{\text{B}}T/v_{0})[-\ln\phi_{0}+(q/2)\ln(2\phi_{\text{u}}/q)]+(k_{\text{B}}Tq/2v_{0})\ln(\phi_{00}^{0}/\phi_{00}), \label{pressureEq}%
\end{equation}
where $k_{\text{B}}$ is the Boltzmann constant, $\phi_{00}$ is the density of nearest-neighbor void-void contacts, and $\phi_{00}^{0}$ is its \emph{athermal} value when all excess interactions $\varepsilon_{ij}$ are identically zero. The athermal values of $\phi_{ij}$ are given by
\[
\phi_{ij}^{0}=2\phi_{i\text{u}}\phi_{j\text{u}}/[\phi_{\text{u}}(1+\delta_{ij})],
\]
where $\delta_{ij}$ is the Kronecker delta, and give the values of the contact densities in the athermal state when all $\varepsilon_{ij}=0$. The athermal state has the same volume $V$ as the original system. The first term in (\ref{pressureEq}) gives the athermal value $P_{\text{ath}}$ of $P,$ and the second term is the correction
\begin{equation}
P_{\text{int}}=(k_{\text{B}}Tq/2v_{0})\ln(\phi_{00}^{0}/\phi_{00}) \label{P_int}%
\end{equation}
to the athermal pressure due to interactions. For attractive interactions responsible for cohesion, $P_{\text{int}}$ is going to be negative, as discussed above. This correction determines the internal pressure $P_{\text{IN}}=-P_{\text{int}}.$ The identification also holds for pure components, so that $P_{\text{IN}}$ remains positive for attractive interactions. Since there is no kinetic energy on a lattice, the internal energy in the lattice model is purely due to interactions, and this energy per unit volume $E_{\text{int}}$ is given by
\begin{equation}
E_{\text{int}}\equiv\sum_{i\geq j\geq0}e_{ij}\phi_{ij}/v_{0}, \label{energy_Def}%
\end{equation}
which will be used here to calculate the cohesive energy density, also known as the cohesive pressure.
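As a numerical illustration of (\ref{pressureEq}) and (\ref{P_int}), the sketch below evaluates $P_{\text{ath}}$ and $P_{\text{int}}$ for assumed densities. The interacting contact density $\phi_{00}$ must come from the full recursive solution; here it is a placeholder taken slightly above its athermal value, which makes $P_{\text{int}}$ negative, as expected for cohesion.

```python
import math

# Evaluating Eq. (pressureEq) for assumed densities (illustrative values only).
kB = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0              # K
v0 = 1.6e-28           # m^3, lattice cell volume used in the text
q = 10
phi0, phi = 0.1, 0.85  # assumed void and bond densities
phi_u = q / 2 - phi    # density of uncovered lattice bonds

# athermal void-void contact density:
# phi_00^0 = 2 phi_0u^2 / [phi_u (1 + delta_00)], with phi_0u = q phi_0 / 2
# since nu_0 = 0 for voids
phi_0u = q * phi0 / 2
phi00_ath = 2 * phi_0u**2 / (phi_u * 2)
phi00 = 1.1 * phi00_ath          # placeholder for the interacting value

P_ath = (kB * T / v0) * (-math.log(phi0) + (q / 2) * math.log(2 * phi_u / q))
P_int = (kB * T * q / (2 * v0)) * math.log(phi00_ath / phi00)
P = P_ath + P_int                # total pressure, Pa
```

Since $\phi_{00}>\phi_{00}^{0}$ here, the logarithm in $P_{\text{int}}$ is negative, reproducing the cohesive (negative) correction to the athermal pressure discussed above.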
Note that the form of the pressure in (\ref{pressureEq}) does not explicitly depend on the number of components in the mixture. It should also be clear from (\ref{pressureEq}) that the incompressible state ($P\rightarrow\infty$) corresponds to $\phi_{0}\rightarrow0$ at any finite temperature$.$ However, we will not be interested in this limit in this work. In all our calculations, we will keep $\phi_{0}>0.$

\subsection{RMA Limit}
The approximation has been discussed in detail in \cite{Gujrati2003,RaneGuj2005}, so we will only summarize the results. This limit is very important since all thermodynamically consistent lattice theories must reduce to the same unique theory in the RMA limit. Thus, the RMA limit provides a unique theory, and can serve as a testing ground for the consistency of any theory. We now show that our recursive theory reproduces the known results in the RMA limit. To derive this unique theory, we note that in this limit, we have $q_{i}\ ^{\underrightarrow{\text{RMA}}}\ q$, and that the contact densities take the limiting form
\begin{equation}
\phi_{ij}^{0}\ ^{\underrightarrow{\text{RMA}}}\ q\phi_{\text{m}i}\phi_{\text{m}j}/(1+\delta_{ij}),\ \phi_{ij}\ ^{\underrightarrow{\text{RMA}}}\ q\phi_{\text{m}i}\phi_{\text{m}j}/(1+\delta_{ij}),\ q\ln(2\phi_{\text{u}}/q)\ ^{\underrightarrow{\text{RMA}}}\ -2\phi. \label{RMA1}%
\end{equation}
Finally, using $\phi_{\text{m}0}$ to also denote the void density $\phi_{0},$ we have in this limit%
\begin{subequations}
\begin{align}
& E_{\text{int}}\ ^{\underrightarrow{\text{RMA}}}\ \sum_{i,j\geq1}qe_{ij}\phi_{\text{m}i}\phi_{\text{m}j}/2v_{0},\label{RMA3}\\
& \beta Pv_{0}\ ^{\underrightarrow{\text{RMA}}}\ -\ln\phi_{0}-\phi-\sum_{i>0}\chi_{0i}\phi_{\text{m}i}+\sum_{j>i\geq0}\chi_{ij}\phi_{\text{m}i}\phi_{\text{m}j}.
\label{RMA2}%
\end{align}
\end{subequations}
For the pure component ($i$th component) quantities, we use an additional superscript (P) for some quantities, such as $\phi_{ii}^{\text{P}}$ representing the contact density between unbonded monomers, or use the superscript $i$ for other quantities, such as $E_{\text{int}}^{(i)}$ representing the pure component internal energy. In the RMA limit \cite{Gujrati2003}, we obtain
\begin{equation}
\beta E_{\text{int}}^{(i)}\ ^{\underrightarrow{\text{RMA}}}\ -\chi_{0i}\phi_{\text{m}i}^{\text{P}2}/v_{0},\ \ \beta P_{\text{int}}^{(i)}\ ^{\underrightarrow{\text{RMA}}}\ -\chi_{0i}\phi_{\text{m}i}^{\text{P}2}/v_{0}, \label{RMAdensities1}%
\end{equation}
by restricting (\ref{RMA3},\ref{RMA2}) to a single material component $i>0$ in addition to the species $0.\ $Here, we have used the fact that $\chi_{0i}=-(1/2)\beta qe_{ii},$ and that $1-\phi_{\text{m}i}^{\text{P}}$ represents the pure component free volume density$.$ We notice that in the RMA limit,
\[
E_{\text{int}}^{(i)}\ ^{\underrightarrow{\text{RMA}}}\ P_{\text{int}}^{(i)}%
\]
for a pure component. But this equality will not hold when we go beyond RMA.

\subsection{Infinite Temperature Behavior}
In the limit $\beta\rightarrow0$ at fixed $e_{ij},$ the limiting forms of $E_{\text{int}}$ and $P$ are
\begin{align*}
E_{\text{int}} & \rightarrow\sum_{i\geq j}e_{ij}\phi_{ij}^{0}/v_{0},\\
\beta Pv_{0} & \rightarrow\beta P_{\text{ath}}v_{0}\equiv-\ln\phi_{0}+(q/2)\ln(2\phi_{\text{u}}/q),
\end{align*}
and
\[
\beta P_{\text{int}}v_{0}\rightarrow0.
\]
For any finite pressure $P\,,$ we immediately note that $\phi_{0}\rightarrow1,$ that is, the entire lattice is covered by voids with probability 1.\ This shows that at a fixed and finite pressure $P,$ $\phi$ and $\phi_{ij}^{0},$ and therefore, $E_{\text{int}}$ vanish as $T\rightarrow\infty$.
On the other hand, if the volume is kept fixed, which requires the free volume density $\phi_{0}$ to be strictly less than 1, then $P\rightarrow\infty,$ as $T\rightarrow\infty,$ and $\phi$ and $\phi_{ij}^{0},$ and therefore, $E_{\text{int}}$ do not vanish as $T\rightarrow\infty.$ Thus, whether there is cohesion or not at infinite temperatures depends on the process carried out. This will emerge in our calculation below.

\subsection{Choice of Parameters for Numerical Results}
In the following, we will take $v_{0}=1.6\times10^{-28}$ m$^{3}$ for all calculations. We will take $e_{11}=-2.6\times10^{-21}$ J, and $e_{22}=-2.2\times10^{-21}$ J for the two components, unless specified otherwise. The degree of polymerization $M$ will be allowed to take various values between $10$ and $500,$ and $q$ is allowed to take various values between $6$ and $14$, as specified case by case. In some of the results, we will keep the product $eq$ fixed, for example $eq=-1.56\times10^{-20}$ J$,$ as we change $q$, so that $e$ changes with $q,$ since the cohesive energy density is determined by this product; see below.

\section{Internal Pressure and ($\partial\mathcal{E}/\partial V$)$_{T}$}
The internal pressure $P_{\text{IN}}=-P_{\text{int}}$ is obtained by subtracting $P_{\text{ath}},$ see (\ref{P_IN}), from the isentropic volume derivative of the energy $P=-(\partial\mathcal{E}/\partial V)_{S}$, which follows from the first law of thermodynamics; the derivative is also supposed to be carried out at fixed number of particles $N_{\text{P}}$. We will assume here that $P_{\text{ath}}$ is the pressure of the athermal system at the same volume as the real system $(V_{\text{ath}}=V)$. The internal pressure is not the same as the isothermal derivative $-(\partial\mathcal{E}/\partial V)_{T}.$ To demonstrate this, we start with the thermodynamic identity
\[
-(\partial\mathcal{E}/\partial V)_{T}=P-T(\partial P/\partial T)_{V}.
\]
The derivatives here and below are also at fixed $N_{\text{P}}$.
We now note that the internal energy is a sum of kinetic and potential (interaction) energies $K$ and $U,$ respectively. If $U$ does not depend on particle momenta, which is the case we consider here, then the canonical partition function in classical statistical mechanics becomes a product of two terms, one of which depends only on particle momenta, and the other one on $U$. Thus, the entropy $S$ of the system in classical statistical mechanics is a sum of two terms $S_{\text{KE}}$ and $S_{\text{conf}}$, where $S_{\text{KE}}$ depends on $K$, and $S_{\text{conf}}$ on $U;$ see for example, \cite{Fedor}. The entropy $S_{\text{KE}}$ due to the kinetic energy depends on $K$ and is independent of the volume and the interaction energy. The configurational entropy $S_{\text{conf}}$ of the system, on the other hand, is determined by $U$ and is a function only of the volume $V$\ when there is \emph{no} interaction ($U=0$), i.e. in the athermal state. Thus, for a system in the athermal state, \[ S_{\text{ath}}(K,V)=S_{\text{conf,ath}}(V)+S_{\text{KE}}(K), \] where $K(T)$ is the kinetic energy of the system at a given temperature $T$ \cite{Fedor}, so that% \begin{equation} (\partial S_{\text{ath}}/\partial V)_{K}=(\partial S_{\text{conf,ath}% }/\partial V)=\beta P_{\text{ath}}. \label{ath_S_der}% \end{equation} Since the identity (\ref{ath_S_der}) is valid in general for the athermal entropy, we conclude that $\beta P_{\text{ath}}$ is a function independent of the temperature. Rather, it is a function of $N_{\text{P}},$ and $V$; indeed, it must be simply a function of the number density $n_{\text{P}}\equiv N_{\text{P}}/V$ of particles. Consequently, the athermal pressure $P_{\text{ath}}$ depends linearly on the temperature, so that \[ T(\partial P_{\text{ath}}/\partial T)_{V}=P_{\text{ath}}. 
\] Using this observation, we find that \begin{equation} (\partial\mathcal{E}/\partial V)_{T}=P_{\text{IN}}-T(\partial P_{\text{IN}% }/\partial T)_{V}=[\frac{\partial}{\partial\beta}(\beta P_{\text{IN}})]_{V}, \label{EV_Derivative}% \end{equation} clearly establishing that $P_{\text{IN}}\neq(\partial\mathcal{E}/\partial V)_{T},$ unless $(\partial P_{\text{IN}}/\partial T)_{V}$ vanishes, which will happen if $P_{\text{int}}$ becomes independent of the temperature (at constant volume). (As we will see below, this will happen in the RMA limit.) It is also possible for $P_{\text{IN}}\ $and $(\partial\mathcal{E}/\partial V)_{T}$ to be the same at isolated points, the extrema of $P_{\text{IN}}$ as a function of $T,$ where $(\partial P_{\text{IN}}/\partial T)_{V}=0.$ In the RMA limit, we have% \begin{equation} P_{\text{IN}}\equiv-P_{\text{int}}\ ^{\underrightarrow{\text{RMA}}}% \ -qe\phi_{\text{m}}^{2}/2v_{0}, \label{Pin_RMA}% \end{equation} so that $(\partial P_{\text{int}}/\partial T)_{V}=0.$ Thus, we conclude that \[ P_{\text{IN}}\ ^{\underrightarrow{\text{RMA}}}\ (\partial\mathcal{E}/\partial V)_{T}\text{, and }c^{\text{P}}\ ^{\underrightarrow{\text{RMA}}}% \ P_{\text{IN}}% \] in the RMA limit, as claimed earlier. Considering the RMA behavior of $P_{\text{IN}},$ and its equality with $c^{\text{P}}$ in the same limit, we infer that we can also use the internal pressure $P_{\text{IN}}$ as an alternative quantity to measure cohesiveness, as was first noted by Hildebrand \cite{Hildebrand1916}. However, in general, the two quantities $P_{\text{IN}}$ and $c^{\text{P}}$ are not going to be the same. We can directly calculate the internal pressure and its temperature-dependence either at constant volume or at constant pressure in our theory from (\ref{pressureEq}). 
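The identity behind (\ref{EV_Derivative}), $P_{\text{IN}}-T(\partial P_{\text{IN}}/\partial T)_{V}=[\partial(\beta P_{\text{IN}})/\partial\beta]_{V}$, holds for any smooth function of $T$ at fixed $V$, and can be checked by finite differences; the functional form below is an arbitrary assumed toy, not the model's actual $P_{\text{IN}}$.

```python
import math

# Finite-difference check of P - T (dP/dT) == d(beta P)/dbeta, beta = 1/(kB T),
# for an arbitrary smooth toy P(T) (assumed form; not the model's P_IN).
kB = 1.380649e-23  # J/K

def P(T):
    return 1.0e7 * (1.0 + math.exp(-200.0 / T))  # toy internal pressure, Pa

T0, h = 300.0, 1e-4
dPdT = (P(T0 + h) - P(T0 - h)) / (2 * h)
lhs = P(T0) - T0 * dPdT                          # P - T dP/dT at T0

beta0 = 1.0 / (kB * T0)
db = 1e-7 * beta0

def betaP(beta):
    return beta * P(1.0 / (kB * beta))

rhs = (betaP(beta0 + db) - betaP(beta0 - db)) / (2 * db)  # d(beta P)/dbeta
print(lhs, rhs)
```

The two sides agree to the accuracy of the finite differences, since $\beta\,\partial P/\partial\beta=-T\,\partial P/\partial T$ follows directly from $\beta=1/k_{\text{B}}T$.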
The internal pressure $P_{\text{IN}}$ is given by the negative of $P_{\text{int}}$ in (\ref{P_int}), and can be used as an alternative quantity that is just as good a measure of the cohesion as the cohesive energy density \cite{Hildebrand1916}, as was discussed above. In Fig. \ref{F1}, we show the interaction pressure $P_{\text{int}}v_{0}$ for a symmetric blend ($M=100$) as a function of $kT$. Both axes are in an arbitrary energy unit. In the same energy unit, the interaction energies are $e_{11}=-0.25=e_{22},$ and $e_{12}=-0.249,$ and the coordination number is $q=6$. The free volume density is kept fixed at $\phi_{0}=0.1,$ as we change the temperature$.$ This means that we also keep the total monomer density fixed. Thus, if we consider the case of a fixed amount of both polymers, then this analysis corresponds to keeping the volume fixed as the temperature is varied. In other words, $P_{\text{int}}$ in Fig. \ref{F1} is for an isochoric process. We immediately notice that $P_{\text{int}}$ not only changes with $T,$ so that the second term in (\ref{EV_Derivative}) does not vanish, but it is also non-monotonic. Moreover, the difference between $(\partial\mathcal{E}/\partial V)_{T}$ and $P_{\text{IN}}$ could be substantial, especially at low temperatures (see Fig. \ref{F1}), so that using $(\partial\mathcal{E}/\partial V)_{T}$ for $P_{\text{IN}}$ could be quite misleading. The minimum in $P_{\text{int}}$ occurs at $kT\approx2.0$ in Fig. \ref{F1}. At this point, $(\partial\mathcal{E}/\partial V)_{T}$ and $P_{\text{IN}}$ become identical, but nowhere else. However, they become asymptotically close to each other at very high temperatures where the slope $(\partial P_{\text{IN}}/\partial T)_{V}$ gradually vanishes. Let us fix the arbitrary energy unit to be $1.38\times10^{-21}$ J (equal in magnitude to $100k$), so that $kT=2.0$ in this energy unit is the approximate location of the minimum in $P_{\text{int}}$ in Fig. \ref{F1}.
The minimum occurs around $T=200$ K, i.e., $t_{\text{C}}=-73.15$ $^{\circ}$C. Thus, $P_{\text{IN}}$ is a decreasing function of temperature approximately above $t_{\text{C}}\simeq-73.15$ $^{\circ}$C, and an increasing function below it. The location of the maximum in $P_{\text{IN}}$ will change with the applied pressure, the two degrees of polymerization, the interaction energies, etc. The significant feature of Fig. \ref{F1} is the very broad flat region near the minimum, which implies a very broad flat range in temperature $t_{\text{C}}$ near the maximum of $P_{\text{IN}}$.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=0.749561in 2.993454in 1.336173in 2.672081in,
height=3.0242in,
width=3.6806in
]%
{F1.ps}%
\caption{Non-monotonic behavior of $v_{0}P_{\text{int}}$ with $kT$ (arbitrary energy scale). }%
\label{F1}%
\end{center}
\end{figure}
In Fig. \ref{F2}, we show $P_{\text{IN}}$ (MPa) as a function of the temperature $t_{\text{C}}$ ($^{\circ}$C) for a pure component ($M=100,$ $q=10,$ $v_{0}=1.6\times10^{-28}$ m$^{\text{3}},$ and $e=-2.6\times10^{-21}~$J) for a constant $V$ ($\phi_{0}=0.012246$)$,$ and constant $P$ ($1.0$ atm). At 0$^{\circ}$C, the system has $\phi_{0}=0.012246,$ and $P=1.0$ atm in both cases so that isobaric and isochoric $P_{\text{IN}}$ match there, as seen in Fig. \ref{F2}. We notice that isochoric $P_{\text{IN}}$ has a very weak dependence on $t_{\text{C}}$ (suggesting that the temperature range in Fig. \ref{F2} may be near the maximum of $P_{\text{IN}}),$ while the isobaric calculation provides a strongly temperature-dependent $P_{\text{IN}},$ which asymptotically goes to zero. This difference in the isobaric-isochoric behavior is consistent with our claim above. The difference between the two internal pressures finally reaches a constant equal to about $80$ MPa at very high temperatures$.$ We also show the derivative $(\partial\mathcal{E}/\partial V)_{T}$ calculated for the isochoric case for comparison.
We use isochoric $P_{\text{IN}}$ and (\ref{EV_Derivative}) to obtain isochoric $(\partial\mathcal{E}/\partial V)_{T}.$ We notice that it differs from isochoric $P_{\text{IN}}$ by a small amount over the entire temperature range considered. Near 0$^{\circ}$C, they differ by about 1 MPa, and this difference decreases as the temperature rises so that they approach each other at higher temperatures. This is consistent with what we learned from Fig. \ref{F1} above. We also notice that $P_{\text{IN}}>(\partial\mathcal{E}/\partial V)_{T}$ over the temperature range in Fig. \ref{F2}, and the difference gradually vanishes. From (\ref{EV_Derivative}), we conclude that $(\partial P_{\text{IN}}/\partial T)>0$ for this to be true. This corresponds to $(\partial P_{\text{int}}/\partial T)<0$ in Fig. \ref{F2}. Thus, the temperature range is below the minimum of $P_{\text{int}}$; refer to Fig. \ref{F1}.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=1.128415in 6.022026in 1.130859in 0.878988in,
height=3.768in,
width=5.9171in
]%
{F2.ps}%
\caption{Various isochoric and isobaric pressures vs temperature for a pure component ($M=100,$ $q=10,$ $v_{0}=1.6\times10^{-28}$ m$^{\text{3}},$ and $e=-2.6\times10^{-21}~$J). In the inset, we show the cohesive pressure at $1.0$ atm for two different $M$ ($M=100$: component 1 and $M=10$: component 2), with the smaller $M$ showing a discontinuity at its boiling point around $600^{\circ}$C.}%
\label{F2}%
\end{center}
\end{figure}

\section{Pure Component Cohesive Energy Density or Pressure}

\subsection{Athermal Reference State}
The following discussion in this section is general and not restricted to a lattice except for the numerical results, which are on a lattice. Therefore, we will include the kinetic energy of the system in this discussion. The definition of $\delta$ requires considering a pure system consisting of one particular component.
To be specific, we consider this to be the component $i=1,$ and we will not exhibit the index unless needed for clarity. At a given temperature $T$, and pressure $P$ or volume $V$, one calculates the interaction energy $E_{\text{int}}$ per unit volume by subtracting the kinetic energy $K$\ due to motion from the total internal energy $\mathcal{E}$ of the system. We will assume in the following that the interaction energy depends only on the coordinates and not on the momenta of the particles. In classical statistical mechanics, $K$ is a function only of the temperature $T$ but not of the volume \cite{Fedor}, and can be obtained by considering a \emph{fictitious} pure component at the same $T$, and $P$ (or $V$)$,$ containing the same number of particles, but with particles having \emph{no} interaction ($U=0$)$.$ This fictitious reference system is, as said above, known as the \emph{athermal }reference\emph{ }state. Then, $K=\mathcal{E}% _{\text{ath}},$ where $\mathcal{E}_{\text{ath}}$ is the energy of the athermal system. This reference state is equivalent to the conventional view according to which one considers the gaseous state of the system at the same $T$, but at zero pressure, so that $V\rightarrow\infty,$ which allows for infinite separation between particles to ensure $\mathcal{E}_{\text{int}}=0.$ The energy of the gaseous state is exactly $\mathcal{E}_{\text{ath}},$ which is independent of the volume, even though the physical state requires $V\rightarrow\infty$\ for the absence of interactions. We will continue to use the athermal system instead of the gas phase since the latter requires considering different pressure or volume than the pressure or volume of the physical system under consideration. 
Thus, we find from (\ref{InteractionEnergy})
\[
E_{\text{int}}\equiv E-(V_{\text{ath}}/V)E_{\text{ath}},
\]
where $E\equiv\mathcal{E}/V$ is the energy density of the pure component (with interactions), $E_{\text{ath}}\equiv\mathcal{E}_{\text{ath}}/V_{\text{ath}}$ is the energy density of the athermal reference state ($e\equiv e_{11}=0$), and $V$ and $V_{\text{ath}}$ are the corresponding volumes. As said above, $E_{\text{int}}$ has different limits at infinite temperatures depending on whether we consider an isochoric or isobaric process.

\subsection{Cohesive Energy Density}
The cohesive energy density $c^{\text{P}}$ measures the interaction energy per unit volume of the particles in the pure component. We set $e\equiv e_{11},$ which must be negative for cohesion. The cohesive energy density $c^{\text{P}}$ for the pure component can be thought of as a function of $T$ and $P,$ or of $T$ and $V,$ as the case may be$.$ It is obvious that $E_{\text{int}}$ vanishes as the interaction energy $e$ vanishes. This will also be a requirement for the cohesive energy density: $c^{\text{P}}$ should vanish with the interaction energy. We have calculated $c^{\text{P}}$ as a function of $T$ for isochoric (constant volume) and for isobaric (constant pressure) processes. We denote the two quantities by $c_{V}^{\text{P}}$ and $c_{P}^{\text{P}},$ respectively, and show them in Fig. \ref{F2}, where they can also be compared with $P_{\text{IN}}$ for isochoric and isobaric processes, respectively. The conditions for the two processes are such that they correspond to identical states at $0^{\circ}$C. We find that while the two quantities $c^{\text{P}}$ and $P_{\text{IN}}$ are very different for the two processes, they are similar for each process.
We again note that $c_{V}^{\text{P}}$ and $c_{P}^{\text{P}}$ behave very differently, as discussed above, with $c_{V}^{\text{P}}\geq c_{P}^{\text{P}}.$ Not surprisingly, the same inequality also holds for $P_{\text{IN}}.$ The almost constancy of isochoric $c^{\text{P}}$ and $P_{\text{IN}}$ provides a strong argument in support of their usefulness as suitable candidates for the cohesive pressure. This should not be taken to imply that $c^{\text{P}}$ and $P_{\text{IN}}$ remain almost constant in every process, as the isobaric results above clearly establish. Unfortunately, most of the experiments are done under isobaric conditions; hence the use of isochoric cohesive quantities may not be useful, and may even be misleading, so care has to be exercised. In the lattice model with only nearest-neighbor interactions, $E_{\text{int}}=e\phi_{\text{c}}/v_{0},$ where $\phi_{\text{c}}\equiv\phi_{11}$ is the contact density between monomers (of component $i=1);$ hence
\begin{equation}
\delta^{2}\equiv c^{\text{P}}\equiv-e\phi_{\text{c}}/v_{0}. \label{cohesive_defL}%
\end{equation}

\paragraph{RMA\ Limit}
In the RMA limit, we find from (\ref{RMAdensities1}) that%
\begin{equation}
c^{\text{P}}\ ^{\underrightarrow{\text{RMA}}}\ -qe\phi_{\text{m}}^{2}/2v_{0}, \label{RMA_Cohesive}%
\end{equation}
where $\phi_{\text{m}}$ represents the pure component monomer density in this section. We thus find that the ratio
\[
c^{\text{P}}/qe\phi_{\text{m}}^{2}\overset{\text{RMA}}{=}\text{const}%
\]
in this limit. Since the product $qe$ remains a constant in this limit, the ratio
\[
\widetilde{c}\equiv c^{\text{P}}/\phi_{\text{m}}^{2}\overset{\text{RMA}}{=}\text{const}%
\]
as well. Both these properties will not hold true in general.
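For orientation, (\ref{RMA_Cohesive}) already gives the right magnitude for the cohesive pressure. The sketch below applies the RMA formula to the parameter values used for the pure component in Fig. \ref{F2}; since the actual system has finite $q$, this is only an estimate, not the full recursive result.

```python
# Order-of-magnitude estimate of the cohesive pressure from the RMA formula
# (RMA_Cohesive), c^P = -q e phi_m^2 / (2 v0), using the parameter values of
# the pure-component example in the text (q = 10, e = -2.6e-21 J,
# v0 = 1.6e-28 m^3, phi_0 = 0.012246).
q = 10
e = -2.6e-21              # J
v0 = 1.6e-28              # m^3
phi_m = 1.0 - 0.012246    # monomer density

c_RMA = -q * e * phi_m**2 / (2 * v0)   # Pa
print(f"{c_RMA/1e6:.1f} MPa")          # roughly 79 MPa
```

The result, roughly $79$ MPa, is of the same order as the cohesive pressures of tens of MPa quoted in the text for the full theory.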
\paragraph{End Group Effects}
In the following, we will study the combinations
\begin{equation}
\widehat{c}\equiv2c^{\text{P}}v_{0}/[(q-2\nu)e\phi_{\text{m}}^{2}],\ \ c^{\prime}\equiv c^{\text{P}}/(q-2\nu) \label{mod_cp}%
\end{equation}
which incorporate the end group effects along with the linear connectivity via the correction $2\nu,$ where
\[
\nu\equiv(M-1)/M.
\]
In the RMA limit,%
\[
\widehat{c}\ ^{\underrightarrow{\text{RMA}}}\ 1,
\]
and%
\[
c^{\prime}/e\ ^{\underrightarrow{\text{RMA}}}\ \phi_{\text{m}}^{2}/2v_{0}.
\]
We will investigate how close $\widehat{c}$ is to $1$ in our calculation for finite $q,$ and how close $c^{\prime}$ is to a quadratic form in $\phi_{\text{m}}$.

\subsection{van der Waals Fluid}
To focus our attention, let us consider a fluid which is described by the van der Waals equation. It is known that for this fluid, $E_{\text{int}}=-N_{\text{P}}^{2}a/V^{2},$ where $a>0$ is determined by the integral \cite{Landau}
\begin{equation}
a\equiv-(1/2)\int_{2r_{0}}^{\infty}udV; \label{vdW_a}%
\end{equation}
here $u$ is a two-body potential function. It is clear that $a$ is \emph{independent} of $T$ for the van der Waals fluid. We can also treat it as independent of $V$ (any $V$-dependence must be very weak and can be neglected). The lower limit of the integral is the zero of the two-body potential energy. Thus,%
\begin{equation}
c^{\text{P}}=an_{\text{P}}^{2}, \label{vdWc}%
\end{equation}
where $n_{\text{P}}$ is the particle number density; for polymers, $n_{\text{P}}\equiv\phi_{\text{m}}/Mv_{0}$. It is interesting to compare (\ref{vdWc}) with (\ref{RMA_Cohesive}) derived in the RMA limit. They both show the \emph{same} quadratic dependence on the number density.
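For a concrete (assumed) attractive tail $u(r)=-\epsilon(r_{0}/r)^{6}$, the integral (\ref{vdW_a}) closes analytically to $a=\pi\epsilon r_{0}^{3}/12$, which a direct numerical quadrature reproduces; the well depth and core radius below are illustrative values, not taken from the text.

```python
import math

# Sketch of Eq. (vdW_a): a = -(1/2) Integral_{2 r0}^inf u dV, for an assumed
# attractive tail u(r) = -eps * (r0/r)^6 (illustrative choice of potential).
# For this tail the integral closes analytically: a = pi * eps * r0^3 / 12.
eps, r0 = 1.0e-21, 3.0e-10     # assumed well depth (J) and core radius (m)

def u(r):
    return -eps * (r0 / r)**6

# midpoint rule over r in [2 r0, R] with dV = 4 pi r^2 dr; the tail beyond
# R = 200 r0 contributes a relative 1e-6 and is neglected
R, n = 200 * r0, 200000
dr = (R - 2 * r0) / n
a_num = 0.0
for k in range(n):
    r = 2 * r0 + (k + 0.5) * dr
    a_num += -0.5 * u(r) * 4 * math.pi * r**2 * dr

a_exact = math.pi * eps * r0**3 / 12
print(a_num, a_exact)
```

The $T$-independence of $a$ is manifest here: the quadrature involves only the potential and geometry, never the temperature.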
We also note that $c^{\text{P}}$ vanishes as $a$ vanishes, as we expect, but most importantly the ratio $c^{\text{P}}/n_{\text{P}}^{2}$ is independent of the temperature, just as $\widetilde{c}$ is a constant in the RMA limit$.$ Deviation from the quadratic dependence will be observed in most realistic systems, since they cannot be described by this approximation. An interesting observation about this fluid is that $P_{\text{int}}$ is exactly equal to $E_{\text{int}};$ thus, $c^{\text{P}}=-P_{\text{int}}$ for this fluid. In addition, it is well known that for this fluid
\begin{equation}
(\partial\mathcal{E}/\partial V)_{T}=(\partial\mathcal{E}_{\text{int}}/\partial V)_{T}=an_{\text{P}}^{2}, \label{vdWdEdV}%
\end{equation}
since $a$ is temperature-independent. Thus,
\begin{equation}
P_{\text{IN}}=(\partial\mathcal{E}/\partial V)_{T} \label{PINdEdV}%
\end{equation}
for the van der Waals fluid, in which $a$ must be taken as $T$-independent. The equality (\ref{PINdEdV}) is not always valid as discussed above.

\subsection{The Usual Approximation}
Usually, one approximates $E_{\text{int}}$ by the energy density of vaporization at the boiling point at $T=T_{\text{B}}.$ This means that one approximates $c^{\text{P}}(T,P)$ by its value $c^{\text{P}}(T_{\text{B}},P),$ which will be a function of $P$, but not of $T.$ On the contrary, $c^{\text{P}}(T,P)$ will show variation with respect to both variables. In addition, it will also change with the lattice coordination number $q,$ and the interaction energy $e$. It is clear that isobaric $c^{\text{P}}(T,P)$ will show a discontinuity at the boiling point, as shown in the inset in Fig. \ref{F2}, where we report isobaric $c^{\text{P}}$ for two different molecular weights $M=10,$ and $M=100$ at $1.0$ atm$.$ The shorter polymer system boils at about $600^{\circ}$C (and $1.0$ atm$)$, but not the longer polymer over the temperature range shown there.
At $0^{\circ}$C, the smaller polymer has a slightly higher value of isobaric $c^{\text{P}}.$ If we calculate isochoric $c^{\text{P}}$ at a volume equal to that in the isobaric case at $0^{\circ}$C and $1.0$ atm$,$ then the corresponding isochoric $c^{\text{P}},$ which is almost a constant (as shown in the main figure in Fig. \ref{F2}), will be about $65$ MPa, close to its value at $0^{\circ}$C. This value is much larger than $c^{\text{P}}(T_{\text{B}},P)$ of about $15$ MPa at the boiling point. Thus, the conventional approximation cannot be taken as a very good estimate of the cohesiveness, which clearly depends on the state of the system$.$%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=0.935322in 6.018832in 0.937766in 0.937516in,
height=3.7135in,
width=6.3027in
]%
{F3.ps}%
\caption{Cohesive energy density as a function of pressure for different $q$. We also show $c^{\prime}$ and $\widehat{c}.$}%
\label{F3}%
\end{center}
\end{figure}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=1.070568in 5.620840in 0.803333in 0.804498in,
height=4.2436in,
width=6.3019in
]%
{F4.ps}%
\caption{Monomer density as a function of pressure for different $q$. For $q\geq8,$ we have a liquid which gets denser as $q$ increases. For $q=6,$ we have a gas phase at low pressure which becomes a lighter liquid at higher pressure, with a liquid-gas transition around 0.05 atm. }%
\label{F4}%
\end{center}
\end{figure}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=0.902732in 5.818773in 0.804963in 0.706596in,
height=4.1451in,
width=6.4679in
]%
{F5.ps}%
\caption{$\widehat{c}$ as a function of pressure at $500^{\circ}$C for different $q$. As $q$ increases, $\widehat{c}$ approaches $1,$ its limiting RMA value, which also forms its upper limit. For any finite $q$, $\widehat{c}$ is strictly lower.
}%
\label{F5}%
\end{center}
\end{figure}

\subsection{Numerical Results}
We will evaluate various quantities for isochoric and isobaric processes$.$ For isochoric calculations, we calculate $\phi_{\text{m}}$ at some reference $T,P.$ Usually, we take it to be at $1^{\circ}$C, and $1.0$ atm. We then keep $\phi_{\text{m}}$ fixed as we change the temperature. This is equivalent to keeping the volume fixed for a given amount of the polymers. For isobaric (constant $P$) calculations, we again start at $1^{\circ}$C, and take a certain pressure such as $1.0$ atm, and keep the pressure fixed by adjusting $\phi_{\text{m}}$ as we change the temperature. For a given amount of polymers, this amounts to adjusting the volume of the system. We show $c^{\text{P}}$ in Fig. \ref{F3}, and the monomer density $\phi_{\text{m}}$ in Fig. \ref{F4}, as a function of $P$ ($P\leq2$ atm) for different values of $q$ from $q=6$ to $q=14.$ We have taken $M=500,$ $v_{0}=1.6\times10^{-28}$ m$^{\text{3}},$ and $e=-2.6\times10^{-21}~$J$,$ and have set $t_{\text{C}}=500^{\circ}$C. We see that $c^{\text{P}}$ increases with $q$, which is expected; see (\ref{RMA_Cohesive}). To further analyze this dependence, we plot $c^{\prime}$ in Fig. \ref{F3}. There is still a residual increase with $q.$ The increase is also partially due to the fact that $\phi_{\text{m}}$ increases with $q,$ as shown in Fig. \ref{F4}. Therefore, we also plot $\widehat{c}$ as a function of $P$ in Fig. \ref{F3}$.$ We notice that there is still a residual dependence on $q$ with $\widehat{c}$ increasing with $q,$ and reaching $1.0$ from below as $q$ increases, which is consistent with the RMA limit. In Fig. \ref{F5}, we plot $\widehat{c}$ and $\phi_{\text{m}}$ (in the inset) for the same system over a much wider range of pressure for different $q.$ From the behavior of $\widehat{c}$ and $\phi_{\text{m}},$ we easily conclude that $c^{\text{P}}$ changes strongly with $P$ for smaller $q,$ and the dependence gets weaker as $q$ increases. In Fig.
\ref{F6}$,$ we plot $c^{\text{P}}$ for $q=12$ as a function of the monomer density $\phi_{\text{m}}.$ We have taken $M=500,$ $v_{0}=1.6\times10^{-28}$ m$^{\text{3}},$ and $e=-2.6\times10^{-21}~$J$,$ and have set $t_{\text{C}}=500^{\circ}$C. According to (\ref{RMA_Cohesive}), it should be a quadratic function of the monomer density. To see if this is true for the present case of finite $q,$ we plot the ratio $c^{\text{P}}/\phi_{\text{m}}^{2}$ in the inset, which clearly shows that there is still a strong residual dependence left in the ratio. Thus, $c^{\text{P}}$ in a realistic system should not be quadratic in $\phi_{\text{m}}.$%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=0.902732in 5.820900in 0.903547in 0.903464in,
height=3.9444in,
width=6.3693in
]%
{F6.ps}%
\caption{$c^{\text{P}}$ as a function of $\phi_{\text{m}}.$ In the inset, we show $\widetilde{c},$ showing that $c^{\text{P}}$ is not quadratic in $\phi_{\text{m}},$ as suggested by the regular solution theory.}%
\label{F6}%
\end{center}
\end{figure}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=0.901917in 5.921995in 0.904361in 0.904527in,
height=3.8432in,
width=6.3693in
]%
{F7.ps}%
\caption{Cohesive energy density as a function of pressure for different molecular weights. }%
\label{F7}%
\end{center}
\end{figure}
In Fig. \ref{F7}, we show $c^{\text{P}}$ as a function of $P$ for small pressures for different $M$ ranging from $M=10$ to $M=500;$ we keep $q=14,$ $v_{0}=1.6\times10^{-28}$ m$^{\text{3}},$ and $e=-2.6\times10^{-21}~$J$,$ and have set the temperature $t_{\text{C}}=25^{\circ}$C. We see that $c^{\text{P}}$ decreases as $M$ increases, but different curves are almost parallel, indicating that it is not the slope, but the magnitude that is affected by $M.$ In Fig.
\ref{F8}, we show $c^{\text{P}}$ as a function of $P$ for small pressures for different temperatures ranging from $t_{\text{C}}=25^{\circ}$C to $t_{\text{C}}=5000^{\circ}$C$;$ we keep $q=14,$ $M=100,~v_{0}=1.6\times10^{-28}$ m$^{\text{3}},$ and $e=-2.6\times10^{-21}~$J. We immediately note that the pressure variation is quite minimal over such a small range of pressure between 1 and 25 atm. However, the temperature variation is quite pronounced, again affecting the magnitude and not the slope. There are two important observations. (i) For the highest temperature $5000^{\circ}$C, the state corresponds to a gas, since $c^{\text{P}}\simeq0.$ (ii) At low temperatures, $c^{\text{P}}$ reaches an asymptotic value since the system has become almost incompressible.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=0.902732in 5.820900in 0.903547in 0.903464in,
height=3.9444in,
width=6.3693in
]%
{F8.ps}%
\caption{Cohesive energy density as a function of pressure at different temperatures. }%
\label{F8}%
\end{center}
\end{figure}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=0.902732in 5.820900in 0.903547in 0.903464in,
height=3.9444in,
width=6.3693in
]%
{F9.ps}%
\caption{$c^{\text{P}}$ as a function of $\phi_{\text{m}}$ for different $q$ when $e(q/2-\nu)$ is kept fixed at ($-5.21\times10^{-21}$J).}%
\label{F9}%
\end{center}
\end{figure}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
trim=0.902732in 5.820900in 0.903547in 0.903464in,
height=3.9444in,
width=6.3693in
]%
{F10.ps}%
\caption{$c^{\text{P}}$ as a function of $\phi_{\text{m}}$ for different $q$ when $eq$ is kept fixed at ($-1.56\times10^{-20}$J).}%
\label{F10}%
\end{center}
\end{figure}
In Fig. \ref{F9}, which is for the pure component studied in Fig. \ref{F3}, we keep the product $e(q/2-\nu)$ fixed at ($-5.21\times10^{-21}$J)$,$ as we change $q$, so that $e$ changes with $q.$ We keep $M=500,$ $v_{0}=1.6\times10^{-28}$ m$^{\text{3}},$ and $t_{\text{C}}=500^{\circ}$C.
The effect of changing $q$ is now minimal. The cohesive energy density varies little with $q$, except that it is higher for larger $q$; the difference first increases with $\phi_{\text{m}}$ and then decreases as $\phi_{\text{m}}\rightarrow1.$\ In Fig. \ref{F10}, we keep instead the product $eq$ fixed, so that there is no endpoint correction. In contrast with Fig. \ref{F9}, we find that $c^{\text{P}},$ while still increasing with $q$, has the property that its difference for different $q$ continues to increase with $\phi_{\text{m}}$ as $\phi_{\text{m}}\rightarrow1.$ It is clear from both figures that $c^{\text{P}}$ increases with $q$ at a given $\phi_{\text{m}}$ and saturates as $q$ becomes large, which is the direction in which $q$ must increase to obtain the RMA limit. Thus, $c^{\text{P}}$ achieves its maximum value at a given density $\phi_{\text{m}}$ in the RMA approximation:\ for any realistic system in which $q$ is some finite value, the value of the cohesive energy at that density $\phi_{\text{m}}$\ will be strictly smaller. It is also evident from the inset in both figures that at a given pressure $P$, $c^{\text{P}}$ achieves its minimum value in the RMA approximation. The results for isochoric and isobaric calculations are shown in Fig. \ref{F11}. We consider $q=14,$ $M=100,~v_{0}=1.6\times10^{-28}$ m$^{\text{3}% },$ and $e=-2.6\times10^{-21}~$J$,$ and have taken the pressure to be $P=1.0$ atm\ at the initial temperature $t_{\text{C}}=0^{\circ}$C. This corresponds to the monomer density $\phi_{\text{m}}=0.997115.$ For the isochoric calculation, we maintain the same monomer density ($\phi_{\text{m}}=0.997115$) at all temperatures. For the isobaric calculation, we maintain the same pressure ($P=1$ atm) at all temperatures. This ensures that isochoric $c_{V}^{\text{P}% }$ \ and isobaric $c_{P}^{\text{P}}$ are identical at $t_{\text{C}}=0^{\circ}% $C. 
The figure clearly shows that $c_{V}^{\text{P}}$ is always higher than $c_{P}^{\text{P}}$, as discussed above$.$ This behavior is not hard to understand. As we raise $T$, the pressure increases in the constant volume calculation above $P=1$ atm, bringing particles closer together, thereby increasing the cohesive energy.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F11.ps}% \caption{Various $c^{\text{P}}$ as a function of temperature. }% \label{F11}% \end{center} \end{figure} \section{Mutual Cohesive Energy Density or Pressure} \subsection{Mutual Interaction} The solubility of one of the components in a binary mixture of components $i=1,2$ increases as the excess energy $\varepsilon_{12}$ decreases for the simple reason that a larger $\varepsilon_{12}$ corresponds to a stronger excess repulsion between the two components. In particular, the two components will not phase separate at any temperature if $\varepsilon_{12}\leq0$. If $\varepsilon_{12}>0,$ the two components will phase separate at low temperatures. Thus, knowing that $\varepsilon_{12}>0$ immediately allows us to conclude that solubility will not occur everywhere. Usually, all energies $e_{ij}$\ are negative, but the sign of $\varepsilon _{12}$\ depends on their relative magnitudes. If, however, we assume the London conjecture (\ref{london_Conj}), then the corresponding excess energy becomes \begin{equation} \varepsilon_{12}=(\sqrt{\left\vert e_{11}\right\vert }-\sqrt{\left\vert e_{22}\right\vert })^{2}/2\geq0. \label{Mutual_Energy}% \end{equation} Thus, the London conjecture implies that the two components experience a repulsive excess interaction so that the solubility decreases as $\varepsilon_{12}$ increases, i.e., as $e_{11}$ and $e_{22}$ become more disparate. 
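The step leading to (\ref{Mutual_Energy}) can be made explicit. The excess energy is the combination $\varepsilon_{12}\equiv e_{12}-(e_{11}+e_{22})/2$ that appears in the Flory-Huggins chi parameter introduced below. If the London conjecture (\ref{london_Conj}) is taken in the geometric-mean form $e_{12}=-\sqrt{\left\vert e_{11}\right\vert \left\vert e_{22}\right\vert }$ for negative $e_{11}$ and $e_{22},$ then
\[
\varepsilon_{12}=-\sqrt{\left\vert e_{11}\right\vert \left\vert e_{22}\right\vert }+\frac{1}{2}(\left\vert e_{11}\right\vert +\left\vert e_{22}\right\vert )=\frac{1}{2}(\sqrt{\left\vert e_{11}\right\vert }-\sqrt{\left\vert e_{22}\right\vert })^{2},
\]
which is (\ref{Mutual_Energy}).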
The maximum solubility in this case occurs when $\varepsilon _{12}=0,$ i.e., when the two components have identical interactions ("like dissolves like"). In this case, the size or architectural disparity cannot diminish the solubility because the entropy of mixing will always promote miscibility. In general, $\varepsilon_{12}>0,$ and it becomes necessary to study solubility under different thermodynamic conditions. To simplify our investigation, we will assume the London conjecture (\ref{london_Conj}) for the mixture in all the calculations. Thus, the two components always experience a repulsive excess interaction in our calculation. (This is also true when we intentionally violate the London conjecture (\ref{london_Conj}) and set $e_{12}=0,$ as is the case for SRS.) In the RMA limit, the conjecture (\ref{london_Conj}) immediately leads to (\ref{london_berthelot_Conj}), as we have seen above. This need not remain true when we go beyond RMA. Thus, we will inquire if (\ref{london_berthelot_Conj}) is satisfied in general for cohesive energies that are calculated from our theory under the assumption that the London conjecture (\ref{london_Conj}) is valid. Any failure of (\ref{london_berthelot_Conj}) under this condition will clearly have significant implications for our basic understanding of solubility. \subsection{van Laar-Hildebrand Approach using Energy of Mixing} We follow van Laar \cite{vanLaar} and Hildebrand \cite{Hildebrand}, and introduce $c_{12}$ by exploiting the energy of mixing $\Delta E_{\text{M}}$ per unit volume. According to the isometric regular solution theory \cite{Hildebrand}, the two are related by% \begin{equation} \Delta E_{\text{M}}=(c_{11}^{\text{P}}+c_{22}^{\text{P}}-2c_{12})\varphi _{1}\varphi_{2}, \label{mixEnergy1}% \end{equation} where $\varphi_{i}$ are supposed to denote the volume fractions. 
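Taken as a definition, (\ref{mixEnergy1}) is trivially inverted for $c_{12}$ once $\Delta E_{\text{M}}$ and the pure component cohesive energy densities are known. The following sketch (not part of the theory; all numerical values are made up for illustration) carries out this inversion, and confirms that when $\Delta E_{\text{M}}$ happens to have the Scatchard-Hildebrand form $(\delta_{1}-\delta_{2})^{2}\varphi_{1}\varphi_{2}$, the inversion returns the geometric mean $c_{12}=\delta_{1}\delta_{2}$:

```python
import math

def mutual_cohesive_density(delta_e_mix, c11, c22, phi1, phi2):
    """Invert Delta E_M = (c11 + c22 - 2*c12) * phi1 * phi2 for c12.

    delta_e_mix : energy of mixing per unit volume (same units as c11, c22)
    c11, c22    : pure-component cohesive energy densities
    phi1, phi2  : volume fractions (phi1 + phi2 = 1)
    """
    return 0.5 * (c11 + c22 - delta_e_mix / (phi1 * phi2))

# Hypothetical values (MPa), chosen only for illustration.
c11, c22 = 90.0, 110.0
phi1, phi2 = 0.5, 0.5

# If the energy of mixing has the Scatchard-Hildebrand form,
# the inversion recovers the geometric mean sqrt(c11*c22).
delta_sh = (math.sqrt(c11) - math.sqrt(c22)) ** 2 * phi1 * phi2
c12 = mutual_cohesive_density(delta_sh, c11, c22, phi1, phi2)
print(c12, math.sqrt(c11 * c22))
```

The printed values coincide, mirroring the fact that (\ref{mixEnergy1}) together with the London-Berthelot conjecture reproduces (\ref{mixEnergy0}).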
Using the London-Berthelot conjecture (\ref{london_berthelot_Conj}), we immediately retrieve (\ref{mixEnergy0}), once we recognize that $\delta_{1}^{2}\equiv c_{11}^{\text{P}},$ and $\delta_{2}^{2}\equiv c_{22}^{\text{P}},$ see (\ref{cohesive_def}) above. We now take (\ref{mixEnergy1}) as the general definition of the mutual cohesive energy density $c_{12}$ in terms of the pure component cohesive energy densities$.$ The extension then allows us to evaluate $c_{12}$ by calculating $\Delta E_{\text{M}},$ provided we know $c_{11}^{\text{P}}$ and $c_{22}^{\text{P}}$ for the pure components; the latter are independent of the composition. It can be argued that since (\ref{mixEnergy1}) is valid only for isometric mixing, it should not be considered a general definition of $c_{12}$ for non-isometric mixing. However, since one of our objectives is to investigate the effects due to isometric and non-isometric mixing, we will adopt (\ref{mixEnergy1}) as the general definition of $c_{12}.$ \subsubsection{Lattice model} The kinetic energy of the mixture is the sum of the kinetic energies of the pure components, all having the same temperature, and will not affect the energy of mixing. Thus, we need not consider the kinetic energy any further. In other words, we can safely use a lattice model in calculating $\Delta E_{\text{M}}.$ This is what we intend to do below.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F12.ps}% \caption{Mutual cohesive energy density as a function of temperature for different $q.$ We have taken a blend at 50-50 composition ($M_{1}% =M_{2}=100;e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{3}$) at 1.0 atm. }% \label{F12}% \end{center} \end{figure} The definition of $c_{12}$ depends on a form (\ref{mixEnergy1}) whose validity in general is questionable, as it is based on RST. 
To appreciate this more clearly, let us find out the conditions under which the RMA limit of our recursive theory will reproduce (\ref{mixEnergy1}). We first note that the interaction energy density (per unit volume) $E_{\text{int}}$ of the mixture from (\ref{energy_Def}) is given by \begin{equation} E_{\text{int}}v_{0}\equiv e_{11}\phi_{11}+e_{22}\phi_{22}+e_{12}\phi _{12}=-c_{11}^{\text{P}}v_{0}\phi_{11}/\phi_{11}^{\text{P}}-c_{22}^{\text{P}% }v_{0}\phi_{22}/\phi_{22}^{\text{P}}+e_{12}\phi_{12}, \label{energy_Def_Mixture}% \end{equation} where we introduced the pure component cohesive energy densities in the last equation. The energy of mixing per unit volume is \begin{equation} \Delta E_{\text{M}}v_{0}=e_{11}\phi_{11}+e_{22}\phi_{22}+e_{12}\phi _{12}-(V_{1}^{\text{P}}/V)e_{11}\phi_{11}^{\text{P}}-(V_{2}^{\text{P}% }/V)e_{22}\phi_{22}^{\text{P}}, \label{mixEnergyL}% \end{equation} where $V_{1}^{\text{P}},$ and $V_{2}^{\text{P}}$ are the pure component volumes. It is easy to see that in general% \begin{equation} V_{1}^{\text{P}}/V=x\phi_{\text{m}}/\phi_{\text{m}1}^{\text{P}}\equiv \phi_{\text{m1}}/\phi_{\text{m}1}^{\text{P}},V_{2}^{\text{P}}/V=(1-x)\phi _{\text{m}}/\phi_{\text{m}2}^{\text{P}}\equiv\phi_{\text{m2}}/\phi_{\text{m}% 2}^{\text{P}}, \label{volumeRatios}% \end{equation} where $x\equiv\phi_{\text{m1}}/\phi_{\text{m}}$ is the monomer fraction of species $i=1$ introduced earlier in (\ref{monomer_fraction})$,$ and $\phi_{\text{m}i}^{\text{P}}$ the pure component monomer density of the $i$th species. The monomer density of both species in the mixture is $\phi _{\text{m}}\equiv\phi_{\text{m1}}+\phi_{\text{m2}}$. 
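The volume ratios in (\ref{volumeRatios}) follow in one step if the monomer densities are written as number densities; assuming $\phi_{\text{m}i}\equiv N_{\text{m}i}v_{0}/V$ in the mixture and $\phi_{\text{m}i}^{\text{P}}\equiv N_{\text{m}i}v_{0}/V_{i}^{\text{P}}$ in the pure component, with $N_{\text{m}i}$ the number of monomers of species $i$ and $N_{\text{m}}\equiv N_{\text{m}1}+N_{\text{m}2},$ we have
\[
\frac{V_{i}^{\text{P}}}{V}=\frac{N_{\text{m}i}v_{0}/\phi_{\text{m}i}^{\text{P}}}{N_{\text{m}}v_{0}/\phi_{\text{m}}}=\frac{N_{\text{m}i}}{N_{\text{m}}}\frac{\phi_{\text{m}}}{\phi_{\text{m}i}^{\text{P}}},
\]
which gives the first equality in (\ref{volumeRatios}) for $i=1$ with $x\equiv N_{\text{m}1}/N_{\text{m}}=\phi_{\text{m1}}/\phi_{\text{m}},$ and similarly for $i=2.$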
\subsubsection{RMA Limit and Monomer Density Equality} In the RMA limit, it is easy to see by the use of (\ref{RMA2}) that the last equation in (\ref{energy_Def_Mixture}) reduces to% \[ E_{\text{int}}~\ ^{\underrightarrow{\text{RMA}}}~-c_{11}^{\text{P}}% (V_{1}^{\text{P}}/V)^{2}-c_{22}^{\text{P}}(V_{2}^{\text{P}}/V)^{2}% -2c_{12}(V_{1}^{\text{P}}/V)(V_{2}^{\text{P}}/V), \] where \begin{equation} c_{ii}^{\text{P}}~\ ^{\underrightarrow{\text{RMA}}}-(1/2)qe_{ii}\phi _{\text{m}i}^{\text{P}2}/v_{0},\text{ }c_{12}\ \ ^{\underrightarrow {\text{RMA}}}\ -qe_{12}\phi_{\text{m}1}^{\text{P}}\phi_{\text{m}2}^{\text{P}% }/2v_{0} \label{RMA_CohesiveDensities}% \end{equation} are the cohesive energy densities in this limit. The form of $c_{ii}% ^{\text{P}}$ is exactly the same as the one derived above for the pure component; see (\ref{RMAdensities1}) and (\ref{vdWc}). It is also a trivial exercise to see that the RMA limit of the energy of mixing (\ref{mixEnergyL}) will exactly reproduce (\ref{mixEnergy1}) provided we identify% \[ \varphi_{1}\equiv V_{1}^{\text{P}}/V,\varphi_{2}\equiv V_{2}^{\text{P}% }/V\text{ ,}% \] and further assume% \[ \text{ }V_{1}^{\text{P}}+V_{2}^{\text{P}}=V, \] so that $\varphi_{1}+\varphi_{2}=1.$ The last condition is nothing but the requirement that mixing be isometric and can be rewritten using (\ref{volumeRatios}) as% \[ \phi_{\text{m}}\phi_{\text{m}1}^{\text{P}}+x\phi_{\text{m}}(\phi_{\text{m}% 2}^{\text{P}}-\phi_{\text{m}1}^{\text{P}})\equiv\phi_{\text{m}1}^{\text{P}% }\phi_{\text{m}2}^{\text{P}}. \] This should be valid for all $x$ including $x=0$ and $x=1.$ (Setting $x=0$ gives $\phi_{\text{m}}=\phi_{\text{m}2}^{\text{P}},$ while $x=1$ gives $\phi_{\text{m}}=\phi_{\text{m}1}^{\text{P}}.$) This can only be true if we require the \emph{monomer density equality}: \begin{equation} \phi_{\text{m}}=\phi_{\text{m}1}^{\text{P}}=\phi_{\text{m}2}^{\text{P}}. 
\label{monomer_DenEq}% \end{equation} This condition is nothing but the \emph{equality of the free volume densities} in the mixture and the pure components, and is a consequence of isometric mixing, as is easily seen from (\ref{MixingVolume}) obtained below. We finally conclude that \begin{equation} \varphi_{1}\equiv x,\varphi_{2}\equiv1-x, \label{VolumeFraction}% \end{equation} as was also the case discussed earlier in the context of (\ref{mixEnergy0}). \subsubsection{Isometric RMA\ Limit} Let us now consider the isometric RMA limit for which we have the simplification \[ V_{1}^{\text{P}}/V=x,V_{2}^{\text{P}}/V=1-x; \] see (\ref{monomer_DenEq}). Thus,% \[ E_{\text{int}}{}\ ^{\underrightarrow{\text{RMA}}}\ -[c_{11}^{\text{P}}% \varphi_{1}^{2}+2c_{12}\varphi_{1}\varphi_{2}+c_{22}^{\text{P}}\varphi_{2}% ^{2}], \] as is well known \cite{Hildebrand}. This form can only be justified in the isometric RMA limit, with the volume fractions given by (\ref{VolumeFraction}% ). Similarly,% \[ \Delta E_{\text{M}}v_{0}{}\ ^{\underrightarrow{\text{RMA}}}\ q[e_{12}% -(1/2)(e_{11}+e_{22})]\phi_{\text{m}1}\phi_{\text{m}2}=(\chi/\beta )\phi_{\text{m}1}\phi_{\text{m}2}, \] where we have introduced the Flory-Huggins chi parameter $\chi\equiv q\beta\varepsilon.$ Using (\ref{monomer_DenEq}), we can rewrite the above energy of mixing in the form (\ref{mixEnergy1}). We finally have for $\Delta E_{\text{M}}$ in the isometric RMA limit% \begin{equation} \Delta E_{\text{M}}{}\ ^{\underrightarrow{\text{RMA}}}\ (\chi\phi_{\text{m}% }^{2}/\beta v_{0})\varphi_{1}\varphi_{2}, \label{RMA_EnergyMix}% \end{equation} again with the volume fractions given by (\ref{VolumeFraction}). \subsubsection{Volume Fractions} In the following, we will always take $\varphi_{i}$ to be given by (\ref{VolumeFraction}). 
This ensures that $\varphi_{1}+$ $\varphi_{2}=1.$ Another possibility is to define $\varphi_{i}$ in terms of partial monomer volumes $\overline{v_{i}}$: \begin{equation} \varphi_{i}\equiv\phi_{\text{m}i}\overline{v_{i}}/v_{0}. \label{PartialVolumeFraction}% \end{equation} However, as shown in \cite{RaneGuj2003}, the error is not significant except near $x=0$ or $x=1$. Since the calculation of $\overline{v_{i}}$ is somewhat tedious, we will continue to use (\ref{VolumeFraction}) for $\varphi_{i}$ in our calculation, as we are mostly interested in $x=0.5$. \subsubsection{Beyond Isometric RMA} Beyond isometric RMA, the energy of mixing will not have the above form in (\ref{RMA_EnergyMix}); rather, it will be related to the energetic effective chi introduced in (\ref{effective_EChi}) \cite{Gujrati1998,Gujrati2003} in exactly the same form as above:% \[ \Delta E_{\text{M}}\equiv(\chi_{\text{eff}}^{\text{E}}\phi_{\text{m}}% ^{2}/\beta v_{0})\varphi_{1}\varphi_{2}, \] which ties the concept of cohesive energy density intimately with that of the effective chi, as noted earlier. However, it is also clear that $c_{12}$ and $\chi_{\text{eff}}^{\text{E}}$ are not directly proportional to each other. \ \subsection{van Laar-Hildebrand $c_{12}$} Using (\ref{cohesive_defL}) for pure component cohesive energies, we obtain \begin{equation} c_{12}v_{0}=(e_{12}/2)\phi_{12}-e_{11}[(\phi_{11}-V_{1}^{\text{P}}/V\phi _{11}^{\text{P}})/2\varphi_{1}\varphi_{2}-\phi_{11}^{\text{P}}]-e_{22}% [(\phi_{22}-V_{2}^{\text{P}}/V\phi_{22}^{\text{P}})/2\varphi_{1}\varphi _{2}-\phi_{22}^{\text{P}}] \label{Mutual_Cohesive_Density}% \end{equation} as the general expression for the cohesive energy density. We will assume that $\varphi_{i}$ are as given in (\ref{VolumeFraction}). It is clear that the definition of $c_{12}$ given in (\ref{mixEnergy1}) is such that it not only depends on the state of the mixture, but also depends on pure component states. This is an unwanted feature. 
In particular, $c_{12}$ will show a discontinuity if a pure component undergoes a phase change (see Fig. \ref{F22} later), even though the mixture does not. In Fig. \ref{F12}, we show $c_{12}$ for a 50-50 blend ($M=100$) at $1.0$ atm as a function of $t_{\text{C}}$ calculated for different values of $q$ from $q=6$ to $q=14.$ The curvature of $c_{12}$ gradually changes at low temperatures from concave upwards to downwards. As not only the magnitude but also the shape of $c_{12}$ changes with $q$, $c_{12}$ is not simply proportional to $q$; it is a complicated function of it. This is easily seen from the values of $c_{12}/q$ at $t_{\text{C}}=0^{\circ}$C, which are found to increase with $q.$ It is about $4.0$ at $q=6$ and increases to about $6.4$ at $q=14.$ The temperature at which $c_{12}$ asymptotically becomes very small, say $0.1$ MPa, occurs at higher and higher values as $q$ increases. This is not surprising. We expect the cohesion to increase with $q$ at a given temperature. \subsection{Isometric Mixing:\ EDIM and DDIM} The volume of mixing is defined as \[ \Delta V_{\text{M}}\equiv V-V_{1}^{\text{P}}-V_{2}^{\text{P}}. \] Using (\ref{volumeRatios}), we find that the volume of mixing per unit volume ($\Delta v_{\text{M}}\equiv\Delta V_{\text{M}}/V$), and per monomer ($\Delta\widehat{v}_{\text{M}}\equiv\Delta V_{\text{M}}/N_{\text{m}}$) are% \begin{subequations} \begin{align} \Delta v_{\text{M}} & \equiv\phi_{\text{m}}[1/\phi_{\text{m}}-x/\phi _{\text{m}1}^{\text{P}}-(1-x)/\phi_{\text{m}2}^{\text{P}}% ],\label{MixingVolume}\\ \Delta\widehat{v}_{\text{M}} & \equiv(1/\phi_{\text{m}}-x/\phi_{\text{m}% 1}^{\text{P}}-(1-x)/\phi_{\text{m}2}^{\text{P}})v_{0}. \label{MixingVolume1}% \end{align} One of the conditions for RST to be valid is that this quantity be zero (isometric mixing). 
The condition (\ref{monomer_DenEq}), which ensures isometric mixing at all $x$, is much stronger than the isometric mixing requirement at a given fixed value of $x$. There are situations in which mixing is isometric at a given $x$, but (\ref{monomer_DenEq}) is not satisfied. For given pure component monomer densities $\phi_{\text{m}% 1}^{\text{P}}$ and $\phi_{\text{m}2}^{\text{P}}$ that are to be mixed at a given composition $x,$ we must choose the mixture density $\phi_{\text{m}}$ to satisfy \end{subequations} \begin{equation} 1/\phi_{\text{m}}=x/\phi_{\text{m}1}^{\text{P}}+(1-x)/\phi_{\text{m}% 2}^{\text{P}} \label{Mixture_Den_Def}% \end{equation} in order to make the mixing isometric; see (\ref{MixingVolume}). In this case, (\ref{mixEnergy1}) will not hold true despite mixing being isometric. In our calculations, we will consider both ways of ensuring isometric mixing. We will call the mixing method satisfying (\ref{monomer_DenEq}) \emph{equal density isometric mixing} (EDIM), and the mixing method satisfying (\ref{Mixture_Den_Def}) \emph{different density isometric mixing} (DDIM). For most of our computation, we fix the composition $x.$ Almost all of our results are for a 50-50 mixture. We will consider both isometric mixing processes noted above in our calculations. A variety of processes can be considered for each mixing. In order to make calculations feasible, we need to restrict the processes to a few selected ones. We have decided to investigate the following processes with the hope that they are sufficient to illuminate the complex behavior of cohesive energy densities and their usefulness. \subsubsection{(Isometric) Isochoric Process} The process should be properly called an isometric isochoric process, but we will use the term isochoric process in short in this work. The volume of the mixture is kept fixed as the temperature is varied. The energy of mixing can be calculated for a variety of mixing processes. 
We have decided to restrict this to isometric mixing. We calculate the mixture's monomer density $\phi_{\text{m}}$ at each temperature. For each temperature, we use (\ref{monomer_DenEq}) to determine the pure component monomer densities to ensure isometric mixing for the selected $x$. \subsubsection{(Isobaric) EDIM Process} The process should be properly called an isobaric EDIM process, but we will use the term EDIM process in short in this work. We keep the mixture at a fixed pressure, which is usually $1.0$ atm, and calculate its monomer density $\phi_{\text{m}}$ at each temperature. We then use EDIM to ensure isometric mixing at each temperature and calculate the energy of mixing. In the process, the mixture's volume keeps changing, and the pure component pressures need not be at the mixture's fixed pressure. Thus, the mixing is not at constant pressure, even though the mixture's pressure is constant. \subsubsection{DDIM\ Process} For DDIM, we keep the pressures of the pure components fixed, which is usually $1.0$ atm$,$ and calculate $\phi_{\text{m}1}^{\text{P}}$ and $\phi _{\text{m}2}^{\text{P}},$ from which we calculate $\phi_{\text{m}}$ using (\ref{Mixture_Den_Def}) for the selected $x$. This ensures isometric mixing, but again the mixing process is not a constant pressure one since the mixture need not have the same fixed pressure of the pure components. Even though the energy of mixing is calculated for isometric mixing, it is calculated for neither an isobaric nor an isochoric process. All the above three mixing processes correspond to isometric mixing at each temperature. Thus, we can compare the calculated van Laar-Hildebrand $c_{12}$ for these three processes to assess the importance of isometric mixing on $c_{12}$. It is not easy to consider EDIM for an isochoric (constant $V$) process. Therefore, we have not investigated it in this work. 
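The DDIM construction is easy to state algorithmically: pick the mixture density from (\ref{Mixture_Den_Def}) and the volume of mixing (\ref{MixingVolume}) vanishes identically. The sketch below (illustrative only; the density values are made-up numbers, not results from our theory) makes this explicit:

```python
def ddim_mixture_density(phi1_pure, phi2_pure, x):
    """Mixture monomer density that makes mixing isometric (DDIM),
    per 1/phi_m = x/phi_m1^P + (1-x)/phi_m2^P."""
    return 1.0 / (x / phi1_pure + (1.0 - x) / phi2_pure)

def volume_of_mixing_per_volume(phi_m, phi1_pure, phi2_pure, x):
    """Delta v_M = phi_m * (1/phi_m - x/phi_m1^P - (1-x)/phi_m2^P)."""
    return phi_m * (1.0 / phi_m - x / phi1_pure - (1.0 - x) / phi2_pure)

# Hypothetical pure-component monomer densities at a 50-50 composition.
phi1_pure, phi2_pure, x = 0.95, 0.90, 0.5
phi_m = ddim_mixture_density(phi1_pure, phi2_pure, x)
print(phi_m, volume_of_mixing_per_volume(phi_m, phi1_pure, phi2_pure, x))
```

The resulting $\phi_{\text{m}}$ lies between the two pure component densities, and $\Delta v_{\text{M}}$ is zero to machine precision; for EDIM one would instead set both pure component densities equal to $\phi_{\text{m}}$ as in (\ref{monomer_DenEq}).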
\subsubsection{Isobaric Process} In this process, the pressure of the mixture is kept constant as the temperature is varied. The calculation of the energy of mixing can be carried out for a variety of mixing processes. We will restrict ourselves to mixing at constant pressure so that the pure components also have the same pressure as the mixture at all temperatures. Thus, the volume of mixing will not be zero in this case. The process is properly described as a constant-pressure mixing isobaric process, but we will use the term isobaric process in short in this work.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F13.ps}% \caption{Energy of mixing and mutual cohesive energy density as a function of composition $x$. We take $M_{1}=10,M_{2}=100$, $e_{11}=e_{22}=e_{12}% =-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=14.$ The pressure and temperature are fixed at $1.0$ atm and $300^{\circ}$C, respectively.}% \label{F15}% \end{center} \end{figure} \subsection{Results} \subsubsection{Size Disparity Effects} The effects of size disparity alone are presented in Fig. \ref{F15}, where we show the energy of mixing as a function of $x=\phi_{\text{m}1}/\phi_{\text{m}% }$ for a blend ($M_{1}=10,M_{2}=100$) with $e_{11}=e_{22}=e_{12}% =-2.6\times10^{-21}$ J, so that $\varepsilon_{12}=0.$ Thus, energetically, there is no preference. We take $q=14,$ and the pressure and temperature are fixed at $1.0$ atm and $300^{\circ}$C, respectively. We show isobaric and DDIM results. While the energy of mixing is negative for the isobaric case, which does not correspond to isometric mixing, it is positive everywhere for DDIM, which does correspond to isometric mixing. 
This result should be contrasted with the Scatchard-Hildebrand conjecture (\ref{mixEnergy0}) \cite{Hildebrand}, whose justification requires not only isometric mixing but also the London-Berthelot conjecture (\ref{london_berthelot_Conj}). We also show corresponding $c_{12}$, which weakly changes with $x.$ For $M_{1}=M_{2}=100,$ the energy and the volume of mixing vanish, which is not a surprising result as we have a symmetric blend (both components identical in size and interaction). It is important to understand the significance of the difference in the behavior of $\Delta E_{\text{M}}$\ for the two processes in Fig. \ref{F15}. From Fig. \ref{F7}, we find that the solubility parameter $\delta$\ at $1.0$\ atm and $25^{\circ}$C is about 9.844 and 9.905 (MPa)$^{1/2}$ for $M=100$ and $10,$ respectively. Thus, assuming the validity of the Scatchard-Hildebrand conjecture (\ref{mixEnergy0}), we estimate $\Delta E_{\text{M}}$\ $\cong0.9$ kPa at equal composition ($x=1/2$). However, a correction for the temperature difference needs to be made, since the results in Fig. \ref{F15} are for $t_{\text{C}}=300^{\circ}$C. From the inset in Fig. \ref{F2}, we observe that $c^{\text{P}}$ decreases by almost a factor of 2/3, while their difference has increased at $t_{\text{C}}=300^{\circ}$C relative to $t_{\text{C}}=25^{\circ}$C. What one finds is that the corrected $\Delta E_{\text{M}}$ is not far from the DDIM $\Delta E_{\text{M}}$ in Fig. \ref{F15} at $x=1/2,$\ but has no relationship to the isobaric $\Delta E_{\text{M}},$ which not only is negative but also has a much larger magnitude$.$ It should be noted that the pressures of the pure components in both calculations reported in Fig. \ref{F15} are the same: $1.0$ atm. Under this condition, the Scatchard-Hildebrand conjecture (\ref{mixEnergy0}) cannot differentiate between different processes as $\delta_{1}$ and $\delta_{2}$ are unchanged. But this is most certainly not the case in Fig. \ref{F15}. 
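The estimate quoted above is simple arithmetic. Assuming (\ref{mixEnergy0}) has the usual Scatchard-Hildebrand form $\Delta E_{\text{M}}=(\delta_{1}-\delta_{2})^{2}\varphi_{1}\varphi_{2},$ the values read off Fig. \ref{F7} give:

```python
# Solubility parameters at 1.0 atm and 25 C read off Fig. F7, in (MPa)^(1/2).
delta_M100 = 9.844
delta_M10 = 9.905

# Scatchard-Hildebrand estimate at equal composition (x = 1/2).
phi1 = phi2 = 0.5
delta_E_mix = (delta_M10 - delta_M100) ** 2 * phi1 * phi2  # in MPa
print(delta_E_mix * 1e3, "kPa")  # ~0.9 kPa, as quoted in the text
```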
For a symmetric blend, $\Delta E_{\text{M}}=0$ even in an exact theory. Since the symmetry requires $\delta_{1}=\delta_{2},$ we find that the London-Berthelot conjecture (\ref{london_berthelot_Conj}) is satisfied. Thus, the violation of the Scatchard-Hildebrand conjecture we observe in Fig. \ref{F15} is due to non-random mixing caused by size disparity. \subsubsection{Interaction Disparity Effects} In Fig. \ref{F16}, we fix $e_{11}=-2.6\times10^{-21}$ J, and plot the energy of mixing as a function of $(-e_{22})$ for a blend with $M_{1}=M_{2}=100.$ Thus, there is no size disparity but the energy disparity is present except when $e_{11}=e_{22}.$ We consider not only the isobaric and DDIM processes but also the EDIM process. At $e_{11}=e_{22}$, we have a symmetric blend; hence, $\Delta E_{\text{M}}=0$ and $c_{11}^{\text{P}}=c_{22}^{\text{P}}$ for all the processes. Correspondingly, we have $c_{12}=c_{11}^{\text{P}}$ or\ $c_{22}^{\text{P}},$ and the correction $l_{12}=0$ for all the processes$.$ Away from this point, $\Delta E_{\text{M}}$ for the three processes are different, but remain non-negative. The energy disparity has produced much larger magnitudes of $\Delta E_{\text{M}}$\ than the size disparity alone; compare with Fig. \ref{F15}. This difference in the magnitudes of $\Delta E_{\text{M}}$ is reflected in the magnitudes of $c_{12},$ as shown in the figure. We again find that isobaric and DDIM processes are now quite different; compare the magnitudes of $\Delta E_{\text{M}}$ in Figs. \ref{F15} and \ref{F16}. The difference between DDIM and EDIM, though relatively small, is still present, again proving that isometric mixing alone is not sufficient to validate the Scatchard-Hildebrand conjecture. [We note that $\Delta E_{\text{M}}$ can become negative (results not shown) if we add size disparity in addition to the interaction disparity.] 
The non-negative $\Delta E_{\text{M}}$ is due to the absence of size disparity, and is in accordance with the Scatchard-Hildebrand conjecture. Since $\Delta E_{\text{M}}$\ is the highest for DDIM, the corresponding $c_{12}$ is the lowest. Similarly, the isobaric energy of mixing is usually the lowest, and the corresponding $c_{12}$ usually the highest. We note that all $c_{12}$'s continue to increase with $\left\vert e_{22}\right\vert .$ We observe that isobaric and EDIM $c_{12}$'s are closer to each other, and both are higher than DDIM\ $c_{12}$. In the inset, we also show $l_{12},$ and we learn that it is not a small correction to the London-Berthelot conjecture (\ref{london_berthelot_Conj}) for the isobaric case (non-isometric mixing). For isometric mixing, we have the usual behavior: a small correction $l_{12}.$% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F14.ps}% \caption{Energy of mixing, mutual cohesive energy density, and correction $l_{12}$ as a function of \ $\left\vert e_{22}\right\vert $ for a 50-50 blend. We take $M_{1}=M_{2}=100$, $e_{11}=-2.6\times10^{-21}$ J, $v_{0}% =1.6\times10^{-28}$ m$^{\text{3}}$ and $q=14.$ The pressure and temperature are fixed at $1.0$ atm and $300^{\circ}$C, respectively.}% \label{F16}% \end{center} \end{figure} \subsubsection{Variation with $P$} We plot $\Delta E_{\text{M}},$ $c_{12},$ and $l_{12}$\ as a function of $P$ in Fig. \ref{F17} for isobaric, DDIM, and EDIM processes. We note that all these quantities have a weak but monotonic dependence on $P$ over the range considered$.$ Again, the isobaric $\Delta E_{\text{M}}$ remains the lowest, and consequently isobaric $c_{12}$\ remains the highest over the range considered. As noted above, isobaric and EDIM $c_{12}$'s are closer to each other, but different from DDIM $c_{12}$ over the entire range in Fig. \ref{F17}. 
The monotonic behavior in $P$ is correlated with a monotonic behavior in $l_{12}.$ We observe that $l_{12}$ provides a small correction at $300^{\circ}$C over the pressure range considered here. The interesting observation is that isobaric $l_{12}$ provides the biggest correction, while DDIM $l_{12}$\ the smallest correction. However, the EDIM $l_{12}$\ remains intermediate, just as EDIM $c_{12}$ is.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F15.ps}% \caption{Energy of mixing, mutual cohesive energy density, and correction $l_{12}$ as a function of \ pressure for a 50-50 blend. We take $M_{1}% =M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=14.$ The temperature is fixed at $300^{\circ}$C. }% \label{F17}% \end{center} \end{figure} \subsubsection{Variation with $T$} We plot $\Delta E_{\text{M}}$ and $l_{12}$\ as a function of $T$ in Fig. \ref{F18} for isobaric, DDIM, EDIM processes along with the isochoric process. The corresponding $c_{12}$'s, all originating around 90 MPa at $-100^{\circ}$C as a function of $T,$ are shown in Fig. \ref{F18-1}. All processes start from the same state at the lowest temperature ($-100^{\circ}$C) in the figure. There is a complicated dependence in $\Delta E_{\text{M}}$ on $T$ over the range considered for some of the processes. Let us first consider the isochoric process in which all quantities show almost no dependence on $T.$ The energy of mixing remains positive in accordance with the London-Berthelot conjecture (\ref{london_berthelot_Conj}). This is further confirmed by an almost constant $c_{12},$ and an almost vanishing correction $l_{12}$ in the inset. Both isobaric and EDIM processes give rise to negative $\Delta E_{\text{M}},$ thereby violating the London-Berthelot conjecture. 
The DDIM $\Delta E_{\text{M}}$\ shows a peak, whose magnitude is about eight times its value at the lowest temperature, but remains positive throughout. The corresponding $c_{12}$ shows a continuous decrease to zero with temperature for isobaric, DDIM, and EDIM processes. However, it is almost a constant for the isochoric process. The non-monotonic behavior in temperature of $\Delta E_{\text{M}}$ is correlated with a similar behavior in $l_{12}.$ This behavior is further studied in the next section. From Fig. \ref{F18}, we observe that $l_{12}$ provides a small correction at $300^{\circ}$C. However, at much higher temperatures, it is no longer a small quantity, and depends strongly on the way the mixture is prepared, even if it remains isometric. In particular, isobaric $l_{12}$ seems to provide the biggest correction. It is almost constant and insignificant for the isochoric and EDIM processes.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F16.ps}% \caption{Energy of mixing, mutual cohesive energy density, and correction $l_{12}$ as a function of \ temperature for a 50-50 blend. We take $M_{1}=M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=14.$ The pressure is fixed at $1.0$ atm. }% \label{F18}% \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F17.ps}% \caption{Different mutual pressures as a function of temperature; system as described in Fig. \ref{F18}.}% \label{F18-1}% \end{center} \end{figure} \subsection{Using Internal Pressure} We now introduce a new quantity as another measure of the mutual cohesive pressure by using the internal pressure. To this end, we consider the internal pressure $P_{\text{IN}}$ in the RMA limit. 
We find from (\ref{RMA3}) that \[ P_{\text{IN}}v_{0}\ ^{\underrightarrow{\text{RMA}}}\ q[-e_{11}\phi_{\text{m}% 1}^{2}/2-e_{22}\phi_{\text{m}2}^{2}/2-e_{12}\phi_{\text{m}1}\phi_{\text{m}% 2}], \] which can be expressed as% \[ P_{\text{IN}}v_{0}{}\ ^{\underrightarrow{\text{RMA}}}\ c_{11}^{\text{P}}% x^{2}\phi_{\text{m}}^{2}/\phi_{\text{m}1}^{\text{P}2}+c_{22}^{\text{P}% }(1-x)^{2}\phi_{\text{m}}^{2}/\phi_{\text{m}2}^{\text{P}2}+2c_{12}x(1-x), \] where we have used (\ref{RMA_CohesiveDensities}) in the RMA limit. Now we take this equation as a guide to define a new mutual cohesive energy density, the \emph{mutual internal pressure} $P_{\text{IN}}^{\text{M}}$ in the general case as% \[ P_{\text{IN}}^{\text{M}}=[P_{\text{IN}}-c_{11}^{\text{P}}x^{2}\phi_{\text{m}% }^{2}/\phi_{\text{m}1}^{\text{P}2}-c_{22}^{\text{P}}(1-x)^{2}\phi_{\text{m}% }^{2}/\phi_{\text{m}2}^{\text{P}2}]/2x(1-x). \] In the RMA\ limit, $P_{\text{IN}}^{\text{M}}$ reduces to the RMA $c_{12}$ given in (\ref{RMA_CohesiveDensities}). We show $P_{\text{IN}}^{\text{M}}$ in the inset in Figs. \ref{F15}, \ref{F17}, and \ref{F18-1}. What we discover is that isochoric $P_{\text{IN}}^{\text{M}}$ is almost constant as a function of temperature, at around 120 MPa, and is larger than isochoric $c_{12},$ which is around 90 MPa. Indeed, over most of the temperatures at the lower end, we find that $P_{\text{IN}}^{\text{M}}$ $>$ $c_{12}$. However, the main conclusion is that both $c_{12}$ and $P_{\text{IN}}^{\text{M}}$ are monotonically decreasing or almost constant, and behave identically except for their magnitudes. \section{New Approach using SRS:\ Self-Interacting Reference State} Unfortunately, the van Laar-Hildebrand cohesive energy $c_{12}$ does not have the required property of vanishing with $e_{12},$ see (\ref{RMA_CohesiveDensities}). 
The unwanted behavior of $c_{12}$ is due to its definition in terms of the energy of mixing, from which we need to subtract the pure component (for which $e_{12}$ may be thought to be zero) cohesive energies $c_{ii}^{\text{P}}.$ The subtracted quantity is used to define $c_{12},$ and if this definition is to have any physical significance, it should vanish in the hypothetical state, which we have earlier labelled SRS, in which $e_{12}$ vanishes even though $e_{11}$ and $e_{22}$ are non-zero. The hypothetical state obviously violates the London condition (\ref{london_Conj}), even though the real mixture does not. Let us demand that the subtracted quantity vanish for SRS$.$ To appreciate this point, consider (\ref{mixEnergyL}) for SRS. It is clear that $\Delta E_{\text{M}}$ continues to depend on the thermodynamic state of the mixture via $\phi_{ii};$ this quantity in general will not be equal to the pure component quantity $\phi _{ii}^{\text{P}}$. With the use of (\ref{volumeRatios}) in (\ref{mixEnergyL}), we find that \begin{equation} \Delta E_{\text{M}}^{\text{SRS}}v_{0}=e_{11}[\phi_{11}-x\phi_{\text{m}}% \phi_{11}^{\text{P}}/\phi_{\text{m}1}^{\text{P}}]+e_{22}[\phi_{22}% -(1-x)\phi_{\text{m}}\phi_{22}^{\text{P}}/\phi_{\text{m}2}^{\text{P}}], \label{SIRS_EnergyofMixing0}% \end{equation} which is usually going to depend on the process of mixing. On the other hand, from (\ref{mixEnergy1}), we observe that \begin{equation} \Delta E_{\text{M}}^{\text{SRS}}\overset{c_{12}=0}{=}(c_{11}^{\text{P}}% +c_{22}^{\text{P}})\varphi_{1}\varphi_{2}, \label{SIRS_EnergyofMixing1}% \end{equation} if $c_{12}$ were zero. In this case, $\Delta E_{\text{M}}$ would depend only on pure component quantities $c_{ii}^{\text{P}},$ and its behavior in a given process at fixed composition should be controlled by the behavior of $c_{ii}% ^{\text{P}}$ in that process. This is not the case, as can be seen in Fig. \ref{F19}, where we plot $\Delta E_{\text{M}}$ for the hypothetical state SRS for a 50-50 mixture.
We have ensured that the SRS state at the initial temperature in Fig. \ref{F19} is the same in the isobaric and isochoric processes. But the pure components are slightly different. A new process is also considered in Fig. \ref{F19}, in which we set the volume $V_{\text{SRS}}$ of the hypothetical SRS at a given temperature to be equal to the volume $V$ of the real mixture (nonzero $e_{12}$) at $1.0$ atm at that temperature. This process is labelled isobaric ($V_{\text{SRS}}=V$) in the figure. The pure components are also at $1.0$ atm for this process. Since the pure components for this process are the same as in the isobaric calculation, (\ref{SIRS_EnergyofMixing1}) requires $\Delta E_{\text{M}}$ for the two processes to be identical at all temperatures. This is evidently not the case. Consider Fig. \ref{F11}. From this, we see that the $c_{ii}^{\text{P}}$ are almost constant with $T$ for the isochoric case. Thus, according to (\ref{SIRS_EnergyofMixing1}), $\Delta E_{\text{M}}$ should be almost a constant, which is not the case. For the isobaric case, the $c_{ii}^{\text{P}}$ are monotonically decreasing with $T$, which would then make $\Delta E_{\text{M}}$, according to (\ref{SIRS_EnergyofMixing1}), monotonically decreasing with $T$; this is also not the case. Hence, we conclude that the van Laar-Hildebrand $c_{12}$ does not vanish for SRS. A similar unwanted feature is also present in the behavior of $P_{\text{IN}% }^{\text{M}}$ introduced above, for the same reason: it also does not vanish for SRS. It is disconcerting that the van Laar-Hildebrand $c_{12}(e_{12}=0)$ does not vanish for SRS, even though the mutual interaction energy $e_{12}=0.$ This behavior is not hard to understand.
The mixture energy is controlled by the process of mixing, since the mixture state varies with the process of mixing even at $e_{12}=0$.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F18.ps}% \caption{$\Delta E_{\text{M}}$ for SRS as a function of temperature for a 50-50 blend. We take $M_{1}=M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=10.$ The pressure is fixed at $1.0$ atm. }% \label{F19}% \end{center} \end{figure} \subsection{Mutual Energy of Interaction $c_{12}^{\text{SRS}}$} To overcome this shortcoming, we introduce a new measure of the cohesive energy that has the desired property of vanishing with $e_{12}.$ Let $E$ denote the energy per unit volume of the mixture. We will follow \cite{RaneGuj2005}, and compare it with that of the hypothetical reference state SRS$.$ Its energy per unit volume $E_{\text{SRS}}$ differs from $E$ due to the absence of the mutual interaction between the two components. Let $V_{\text{SRS}}$ denote the volume of the SRS. In general, we have% \[ V_{\text{SRS}}/V=\phi_{\text{m}}/\phi_{\text{m,SRS}}. \] The difference \[ E_{\text{int}}^{(\text{M})}\equiv E-(V_{\text{SRS}}/V)E_{\text{SRS}}% \] represents the mutual energy of interaction per unit volume due to $1$-$2$ contacts. From (\ref{energy_Def}), we obtain \[ E_{\text{int}}^{(\text{M})}v_{0}=e_{11}[\phi_{11}-(V_{\text{SRS}}% /V)\phi_{11,\text{SRS}}]+e_{22}[\phi_{22}-(V_{\text{SRS}}/V)\phi _{22,\text{SRS}}]+e_{12}\phi_{12}, \] where the contact densities $\phi_{ij}$ without the SRS subscript are for the mixture state, and those with it are for the SRS state. These densities are evidently different in the two states but approach each other as $e_{12}\rightarrow0$.
The above excess energy should determine the mutual cohesive energy density, which we will denote by $c_{12}^{\text{SRS}}$ in the following to differentiate it from the van Laar-Hildebrand cohesive energy density $c_{12}$. It should be evident that $E_{\text{int}}^{\text{M}}$ vanishes as $e_{12}$ vanishes, by its very definition.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F19.ps}% \caption{Void density and volume of mixing as a function of temperature for a 50-50 blend. We take $M_{1}=M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=10.$ The pressure is fixed at $1.0$ atm. }% \label{F14}% \end{center} \end{figure} The absence of mutual interaction in SRS compared to the mixture causes the SRS volume to expand relative to the mixture. This is shown in Fig. \ref{F14}, in which the void density in SRS is larger than that in the mixture. The relative volume of mixing at constant pressure ($1.0$ atm) for the mixture is shown in the upper inset, and that for SRS is shown in the lower inset. It is clear that the latter relative volume of mixing is always positive, indicating an effective repulsion between the two components. This should come as no surprise, since the excess interaction $\varepsilon_{12}$ for SRS from (\ref{excess_E}) is% \[ \varepsilon_{12}^{\text{SRS}}=-(e_{11}+e_{22})/2>0, \] which has the value $2.4\times10^{-21}$ J in this case. Thus, it represents a much stronger mutual repulsion than that due to $\varepsilon _{12}\cong0.01\times10^{-21}$ J in the mixture. The effect of adding the mutual interaction $e_{12}$ to SRS is to add mutual ``attraction'' that results in cohesion, and in the reduction of volume.
Thus, the change in the volume can also be taken as a measure of cohesion, as we will discuss below.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F20.ps}% \caption{Mutual cohesive energy density, and correction $l_{12}$ using the SRS as a function of temperature for a 50-50 blend. We take $M_{1}=M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}% =1.6\times10^{-28}$ m$^{\text{3}}$ and $q=10.$ The pressure is fixed at $1.0$ atm. }% \label{F20}% \end{center} \end{figure} \subsubsection{RMA Limit of $E_{\text{int}}^{(\text{M})}$} In the RMA limit, along with the monomer equality assumption (\ref{monomer_DenEq}), which implies that $\phi_{\text{m}}=\phi_{\text{m,SRS}% },$ it is easily seen that \[ E_{\text{int}}^{(\text{M})}v_{0}{}^{\underrightarrow{\text{RMA}}}qe_{12}% \phi_{\text{m}1}\phi_{\text{m}2}=qe_{12}\phi_{\text{m}1}^{\text{P}}% \phi_{\text{m}2}^{\text{P}}x(1-x). \] As a consequence, $E_{\text{int}}^{(\text{M})}/2x(1-x)$ reduces to the RMA value of $(-c_{12})$ given in (\ref{RMA_CohesiveDensities}). This means that the quantity $c_{12}^{\text{SRS}}$ defined via% \begin{equation} c_{12}^{\text{SRS}}\equiv-E_{\text{int}}^{(\text{M})}/2\varphi_{1}\varphi_{2} \label{cohesiveIRS}% \end{equation} has the required property that it not only reduces to the correct RMA value of the van Laar-Hildebrand $c_{12},$ but also vanishes with $e_{12}$. [As usual, we assume the identification (\ref{VolumeFraction}).] \subsubsection{New Mutual Cohesive Energy Density: $c_{12}^{\text{SRS}}$} We now take (\ref{cohesiveIRS}) as the general definition of a more suitable quantity to play the role of the cohesive energy density. This quantity is a true measure of the effect produced by the mutual interaction energy, and it also vanishes with $e_{12}.$ Away from the RMA limit, $c_{12}$ and $c_{12}% ^{\text{SRS}}$ are not going to be the same.
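For orientation, the common RMA value can be written out explicitly: using (\ref{cohesiveIRS}), the RMA limit of $E_{\text{int}}^{(\text{M})}$ above, and the identification (\ref{VolumeFraction}), so that $2\varphi_{1}\varphi_{2}$ becomes $2x(1-x)$, the composition factors cancel:%
\[
c_{12}^{\text{SRS}}v_{0}\ ^{\underrightarrow{\text{RMA}}}\ -\frac{qe_{12}%
\phi_{\text{m}1}^{\text{P}}\phi_{\text{m}2}^{\text{P}}x(1-x)}{2x(1-x)}%
=-\frac{1}{2}qe_{12}\phi_{\text{m}1}^{\text{P}}\phi_{\text{m}2}^{\text{P}},
\]
which makes it manifest that $c_{12}^{\text{SRS}}$ is positive for an attractive $e_{12}<0$ and vanishes linearly with $e_{12}$ in this limit.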
It is evident that $c_{12}% ^{\text{SRS}}$ depends not only on $T,P,$ or $T,V,$ but also on the composition $x\equiv\phi_{\text{m}1}/\phi_{\text{m}},$ $q$, and the energies $e_{ij}.$ For isochoric calculations, we will ensure that the mixture and SRS have the same monomer density $\phi_{\text{m}}.$ In this case, $V_{\text{SRS}% }=V.$ For isobaric calculations, we will, as usual, ensure that they have the same pressure, so that the two volumes need not be the same. However, we will also consider $V_{\text{SRS}}=V$ for isobaric calculations to see the effect of this on $c_{12}^{\text{SRS}}.$ This process is what we have labelled isobaric ($V_{\text{SRS}}=V$) in Fig. \ref{F19}. We show $c_{12}% ^{\text{SRS}}$ for various processes in Fig. \ref{F20}. We see that it continues to increase monotonically for the isochoric case, and reaches an asymptotic value of about 50 MPa, while the van Laar-Hildebrand $c_{12}$ in Fig. \ref{F18} is almost constant at about 90 MPa. An increase in solubility with temperature at constant volume is thus captured by $c_{12}^{\text{SRS}}$ but not by $c_{12}.$ The behavior of the two quantities for the isobaric case is also profoundly different. While $c_{12}$ monotonically decreases with temperature, $c_{12}^{\text{SRS}}$ is most certainly not monotonic. It goes through a maximum around $400^{\circ}$C before continuing to decrease. This suggests that the solubility increases before decreasing. This should come as no surprise, as we explain in the following section. Note that while the definition of $c_{12}^{\text{SRS}}$ is independent of any approximate theory, the definition of $c_{12}$ depends on a form (\ref{mixEnergy1}) whose validity is questionable, as it depends on RST. It should be noted that the value of $c_{12}^{\text{SRS}}$ truly represents the energy change that occurs (in SRS) due to the presence of the mutual interaction between the two components.
The changes brought about due to $e_{12}\neq0$ should not be confused with the changes that occur when the two pure components are mixed ($e_{12}\neq0$), because the latter involves not only the process of mixing when $e_{12}=0$ (this gives rise to SRS) but also the rearrangements of the two components due to the mutual interaction. It is the latter rearrangement due to the mutual interaction that determines $c_{12}^{\text{SRS}}.$ This is easily appreciated by studying the void density profiles for SRS and the mixture in Fig. \ref{F14}. As usual, $e_{12}$ for the mixture is defined by (\ref{london_Conj}).~The hypothetical process of mixing (under the condition $e_{12}=0$) also contributes to the energy of mixing, since the energy of this state is not the same as the sum of the two pure systems' energies, because SRS is affected by mixing the pure components. This is easily seen by computing the energy of mixing $\Delta E_{\text{M,SRS}}$ for SRS, for which $e_{12}=0$ but not $e_{11},e_{22}$:% \[ V_{\text{SRS}}\Delta E_{\text{M,SRS}}\equiv V_{\text{SRS}}E_{\text{SRS}}% -V_{1}^{\text{P}}E_{1}^{\text{P}}-V_{2}^{\text{P}}E_{2}^{\text{P}}, \] where $V_{i}^{\text{P}},E_{i}^{\text{P}}$ are the volume and energy density of the pure $i$th component. This part determines the energy of mixing due to pure mixing at $e_{12}=0,$ but it has nothing to do with the mutual interaction $e_{12}.$ Thus, this contribution should not be allowed to determine in part the mutual cohesive energy density. It is easy to see that% \[ \Delta E_{\text{M}}\equiv E_{\text{int}}^{\text{M}}+(V_{\text{SRS}}/V)\Delta E_{\text{M,SRS}}, \] in which the second contribution is not a true measure of the mutual interaction. Nevertheless, it is considered a part of the van Laar-Hildebrand cohesive energy density.
Using (\ref{mixEnergy1}) and (\ref{cohesiveIRS}) in the above equation, we find that we can express $c_{12}^{\text{SRS}}$ in terms of the van Laar-Hildebrand cohesive energy densities as follows:% \begin{equation} c_{12}^{\text{SRS}}(e_{12})=c_{12}(e_{12})-c_{12}(e_{12}=0)+(1-\phi_{\text{m}% }/\phi_{\text{m,SRS}})(c_{11}^{\text{P}}+c_{22}^{\text{P}})/2. \label{IRS-Hilde_Rel}% \end{equation} Here, $c_{12}^{\text{SRS}}(e_{12})$ is the SRS-based $c_{12}^{\text{SRS}}$ introduced in (\ref{cohesiveIRS}), and $c_{12}(e_{12})$ is the energy-of-mixing based $c_{12}$ in (\ref{mixEnergy1}). In the special case $\phi_{\text{m}% }=\phi_{\text{m,SRS}},$ the last term vanishes, and the SRS-$c_{12}$ is simply the difference $c_{12}(e_{12})-c_{12}(e_{12}=0)$ of the two van Laar-Hildebrand cohesive energy densities. In any case, because of the above relation, studying the van Laar-Hildebrand $c_{12}$ also allows us to learn about the SRS-$c_{12}.$ Therefore, we have mostly investigated $c_{12}$ in the present work. However, it should be realized that we then also need the van Laar-Hildebrand $c_{12}$ for SRS, data for which are not available. Arguments similar to those presented above suggest that we can similarly use the difference $P_{\text{int}}-P_{\text{int,SRS}},$ where $P_{\text{int,SRS}}$ is the interaction pressure in SRS at the same volume $V$, as a measure of the cohesive pressure for the system. Alternatively, we can use SRS at the same pressure $P$ as the mixture, whose volume $V_{\text{SRS}}$ is different from $V$. We can then use the negative of the difference $V-V_{\text{SRS}}$ as a measure of cohesion. However, we will study these quantities elsewhere and not here. \subsection{Cohesion and Volume} We have argued above that the changes in the volume can also be taken as a measure of cohesion. We now follow this line of thought. The volume of mixing is governed by two factors, as discussed elsewhere \cite{Gujrati1998,Gujrati2003}: (i) the size disparity between the two components, and (ii) the interactions.
To disentangle the two contributions, we can consider the difference \[ \Delta_{\text{ath}}V\equiv V-V_{\text{ath}}, \] where $V_{\text{ath}}$ is the volume of the athermal system (no interaction) at the same $T,P$. (In the athermal limit, $\Delta V_{\text{M,ath}% }=V_{\text{ath}}-V_{1}^{\text{P}}-V_{2}^{\text{P}}$ will be identically zero if the two polymer components have identical sizes $\left( M_{1}% =M_{2}\right) $, and will be negative if they are different.) The difference $\Delta_{\text{ath}}V$ is governed only by the presence of interactions ($e_{11},e_{22},$ and $e_{12}$). A positive $\Delta_{\text{ath}}V$ will imply an effective repulsion, and a negative $\Delta_{\text{ath}}V$ will imply an effective attraction. Thus, $\Delta_{\text{ath}}V$ can also be used as a measure of the cohesiveness of the mixture. It is easy to see that \begin{equation} V_{\text{ath}}/V=\phi_{\text{m}}/\phi_{\text{m,ath}}=(1-\phi_{\text{0}% })/(1-\phi_{\text{0,ath}}), \label{Volume_ratio_ath}% \end{equation} in terms of the total monomer density or the void density. Using this, we can also calculate the difference $\Delta_{\text{ath}}v\equiv\Delta_{\text{ath}}V/V$ per unit volume in our theory; it is shown in Fig. \ref{F13} for an isobaric process at $1.0$ atm. We note that $\varepsilon_{01}=\varepsilon _{02}=\varepsilon_{12}=0$ ($e_{11}=e_{12}=e_{22}=0$) for the athermal state, while $\varepsilon_{01}=1.1\times10^{-21}$ J, $\varepsilon_{02}=1.3\times 10^{-21}$ J, and $\varepsilon_{12}\simeq0.01\times10^{-21}$ J ($e_{11}% =-2.2\times10^{-21}$ J$,$ $e_{22}=-2.6\times10^{-21}$ J$,$ $e_{12}\simeq -2.39\times10^{-21}$ J) for the mixture.
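As a quick arithmetic check on these numbers, using the London condition (\ref{london_Conj}) for $e_{12}$, and taking the excess interaction to be $\varepsilon_{12}=e_{12}-(e_{11}+e_{22})/2$, consistent with the SRS value $\varepsilon_{12}^{\text{SRS}}=-(e_{11}+e_{22})/2$ quoted earlier, we have%
\[
e_{12}=-\sqrt{e_{11}e_{22}}=-\sqrt{(2.2)(2.6)}\times10^{-21}\text{ J}%
\simeq-2.39\times10^{-21}\text{ J},
\]
\[
\varepsilon_{12}=e_{12}-(e_{11}+e_{22})/2\simeq(-2.39+2.40)\times
10^{-21}\text{ J}\simeq0.01\times10^{-21}\text{ J},
\]
in agreement with the values quoted above.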
Recall that we have decided to define $e_{12}$ for the mixture by (\ref{london_Conj}).~The difference $\Delta _{\text{ath}}v$ is obviously controlled by all three energies $e_{ij}$, and not just by $e_{12}.$ Since the mixture has attractive interactions ($e_{ij}<0$), its volume $V$ is smaller than $V_{\text{ath}},$ so that $\Delta_{\text{ath}}v<0,$ as seen in Fig. \ref{F13}. To determine the effect of only $e_{12},$ we use SRS in place of the athermal state and introduce the difference \[ \Delta_{\text{SRS}}v=1-(V_{\text{SRS}}/V), \] which is also shown in Fig. \ref{F13}. Here, the mixture is compared with the SRS state in which $\varepsilon_{01}=1.1\times10^{-21}$ J, $\varepsilon _{02}=1.3\times10^{-21}$ J, but $\varepsilon_{12}=2.4\times10^{-21}$ J ($e_{11}=-2.2\times10^{-21}$ J$,$ $e_{22}=-2.6\times10^{-21}$ J$,$ $e_{12}=0$). Since the mixture has an extra attractive interaction ($e_{12}<0$) relative to the SRS state ($e_{12}=0$), $V$ is smaller than $V_{\text{SRS}},$ so the negative value of $\Delta_{\text{SRS}}v$ should not be a surprise. The minimum in $\Delta_{\text{SRS}}v$ is due to the behavior of the void density, which is shown in Fig. \ref{F14} for SRS and the mixture. In the two insets, we also show the relative volume of mixing at constant pressure (1.0 atm) for SRS (lower inset) and the mixture (upper inset). We notice that below around $500^{\circ}$C, the free volume in SRS changes faster than in the mixture, while above it, the converse is the case. This changeover in the rate of change causes the dip and the minimum in $\Delta_{\text{SRS}}v.$ Similarly, the dip in $\Delta_{\text{ath}}v$ is caused by the changeover in the rate of change of the free volume in the mixture and the athermal state. The minimum in $\Delta_{\text{SRS}}v$, see Fig. \ref{F13}, also explains the intermediate maximum in the isobaric $c_{12}^{\text{SRS}}$ in Fig. \ref{F20}.
The behavior of $c_{12}^{\text{SRS}}$ shows that the mutual cohesiveness increases and then decreases with the temperature, and is merely a reflection of the way the void densities in Fig. \ref{F14} behave with the temperature.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F21.ps}% \caption{Volume differences $\Delta_{\text{ath}}v$ and $\Delta_{\text{SRS}}v$ for a 50-50 blend as a function of temperature. We have $M_{1}=M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}% =1.6\times10^{-28}$ m$^{\text{3}}$ and $q=10.$ The pressure is fixed at $1.0$ atm. }% \label{F13}% \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F22.ps}% \caption{The scaled quantity $\gamma_{12}$ for various processes. This quantity is very close to $1$ except at high temperatures, where one of the components seems to be near its boiling point. For the system considered, we have $M_{1}=M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=14.$ The pressure is fixed at $1.0$ atm.}% \label{F21}% \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.901917in 5.821965in 0.904361in 0.904527in, height=3.9435in, width=6.3693in ]% {F23.ps}% \caption{The scaled quantity $\gamma_{12}$ for various processes for a $50-50$ mixture. This quantity is very close to $1$ except at high temperatures, where the sharp discontinuity is due to one of its components undergoing boiling. The system has $M_{1}=100,$ $M_{2}=10$, $e_{11}=e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=14.$ The pressure is fixed at $1.0$ atm.
}% \label{F22}% \end{center} \end{figure} \section{More Numerical Results: $\gamma_{12}$ and $l_{12}$} We observe from (\ref{RMA_CohesiveDensities}) that the quantity \[ c_{12}v_{0}/(-qe_{12}\phi_{\text{m}}^{2}/2) \] approaches $1$ in the RMA limit. As we have seen earlier (compare Figs. \ref{F9} and \ref{F10}), the quantity $q/2$ in the RMA limit should be properly replaced by $(q/2-\nu)$ for finite $q.$ The new factor incorporates the correction due to end groups. Therefore, we introduce a new dimensionless quantity% \[ \gamma_{12}\equiv-c_{12}v_{0}/[(q/2-\nu)e_{12}\phi_{\text{m}}^{2}], \] and investigate how close it is to $1.$ Any deviation is due to nonrandom mixing and will provide a clue to its importance. In Fig. \ref{F21}, we plot $\gamma_{12}$ as a function of temperature for various processes. The coordination number is taken to be $q=14$, and the initial void density is $\phi_{0}=0.00031333.$ This system is identical to the one studied in Fig. \ref{F18}. For the isochoric process, $\gamma_{12}$ is almost a constant and very close to (but below) $1.$ It decreases gradually for EDIM and becomes about 0.92 at about $t_{\text{C}}=2000^{\circ}$C. The situation is very different for the isobaric and DDIM processes, where $\gamma_{12}$ shows an enhancement at a temperature near the boiling point of one of the pure components. To be convinced that the peak in $\gamma_{12}$ in Fig. \ref{F21} is indeed due to boiling, we plot it for $q=14$ in Fig. \ref{F22}. We see a discontinuity in $\gamma_{12}$ for all cases except for the isochoric case, as is evident from the inset (in which isobaric results are not shown). The discontinuity is due to the boiling transition, which occurs at different temperatures for the different cases. These discontinuities become rounded, as in Fig.
\ref{F21}, when we are close to the boiling transitions.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F24.ps}% \caption{Binary deviation $l_{12}$ and the relative volume of mixing in the inset for a $50-50$ mixture. For the system considered, we have $M_{1}=M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=6.$ }% \label{F23}% \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.902732in 5.820900in 0.903547in 0.903464in, height=3.9444in, width=6.3693in ]% {F25.ps}% \caption{Binary deviation $l_{12}$ and the relative volume of mixing in the inset for a $50-50$ mixture. The system considered has $M_{1}% =M_{2}=100$, $e_{11}=-2.2\times10^{-21}$ J, $e_{22}=-2.6\times10^{-21}$ J, $v_{0}=1.6\times10^{-28}$ m$^{\text{3}}$ and $q=14.$ }% \label{F24}% \end{center} \end{figure} We now turn our attention to the binary correction $l_{12},$ which measures the deviation from the London-Berthelot conjecture (\ref{london_berthelot_Conj}). The existence of this deviation, in the case when we choose $e_{12}$ to satisfy the London conjecture (\ref{london_Conj}), suggests that it is caused by thermodynamics, which makes the cohesive energies different from their respective van der Waals energies $e_{ij}$. Moreover, the behavior of $l_{12}$ also partially reflects the behavior of the pure components, since the definition of $l_{12}$ utilizes the pure component cohesive energies, see (\ref{Dev_l12}), even if $c_{12}$ is defined without any association with them, as is the case with $c_{12}% ^{\text{SRS}},$ whose definition is independent of the pure component quantities $c_{ii}^{\text{P}}$. Consider the inset in Fig. \ref{F16}, which shows the $l_{12}$ associated with $c_{12}.$ We observe that it is almost constant for the two cases (DDIM and EDIM) for which mixing is isometric.
However, it has a strong variation for the isobaric case, where mixing is not isometric. Thus, we see the first hint of a dependence on the volume of mixing in the behavior of $l_{12}.$ Such a dependence on the volume of mixing is also seen in the inset showing $l_{12}$ (corresponding to $c_{12})$ in Fig. \ref{F18}. We are struck by the presence of maxima and minima in the isobaric case. They appear at almost the same temperatures in both quantities; notice the location of the minima and maxima in the isobaric $l_{12}$ and the volume of mixing. To further illustrate the relationship between the isobaric $l_{12}$ and the volume of mixing $\Delta v_{\text{M}}$, we plot them as a function of the temperature in Figs. \ref{F23} and \ref{F24} for $q=6$ and $q=14,$ respectively, for several values of the pressure $P.$ The strong correlation between their maxima and minima is clearly evident. These figures illustrate that the binary correction $l_{12}$ for the isobaric process closely follows how the volume of mixing $\Delta v_{\text{M}}$ behaves. A negative $\Delta v_{\text{M}}$ is a consequence of the effective attraction between the mixing components, which corresponds to a larger value of their mutual cohesive energy density $c_{12}$; this is reflected in a corresponding negative value of $l_{12}.$ As the pressure increases, the correction decreases and the system gets closer to satisfying the London-Berthelot conjecture (\ref{london_berthelot_Conj}). Thus, we can conclude that the binary correction is negligible, but not zero, as long as the volume of mixing is very small or even zero. \section{Discussion and Conclusions} We have carried out a comprehensive investigation of the classical concept of cohesiveness as applied to a thermodynamic system composed of linear polymers. In pure components, we are only dealing with one kind of interaction: microscopic self-interactions between the monomers.
Therefore, we are interested in relating this microscopic self-interaction to a thermodynamic quantity in order to estimate the strength of the microscopic self-interaction. In binary mixtures (polymer solutions or blends), there are two self-interactions and one mutual interaction. The microscopic self-interactions in the mixture do not depend on the state of the mixture. Therefore, they are evidently the same as in pure components. Thus, it is the microscopic mutual interaction that is new, and needs to be extracted by physical measurements. However, what one measures experimentally is not the microscopic interaction strengths, but macroscopic interaction strengths due to thermodynamic modifications. Traditionally, cohesive energies have been used as indicators of the macroscopic interaction strengths. The definition of $c^{\text{P}}$ for pure components, given in (\ref{cohesive_def}), does not depend on any particular approximation or any theory. It is a general definition. Thus, it can be measured directly if the interaction energy $\mathcal{E}_{\text{int}}$ can be measured. It can also be calculated by the use of any theory; only the value obtained will depend on the nature of the theory. On the other hand, the mutual cohesive energy density $c_{12}$ is defined by a particular form of the energy of mixing. This gives rise to two serious limitations of the definition. The first deficiency is caused by the use of the form (\ref{mixEnergy1}), whose validity is questionable. Thus, the extracted value of $c_{12}$ is based on this questionable form. The second deficiency is that this value also depends on the pure component properties $c_{ii}^{\text{P}},$ since the mutual energy is measured with respect to the pure components. However, we have argued that the energy of the mixture will be different from the sum of the pure component energies even if $e_{12}$ is absent. Thus, $c_{12}$ does not necessarily measure mutual interactions.
One of our aims was to see if some thermodynamic quantity can be identified that could play the role of a true $c_{12}$ that would vanish with $e_{12}.$ This is the quantity $c_{12}^{\text{SRS}}$ that we have identified in this work. There is no such problem for $c^{\text{P}},$ since it not only vanishes with $e$, as we have seen before, but it also does not suffer from any approximation. We follow our recent work \cite{RaneGuj2005} in the approach that we take here. We confine ourselves to a lattice model, since the classical approach taken by van Laar \cite{vanLaar} and Hildebrand \cite{Hildebrand} is also based on lattice models in that their theory can only be justified consistently by a lattice model. We then introduce two different reference states to be used for pure components and mixtures, respectively. For pure components, we introduce a hypothetical reference state in which this self-interaction is absent. Since there is only one interaction, which is absent in the reference state, this state is nothing but the athermal state of the pure component. With the use of this state, we introduce the interaction energy $\mathcal{E}_{\text{int}}$ defined via (\ref{InteractionEnergy}). Because of the subtraction in (\ref{InteractionEnergy}), $\mathcal{E}% _{\text{int}}$\ depends directly on the strength of the self-interaction, the only interaction present in a pure component that we wish to estimate. Thus, it is not a surprise that $\mathcal{E}_{\text{int}}$\ is an appropriate thermodynamic quantity that can be used to estimate the strength of the self-interaction. Consequently, we use this quantity to properly define the pure component cohesive energy density $c^{\text{P}}$ at any temperature. We have discussed how this definition resembles the conventional definition of $c^{\text{P}}$ in the literature. 
For the mixture, we introduce the self-interacting reference state (SRS), which allows us to define $c_{12}^{\text{SRS}}$, which vanishes with $e_{12}.$ In contrast, the van Laar-Hildebrand $c_{12}$ does not share this property. As noted earlier, our approach allows us to calculate $c^{\text{P}}$ and $c_{12}$ even for macromolecules like polymers, which are of central interest in this work, or for polar solvents, which we do not consider here but hope to consider in a separate publication. The cohesive energy density $c^{\text{P}}$ is a macroscopic, i.e. a thermodynamic, quantity characterizing microscopic interparticle interactions in a pure component. In general, being a macroscopic quantity, $c^{\text{P}}$ is a function of the lattice coordination number $q$, the degree of polymerization $M$, and the interaction energy $e,$ in addition to the thermodynamic state variables $T,$ and $P$ or $V.$ We have investigated all these dependences in our study here. The same is also true for $c_{12}.$ The SRS, constructed on the same philosophy as the athermal reference state for pure components, differs from a real blend in that the mutual interactions are absent, but the self-interactions are the same. The difference $\mathcal{E}_{\text{int}% }^{\text{M}}$ introduced in (\ref{Mutual_InteractionEnergy}) allows us to estimate the strength of the microscopic mutual interaction between the two components of a blend; we denote this estimate by $c_{12}^{\text{SRS}}$ to distinguish it from the customary quantity $c_{12}$ originally due to van Laar, and to Hildebrand and coworkers. However, $\mathcal{E}_{\text{int}% }^{\text{M}}$ is not what is usually used to define the mutual cohesive energy density $c_{12}.$ Rather, one considers the energy of mixing $\Delta E_{\text{M}}$. We have compared the two approaches in this work.
The conventional van Laar-Hildebrand approach to solubility is based on the use of the regular solution theory (RST). Thus, several of its consequences suffer from the limitations of RST. One of the most severe limitations, as discussed in the Introduction, is that the theory treats a solution as if it were incompressible. This is far from the truth for a real system. Hence, one of the aims of this work is to investigate finite compressibility effects, i.e., the so-called ``equation-of-state'' effects. As a real solution most definitely does not obey the regular solution theory, the experimental results usually will not conform to the predictions of the theory. Thus, we have revisited the solubility ideas within the framework of a new theory developed in our group. This theory goes beyond the regular solution theory and also incorporates equation-of-state effects. We have already found that this theory is able to explain various experimental observations that the regular solution theory (the Flory-Huggins theory for polymers) cannot explain. Therefore, we believe that the predictions of this theory are closer to real observations than those of the regular solution theory. \subsection{Pure Components} We have first studied the cohesive energy density $c^{\text{P}}$ of a pure component. Conventionally, since it is determined by the energy density of vaporization, it is taken to be a quantity independent of the temperature. It is also clear from the discussion of the van der Waals fluid that $c^{\text{P}},$ defined by (\ref{cohesive_def}), is independent of the temperature, since the parameter $a$ in (\ref{vdW_a}) is considered $T$-independent. This particular aspect of $c^{\text{P}}$ ensures the equality of $P_{\text{IN}}\equiv-P_{\text{int}}$ and $(\partial\mathcal{E}/\partial V)_{T};$ see (\ref{PINdEdV}).
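For the van der Waals fluid, this follows from the standard thermodynamic energy equation $(\partial\mathcal{E}/\partial V)_{T}=T(\partial P/\partial T)_{V}-P$. Writing the pressure per mole as $P=RT/(v-b)-a/v^{2}$ (the molar volume $v$ and covolume $b$ are our notation here, with $a$ the parameter of (\ref{vdW_a})), we have%
\[
\left(  \frac{\partial\mathcal{E}}{\partial V}\right)  _{T}=\frac{RT}{v-b}%
-\left[  \frac{RT}{v-b}-\frac{a}{v^{2}}\right]  =\frac{a}{v^{2}},
\]
which is manifestly independent of $T$ as long as $a$ is; a temperature-dependent $a(T)$ would replace $a$ here by $a-T\,\mathrm{d}a/\mathrm{d}T$ and spoil the equality.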
For a pure component, Hildebrand \cite{Hildebrand1916} has argued that the solubility of a given solute in different solvents is determined by the relative magnitudes of the internal pressures, at least for nonpolar fluids. Thus, we have also investigated the internal pressure. However, the equality (\ref{PINdEdV}) is not valid in general, as we have shown above. Thus, the internal pressure, while a reliable alternative measure of the cohesion of the system in its own right, cannot be equated with $(\partial\mathcal{E}/\partial V)_{T}$; their equality is shown to hold only in the RMA. If we allow $a$ in (\ref{vdW_a}) to have a temperature dependence, the equality (\ref{PINdEdV}) will no longer remain valid. Moreover, it can be easily checked that in this situation, $P_{\text{IN}}$ and $c^{\text{P}}$ for the van der Waals fluid will no longer be the same. We have investigated the pure-component $c^{\text{P}}$ using our recursive theory. We find that its behavior is very different depending on whether we consider an isochoric process, in which case it is almost constant with $T$, or an isobaric process, in which case it gradually decreases to zero at very high $T$. In general, we find that \[ c_{V}^{\text{P}}\geq c_{P}^{\text{P}}. \] We have found that the isochoric $c^{\text{P}}$ and $P_{\text{IN}}$ are almost constant as functions of $T$. This provides a strong argument in support of their usefulness as suitable candidates for the cohesive pressure. But $c^{\text{P}}$ and $P_{\text{IN}}$ do not remain almost constant in every process, as the isobaric results in Fig. \ref{F2} clearly establish. The same figure also shows in its inset that the isobaric $c^{\text{P}}$ exhibits a discontinuity due to boiling, as expected. Unfortunately, most experiments are done under isobaric conditions; hence the use of isochoric cohesive quantities may not be appropriate, and may even be misleading, so care has to be exercised. 
We also see that $c^{\text{P}}$ changes considerably over the temperature range, so replacing it by its value at the boiling point may not be very useful in all cases. The cohesive energy also depends on the molecular weight, and usually decreases with $M$, as expected; see Fig. \ref{F7}. This is the situation for $q=14$, and should be contrasted with the behavior shown in the inset in Fig. \ref{F2}, where we show $c_{P}^{\text{P}}$ for $M=10$ and $100$, but for $q=10$. All other parameters are the same as in Fig. \ref{F7}. What we observe is that at low temperatures, $c_{P}^{\text{P}}$ for $M=10$ lies above $c_{P}^{\text{P}}$ for $M=100$, while the situation is reversed at high temperatures. This crossover is due to the boiling transition that the smaller-$M$ pure component must undergo at about $600^{\circ}$C. Thus, $c_{P}^{\text{P}}$ decreases with increasing $M$ only far below the boiling temperatures. The dependence on the lattice coordination number $q$ is also not surprising: $c^{\text{P}}$ increases with $q$ but requires an end-group correction, in that it is proportional to $(q/2-\nu)$; see Fig. \ref{F9}. This means that the solubility function $\delta$ also depends on the process, and its value at the boiling temperature of the system will be dramatically different in the two processes. We have also found, see Fig. \ref{F6}, that $c^{\text{P}}$ is not truly a quadratic function of the monomer density, contrary to one of the predictions of the regular solution theory. \subsection{Mixtures (Solutions or Blends)} We now turn to the mutual cohesive energy density $c_{12}$. Here, the situation is muddled, since the definition of $c_{12}$ is based on a form (\ref{mixEnergy1}) of the energy of mixing $\Delta E_{\text{M}}$ whose validity is questionable beyond the RST. This should be contrasted with the definition of $c^{\text{P}}$, defined by (\ref{cohesive_def}), which is independent of any particular theory. 
Therefore, $c^{\text{P}}$ can be calculated in any theory without any further approximation beyond those inherent in the theory. It can also be measured directly by measuring $\mathcal{E}_{\text{int}}$, and does not require any further approximation to extract it. On the other hand, the calculation of $c_{12}$ in any theory requires calculating $\Delta E_{\text{M}}$ in that theory; thus, this calculation is based on the approximations inherent in the theory. However, one must still extract $c_{12}$ from the calculated $\Delta E_{\text{M}}$ by expressing $\Delta E_{\text{M}}$ in the form (\ref{mixEnergy1}). As discussed above, this form is justified only in the RST, and $c_{12}$ in (\ref{mixEnergy1}) at this level of approximation is indeed a direct measure of the mutual interaction energy $e_{12}$. Whether $c_{12}$ defined by the form (\ref{mixEnergy1}) still has a direct dependence on $e_{12}$ is one of the questions we have investigated here by introducing the SRS. For $c_{12}$ to be a direct measure of the mutual interaction energy $e_{12}$, we have to ensure that it vanishes in the SRS. What we have shown is that $c_{12}$ does not vanish in the SRS, as seen in Fig. \ref{F19}. A new measure of the mutual cohesive energy density, $c_{12}^{\text{SRS}}$, is introduced in (\ref{cohesiveIRS}), which has the desired property. The reference state SRS behaves very differently from the mixture or its athermal analog, as clearly seen from Figs. \ref{F19}, \ref{F14}, and \ref{F13}. In particular, the SRS has strong repulsive interactions between the two species. This results in the SRS volume being much larger than that of the mixture. Therefore, it is possible that the two components may undergo phase separation in the SRS. Its use to define $c_{12}^{\text{SRS}}$ should therefore be limited to the case where the SRS is a single-phase state, so that it can be compared with the mixture. We do not report any result when the SRS is not a single phase. 
We have found that $c_{12}^{\text{SRS}}$ has the correct behavior in that it first rises and then decreases with $T$ for a compressible blend. As explained above, this is a reflection of the way the void density behaves with temperature, as shown in Fig. \ref{F13}. This rise and fall of cohesion is not apparent in the temperature dependence of $c_{12}$. It has also been shown that $c_{12}^{\text{SRS}}$ can be expressed in terms of $c_{12}$. Therefore, we have basically explored the behavior of $c_{12}$ more than that of $c_{12}^{\text{SRS}}$. As expected, and as shown in Fig. \ref{F12}, $c_{12}$ increases with $q$; however, it is not simply a linear function of $q$. It is also clear from the behavior in this figure that $c_{12}$ changes its curvature with $T$. Thus, $c_{12}$ is not linear in the inverse temperature $\beta$ over a wide range of temperatures. We have been specifically interested in the contribution due to the volume of mixing. For this reason, we have considered three different kinds of isometric mixing (zero volume of mixing), two of which we have called EDIM and DDIM; these are introduced in Sect. V. The third isometric mixing is studied as part of the isochoric processes, which we have also considered. The fourth process is an isobaric process, in which mixing is at constant pressure. We have also compared $c_{12}$ with a related internal pressure quantity $P_{\text{IN}}^{\text{M}}$, which is another measure of the mutual cohesive energy density or pressure. For a particular process, the two quantities behave similarly, though their magnitudes differ, in that $P_{\text{IN}}^{\text{M}}>c_{12}$; see Fig. \ref{F18-1}. We have found that the isobaric and EDIM quantities are somewhat similar, and both are different from the DDIM quantities. 
All three quantities have a strong temperature dependence, but the isochoric quantity is very different in that $c_{12}$ or $P_{\text{IN}}^{\text{M}}$ remains almost constant with $T$. We have paid special attention to the violation of the Scatchard-Hildebrand conjecture (\ref{mixEnergy0}) even when the microscopic interactions obey the London conjecture (\ref{london_Conj}). We find that even under isometric mixing (EDIM), the energy of mixing can become negative, as seen in Fig. \ref{F18}. There is another possible source of violation of the Scatchard-Hildebrand conjecture, seen in Fig. \ref{F15}. Here, the pure components in the isobaric and DDIM processes remain the same, so that the pure-component solubilities $\delta$ are the same. Therefore, from (\ref{mixEnergy0}) we would conclude that the energy of mixing should be zero, regardless of the process. What we find is that the energy of mixing is not only nonzero, it is also different in the two processes. This violation is due to non-random mixing caused by size and/or interaction-strength disparity. We have also investigated the behavior of the binary correction $l_{12}$. We find that it need not be small. In particular, it becomes very large in the isobaric and DDIM processes, as seen in Fig. \ref{F18}. As the pressure increases, $l_{12}$ decreases in magnitude and the deviation from the London-Berthelot conjecture (\ref{london_berthelot_Conj}) becomes smaller. This shows that the deviation from the London-Berthelot conjecture is due to the presence of compressibility. Thus, the behavior of the free volume determines that of $l_{12}$. In the isobaric case, we have discovered a very interesting fact: the behavior of $l_{12}$ mimics the behavior of the volume of mixing, as seen clearly in Figs. \ref{F23} and \ref{F24}. 
This observation requires further investigation to see whether there is something unusual about isobaric processes or it is just due to non-isometric mixing in the presence of free volume. In summary, we have found that it is not only the non-isometric mixing by itself that controls the behavior of the cohesive quantities, but also the non-random contributions. \begin{acknowledgments} Acknowledgement is made to the National Science Foundation for support (A.A. Smith) of this research through the University of Akron REU Site for Polymer Science (DMR-0352746). \end{acknowledgments}
\section{Introduction}\label{sec:introduction} We consider a game played by a large population of individuals in continuous time. At every time, each individual engages in play with a random opponent extracted from the population, and the resulting payoff, which depends on the action profiles of both players, is a vector. Such vector payoffs can be interpreted as deriving from a collection of noninterchangeable goods. Let us think, for instance, of a negotiation between an employer and a candidate employee over salary, career prospects, maximal number of days off, and so forth. Formally, we can think of the completeness axiom being satisfied along each dimension of our vector but failing across them, giving a special case of Aumann's \cite{A62} framework. Indeed, vector payoffs may also be appropriate when the continuity axiom fails (see \cite{BBD91}). Alternatively, each player may be representative of a group of individuals whose preferences may not be aggregated into a single ordering, so that the vector payoff has one component for each individual in the group. Finally, payoff vectors also naturally arise when considering players' regret at not having made each possible deviation. \noindent \textbf{Main results.} First, we provide a new model that combines approachability and population games. Given that the opponent is randomly extracted from the population, the approach by Blackwell---which looks at the worst-case payoff---may appear conservative. Thus, we relax Blackwell's conditions, assuming that the opponent is not malevolent but instead is simply extracted from a population with a given distribution; we call this \emph{$1$st-moment approachability}. Second, we build upon the theory of mean-field games and adapt the concept of mean-field equilibrium to our evolutionary set-up; we call this \emph{self-confirmed equilibrium}. Third, we discuss existence and nonuniqueness of the equilibrium. 
Finally, we explore the regret interpretation of our model; whereas $1$st-moment approachability of nonpositive regrets no longer implies Nash equilibrium (as in \cite{HM03}), we show that nonpositive maximal regret does imply Bayesian equilibrium under incomplete information. \noindent \textbf{Related literature.} The theory of ``approachability'' dates back to Blackwell \cite{B56} and culminates in the well-known Blackwell Theorem. Approachability arises in several areas of game theory, such as allocation processes in coalitional games \cite{L02-CT}, regret minimization \cite{L03,HM03}, adaptive learning \cite{CL06,FV99,H05,HM01}, excludability and bounded recall \cite{LS06}, and weak approachability \cite{V92}, just to name a few. For instance, in coalitional games one asks whether the core is an approachable set, and which allocation processes can drive the complaint vector to that set. In regret minimization, one considers the nonpositive orthant in the space of regrets; a player tries to adjust her strategy based on the current regret so as to make that set approachable by the regret vector. Once all players have nonpositive regret, the resulting outcome is an equilibrium for the game. This idea of adapting the new action to the current state of the game is common to adaptive learning and evolutionary games as well, but in regret-based dynamics the state is in payoff (rather than strategy) space. Evolution under incomplete information has been relatively little studied, with the notable exception of Ely and Sandholm \cite{ES05,S07}, who analyse a best-response dynamic with a subpopulation for each possible type; here, by contrast, we have a single population of agents with nonconstant types who adopt (type-dependent) Bayesian strategies through time. Despite its discrete-time nature in the original Blackwell formulation, approachability has been extended to continuous-time repeated games, thus showing common elements with Lyapunov theory \cite{HM03}. 
Though first formalized in finite-dimensional spaces, a definition of approachability in infinite-dimensional spaces has been provided by Lehrer \cite{L02}. Approachability can be reframed within differential games and as such can be studied using differential calculus and stability theory \cite{LS07,SQS09}. In particular, in \cite{LS07} the authors show that, beyond being an extension (to a vector space) of the von Neumann minmax theorem \cite{vN28}, the approachability principle also has elements in common with differential inclusions \cite{AC84}. In addition, \cite{SQS09} establishes connections with viability theory \cite{A91} and set-valued analysis \cite{AF90} (see, e.g., the comparison between an approachable set and a discriminating set), as well as with set-invariance theory \cite{B99}.\footnote{Still within the realm of differential games, it is worth noting that the notion of nonanticipative behavior strategies has a long history \cite{BB11,EK72,SQS09,R69,V67}. Actually, it turns out that classical feedback strategies in differential games are special nonanticipative strategies.} The approachability principle is also behind the notion of excludability; along this line, some authors investigate which sets are approachable and which ones excludable under imperfect information (bounded recall, delayed and/or stochastic monitoring) \cite{LS06}. Connected to approachability as well is the concept of ``attainability.'' Attainability is a notion recently developed in \cite{BLS12,LSB11} in the context of two-player continuous-time repeated games with vector payoffs. Attainability arises in many contexts, such as transportation, distribution, and production network applications. 
The main question is: ``Under what conditions does a strategy for player 1 exist such that the cumulative payoff converges (in the lim sup sense) to a preassigned set (in the space of vector payoffs) independently of the strategy used by player 2?'' A second stream of literature we follow in the present study is the one on \emph{mean field games}. This theory originated in the work of M.\ Y.\ Huang, P.\ E.\ Caines and R.\ Malham\'e \cite{HCM03,HCM06,HCM07}, and independently in that of J.\ M.\ Lasry and P.\ L.\ Lions \cite{LL06a,LL06b,LL07}, where the now standard terminology of Mean Field Games (MFG) was introduced. Explicit solutions in terms of mean field equilibria are not common unless the problem has a linear-quadratic structure; see \cite{B12}. Mean field games have connections to evolutionary games (see for instance \cite{javano}) and large games \cite{A64}. Actually, both the \emph{anonymous game} in \cite{javano} and the \emph{large game} in \cite{A64} build upon the notion of mass interaction and can be seen as a stationary mean field. This paper is organized as follows. In Section \ref{sec:setup}, we set up the problem. In Section \ref{motivations}, we provide our population game motivation for the problem at hand. In Section \ref{main}, we establish the main results of the paper. In Section \ref{regret}, we apply the model to a regret-based setting, and show that, under incomplete information, nonpositive maximal regrets that are approachable in $1$st moment imply Bayesian equilibrium. Finally, in Section \ref{sec:conclusions}, we draw concluding remarks. \noindent {\bf Notation}. We view vectors as columns. For a vector $x$, we use $x_i$ to denote its $i$th coordinate component. Occasionally we may write $(x)_{i=1,\ldots,m}$ to denote an $m$-dimensional column vector. For two vectors $x$ and $y$, we use $x<y$ ($x\le y$) to denote $x_i<y_i$ ($x_i\le y_i$) for all coordinate indices $i$. 
We let $x^T$ denote the transpose of a vector $x$, and $\|x\|$ its Euclidean norm. We write $P(x)$ to denote the projection of a vector $x$ on a set $X$, and ${\rm dist}(x,X)$ for the distance from $x$ to $X$, i.e.\ $P(x) = \arg \min_{y \in X} \|x - y\|$ and ${\rm dist}(x,X)=\|x-P(x)\|$, respectively. We also denote by $conv$ the convex hull of a given set of points. $\partial_x$ indicates the first partial derivative with respect to $x$. \section{The Model}\label{sec:setup} With the above preamble in mind, the game at hand is a two-player repeated game with vector payoffs in continuous time.\footnote{Whilst the game is repeated, opponents are constantly rematched, and hence no supergame considerations arise.} We assume that the players use nonanticipating behavior strategies with delay. This means that the behavior of a player may depend only on past play. In other words, the way a player plays during a given interval of time does not affect the way the opponent plays during that interval. Still, it may affect the other player's play in subsequent intervals. Let $A=\{1,2,\ldots,n\}$ be a discrete set, $a_i:[0,T] \to A$ a measurable function of time, and $a_j:[0,T] \to A$ a random disturbance. Let $u: A \times A \to M$, where $M=\{M_{lk}, l,k\in A\}$ and $M_{lk} \in \mathbb{R}^m$ (each entry $M_{lk}$ is an $m$-dimensional vector). Let $X:=conv\{M_{lk}|\, l,k \in A\}$, where $conv$ denotes the \emph{convex hull}, and consider the differential equation in $X$ \begin{equation}\label{dyng} \left\{\begin{array}{ll} d x(t) = \frac{1}{t} (\mathbb E u(a_i(t),a_j(t)) - x(t)) dt,\quad \forall t \in[0,T],\\ x(0)=x_{0}\in X, \end{array}\right. \end{equation} where $x_{0}$ is generated according to a distribution law $\rho_0(x)$. 
More specifically, consider a probability density function $\rho: X \times [0,+\infty[ \to \mathbb R$, $(x,t) \mapsto \rho(x,t)$, representing the density of the players whose state is $x$ at time $t$, which satisfies $\int_{X} \rho(x,t) dx=1$ for every $t$. Let us also define the mean state over players at time $t$ as $\overline \rho(t) := \int_X x \rho(x,t) dx$. We also have $\rho(x,0)=\rho_0(x)$. The objective of a player is to approach a given target $y:[0,T] \to X$. Then, for each group, consider a running cost $g:X \times X \to [0,+\infty[$, $(x, y)\mapsto g(x,y)$ of the form: \begin{eqnarray}\label{gg} g(x, y) & = & \frac{1}{2}\left[\left(y - x\right)^T Q \left(y - x\right) \right], \end{eqnarray} where $Q>0$ and symmetric. The above cost describes the (weighted) squared deviation of an individual's state from the target. Also consider a terminal cost $\Psi:X \times X \to[0,+\infty[$, $(x,y)\mapsto\Psi(x,y)$ of the form \begin{equation}\label{psig}\Psi(x,y) = \frac{1}{2} (y - x)^TS(y - x),\end{equation} where $S>0$. The problem in its generic form is then the following: \begin{problem}\label{prob1} Let the initial state $x(0)$ be given and with density $\rho_0.$ Given a finite horizon $T>0$, a suitable running cost $g: X \times X \to [0,+\infty[$, $(x,y)\mapsto g(x,y)$, as in (\ref{gg}); a terminal cost $\Psi: X \times X \to [0,+\infty[$, $(x,y)\mapsto\Psi(x,y)$, as in (\ref{psig}); and a suitable dynamics for $x$ as in (\ref{dyng}), solve \begin{eqnarray} \inf_{a_i(\cdot) \in \mathcal C} \left\{J(x_0,a_i(\cdot),a_j(\cdot)) = \int_0^T g(x(t),y)dt + \Psi(x(T),y)\right\},\label{cost} \end{eqnarray} \noindent where $\mathcal C$ is the set of all measurable functions $a_i(\cdot)$ from $[0,+\infty[$ to $A_i$, and $\mathbb E u(\cdot)$ in (\ref{dyng}) must be consistent with the evolution of the distribution $\rho(\cdot)$ if every player behaves optimally. 
\end{problem} \section{Motivation: Population Games}\label{motivations} Consider a population game where, continuously in time, every individual matches with an opponent randomly extracted from the population, and the resulting payoff is a vector. The resulting game is a two-player repeated game with vector payoffs in continuous time, $\Gamma$, that every individual plays against a population with a given (evolving) distribution over actions. Let $A$ be the finite set of actions of every individual; then the instantaneous payoff is given by a function $u: A \times A \to \mathbb{R}^m$, where $m$ is a natural number. We assume w.l.o.g.\ that payoffs are bounded and correspond to the elements of the following discrete set $M=\{M_{lk}, l,k\in A\}$, where $M_{lk} \in \mathbb{R}^m$, so that $u: A\times A \to M$. We extend $u$ to the set of mixed-action pairs, $\Delta(A) \times \Delta(A)$, in a bilinear fashion. The one-shot vector-payoff game $(A,A,u)$ is denoted by $G$, and we will say that the game in continuous time $\Gamma$ is \emph{based on} $G$. The game $\Gamma$ is played over the time interval $[0,\infty)$. We assume that the players use markovian strategies $$\sigma: X \times [0,T] \to A \, \mbox{ such that }\, a_i(t):=\sigma(x,t),$$ where $X:=conv\{M_{lk}|\, l,k \in A\}$ and $x$ is the average (over time) expected (over opponent's play) payoff defined as: \begin{equation} \label{equ966} x(t) = \frac{1}{t}\int_0^t \mathbb E u(a_i(s),a_j(s)) {\rm{d}} s \in \mathbb{R}^m. \end{equation} In the above equation, \begin{equation}\label{expval0} \left\{ \begin{array}{lll} \mathbb E u(a_i(t), a_j(t))& := &u(a_i(t), q(t))\\ \\ q \in \Delta(A) \, & s.t. & q_k = \int_{R_k} \rho(x,t) dx, \\ && R_k:= \{x \in \mathbb R^m| \, \sigma(x,t)=k\}, \, \forall k \in A.\end{array}\right. \end{equation} Once we differentiate (\ref{equ966}) with respect to $t$, we obtain the equation (\ref{dyng}), in the same spirit as in Hart and Mas-Colell's paper \cite{HM03} on continuous-time approachability. 
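As a quick numerical illustration (our own sketch, not part of the original derivation), the averaging dynamics (\ref{dyng}) can be integrated by an explicit Euler scheme. The payoffs below are those of the Prisoners' Dilemma example that follows; the choice that the focal player always cooperates, the population strategy $q=1/2$, and the step size are all illustrative assumptions.

```python
import numpy as np

# Sketch (ours, illustrative parameters): Euler discretization of
# dx = (1/t)(E u(a_i, a_j) - x) dt, with the focal player always playing
# Cooperate against a population that cooperates with probability q = 1/2.
M_CC = np.array([3.0, 3.0])   # vector payoff M_{CC}
M_CD = np.array([0.0, 4.0])   # vector payoff M_{CD}
q = 0.5                       # population probability of Cooperate
expected_u = q * M_CC + (1 - q) * M_CD   # E u(C, a_j) = (1.5, 3.5)

x = np.array([1.0, 1.0])      # initial average payoff x_0 in X
t, dt = 1.0, 0.01
for _ in range(100_000):
    x += (dt / t) * (expected_u - x)   # dx = (1/t)(E u - x) dt
    t += dt

# The running average x(t) converges to the expected one-shot payoff,
# since x(t) = E u + (x_0 - E u)/t along the exact flow.
assert np.allclose(x, expected_u, atol=1e-2)
```

This matches the interpretation of (\ref{equ966}): with a stationary opponent distribution, the time-averaged expected payoff drifts toward the one-shot expected payoff at rate $1/t$.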
Then, Problem \ref{prob1} analyzes the approachability of a given target in the space of vector payoffs on the part of a population of individuals. \begin{example} \textbf{(Prisoners' Dilemma)} Suppose, for instance, that players target the average payoffs across the population. Consider the following game: $$\begin{array}{cc} \begin{tabular}{|c|c|c|} \hline & Cooperate & Defect \\ \hline Cooperate & $(3,3)$ & $(0,4)$ \\ \hline Defect & $(4,0)$ & $(1,1)$\\ \hline \end{tabular} \end{array}$$ Figure \ref{fig:PDp} depicts the payoff space in the continuous-time game based on this Prisoner's Dilemma. Here, the state space is $X=conv\{(3,3),(1,1),(0,4),(4,0)\}$ (the boundary is in solid line), and the target $y$ is the barycenter assuming a uniform distribution. One can visualize the supporting hyperplane $H$ (dot-dashed line) passing through the barycenter, and the vector field $dx(t)$ converging to $(\frac{3}{2},\frac{7}{2})$ for those who cooperate (region below $H$) and to $(\frac{5}{2},\frac{1}{2})$ for those who defect (region above $H$). The set $conv\{(\frac{3}{2},\frac{7}{2}),(\frac{5}{2},\frac{1}{2})\}$ is the set of approachable points with population strategy $q=((\frac{1}{2},\frac{1}{2}),(\frac{1}{2},\frac{1}{2}))$, and the barycenter is at the equilibrium with uniform distribution over $X$. This will be explained in Theorem \ref{polys}. 
\end{example} \begin{figure} [htb] \centering \def\svgwidth{0.6\columnwidth} \input{PDp.eps_tex} \caption{Payoff space of the Prisoners' Dilemma: state space $X=conv\{(3,3),(1,1),(0,4),(4,0)\}$ (boundary in solid line); supporting hyperplane $H$ (dot-dashed line) passing through the barycenter; vector field $dx(t)$ converging to $(\frac{3}{2},\frac{7}{2})$ for those who cooperate (region below $H$) and to $(\frac{5}{2},\frac{1}{2})$ for those who defect (region above $H$); $conv\{(\frac{3}{2},\frac{7}{2}),(\frac{5}{2},\frac{1}{2})\}$ is the set of approachable points with population strategy $q=((\frac{1}{2},\frac{1}{2}),(\frac{1}{2},\frac{1}{2}))$; the barycenter is self-confirmed with uniform distribution over $X$.} \label{fig:PDp} \end{figure} \section{Main results}\label{main} This section outlines the main results of this paper. After introducing the \emph{expected value of the projected game}, Theorem \ref{app_princ} establishes conditions for approachability in $1$st moment. Theorem \ref{polys} introduces the notion of self-confirmed equilibrium. Theorems \ref{thm:ex} and \ref{thm:uq} elaborate on existence and nonuniqueness, respectively. \subsection{Expected value of the projected game} Given the above game, we wish to analyze convergence properties, in the space of distributions, of the cumulative or average payoff $x_i(t)$, in the spirit of approachability. We will make use of the notion of \emph{projected game}, which we recall next. Let $\lambda \in \mathbb R^m$ and denote by $\langle \lambda,G\rangle$ the one-shot zero-sum game whose set of players and their actions are as in game $G$, and where the payoff that player $j$ pays to player $i$ is $\lambda^T u(a_i(t),a_j(t))$ for every $(a_i(t),a_j(t)) \in A_i \times A_j$. 
Observe that, as a zero-sum one-shot game, the game $\langle \lambda, G\rangle$ has a \emph{value}, $val(\lambda)$, obtained as $$val(\lambda):=\min_{a_i(t)} \max_{a_j(t)} \lambda^T u(a_i(t),a_j(t)).$$ Given the stochastic nature of $a_j(t)$, the above min-max operation is not useful for our purposes. We therefore consider the expected value of the game (where the inner maximization is replaced by an expectation) and discuss approachability in expectation. In the light of this, using the bilinear structure of the utility function and assuming markovian strategies $$\sigma: X \times [0,T] \to A \, \mbox{ such that }\, a_i(t):=\sigma(x,t),$$ we can rewrite the expected value as \begin{equation}\label{expval} \left\{ \begin{array}{lll} \mathbb E val(\lambda)& := & \min_{a_i(t)} \mathbb E \lambda^T u(a_i(t), a_j(t))\\ &=&\min_{a_i(t)} \lambda^T u (a_i(t), q(t) ),\\ \\ q \in \Delta(A) \, & s.t. & q_k = \int_{R_k} \rho(x,t) dx, \\ && R_k:= \{x \in \mathbb R^m| \, \sigma(x,t)=k\}, \, \forall k \in A.\end{array}\right. \end{equation} In the case of a state-dependent payoff, which occurs when we consider the game whose payoff is $$f(u(a_i(t),a_j(t)), x(t)) = \frac{1}{t} (\mathbb E u(a_i(t),a_j(t)) - x(t) )= \frac{1}{t} (u(a_i(t),q(t)) - x(t) ),$$ the above expression can be modified as: \begin{equation}\label{expvalsd} \left\{\begin{array}{lll} \mathbb E val_x(\lambda)& := & \min_{a_i(t)} \mathbb E \lambda^T f\Big(u(a_i(t), a_j(t)),x_i\Big)\\ &=&\min_{a_i(t)} \lambda^T f\Big(u (a_i(t),q(t) ),x_i \Big),\\ \\ q \in \Delta(A) \, & s.t. & q_k = \int_{R_k} \rho(x,t) dx, \\ && R_k:= \{x \in \mathbb R^m| \, \sigma(x,t)=k\}, \, \forall k \in A.\end{array}\right. \end{equation} Note that here we use the notation $u(a_i(t),q(t))$ to mean $\mathbb E u(a_i(t),a_j(t))$. \subsection{Approachability in 1st-moment} Approachability theory was developed by Blackwell in 1956 \cite{B56} and is captured in the well-known Blackwell Theorem. 
We recall next the geometric (approachability) principle that lies behind Blackwell's Theorem. To introduce the approachability principle, let $\Phi$ be a closed and convex set in $\mathbb R^m$ and let $P(x)$ be the projection onto $\Phi$ of any point $x \in \mathbb R^m$ (the closest point to $x$ in $\Phi$). \begin{definition}(Approachable set) A closed and convex set $\Phi$ in $\mathbb R^m$ is \emph{approachable} by player 1 if there exists a strategy for player 1 such that (\ref{appr11}) holds true for every strategy of player 2: \begin{equation}\label{appr11}\lim_{t \rightarrow \infty} dist(x(t),\Phi)=0.\end{equation}\end{definition} The next result is the Blackwell approachability theorem. \begin{prop}\label{app_princ_prop}(Approachability principle \cite{B56,LS07}) A closed and convex set $\Phi$ in $\mathbb R^m$ is approachable by player 1 if for every $x(t)$ there exists a strategy for player 1 such that (\ref{appr}) holds true for every strategy of player 2: \begin{equation}\label{appr} [x(t) - P(x(t))]^T [x(t)-P(x(t)) + f(u_i(\sigma(x,t),a_j(t)), x_i(t))] \leq 0, \quad \forall \ t. \end{equation} \end{prop} Note that in the above statement, condition (\ref{appr}) is equivalent to saying that for every $x$, taking $\lambda = \frac{x-P(x)}{\|x-P(x)\|} \in \mathbb R^m$, the value of the projected game satisfies \begin{equation}\label{appr1} [x(t) - P(x(t))]^T [x(t)-P(x(t))] + \|x-P(x)\| val_x(\lambda) \leq 0, \quad \forall \ t. 
\end{equation} Now, if we assume that the opponent is committed to playing a mixed strategy $q \in \Delta(A)$, condition (\ref{appr}) turns into \begin{equation}\label{apprm}[x(t) - P(x(t))]^T [x(t)-P(x(t)) + f(u(\sigma(x,t),q(t)), x(t))]\leq 0, \quad \forall \ t,\end{equation} and the corresponding condition (\ref{appr1}) can be rewritten as \begin{equation} \left\{\begin{array}{lll}[x(t) - P(x(t))]^T [x(t)-P(x(t))]+ \|x-P(x)\| \mathbb E val_x(\lambda)\leq 0, \quad \forall \ t,\\ \mathbb E val_x(\lambda) := \min_{a_i(t)} \lambda^T f (u_i (a_i(t),q(t) ),x_i).\end{array}\right. \end{equation} \begin{theorem}\label{app_princ} \textbf{(Approachability in $1$st-moment)} Let $q \in \Delta(A)$ be given. The set of approachable targets is $$\mathcal T(q)=\{y\mid\,y=\sum_{l,k\in A} p_l q_k M_{lk}, \forall p\in \Delta(A)\}.$$ Furthermore, there exists a partitioning $R_1, \ldots, R_{n}$ such that the approachable strategies are markovian and bang-bang: \begin{equation}\sigma(x)=\left\{\begin{array}{ll} a_i=k & \mbox{if $x \in R_k:=\{\xi| \, (\xi-y)^T (u(k,q)-y) \leq 0\}$}\\ a_i\not = k & \mbox{otherwise.}\end{array}\right.\end{equation} \end{theorem} \vspace{1mm}\noindent{\it Proof.}\quad Sketch. (Sufficiency) Let $y\in \mathcal T(q)$. Rewrite it as $y= \sum_{l,k\in A} p_l q_k M_{lk}$, where $p,q \in \Delta(A)$. Let us also take $\Phi = \{y(t)\}$. Then for every $x \in X$, taking $\lambda = \frac{x-y}{\|x-y\|} \in \mathbb R^m$, the value of the projected game satisfies \begin{equation}\label{apprm1} \left\{\begin{array}{lll} [x(t) - y]^T [x(t)-y] + \|x-y\| \mathbb E val_x(\lambda)\leq 0, \quad \forall \ t, \\ \mathbb E val_x(\lambda) := \min_{a_i(t)} \lambda^T f\Big(u(a_i(t),q(t) ),x\Big).\\ \end{array}\right. \end{equation} (Necessity) Let $y\not \in \mathcal T(q)$. Then the above does not hold. {\bf Q.E.D.} \bigskip In the problem at hand, one additional challenge is that $q$ must be \emph{self-confirmed}. 
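To see the bang-bang strategy of the theorem at work, the following sketch (ours; the parameters are illustrative) simulates it for the Prisoners' Dilemma of Section \ref{motivations} with population strategy $q=(\frac{1}{2},\frac{1}{2})$ and target $y=(2,2)$, the barycenter: at each step the player picks the action $k$ with $(x-y)^T(u(k,q)-y)\le 0$, and the average payoff $x(t)$ approaches $y$.

```python
import numpy as np

# Sketch (ours, illustrative parameters): bang-bang markovian strategy for the
# Prisoners' Dilemma against a population that cooperates with probability 1/2.
u_C = 0.5 * np.array([3.0, 3.0]) + 0.5 * np.array([0.0, 4.0])  # u(C, q) = (1.5, 3.5)
u_D = 0.5 * np.array([4.0, 0.0]) + 0.5 * np.array([1.0, 1.0])  # u(D, q) = (2.5, 0.5)
y = 0.5 * (u_C + u_D)          # target in T(q): barycenter (2, 2), p = (1/2, 1/2)

x = np.array([1.0, 1.0])       # initial average payoff
t, dt = 1.0, 0.01
for _ in range(100_000):
    # sigma(x): play C on the half-space (x - y)^T (u(C,q) - y) <= 0, else D
    drift = u_C if (x - y) @ (u_C - y) <= 0 else u_D
    x += (dt / t) * (drift - x)    # Euler step of dx = (1/t)(u(sigma(x),q) - x) dt
    t += dt

# dist(x(t), {y}) -> 0: the target y is approached in 1st moment
assert np.linalg.norm(x - y) < 1e-2
```

Near the separating hyperplane the strategy chatters between the two actions, and the resulting sliding motion has average drift $(1/t)(y-x)$, which is why the distance to the target decays like $1/t$.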
This means that the mixed strategy $q$ entering the computation of the expected value of the projected game $\mathbb E val_x(\lambda)$ must reflect the current state distribution. In formulas, this corresponds to expanding (\ref{apprm1}) as follows: \begin{equation}\label{apprm11} \left\{\begin{array}{lll} [x(t) - y]^T [x(t)-y] + \|x- y\| \mathbb E val_x(\lambda)\leq 0, \quad \forall \ t, \\ \mathbb E val_x(\lambda) := \min_{a_i(t)} \lambda^T f\Big(u(a_i(t),q(t) ),x\Big),\\ q \in \Delta(A) \, s.t. \, q_k = \int_{R_k} \rho(x,t) dx, \\ \qquad R_k:=\{\xi| \, (\xi-y)^T (u(k,q)-y) \leq 0\} \, \forall k \in A. \end{array}\right. \end{equation} In the rest of the paper we look for self-confirmed solutions, which we call equilibria. \subsection{The mean field game}\label{derive} Let us denote by $v(x, t)$ the value of the optimization problem starting from time $t$ at state $x$. The first step is to show that the problem results in the following mean field game system for the unknown scalar functions $v(x, t)$ and $\rho(x, t)$ when each group behaves according to (\ref{cost}): \begin{equation} \label{eq:meanfieldcor} \left\{ \begin{array}{l} \displaystyle \partial_t v(x,t)+\inf_{a_i } \left\{f(u(a_i,q),x) \partial_x v(x,t)+g(x,y)\right\} =0\ \mbox{in } \mathbb{R}^m\times[0,T[,\\ \\ \displaystyle v(x,T)=\Psi(x,y)\ \forall\ x\in\mathbb{R}^m,\\ \displaystyle \\ \partial_t \rho(x,t) + div(\rho(x,t) \cdot f(u(a_i^*,q),x)) = 0,\\ \\ \rho(0)= \rho_0,\\ \end{array} \right. \end{equation} where $a_i^*(t,x)$ and $q$ are the optimal time-varying state-feedback controls of players $i$ and $j$, respectively, obtained as \begin{equation} \label{eq:meanfield1new} \left\{ \begin{array}{l} a_i^* = \sigma(x) \in \arg \min_{a_i \in A_i}\{f(u(a_i,q),x) \partial_x v(x,t)+g(x,y)\},\\ \\ q \in \Delta(A) \, s.t. \, q_k = \int_{R_k} \rho(x,t) dx, \\ \qquad R_k:= \{x \in \mathbb R^m| \, \sigma(x)=k\}, \, \forall k \in A.\end{array} \right. 
\end{equation} The mean field game system (\ref{eq:meanfieldcor}) appears in the form of two coupled PDEs intertwined in a forward-backward way. The first equation in (\ref{eq:meanfieldcor}) is the \emph{Hamilton-Jacobi-Bellman} (HJB) equation with variable $v(x,t)$ and parametrized in $\rho(\cdot)$. Given the boundary condition on final state (second equation in (\ref{eq:meanfieldcor})), and assuming a given population behavior captured by $\rho(\cdot)$, the HJB equation is solved backwards and returns the value function and best-response behavior of the individuals (first equation in (\ref{eq:meanfield1new})) as well as the worst adversarial response (second equation in (\ref{eq:meanfield1new})). The HJB equation is coupled with a second PDE, known as \emph{Fokker-Planck-Kolmogorov (FPK)} (third equation in (\ref{eq:meanfieldcor})), defined on variable $\rho(\cdot)$ and parametrized in $v(x,t)$. Given the boundary condition on initial distribution $\rho(0)= \rho_0$ (fourth equation in (\ref{eq:meanfieldcor})), and assuming a given individual behavior described by $u^*$, the FPK equation is solved forward and returns the population behavior time evolution $\rho(t)$. Let condition (\ref{apprm}) hold true. Now, for given $x$, take for $\lambda$ the value $\lambda(\partial_x v)=\frac{\partial_x v(x,t)}{\|\partial_x v(x,t)\|}$ which is the gradient direction on $x$. Then, we can introduce the expected value of the \emph{projected anti-gradient game} $$\mathbb E val_x[\partial_x v(x,t)]:=\lambda(\partial_x v)^T f(u_i(a_i^*,q),x).$$ We can then establish the following result. \begin{theorem}\label{polys} \textbf{(Self-confirmed equilibria)} Let condition (\ref{apprm}) hold true. 
Then, the mean-field game formulation of Problem \ref{prob1} is \begin{equation}\label{cbl22} \left\{\begin{array}{lll} \partial_t v(x,t) + \|\partial_x v \| \mathbb E val_x[\partial_x v] + \frac{1}{2} (y(t) - x)^T Q (y(t) - x) =0,\\ \; \ \mbox{ in } \mathbb{R}^m\times[0,T[,\\ \\ v(x,T)= \Psi (y(T),x),\ \mbox{in } \mathbb{R}^m,\\ \\ \partial_t \rho(x,t) + div(\rho(x,t) \cdot f(u_i(a_i^*,q),x)) =0 \ \mbox{in } \mathbb{R}^m\times[0,T[,\\ \\ \rho(x,0)= \rho_0(x)\ \mbox{in } \mathbb{R}^m. \end{array}\right. \end{equation} Furthermore, the optimal controls for players 1 and 2 are \begin{equation}\label{optul2} \left\{\begin{array}{lll} a_i^* = \sigma(x) \in \arg \min_{a_i\in A_i} \lambda(\partial_x v)^T f(u(a_i,q),x) \\ q \in \Delta(A) \, s.t. \, q_k = \int_{R_k} \rho(x,t) dx, \\ \qquad R_k:=\{\xi| \, (\xi-y)^T (u(k,q)-y) \leq 0\} \, \forall k \in A,\\ \sigma(x)=k, \mbox{such that $x \in R_k$.} \end{array}\right. \end{equation} \end{theorem} \vspace{1mm}\noindent{\it Proof.}\quad{} Due to the bilinear structure of $f$, we can deduce that the best-response strategy $u^*$ and the worst adversarial disturbance $w^*$ lie on vertices of the associated simplices in $\mathbb R^p$ and $\mathbb R^q$, respectively. This corresponds to saying that both strategies are \emph{pure strategies}. We recall here that pure strategies are such that each player chooses a single predetermined action, in contrast with \emph{mixed strategies}, where players select probabilities on actions and at time of play a random mechanism consistent with the selected probability distribution determines the actual action. A consequence of this is that the mean field equilibrium, if it exists, is in pure strategies as well.
We can rewrite the value of the anti-gradient projected game as $$ \mathbb E val_x[ \partial_x v] = \inf_{l\in A} \frac{1}{t} \Big(\sum_{k \in A} q_k M_{lk} - x\Big)^T \lambda(\partial_x v).$$ Best responses and adversarial strategies are then given by $$ a_i^* = \arg \min_{l \in A} \frac{1}{t} \Big(\sum_{k \in A} q_k M_{lk} - x\Big)^T \lambda(\partial_x v).$$ With the above definition of $\mathbb E val_x[\partial_x v]$ in mind, the Hamilton-Jacobi part of (\ref{eq:meanfieldcor}) can be rewritten as \begin{eqnarray}\nonumber \partial_t v + \|\partial_x v \| \mathbb E val_x [\partial_x v] + \frac{1}{2}\left(y(t) - x(t)\right)^T Q \left(y(t) - x(t)\right) \label{eq:HJ0} =0\ \mbox{in } \mathbb{R}^m\times[0,T[, \\v(x,T)=\Psi(x)\ \forall\ x\in\mathbb{R}^m \end{eqnarray} It is left to observe that $f(u^*,w^*)=A_{i^*j^*}$, which proves the third equation (the FPK equation). {\bf Q.E.D.} In principle, to find the optimal control input we need to solve the two coupled PDEs in (\ref{cbl22}) in $v$ and $\rho$ with given boundary conditions (second and last conditions). \subsection{Existence and nonuniqueness of equilibria} In this section we investigate existence and nonuniqueness of equilibria. To do this, we analyze the time-dependence of an estimate error $\nu(t)$, which accounts for the deviation between an estimated density $q(t)$ and the current one $\tilde q(t)$ at time $t$: $$\nu(t)= q(t) - \tilde q(t),$$ where \begin{equation}\label{til} \left\{\begin{array}{ll} \tilde q_k(t) = \int_{R_k} \rho(x)dx \\ R_k:=\{\xi| \, (\xi-y(t))^T (u(k,q)-y(t)) \leq 0\}. \end{array}\right.\end{equation} Observe that the time-dependence of $ \tilde q(t)$ enters in the above through the time-varying nature of the target $y(t)$. Now, according to our procedure, we wish to hypothesize a pair $(p,q)$, which constitutes the input, and obtain a new density $\tilde q(p,q)$ as an output.
To see this, from $y=\sum_{l,k\in A} p_l q_k M_{lk}, \forall p,q\in \Delta(A)$ the expression (\ref{til}) can be rewritten as \begin{equation} \left\{\begin{array}{ll} \tilde q_k(p,q) = \int_{R_k} \rho(x)dx,\\ R_k:=\{\xi| \, (\xi-\sum_{l,k\in A} p_l q_k M_{lk})^T (u(k,q)-\sum_{l,k\in A} p_l q_k M_{lk}) \leq 0\}. \end{array}\right.\end{equation} Eventually, the procedure should return a fixed point. In other words, if we think of an equilibrium as the pair $(p^*,q^*)$ such that $\nu(p^*,q^*) = 0$, existence of an equilibrium is now related to existence of a fixed point for the above procedure, i.e., $$\tilde q(p,q)= q.$$ The above means that, given a $(p,q)$ as input to our procedure, the output $\tilde q(p,q)$ coincides with the hypothesized density $q$. It is natural to represent the above algorithmic procedure as a continuous-time dynamical system and thus to relate convergence to a fixed point to the asymptotic stability of the dynamics. The next assumption introduces conditions for the asymptotic stability to hold. \begin{assumption}\label{1} There exists a pair $(\dot p,\dot q)$ such that \begin{equation}\label{cond} \left[\begin{array}{c} - \partial_{p} \tilde q_1 \dot p + \dot q_1 - \partial_{q} \tilde q_1 \dot q \\ \vdots \\ - \partial_{p} \tilde q_i \dot p + \dot q_i - \partial_{q} \tilde q_i \dot q \\ \vdots \\ - \partial_{p} \tilde q_m \dot p + \dot q_m - \partial_{q} \tilde q_m \dot q \end{array} \right] := (- \partial_{p} \tilde q_i \dot p + \dot q_i - \partial_{q} \tilde q_i \dot q )_{i=1,\ldots,m} \leq - \kappa (q - \tilde q).\end{equation} \end{assumption} The above describes the possibility of varying $(p,q)$ in order to reduce the estimate error $\nu$, whatever the current error is. The next result establishes the existence of an equilibrium based on the above condition. \begin{theorem}\label{thm:ex} {\bf(existence)} Let Assumption \ref{1} hold. Then, the estimate error decays exponentially fast, i.e.
$$\|\nu(t)\| \leq e^{-\kappa t}\|\nu(0)\|.$$ \end{theorem} \vspace{1mm}\noindent{\it Proof.}\quad{ This proof is based on a Lyapunov stability approach. In particular, let us introduce a quadratic (in the error) Lyapunov function $$\mathcal L = \frac{1}{2}\nu^T \nu,$$ and show that its derivative is negative whenever the error is nonzero. The time derivative can be decomposed as the sum of two terms involving the gradient of $\mathcal L$ with respect to the two variables $p$ and $q$. More specifically, \begin{equation} \begin{array}{ll} \dot {\mathcal L} = (\partial_{p} \mathcal L)^T \dot p + (\partial_{q} \mathcal L)^T \dot q\\ =\nu^T \dot \nu = (q - \tilde q)^T \Big[ \left((\partial_{p} \nu_i)^T \dot p\right)_{i=1,\ldots,m} + \left((\partial_{q} \nu_i)^T \dot q \right)_{i=1,\ldots,m} \Big] \\ = (q - \tilde q)^T (- \partial_{p} \tilde q_i \dot p + \dot q_i - \partial_{q} \tilde q_i \dot q )_{i=1,\ldots,m}. \end{array}\end{equation} From condition (\ref{cond}), we also have $$\dot {\mathcal L} \leq - \kappa (q- \tilde q)^T(q- \tilde q) = - \kappa \nu^T \nu,$$ which proves the claim. {\bf Q.E.D.} } Essentially, the above theorem shows that if we let the algorithm run for a long time, the estimate error asymptotically converges to zero, namely, $$\lim_{t \rightarrow \infty} \nu = 0,$$ which proves the existence of an equilibrium. We are now in the position to study nonuniqueness of equilibria. In particular, we provide a variational condition under which the equilibrium is nonunique.
\begin{theorem}\label{thm:uq} {\bf (nonuniqueness)} Starting at an equilibrium where $\mathcal L=0$, if for all $\lambda \in \mathbb R^m$, $\|\lambda\|=1$ we have \begin{equation} \begin{array}{ccc} \min_{\dot p,\dot q} \lambda^T \dot \nu =\min_{\dot p,\dot q} \lambda^T (- \partial_{p} \tilde q_i \dot p + \dot q_i - \partial_{q} \tilde q_i \dot q )_{i=1,\ldots,m}\\ < 0 < \max_{\dot p,\dot q} \lambda^T \dot \nu = \max_{\dot p,\dot q} \lambda^T (- \partial_{p} \tilde q_i \dot p + \dot q_i - \partial_{q} \tilde q_i \dot q )_{i=1,\ldots,m},\end{array}\end{equation} then there exists a $(\dot p,\dot q)$ such that $\dot{\mathcal L}=0$ and thus the current equilibrium is nonunique. \end{theorem} \vspace{1mm}\noindent{\it Proof.}\quad{Since $\lambda^T \dot \nu$ can take both signs, by continuity there exists a $(\dot p,\dot q)$ such that $$\tilde q (p+\dot p dt,q+\dot q dt)= q+\dot q dt.$$ The above also means that the error $$\nu= \tilde q (p+\dot p dt,q +\dot q dt) - (q + \dot q dt )= 0.$$ {\bf Q.E.D.} } \subsection{Solution of the mean field game} This section investigates the microscopic dynamics of every player given an equilibrium $(p,q)$ and the corresponding target, which is a common prior, denoted by $$y=\sum_{l,k\in A} p_l q_k M_{lk}.$$ As a result we obtain that such dynamics is a ``potential'' one, in the sense that every player's current average payoff, which we can call the \emph{state} of the player, describes a trajectory along the anti-gradient of a potential function, the latter being the value function of the mean-field game introduced earlier. To this purpose, let us denote by $e(t)$ the deviation between the target $y$ that every player wishes to approach and the current average payoff $x(t)$, namely $$e(t) =y-x(t).$$ Given that our running cost is quadratic, it is natural, from dynamic programming, to assume that the value function also has a quadratic structure. This is a recurrent approach which needs an a posteriori verification of the consistency of the quadratic assumption.
In particular, let us assume that the upper bound for the value function takes the form \begin{equation}\label{phi} \phi(x,t) = \frac{1}{2} e^T \Phi_t e, \end{equation} where $\Phi_t$ is a suitable positive-definite matrix, i.e., $\Phi_t >0$. Likewise, consider a quadratic function for the terminal penalty, namely, $$\Psi(x) = \frac{1}{2} e(T)^T \psi e(T).$$ Then, the HJB equation in (\ref{eq:HJ0}) can be rewritten as \begin{eqnarray}\nonumber \partial_t \phi(x,t) + \|\partial_x \phi(x,t) \| \mathbb E val_x [\partial_x \phi(x,t)] + \frac{1}{2} e(t)^TQ e(t) \label{eq:HJ00} =0\ \mbox{in } \mathbb{R}^m\times[0,T[, \\ \phi(x,T)=\Psi(x)\ \forall\ x\in\mathbb{R}^m \end{eqnarray} Substituting the expression (\ref{phi}) for the value function in (\ref{eq:HJ00}) we obtain \begin{eqnarray}\nonumber \frac{1}{2}e(t)^T \dot \Phi_t e(t) -\frac{1}{2} e(t)^T \Phi_t e(t) + \frac{1}{2}e(t)^T Q e(t) =0\ \mbox{in } \mathbb{R}^m\times[0,T[, \\ \frac{1}{2} e(T)^T \Phi_T e(T)=\Psi(x)\ \forall\ x\in\mathbb{R}^m \end{eqnarray} The advantage of writing the HJB as above is that all terms are explicitly written as quadratic forms in the error $e(t)$. Considering that the HJB has to hold true for every $e(t)$, we can drop $e(t)$ and thus obtain an expression in the matrix variable $\Phi_t$ alone, as displayed next: \begin{equation}\nonumber \left\{\begin{array}{lll} \dot \Phi_t - \Phi_t + Q =0\ \mbox{in } [0,T[, \\ \Phi_T= \psi.\end{array}\right. \end{equation} The above has the form of a classical differential Riccati equation which can be solved backward in time given the boundary condition on the matrix in the terminal penalty, $\Phi_T= \psi$. We can use such a result to analyze the microscopic dynamics of each player as detailed in the next subsection.
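As a quick sanity check, the Riccati equation above can be integrated numerically. The sketch below (scalar case, with assumed illustrative data $Q=2$, $\psi=1$, $T=1$) integrates $\dot \Phi_t = \Phi_t - Q$ backward from $\Phi_T=\psi$ and compares the result at $t=0$ with the closed-form solution $\Phi_t = Q + (\psi - Q)e^{t-T}$:

```python
import math

# Assumed scalar data (illustrative): running-cost weight Q, terminal weight psi
Q, psi, T = 2.0, 1.0, 1.0
n = 100000
dt = T / n

# integrate dPhi/dt = Phi - Q backward in time from the boundary Phi(T) = psi
Phi = psi
for _ in range(n):
    Phi = Phi - dt * (Phi - Q)   # Euler step from t to t - dt

# closed-form solution Phi(t) = Q + (psi - Q) * exp(t - T), evaluated at t = 0
exact = Q + (psi - Q) * math.exp(-T)
assert abs(Phi - exact) < 1e-3
assert Phi > 0.0   # consistent with the positive-definiteness assumption on Phi_t
```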
\subsubsection{Microscopic model} Every single player is characterized by the following system of equations involving the evolution of the average payoff (first equation), its best-response (second equation), and the expression for the density (third equation): \begin{equation}\label{optul222} \left\{\begin{array}{lll} dx(t)= \frac{1}{t} \left(\sum_{k \in A} q_k M_{a^*k} - x(t) \right) dt,\\ a^*(x,t)= \arg \min_{a\in A} (\Phi_t e(t))^T \left(\sum_{k \in A} q_k M_{ak} - x(t) \right), \\ q \in \Delta(A) \, s.t. \, q_k = \int_{R_k} \rho(x,t) dx, \\ \qquad R_k:= \{x \in \mathbb R^m| \, \sigma(x,t)=k\}, \, \forall k \in A. \end{array}\right. \end{equation} Note that the expression for the best-response is obtained from (\ref{optul2}), where $\partial_x v$ is now replaced by $\Phi_t e(t)$. This is a straightforward consequence of assuming the value function quadratic as in (\ref{phi}). Let $t=e^s$; then $$\dot x(s)= \sum_{k \in A} q_k M_{a^*k} - x(s) = u(a^*,q) - x(s).$$ For all $x$ the supporting hyperplane $H:=\{\xi| \, (\xi-y)^T (u(a^*,q)-y) = 0\}$ separates $x$ from $u(a^*,q)$, i.e., $$(x-y)^T (u(a^*,q) -y) = (x-y)^T (\sum_{k \in A} q_k M_{a^*k}-y) \leq 0.$$ Then from Theorem \ref{app_princ} approachability follows. \section{Application: Regret and Bayesian equilibrium}\label{regret} Perhaps the leading application of games with vector payoffs is in the study of regret-based dynamics, to which we now turn. \subsection{Regret targeting in classical two-player games} Given a symmetric normal-form game with common action set $A$ and symmetric payoff function $\pi:A^2\rightarrow\mathbb{R}$, let the \emph{regret} of player $i$ from not having played action $k\in A$ under action profile $\alpha\in A^2$ be $$r(k,\alpha)=\pi(k,\alpha_{-i})-\pi(\alpha_i,\alpha_{-i}).$$ A straightforward way to justify the vector payoffs introduced earlier is to make them coincide with the regret vector associated with each action profile, i.e.
$$u(\alpha):=\Big(r(k,\alpha)\Big)_{k \in A}.$$ In Hart and Mas-Colell \cite{HM03}, approachability of the nonpositive orthant implies convergence to Nash equilibrium under such payoffs. This is no longer true for $1$st-moment approachability, which drives \emph{expected}---rather than maximum---regret to zero, so that some deviations could still have positive regret. In the following, we turn standard games like the Prisoners' Dilemma, coordination games and Hawk--Dove games into games with regret vectors of type $$\begin{array}{cc} \begin{tabular}{|c|c|c|} \hline & Left & Right \\ \hline Top & $\left(\begin{array}{c} 0 \\ a \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ b \end{array}\right)$ \\ \hline Bottom & $\left(\begin{array}{c} -a \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} -b \\ 0 \end{array}\right)$ \\ \hline \end{tabular} \end{array}$$ and analyse the resulting dynamics of a population targeting expected regret. \begin{example} \textbf{(Prisoners' Regret)} Consider again the Prisoners' Dilemma, and the following bimatrix, which represents the regret vector of player 1: $$\begin{array}{cc} \begin{tabular}{|c|c|c|} \hline & Cooperate & Defect \\ \hline Cooperate & $\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$ \\ \hline Defect & $\left(\begin{array}{c} -1 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} -1 \\ 0 \end{array}\right)$\\ \hline \end{tabular} \end{array}$$ Putting ourselves in the position of the Row player, and supposing that the Column player is randomly extracted from the population, we have that if Column is playing $D$, then if Row switched from $D$(efect) to $C$(ooperate), he would lose his payoff of $1$, whereas if he stuck to $D$ the regret would be $0$. This explains the vector payoff $(-1,0)$ for the action profile $(D,D)$. Likewise, if Row switched from $C$ to $D$ he would earn a payoff of $1$, in comparison with a regret of $0$ when sticking to $C$. 
This is represented by the regret vector $(0,1)$ for the action profile $(C,D)$. The reasoning would be analogous if Column were to play $C$. Note that at the pure Nash equilibrium $(D,D)$ the regret vector is component-wise nonpositive. \begin{figure} [htb] \centering \def0.4\columnwidth{0.4\columnwidth} \input{PD.eps_tex} \caption{Regret space of Prisoners' dilemma: State space $X=conv\{(-1,0),(0,1)\}$ (solid line), initial distribution $\rho(x,0)$ (grey area), and vector field $dx(t)$ converging to $y=(-0.5,0.5)$.} \label{fig:PD} \end{figure} Figure~\ref{fig:PD} depicts the state space $X=conv\{(-1,0),(0,1)\}$ (solid line) in the case with an initial distribution $\rho(x,0)$ (grey area) of players. The arrows indicate the vector field $dx(t)$ if every player in state $x \in conv\{(-1,0),(-1/2,1/2)\}$ cooperates, i.e.\ $a_i=1$, and every player in state $x \in conv\{(0,1),(-1/2,1/2)\}$ defects. The vector field is such that eventually all players converge to the target $y=(-1/2,1/2)$. Consequently, the distribution converges asymptotically to a Dirac impulse in $y$.
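The convergence just described can be reproduced with a few lines of code. The sketch below (illustrative only) uses the regret vectors of this example and the time-rescaled dynamics $\dot x(s)= u(a^*,q) - x(s)$ derived earlier, applies the bang-bang rule of Theorem \ref{app_princ}, and checks that the average regret approaches $y=(-1/2,1/2)$:

```python
# Regret vectors of this example: u(C) = (0, 1), u(D) = (-1, 0); they do not
# depend on q because the two columns of the regret bimatrix coincide.
u = {"C": (0.0, 1.0), "D": (-1.0, 0.0)}
y = (-0.5, 0.5)  # target

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def sigma(x):
    # bang-bang rule: cooperate on R_C = {xi : (xi - y)^T (u(C) - y) <= 0},
    # defect otherwise
    uc_minus_y = (u["C"][0] - y[0], u["C"][1] - y[1])
    return "C" if dot((x[0] - y[0], x[1] - y[1]), uc_minus_y) <= 0.0 else "D"

# Euler integration of the time-rescaled dynamics dx/ds = u(sigma(x)) - x
x = (-1.0, 0.0)  # start at the vertex u(D)
ds = 0.01
for _ in range(2000):
    a = u[sigma(x)]
    x = (x[0] + ds * (a[0] - x[0]), x[1] + ds * (a[1] - x[1]))

err = ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) ** 0.5
assert err < 0.05  # the state chatters along the switching line into y
```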
\end{example} \begin{example} \textbf{(Coordination game)} Consider now the coordination game in the bimatrix on the left, with associated regret-vector game on the right: $$\begin{array}{cc} \begin{tabular}{|c|c|c|} \hline & Mozart & Mahler \\ \hline Mozart & $(2,2)$ & $(0,0)$ \\ \hline Mahler & $(0,0)$ & $(1,1)$ \\ \hline \end{tabular} $~~~~$ \begin{tabular}{|c|c|c|} \hline & Mozart & Mahler \\ \hline Mozart & $\left(\begin{array}{c} 0 \\ -2 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$ \\ \hline Mahler & $\left(\begin{array}{c} 2 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} -1 \\ 0 \end{array}\right)$\\ \hline \end{tabular} \end{array}$$ \begin{figure} [htb] \centering \def0.4\columnwidth{0.5\columnwidth} \input{Coord.eps_tex} \caption{Regret space of the coordination game: State space $X=conv\{(-1,0),(0,1),(0,-2),(2,0)\}$ (boundary in solid line), and vector field $dx(t)$ converging to $(1,0)$ (grey area) and $(0,-1)$ (white area), approachable point is $y=(0,-1)$, set of approachable points is $conv\{(1,0),(0,-1)\}$ (dashed line) with mixed population strategy $q=(\frac{2}{3},\frac{1}{3})$.} \label{fig:Coord} \end{figure} In Fig.\ \ref{fig:Coord} we illustrate the state space $X=conv\{(-1,0),(0,1),(0,-2),(2,0)\}$ (the boundary is in solid line). With a target $y=(0,-1)$, suppose we have a distribution on actions $q=(2/3,1/3)$, i.e.\ $2/3$ of the population plays Mozart, then $u(1,q)= (0,-1)$ and $u(2,q)= (1,0)$ (here $k=2$ means playing Mahler). The set of approachable points with mixed population strategy $q=(2/3,1/3)$ is $conv\{(1,0),(0,-1)\}$ (dashed line), namely, any point in the convex hull of $u(1,q)= (0,-1)$ and $u(2,q)= (1,0)$. The arrows indicate the vector field $dx(t)$ if every player in state $x \in R_2:=\{\xi| \, (\xi-y)^T (u(2,q)-y) \leq 0\}$ (grey area) plays Mahler, namely, $a_i=\sigma(x)=2$. 
On the other hand, every player in state $x \in R_1:=\{\xi| \, (\xi-y)^T (u(1,q)-y) \leq 0\}$ (white area) plays Mozart, namely, $a_i=\sigma(x)=1$. Obviously we need the integral of the distribution $\rho$ over $R_2$ to be consistent with the initial assumption, which means $q_2=\int_{R_2} \rho(x,t) dx =1/3$. If this occurs, the vector field is such that eventually all players converge to $y=(0,-1)$. Consequently, the distribution converges to a Dirac impulse in $y$. \end{example} \begin{example} \textbf{(Hawk--Dove game)} We can likewise transform the Hawk--Dove (or chicken) game on the left into the corresponding regret-vector game on the right: $$\begin{array}{cc} \begin{tabular}{|c|c|c|} \hline & Hawk & Dove \\ \hline Hawk & $\Big( -1,-1\Big)$ & (4,0) \\ \hline Dove & (0,4) & $\Big( 2,2\Big)$ \\ \hline \end{tabular} $~~~~$ \begin{tabular}{|c|c|c|} \hline & Hawk & Dove \\ \hline Hawk & $\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ -2 \end{array}\right)$ \\ \hline Dove & $\left(\begin{array}{c} -1 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} 2 \\ 0 \end{array}\right)$\\ \hline \end{tabular} \end{array}$$ We have two pure Nash equilibria, $(Dove, Hawk)$ and $(Hawk, Dove)$, whose corresponding regret vectors are nonpositive.\end{example} More generally, let us now consider the parametric game introduced earlier: $$\begin{array}{cc} \begin{tabular}{|c|c|c|} \hline & Left & Right \\ \hline Top & $\left(\begin{array}{c} 0 \\ a \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ b \end{array}\right)$ \\ \hline Bottom & $\left(\begin{array}{c} -a \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} -b \\ 0 \end{array}\right)$ \\ \hline \end{tabular} \end{array}$$ Fig.\ \ref{fig:ab1} illustrates the state space $X=conv\{(0,a),(-a,0),(-b,0),(0,b)\}$ (the boundary is in solid line) where $a< 0 < b$. The target $y=(0,a)$ is in the negative orthant.
Here we consider a distribution on actions $q=(1,0)$, i.e.\ everybody plays $k=1$, then $u(1,q)= (0,a)$ and $u(2,q)= (-a,0)$. The arrows indicate the vector field $dx(t)$ for which eventually all players converge to $y=(0,a)$. Consequently, the distribution converges to a Dirac impulse in $y$. Note that the supporting hyperplane $H:=\{\xi| \, (\xi-y)^T (u(2,q)-y) = 0\}$ (dot-dashed line) intersects $X$ at only one point (the vertex), which is proven to be necessary for the vertex to be at the equilibrium. This is established in Theorem \ref{polys}. \begin{figure} [htb] \centering \def0.4\columnwidth{0.6\columnwidth} \input{ab1.eps_tex} \caption{Regret space of parametric game with $a< 0 < b$: State space $X=conv\{(0,a),(-a,0),(-b,0),(0,b)\}$ (boundary in solid line), vector field $dx(t)$ converging to $(0,a)$ which is also an approachable vertex with population strategy $q=(1,0)$, supporting hyperplane $H$ (dot-dashed line) intersects $X$ only in one point (the vertex).} \label{fig:ab1} \end{figure} \begin{figure} [htb] \centering \def0.4\columnwidth{0.4\columnwidth} \input{ab2.eps_tex} \caption{Regret space of parametric game with $0<b < a$: State space $X=conv\{(0,a),(-a,0),(-b,0),(0,b)\}$ (boundary in solid line), supporting hyperplane $H$ (dot-dashed line) passing through the vertex $(-b,0)$, vector field $dx(t)$ converging to $(0,b)$ left of $H$ and to $(-b,0)$ right of $H$, $conv\{(0,b),(-b,0)\}$ is set of approachable points with population strategy $q=(0,1)$, vertex $(-b,0)$ is not self-confirmed, while vertex $(0,a)$ is self-confirmed with population strategy $q=(1,0)$.} \label{fig:ab2} \end{figure} Fig.\ \ref{fig:ab2} depicts the state space $X=conv\{(0,a),(-a,0),(-b,0),(0,b)\}$ (the boundary is in solid line) where $0<b < a$. The target $y=(-b,0)$ is again in the negative orthant. Here we consider a distribution on actions $q=(0,1)$, i.e.\ everybody plays $k=2$, then $u(1,q)= (0,b)$ and $u(2,q)= (-b,0)$.
The arrows indicate the vector field $dx(t)$ for which eventually all players converge to $y=(-b,0)$. Consequently, the distribution converges to a Dirac impulse in $y$. However, there is an issue here related to the fact that the vertex $y$ is not at the equilibrium. To see this, note that the supporting hyperplane $H:=\{\xi| \, (\xi-y)^T (u(1,q)-y) = 0\}$ (dot-dashed line) partitions $X$ into two regions, which is proven to be necessary for the vertex not to be at the equilibrium. This is established in Theorem \ref{polys}. \subsection{Maximum regret and Bayesian equilibrium} Whilst $1$st-moment approachability gives interesting regret-based dynamics in population games, it does not give convergence to Nash equilibrium. In this section, however, we show how the model can be applied to an incomplete-information setting to yield convergence to Bayesian equilibrium. Suppose then that the continuous-time population game $\Gamma$ is based on a game of incomplete information; in particular, we are given a Harsanyi game $G$ (as described in \cite{Z12}) with state of the world $\omega=(s(\omega);t_1(\omega),t_2(\omega))$ chosen by Nature from a finite set $Y$ using a probability distribution $\theta$. Players then learn their own types $t_i(\omega)\in T_i$, choose actions $\beta_i$ from a common finite set $B(\omega)$, and receive symmetric payoffs $\varpi_i(\beta;\omega)$, $\beta=(\beta_1,\beta_2)$; the state of nature is $s(\omega)=(B(\omega),\varpi)$, $\varpi=(\varpi_1,\varpi_2)$. Each player $i$ then has a common finite set $\Sigma$ of ($T_i$-measurable) pure Bayesian strategies $\sigma_i:Y\rightarrow B(\omega)$, which we identify with the action set $A$ in our general framework.
Given a strategy profile $\sigma\in\Sigma^2$, let the vector payoffs be given by \emph{maximal regrets}, $$u(\sigma):=\Big(\max_{k \in\Sigma}r(k(\omega),\sigma(\omega))\Big)_{t_i\in T_i}.$$ Players are continuously rematched against new opponents to play this game $G$, and a new state of the world is chosen for each such matching; hence, each play of $G$ is one-shot in nature, as distinct from repeated games of incomplete information (see \cite{AM95} and Ch.\ 14 of \cite{MSZ13}), where the opponents and state remain constant through time. $1$st-moment approachability of the nonpositive orthant in $\Gamma$ then implies that $$\mathbb E_\theta\Big[\max_{k\in\Sigma}\varpi_i(k(\omega),\sigma_{-i}(\omega)) - \varpi_i(\sigma(\omega))\Big]\leq0.$$ But since the maximum of convex functions is convex, Jensen's inequality implies that the left-hand side is no less than $$\max_{k\in\Sigma}\mathbb E_\theta\varpi_i(k(\omega),\sigma_{-i}(\omega)) - \mathbb E_\theta\varpi_i(\sigma(\omega)),$$ which is hence also nonpositive. Thus, we have a Nash equilibrium of the Harsanyi game, which is also a Bayesian equilibrium of the incomplete-information game by Harsanyi's \cite{H68} Theorem I. For example, consider a game $G$ where each player's payoffs are randomly determined; with probability $1/2$, the Row player $R$ has the payoffs in the left-hand ``l'' matrix, and with probability $1/2$, she has the payoffs in the right-hand ``h'' matrix: $$\begin{array}{cc} \begin{tabular}{|c|c|c|} \hline $l$ & Opera & Football \\ \hline Opera & $3$ & $1$ \\ \hline Football & $0$ & $2$\\ \hline \end{tabular} $~~~~$ \begin{tabular}{|c|c|c|} \hline $h$ & Opera & Football \\ \hline Opera & $1$ & $3$ \\ \hline Football & $2$ & $0$\\ \hline \end{tabular} \end{array}$$ The Column player $C$'s payoffs are determined in a symmetric manner. Each player observes her own payoffs, but not those of her opponent.
There are thus four possible states of the world $Y=\{\omega_{ll},\omega_{lh},\omega_{hl},\omega_{hh}\}$: \begin{equation} \left\{ \begin{array}{l} \omega_{ll}=\left(s_{ll};[\frac{1}{2}\omega_{ll},\frac{1}{2}\omega_{lh}],[\frac{1}{2}\omega_{ll},\frac{1}{2}\omega_{hl}]\right) \\ \omega_{lh}=\left(s_{lh};[\frac{1}{2}\omega_{ll},\frac{1}{2}\omega_{lh}],[\frac{1}{2}\omega_{lh},\frac{1}{2}\omega_{hh}]\right) \\ \omega_{hl}=\left(s_{hl};[\frac{1}{2}\omega_{hl},\frac{1}{2}\omega_{hh}],[\frac{1}{2}\omega_{ll},\frac{1}{2}\omega_{hl}]\right) \\ \omega_{hh}=\left(s_{hh};[\frac{1}{2}\omega_{hl},\frac{1}{2}\omega_{hh}],[\frac{1}{2}\omega_{lh},\frac{1}{2}\omega_{hh}]\right), \end{array}\right. \end{equation} each occurring with probability $1/4$. Furthermore, there are two possible types of each player, $$\{R_l,R_h\}=\left\{\left[\frac{1}{2}\omega_{ll},\frac{1}{2}\omega_{lh}\right],\left[\frac{1}{2}\omega_{hl},\frac{1}{2}\omega_{hh}\right]\right\},$$ $$\{C_l,C_h\}=\left\{\left[\frac{1}{2}\omega_{ll},\frac{1}{2}\omega_{hl}\right],\left[\frac{1}{2}\omega_{lh},\frac{1}{2}\omega_{hh}\right]\right\},$$ and each player assigns probability $1/2$ to each of her opponents' possible types.
Representing this situation as a Bayesian game, the Row player's vector payoffs are: $$\begin{array}{cc} \begin{tabular}{|c|c|c|c|c|} \hline & $O_l$, $O_h$ & $O_l$, $F_h$ & $F_l$, $O_h$ & $F_l$, $F_h$ \\ \hline $O_l$, $O_h$ & $\left(\begin{array}{c} 3 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 2 \\ 2 \end{array}\right)$ & $\left(\begin{array}{c} 2 \\ 2 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 3 \end{array}\right)$ \\ \hline $O_l$, $F_h$ & $\left(\begin{array}{c} 3 \\ 2 \end{array}\right)$ & $\left(\begin{array}{c} 2 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 2 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 0 \end{array}\right)$ \\ \hline $F_l$, $O_h$ & $\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 2 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 2 \end{array}\right)$ & $\left(\begin{array}{c} 2 \\ 3 \end{array}\right)$ \\ \hline $F_l$, $F_h$ & $\left(\begin{array}{c} 0 \\ 2 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 2 \\ 0 \end{array}\right)$ \\ \hline \end{tabular} \end{array}$$ where, for example, $O_l$, $F_h$ denotes the pure Bayesian strategy $\{\sigma_R(R_l)=\{\textrm{Opera}\},\sigma_R(R_h)=\{\textrm{Football}\}\}$. The Column player's payoffs are symmetric. This game has one pure-strategy equilibrium where Row plays $O_l$, $F_h$ and Column plays $O_l$, $O_h$, and a symmetric one where Row plays $O_l$, $O_h$ and Column plays $O_l$, $F_h$. 
Now convert this game into one with maximal-regret payoffs: $$\begin{array}{cc} \begin{tabular}{|c|c|c|c|c|} \hline & $O_l$, $O_h$ & $O_l$, $F_h$ & $F_l$, $O_h$ & $F_l$, $F_h$ \\ \hline $O_l$, $O_h$ & $\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 0 \end{array}\right)$ \\ \hline $O_l$, $F_h$ & $\left(\begin{array}{c} 0 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 3 \end{array}\right)$ \\ \hline $F_l$, $O_h$ & $\left(\begin{array}{c} 3 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ 0 \end{array}\right)$ \\ \hline $F_l$, $F_h$ & $\left(\begin{array}{c} 3 \\ 0 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 1 \\ 1 \end{array}\right)$ & $\left(\begin{array}{c} 0 \\ 3 \end{array}\right)$ \\ \hline \end{tabular} \end{array}$$ For instance, if Row is playing $F_l$, $O_h$ and Column is playing $O_l$, $O_h$, Row type $R_l$'s expected payoff is $0$, whereas he could have had $3$ by playing $O_l$, $O_h$, giving a maximal regret of $3$; similarly, type $R_h$'s payoff is $1$, whereas he could have had $2$ by playing $F_l$, $F_h$, giving a maximal regret of $1$. $1$st-moment approachability of the nonpositive orthant with these maximal-regret payoffs then implies Bayesian equilibrium. 
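These tables can be reproduced programmatically. The sketch below (illustrative; it hardcodes the $l$ and $h$ matrices above and the probability-$1/2$ type structure) computes Row's type-contingent expected payoffs and the maximal-regret vectors, and checks two of the entries discussed above:

```python
from itertools import product

# Row's payoff matrices in the low (l) and high (h) payoff states,
# keyed by (Row action, Column action) with "O" = Opera, "F" = Football
l = {("O", "O"): 3, ("O", "F"): 1, ("F", "O"): 0, ("F", "F"): 2}
h = {("O", "O"): 1, ("O", "F"): 3, ("F", "O"): 2, ("F", "F"): 0}
strategies = list(product("OF", repeat=2))  # (action of low type, action of high type)

def payoffs(sr, sc):
    # Each Row type faces Column's low/high type with probability 1/2
    pl = 0.5 * (l[(sr[0], sc[0])] + l[(sr[0], sc[1])])
    ph = 0.5 * (h[(sr[1], sc[0])] + h[(sr[1], sc[1])])
    return (pl, ph)

def max_regret(sr, sc):
    # per-type maximal regret against Column's Bayesian strategy sc
    pl, ph = payoffs(sr, sc)
    rl = max(payoffs(k, sc)[0] for k in strategies) - pl
    rh = max(payoffs(k, sc)[1] for k in strategies) - ph
    return (rl, rh)

# entry (F_l, O_h) vs (O_l, O_h) from the text: regrets (3, 1)
assert max_regret(("F", "O"), ("O", "O")) == (3.0, 1.0)
# entry (O_l, O_h) vs (O_l, O_h) of the payoff table: (3, 1)
assert payoffs(("O", "O"), ("O", "O")) == (3.0, 1.0)
```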
In this respect, from Theorem \ref{app_princ} we know that, for instance, for any pure strategy $q$ we have \begin{equation} \mathcal T(q)= \left\{\begin{array}{lll} \{y\mid\,y \in conv( (0,1), (0,0), (3,1), (3,0))\}, & q=(1,0,0,0),\\ \{y\mid\,y \in conv( (0,0), (0,1), (1,0), (1,1))\}, & q=(0,1,0,0),\\ \{y\mid\,y \in conv( (0,0), (0,1), (1,0), (1,1))\}, & q=(0,0,1,0),\\ \{y\mid\,y \in conv( (1,0), (1,3), (0,0), (0,3))\}, & q=(0,0,0,1). \end{array}\right. \end{equation} This means that for any pure strategy $q$ the origin $(0,0)$ is reachable, and in particular the corresponding strategy is \begin{equation}\sigma(x)=\left\{\begin{array}{lll} a_i=2 & \mbox{for all $x$}, & q=(1,0,0,0),\\ a_i=1 & \mbox{for all $x$}, & q=(0,1,0,0),\\ a_i=1 & \mbox{for all $x$}, & q=(0,0,1,0),\\ a_i=3 & \mbox{for all $x$}, & q=(0,0,0,1).\end{array}\right.\end{equation} However, note that none of the above strategies corresponds to a self-confirmed equilibrium according to Theorem \ref{polys}. Indeed, let us take for instance the first strategy, $a_i=2$ for all $x$ when $q=(1,0,0,0)$. But $a_i=2$ for all $x$ implies $R_2 = X$ and $R_1 = R_3 = R_4= \emptyset$, which implies in turn $q=(0,1,0,0)$, and this contradicts the assumption $q=(1,0,0,0)$. We can repeat the same reasoning for any other strategy. \section{Conclusion}\label{sec:conclusions} This paper has combined approachability theory, evolutionary games, and mean-field games in a unified framework. The game studied has a vector payoff and a large number of players, and admits a classical mean-field game representation involving two coupled PDEs, the \emph{Hamilton-Jacobi-Bellman equation} and the \emph{advection equation}. We have highlighted multiple contributions. First, we coin the notion of \emph{1st-moment approachability} and analyze the corresponding convergence conditions. Second, we use the mean-field game to introduce the \emph{self-confirmed equilibrium}.
Third, we discuss the existence, non-uniqueness, and stability of equilibria as fixed points of the two PDEs. Future work involves the stochastic analysis of the same game in the presence of an additional Brownian motion in the dynamics; this would capture uncertainty or model misspecification. In a different direction, we are interested in extending the study to the case where each player can adopt a mixed strategy, which would require a new definition of the density distribution on the space of mixed strategies; so far, the density distribution is defined on the space of pure strategies. A third development will be a further analysis of the connections with the Bayesian approach. \bibliographystyle{plain}
He who coordinates his thoughts with his actions controls his own destiny. Experience is the treasure of any person. Love is not a feeling of attraction between each other, but an open devotion between the two. Do not be concerned with how others treat you. Be only concerned with how you treat others. Life, wherever it leads, will always be the same; it begs for the best of you. To experience peace ~ stop disturbing it!
"Space weather" may sound like a contradiction. How can there be weather in the vacuum of space? Yet space weather, which refers to changing conditions in space, is an active field of research and can have profound effects on Earth. We are all familiar with the ups and downs of weather on Earth, and how powerful storms can be devastating for people and vegetation. Although we are separated from the Sun by a large distance as well as by the vacuum of space, we now understand that great outbursts on the Sun (solar storms, in effect) can cause changes in the atmosphere and magnetic field of Earth, sometimes even causing serious problems on the ground. In this chapter, we will explore the nature of the Sun's outer layers, the changing conditions and activity there, and the ways that the Sun affects Earth. By studying the Sun, we also learn much that helps us understand stars in general. The Sun is, in astronomical terms, a rather ordinary star—not unusually hot or cold, old or young, large or small. Indeed, we are lucky that the Sun is typical. Just as studies of Earth help us understand observations of the more distant planets, so too does the Sun serve as a guide to astronomers in interpreting the messages contained in the light we receive from distant stars. As you will learn, the Sun is dynamic, continuously undergoing change, balancing the forces of nature to keep itself in equilibrium. In this chapter, we describe the components of the Sun, how it changes with time, and how those changes affect Earth.
package com.cloud.consoleproxy.vnc.packet.server;

public abstract class AbstractRect implements Rect {
    protected final int x;
    protected final int y;
    protected final int width;
    protected final int height;

    public AbstractRect(int x, int y, int width, int height) {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
    }

    @Override
    public int getX() {
        return x;
    }

    @Override
    public int getY() {
        return y;
    }

    @Override
    public int getWidth() {
        return width;
    }

    @Override
    public int getHeight() {
        return height;
    }
}
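The base class above is a plain immutable holder for a rectangle's geometry; concrete packet classes extend it with the decoded data. A self-contained sketch of the pattern is below: the `Rect` interface is restated minimally, the base class is mirrored as `BaseRect`, and `SolidColorRect` is a hypothetical subclass invented for illustration (real CloudStack subclasses carry pixel data decoded from the VNC stream).

```java
// Minimal restatement of the pattern: an interface, the abstract geometry
// holder, and a hypothetical concrete subclass. SolidColorRect is invented
// for illustration only.
interface Rect {
    int getX();
    int getY();
    int getWidth();
    int getHeight();
}

abstract class BaseRect implements Rect {
    protected final int x, y, width, height;

    BaseRect(int x, int y, int width, int height) {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
    }

    public int getX() { return x; }
    public int getY() { return y; }
    public int getWidth() { return width; }
    public int getHeight() { return height; }
}

// Hypothetical subclass: adds one payload field on top of the geometry.
class SolidColorRect extends BaseRect {
    private final int rgb;

    SolidColorRect(int x, int y, int width, int height, int rgb) {
        super(x, y, width, height);
        this.rgb = rgb;
    }

    int getRgb() { return rgb; }
}
```

Keeping the fields `final` and exposing only getters makes each rect safe to hand across threads once constructed, which suits a packet-decoding pipeline.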
Q: Sort with one element at the end I have a list of objects and I want to sort it alphabetically by an attribute. But I want to add an exception rule if this attribute matches a specific string. For example: public class Car { String name; } List<Car> cars = asList( new Car("Unassigned"), new Car("Nissan"), new Car("Yamaha"), new Car("Honda")); List<Car> sortedCars = cars .stream() .sorted(Comparator.comparing(Car::getName)) .collect(Collectors.toList()); If car.name == "Unassigned" then this car should remain at the end of the list and the result would be: [Car<Honda>, Car<Nissan>, Car<Yamaha>, Car<Unassigned>] A: List<Car> sortedCars = cars .stream() .sorted(Comparator.comparing( Car::getName, Comparator.comparing((String x) -> x.equals("Unassigned")) .thenComparing(Comparator.naturalOrder()))) .collect(Collectors.toList()); There are a lot of things going on here. First I am using Comparator.comparing(Function, Comparator); then (String x) -> x.equals("Unassigned") which actually compares a Boolean (that is Comparable); then the fact that (String x) is used - as this type witness is used to correctly infer the types... A: The most direct and easy-to-read solution is probably to write a custom comparator that implements your sorting logic. You can still use the Comparator.comparing method to make it a bit prettier, though: public static final String UNASSIGNED = "Unassigned"; List<Car> cars = List.of( new Car("Unassigned"), new Car("Nissan"), new Car("Yamaha"), new Car("Honda")); List<Car> sortedCars = cars.stream() .sorted(Comparator.comparing(Car::getName, (name1, name2) -> { if (name1.equals(name2)) return 0; if (name1.equals(UNASSIGNED)) return 1; if (name2.equals(UNASSIGNED)) return -1; return name1.compareTo(name2); })) .collect(toList()); It is possible to extract the "at-the-end" functionality to a separate comparator combinator method. 
Like this: List<Car> sortedCars = cars.stream() .sorted(Comparator.comparing(Car::getName, withValueAtEnd(UNASSIGNED))) .collect(toList()); public static <T extends Comparable<T>> Comparator<T> withValueAtEnd(T atEnd) { return withValueAtEnd(atEnd, Comparator.naturalOrder()); } public static <T> Comparator<T> withValueAtEnd(T atEnd, Comparator<T> c) { return (a, b) -> { if (a.equals(atEnd)) return b.equals(atEnd) ? 0 : 1; if (b.equals(atEnd)) return -1; return c.compare(a, b); }; } Also, it's good style to use a named constant for special values like your "Unassigned". Note that if you don't need to keep the unsorted cars list, you can sort that list in place instead of using a stream: cars.sort(UNASSIGNED_COMPARATOR); A: One way to possibly do that would be using partitioning as: Map<Boolean, List<Car>> partitionedCars = cars .stream() .collect(Collectors.partitioningBy(a -> a.getName().equals("Unassigned"))); List<Car> sortedCars = Stream.concat(partitionedCars.get(Boolean.FALSE).stream() .sorted(Comparator.comparing(Car::getName)), partitionedCars.get(Boolean.TRUE).stream()) .collect(Collectors.toList()); A: You could just replace "Unassigned" with an end-of-alphabet string in your comparator. Comparator.comparing(car -> car.getName().equals("Unassigned") ? "ZZZ" : car.getName()) A: Alternatively, just remove all instances of "Unassigned" and add however many to the end of the List after. For example: int numberOfUnassigned = 0; for (Iterator<Car> iterator = sortedCars.iterator(); iterator.hasNext();) { String str = iterator.next().getName(); if (str.equals("Unassigned")) { iterator.remove(); numberOfUnassigned++; } } And then add that many "Unassigned" cars back at the end.
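To see the "keep one value at the end" comparator behave as intended, here is a self-contained sketch; Car is pared down to a bare name string, and the class and method names are just for this demo:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Demo of the "keep one value at the end" comparator from the answers above.
class CarSortDemo {
    static final String UNASSIGNED = "Unassigned";

    // Natural order, except that atEnd always sorts last; ties on atEnd
    // return 0 so the comparator contract is respected.
    static Comparator<String> withValueAtEnd(String atEnd) {
        return (a, b) -> {
            if (a.equals(atEnd)) return b.equals(atEnd) ? 0 : 1;
            if (b.equals(atEnd)) return -1;
            return a.compareTo(b);
        };
    }

    static List<String> sortNames(List<String> names) {
        List<String> copy = new ArrayList<>(names);
        copy.sort(withValueAtEnd(UNASSIGNED));
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(sortNames(Arrays.asList("Unassigned", "Nissan", "Yamaha", "Honda")));
        // [Honda, Nissan, Yamaha, Unassigned]
    }
}
```

Returning 0 when both sides equal the sentinel matters: Java's sort can throw "Comparison method violates its general contract" if a comparator is not consistent on equal elements.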
Career stagnating? You could be part of the "Abandoned Generation" James Young • 01-03-2017 The 2008 financial crisis was years in the making. Poor regulation and rampant greed; the big banks and the white-collar elite were riding the crest of a wave. A wave that eventually broke, leaving the financial and employment markets in turmoil. Almost a decade on the effects are still being felt, and no one is suffering more than those who graduated into the worst UK economy in living memory. They belong to the Abandoned Generation; a generation of disillusioned 25-34 year olds who feel robbed of a career. 37% of 25-34 year olds feel their career prospects are "significantly worse" than their parents' In early February, we conducted a survey across the UK, aiming to understand the level of optimism and satisfaction among the British workforce with regards to their career prospects. What we found was a dramatic peak in discontent among older Millennials. Compared to previous generations, who benefitted from stronger economies in the 80s and 90s, the UK's current 25-34 year olds are far less optimistic about their prospects. A poor start means a poor finish The first 10 years of employment are the most crucial in defining career success. During this time, young workers experience 70% of their total salary growth, gain vital experience, and cement a career path. A study by the National Bureau of Economic Research showed that a weakened economy causes employees to lose out on this fertile training ground. The report revealed that new graduates were starting their careers at smaller businesses and on lower pay than in previous decades. Those who managed to progress to senior roles and higher salaries were only able to do so by moving between roles more regularly. 
Many of these young adults are realising they have fallen victim to the economic climate. Ambitious millennials, denied development opportunities and pushed into lower skilled work, are becoming increasingly aware that they have been left behind. The Abandoned Generation not limited to the UK We extended the survey internationally, looking at the sentiment of Millennials elsewhere in Europe and further afield in the Americas. The results showed large amounts of discontent in France and Spain, compared to a more optimistic outlook in Mexico and Germany. In alignment with the findings of the NBER, our study found Millennial career optimism in each nation to be closely linked to the relative strength of the economy. In the years following the 2008 financial crash, the Spanish job market was in disarray and youth unemployment rates exceeded 50%. Having graduated into one of the harshest economic climates in recent history, 46% of Spain's young workers believe their prospects to be worse than those of their parents'. It's a similar story in France where, again, 46% of 25-34 year olds feel they have poorer career opportunities. The OECD reported in 2016 that the French economy was experiencing a concerning lack of economic growth. At the time, the unemployment rate was 10.2% – the second-highest among G7 nations – compared to just 4.3% over the border in Germany. By contrast, German respondents are notably positive. Europe's largest economy is continuing to excel with GDP growing 1.9% in 2016, and unemployment at a 25 year low. This, combined with a relatively low cost of living, means that young workers in Germany already have a high quality of life, and also foresee a prosperous future. Most optimistic of all respondents though, are millennials in Mexico, with more than 70% believing themselves to be better off than their parents' generation. 
The Mexican economy has recovered from the economic crisis that the country experienced in the early 90s, and despite talk of a border wall, net migration from Mexico to the US has actually reversed. More Mexican immigrants are now returning to their homeland to pursue a career in an improved economy, than are leaving to seek employment opportunities in the United States. Millennials should focus on gaining experience early on "Early career stages are never going to be pleasant in a poor economy, but you have to roll with the punches", says Dan Rogers, co-founder of Peakon. "While you're unlikely to foresee a path to your dream job early on, the key is to gain as much relevant experience as possible. "You're better off taking an internship at Google, for example, than a mediocre but higher paid role at an unknown company. Not only do you give yourself the opportunity to work your way up, but more importantly, you are learning from the best people in the field. "As technology continues to disrupt entire industries, the most valuable employees are those who are able to adapt and draw on other experiences. The baby boomer generation may have been able to specialise in one area and work at the same company their whole lives, but businesses these days are far less predictable. "Take the impact of self-driving car technology on the automotive sector. What was traditionally an industry that relied on the expertise of mechanical engineers, is now being driven by computer scientists. Anyone who's managed to acquire experience in both areas during their career, is suddenly extremely employable. "If you currently find yourself two or more years into a role, and you don't feel like you're learning or gaining experience, then you should consider leaving (or tell your boss to use Peakon!). If you look at lifetime career earnings, the people who move more frequently do better."
Q: Deploy project created with foundation-cli? I'm trying to figure out if there's any way to deploy this project https://github.com/nataliecardot/zeus-hosting-setup, preferably to GitHub Pages. No dist folder is created when I run foundation build, which apparently is an old issue. Is it possible for me to deploy this in any way? Or if not, any other way to get it online? A: You need to generate the static pages that will be published online, and do so in the appropriate branch. Since master includes what is necessary to generate the pages, said pages need to be stored in a gh-pages branch. See "Types of GitHub Pages sites" and its next section "Publishing sources for GitHub Pages sites". The point remains: that repository as it is, with only its master branch, would not make any page visible on its own.
Q: How do we reset the state associated with a Kafka Connect source connector? We are working with Kafka Connect 2.5. We are using the Confluent JDBC source connector (although I think this question is mostly agnostic to the connector type) and are consuming some data from an IBM DB2 database onto a topic, using 'incrementing mode' (primary keys) as unique IDs for each record. That works fine in the normal course of events; the first time the connector starts all records are consumed and placed on a topic, then, when new records are added, they are added to our topic. In our development environment, when we change connector parameters etc., we want to effectively reset the connector on-demand; i.e. have it consume data from the "beginning" of the table again. We thought that deleting the connector (using the Kafka Connect REST API) would do this - and would have the side-effect of deleting all information regarding that connector configuration from the Kafka Connect connect-* metadata topics too. However, this doesn't appear to be what happens. The metadata remains in those topics, and when we recreate/re-add the connector configuration (again using the REST API), it 'remembers' the offset it was consuming from in the table. This seems confusing and unhelpful - deleting the connector doesn't delete its state. Is there a way to more permanently wipe the connector and/or reset its consumption position, short of pulling down the whole Kafka Connect environment, which seems drastic? Ideally we'd like not to have to meddle with the internal topics directly. A: Partial answer to this question: it seems the behaviour we are seeing is to be expected: If you're using incremental ingest, what offset does Kafka Connect have stored? If you delete and recreate a connector with the same name, the offset from the previous instance will be preserved. Consider the scenario in which you create a connector. 
It successfully ingests all data up to a given ID or timestamp value in the source table, and then you delete and recreate it. The new version of the connector will get the offset from the previous version and thus only ingest newer data than that which was previously processed. You can verify this by looking at the offset.storage.topic and the values stored in it for the table in question. At least for the Confluent JDBC connector, there is a workaround to reset the pointer. Personally, I'm still confused why Kafka Connect retains state for the connector at all when it's deleted, but it seems that is by design. I would still be interested if there is a better (and supported) way to remove that state. Another related blog article: https://rmoff.net/2019/08/15/reset-kafka-connect-source-connector-offsets/
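Expanding on the workaround in the linked post: Connect keys each stored offset in the offset.storage.topic by connector name plus a source-partition map, and producing a tombstone (NULL-value) message under that exact key resets the stored position. The sketch below only assembles such a key as a string; the connector name and the `{"table": ...}` partition field are hypothetical, and the real field names depend on the connector and its version, so inspect your own connect-offsets topic and copy the key verbatim from there.

```java
// Illustrative only: build the JSON key under which Kafka Connect stores a
// source connector's offset, as seen when consuming the offsets topic.
// The "table" field is hypothetical; real partition fields vary by connector
// and version.
class OffsetKeyBuilder {
    static String offsetKey(String connectorName, String table) {
        // Observed layout: ["<connector-name>",{<source-partition map>}]
        return String.format("[\"%s\",{\"table\":\"%s\"}]", connectorName, table);
    }

    public static void main(String[] args) {
        System.out.println(offsetKey("jdbc-source-db2", "ORDERS"));
        // ["jdbc-source-db2",{"table":"ORDERS"}]
    }
}
```

Producing a NULL value under that key (for example with kafkacat, as the linked article demonstrates) clears the stored offset; simply deleting and re-adding the connector does not, which matches the behaviour described in the question.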
Q: How can I sort a 2-D array in MATLAB with respect to one column? I would like to sort a matrix according to a particular column. There is a sort function, but it sorts all columns independently. For example, if my matrix data is:
1 3
5 7
-1 4
Then the desired output (sorting by the first column) would be:
-1 4
1 3
5 7
But the output of sort(data) is:
-1 3
1 4
5 7
How can I sort this matrix by the first column?
A: I think the sortrows function is what you're looking for.
>> sortrows(data,1)
ans =
-1 4
1 3
5 7
A: An alternative to sortrows(), which can be applied to broader scenarios:
1. Save the sorting indices of the row/column you want to order by: [~,idx]=sort(data(:,1));
2. Reorder all the rows/columns according to the previously saved indices: data=data(idx,:)
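The save-indices-then-reorder idea in the second answer is portable beyond MATLAB. As a hedged illustration, here is the same row sort in Java (class and method names are mine; sorting the outer array by a key extracted from one column reorders whole rows at once):

```java
import java.util.Arrays;
import java.util.Comparator;

// Sort the rows of a 2-D array by one column, mirroring MATLAB's
// sortrows(data, col). Returns a new outer array; the input is untouched.
class SortRows {
    static double[][] byColumn(double[][] data, int col) {
        // Shallow copy: the rows themselves are shared, only their order changes.
        double[][] sorted = data.clone();
        Arrays.sort(sorted, Comparator.comparingDouble(row -> row[col]));
        return sorted;
    }

    public static void main(String[] args) {
        double[][] data = {{1, 3}, {5, 7}, {-1, 4}};
        System.out.println(Arrays.deepToString(byColumn(data, 0)));
        // [[-1.0, 4.0], [1.0, 3.0], [5.0, 7.0]]
    }
}
```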
Location: AYANA Midplaza JAKARTA is located in the Golden Triangle neighborhood of Jakarta, close to General Soedirman, National Monument, and Bung Karno Stadium. Nearby points of interest also include Jakarta Cathedral and Plaza Semanggi. Hotel Features: Dining options at AYANA Midplaza JAKARTA include a restaurant and a bar/lounge. Room service is available 24 hours a day. Recreational amenities include an outdoor pool, a health club, a spa tub, a sauna, and a fitness facility. The property's full-service health spa has massage/treatment rooms and beauty services. This 5-star property has a business center and offers small meeting rooms, secretarial services, and limo/town car service. Wireless Internet access is available in public areas. This Jakarta property has event space consisting of banquet facilities, conference/meeting rooms, and exhibit space. The property offers an airport shuttle (surcharge). Business services, wedding services, and tour assistance are available. Guest parking is available for a surcharge. Additional property amenities include a coffee shop/café, valet parking, and a concierge desk. Guestrooms: There are 321 guestrooms at AYANA Midplaza JAKARTA. Coffee/tea makers and minibars are offered. Bathrooms feature shower/tub combinations, bathrobes, scales, and complimentary toiletries. In addition to desks, guestrooms offer multi-line phones with voice mail. Televisions have satellite channels. Air-conditioned rooms also include electronic/magnetic keys and irons/ironing boards. A turndown service is available nightly, housekeeping is offered, and guests may request wake-up calls. Cribs (infant beds) are available on request.
Q: Clickable Image Button I'm trying to make an image clickable so that I can use it as a button to download a PDF although I feel like I'm over-thinking this and confusing myself. An example of the code I've used: <div id="name"><a href="file.pdf"></a></div> The div id is then used with CSS to display the image as I also wanted a hover effect so the user has some sort of feedback when on the button. #name { background-image: url('standardimage.jpg'); height: 51px; width: 285px; margin: auto; margin-bottom: 5px; } #name:hover { background-image: url('hoverimage.jpg'); height: 51px; width: 285px; margin: auto; margin-bottom: 5px; cursor: pointer; -o-transition: .5s; -ms-transition: .5s; -moz-transition: .5s; -webkit-transition: .5s; } Any help would be appreciated thank you! A: So the problem you are facing is that the link (the <a> tag) is the actual button, but that one has no size because it's kind of empty. See this code snippet. The <a> tag has a red border all around but there is nothing which fills it up ... 
#name { background-image: url(http://lorempixel.com/400/200/sports/1/); height: 51px; width: 285px; margin: auto; margin-bottom: 5px; } #name:hover { background-image: url(http://lorempixel.com/400/200/sports/2/); height: 51px; width: 285px; margin: auto; margin-bottom: 5px; cursor: pointer; -o-transition: .5s; -ms-transition: .5s; -moz-transition: .5s; -webkit-transition: .5s; } #name a { border:solid 1px red; background-color: orange; z-index: 999; } <div id="name"><a href="#/path/to/file.pdf"></a></div> So if you set all those styles you had to the <a> tag and add display: inline-block; then it will work see here: #name a { display: inline-block; /* add this line */ background-image: url(http://lorempixel.com/400/200/sports/1/); height: 51px; width: 285px; margin: auto; margin-bottom: 5px; } #name a:hover { background-image: url(http://lorempixel.com/400/200/sports/2/); height: 51px; width: 285px; margin: auto; margin-bottom: 5px; cursor: pointer; -o-transition: .5s; -ms-transition: .5s; -moz-transition: .5s; -webkit-transition: .5s; } #name a { border:solid 1px red; background-color: orange; z-index: 999; } <div id="name"><a href="#/path/to/file.pdf"></a></div>
# FHSST Physics/Heat and Properties of Matter/Ideal Gasses

Heat and Properties of Matter — The Free High School Science Texts: A Textbook for High School Students Studying Physics

# Ideal gases

## Equation of state

An ideal gas has the following equation of state:

pV = νRT

where p is pressure, V is volume, ν = m/μ is the number of moles observed, and R = N_A·k_B is the universal gas constant. The gas constant has a value of 8.314 J/(mol·K), or 0.0821 (L·atm)/(mol·K).

## Pressure of a gas

Let us consider n moles of an ideal gas placed inside a cubical vessel of side l. Let the mass of each molecule be m, and let it move with velocity c₁.
Batesville got behind early and made a valiant attempt to regroup and catch back up, but ran out of time, losing 11-6 to Franklin County. Nolan Williams got the start on the mound, striking out 8 through 4 innings. Cory Ralston came in for 2 innings of relief. Batesville managed 8 hits on the day. Ben Obendorf led the team with 2 hits; Nolan Williams, Cory Ralston (2B), Jonathon Hoff, Tyler Schaeffer, Justin Heiser and Owen Fitzpatrick each had 1 hit.
Photograph by Bob Daemmrich; illustration by Miles Donovan 2015: The Best and Worst Legislators – Dennis Bonnen THE BEST: Representative Dennis Bonnen This article was originally posted on Texas Monthly. When he was first elected to the Texas House, at age 24, Dennis Bonnen was often dismissed as a hothead. Twenty years later, his intelligence and experience have helped him become the chairman of Ways and Means and one of the chamber's most effective leaders. He began this session by leading the House's border security effort and produced a plan that passed with overwhelming bipartisan support. But when it came to the biggest clash of the session, Bonnen's wits and skill were augmented by his street-fighting background. The brawl was about taxes. While the House was focused on the budget, the Senate had prioritized tax relief and emerged with a $4.6 billion proposal, which was profoundly flawed but put the House in a tricky situation: "tax cuts" sound good in press releases. Bonnen had no way to anticipate such a muddled scheme. But he dealt with it like a ninja. In addition to franchise tax cuts, he came up with a proposal to cut the sales tax, an idea that came out of the blue, giving Bonnen a chance to explain its advantages over the Senate's "property tax relief." On the merits, his proposal was better than the Senate's. And at $4.9 billion, it was bigger too. Dan Patrick was not amused, and as the most powerful statewide official in Texas, he was able to extract some of his demands. But he didn't get all of them, for two simple reasons: Bonnen had outfoxed the Senate, and the House stood behind its leading hothead.
Rooderkerk, R.P., van Heerde, H.J., & Bijmolt, T.H.A. (2011). Incorporating context effects into a choice model. Journal of Marketing Research, 48(4), 767-780. The behavioral literature provides ample evidence that consumer preferences are partly driven by the context provided by the set of alternatives. Three important context effects are the compromise, attraction, and similarity effects. Because these context effects affect choices in a systematic and predictable way, it should be possible to incorporate them in a choice model. However, the literature does not offer such a choice model. This study fills this gap by proposing a discrete-choice model that decomposes a product's utility into a context-free partworth utility and a context-dependent component capturing all three context effects. Model estimation results on choice-based conjoint data involving digital cameras provide convincing statistical evidence for context effects. The estimated context effects are consistent with the predictions from the behavioral literature, and accounting for context effects leads to better predictions both in and out of sample. To illustrate the benefit from incorporating context effects in a choice model, the authors discuss how firms could utilize the context sensitivity of consumers to design more profitable product lines.
Intangible assets are rights to the results of intellectual activity, which usually have no physical form, for example copyrights, licenses, patents, or the excess of an enterprise's market price over its book value (goodwill). In financial accounting and reporting, the value of such rights is included in the assets section.

Definition

An intangible asset is a non-monetary asset that has no physical form and can be identified. An acquired or received intangible asset is recognized if it is probable that the entity will obtain future economic benefits associated with its use and if its value can be measured reliably. Intangible assets are ownership rights to the results of intellectual activity, including industrial property, as well as other similar rights recognized as objects of ownership (intellectual property), and rights to use property and the property rights of a taxpayer in the manner prescribed by law, including rights, acquired in the manner prescribed by law, to use natural resources, property, and property rights. An asset is identifiable if: a) it is separable, that is, it can be separated or divided from the entity and sold, transferred, licensed, rented, or exchanged, either individually or together with a related contract, identifiable asset, or liability, regardless of whether the entity intends to do so; or b) it arises from contractual or other legal rights, regardless of whether those rights can be transferred or separated from the entity or from other rights and obligations. 
The following are not recognized as intangible assets but are charged to the expenses of the reporting period in which they were incurred: expenditures on research; on the training and retraining of personnel; on advertising and promoting products on the market; on the creation, reorganization, or relocation of an enterprise or a part of it; on raising the business reputation of an enterprise; and the cost of publications and expenditures on creating trademarks (trade marks). Acquired (created) intangible assets are recognized on the enterprise's balance sheet at initial cost. The initial cost of an acquired intangible asset consists of its purchase price, customs duties, non-refundable indirect taxes, and other expenditures directly attributable to its acquisition and to bringing it to the condition in which it is fit for use. The initial cost of an intangible asset is increased by the amount of expenditures connected with improving the asset and enhancing its capabilities and useful life, where these will increase the initially expected future economic benefits. 
Groups of intangible assets

Under the accounting standards, intangible assets are accounted for object by object within the following groups:
- rights to use natural resources (the right to use subsoil and other resources of the natural environment, geological and other information about the natural environment, etc.);
- rights to use property (the right to use a land plot under land legislation, the right to use a building, the right to lease premises, etc.);
- rights to commercial designations (rights to trademarks (marks for goods and services), commercial (trade) names, etc.), except those whose acquisition costs are recognized as royalties;
- rights to objects of industrial property (rights to inventions, utility models, industrial designs, plant varieties, animal breeds, layouts (topographies) of integrated circuits, trade secrets, including know-how, protection against unfair competition, etc.), except those whose acquisition costs are recognized as royalties;
- copyright and related rights (rights to literary, artistic, and musical works, computer programs, programs for electronic computing machines, data compilations (databases), performances, phonograms, videograms, broadcasts (programs) of broadcasting organizations, etc.), except those whose acquisition costs are recognized as royalties;
- other intangible assets (the right to conduct an activity, to use economic and other privileges, etc.).

On the balance sheet, intangible assets are reported among non-current assets at their residual value, which is defined as the difference between initial cost and accumulated amortization.
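The residual-value arithmetic described above (initial cost minus accumulated amortization) can be sketched as follows. This is an illustration only: the class and method names are mine, and straight-line amortization over the useful life is an assumption, not something the text prescribes.

```java
// Illustrative sketch: residual (carrying) value of an intangible asset,
// computed as initial cost minus accumulated amortization.
// Straight-line amortization is an assumption made here for the example.
public class IntangibleAsset {
    private final double initialCost;       // purchase price + duties + non-refundable taxes + preparation costs
    private final double annualAmortization;

    public IntangibleAsset(double initialCost, int usefulLifeYears) {
        this.initialCost = initialCost;
        this.annualAmortization = initialCost / usefulLifeYears; // straight-line
    }

    /** Residual value after the given number of full years of use (never below zero). */
    public double residualValue(int yearsInUse) {
        double accumulated = Math.min(initialCost, annualAmortization * yearsInUse);
        return initialCost - accumulated;
    }
}
```

For example, an asset recorded at 10,000 and amortized over five years carries a residual value of 6,000 after two years of use.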
Intangible assets are legally recognized non-current assets of an enterprise, in the form of various rights, that have a designated purpose and a real value and are capable of bringing their owner (user) profit or other benefit.

Valuation of an intangible asset (intellectual property rights)

Under National Standard No. 4, "Valuation of Intellectual Property Rights," intellectual property rights in Ukraine are valued using the following approaches:

Cost approach. The cost approach to valuing intellectual property rights is based on determining the cost of the expenditures needed to reproduce or replace the subject of valuation. It is applied to determine the residual replacement (reproduction) value of intellectual property rights by deducting the amount of depreciation from the reproduction (replacement) cost. The reproduction value of intellectual property rights is determined by the direct-reproduction method, and the replacement value by the replacement method.

Income approach. The income approach to valuing intellectual property rights is based on valuation procedures that convert expected income into the value of the subject of valuation. It is applied where it is possible to determine the amount of income that the legal or natural person holding the rights receives, or could receive, from their use. The principal income-approach methods used to value intellectual property rights are indirect capitalization (discounting of cash flow) and direct capitalization of income.

Comparative approach. The comparative approach to valuing intellectual property rights is applied when there is a sufficient amount of reliable information on market prices for similar objects and on the terms of contracts for disposing of property rights in such objects.
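The two income-approach procedures named above, direct capitalization and discounting of cash flows, can be sketched numerically. This is a toy illustration; the class, method names, and example figures are mine, not taken from the standard.

```java
// Toy illustration of the income approach: direct capitalization of a
// stabilized annual income, and discounting of an expected cash-flow stream.
public class IncomeApproach {
    /** Direct capitalization: value = stabilized annual income / capitalization rate. */
    public static double directCapitalization(double annualIncome, double capRate) {
        return annualIncome / capRate;
    }

    /** Indirect capitalization: present value of expected cash flows at discount rate r. */
    public static double discountedCashFlow(double[] cashFlows, double r) {
        double value = 0.0;
        for (int t = 0; t < cashFlows.length; t++) {
            // cashFlows[t] is assumed received at the end of year t + 1
            value += cashFlows[t] / Math.pow(1.0 + r, t + 1);
        }
        return value;
    }
}
```

For instance, a stabilized annual income of 200 capitalized at 25% gives a value of 800, while a single expected cash flow of 110 one year out, discounted at 10%, is worth 100 today.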
When the comparative approach is applied to value intellectual property rights, the similarity of objects is determined taking into account their type; their industry (field) of application; and their economic, functional, and other characteristics. The set of comparison elements is formed from the factors that affect the value of intellectual property rights. Such factors include, in particular: the existence of legal protection for the intellectual property rights; the financing terms of contracts whose subject is intellectual property rights; the industry or field in which the underlying intellectual property object can be used; the functional, consumer, economic, and other characteristics of that object; its degree of novelty; its remaining useful life; and its suitability for industrial (commercial) use.

See also
- Legislative terminology
- Valuation of a trademark (trade mark) in Ukraine

Notes

Categories: Accounting and auditing; Intellectual property law; Terms of Ukrainian legislation; Economic terminology
2003AJ....125..593S - Astron. J., 125, 593-609 (2003/February)

Star formation in Sculptor group dwarf irregular galaxies and the nature of "transition" galaxies.

SKILLMAN E.D., COTE S. and MILLER B.W.

Abstract (from CDS):

We present new Hα narrowband imaging of the H II regions in eight Sculptor group dwarf irregular (dI) galaxies. The Hα luminosities of the detected H II regions range from some of the faintest detected in extragalactic H II regions (~10^35 ergs/s in SC 24) to some of the most luminous (~10^40 ergs/s in NGC 625). The total Hα luminosities are converted into current star formation rates (SFRs). Comparing the Sculptor group dI's to the Local Group dI's, we find that the Sculptor group dI's have, on average, lower values of SFR when normalized to either galaxy luminosity or gas mass (although there is considerable overlap between the two samples). The range for both the Sculptor group and Local Group samples is large when compared with that seen for the sample of gas-rich, quiescent, low surface brightness (LSB) dI's from van Zee et al. (published in 1997) and the sample of isolated dI's from van Zee (from 2000 and 2001). This is probably best understood as a selection effect since the nearby group samples have a much larger fraction of extremely low luminosity galaxies and the smaller galaxies are much more liable to large relative variations in current SFRs. The Sculptor group and LSB samples are very similar with regard to mean values of both τ_gas and τ_form, and the Local Group and isolated dI samples are also similar to each other in these two quantities. Currently, the Sculptor group lacks dI galaxies with elevated normalized current SFRs as high as the Local Group dI's IC 10 and GR 8. The properties of "transition" (dSph/dIrr) galaxies in Sculptor and the Local Group are also compared and found to be similar. The transition galaxies are typically among the lowest luminosities of the gas-rich dwarf galaxies. Relative to the dwarf irregular galaxies, the transition galaxies are found preferentially nearer to spiral galaxies and are found nearer to the center of the mass distribution in the local cloud. While most of these systems are consistent with normal dI galaxies, exhibiting temporarily interrupted star formation, the observed density-morphology relationship (which is weaker than that observed for the dwarf spheroidal galaxies) indicates that environmental processes such as "tidal stirring" may play a role in causing their lower SFRs.

Journal keyword(s): Galaxies: Dwarf - Galaxies: Evolution - galaxies: individual (NGC 625) - Galaxies: Irregular - ISM: H II Regions

Nomenclature: Figs, Table 2: [SCM2003] ESO 347-17 NN (Nos 1-11), [SCM2003] ESO 348-9 N (Nos 1-4), [SCM2003] SC 18 N (No. 1), [SCM2003] NGC 59 N (Nos 1-4), [SCM2003] ESO 473-24 N (Nos 1-4), [SCM2003] SC 24 N (No. 1), [SCM2003] AM 0106-382 N (Nos 1-5), [SCM2003] NGC 625 NN (Nos 1-23).

CDS comments: Galaxies SC are [CFC97] Sc
# Math Help - Simple integration

1. ## Simple integration

Hiya,

The question I'm stuck on is:

Find the integral.

2/3x^4 dx

Could anyone help me with a step by step solution? Thanks!

2. $\frac{2}{3}x^{-4}$ will integrate to some fraction of

$\frac{2}{3}x^{-3} + c$ ... what fraction, though?

If you meant

$\frac{2}{3}x^{4}$ then some fraction of

$\frac{2}{3}x^{5} + c$ ... what fraction, though?

3. Sorry I meant

2/(3x^4) dx

4. Yeah, that's the first one above...

$\frac{2}{3x^4} = \frac{2}{3}\ \frac{1}{x^4} = \frac{2}{3}x^{-4}$
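For completeness, the fraction that the replies above leave as an exercise follows from the power rule $\int x^n\,dx = \frac{x^{n+1}}{n+1} + C$ (for $n \neq -1$):

```latex
\int \frac{2}{3x^4}\,dx
  = \frac{2}{3}\int x^{-4}\,dx
  = \frac{2}{3}\cdot\frac{x^{-3}}{-3} + C
  = -\frac{2}{9x^{3}} + C
```

That is, the answer is $-\tfrac{1}{3}$ times the hinted expression $\tfrac{2}{3}x^{-3}$.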
.class public Landroid/view/animation/DecelerateInterpolator;
.super Landroid/view/animation/BaseInterpolator;
.source "DecelerateInterpolator.java"

# interfaces
.implements Lcom/android/internal/view/animation/NativeInterpolatorFactory;

# annotations
.annotation runtime Lcom/android/internal/view/animation/HasNativeInterpolator;
.end annotation

# instance fields
.field private mFactor:F

# direct methods
.method public constructor <init>()V
    .locals 1

    .prologue
    .line 37
    invoke-direct {p0}, Landroid/view/animation/BaseInterpolator;-><init>()V

    .line 79
    const/high16 v0, 0x3f800000    # 1.0f
    iput v0, p0, Landroid/view/animation/DecelerateInterpolator;->mFactor:F

    .line 37
    return-void
.end method

.method public constructor <init>(F)V
    .locals 1
    .param p1, "factor"    # F

    .prologue
    .line 47
    invoke-direct {p0}, Landroid/view/animation/BaseInterpolator;-><init>()V

    .line 79
    const/high16 v0, 0x3f800000    # 1.0f
    iput v0, p0, Landroid/view/animation/DecelerateInterpolator;->mFactor:F

    .line 48
    iput p1, p0, Landroid/view/animation/DecelerateInterpolator;->mFactor:F

    .line 47
    return-void
.end method

.method public constructor <init>(Landroid/content/Context;Landroid/util/AttributeSet;)V
    .locals 2
    .param p1, "context"    # Landroid/content/Context;
    .param p2, "attrs"    # Landroid/util/AttributeSet;

    .prologue
    .line 52
    invoke-virtual {p1}, Landroid/content/Context;->getResources()Landroid/content/res/Resources;
    move-result-object v0
    invoke-virtual {p1}, Landroid/content/Context;->getTheme()Landroid/content/res/Resources$Theme;
    move-result-object v1
    invoke-direct {p0, v0, v1, p2}, Landroid/view/animation/DecelerateInterpolator;-><init>(Landroid/content/res/Resources;Landroid/content/res/Resources$Theme;Landroid/util/AttributeSet;)V

    .line 51
    return-void
.end method

.method public constructor <init>(Landroid/content/res/Resources;Landroid/content/res/Resources$Theme;Landroid/util/AttributeSet;)V
    .locals 4
    .param p1, "res"    # Landroid/content/res/Resources;
    .param p2, "theme"    # Landroid/content/res/Resources$Theme;
    .param p3, "attrs"    # Landroid/util/AttributeSet;

    .prologue
    const/high16 v3, 0x3f800000    # 1.0f
    const/4 v2, 0x0

    .line 56
    invoke-direct {p0}, Landroid/view/animation/BaseInterpolator;-><init>()V

    .line 79
    iput v3, p0, Landroid/view/animation/DecelerateInterpolator;->mFactor:F

    .line 58
    if-eqz p2, :cond_0

    .line 59
    sget-object v1, Lcom/android/internal/R$styleable;->DecelerateInterpolator:[I
    invoke-virtual {p2, p3, v1, v2, v2}, Landroid/content/res/Resources$Theme;->obtainStyledAttributes(Landroid/util/AttributeSet;[III)Landroid/content/res/TypedArray;
    move-result-object v0

    .line 64
    .local v0, "a":Landroid/content/res/TypedArray;
    :goto_0
    invoke-virtual {v0, v2, v3}, Landroid/content/res/TypedArray;->getFloat(IF)F
    move-result v1
    iput v1, p0, Landroid/view/animation/DecelerateInterpolator;->mFactor:F

    .line 65
    invoke-virtual {v0}, Landroid/content/res/TypedArray;->getChangingConfigurations()I
    move-result v1
    invoke-virtual {p0, v1}, Landroid/view/animation/DecelerateInterpolator;->setChangingConfiguration(I)V

    .line 66
    invoke-virtual {v0}, Landroid/content/res/TypedArray;->recycle()V

    .line 56
    return-void

    .line 61
    .end local v0    # "a":Landroid/content/res/TypedArray;
    :cond_0
    sget-object v1, Lcom/android/internal/R$styleable;->DecelerateInterpolator:[I
    invoke-virtual {p1, p3, v1}, Landroid/content/res/Resources;->obtainAttributes(Landroid/util/AttributeSet;[I)Landroid/content/res/TypedArray;
    move-result-object v0
    .restart local v0    # "a":Landroid/content/res/TypedArray;
    goto :goto_0
.end method

# virtual methods
.method public createNativeInterpolator()J
    .locals 2

    .prologue
    .line 84
    iget v0, p0, Landroid/view/animation/DecelerateInterpolator;->mFactor:F
    invoke-static {v0}, Lcom/android/internal/view/animation/NativeInterpolatorFactoryHelper;->createDecelerateInterpolator(F)J
    move-result-wide v0
    return-wide v0
.end method

.method public getInterpolation(F)F
    .locals 6
    .param p1, "input"    # F

    .prologue
    const/high16 v3, 0x3f800000    # 1.0f

    .line 71
    iget v1, p0, Landroid/view/animation/DecelerateInterpolator;->mFactor:F
    cmpl-float v1, v1, v3
    if-nez v1, :cond_0

    .line 72
    sub-float v1, v3, p1
    sub-float v2, v3, p1
    mul-float/2addr v1, v2
    sub-float v0, v3, v1

    .line 76
    .local v0, "result":F
    :goto_0
    return v0

    .line 74
    .end local v0    # "result":F
    :cond_0
    sub-float v1, v3, p1
    float-to-double v2, v1
    iget v1, p0, Landroid/view/animation/DecelerateInterpolator;->mFactor:F
    const/high16 v4, 0x40000000    # 2.0f
    mul-float/2addr v1, v4
    float-to-double v4, v1
    invoke-static {v2, v3, v4, v5}, Ljava/lang/Math;->pow(DD)D
    move-result-wide v2
    const-wide/high16 v4, 0x3ff0000000000000L    # 1.0
    sub-double v2, v4, v2
    double-to-float v0, v2
    .restart local v0    # "result":F
    goto :goto_0
.end method
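Read back into Java, the `getInterpolation(F)F` bytecode above implements the curve 1 - (1 - x)^(2·factor), with a fast path for factor == 1. The following is a reconstruction from the smali (the class name `DecelerateSketch` is mine, not the AOSP source verbatim):

```java
// Java reconstruction of DecelerateInterpolator.getInterpolation from the smali above.
public class DecelerateSketch {
    private final float factor;

    public DecelerateSketch(float factor) {
        this.factor = factor;
    }

    public float getInterpolation(float input) {
        if (factor == 1.0f) {
            // Fast path taken when mFactor == 1.0f: 1 - (1 - x)^2
            return 1.0f - (1.0f - input) * (1.0f - input);
        }
        // General path: 1 - (1 - x)^(2 * factor), computed via Math.pow on doubles
        return (float) (1.0 - Math.pow(1.0f - input, 2.0f * factor));
    }
}
```

At input 0 the result is 0 and at input 1 it is 1; larger factors make the deceleration toward 1 more pronounced.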
\section{INTRODUCTION}

D+1-dimensional pure LGT's with finite periodic extension $N_t$ in the ``time'' direction describe glue systems at finite temperature $1/N_t$. They undergo a deconfinement transition, signalled by a finite expectation value of the trace of the Polyakov loop. According to the 15-year-old Svetitsky-Yaffe conjecture~\cite{sy}, the D-dimensional effective statistical model for the Polyakov loops has an action (Hamiltonian) with short-range interactions and a global symmetry group given by the center of SU(N). If both the deconfinement phase transition of the gauge model and the corresponding order-disorder phase transition of the spin system are continuous, the two models belong to the same universality class. The conjecture that the deconfinement transition of 4D SU(2) LGT belongs to the 3D Ising universality class is supported by analytical calculations and numerical estimates of its critical indices, cf.~\cite{Bielefeld}.

We tested the Svetitsky-Yaffe conjecture by comparing flows of block spin effective actions for the Polyakov loops with flows of the 3D Ising model. Approach of both flows to a single trajectory ending in a common renormalization group fixed point demonstrates universality on a fundamental level.

\section{BLOCKING POLYAKOV LOOPS}

Our procedure to generate and analyse block spin effective actions for Polyakov loops consists of the following steps.
\begin{itemize}
\item Map the SU(2) configurations $U$ living on an $N_t \times N_s^3$ lattice to Ising configurations $\sigma(U)$ on an $N_s^3$ lattice. The Ising variables are given by the signs of the Polyakov loops.
\item Block the Ising configurations with the majority rule, using cubical blocks of size $L_B$.
\item Compute the effective coupling constants using IMCRG~\cite{gupta}.
\item Generate a renormalization group flow by increasing the block size $L_B$.
\item Compare the resulting flow with that computed directly in the Ising model.
\end{itemize} More formally, the effective action for the signs of the Polyakov loops is given by \begin{equation} \exp[-H'(\mu)]= \int DU~ P(\mu,U)~\exp[-S_{\rm g}(U)] \, , \end{equation} with \begin{equation} P(\mu,U) = \prod_{x'}^{\rm (blocks)} \frac12 \left[ 1 + \mu_{x'} \; \mbox{sign} \sum_{x\in x'} \sigma_x(U) \right]. \end{equation} Here, $S_{\rm g}$ is the standard Wilson action for SU(2). In case of an even block size $L_B$ the sum of the Ising spins $\sigma_x$ inside a block $x'$ can be zero. In that case a positive (negative) $\mu_{x'}$ is selected with probability one half. Unfortunately, effective Hamiltonians contain an infinite number of coupling constants. In practical calculations one has to truncate to a finite set of interactions. We chose to include in the ansatz eight 2-point couplings and six 4-point couplings. The 2-point couplings can be labelled by specifying the relative position of the interacting spins (up to obvious symmetries): our couplings $K_1 \dots K_8$ then correspond to 001,011,111,002,012,112,022,122. The 4-point couplings $K_9 \dots K_{14}$ are defined through Fig.~1. The corresponding interaction terms in the effective Hamiltonian are denoted by $S_\alpha'$, $\alpha=1 \dots 14$. \vskip5mm \begin{center} {~ }\epsfig{file=quart.eps,height=5.0cm} \end{center} Fig. 1. Definition of the 4-point-couplings. \section{IMCRG FOR POLYAKOV LOOPS} Translating the rationale of IMCRG~\cite{gupta} to the present context means that one does {\em not} simulate standard SU(2) model but instead the system with partition function \begin{equation} \sum_{\mu} \int DU~ P(\mu,U) \, \exp\left[ - S_{\rm g}(U)\; + \; \bar H(\mu)\right] \, , \end{equation} where $\bar H(\mu)= \sum_{\alpha} \bar K_\alpha S_{\alpha}'(\mu)$ is a guess for $H'(\mu)$. Note the {\em plus} sign in front of $\bar H$. As before, $\mu$ denote the block spins, defined on blocks of size $L_B$ obtained from blocking the Polyakov loops. 
The crucial observation is now that if $\bar H=H'$, i.e., if the guess of the Hamiltonian is right, then the $\mu_{x'}$ decouple completely. This means in particular that the correlations $\langle S_{\alpha}'(\mu)\rangle$ vanish. A non-perfect guess can be improved by iteration of \begin{equation} \bar K_{\alpha} \longrightarrow \bar K_{\alpha} + n_\alpha^{-1} \langle S'_\alpha(\mu) \rangle \, , \end{equation} where $n_\alpha$ are trivial multiplicity factors. A few iterations of this correction step usually suffice to obtain good precision for the effective couplings. This method, though it seems to be restricted to Ising type models, has several merits. One of them is the following: For small enough blocks, the autocorrelations in the simulations are drastically reduced. This follows from the fact that in case of perfect compensation the blocks completely decouple and fluctuate independently. In the SU(2) IMCRG calculations discussed below this phenomenon was clearly observed~\cite{tocome}. \section{MONTE CARLO RESULTS} We started by computing the flow of the critical 3D Ising model, using two different models. The standard Ising model with nearest neighbour couplings becomes critical at $\beta=0.2216544$. A version ``$I_3$'' which includes also third (cube-diagonal) neighbour couplings has a critical point at $\beta_1 = 0.128003$ and $\beta_3=0.051201$~\cite{bloete}. We then turned to FT SU(2) with $N_t=2$. We computed the effective actions from simulations on lattices consisting of $6^3$ blocks of size $L_B \leq 6$. The simulations were performed at $\beta_{\rm g}=1.880,1.877,1.874,1.871$. Best matching with the Ising data was achieved at $\beta_{\rm g}=1.877$. The flow of $K_1$ and $K_2$ for this gauge coupling is shown in Fig.~2, together with the Ising flows. The gauge data are displayed with squares, the Ising results are shown with bars, diamonds and dotted fit lines. The fits were done with a simple power law~\cite{tocome}. 
A nice matching is observed also for the 12 couplings not shown here. Note that the block sizes of the various models have to be rescaled with respect to each other in order to yield matching. E.g., the $L_B$ from the $I_3$ model have to be rescaled by a factor of 0.59 with respect to that of the standard Ising model. Let us finally show a comparison of the flow of the nearest neighbour coupling for different gauge couplings. This is shown in Fig.~3. The dashed line, together with diamonds and crosses, gives the Ising flow. The other data (from top to bottom) correspond to $\beta_{\rm g}= 1.880$ (triangles), 1.877 (crosses), 1.874 (squares), and 1.871 (stars). The gauge block sizes are rescaled such that best matching is obtained for $\beta_{\rm g}= 1.877$. Within the given precision, also the $\beta_{\rm g}=1.874$ data could be rescaled to match the Ising flow. This is, however, clearly ruled out for $\beta_{\rm g}=1.880$.

\vskip2mm
\epsfig{file=b1.eps,height=7.5cm}
\vskip0.2cm
\epsfig{file=b2.eps,height=7.5cm}
Fig.~2. Matching of the couplings $K_1$ and $K_2$.
\vskip2mm
\vskip0.2cm
\epsfig{file=all.eps,height=7.0cm}
Fig.~3. Comparison of $K_1$ flow for different $\beta_{\rm g}$.
\vskip0.2cm

\section{CONCLUSIONS}

IMCRG works well as a method to compute the effective action of Ising-type degrees of freedom, also in non-Ising models like 4D FT SU(2) LGT. The calculations could be done on workstations. The Svetitsky-Yaffe conjecture is confirmed in a very fundamental way by observing matching of the RG trajectories with those of the Ising model. We obtained results for $N_t=1$ also, see~\cite{tocome}. Extension to $N_t$ bigger than two is expensive, because one needs larger blocks to come close enough to the fixed point.
package org.apache.camel.component.reactive.streams.support;

import org.apache.camel.CamelContext;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreamsService;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreamsServiceFactory;
import org.apache.camel.component.reactive.streams.engine.ReactiveStreamsEngineConfiguration;

public class ReactiveStreamsTestServiceFactory implements CamelReactiveStreamsServiceFactory {

    /**
     * Creates a new instance of the {@link CamelReactiveStreamsService}.
     *
     * @param context the Camel context
     * @param configuration the ReactiveStreams engine configuration
     * @return the ReactiveStreams service
     */
    @Override
    public CamelReactiveStreamsService newInstance(CamelContext context, ReactiveStreamsEngineConfiguration configuration) {
        return new ReactiveStreamsTestService("test-service");
    }
}
Scaled-down Venice film festival hopes to shake off virus gloom

Published On: September 2, 2020 09:26 AM NPT By: Reuters

Fewer Hollywood stars will grace the red carpet and there will be no fans clamouring for autographs. But for all the COVID-19 restrictions, director Alberto Barbera says the very fact that the Venice film festival is going ahead in front of live audiences this week sends a positive message.

"We think that it's time to restart for cinema," Barbera told Reuters on the eve of the festival, which runs from Sept. 2-12 and is now in its 77th year. "We need to reopen the theatres. We need to distribute new films. We need to start shooting new films again and I hope that the festival will be a sign of solidarity and encouragement for everybody involved with the film industry."

The world's oldest film festival, regarded as a showcase for Oscar contenders as awards season approaches, is the first such international event to take place since the movie world ground to a halt due to the pandemic. The world's biggest - Cannes film festival - was cancelled.

With coronavirus cases rising again in Italy and elsewhere, a strict safety protocol has been put in place. Anyone attending from outside Europe's Schengen area will have to test for COVID-19 before departure and once on the Lido, the long, narrow island in the Venetian Lagoon where the annual festival is held. Temperatures will be checked and every second seat in the cinemas will be left empty. Seats will have to be reserved online, audiences will be required to wear a face mask and fans will not be allowed near the red carpet.

The restrictions are similar to those being imposed in cinemas around the world, including Italy, where only big venues have re-opened and viewers are seated apart and encouraged to wear masks at least in the foyer area.

Among the stars expected to make the trip are Australian actress Cate Blanchett, who will lead the jury, U.S. actor Matt Dillon and Spanish director Pedro Almodovar. British actress Tilda Swinton is also expected to attend to receive a life achievement award.

The main competition lineup for the Golden Lion award for best film includes 18 titles, compared to 21 last year. Just two are U.S. studio productions: "Nomadland", a road drama by U.S.-based Chinese director Chloe Zhao starring Frances McDormand, and "The World To Come", with Casey Affleck. They will compete against works by Israel's Amos Gitai and Russia's Andrei Konchalovsky, and four Italian films, including Gianfranco Rosi's "Notturno", about the Syrian conflict.

Titles screening out of competition include "The Duke", a crime comedy from "Notting Hill" director Roger Michell starring Helen Mirren, and "Greta", a documentary portrait of Swedish climate activist Greta Thunberg.

More than 50 countries are participating - at least four titles were added at the last minute to the out of competition line-up, some of them filmed and edited in record time after the easing of lockdowns.
Hilton Head, South Carolina, May 1862. Photograph by Henry P. Moore. The black people were the property of CSA general Thomas F. Drayton. The white man is most likely a Union soldier. _for_ **HOWARD REEVES,** _ **who truly cares about those agonizing prayers of centuries**_ This book grew out of my article "The Trump of Jubilee," published in ASALH's 2007 edition of _The Woodson Review_. Library of Congress Cataloging-in-Publication Data Bolden, Tonya. Emancipation Proclamation : Lincoln and the dawn of liberty / Tonya Bolden. p. cm. Includes bibliographical references and index. ISBN 978-1-4197-0390-4 (alk. paper) eISBN 978-1-6131-2977-7 1. United States. President (1861–1865 : Lincoln). Emancipation Proclamation−Juvenile literature. 2. Lincoln, Abraham, 1809–1865−Juvenile literature. 3. Slaves−Emancipation−United States− Juvenile literature. 4. United States−Politics and government−1861–1865−Juvenile literature. I. Title. E453.B68 2012 973'7.14−dc23 2012000845 Text copyright © 2013 Tonya Bolden For illustration credits, see this page. Book design by Maria T. Middleton Published in 2013 by Abrams Books for Young Readers, an imprint of ABRAMS. All rights reserved. No portion of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without written permission from the publisher. Abrams Books for Young Readers are available at special discounts when purchased in quantity for premiums and promotions as well as fundraising or educational use. Special editions can also be created to specification. For details, contact specialsales@abramsbooks.com or the address below. 
115 West 18th Street
New York, NY 10011
www.abramsbooks.com

CONTENTS

PART I **"THE AGONIZING PRAYERS OF CENTURIES"**

PART II **"A FIT AND NECESSARY MILITARY MEASURE"**

PART III **"THE TRUMP OF JUBILEE"**

EPILOGUE
TIMELINE
GLOSSARY
NOTES
SELECTED SOURCES
ACKNOWLEDGMENTS
IMAGE CREDITS
INDEX OF SEARCHABLE TERMS

A bird's-eye view of Old Point Comfort in Hampton, Virginia (1861). The peninsula is dominated by Fort Monroe, known during the Civil War as Freedom's Fort. Lithograph by E. Sachse & Co.

* * *

**ABRAHAM LINCOLN WAS** _**perhaps the greatest figure of the nineteenth century. . . . And I love him not because he was perfect but because he was not and yet triumphed. . . .**_

_**[P]ersonally I revere him the more because up out of his contradictions and inconsistencies he fought his way to the pinnacles of earth and his fight was within as well as without.**_

—W. E. B. Du Bois, _The Crisis_ (September 1922)

**A HIGHLY SECRETIVE** _**man, easy to underestimate, whose inner musings were for the most part unknowable, Lincoln remains endlessly fascinating to school children, scholars and all those who view his life in epic terms.**_

—Larry Jordan, _Midwest Today_ (February 1993)

**THE PROBLEM IS** _**that we tend too often to read Lincoln's growth backward, as an unproblematic trajectory toward a predetermined end. This enables scholars to ignore or downplay aspects of Lincoln's beliefs with which they are uncomfortable.**_

—Eric Foner, _The Fiery Trial_ (2010)

* * *

Tremont Temple, a Baptist church in Boston (ca. 1857). Lithograph by J. H. Bufford.

PART I

* * *

**"THE AGONIZING PRAYERS OF CENTURIES"**

* * *

"WE WERE WAITING AND LISTENING AS FOR A BOLT FROM the sky . . . we were watching, as it were, by the dim light of the stars, for the dawn of a new day; we were longing for the answer to the agonizing prayers of centuries." So remembered Frederick Douglass, speaking of a "we" that included the electric lecturer Anna E. Dickinson, the riveting Reverend J.
Sella Martin, and some three thousand other anxious souls packed into Tremont Temple, a Boston church. This "we" was waiting for word that Abraham Lincoln had John Hancocked a proclamation of liberation. It was Thursday, January 1, 1863.

Blocks south of Tremont Temple, in snowy Boston's Music Hall, another crowd of abolitionists was waiting. Ralph Waldo Emerson was there. So were William Lloyd Garrison and Harriet Beecher Stowe. Absent from both great gatherings was the relentless Wendell Phillips, a lawyer by trade. He was in nearby Medford, waiting at the home of friends George and Mary Stearns.

ON NEW YEAR'S DAY, 1863, THE WAITING WE WASN'T LIMITED to the Boston area. Washington, D.C., had Henry McNeal Turner, the pastor of Israel Bethel, a church at the foot of Capitol Hill. _Waiting_. On a farm near Columbus, Ohio, there was the writer Frances E. W. Harper, ever poised to pen another poem. _Waiting_. As was Charlotte Forten, a schoolteacher on a South Carolina sea island, not far from Beaufort, where Harriet Tubman was based. _Waiting_.

Just like Sandy Cornish in Key West, Florida, a man who had bought his liberty in the 1840s but who later lost his freedom papers in a fire. Worse, one night Cornish was jumped by fiends intent on selling him back into slavery. By dint of will and brute strength, Cornish broke loose. Then, the next day in the public square, he attacked _himself_. Cornish slashed his Achilles tendons, drove a knife into a hip, and in other ways butchered his body. All to make himself useless for slavery. As he told the cowed crowd, he was willing to do worse—anything but be "a slave agin, for I was free." Now, some twenty years later, like others who stood against slavery, the scarred—but free—Sandy Cornish, about seventy, was waiting for that "dawn of a new day."

FOR THE TRUE BELIEVERS in freedom—the enslaved, the freed, and those who had always lived in liberty—it had been a very long wait indeed.
Since 1641, when Massachusetts became England's first North American colony to legalize slavery. Since 1770, when Crispus Attucks took two musket balls to the chest during the Boston Massacre. Since black patriots fought so fiercely at the Battle of Lexington and Concord, then at Bunker Hill. When the Declaration of Independence deemed it "self-evident" that "all men are created equal" and entitled to "life, liberty, and the pursuit of happiness," we waited for the birth of a slavery-free new nation. But that was not to be.

Petition for freedom to Massachusetts governor Thomas Gage, His Majesty's Council, and the House of Representatives, May 25, 1774 (page one). In this document, a great number of blacks stated that they, like all people, had a natural right to freedom. They beseeched the authorities to set them and their children at liberty.

THE U.S. CONSTITUTION AND SLAVERY

* * *

The Constitution, completed in September 1787, does not include the words _slave_ or _slavery_. The framers, a number of whom were slaveholders, instead used euphemisms. (Example: "person held to service or labor" for someone enslaved.) Parts of the Constitution that address slavery include the following three excerpts. An explanation follows each.

* * *

ARTICLE I, SECTION 2

Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons.

* * *

For direct taxation and representation in Congress, each enslaved person is to be counted as three-fifths of a person.
This allowed states with large populations of enslaved people to have more representation in Congress than if enslaved people weren't counted (as antislavery people wished) but less representation than if the enslaved were counted as whole persons (as slaveholders wanted).

* * *

ARTICLE I, SECTION 9

The Migration or Importation of such Persons as any of the States now existing shall think proper to admit, shall not be prohibited by the Congress prior to the Year one thousand eight hundred and eight, but a Tax or duty may be imposed on such Importation, not exceeding ten dollars for each Person.

* * *

Congress cannot abolish the slave trade until 1808, and a tax can be levied on the importation of human beings.

* * *

ARTICLE IV, SECTION 2

No Person held to Service or Labour in one State, under the Laws thereof, escaping into another, shall, in Consequence of any Law or Regulation therein, be discharged from such Service or Labour, but shall be delivered up on Claim of the Party to whom such Service or Labour may be due.

* * *

Escape from slavery to a state prohibiting it does not translate into legal liberty. This "fugitive slave" clause also says that those who flee bondage are to be "delivered up" to their owners.

* * *

OVER THE YEARS, WE REJOICED WHEN A NORTHERN STATE abolished the abomination. We agonized when a slave state entered the Union. As the nation grew westward, we fought slavery's spread. The Missouri Compromise of 1820 was something of a mercy: Maine entered the Union as a free state to offset Missouri's entry as a slave state. Added to that, slavery was banned from the rest of Louisiana Purchase lands (outside of Missouri) located north of the parallel 36°30'. As we waited for all of America to repent—to repudiate slavery—we wept, we raged, we prayed. Over beatings and brandings and bullwhippings. Over the rapes. Over families fractured on auction blocks. And then there was all that stolen labor.
_Free and Slave Areas After the Missouri Compromise, 1820_. The Missouri Compromise was intended to keep the peace between the North and the South. Along with allowing Maine into the Union as a free state and Missouri as a slave state, the compromise banned slavery in unorganized Louisiana Purchase land (originally 800,000-plus square miles). The U.S. bought this land from France in 1803 for fifteen million dollars. Part of the purchase became the state of Louisiana in 1812 and Arkansas Territory in 1819.

The Northwest Territory: Acquired from England in 1783, in the treaty that ended the American Revolution. Congress banned slavery in the Northwest Territory, which originally consisted of present-day Illinois, Indiana, Michigan, Ohio, Wisconsin, and part of Minnesota.

Florida Territory: Acquired from Spain in the Adams-Onís treaty signed in 1819.

_A Slave-Coffle Passing the Capitol_, from _A Popular History of the United States_ (vol. 4, 1881). Slave labor was used to build the White House and the Capitol.

* * *

**I HAVE OFTEN** _**been awakened at the dawn of day by the most heart-rending shrieks of [my Aunt Hester], whom [our master, Aaron Anthony] used to tie up to a joist, and whip upon her naked back till she was literally covered with blood. No words, no tears, no prayers, from his gory victim, seemed to move his iron heart. . . . I remember the first time I ever witnessed this horrible exhibition. I was quite a child. . . .
It was the bloodstained gate, the entrance to the hell of slavery, through which I was about to pass.**_

—Frederick Douglass, in his first autobiography, _Narrative of the Life of Frederick Douglass, An American Slave_ (1845), which featured a preface by William Lloyd Garrison and a letter of support from Wendell Phillips

**I NEVER RISE** _**to address a colored audience without feeling ashamed of my own color; ashamed of being identified with a race of men who have done you so much injustice, and who yet retain so large a portion of your brethren in servile chains.**_

—William Lloyd Garrison in June 1831, to black conventions in Boston, Philadelphia, and other northern cities

* * *

Frederick Douglass (ca. 1847–52), by Samuel J. Miller. Maryland-born Douglass liberated himself in 1838, when he was twenty. During the Civil War, Douglass and his family lived in Rochester, New York.

William Lloyd Garrison (ca. 1850), by Southworth & Hawes. Massachusetts-born Garrison was the founding editor of _The Liberator_ (1831) and president of the American Anti-Slavery Society from 1843 to 1865. Garrison once supported gradual emancipation and the colonization of blacks outside America. After a change of heart, he became a leading voice for immediate, unconditional abolition and against colonization.

So often we sought solace in freedom songs, like "The Day of Jubilee":

_Soon shall the trump of freedom_
_Resound from shore to shore;_
_Soon, taught by heavenly wisdom_,
_Man shall oppress no more:_
_But every yoke be broken_,
_Each captive soul set free—_
_And every heart shall welcome_
_The day of Jubilee_.

All the while, we waited for breakthroughs from piles of appeals on behalf of black humanity: all the books and broadsides, pamphlets and petitions, articles and ardent speeches; all the people, like Elijah Lovejoy, persecuted—flogged, bludgeoned, hanged, tarred and feathered, shot—for entreating the nation to set the captives free.
_The Mob Attacking the Warehouse of Godfrey Gilman & Co., Alton, Ill., on the Night of the 7th Nov. 1837_, from _Alton Trials_ (1838). On the night of November 7, 1837, a proslavery mob attacked Gilman's warehouse, on the banks of the Mississippi River, because it held the new printing press of white abolitionist Elijah Lovejoy. The printing press was destroyed and dumped into the river (as had been done to three of Lovejoy's previous presses). Elijah Lovejoy was shot dead while trying to defend his property. Western Anti-Slavery Society leaflet (ca. 1850). From the April 23, 1831, issue of Garrison's newspaper. Anti-abolitionist handbill, 1837. A call to disrupt an antislavery meeting—vilifying the speaker as a "tool of evil and fanaticism." Cotton banner (ca. 1840). Garrison displayed it at antislavery events—proud of his "fanaticism." Page one of a correspondence from Maria Weston Chapman to William Lloyd Garrison (ca. 1840). Chapman, the white secretary of the Boston Female Anti-Slavery Society, was reporting the results of her organization's petition drive in protest of the annexation of the Republic of Texas as a slave state. The interracial society gathered more than fifteen thousand signatures in cities and towns around Massachusetts. Texas had seceded from Mexico and become an independent republic in 1836. In 1845, it became the twenty-eighth state, entering the Union as a slave state. The masthead from the debut issue of Frederick Douglass's first newspaper. _A Slave-Hunt_ , from _Cassell's History of the United States_ (vol. 3, ca. 1880). So many days, so many nights, we awaited news of souls with the strength to self-liberate. Did he—did she—did they—meet with God's speed, or did bloodhounds pick up the scent? Of course we abhorred the Compromise of 1850's Fugitive Slave Law. This "Bloodhound Law" compelled federal marshals to be in the search-and-seizure business, and it gave them the power to commandeer citizens into slave-hunting posses. 
What's more, anyone who helped someone escape slavery could face imprisonment of up to six months and a fine of as much as one thousand dollars (about seventeen thousand dollars today). * * * **WE DO NOT** _**breathe well. There is infamy in the air.**_ —Ralph Waldo Emerson on the 1850 Fugitive Slave Law, in an 1851 address in Concord, Massachusetts * * * _The United States Senate, A.D. 1850_ (1855), by Peter F. Rothermel, engraved by Robert Whitechurch. Kentucky senator Henry Clay is depicted holding forth on the Compromise of 1850, which he engineered to keep the peace between the North and South, just as he had done thirty years earlier with the Missouri Compromise (when he was Speaker of the House). In addition to the Bloodhound Law, the Compromise of 1850 included the abolition of the slave trade (but not slavery itself) in the nation's capital. It also dealt with the contentious issue of slavery in lands acquired through the U.S.–Mexican War (1846–48). California entered the Union as a free state. The rest was organized into Utah and New Mexico territories, where slavery was to be a matter of popular sovereignty—that is, left up to the settlers. Senator Clay, a slaveholder, was a cofounder of the American Colonization Society (1816). It promoted the emigration of free and freed blacks, primarily to West Africa. Detail of a map of America in 1854. The Kansas-Nebraska Act lifted the ban on slavery above the 36°30' parallel, thus nullifying part of the Missouri Compromise. Since the Missouri Compromise, more of the Louisiana Purchase lands had become different territories. We seethed all the more as the outrages went on. In 1854 came the Kansas-Nebraska Act, so galling, so appalling. It trampled underfoot the Missouri Compromise by making slavery _possible_ in these two new territories if that's what settlers wanted. And it caused a civil war in one territory between proslavery and antislavery activists, resulting in "Bleeding Kansas." 
There was also "Bleeding Sumner": Massachusetts Senator Charles Sumner, nearly beaten to death—on the Senate floor!—by Representative Preston Brooks of South Carolina. At the heart of Brooks's hatred was Sumner's diehard antislavery stance. _Southern Chivalry—Argument Versus Club's_ [sic] (1856), by John L. Magee. On May 22, 1856, Representative Brooks attacked Senator Sumner with his gutta-percha cane in the Senate chamber. Sumner had recently given his speech "Crime Against Kansas" denouncing efforts to make Kansas slave soil. He also insulted Brooks's state and his relative, South Carolina Senator Andrew Butler. After the incident, Brooks received gifts of several new canes. The one from a Virginia chamber of commerce and the one from a group of citizens in Charleston, South Carolina, were inscribed: "Hit Him Again." Sumner didn't return to the Senate for three years because of his injuries. * * * **HOW LONG WILL** _**Northern men watch this struggle between Freedom and Slavery? . . . How long will they see their rights trampled on, their liberty sacrificed, their highest and most lofty sentiments crushed beneath the iron heel of oppression! How long will they bear all this without one effort of resistance?**_ —Thirteen-year-old Anna E. Dickinson, in a January 1856 letter to _The Liberator_. She was distraught over news that a schoolteacher in Lexington, Kentucky, had been tarred and feathered for protesting slavery. * * * Many of us put great faith in the fledgling Republican Party, born of protest over the Kansas-Nebraska Act and dedicated to banning slavery in the territories. We especially cheered the "radicals": Republicans intent on seeing slavery abolished _everywhere_ in America. The Republican Party's chief competition was the Democratic Party, a party dominated by proslavery Southerners, a party with more than a few Northern members on the side of wealthy planters who craved virgin land out west on which to build more slave-worked plantations. 
_Anti-Slavery Meeting on the Common_ , by Worcester & Pierce, from the May 3, 1851, issue of _Gleason's Pictorial Drawing-Room Companion_. It depicts Wendell Phillips speaking at an April 11, 1851, rally in Boston over the arrest, under the Fugitive Slave Law, of bricklayer Thomas Simms ("Sims" in some sources). He was soon returned to slavery in Savannah, Georgia. G. H. Hayes's engraving of "the Sack of Lawrence," a free-soil stronghold in Kansas Territory. The town was destroyed by proslavery activists on May 21, 1856. A few days later, abolitionist John Brown captained the massacre of several proslavery men living near Lawrence, along Pottawatomie Creek. In 1856, the man the Republican Party fielded for the presidency was the former explorer of the American West known as "the Pathfinder": Colonel John Charles Frémont. His slogan thrilled: "Free Soil, Free Labor, Free Speech, Free Men, and Frémont!" But in the end, the next president was a Democrat, James Buchanan of Pennsylvania. _Jno C. Fremont [and] Wm. L. Dayton. The Champions of Freedom!_ (ca. 1856). Frémont's running mate, featured in this Republican Party presidential campaign banner, was William Lewis Dayton of New Jersey, a former U.S. senator. In March 1857, two days after Buchanan's inauguration, came another blow: the U.S. Supreme Court's Dred Scott decision, denying liberty to Dred and Harriet Scott, enslaved in St. Louis, Missouri. The couple had sued for freedom on the grounds that, at times, their owner had them in captivity on free soil (for one, at Fort Snelling in Wisconsin Territory, where the two met and married). The Supreme Court's 7–2 decision did more than say no to the Scotts' freedom plea. It rejected the notion that they had a right to sue in the first place. Whatever blacks' hopes and dreams were, they didn't matter. Blacks really had no rights, said Chief Justice Roger Brooke Taney. Whether enslaved or free, blacks never were and could never be U.S. 
citizens: They were not included in the Constitution's "We the people." Going farther afield, Chief Justice Taney also declared that the federal government had no right to regulate slavery in the territories. * * * **WE ARE HERE** _**to enter our indignant protest against the Dred Scott decision—against the infamous Fugitive Slave Law—against all unjust and oppressive enactments . . . against the alarming aggression of the Slave Power upon the rights of the people of the North—and especially against the existence of the slave system at the South. . . . We are here to reiterate the self-evident truths of the Declaration of Independence. . . . We are here to declare that the men who, like Crispus Attucks, were ready to lay down their lives to secure American Independence, and the blessings of liberty . . . are not the men to be denied the claims of human nature, or the rights of citizenship. . . .**_ _**Give us Disunion with liberty and a good conscience, rather than Union with slavery and moral degradation.**_ —William Lloyd Garrison at Boston's Faneuil Hall, March 5, 1858, during a commemoration of the Boston Massacre * * * The abolitionist John Brown was all the more on fire after the Dred Scott decision, envisioning a mighty uprising against slavery. Financed by a group known as the Secret Six, he sought to set off the revolt in October 1859. His trigger: an audacious attack on the federal arsenal at Harpers Ferry, Virginia. After the raid failed, the name "John Brown" became both a rallying cry and a lightning rod. More so after he was captured, tried, and found guilty of conspiracy, murder, and treason, then executed by the Commonwealth of Virginia on December 2, 1859, a day we sorely mourned. * * * **I KNOW THAT** _**there is some quibbling, some querulousness, some fear, in reference to an out-and-out endorsement of [John Brown's] course . . . but I am prepared, my friends . . . to approve of the**_ **means** _**. . . 
because I remember that our Fourth-of-July orators sanction the same thing; because I remember that Concord, and Bunker Hill . . . and the celebration of those events, all go to approve the means that John Brown has used; the only difference being, that in our battles, in America, means have been used for**_ **white** _**men and that John Brown has used his means for**_ **black** _**men. . . .**_ _**I am ready to say, if he has violated the law . . . if he has been the traitor that the South brands him as having been, and the madman that the North says he has been, John Brown is not to be blamed. I say that the system which violates the sacredness of conjugal love, the system that robs the cradle of its innocent treasure . . . the system that takes away every God-given right . . . I say that that system is responsible for every single crime committed within the borders where [slavery] exists. It is the system, my friends.**_ —J. Sella Martin, before a crowd of four thousand blacks and whites at Boston's Tremont Temple, on the day of John Brown's execution * * * About a year after John Brown's body dangled from a rope, a Republican won the White House: Abraham Lincoln of Illinois. He was a man on record as calling slavery a "monstrous injustice." On election night 1860, we who were pledged to universal liberty wondered if the wait would soon be over. A photograph of John Brown by J. B. Heywood and painted by N. B. Onthank. The photograph was taken in May 1859, five months before Brown's Harpers Ferry raid. The proslavery mob's murder of Elijah Lovejoy in 1837 played a role in Brown's commitment—body, soul, and mind—to abolition and equal rights for blacks. "Great Sale of Land, Negroes, Corn, & Other Property!" November 24, 1860. This flyer was for an estate sale scheduled for January 2, 1861, in Charleston, South Carolina. Describing the ninety-one human beings as "likely" was another way of saying that they were fit, strong, or competent. 
PART II

* * *

**"A FIT AND NECESSARY MILITARY MEASURE"**

* * *

THE ENIGMATIC, MOODY, PRONE-TO-BROODING ABRAHAM LINCOLN, like most whites of his day, didn't think blacks (or any people of color) rated full equality, but he did believe that all people had a right to the fruits of their own labor. He truly loathed slavery. He saw it as plain _wrong_—and a blot on the nation, an anathema to its ideals. Problem was, as lawyer Lincoln interpreted the Constitution, the U.S. government could not regulate an institution within a state, be that institution marriage or slavery. Too, Lincoln couldn't turn a blind eye to the Fifth Amendment's provision barring the government from seizing people's property (whether land, home, or human beings) without compensation—that is, payment.

Fifty-one-year-old Abraham Lincoln, photographed by Mathew Brady on February 27, 1860, the day Lincoln delivered his famous speech at the Cooper Institute (now the Cooper Union). Lincoln was born in the slave state of Kentucky but lived most of his life on free soil, first in Indiana and then in Illinois. He served four terms in the Illinois legislature and one term representing Illinois in the U.S. Congress.

"I have no purpose directly or indirectly to interfere with the institution of slavery in the states where it exists. I believe I have no lawful right to do so, and I have no inclination to do so." Lincoln had uttered these words in 1858 during a senatorial election debate with the father of the Kansas-Nebraska Act, Senator Stephen Douglas of Illinois (whom Lincoln failed to unseat). Two years later, while seeking to be the Republican Party's presidential candidate, Lincoln held fast to his equally long-held belief that, though the U.S. government could not regulate slavery in the states, it _could_ regulate it in the territories. He would point out, for example, that back in 1787 Congress banned slavery in the Northwest Territory.
Lincoln made this point in his February 1860 speech at New York City's Cooper Institute. There he held the packed Great Hall for two hours and closed with this charge: "Let us have faith that right makes might, and in that faith, let us, to the end, dare to do our duty as we understand it." This speech brought Lincoln to national prominence, and he clinched the Republican Party's presidential nomination a few months later.

Legions of proslavery Southerners—especially wealthy planters—believed that it was their duty to leave the Union if Lincoln became president. They were dead sure that he wouldn't just continue to rail against slavery in the territories, he would also seek to abolish it in the states where it already existed, despite what he said.

"The Union Is Dissolved!" _Charleston Mercury_, December 20, 1860. The first public notice that the South Carolina legislature had voted unanimously to leave the Union.

After Lincoln won the White House, South Carolina proved that disunion had been no idle threat. In December 1860—three months before Lincoln was even inaugurated—South Carolina issued its declaration of independence. By early February 1861 more states had seceded: Mississippi, Florida, Alabama, Georgia, Louisiana, then Texas.

CSA president Jefferson Davis (ca. 1858–60), by Mathew Brady. Like Lincoln, Davis was born in the slave state of Kentucky. His pre–Civil War career in the U.S. government included service as secretary of war and as a U.S. senator from Mississippi, a post he quit after that state left the Union in January 1861.

The breakaway states formed the Confederate States of America (CSA), creating a government that paralleled that of the United States, having a legislative, a judicial, and an executive branch. Its chief executive was Jefferson Davis, a Mississippi planter and slaveholder. The CSA's constitution pulled no punches on slavery.
It stated that in any new territory that the Confederacy acquired "the institution of negro slavery, as it now exists in the Confederate States, shall be recognized and protected." And from the start, the CSA was keen on seizing U.S. property within its borders, especially arsenals and forts. A divided America couldn't wait to hear what Lincoln had to say on March 4, 1861, Inauguration Day. In his speech, Lincoln reached for reconciliation, striving to stave off a civil war and to keep the remaining slave states from joining the CSA. He reiterated that he had no plans or desire to interfere with slavery in the states. He promised that the U.S. government would continue to uphold the 1850 Fugitive Slave Law. He also mentioned a measure that the outgoing president, James Buchanan, had promoted and that Congress had passed two days earlier: a thirteenth amendment to the Constitution that would ban Congress from ever abolishing slavery in the states where it already existed. "I have no objection to its being made express, and irrevocable," said Lincoln of this amendment. * * * **I KNOW WHAT** _**anarchy is. I know what civil war is. I can imagine the scenes of blood through which a rebellious slave population must march to their rights. They are dreadful. And yet, I do not know, that, to an enlightened mind, a scene of civil war is any more sickening than the thought of a hundred and fifty years of slavery.**_ —Wendell Phillips at Boston's Music Hall in mid-February 1861 * * * Boston-born abolitionist Wendell Phillips (ca. 1858). _Inauguration of President Lincoln, in Front of the Capitol, at Washington, D.C_., from the October 1, 1881, issue of _Pictorial War Record_. The Capitol's new dome will not be completed until 1863, when its Statue of Freedom is installed on top. (And not until 1937 will Inauguration Day be changed from March 4 to January 20.) _Scene Around a Bulletin-Board_ , from _Harper's Pictorial History of the Great Rebellion_ (1866). 
On April 13, 1861, the Union's Major Robert Anderson surrendered Fort Sumter. At the end of his inaugural address, Lincoln pressed hard for peace and reunion. "In _your_ hands, my dissatisfied fellow-countrymen, and not in _mine_ , is the momentous issue of civil war. The government will not assail _you_. You can have no conflict, without being yourselves the aggressors." He closed on a note of hope: for "the better angels of our nature" to prevail. About a month later, Confederate forces fired on Union-held Fort Sumter in South Carolina's Charleston Harbor. Soon, the Civil War was on. By late spring 1861, four more slave states—Virginia, Arkansas, North Carolina, Tennessee—had joined the CSA. Adding to the turmoil, Virginia's western counties, where allegiance to the Union was strong and reliance on slavery was not, had seceded from the Old Dominion (the genesis of a future state: West Virginia). _Interior of Fort Sumter. During the Bombardment, April, 12th, 1861_ , by Currier & Ives (1861). The CSA attacked Fort Sumter when the Union moved to resupply it with food and other provisions. AS SABERS SLASHED, AS SHOT AND SHELL BLASTED LAND and obliterated lives, Lincoln steadily proclaimed that the _sole_ aim of the war was to end the rebellion—preserve the Union as it was. Abolitionists called for a broader war. "Death to Slavery!" was their hearts' cry—as it was for countless souls in captivity, scores of whom made haste to Union camps and forts the minute America began flying apart. Fortunate were the ones given asylum. Most were rebuffed, and some were even returned to their owners. Similarly, when free black men rushed to fight in the Union Army, their services were refused. _The Lexington of 1861_ , Currier & Ives (ca. 1861). On April 19, 1861, in Baltimore, Maryland, pro-Confederate people attacked the Union's Sixth Massachusetts Regiment en route to Washington, D.C. 
The incident, in which soldiers and civilians were wounded and killed, is known as the Baltimore Riot, the Pratt Street Riot, or the Baltimore Massacre. Local and state authorities ordered certain railroad bridges destroyed to prevent more federal troops from traveling through Maryland. Some citizens hacked telegraph lines and cut down telegraph poles to D.C. As a result, Maryland, which borders the District of Columbia on three sides, was put under martial law. The riot/massacre occurred eighty-six years to the day after the first fight in the American Revolution: the Battle of Lexington and Concord. J. Sella Martin, pastor of Boston's Joy Street Baptist Church. Martin was born in slavery in Charlotte, North Carolina. When he was a boy, he was sold apart from his mother and sister. He was sold several more times before escaping to the North in the 1850s, when he was in his early twenties. * * * **ARE NOT THESE** _**Northern people the most arrant cowards, as well as the biggest fools on earth? Just think of [Lieutenant Colonel Dimick] and [Lieutenant] Slemmer [at Fort Pickens, in Pensacola, Florida] sending back the fugitives that sought protection. . . . They refuse to let white men sell the Southerners food, and yet they return slaves to work on the plantation to raise all the food that the Southerners want.**_ —J. Sella Martin to Frederick Douglass in a May 1861 letter, printed in the June 1861 issue of _Douglass' Monthly_ * * * After the outbreak of the war, people who wanted to link preservation of the Union with black liberty had cause to both hurrah and hiss over actions taken by generals, by the Republican-dominated Congress, and by President Lincoln himself. In May 1861, the Union's General Benjamin Franklin Butler, not an abolitionist, flouted the Fugitive Slave Law in his domain. This was the Virginia peninsula's Fort Monroe, where three black men sought sanctuary on May 23. 
When their owner, a Confederate colonel, got wind of their whereabouts, he requested their return. On one condition, replied General Butler: that the colonel pledge allegiance to the USA. When the colonel refused, Butler kept the black men as contraband of war: that is, as confiscated enemy property of military value. Butler wasn't in a quandary about what to do with the three black men. He put them to work. In justifying his actions to a superior, the general wrote: "I am credibly informed that the negroes in this neighborhood are now being employed in the erection of batteries and other works by the rebels, which it would be nearly or quite impossible to construct without their labor. Shall [the rebels] be allowed the use of this property against the United States, and we not be allowed its use in aid of the United States?" _Contraband of War_ , from _A Popular History of the United States_ (vol. 4, 1881). Seated at left: General Benjamin Butler. Standing in front of the table are presumably the three black men who took refuge at Fort Monroe in late May 1861: Frank Baker, Shepard Mallory, and James Townsend. It wasn't long before droves of walking, talking "property" in the area headed to Butler's garrison, known on the grapevine as Freedom's Fort. There were those in the Congress eager for the Union to be one great big "Freedom's Fort." While they held their fire on a state's right to slavery, most Republicans were ready to take aim at the property rights of _individuals_ in rebellion—just as General Butler had done. They were all the more determined after July 21, 1861. This was the day of the CSA victory in the first major land battle: at Bull Run Creek near Manassas, Virginia, about twenty-five miles from Washington, D.C. _Secession Exploded_ (1861), by William Wiswell. This cartoon casts secession as a monster and the Confederate states as equally vile. The artist also remarked on Southern states still in the Union. 
For example, Maryland (tugging on Uncle Sam's coattails) is portrayed as two-faced. Other states are two-headed, reflecting the existence of pro-Union and pro-Confederate factions within them. _Stampede of Slaves from Hampton to Fortress Monroe_ , from the August 17, 1861, issue of _Harper's Weekly_. By early August 1861, roughly nine hundred black adults and children had made their way to "Freedom's Fort." The Battle of Bull Run, also known as the Battle of Manassas. Of the estimated 32,000 Confederate troops who fought in this battle, there were some 1,700 casualties. Of the roughly 28,000 Union troops, there were about 3,000 casualties. The engraving from which this print was made originally appeared in the August 3, 1861, issue of _Frank Leslie's Illustrated Newspaper_. Wild talk about Confederate atrocities against Union soldiers in the Battle of Bull Run—along with reports of Confederate use of slave labor during it—had hordes of Union loyalists hot for revenge. So it was with significant support from the pro-Union public that, on August 6, 1861, Congress passed a bill introduced by Senator Lyman Trumbull, Republican of Illinois: "An Act to confiscate Property used for Insurrectionary Purposes." This bill sanctioned seizure of property used to aid the rebellion, whether weapons or wagons, plantations or people. The "contrabands," as seized people were callously called, would enter a legal limbo. Technically, they would not be free but, rather, in the Union's custody. Lincoln signed the confiscation bill into law, though it was bound to prompt squawks from many folks in the "border states," as slave states that had not joined the CSA were known: Delaware, Kentucky, Maryland, Missouri, and, though not a state, the federation of Virginia's western counties that remained loyal to the Union. Within weeks of signing the Confiscation Act, Lincoln had something more explosive to worry about when it came to the border states. 
THE FLASHPOINT WAS MISSOURI, WHERE CONFEDERATE sympathizers were engaging in guerrilla warfare and where CSA forces had gained some control of the southwestern part of the state. The Union's man in charge of keeping Missouri in line was the former Republican presidential candidate, John Charles Frémont, now a general. On August 30, 1861, from his base in St. Louis, Frémont put Missouri under martial law. In outlining what it meant to be under military rule, he proclaimed that, among other things, Confederate sympathizers would have their property seized. If that property included people, they would be _freed_! _Whoa!_ Lincoln, who learned of Frémont's decree in the newspaper, couldn't let that stand. He promptly wrote to Frémont, pressing him to void the passage on freedom. Confiscation was fine, but freedom was political dynamite. It would, said Lincoln, "alarm our Southern Union friends, and turn them against us—perhaps ruin our rather fair prospect for Kentucky." Lincoln worried about Kentucky for good reason. Back in April, when the president called out to the states for troops to put down the rebellion, his birth state had refused. Its governor, Beriah Magoffin, responded thusly: "I say, _emphatically_ , Kentucky will furnish no troops for the wicked purpose of subduing her sister Southern states." Then, within weeks, Kentucky officially declared neutrality. If Kentucky went over to the CSA, the Union would lose a major source of grain and livestock as well as open access to critical waterways, most especially the Ohio River. Lincoln feared that if Kentucky bolted, so might Missouri, also rich in resources. What's more, its largest city, St. Louis, was on the Mississippi River, key for transporting troops, supplies, and commercial goods, just like the Ohio River. _Secession, 1860–1861_. Since 1854, several territories had been subdivided into new territories, and several states had entered the Union. _Purple_ : free-soil states on the side of the Union. 
_Olive_ : slave states that did not secede. _Pink_ : pro-Union territories. _Dark orange_ : slave states that seceded before the outbreak of war. _Light orange_ : states that seceded after the outbreak of war. _Yellow_ : territories that sided with the CSA, among them Indian Territory and the attached panhandle known as No Man's Land. Letter from Gen. Robert Anderson to President Abraham Lincoln, September 13, 1861. Anderson, former commander of Fort Sumter, was now head of the Department of the Cumberland (Kentucky and Tennessee). In this letter, Anderson tells the president that Frémont's declaration of freedom is "producing most disastrous results" in Kentucky and could cause the state to leave the Union if not revoked. The letter is dated two days after Lincoln did just that, apparently unbeknownst to General Anderson. After Frémont refused to revise his proclamation, Lincoln did it for him. (Their relationship was beyond repair. Before the year was out, the general was relieved of command.) Many Union loyalists cheered Lincoln for revoking Frémont's freedom edict. But not abolitionists. They bombarded the president with letters of protest, pilloried him in publications, and commiserated with one another over what they saw as his maddening timidity. "We cannot conquer the rebels as the war is now conducted," complained Republican Senator Charles Sumner to a friend. "There will be a vain masquerade of battles, a flux of blood and treasure, and nothing done!" What a shame, the senator lamented of Lincoln, "to have the power of a god and not to use it godlike!" Sumner, who was quite chummy with the president, had been urging him to declare immediate emancipation since the war began. * * * **FRÉMONT'S PROCLAMATION HAS** _**in it that genuine military ring, that martial directness, for which the heart of the people in disturbed times always longs. 
They long for the man without fear—whose sword divides all meshes of compromise, all fine-spun legal doubts and hesitancies.**_ —Harriet Beecher Stowe in _The Liberator_ (September 20, 1861) * * * "I think Sumner, and the rest of you, would upset our apple-cart altogether, if you had your way." That's what the cleric Charles Edwards Lester recalled Lincoln telling him. "We didn't go into the war to put _down_ slavery, but to put the flag _back_." But the president didn't rule out a major move against slavery. "We must wait until every other means has been exhausted," Lincoln told Lester. _"This thunderbolt will keep_." But wouldn't slavery enable the CSA to keep fighting? Harriet Beecher Stowe (ca. 1865). Stowe's bestseller, _Uncle Tom's Cabin_ (1852), a sentimental novel about the horrors of slavery, brought countless people into the antislavery fold. It also earned its author, a native New Englander, the everlasting enmity of proslavery people. "Camp of 'contrabands' of 13th" was written below this photograph, one of several on an album page devoted to the Thirteenth Massachusetts Infantry in Williamsport, Maryland, during the winter of 1861–62. Almost always, blacks who took refuge with Union soldiers worked for their keep. Their duties ranged from cooking and doing laundry to digging trenches and serving as scouts. Of the nation's roughly four million people in bondage, about 3.5 million of them were in the Confederate states. Even if toddlers and old folks were subtracted from that number, that still left a lot of forced laborers at the CSA's disposal. These black people didn't just raise cotton and food crops. They didn't just build batteries. They were also cooks and coopers, bakers, butchers, blacksmiths, and boatmen. And some were put to work in weapon-making factories and shops, such as the one at Tredegar Iron Works in Richmond, Virginia, capital of the Confederacy. All this black labor freed up whites for combat. 
Plus, people in bondage had information the Union could use (on troop movements, for example). Letting the CSA keep such a valuable resource was insane, charged abolitionists. Abolitionists had more to deplore in Lincoln's Annual Message to Congress, in December 1861. In it, the president proposed freedom for the so-called contrabands—referring to them, oddly, as having been "liberated" by the Confiscation Act. The freedom he now broached had a string attached. Congress would have to earmark money for sending "contrabands" out of the country. And not just them. "It might be well to consider, too," added Lincoln, "whether the free colored people already in the United States could not, so far as individuals may desire, be included in such colonization." Lincoln, a member of the Illinois Colonization Society in the 1850s, couldn't see blacks ever getting a fair shake in America, nor could he envision blacks and whites living together in peace. Plenty of whites outside the CSA felt the same way. The nation's roughly 500,000 free blacks faced intense discrimination at almost every turn: barred from certain schools, banned from certain jobs, denied the right to vote in some localities. What's more, in some states, including Lincoln's Illinois, it was a crime for blacks to take up residence there. For many whites in Illinois and elsewhere in the North and West, the idea of millions of freed blacks was a frightful thing. Some feared that blacks would seek bloody revenge against whites in the South. Others feared that freed people would troop north and west seeking even freer air. And this fear spawned other phantom fears: that blacks wouldn't be able to function in freedom and thus would become burdens; that blacks _would_ be able to function in freedom and thus threaten white jobs. Declaration, 1861. Formed to push the public—and Lincoln—to hit hard at slavery, the Emancipation League championed abolition "as a measure of justice and as a military necessity." 
Frederick Douglass and Wendell Phillips lectured for the league. Its chief financial backer was George Stearns, a wealthy white manufacturer. Other white members included Julia Ward Howe and her husband, physician Samuel Gridley Howe. Howe and Stearns had been members of the Secret Six, the group that had financed John Brown. Republican senator Henry Wilson of Massachusetts (ca. 1860–65), a future vice president under President Ulysses S. Grant. There were also whites who had no fear but simply didn't want throngs of blacks in their midst. This was akin to some whites opposing slavery in the territories: they wanted the territories to be lands of opportunity—not for already wealthy slave-holding planters but for workaday whites, allowing them the chance at what would later be known as the American Dream. So Lincoln was hardly alone in favoring colonization. Not in the 1850s and not during the Civil War. In the summer of 1861, Republican Senator James Henry Lane, a champion of a free-soil Kansas, had called for blacks and whites to be kept apart. Far apart—as in "an ocean rolling between them." Lane envisioned South America as an ideal place for blacks, or as he put it, "the elysium of the colored man." America would be "the elysium of the white." WHILE CONGRESS CHEWED ON LINCOLN'S PITCH FOR colonization funds in December 1861, it was dealing with an antislavery bill introduced by Senator Henry Wilson of Massachusetts. His bill called for immediate emancipation where the federal government had undisputed jurisdiction: Washington, D.C. It was also in December 1861 that Secretary of War Simon Cameron created a stir with his annual report, for in it he recommended that the Union Army use "contrabands" in combat. Worse, the report was leaked to the press. Lincoln was not happy. It was yet another idea to horrify many white loyalists, especially in the border states, two of which were embroiled in their own civil wars.
In October, a pro-Confederate faction of Missouri's legislature had established its own government, in Neosho, on the Arkansas-Missouri border. In November, the same thing had happened in Kentucky. There, the pro-Confederate government claimed Bowling Green as its capital. These developments made Secretary of War Cameron's proposal to arm blacks who escaped to Union lines—many of them from the border states—seem like adding fuel to a fire. Cameron, already under scrutiny for corruption, wouldn't be issuing any more potentially rebel-rousing reports. Lincoln made sure that there was an ocean rolling between Cameron and the nation: In January 1862, the president made him ambassador to Russia. By then, the rump governments of Kentucky and Missouri had been admitted into the CSA. AND THE WAR RAGED ON. UNION AND CONFEDERATE FORCES continued the wounding and the killing in skirmishes and full-blown battles, gunning for each other on land, at sea, in swamps, from trees—and the CSA was far from being licked. Abolitionists clamored all the louder for Lincoln to champion black liberty as key to Union victory. * * * **SLAVERY IS THE** _**stomach of the rebellion. The bread that feeds the rebel army, the cotton that clothes them, and the money that arms them and keeps them supplied with powder and bullets, come from the slaves. . . . Strike here . . . and you at once put an end to this rebellion. . . . Shall this not be done, because we shall offend the Union men in the border states?**_ —Frederick Douglass in _Douglass' Monthly_ (September 1861) * * * In a January 1862 speech in the House of Representatives, Pennsylvania's Thaddeus Stevens called emancipation "the most terrible weapon" in the Union's armory. "Universal emancipation must be proclaimed to all," he said. Stevens had long believed that emancipation was a moral necessity. Now, along with seeing it as a weapon, he thought it essential for a permanent peace between North and South.
Slavery was the cause of the war, Stevens insisted. Even if the Union won it, if the nation did not eradicate slavery, war would come again, he predicted. Stevens thus urged: "While you are quelling this insurrection at such fearful cost, remove the cause, that future generations may live in peace." Republican representative Thaddeus Stevens of Pennsylvania (ca. 1861). * * * **THEY MAY SEND** _**the flower of their young men down South . . . one year, two years, three years, till they are tired of sending, or till they use up all the young men. All no use! God's ahead of Master Lincoln. God will [not let] Master Lincoln beat the South till he do the right thing.**_ —Harriet Tubman (according to abolitionist Lydia Maria Child in a January 1862 letter to poet John Greenleaf Whittier, another abolitionist) * * * On the face of it, the president wasn't persuaded by Stevens or anyone else to strike hard at slavery. But he revealed himself ready to chip away at it. On March 6, 1862, Lincoln asked Congress to back compensated, gradual emancipation. Specifically, he wanted the senators and representatives to issue the following joint resolution:

RESOLVED: That the United States ought to cooperate with any State which may adopt gradual abolishment of slavery, giving to such State [financial] aid, to be used by such State in its discretion, to compensate for the inconveniences, public and private, produced by such change of system.

Lincoln had long believed that the most peaceable, practical way to end slavery in the states was for the U.S. government to pay them to phase it out, as some Northern states had done (but without compensation). For example, Pennsylvania's gradual emancipation act had set a time frame for the freedom of children born to enslaved women after March 1, 1780, the date of the act's passage. These children were to be indentured servants until age twenty-eight, then free.
* * * **THE LATE MESSAGE** _**of President Lincoln to Congress, relative to emancipation, has given rise to more speculations, and created more surmises than any other document ever issued from the mansion halls of the White House. . . .**_ _**I look at it as one of the most ingenious subterfuges. . . . [I]t denies that Congress has any power to legislate on slavery—leaving it under the absolute control of individual States.**_ —Henry McNeal Turner in _The Christian Recorder_ (March 22, 1862) * * * While Lincoln waited for that yes or no on the joint resolution on compensated, gradual emancipation, Congress passed, in early March, and the president approved, a measure that Senator Charles Sumner had championed: an additional article of war that forbade Union officers to return "fugitives from service or labor" to their owners. This article of war gutted the 1850 Fugitive Slave Law. Was the Union becoming one great big "Freedom's Fort" after all? Would the border states see the writing on the wall and say yes to compensated, gradual emancipation? Or would they stand firm for slavery? While the world awaited the outcome, more enslaved people made a dash for freedom. Still others, staying put, engaged in sabotage, from work slowdowns to torching Confederate property: military, municipal, and civilian. In the meantime, the border states seemed less of a worry to many people loyal to the Union. By late March 1862, it was highly unlikely that Maryland and Delaware would go over to the CSA. Yes, pro-Confederate sympathies remained strong in these states, but so too was the presence of Union troops. Also by then, the Union had routed CSA forces from southwestern Missouri. As for Kentucky, the Union didn't yet have a total lock on that state, but in February, its forces had taken Bowling Green, capital of the Confederate shadow government. COMPENSATED, GRADUAL EMANCIPATION WAS SOON IN THE news again. 
On April 10, 1862, Congress gave the president that joint resolution he wanted. Two days later, a bill that abolitionists wanted to clear Congress did: the one making the District of Columbia slavery-free. As Lincoln wished, the bill included compensation to slaveholders loyal to the Union. For loss of their human property, the U.S. government would pay them up to three hundred dollars per person on average. The people freed, along with blacks in D.C. already free, would receive one hundred dollars, but only if they moved to Haiti, Liberia, or another nation of the president's choosing. Most abolitionists couldn't stomach slaveholders getting even a cent and despised Lincoln's push for colonization. Yet they had to let out a huge "Hallelujah!" after the president signed the D.C. Emancipation Act into law on April 16. Roughly three thousand blacks were now forever free. (How many of them accepted the hundred dollars and left the country is unknown, but on April 21, Congress received a petition from forty blacks in D.C. seeking help in emigrating to Central America. Two days later there was a similar petition from twenty. Henry McNeal Turner was among the signers.) Petition for owner compensation (page one). Former slaveholder John Harry of Georgetown filed for compensation on April 29, 1862. He claimed a loss of twenty-seven people because of the D.C. Emancipation Act. They included Grace Butler (about fifty-eight years old), her two daughters Martha and Eliza Ann (about thirty-four and twenty), and her son Walter (sixteen). Harry stated that the mother was of "strong frame," and complained of rheumatism. He valued her at one hundred dollars (but the government valued her at $43.80). All told, the U.S. government paid more than nine hundred D.C. slaveholders close to one million dollars for black people they had held in slavery. 
In 1862, blacks in Washington, D.C., certainly celebrated the end of slavery in the nation's capital, but they didn't have their first grand Emancipation Parade until after the war, as depicted here in F. Dielman's _Celebration of the Abolition of Slavery in the District of Columbia by the Colored People, in Washington, April 19, 1866_. His engraving originally appeared in the May 12, 1866, issue of _Harper's Weekly_. Republican representative Owen Lovejoy of Illinois (ca. 1850–60). BANNING SLAVERY IN THE TERRITORIES was still on the Republican agenda. In early May 1862, an Illinois representative launched his crusade in Congress for a bill doing just this. He was Owen Lovejoy, a younger brother of the abolitionist Elijah Lovejoy, murdered by a proslavery mob twenty-five years earlier. As with other antislavery measures, forbidding slavery in the territories would be easier now that the war was on. Many Democrats who would have fought tooth and nail to kill such a bill had gone over to the CSA. (At the time, there were few enslaved people in the territories. According to the 1860 census, Utah, for example, had twenty-nine.) * * * **WHAT I BELIEVE** _**is this: we have opened in our national history the chapter which is to record the freedom of every man under the stars and stripes. Abraham Lincoln may not wish it; he cannot prevent it; the nation may not will it, but the nation can never prevent it. . . . For the first time in our history for seventy years, the government, as a corporation, has spoken antislavery words and done antislavery deeds. It is a momentous alteration in the heart that governs the government. I allude to that fact, not because I care for the state of mind of Mr. 
Lincoln or the Cabinet specifically; I view them as milestones.**_ —Wendell Phillips on May 6, 1862, at the American Anti-Slavery Society's 29th annual convention, in New York City * * * Freedom seemed fast on the march down south, following Union victories in coastal South Carolina, Georgia, and Florida. On May 9, the Union's General David Hunter put these states under martial law. What's more, Hunter declared that everyone held in captivity in those states was _free_. Just as with Frémont's freedom decree, Lincoln quashed Hunter's. In a proclamation on May 19, the president stressed that _no one_ had been authorized to declare _anyone_ free in _any_ state. * * * **THE SOUTH, HAVING** _**been deceived in regard to Mr. Lincoln and the aims of the Republican party, went to war to protect slavery. Now, perhaps, they are beginning to see that Mr. Lincoln is not so far from a slave-catcher, after all.**_ —Anna E. Dickinson, furious over the Hunter affair, at the annual New England Anti-Slavery Convention on May 28, 1862, in Boston * * * Interestingly, Lincoln followed this pronouncement, so pleasing to proslavery activists, with words apt to make some abolitionists take heart: If a major emancipation edict became "a necessity" to save the Union, then that was _his_ call as commander in chief of the U.S. armed forces. In this proclamation the president also urged slaveholding states to say yes to compensated, gradual emancipation. This was not his first appeal. Back in late 1861, Lincoln had secretly tried and failed to get a yes from the legislature of Delaware, a state with fewer than two thousand people in slavery (not even 2 percent of the population). After Lincoln's May 1862 appeal, Delaware still didn't bite. Neither did any other state. Not in May. And not after June 19—the day the bill prohibiting slavery in the territories, having cleared Congress, was sent to Lincoln. He signed it into law straightaway. Philadelphia-born Anna E. Dickinson (ca. 1861–65). 
In January 1860, seventeen-year-old Dickinson made her debut as a public speaker with some impromptu remarks at a program called "Woman's Rights and Wrongs." That fall she spoke out against slavery at the Pennsylvania Anti-Slavery Society's annual meeting. Thanks to William Lloyd Garrison, Dickinson was soon giving speeches around New England. Robert Smalls had been on the lookout for a chance to escape slavery since the start of the war. He took his liberty in May 1862 aboard the CSA gunboat _Planter_ , piloting the vessel out of Charleston Harbor. His family and about a dozen other enslaved adults and children were with him. SLAVERY, LONG OUTLAWED IN THE NORTH, WAS NOW OUTLAWED in the District of Columbia and in the territories. More people in captivity in the CSA and in the border states were taking their liberty, making the most of the chaos of war, a war in its second year. Still, Lincoln gave no signal that he deemed a major freedom decree "a necessity." The president kept everybody guessing. On the Fourth of July 1862, Charles Sumner begged Lincoln to strike hard at slavery and thereby make the day "more sacred and more historic." Said Sumner: "You need more men, not only at the North, but at the South, in the rear of the rebels; you need the slaves." "Too big a lick," Lincoln replied. He projected that half the Union troops would desert in protest. He also expressed concern about the border states. A week later, the president met with representatives of the border states. In urging them to say yes to compensated, gradual emancipation, Lincoln tried some of everything. He appealed to their patriotism, arguing that a yes would be a powerful show of support for the Union, strong enough to take the wind out of the CSA's sails. It would shorten the war. He appealed to their wallets, maintaining that if the war dragged on, slavery "will be gone, and you will have nothing valuable in lieu of it."
He also appealed to their prejudice: "When numbers shall be large enough to be company and encouragement for one another, the freed people will not be so reluctant to go [to voluntarily leave the country]." Two days later, on July 14, Lincoln received a letter with the majority decision: no. They wouldn't let slavery go. As for their support of the Union, these border state representatives assured Lincoln that he could count on that, but it wasn't unconditional. "Confine yourself to your constitutional authority," they wrote. "Confine your subordinates within the same limits; conduct this war solely for the purpose of restoring the Constitution to its legitimate authority; concede to each state and its loyal citizens, their just rights, and we are wedded to you by indissoluble ties." While the border states stood their ground, on July 17, Congress passed and Lincoln signed two bills advancing liberty. There was the Militia Act. It authorized Lincoln to let blacks serve in the U.S. armed forces as laborers and in any other capacity "for which they may be found competent." In the case of black boys and men who belonged to Confederates, they would be declared free, along with their mothers, wives, and children (if they, too, had Confederate owners). Mightier than the Militia Act was the second Confiscation Act. Like the first, its "father" was Senator Lyman Trumbull. Under this new bill, the property of anyone who in any way supported the CSA could be seized. When it came to human property, whether they escaped, were captured in battle, or were abandoned, these people wouldn't be left in legal limbo. They would be "forever free of their servitude and not again held as slaves." This confiscation act applied immediately to leading members of the CSA, such as its president and other government officials. 
For less prominent supporters of the rebellion, the president had to first issue a public warning giving them sixty days to return their allegiance to the Union or risk seizure of their property. Republican senator Lyman Trumbull of Illinois (ca. 1861). The second Confiscation Act also reinforced part of the Militia Act by empowering the president to "employ as many persons of African descent as he may deem necessary and proper" for quashing the rebellion. What's more, the second Confiscation Act scratched Lincoln's itch for colonization. It gave him broad powers to move blacks to "some tropical country." But there were conditions: blacks had to be willing to go, and this country had to be willing to receive them "with all the rights and privileges of freemen." Lincoln also had a big budget for colonization. Months back, tied to the D.C. Emancipation Act, Congress had appropriated one hundred thousand dollars for colonization. On July 16, 1862, the day before passage of the second Confiscation Act, Congress approved a spending bill that earmarked for colonization another five hundred thousand dollars (roughly eleven million dollars today). THE TROPICAL CLIME UPPERMOST ON THE PRESIDENT'S mind for blacks was the province of Chiriquí in present-day Panama. Back in 1861, Lincoln had been much persuaded by the claims of shipping magnate Ambrose Thompson, chief of the Chiriquí Improvement Company. According to Thompson, his firm owned several hundred thousand acres of land in Chiriquí, where blacks from America, the thinking went, could build new lives for themselves as farmers (growing coffee, cotton, rice, and other crops) or as coal miners, given the purportedly rich coal deposits there. _Political Caricature. No. 4, The Miscegenation Ball_ (1864) by Kimmel & Foster. This lithograph lampoons the Republican Party as infatuated with black people.
Like other cartoons of the era, it alludes to one reason why many whites opposed emancipation and advocated separation of the races: fear that freedom would lead to widespread race mixing, also known as "miscegenation." This would, in turn, lead to romances and ultimately to the "mongrelization" of the white race. Ironically, many people who opposed voluntary interracial relationships never had a problem with the fact that many enslaved black women were forced to have relations with white men, as happened to the mothers of Frederick Douglass and J. Sella Martin. On August 14, 1862, Lincoln met in the White House with five men of some mark in D.C.'s black community. It was a delegation headed by Edward M. Thomas, president of the Anglo-African Institute for the Encouragement of Industry and Art. Lincoln wanted these men to recruit a band of blacks (ideally, groups of families) for a pilot emigration program. "You and we are different races," the president told the delegation. "We have between us a broader difference than exists between almost any other two races. Whether it is right or wrong I need not discuss, but this physical difference is a great disadvantage to us both, as I think your race suffer very greatly, many of them by living among us, while ours suffer from your presence." Solution: separation of the races. Calling slavery "the greatest wrong inflicted on any people," Lincoln stated that even when blacks "cease to be slaves, you are yet far removed from being placed on an equality with the white race." And it was a white race shedding rivers of its own blood: "See our present condition—the country engaged in war!—our white men cutting one another's throats, none knowing how far it will extend; and then consider what we know to be the truth. But for your race among us there could be no war." The president's remarks, reported by the press, earned him praise from negrophobes. 
The _New York Herald_ , for one, beamed over the "great truth" Lincoln had spoken about the races, though not over his plans for colonization, thinking it not feasible. The nation needed the labor, the newspaper argued, favoring "the mild servitude of the Southern states" for blacks. As for abolitionists, "squelch them," said the _Herald_ , insisting that abolitionists, by their agitation over the last thirty years, had "caused the war." Not surprisingly, what Lincoln said to that black delegation enraged more than a few abolitionists, some of whom called for _slaveholders_ to be colonized beyond the nation's shores. Frances E. W. Harper, from _The Underground Railroad_ (1872), by her friend William Still. Harper, a writer, teacher, and Underground Railroad aide, was born free in a slave state (Maryland). Although she curtailed her lecturing in 1860 when she married Fenton Harper of Ohio, she made her anti-colonization stance known through her writing. * * * **A SPECTACLE, AS** _**humiliating as it was extraordinary, was presented to all Christendom on the afternoon of the 14th. . . . By special invitation of President Lincoln, a committee of the colored people . . . appeared before him, to listen to a proposition, on his part, for their removal to Central America. . . . Can anything be more puerile, absurd, illogical, impertinent, untimely?**_ —William Lloyd Garrison in _The Liberator_ (August 22, 1862) **NO, MR. PRESIDENT,** _**it is not the innocent horse that makes the horse thief, not the traveler's purse that makes the highway robber, and it is not the presence of the negro that causes this foul and unnatural war, but the cruel and brutal cupidity of those who wish to possess horses, money and negroes by means of theft, robbery, and rebellion.**_ —Frederick Douglass in _Douglass' Monthly_ (September 1862) **LET THE PRESIDENT** _**be answered firmly and respectfully . . . 
that while we admit the right of every man to choose his home, that we neither see the wisdom nor expediency of our self-exportation from a land which has been in a measure enriched by our toil for generations.**_ —Frances E. W. Harper in _The Christian Recorder_ (September 27, 1862) * * * The best-known public rebuke Lincoln received on the matter of slavery came from Horace Greeley, the editor of the widely read _New-York Daily Tribune_. On August 19, Greeley wrote Lincoln a fever-pitch letter. The next day Greeley published it in his newspaper under the headline THE PRAYER OF THE TWENTY MILLIONS (as if he were speaking for every soul in the North). Millions of Americans, said Greeley, were "sorely disappointed and deeply pained by the policy you seem to be pursuing with regard to the slaves of the rebels." Forget about the "fossil politicians" in the border states, said Greeley. Hurl a thunderbolt at slavery. "On the face of this wide earth, Mr. President, there is not one disinterested, determined, intelligent champion of the Union cause who does not feel that all attempts to put down the rebellion and at the same time uphold its inciting cause [slavery] are preposterous and futile—that the rebellion, if crushed out tomorrow, would be renewed within a year if slavery were left in full vigor." Three days later, Lincoln's response to Greeley—in full vigor—appeared in the _Daily National Intelligencer_ , a D.C. newspaper: Horace Greeley from Amherst, New Hampshire (ca. 1862). My paramount object in this struggle _is_ to save the Union, and is _not_ either to save or to destroy slavery. If I could save the Union without freeing _any_ slave, I would do it, and if I could save it by freeing _all_ the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that. 
What I do about slavery, and the colored race, I do because I believe it helps to save the Union; and what I forbear, I forbear because I do _not_ believe it would help to save the Union. Lincoln didn't end on a fire-breathing note. He sounded apologetic. "I have here stated my purpose according to my view of _official_ duty, and I intend no modification of my oft-expressed _personal_ wish that all men, everywhere could be free." Yet again, the president gave proslavery _and_ antislavery people something on which to pin their hopes. What was he thinking? If Lincoln left any personal notes on the matter, they have yet to see the light of day. His motives remain the subject of debate. WAS HE CONFUSED? BEING SHREWD? AND WAS THIS letter, which Lincoln sent to that D.C. newspaper, in part about damage control? The day before Lincoln's letter to Greeley ran in the _Intelligencer_ , Greeley's _Tribune_ told its readers something Lincoln could hardly have wanted broadcast. "In justice to all parties," announced the _Tribune_ on August 22, "it seems proper to state the following, which we learn from so many sources that it can no longer be considered a state secret." The newspaper reported that Lincoln had recently shared with his cabinet a proclamation of emancipation "abolishing slavery wherever on the 1st of next December the rebellion" had not been "crushed." Of the seven cabinet members, one was absent, the _Tribune_ believed. Of the six present, four gave the proclamation their blessing. However, Lincoln changed his mind about issuing it because the secretary of state, William Seward, and the postmaster general, Montgomery Blair, had "opposed it with all their might." Greeley's newspaper had a number of things wrong, but quite a few things right. A month before, on July 22, Lincoln had indeed spoken with his cabinet, all seven members, about a proclamation on emancipation. It began with his reading his rough draft of a three-point decree. 1. 
Pursuant to the second Confiscation Act, he would warn all persons supporting the Confederacy (apart from specified Confederate leaders, whose property was already subject to seizure) that they risked having their property seized. 2. In pursuit of peace and restoration of the Union, the offer of compensation for gradual emancipation was still on the table. 3. "And, as a fit and necessary military measure," he, as commander in chief of the Union armed forces, would declare on January 1, 1863, people enslaved in Confederate territory "shall then, thenceforward, and forever, be free." Lincoln was prepared to go public with his proclamation without delay. As the _Tribune_ had reported, Seward did oppose the proclamation. It wasn't because the secretary of state was anti-emancipation. It was the timing that bothered him. The Union's General George B. McClellan had recently failed to capture Richmond in the Peninsula Campaign. If Lincoln issued the proclamation right then, the Union might appear weak, desperate, Seward feared. He thought it made more sense to announce the proclamation when the Union had something to crow about militarily. Above and below: Draft of the Emancipation Proclamation by President Abraham Lincoln, July 22, 1862. Lincoln shared this draft with his cabinet shortly after the border states said no to compensated, gradual emancipation. Secretary of the Navy Gideon Welles didn't promote or protest the proclamation during the meeting, but he later wrote of having qualms about such an "extreme exercise of war powers." Like Welles, Secretary of the Interior Caleb Smith, no fan of emancipation, kept his thoughts to himself during the meeting. In contrast, Montgomery Blair, the postmaster general, spoke his mind. He feared that the proclamation would rile the border states. Blair also felt that it would make whites elsewhere in the Union howl. As a result, Republicans would pay dearly in the midterm elections in the coming fall. 
_First Reading of the Emancipation Proclamation of President Lincoln_ (1864), oil on canvas by Francis Bicknell Carpenter. Left to right: Stanton, Chase ( _standing_ ), Lincoln, Welles, Smith, Seward ( _seated in foreground_ ), Blair, and Bates. Details in the painting include a copy of the Constitution ( _on the table between Lincoln and Seward_ ); a map of the slave population ( _on the floor behind Bates_ ); and a copy of the _New-York Daily Tribune_ ( _far left, behind Stanton's chair_ ). The July 22 meeting was held, it is believed, in Lincoln's office, roughly the space the Lincoln Bedroom occupies today in the White House. Secretary of the Treasury Salmon Chase, antislavery to the bone, supported the proclamation even though he opposed compensation for emancipation (as he did colonization). Also, Chase would have preferred something akin to a controlled demolition: let Union forces declare people forever free as they conquered land. In other words, let the second Confiscation Act do its work. The man who had replaced Simon Cameron as secretary of war, Edwin Stanton, was in favor of Lincoln issuing his proclamation right away. Stanton believed in denying the CSA black labor as much as he believed in black liberty. The remaining cabinet member, Attorney General Edward Bates, also had no problem with the proclamation (but at the time he did have a problem with Lincoln's insistence on _encouraging_ blacks to emigrate. Bates was for straight-out deportation). In the end, Lincoln decided to pocket his proclamation, to wait. While Lincoln waited, on July 25 he issued a proclamation, authorized under the second Confiscation Act, warning rank-and-file Confederates that if they did not cease and desist from rebellion within sixty days, their property—including people enslaved—was in jeopardy of being seized. While Lincoln waited, his emancipation edict was neither out of sight nor out of mind. He revised it. 
At one point, he added that "the effort to colonize persons of African descent upon this continent, or elsewhere will be continued." He also noted that he would recommend that at the end of the hostilities, people loyal to the Union be compensated for "all losses by acts of the United States, including the loss of slaves." While Lincoln waited, he also gave Secretary of War Stanton a secret go-ahead on the matter of black troops. In turn, Stanton, also on the quiet, authorized General Rufus Saxton in South Carolina to raise a black fighting force of up to five thousand men. Naturally, while Lincoln waited, he paid close attention to battlefield reports. More dead. More maimed. And no shining victory for the Union to claim. The bad news included Union defeat in the Second Battle of Bull Run. But then came the combat near Sharpsburg, Maryland, by Antietam Creek. There, on September 17, 1862, the Confederate general Robert E. Lee found his troops greatly outnumbered in phase one of his plan to invade the North. After a day of awful slaughter on both sides, the Battle of Antietam ended with CSA forces in retreat. OUR ARMS VICTORIOUS, proclaimed the _Boston Evening Transcript_ on September 19. GREAT VICTORY, the _New York Times_ boomed the next day. Lincoln's wait ended two days later, on Monday, September 22, 1862. "I think the time has come now," Treasury Secretary Chase recalled the president telling his cabinet on this day, the day of the expiration of that warning Lincoln had given on July 25. "I wish it were a better time," Lincoln continued. "I wish that we were in a better condition. The action of the army against the rebels has not been quite what I should have best liked. But they have been driven out of Maryland, and Pennsylvania is no longer in danger of invasion." He also said that at one point he promised himself and his "Maker" that if CSA troops were driven from Maryland, he would announce the Emancipation Proclamation. 
Detail of _The Aftermath at Bloody Lane_ (1889), oil on canvas by James Hope. "Bloody Lane" was a sunken road, eight hundred yards long, where some five thousand Union and Confederate soldiers lay dead and wounded by 1:00 p.m. on September 17, during the Battle of Antietam, also known as the Battle of Sharpsburg. All told, there were more than twenty thousand casualties. The artist was a captain in the Second Vermont Infantry. Because of illness, he didn't see action, but he was well enough to sketch battle scenes, which he later turned into panoramic paintings (twelve feet wide) like this one. To that end, the president then conferred with his cabinet on his latest draft of the document. After that, he went public with this decree. Lincoln put the Union, the Confederacy—the world—on notice that, come January 1, 1863, he would declare every captive black soul in CSA-controlled territory "forever free." GOD BLESS ABRAHAM LINCOLN! blared Greeley's _Tribune_ on September 23. _But_ the decree said (or implied) that if the Confederacy laid down its arms before the first of the year, "forever free" would be off the table. Lincoln wouldn't strike at slavery in the CSA. We on the side of black liberty, like those opposed to it, had up to one hundred days to wait. Abraham Lincoln, Preliminary Proclamation, September 1862. This is page two of the now four-page decree. _Writing the Emancipation Proclamation_ (ca. 1864), by Adalbert Johann Volck, a Confederate sympathizer. In Volck's etching, Lincoln looks sinister as he scribbles on a table with legs that taper into cloven hooves (associated with the devil in some cultures). Instead of respecting the Constitution, Lincoln uses it as a footrest. Also adding to the sense of menace: the little demon holding the inkwell, the bats or vultures outside the window, the curtain's vulture-head tieback. In the corner is the personification of America, Lady Columbia, with a cap over her face. 
This is an allusion to Lincoln's secret entry into Washington, D.C., right before his inauguration because of an alleged plot to kill him in Baltimore. It was rumored that part of his disguise was a Scottish cap. The small drawing on the back wall, "St. Ossawatomie," implies that Lincoln idolized John Brown. (In 1856, several months after the Pottawatomie massacre, John Brown lost a son and several other comrades in a battle with proslavery activists in Osawatomie, Kansas.) The larger drawing, "St. Domingo," alludes to the way blacks in Haiti achieved their independence from France: through violence, just as white colonists in America had gained theirs from England. But Volck was not focused on that similarity but rather on fueling fear of black people. The decanter and glasses on the small table suggest that the president was drinking as he penned the Emancipation Proclamation. _Abraham Lincoln Writing the Emancipation Proclamation_ (1863), oil on canvas by David Gilmour Blythe, a Union loyalist. The artist imagined Lincoln working on the Emancipation Proclamation disheveled and in his stocking feet. The messy room is full of symbolism and history. For starters, the window curtain is an upside-down American flag—not a sign of disrespect but a signal of distress. To the president's right is a pile of petitions, books (one on the Constitution), and other documents (one from the border states). On Lincoln's lap is a copy of the Constitution and the Bible. The couch is covered with petitions for emancipation and protests (including one about soldiers having to be bothered with "contrabands"). At Lincoln's feet are maps. The one of the Confederate states is held down by a rail-splitter's maul—a sledgehammer—used in making fence rails. (At one point, Lincoln was a rail-splitter, and during the 1860 presidential campaign some of his boosters seized upon the idea of promoting him as "the Rail Splitter," to increase his appeal to working-class voters.) 
Two items hold down the map of South Carolina: a bundle of rods ( _fasces_ , Latin), a symbol of power and authority from ancient Rome, and a ball of Greek fire (an ancient fire bomb). _Watch Meeting—Dec. 31st 1862—Waiting for the Hour_ (1863), oil on canvas by William Tolman Carlton. It's five minutes from midnight, and most eyes are on the elderly man's pocket watch, its fob in the shape of an anchor, a symbol of hope for Christians. Torchlit and nailed to the wall hangs a copy of the Emancipation Proclamation. PART III * * * **"THE TRUMP OF JUBILEE"** * * * "WE WERE WAITING AND LISTENING AS FOR A BOLT FROM the sky, . . . we were watching, as it were, by the dim light of the stars, for the dawn of a new day; we were longing for the answer to the agonizing prayers of centuries." So remembered Frederick Douglass. Eight o'clock. Nine o'clock. Ten o'clock. Hour by hour, Douglass, Anna E. Dickinson, J. Sella Martin, and the rest of the crowd at Tremont Temple clung to hope, while braced for a blow. We had never taken anything for granted. Bernard Kock to Abraham Lincoln, Saturday, October 4, 1862. This note accompanied the calling card of developer Bernard Kock, so-called governor of Île à Vache (Cow Island), off the coast of Haiti. Kock convinced Lincoln that Île à Vache was an ideal place to colonize blacks from America. Chiriquí no longer looked so good. For starters, the Chiriquí Improvement Company's claim of land ownership proved suspect and reports of high-quality coal on the land false. On New Year's Eve 1862, while many were praying for the Emancipation Proclamation, Lincoln contracted with Kock to relocate five thousand blacks to Île à Vache for fifty dollars per person. In the end, only 450 people went. During the hundred days' wait, we had heard reports of Union soldiers grumbling about Lincoln's emancipation decree: sucking their teeth over the prospect of fighting not only for the Union but also for black liberty.
We had also witnessed Republican defeats in the midterm elections. Losses included more than thirty seats in the House of Representatives and the governorship of New York. There, the winner, Horatio Seymour, had roundly condemned the Emancipation Proclamation. And there was a puzzling, troubling proposal in Lincoln's December 1862 Annual Message to Congress. The president advocated amending the Constitution to (1) make "forever free" enslaved people who had experienced "actual freedom" during the war (with compensation to their owners, provided they were Union loyalists); (2) empower Congress to fund "and otherwise provide" for black emigration anywhere outside America; and (3) compensate _any_ state that agreed to abolish slavery by January 1. But not in 1863. In 1900. If what he proposed became constitutional law, the war would end "now" and the Union would be saved "forever," Lincoln argued. And he closed on a soaring note, what many historians regard as his greatest peroration, that is, the end of a speech: Fellow-citizens, _we_ cannot escape history. We of this Congress and this administration, will be remembered in spite of ourselves. . . . The fiery trial through which we pass, will light us down, in honor or dishonor, to the latest generation. . . . We know how to save the Union. . . . In _giving_ freedom to the _slave_ , we _assure_ freedom to the _free_ —honorable alike in what we give, and what we preserve. We shall nobly save, or meanly lose, the last best, hope of earth. Other means may succeed; this could not fail. The way is plain, peaceful, generous, just—a way which, if followed, the world will forever applaud, and God must forever bless. What Lincoln recommended was not to be a substitute for the Emancipation Proclamation. Still, Congress did not oblige him. On the night of January 1, 1863, at Tremont Temple, pocket watches were moving on eleven o'clock when word spread that Lincoln had signed the proclamation of liberation.
One very impatient member of the crowd, Judge Thomas Russell, dashed to the offices of the _Boston Journal_ to see if the decree had indeed come across that newspaper's wires. Finding that it had, and against the protest of the night editor, the judge snatched the dispatch, then bolted. When he reached Tremont Temple with proof of the proclamation, people whooped and hollered, shouted and sobbed. Bowlers and bonnets confettied the air. Frederick Douglass soon led the crowd in John Brown's favorite hymn, "Blow Ye the Trumpet, Blow." Earlier, when word that the proclamation was coming across the wires reached Boston's Music Hall, "shouts arose, hats and handkerchiefs were waved, men and women sprang to their feet," reported _The Liberator_. This crowd also hip-hip-hoorayed William Lloyd Garrison. * * * **I BREAK YOUR** _**bonds and masterships, And I unchain the slave: Free be his heart and hand henceforth, As wind and wandering wave.**_ —from Ralph Waldo Emerson's poem "Boston Hymn," which he read early on during the wait at the Music Hall * * * Boston-born poet, essayist, and philosopher Ralph Waldo Emerson, ca. 1860. Like Emerson, Garrison would join Wendell Phillips and others at George and Mary Stearns's home in Medford for a John Brown party. The highlight was the special showing of a marble bust of their martyred hero. Another delight was Emerson reciting his "Boston Hymn" and Julia Ward Howe her poem "The Battle Hymn of the Republic," by then a beloved Union marching song. DOWN IN THE NATION'S CAPITAL, Henry McNeal Turner, pastor of Israel Bethel church, had been in the throng at the _Evening Star_ , waiting for a hot-off-the-press copy of that newspaper with the proclamation. Once Turner got hold of a copy, he tore down Pennsylvania Avenue. "I ran as for my life, and when the people saw me coming with the paper in my hand they raised a shouting cheer that was almost deafening." 
Back in his pulpit, Turner was so breathless that he ended up handing to another the honor of reading the Emancipation Proclamation aloud. It said nothing about _gradual_ liberation. Nothing about compensation for a single slaveholder. Nothing about forcing—or even urging—blacks to emigrate. What's more, Lincoln had signed off on black men serving as soldiers in the Union Army. During the wait, Lincoln had done quite a bit of revising. Henry McNeal Turner, pastor of D.C.'s first African Methodist Episcopal church. Turner, a native of South Carolina, had never known slavery. This engraving appeared in the December 12, 1863, issue of _Harper's Weekly_. It accompanied an article on Turner's commission as the first black chaplain in the U.S. armed forces. BY THE PRESIDENT OF THE UNITED STATES OF AMERICA A PROCLAMATION Whereas on the 22d day of September, A.D. 1862, a proclamation was issued by the President of the United States, containing, among other things, the following, to wit: * * * Below, Lincoln quotes parts of the preliminary proclamation. * * * "That on the 1st day of January, A.D. 1863, all persons held as slaves within any State or designated part of a State the people whereof shall then be in rebellion against the United States shall be then, thenceforward, and forever free; and the executive government of the United States, including the military and naval authority thereof, will recognize and maintain the freedom of such persons and will do no act or acts to repress such persons, or any of them, in any efforts they may make for their actual freedom.
"That the executive will on the 1st day of January aforesaid, by proclamation, designate the States and parts of States, if any, in which the people thereof, respectively, shall then be in rebellion against the United States; and the fact that any State or the people thereof shall on that day be in good faith represented in the Congress of the United States by members chosen thereto at elections wherein a majority of the qualified voters of such States shall have participated shall, in the absence of strong countervailing testimony, be deemed conclusive evidence that such State and the people thereof are not then in rebellion against the United States." * * * Lincoln now proceeds with his final proclamation. * * * Now, therefore, I, Abraham Lincoln, President of the United States, by virtue of the power in me vested as Commander in Chief of the Army and Navy of the United States in time of actual armed rebellion against the authority and Government of the United States, and as a fit and necessary war measure for suppressing said rebellion, do, on this 1st day of January, A.D. 1863, and in accordance with my purpose so to do, publicly proclaimed for the full period of one hundred days from the day first above mentioned, order and designate as the States and parts of States wherein the people thereof, respectively, are this day in rebellion against the United States the following, to wit: Arkansas, Texas, Louisiana (except the parishes of St. Bernard, Plaquemines, Jefferson, St. John, St. Charles, St. James, Ascension, Assumption, Terrebonne, Lafourche, St. Mary, St. 
Martin, and Orleans, including the city of New Orleans), Mississippi, Alabama, Florida, Georgia, South Carolina, North Carolina, and Virginia (except the forty-eight counties designated as West Virginia, and also the counties of Berkeley, Accomac, Northampton, Elizabeth City, York, Princess Anne, and Norfolk, including the cities of Norfolk and Portsmouth), and which excepted parts are for the present left precisely as if this proclamation were not issued. * * * The paragraph above states where the Proclamation applies: All eight states that are still in rebellion: Alabama, Arkansas, Florida, Georgia, Mississippi, North Carolina, South Carolina, and Texas. Certain parts of Louisiana and Virginia are exempted because they are under Union occupation and so technically no longer in rebellion. The Proclamation does not apply to Tennessee for the same reason. However, Lincoln does not exempt some other Union-occupied places, most notably the South Carolina sea islands, where black troops were recruited and where about ten thousand blacks live. Given the Union troop presence in the area, the Proclamation can be enforced there, and so freedom is immediate. Another estimated forty thousand people are also actually freed because they, too, are in Union-occupied territory where the Proclamation applies. Of the roughly four million people enslaved, the Proclamation doesn't apply to about eight hundred thousand (about five hundred thousand of whom are in the border states). * * * And by virtue of the power and for the purpose aforesaid, I do order and declare that all persons held as slaves within said designated States and parts of States are, and henceforward shall be free, and that the executive government of the United States, including the military and naval authorities thereof, will recognize and maintain the freedom of said persons. * * * Unlike earlier versions, the Proclamation does not declare anyone free "forever." 
This no doubt reflects Lincoln's acute awareness that his war measure didn't guarantee anything and that there was no telling what might happen after the war. * * * And I hereby enjoin upon the people so declared to be free to abstain from all violence, unless in necessary self-defense; and I recommend to them that in all cases when allowed they labor faithfully for reasonable wages. * * * Blacks are urged to not riot and to not shrink from work for fair pay. * * * And I further declare and make known that such persons of suitable condition will be received into the armed service of the United States to garrison forts, positions, stations, and other places and to man vessels of all sorts in said service. * * * Blacks can be soldiers in the Union Army. * * * And upon this act, sincerely believed to be an act of justice, warranted by the Constitution upon military necessity, I invoke the considerate judgment of mankind and the gracious favor of Almighty God. * * * This reiterates the _official_ reason for the proclamation—"military necessity"—followed by some noble sentiments. Treasury Secretary Chase had suggested this when Lincoln met with his cabinet to review the document one last time. Earlier, Charles Sumner had told Lincoln that the proclamation had to say something about "'justice' & 'God.'" * * * In witness whereof I have hereunto set my hand and caused the seal of the United States to be affixed. Done at the city of Washington, this 1st day of January, A.D. 1863, and of the Independence of the United States of America the eighty-seventh. Abraham Lincoln By the President: William H. Seward Secretary of State _Emancipation Day in South Carolina_ from the January 24, 1863, issue of _Frank Leslie's Illustrated Newspaper_. 
Festivities were under way earlier in the day, in Port Royal, South Carolina, home of the First South Carolina Volunteers: the first fruits of the order on raising black troops that Secretary of War Stanton had given General Saxton months earlier. On New Year's Eve 1862, Lincoln had alerted Saxton and other top military men that he would be signing the Emancipation Proclamation. So, on New Year's Day 1863, while most of the nation was still waiting, the Port Royal celebration started, around noon, in a grove of live oaks. After prayer, there was a reading aloud of a near-final draft of the proclamation by William Henry Brisbane, a slaveholder turned abolitionist some thirty years earlier. The Port Royal program didn't go exactly as planned. At one point, blacks went off-script. It started with one man. "There suddenly arose, close beside the platform, a strong male voice (but rather cracked and elderly)," remembered Colonel Thomas Wentworth Higginson. And, in a flash, two black women joined in. The three were singing a patriotic hymn: _My country, 'tis of thee_ _Sweet land of liberty_ _Of thee I sing_ . . . Thomas Wentworth Higginson, commander of the First South Carolina Volunteers. This Massachusetts native was a member of the John Brown Secret Six. "People looked at each other," wrote Higginson, "and then at us on the platform, to see whence came this interruption." No matter the dismay of those on the dais, the singers didn't waver, didn't flinch. "Firmly and irrepressibly, the quavering voices sang on, verse after verse." Soon, other blacks were singing, too. And if Harriet Tubman was on the scene, as is believed, surely she gave her all to the song. As whites also began to sing, Higginson signaled them to hush—to just listen. "I never saw anything so electric; it made all other words cheap; it seemed the choked voice of a race at last unloosed." Charlotte Forten (ca. 1870), a member of one of Philadelphia's most prominent black families. 
(Her wealthy grandfather, James Forten, had given William Lloyd Garrison moral and financial support during _The Liberator_ 's early days.) In the fall of 1862, Charlotte Forten headed south to teach "contrabands" on St. Helena, South Carolina, and thus to take part in the Port Royal Experiment. In this program, started earlier in the year after the Union capture of Port Royal Island, Northern reformers committed money and supplies—and, for some, their teaching skills—to aid blacks whose owners had fled Port Royal and other sea islands. The goal of the Port Royal Experiment was twofold: to help these people build new lives and to prove to white skeptics that freed blacks could be absorbed into society. * * * **THE MOST GLORIOUS** _**day this nation has yet seen,**_ **I** _**think . . . I**_ **cannot** _**give a regular chronicle of the day. It is impossible. I was in such a state of excitement. It all seemed, and seems still, like a brilliant dream. . . . As I sat on the stand and looked around on the various groups, I thought I had never seen a sight so beautiful. There were the black soldiers . . . and crowds of lookers-on, men, women and children, grouped in various attitudes, under the trees. The faces of all wore a happy, eager, expectant look.**_ —Charlotte Forten in her diary, remembering the Port Royal celebration * * * W. G. Jackman's engraving of Sandy Cornish from _After the War: A Southern Tour_ (1866), by Whitelaw Reid. Festivals of freedom abounded in the days to come: in Chicago, Illinois; in Columbus, Ohio; in Philadelphia, Pennsylvania, and in Pittsburgh, too. There were even celebrations where the Emancipation Proclamation _did not_ apply—in Norfolk, Virginia, for example, which was exempt because it was already under Union control. There, according to a _New York Times_ reporter based at Fort Monroe, some four thousand blacks "paraded through the principal streets" behind a fife and drum band. 
About a thousand miles south, in Union-occupied Key West, Florida, a similar parade was headed by Sandy Cornish, who some twenty years earlier had hacked at his own body to avoid being re-enslaved. At jubilee after jubilee, President Lincoln was lionized. People sang his praises. _Why?_ Some then and many in the future would frown. Why did blacks, especially, lift up Lincoln's name? Hadn't he lobbied for them to leave the nation? Didn't the Emancipation Proclamation declare blacks free where the U.S. government had no actual power but left so many enslaved where it did—most notably in the border states? All true. But as Frederick Douglass wrote of that long day's wait, "It was not logic, but the trump of jubilee, which everybody wanted to hear." We weren't stupid. Like Lincoln, we knew the proclamation was a _war_ measure and not an ironclad law. So we knew that the fight for absolute abolition still had to be waged. But on Thursday, January 1, 1863, so many who believed in freedom looked beyond the proclamation's "whereas" and "whereof," the geographic particulars, the tenses and technicalities. In a leap of faith, we claimed this decree as the dawn of a new day, as an amen!—a thunderbolt "So be it!"—to all those "agonizing prayers of centuries." * * * **IT SHALL FLASH** _**through coming ages;**_ _**It shall light the distant years;**_ _**And eyes now dim with sorrow**_ _**Shall be clearer through their tears.**_ —from Frances E. W. Harper's "President Lincoln's Proclamation of Freedom" (ca. 1863) * * * _Freed Negroes Spreading the News of President Lincoln's Emancipation Proclamation_ (Les Nègres Affranchis colportant le décret d'affranchissement du Président Lincoln), from the March 21, 1863, issue of the French newspaper _Le Monde Illustré_. The scene depicted occurred near Winchester, Virginia.
EPILOGUE * * * **SLAVERY HAS EXISTED** _**in this country too long and has stamped its character too deeply and indelibly, to be blotted out in a day or a year, or even in a generation. The slave will yet remain in some sense a slave, long after the chains are taken from his limbs, and the master will yet retain much of the pride, the arrogance, imperiousness and conscious superiority, and love of power, acquired by his former relation of master. Time, necessity, education, will be required to bring all classes into harmonious and natural relations. . . .**_ _**Law and the sword can and will, in the end abolish slavery. But law and the sword cannot abolish the malignant slaveholding sentiment which has kept the slave system alive in this country during two centuries. Pride of race, prejudice against color, will raise this hateful clamor for oppression of the negro as heretofore. The slave having ceased to be the abject slave of a single master, his enemies will endeavor to make him the slave of society at large.**_ —Frederick Douglass at Spring Street AME Zion Church in Rochester, New York, on December 28, 1862 * * * Minds mightier than mine have debated and will continue to debate the answer to the question "Who freed the slaves?" Lincoln? The abolitionists? The Republican-dominated Congress? The Union Army? The stiff-necked border states (by not saying yes to compensated, gradual emancipation)? The slaveholding Southern aristocracy (the force behind secession)? The enslaved people themselves? God? Or, as historian Lerone Bennett Jr. has pondered, was it History? And there's that other great debate: Was Lincoln wrapped tight in racism, or was he shedding that skin as the war ensued? Or was he really not a racist at all, but simply "a man of his times"? While I can appreciate the debate intellectually, I'm not consumed by it.
Nor do I fume over the fact that the Emancipation Proclamation is a dull document that makes our eyes glaze over, possessing "all the moral grandeur of a bill of lading," as the historian Richard Hofstadter famously put it. Considering that it is a military order, I can't fault it for being devoid of passionate prose. And if truth be told, I don't care if Lincoln issued the Emancipation Proclamation as a military necessity, as a moral necessity, or because he had a dream. For me, what matters is that people in bondage—folk from whom I descend—were at long last freed. Lincoln's Emancipation Proclamation was a milestone in the march to final freedom. It changed so much. The Emancipation Proclamation emboldened more enslaved children and adults—one by one, two by two, in groups—to walk, run, sail, swim to Union lines, claiming freedom and creating a labor drain and more chaos in the Confederacy. The Emancipation Proclamation fired up nearly 200,000 black men—134,000 from the CSA and the border states—to join the Union armed forces, ready to prove themselves battle-brave, and well aware that if the Confederacy won, final freedom might be a far-off thing. _Lee Surrendering to Grant at Appomattox_ (ca. 1870), oil on paperboard by Alonzo Chappel. When the CSA's top general, Robert E. Lee, surrendered to the Union's top general, Ulysses S. Grant, on April 9, 1865, in Appomattox, Virginia, the Civil War was essentially over. Harriet Tubman in her early nineties (ca. 1912), roughly fifty years after the Emancipation Proclamation was issued. This photograph was taken outside her home in Auburn, New York. As well, the Emancipation Proclamation spoke to the "better angels" of many white men in the Union—men more willing than some first imagined to give life and limb for both preservation of the Union _and_ black liberty. It prompted more than a few whites not heretofore in Union blue to go marching off to war—a war the Union won. 
Finally, the Emancipation Proclamation was the prologue for the actual Thirteenth Amendment: not the one a panicked, scrambling Congress, hoping to prevent a civil war, passed back in March 1861—the one that would have banned the legislative branch from ever abolishing slavery in the states, the amendment that never went anywhere because only two states ratified it (Maryland and Ohio, with Illinois endorsing it during a constitutional convention). No, not that original Thirteenth Amendment but the one that came four years later, ending chattel slavery in America—an amendment Lincoln vigorously supported. Given all this, I can't help but honor Lincoln's Emancipation Proclamation—can't help but respect what it meant to all those who waited so long for that trump of jubilee, that dawn of liberty: souls who knew about those agonizing prayers of centuries in a way that I never will. THE THIRTEENTH AMENDMENT * * * This amendment passed in Congress on January 31, 1865. It was ratified by the necessary three-fourths of the states on December 6, 1865. * * * SECTION 1. Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction. SECTION 2. Congress shall have power to enforce this article by appropriate legislation. Fifty-five-year-old Abraham Lincoln on February 5, 1865, photographed by Alexander Gardner. Weeks later, on April 11, the president gave a speech from a White House balcony on the nation's reunion, now that the war was over. Lincoln expressed support for some black men—"the very intelligent" and "those who serve our cause as soldiers"—having the right to vote. "That is the last speech he will ever make," one member of the crowd reportedly said. It was John Wilkes Booth, a Confederate sympathizer from the border state of Maryland. Three days later, on Good Friday, Booth fatally shot the president.
TIMELINE _This Civil War timeline is not comprehensive but rather offers some of the key events. Space does not permit inclusion of all the major military battles_. **1860** FEBRUARY 27 Lincoln rails against slavery in the territories at New York City's Cooper Institute. APRIL 23—MAY 3 At a convention in Charleston, South Carolina, the Democratic Party splits over slavery in the territories. One faction (mostly Northerners) insists that it should be a matter of popular sovereignty. The other faction (mostly Southerners) wants the party to stand for a guarantee that slaveholders can move into the territories with their human property. MAY 16—18 At its convention in Chicago, the Republican Party chooses Lincoln as its presidential candidate and Senator Hannibal Hamlin of Maine as his running mate. JUNE 18—23 At a convention in Baltimore, "Popular Sovereignty" Democrats pick Lincoln's old rival, Illinois Senator Stephen Douglas, as their presidential candidate and Herschel Vespasian Johnson, former governor of Georgia, as his running mate. JUNE 28 At a convention in Richmond, Virginia, the faction of largely Southern Democrats picks the sitting vice president, John Cabell Breckinridge of Kentucky, as the party's presidential candidate and Senator Joseph Lane of Oregon as his running mate. NOVEMBER 6 The Lincoln-Hamlin ticket wins the presidential election. DECEMBER 3 A proslavery mob attacks Frederick Douglass, William Lloyd Garrison, and other abolitionists in Boston's Tremont Temple. This happens during a commemoration of John Brown. DECEMBER 20 South Carolina secedes. **1861** JANUARY 9—26 Mississippi, Florida, Alabama, Georgia, and then Louisiana secede. JANUARY 29 Kansas joins the U.S. as a free state. FEBRUARY 1 Texas issues its Ordinance of Secession. FEBRUARY 4—9 The CSA is formed with Jefferson Davis as president during a convention in Montgomery, Alabama, the Confederacy's first capital. 
MARCH 2 Congress approves a thirteenth amendment that would prevent Congress from abolishing slavery in any state where it exists. This amendment will go nowhere because only two state legislatures ratify it: Ohio in May 1861, and Maryland in January 1862. It was also in 1862 that Illinois approved the amendment, but it did so at a constitutional convention and therefore on legally shaky ground. MARCH 4 Lincoln is inaugurated the sixteenth U.S. president. In his address, he pushes for peace. APRIL 12—14 Fort Sumter is attacked, surrenders to CSA forces, and is evacuated. APRIL 15 Lincoln issues a proclamation calling state militias to send a total of 75,000 troops to suppress the rebellion. He also orders Congress to return to work on July 4. APRIL 17 Virginia issues its Ordinance of Secession. APRIL 19 A pro-CSA mob attacks U.S. troops in Baltimore. _Also on this day:_ Lincoln issues a proclamation for a naval blockade of ports in South Carolina, Georgia, Alabama, Florida, Mississippi, Louisiana, and Texas. APRIL 27 Lincoln extends the naval blockade to include Virginia and North Carolina. Though North Carolina hasn't yet seceded, its officials are seizing federal property (just as in the states that have seceded), refusing to hand over revenue due, and in other ways engaging in rebellion. _Also on this day:_ Lincoln suspends the writ of habeas corpus along troop routes between Philadelphia and Washington, D.C., thus allowing the army to jail subversives without charges. (In September 1862, Lincoln will suspend the writ of habeas corpus throughout America.) MAY 6 Arkansas secedes. MAY 13 Baltimore is put under martial law. MAY 20 North Carolina secedes. _Also on this day:_ Kentucky proclaims neutrality. MAY 24 U.S. General Benjamin Butler defies the Fugitive Slave Law at Fort Monroe. MAY 29 Jefferson Davis arrives in Richmond, Virginia, as the CSA capital moves there from Montgomery, Alabama. JUNE 8 Tennessee secedes. 
JUNE 11—25 Representatives from western counties in Virginia meet in Wheeling to officially repudiate Virginia's secession. They declare themselves the "Restored Government of Virginia." JULY 21 First Battle of Bull Run/Manassas (Virginia): The CSA victory disabuses people in the North of the notion that suppressing the rebellion will be a cakewalk. AUGUST 6 U.S. Congress passes and Lincoln signs into law the first Confiscation Act. AUGUST 30 U.S. General John C. Frémont, chief of the Union Army's Western Department, puts Missouri under martial law and declares free those people held in slavery by supporters of the rebellion. SEPTEMBER 11 Lincoln rescinds Frémont's freedom decree. DECEMBER 3 In his Annual Message to Congress, Lincoln proposes colonizing blacks. DECEMBER 16 Inaugural meeting of the Emancipation League, cofounded by Wendell Phillips. **1862** JANUARY 10 The _New York Times_ reports: "The President shall acquire in Mexico, South America, Central America, or islands in the Gulf of Mexico, lands, or the right of settlement on lands, to which emancipated slaves shall be transported, single persons receiving forty acres of land, and married persons eighty acres." MARCH 6 Lincoln asks the U.S. Congress to support compensated, gradual emancipation. MARCH 13 U.S. Congress passes and Lincoln signs into law an additional article of war, banning military officers from returning people who escaped slavery to their owners. APRIL 10 U.S. Congress signs off on compensated, gradual emancipation. APRIL 11 U.S. Congress passes the D.C. Emancipation Act, which includes compensation for slaveholders and $100,000 for colonizing blacks. APRIL 16 Slavery is abolished in Washington, D.C., when Lincoln signs the Emancipation Act into law. Scholars debate why he waited five days. MAY 9 U.S. General David Hunter, commander of the Department of the South, declares South Carolina, Georgia, and Florida under martial law and the people enslaved in those states free. 
MAY 13 Robert Smalls, with family members and friends, escapes slavery in South Carolina aboard the CSA vessel _Planter_. He will become captain of the boat as a member of the Union Navy. MAY 19 Lincoln revokes General Hunter's freedom decree of May 9. JUNE 19 U.S. Congress passes and Lincoln signs into law the bill prohibiting slavery in U.S. Territories. JULY 12 Lincoln meets with representatives of the border states on compensated, gradual emancipation. JULY 14 The majority of representatives from the border states reject compensated, gradual emancipation. JULY 17 U.S. Congress passes and Lincoln signs into law the second Confiscation and the Militia Acts. JULY 22 Lincoln discusses the Emancipation Proclamation with his cabinet. SUMMER J. Sella Martin buys his sister and her daughter and son out of slavery in Columbus, Georgia, for about $2,000. He raised the money while lecturing in England. CIRCA AUGUST The Contraband Relief Association is founded to provide material support and other help to blacks flooding into Washington, D.C., in their escapes from slavery. The organization is spearheaded by Elizabeth Keckley, dressmaker of the Union's First Lady, Mary Todd Lincoln. Keckley had purchased her and her son's freedom in the 1850s (in Missouri). AUGUST 14 At the White House, Lincoln talks to a black delegation about colonization. AUGUST 20 Horace Greeley publishes his letter to Lincoln, "The Prayer of Twenty Millions," in his newspaper, the _New-York Tribune_ , taking him to task for not moving boldly against slavery. AUGUST 23 Lincoln's response to Greeley is published. Its most famous passage: "My paramount object in this struggle _is_ to save the Union, and _is not_ either to save or to destroy slavery." AUGUST 25 Secretary of War Stanton authorizes General Rufus Saxton in South Carolina to raise black troops. SEPTEMBER 17 Battle of Antietam/Sharpsburg (Maryland): Union forces under the command of General George B. 
McClellan stop the invasion of Union soil by CSA forces under the command of General Robert E. Lee. SEPTEMBER 22 Lincoln issues the preliminary Emancipation Proclamation. OCTOBER 22 Charlotte Forten sails from a New York port to South Carolina, where she will teach "contrabands" on St. Helena. DECEMBER 1 In his Annual Message to Congress, Lincoln advocates compensation to Union loyalists for the loss of enslaved people; compensated, gradual emancipation; and colonization. DECEMBER 31 Lincoln contracts with Bernard Kock to transport blacks to Île à Vache, off Haiti's southwestern peninsula. **1863** JANUARY 1 Lincoln issues the final **EMANCIPATION PROCLAMATION**. JANUARY 12 In a message to his Congress, CSA President Jefferson Davis declares the Emancipation Proclamation an abomination that makes reunion "forever impossible." FEBRUARY 24 Frederick Douglass becomes a recruiter for black regiments. (Two of his sons, Charles and Lewis, will serve in one: the Fifty-fourth Massachusetts Infantry.) APRIL 14 About 450 blacks sail from Fort Monroe aboard the _Ocean Ranger_ to start a colony on Île à Vache, Haiti. MAY 1 The CSA Congress authorizes the execution or enslavement of captured black soldiers. The CSA has already declared white officers of black troops worthy of death. MAY 6 Abolitionists gather at Boston's Tremont Temple to celebrate and hear an address from Thomas Simms. A few weeks earlier, in Mississippi, this black man (along with his family and friends) made it to Union troops under the command of General Ulysses S. Grant (about to start the siege of Vicksburg). After Simms and company gave Grant's forces intelligence on the Confederates, they headed for Boston with a safe-passage certificate from Grant. Back in 1851, Simms was arrested in Boston for escaping slavery. His return to the South and to slavery was something Wendell Phillips and other abolitionists tried to stop.
JUNE 2 With Union Colonel James Montgomery, Harriet Tubman leads about three hundred black troops in a raid along the Combahee River (near Beaufort, South Carolina). It results in the destruction of a CSA supply depot and other property and the liberation of about seven hundred blacks. JUNE 20 West Virginia enters the Union with a gradual emancipation plan in place, a condition Lincoln and Congress had imposed. AUGUST 10 Frederick Douglass meets with Lincoln to demand equal pay for black troops. White privates received $13 a month plus a clothing allowance of $3.50. Black troops, no matter their rank, received $10 a month, with $3 _deducted_ from their pay for clothing. OCTOBER 17 The _New York Times_ reports on blacks who left America to start a colony on Île à Vache, Haiti: "Information has reached here that these colonists were badly provided for, and many of them died from disease, while others fled to more desirable localities." NOVEMBER 6 Henry McNeal Turner becomes the first black chaplain in the Union Army, by Lincoln appointment. (Turner will serve with a black regiment for which his church has been a recruiting center.) NOVEMBER 19 In the ceremony dedicating the Gettysburg battlefield as a national cemetery, Lincoln delivers his shortest and most famous speech, the Gettysburg Address. It begins: "Fourscore and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty and dedicated to the proposition that all men are created equal." It ends with the hope that America "shall have a new birth of freedom, and that government of the people, by the people, for the people shall not perish from the earth." The Battle of Gettysburg (July 1–3, 1863), which put a stop to the CSA invasion of Northern soil, was the bloodiest battle of the war. Of the combined 158,000 Union and Confederate forces, there were some 51,000 casualties.
DECEMBER 2 The Capitol's dome is completed with the installation of the last sections of its crowning glory: the bronze Statue of Freedom. DECEMBER 8 Lincoln issues Proclamation of Amnesty and Reconstruction. It offers full pardon and return of property (except people) to most Confederates if they pledge allegiance to the Union and accept emancipation. It also outlines the process for the Confederate states to be restored to the Union. Most important requirement: a new state constitution that abolishes slavery. By now, the Union is in control of many CSA major ports and capital cities. **1864** JANUARY 16 Twenty-one-year-old Anna E. Dickinson makes history as the first woman to give a speech to the U.S. Congress. Her topic: "The Perils of the Hour." She takes Lincoln to task for his moderation but in the end expresses her support of him. Lincoln and the First Lady are in attendance for part of the speech. (In the fall of 1863, Dickinson stumped for Republican candidates. In the fall of 1864, she will do so again.) FEBRUARY 1 Lincoln directs Secretary of War Stanton to send a ship to Île à Vache to bring back the black colonists who wish to return. MARCH 16 In its new state constitution Union-held Arkansas abolishes slavery. MARCH 20 A little over 350 of the roughly 450 blacks who emigrated to Île à Vache return to the U.S. aboard the _Marcia C. Day_. APRIL 8 A new thirteenth amendment to the Constitution, one to abolish chattel slavery in America, passes in the U.S. Senate (38–6). JUNE 7—8 In a convention in Baltimore, Lincoln is nominated for a second term by the Republican Party (temporarily calling itself the National Union Party). JUNE 15 U.S. Congress passes a bill mandating that black soldiers receive the same pay as white soldiers. JUNE 28 U.S. Congress repeals the Fugitive Slave Law. JULY 1 (APPROXIMATELY) Garrison sends to Lincoln William Tolman Carlton's painting _Watch Meeting—Dec. 31st 1862—Waiting for the Hour_. 
It was purchased by a group of Boston women and given to the president at Garrison's urging. JULY 2 U.S. Congress passes a bill eliminating any further funding for colonization. SEPTEMBER 2 Fall of Atlanta: U.S. forces under the command of General William Tecumseh Sherman occupy the CSA's munitions "capital." SEPTEMBER 5 In its new state constitution Union-held Louisiana abolishes slavery. NOVEMBER 1 Slavery is abolished in Maryland in a new state constitution approved in September. NOVEMBER 8 Lincoln wins reelection, beating the Democratic Party candidate General George B. McClellan, a former general-in-chief of the Union Army. Lincoln's vice president is the Tennessee Democrat Andrew Johnson, whom the president had made military governor of Tennessee back in 1862. NOVEMBER 15 Leaving Atlanta in ruins, and with a 62,000-man army, Union general William Tecumseh Sherman begins a campaign of destruction en route to coastal Georgia: his "March to the Sea." NOVEMBER 28 Frances E. W. Harper attends a celebration (at New York City's Cooper Institute) of the abolition of slavery in Maryland, where she was born. (She is on her way to Maryland, where she will spend time with Frederick Douglass, who had not set foot in his home state in about twenty years. Harper, recently widowed, will settle in Philadelphia and resume writing and lecturing for social justice.) DECEMBER 22 Sherman sends Lincoln a telegram from Georgia: "I beg to present you as a Christmas gift the city of Savannah with 150 heavy guns & plenty of ammunition & also about 25,000 bales of cotton." **1865** JANUARY 11 Missouri abolishes slavery within its borders. JANUARY 21 Garrison writes Lincoln, bewildered that the president never acknowledged receipt of the painting _Watch Meeting—Dec. 31st 1862—Waiting for the Hour_. Garrison knows (from Charles Sumner) that the painting is in the White House. JANUARY 31 U.S. House of Representatives finally approves the Thirteenth Amendment (119–56).
FEBRUARY 1 Lincoln's Illinois is the first state to ratify the Thirteenth Amendment. FEBRUARY 3 West Virginia abolishes slavery. FEBRUARY 5 Lincoln shares with his cabinet a plan for ending the war and slavery: offer all slaveholding states $400 million, half if hostilities cease on or before April 1 and half when the Thirteenth Amendment is adopted. (Lincoln calculated that $400 million was about the cost of one hundred days of war.) The cabinet opposes the scheme; the president scraps it. FEBRUARY 7 Lincoln sends Garrison a note apologizing for his tardy thank-you for _Watch Meeting—Dec. 31st 1862—Waiting for the Hour_ , which he calls a "spirited and admirable painting." FEBRUARY 22 Union-held Tennessee abolishes slavery in an amendment to its constitution. MARCH 3 Congress passes and Lincoln signs into law a bill creating America's first government-run social welfare agency: the Bureau of Refugees, Freedmen, and Abandoned Lands (known as the Freedmen's Bureau). MARCH 4 Lincoln delivers his second Inaugural Address. His closing begins: "With malice toward none; with charity for all; with firmness in the right, as God gives us to see the right, let us strive on to finish the work we are in; to bind up the nation's wounds." MARCH 13 CSA president Jefferson Davis signs into law a bill that allows blacks to serve in combat. APRIL 3 Union troops take Richmond, Virginia, the CSA capital. APRIL 9 The CSA's General Robert E. Lee surrenders to the Union's General Ulysses S. Grant. APRIL 11 From a White House balcony, Lincoln gives a speech on the nation's political reconstruction in which he expresses support for limited black suffrage. APRIL 14 While Lincoln and the First Lady watch a play at Washington, D.C.'s, Ford's Theatre, John Wilkes Booth shoots the president in the head. APRIL 15 Lincoln dies. Vice President Andrew Johnson becomes president.
MAY 9—10 Thirty-second Annual Convention of the American Anti-Slavery Society at the Church of the Puritans in New York City. William Lloyd Garrison calls for the organization to be dissolved because its mission—the abolition of slavery—has essentially been accomplished. Anna E. Dickinson, Frederick Douglass, and Wendell Phillips, among others, disagree. For them "mission accomplished" means achieving black civil and political rights. The society continues with Phillips as president. MAY 22 Following capture in Georgia, CSA president Jefferson Davis begins a two-year imprisonment in Virginia's Fort Monroe. JUNE 19 Blacks in Galveston, Texas, learn from Union soldiers that the war is over and that they are free. This is the genesis of the celebration Juneteenth. DECEMBER 6 The Thirteenth Amendment is ratified by the required three-fourths of the states (twenty-seven of thirty-six). On December 18, it officially becomes part of the Constitution. DECEMBER 29 The last issue of Garrison's _The Liberator_ is published. GLOSSARY _Definitions of words and phrases are given relative to their use in this book_. _abolitionist_ a person who advocates the end (or abolition) of slavery. _Annual Message to Congress_ today's State of the Union Address. _batteries_ places on which to mount guns or fortifications. _broadside_ a sheet of paper on which is printed an ad, a notice, political message, or other information for public consumption. _ca_. abbreviation for "circa," derived from the Latin word _circum_ meaning "around." _chattel_ personal property other than real estate and things tied to it like buildings. In chattel slavery, one person completely owns another. Other types of slavery include debt slavery, in which a person is forced to work for no wages until a debt is paid off. _colonization_ the act or process of establishing a colony within or without a nation's borders. 
_Compromise of 1850_ a series of laws intended to avoid a civil war over slavery that included the abolition of the slave trade (but not slavery) in Washington, D.C., and a Fugitive Slave Law that made it easier for slaveholders to regain their property from free soil. Also, decisions were made about slavery in territory gained as a result of the U.S.-Mexican War (1846–48): California was admitted to the Union as a free state. The rest was organized into Utah and New Mexico Territories, where slavery was to be a matter of popular sovereignty. _Confederacy_ shorthand for the Confederate States of America. _Confederate_ a supporter of the Confederate States of America. _confiscate_ to take or seize. _Congress (U.S.)_ national legislative, or lawmaking, body, made up of the Senate and the House of Representatives. _contrabands_ enslaved blacks who were seized by Union soldiers or given refuge with Union soldiers or other authorities. _Cooper Institute_ a college founded in 1859, in Lower Manhattan, to offer working-class people a free education. _disunion_ separation. _Dred Scott decision_ The March 6, 1857, U.S. Supreme Court ruling in which Dred and Harriet Scott were denied their freedom. The decision also declared that (1) blacks were not and never could be U.S. citizens and (2) the U.S. government could not regulate slavery in the territories. _Faneuil Hall_ Boston's most famous meeting hall. _free soil_ a state or territory in which slavery is illegal. _free state_ a state in which slavery is illegal. _freedom papers_ a document proving that a person had gained his or her liberty. In some places in the South, blacks who had never been enslaved had certificates attesting to this. It sometimes happened that a person who couldn't produce proof of his or her freedom was sold into slavery. _gradual emancipation_ freedom rolled out over time or at a set future date. 
_gutta-percha_ a hard natural rubber derived from a family of tropical trees that has been used to make golf balls and canes, among other things. _jubilee_ freedom. _Kansas-Nebraska Act_ an 1854 law that overturned the ban on slavery in territory above the 36°30' parallel as established by the Missouri Compromise of 1820. Instead, slavery in Kansas and Nebraska was to be a matter of popular sovereignty. _martial law_ military rule. _Missouri Compromise_ an 1820 law in which Maine, a former province of Massachusetts, entered the Union as a free state and Missouri as a slave state. As well, slavery was banned in the land acquired in the Louisiana Purchase above the 36°30' parallel (except Missouri). _popular sovereignty_ doctrine that the people living in a U.S. territory, not the federal government, should be the ones to decide whether to make slavery legal there. _proclamation_ a statement of a policy to the public at large. _Radical Republican_ a member of the Republican Party who advocated the immediate abolition of slavery everywhere in America before and during the Civil War and who during Reconstruction pressed for blacks to have civil and political rights. Not until the middle of the twentieth century would changes in the makeup of America's two major political parties lead to a situation in which the Democratic Party was the party more associated with black civil rights. _Rebels (Rebs)_ what pro-Union people called supporters of the CSA. _rump government_ a government that has no real authority or power. _secede_ withdraw. _Secret Six_ Also known as the Committee of Six, this was a group of men who financed John Brown's antislavery activities in Kansas and his raid on Harpers Ferry. Five of the men were from Massachusetts: physician Samuel Gridley Howe; manufacturer George Stearns; Congregational minister Thomas Wentworth Higginson; Unitarian minister Reverend Theodore Parker; and journalist Frank Sanborn. 
The sixth man was Gerrit Smith, a millionaire landowner in Peterboro, New York. _servile_ of or pertaining to someone enslaved. _slave state_ a state where slavery is legal. _slave trade_ the bringing of people into the nation or a locality within it for purposes of slavery. _territories_ lands possessed by the U.S. that are not yet states. _trump_ the sound of a trumpet; the announcement or proclamation of something joyous in the offing. _Union_ another name for the United States that grew in popularity during the Civil War. NOTES _Archaic spellings along with spelling and punctuation errors have been silently corrected_. this page "Abraham Lincoln was . . .": "Again, Lincoln," _The Crisis_ , September 1922, 200. this page "A highly secretive . . .": "In Search of the Real Abe Lincoln," _Midwest Today_ , February 1993. Posted at <http://www.midtod.com/bestof/abe.phtml>. this page "The problem is . . .": _The Fiery Trial_ , xix–xx. **Part I: "The Agonizing Prayers of Centuries"** this page "We were waiting . . .": _Life and Times of Frederick Douglass_ in _Frederick Douglass: Autobiographies_ , 791. this page on Cornish: "From Slavery to Freedom and Success: Sandy Cornish and Lillah Cornish," _Florida Keys Sea Heritage Journal_ , Spring 1994, 1, 12–15 and _After the War_ , 189–93. this page "I have often . . .": _Narrative of the Life of Frederick Douglass, an American Slave_ in _Frederick Douglass: Autobiographies_ , 18. this page "I never rise . . .": "Letter from William C. Nell" (quoting Garrison), _The Liberator_ , December 14, 1855, 198. this page "The Day of Jubilee": _Songs of the Free_ , 199. this page "We do not breathe . . .": "Address to Citizens of Concord," _Emerson's Antislavery Writings_ , 53. this page "Hit Him Again": "Embodied Eloquence, the Sumner Assault, and the Transatlantic Cable," _American Literature_ , vol 82, no. 3, September 2010, 491. this page "How long will . . .": "Southern Outrages," _The Liberator_ , February 22, 1856, 32. 
this page "We are here . . .": "The Boston Massacre, March 5, 1770," _The Liberator_ , March 12, 1858, 43. this page "I know that . . .": "Great Meeting in Boston," _The Liberator_ , December 9, 1859, 194. this page "monstrous injustice": "Speech at Peoria, Illinois," _Collected Works of Abraham Lincoln_ , vol. 2, 255. **Part II: "A Fit and Necessary Military Measure"** this page on Lincoln's racial views: "The Lincoln-Douglas Debates 4th Debate Part I," <http://teachingamericanhistory.org/library/index.asp?document=1048m>, "Fifth Debate: Galesburg, Illinois," <http://www.nps.gov/liho/historyculture/debate5.htm>, and "Lincoln's Black History," <http://www.nybooks.com/articles/archives/2009/jun/11/lincolns-black-history/> this page "I have no purpose . . .": "First Debate with Stephen A. Douglas at Ottawa, Illinois," _Collected Works of Abraham Lincoln_ , vol. 3, 16. this page "Let us have . . .": "Address at Cooper Institute, New York City," _Collected Works of Abraham Lincoln_ , vol. 3, 550. this page "the institution of negro . . .": _The Civil War and Reconstruction: A Documentary Collection_ , 436. this page "I know what . . .": "Progress," _Speeches, Lectures, and Letters_ , 384. these pages Lincoln's Inaugural Address: "First Inaugural Address—Final Text," _Collected Works of Abraham Lincoln_ , vol. 4, 262–71. this page "Are not these Northern people . . .": "Letter from Rev. J. Sella Martin," _Douglass' Monthly_ , June 1861, 469. this page "I am credibly . . .": _The Emancipation Proclamation: A Brief History with Documents_ , 44. this page on the number of blacks at Fort Monroe: "Imagined Promises, Bitter Realities" in _The Emancipation Proclamation: Three Views_ , 1. this page Bull Run data: "Manassas, First," CWSAC Battle Summaries, www.nps.gov/hps/abpp/battles/va005.htm. these pages first Confiscation Act: _From Property to Person_ , 253–54. this page "alarm our Southern . . .": "To John C. Fremont," _Collected Works of Abraham Lincoln_ , vol. 4, 506.
this page "I say, _emphatically_ . . .": _History of Kentucky_ , vol. 1, 87. this page "We cannot conquer . . . use it godlike": _Memoir and Letters of Charles Sumner_ , vol. 4, 42. this page "Frémont's proclamation has . . .": "The Hour and the Man," _The Liberator_ , September 20, 1861, 150. this page "I think Sumner . . . thunderbolt will keep": _Life and Public Services of Charles Sumner_ , 359–60. this page Population data: _The Negro's Civil War_ , 321. this page Lincoln's December 1861 Message to Congress: "Annual Message to Congress," _Collected Works of Abraham Lincoln_ , vol. 5, 48. this page "an ocean rolling between them" and "the elysium of . . .": "The Career of a Kansas Politician," _American Historical Review_ , vol. 4, October 1898, 101. this page "Slavery is the stomach . . .": "Cast Off the Mill Stone," _Douglass' Monthly_ , September 1861, 514. this page Stevens's speech: _The Congressional Globe_ , 37th Congress, 2nd Session, 440–41. this page "They may send . . .": _Harriet Tubman_ , 298–99. this page Lincoln's March 6, 1862, Message to Congress: "Message to Congress," _Collected Works of Abraham Lincoln_ , vol. 5, 144–45. this page "The late message . . .": "Turner on the President's Message," _The Christian Recorder_ , March 22, 1862, 46. this page on John Harry's petition and payments to slaveholders: "Sample of Petition for Owner Compensation," "List of Owners Who Filed Petitions," and "Final Report of the Commission for the Emancipation of Slaves," among DC Emancipation Documents posted at os.dc.gov/os/cwp/view,a,1207,q,640006.asp and "End Slavery in the Nation's Capital Booklet" posted at www.os.dc.gov/os/cwp/view,a,1207,q,608954.asp. this page "What I believe is . . .": "Twenty-Ninth Annual Meeting of the American Anti-Slavery Society," _The Liberator_ , May 16, 1862, 78. this page "The South, having . .
.": "The New England Anti-Slavery Convention," _The Liberator_ , June 6, 1862, Transcript at Accessible Archives: www.accessible.com. this page Lincoln's repeal of Hunter's freedom decree: "Proclamation Revoking General Hunter's Order of Military Emancipation of May 9, 1862," _Collected Works of Abraham Lincoln_ , vol. 5, 222–23. This document has within it a reprint of Hunter's proclamation. this page "more sacred. . . . You need more . . .": _The Works of Charles Sumner_ , vol. 7, 215. this page "Too big a lick": Donald, _Lincoln_ , 364. this page Lincoln's address to the Border State representatives: "Appeal to Border State Representatives to Favor Compensated Emancipation," _Collected Works of Abraham Lincoln_ , vol. 5, 317–19. this pages Majority reply from Border State: "Border State Congressmen to Abraham Lincoln," Monday July 14, 1862, posted at The Abraham Lincoln Papers at the Library of Congress, memory.loc.gov/ammem/alhtml/malhome.html. this page Militia Act: Freedmen & Southern Society Project, <http://www.history.umd.edu/Freedmen/milact.htm>. this pages second Confiscation Act: _From Property to Person_ , 257–61. this page Lincoln's address to black delegation: "Address on Colonization to a Deputation of Negroes," _Collected Works of Abraham Lincoln_ , vol. 5, 370–75. this page "great truth," "the mild servitude . . . ," "squelch them," and "caused the war": "What Is to be Done with the Negroes, and What with the Abolitionists?" _New York Herald_ , August 29, 1862, 4. this page "A spectacle, as . . .": "The President on African Colonization," _The Liberator_ , August 22, 1862, 134. this page "No, Mr. President . . .": "The President and His Speeches," _Douglass' Monthly_ , September 1862, 707. this page "Let the President . . .": "Mrs. Frances E. Watkins Harper on the War and the President's Colonization Scheme," _The Christian Recorder_ , September 27, 1862, 153. 
this page Greeley's letter to Lincoln: "Horace Greeley to Abraham Lincoln, August 19, 1862 (Clipping of Letter; endorsed by Lincoln)," posted at The Abraham Lincoln Papers at the Library of Congress, memory.loc.gov/ammem/alhtml/malhome.html. this pages Lincoln's letter to Greeley: "To Horace Greeley," _Collected Works of Abraham Lincoln_ , vol. 5, 388–89. this pages _Tribune_ report on cabinet meeting: "From Washington," _New-York Daily Tribune_ , August 22, 1862, 5. this page "extreme exercise of . . .": _Team of Rivals_ , 466. this page description of Carpenter painting: _United States Senate Catalogue of Fine Art_ , 116–21. this page "the effort to . . . loss of slaves": draft of the Preliminary Emancipation Proclamation. New York State Library, http://www.nysl.nysed.gov/library/features/ep/transcript.htm. this page "OUR ARMS VICTORIOUS": _Boston Evening Transcript_ , September 19, 1862, 1. this page "GREAT VICTORY": _New York Times_ , September 20, 1862, 1. this page "I think the time . . . Maker": Salmon P. Chase. Holograph journal, open to September 22, 1862. Salmon Chase Papers, Manuscript Division, Library of Congress (154) Digital ID #al0154. this page "GOD BLESS ABRAHAM LINCOLN!": _The Civil War: Primary Documents_ , 128. this page description of Volck etching: Vorenberg, _The Emancipation Proclamation_ , 66–67. this page description of Blythe painting: Miscellaneous materials provided by Carnegie Museum of Art. **Part III: "The Trump of Jubilee"** this page on _Watch Meeting_ : _Art in the White House_ , 158. this page "We were waiting . . .": _Life and Times of Frederick Douglass_ in _Frederick Douglass: Autobiographies_ , 791. this page Lincoln's December 1862 Message to Congress: "Annual Message to Congress," _Collected Works of Abraham Lincoln_ , vol. 5, 518–37. this page on Judge Russell at _Boston Journal_ : "The Emancipation Proclamation," _New York Times_ , October 16, 1884, 2. this page on Tremont Temple celebration: _Frederick Douglass_ , 215. 
this page on celebration at Music Hall: "Proclamation-Day in Boston," _The Liberator_ , January 9, 1863, 7. this page Emerson's "Boston Hymn": _The Liberator_ , January 30, 1863, 19. this pages on the Stearns' John Brown party: "How Boston Received the Emancipation Proclamation," _The American Review of Reviews_ , February 1913, 177–78. this page on Turner and the Emancipation Proclamation: _The Negro's Civil War_ , 50. this pages Annotations on the Proclamation: _The Fiery Trial_ , 241–44 and _Lincoln's Emancipation Proclamation_ , 178–81. this pages on celebration at Port Royal: _Army Life in a Black Regiment_ , 40–41. this page on Harriet Tubman probably at the Port Royal celebration: _Harriet Tubman_ , 54. this page "The most glorious . . .": _The Journals of Charlotte Forten Grimké_ , 428–29. this page "paraded through the principal streets": "News from Fortress Monroe," _New York Times_ , January 4, 1863, 5. this page on Cornish and the parade in Key West, Florida: "Interesting from Key West," _New York Herald_ , February 11, 1863, 8. this page "It was not logic . . .": _Life and Times of Frederick Douglass_ in _Frederick Douglass: Autobiographies_ , 791. this page Harper's poem: _A Brighter Coming Day_ , 186. **Epilogue** this page "Slavery has existed . . .": "Remarks of Frederick Douglass at Zion Church on Sunday 28, of Dec.," _Douglass' Monthly_ , January 1863, 770. this page "all the moral . . .": Guelzo, _Lincoln's Emancipation Proclamation_ , 2. this page "the very intelligent . . . cause as soldiers": "Last Public Address," _Collected Works of Abraham Lincoln_ , vol. 8, 403. this page "That is the last . . .": Donald, _Lincoln_ , 588. **Timeline** _January 10, 1862_ "The President shall acquire . . .": "News from Washington," _New York Times_ , January 10, 1862, 1. _January 12, 1863_ "forever impossible": "The Rebel Message," _New York Herald_ , January 17, 1863, 1. _October 17, 1863_ "information has reached . . 
.": "The Experimental Black Colony," _New York Times_ , October 17, 1863, http://www.nytimes.com/1863/10/17/news/washington-our-special-washington-dispatches-eastern-shore-counties-virginia.html?scp=1&sq=colonists&st=p. _November 19, 1863_ Battle of Gettysburg data: "Gettysburg," CWSAC Battle Summaries, http://www.nps.gov/history/hps/abpp/battles/pa002.htm. Gettysburg Address: "Bliss Copy," at <http://www.papersofabrahamlincoln.org/Gettysburg%20Address.htm>. _December 22, 1864_ Sherman's telegram: "William T. Sherman to Abraham Lincoln, Thursday, December 22, 1864," The Abraham Lincoln Papers at the Library of Congress (online). _February 7, 1865_ "spirited and admirable painting": "Abraham Lincoln to William Lloyd Garrison, [February 7, 1865]," The Abraham Lincoln Papers at the Library of Congress (online). _March 4, 1865_ "With malice toward none . . .": "Second Inaugural Address," _Collected Works of Abraham Lincoln_ , vol. 8, 333. SELECTED SOURCES Basler, Roy P., ed. _The Collected Works of Abraham Lincoln_. <http://quod.lib.umich.edu/l/lincoln/>. Bennett, Lerone, Jr. _Forced into Glory: Abraham Lincoln's White Dream_. Chicago: Johnson Publishing, 2000. Blair, William A., and Karen Fisher Younger, eds. _Lincoln's Proclamation: Emancipation Reconsidered_. Chapel Hill: University of North Carolina Press, 2009. Blassingame, John W. _Slave Testimony: Two Centuries of Letters, Speeches, Interviews, and Autobiographies_. Baton Rouge: Louisiana State University Press, 1996. Chapman, Maria Weston, ed. _Songs of the Free, and Hymns of Christian Freedom_. Boston: Isaac Knapp, 1836. Collins, Lewis and Richard H. _History of Kentucky_ , vol. 1. Covington, KY: Collins & Co., 1878. Cox, LaWanda. _Lincoln and Black Freedom: A Study in Presidential Leadership_. Columbia: University of South Carolina Press, 1993. Donald, David Herbert. _Lincoln_. New York: Simon & Schuster, 1995. Foner, Eric. _The Fiery Trial: Abraham Lincoln and American Slavery_. New York: W.W. Norton, 2010. 
______. _Forever Free: The Story of Emancipation and Reconstruction_. New York: Vintage, 2006. ______, ed. _Our Lincoln: New Perspectives on Lincoln and His World_. New York: W.W. Norton, 2009. Foster, Frances Smith. _A Brighter Coming Day: A Frances Ellen Watkins Harper Reader_. New York: The Feminist Press, 1990. Franklin, John Hope. _The Emancipation Proclamation_. Wheeling, IL: Harlan Davidson, 1995. Gates, Henry Louis, Jr., volume editor. _Frederick Douglass: Autobiographies_. New York: Library of America, 1994. Gienapp, William E., ed. _The Civil War and Reconstruction: A Documentary Collection_. New York: W.W. Norton, 2001. Goodwin, Doris Kearns. _Team of Rivals: The Political Genius of Abraham Lincoln_. New York: Simon & Schuster, 2006. Gougeon, Len and Joel Myerson, eds. _Emerson's Antislavery Writings_. New Haven: Yale University Press, 1995. Guelzo, Allen C. _Lincoln's Emancipation Proclamation: The End of Slavery in America_. New York: Simon & Schuster, 2006. Hanlon, Christopher. "Embodied Eloquence, the Sumner Assault, and the Transatlantic Cable," _American Literature_ , vol 82, no. 3, September 2010. 489–518. Higginson, Thomas Wentworth. _Army Life in a Black Regiment_. Boston: Field, Osgood & Co., 1870. Holzer, Harold, Edna Greene Medford, and Frank J. Williams. _The Emancipation Proclamation: Three Views_. Baton Rouge: Louisiana State University Press, 2006. Humez, Jean M. _Harriet Tubman: The Life and the Life Stories_. Madison: University of Wisconsin Press, 2003. Kachun, Mitch. _Festivals of Freedom: Memory and Meaning in African American Emancipation Celebrations, 1808–1915_. Amherst: University of Massachusetts Press, 2003. Kloss, William et al. _Art in the White House: A Nation's Pride_. New York: Abrams, 1994. Kloss, William and Diane K. Skvarla. U.S. Senate Commission on Art. _United States Senate Catalogue of Fine Art_. D.C.: Government Printing Office, 2002. Lester, C. Edwards. _Life and Public Services of Charles Sumner_. 
New York: United States Publishing Co., 1874. McFeely, William S. _Frederick Douglass_. New York: W.W. Norton, 1995. McPherson, James M. _Battle Cry of Freedom: The Civil War Era_. New York: Oxford University Press, 2003. ______. _The Negro's Civil War_. New York: Ballantine, 1991. Phillips, Wendell. _Speech, Lectures, and Letters_. Boston: Lee and Shepard, 1872. Pierce, Edward L., ed. _Memoir and Letters of Charles Sumner_ , vol. 4. 2nd ed. Boston: Roberts Brothers, 1894. Quarles, Benjamin. _Lincoln and The Negro_. New York: Da Capo Press, 1991. Reid, Whitelaw. _After the War: A Southern Tour, May 1, 1865 to May 1, 1866_. London: Sampson Low, Son & Marston, 1866. Risley, Ford. _The Civil War: Primary Documents on Events from 1860 to 1865_. Westport, CT: Greenwood Press, 2004. Schmidt, Lewis G. "From Slavery to Freedom and Success: Sandy Cornish and Lillah Cornish," _Florida Keys Sea Heritage Journal_ , Spring 1994, 1, 12–15. Siddali, Silvana R. _From Property to Person: Slavery and the Confiscation Acts, 1861_ – _1862_. Baton Rouge: Louisiana State University Press, 2005. Stevenson, Brenda, ed. _The Journals of Charlotte Forten Grimké_. New York: Oxford University Press, 1989. Striner, Richard. _Father Abraham: Lincoln's Relentless Struggle to End Slavery_. New York: Oxford University Press, 2007. Villard, Fanny Garrison. "How Boston Received the Emancipation Proclamation," _The American Review of Reviews_ , February 1913. Vorenberg, Michael. _The Emancipation Proclamation: A Brief History with Documents_. Boston: Bedford/St. Martin's, 2010. ______. _Final Freedom: The Civil War, the Abolition of Slavery, and the Thirteenth Amendment_. Cambridge: Cambridge University Press, 2004. ACKNOWLEDGMENTS I am so grateful to my editor, Howard Reeves, for his commitment and for being, once again, so wonderful to work with. 
I am also deeply indebted to other members of the Abrams Books for Young Readers family, people who routinely go above and beyond: editorial assistant Jenna Pocius; managing editor Jim Armstrong; associate managing editor Jen Graham; designer Maria Middleton; production manager Alison Gervais; senior publicist Mary Ann Zissimos; marketing and publicity coordinator Laura Mihalick; and marketing and publicity director Jason Wells. I thank you all so much for your attention to detail and for your brilliance. I'd be remiss if I didn't also thank copyeditor Anne Heausler, proofreader Rob Sternitzky, and fact-checker David Webster. This book also had the benefit of the enlightening minds of law professor Bobby Thomas, who helped me unpack legalese, and Dr. Stephen Kenney, director of Boston's Commonwealth Museum, who also read the manuscript and gave me such thoughtful, meaningful comments. IMAGE CREDITS **This page:** Courtesy of the Gilder Lehrman Institute of American History. **This page:** Historic Maps Restored. **This page:** Boston Athenaeum. **This page:** Courtesy of the Massachusetts Historical Society. **This page:** The Granger Collection, New York. **This page:** The Granger Collection. **This page:** Left: The Art Institute of Chicago; right: The Granger Collection, New York. **This page:** The Granger Collection, New York. **This page:** The Library of Congress. **This page:** Clockwise from top left: The Library of Congress; The Library of Congress; Courtesy of the Massachusetts Historical Society. **This page:** Top: Trustees of the Boston Public Library/Rare Books; bottom: The Library of Congress. **This page:** The Granger Collection, New York. **This page:** U.S. Senate Collection. **This page:** The Granger Collection, New York. **This page:** The Granger Collection, New York. **This page:** The Granger Collection, New York. **This page:** The Granger Collection, New York. **This page:** The Library of Congress. 
**This page:** Courtesy of the Gilder Lehrman Institute of American History. **This page:** Courtesy of the Gilder Lehrman Institute of American History. **This page:** Courtesy of the Gilder Lehrman Institute of American History. **This page:** The Library of Congress. **This page:** The Library of Congress. **This page:** Top: U.S. Senate Collection; bottom: Chicago History Museum. **This page:** The Granger Collection, New York. **This page:** The Library of Congress. **This page:** The Library of Congress. **This page:** Courtesy of the Massachusetts Historical Society. **This page:** Author's Collection. **This page:** Library of Congress. **This page:** Public domain. **This page:** Public domain. **This page:** The Library of Congress. **This page:** The Library of Congress. **This page:** SSPL/National Media Museum / Art Resource, NY. **This page:** U.S. Army Heritage & Education Center, U.S. Army Military History Institute. **This page:** Cornell University Library. **This page:** Courtesy of the Massachusetts Historical Society. **This page:** Courtesy of the Gilder Lehrman Institute of American History. **This page:** National Archives and Records Administration, Washington, DC. **This page:** The Granger Collection, New York. **This page:** Photography Collection, Miriam and Ira D. Wallach Division of Art, Prints and Photographs, The New York Public Library. **This page:** Courtesy of the Massachusetts Historical Society. **This page:** The Library of Congress. **This page:** The Library of Congress. **This page:** The Library of Congress. **This page:** Manuscripts, Archives and Rare Books Division, Schomburg Center for Research in Black Culture, The New York Public Library. **This page:** Print Collection, Miriam and Ira D. Wallach Division of Art, Prints and Photographs, The New York Public Library. **This page:** The Library of Congress. **This page:** U.S. Senate Collection. **This page:** The National Parks Service. 
**This page:** New York State Library. **This page:** Courtesy of the Gilder Lehrman Institute of American History. **This page:** Carnegie Museum of Art, Pittsburgh; Patrons Art Fund: gift of Mr. and Mrs. John F. Walton, Jr. **This page:** White House Historical Association (White House Collection). **This page:** The Library of Congress. **This page:** Courtesy of the Gilder Lehrman Institute of American History. **This page:** Author's Collection. **This page:** The Granger Collection, New York. **This page:** Courtesy of the Massachusetts Historical Society. **This page:** Photographs and Prints Division, Schomburg Center for Research in Black Culture, The New York Public Library. **This page:** Author's Collection. **This page:** Art Resource, NY. **This page:** Smithsonian American Art Museum, Washington, DC/Art Resource, NY. **This page:** The Library of Congress. **This page:** The Library of Congress. INDEX OF SEARCHABLE TERMS A abolitionists. _See also individual abolitionists and societies_ on colonization on Frémont's freedom declaration mob attack on, at Tremont Temple on strategic need to free enslaved abolition of slavery. _See_ emancipation of slaves _Abraham Lincoln Writing the Emancipation Proclamation_ (Blythe) African Americans. _See_ blacks _The Aftermath at Bloody Lane_ (Hope) Alabama, secession of American Anti-Slavery Society after the abolition of slavery Dickinson at convention of Garrison as president of Phillips at convention of American Colonization Society Anderson, Gen. Robert Anglo-African Institute for the Encouragement of Industry and Art anti-abolitionists Antietam, Battle of _Anti-Slavery Meeting on the Common_ (Worcester & Pierce) Arkansas abolition of slavery in secession of Attucks, Crispus B Baker, Frank Baltimore Massacre Baltimore Riot Bates, Edward "Battle Hymn of the Republic" (Howe) Bennett, Lerone, Jr. blacks. 
_See also_ slavery in the American Revolution on colonization as "contrabands" ( _See_ "contrabands") Dred Scott decision and equal pay for black soldiers Freedmen's Bureau proposed voting rights of seeking sanctuary during Civil War as Union soldiers U.S. Constitution on whites' fear of working for the Union Army Blair, Montgomery "Bleeding Kansas" "Bleeding Summer" "Bloodhound Law." _See_ Fugitive Slave Law Blythe, David Gilmour Booth, John Wilkes border states. _See also individual states_ conditional support of Lincoln by Confiscation Act of 1861 and Emancipation Proclamation and fear of offending on gradual emancipation on use of "contrabands" in combat Boston Female Anti-Slavery Society Brady, Mathew Breckinridge, John Cabell Brisbane, William Henry Brooks, Preston Brown, John execution of at Harper's Ferry, Virginia in Kansas Secret Six funding of Buchanan, James Bufford, J. H. Bull Run, First Battle of Bull Run, Second Battle of Bureau of Refugees, Freedmen, and Abandoned Lands Butler, Andrew Butler, Eliza Ann Butler, Gen. Benjamin Franklin Butler, Grace Butler, Martha Butler, Walter C California, as a free state Cameron, Simon Capitol building Carlton, William Tolman Carpenter, Francis Bicknell _Celebration of the Abolition of Slavery in the District of Columbia by the Colored People, in Washington, April 19, 1866_ (Dielman) Chapman, Maria Weston Chappel, Alonzo _Charleston Mercury_ Chase, Salmon chattel slavery Chiriquí Improvement Company (Panama) citizenship for blacks, Dred Scott decision and Civil War. 
_See also_ Confederate States of America; Union Army Antietam, Battle of border states and Bull Run, First Battle of Bull Run, Second Battle of Confederate enslaved contributing to Confederate States of America formation Confiscation Act of 1861 Confiscation Act of 1862 emancipation as weapon in Fort Sumter surrender Gettysburg, Battle of habeas corpus and Lincoln's inaugural speech on slavery Militia Act of 1862 naval blockade of southern ports Sherman's "March to the Sea" solely to preserve the Union surrender at Appomattox timeline of Clay, Henry colonization abolitionists on acquisition of land for blacks on Clay on in the Confiscation Act of 1862 congressional appropriations for of "contrabands" deportation proposed in of enslaved freed in D.C. funding eliminated for Garrison on in Île à Vache (Haiti) Lincoln on reasons for Lincoln's proposal to Congress Lincoln talks with blacks on in Panama as part of gradual emancipation to persuade border states to shorten war of slaveholders in South America whites on compensation to slaveholders in D.C. Emancipation Act Emancipation Proclamation and in gradual emancipation if slaveholding states end the war opposition to to Union loyalist slaveholders Compromise of 1850 Confederate States of America (CSA). _See also_ Civil War benefited by slave labor black soldiers in enslavement of captured black soldiers by Kentucky and Missouri government factions in Proclamation of Amnesty and Reconstruction and states in Virginia's western counties not in Confiscation Act of 1861 Confiscation Act of 1862 _Contraband of War_ Contraband Relief Association "contrabands" Butler's confiscation of enemy property education of proposed colonization of seized under the Confiscation Act of 1861 as Union soldiers working for Union soldiers Cooper Institute (Cooper Union), New York City Cornish, Sandy CSA. 
_See_ Confederate States of America D Davis, Jefferson on black Confederate soldiers as Confederate president on Emancipation Proclamation imprisonment of photograph of Day of Jubilee "The Day of Jubilee" (freedom song) Dayton, William Lewis D.C. Emancipation Act abolition of slavery in celebrations of colonization in compensation in pressure for Declaration of Independence Delaware as border state in the Civil War gradual emancipation and Democratic Party, on slavery Dickinson, Anna E. on continuance of Anti-Slavery Society emancipation celebration and photograph of quotes by speech to U.S. Congress Dielman, F. Dimick, Lt. Col. District of Columbia. _See_ D.C. Emancipation Act Douglas, Stephen Douglass, Charles Douglass, Frederick on aftermath of slavery on colonization on continuance of American Anti-Slavery Society emancipation celebration and Emancipation League and on enslaved experiences on equal pay for black troops mob attack on _The North Star_ newspaper of photograph of recruitment for black regiments on strategic need for emancipation Douglass, Lewis Drayton, Gen. Thomas F. Dred Scott decision E _Emancipation Day in South Carolina_ Emancipation League Declaration inaugural meeting of emancipation of enslaved. _See also_ Emancipation Proclamation; gradual emancipation colonization and compensation for ( _See_ compensation to slaveholders) under the Confiscation Act of 1862 as "contrabands" ( _See_ "contrabands") of CSA supporters debate on responsibility for draft proclamation in Lincoln's cabinet fears of festivities after Frémont's decree of Hunter's decree of Lincoln's proposed constitutional amendment on in the territories unconditional abolition argued in the Union Army waiting for news of waiting period in in Washington, D. C. Emancipation Parade (Washington, D.C.) 
Emancipation Proclamation announcement of cabinet discussions on compensation in effect of Republican electoral losses after states in which proclamation applies text of timing of waiting period in Emerson, Ralph Waldo emancipation celebration and photograph of quotes of F fear of freed blacks _First Reading of the Emancipation Proclamation of President Lincoln_ (Carpenter) Florida emancipation celebrations in Hunter's decree of emancipation in secession of Florida Territory Forten, Charlotte Forten, James Fort Monroe (Virginia) Fort Sumter (South Carolina) Freedmen's Bureau _Freed Negroes Spreading the News of President Lincoln's Emancipation Proclamation_ freedom papers "Freedom's Fort" (Fort Monroe) Frémont, John Charles emancipation decree of as presidential candidate "fugitive slave" clause of the U.S. Constitution Fugitive Slave Law Butler's defiance of Lincoln's support for provisions of repeal of Sumner's measure banning G Gage, Thomas Gardner, Alexander Garrison, William Lloyd black support of on colonization cotton banner of on dissolution of Anti-Slavery Society on Dred Scott decision on gradual emancipation _The Liberator_ newspaper of photograph of during the proclamation of emancipation proslavery mob attack on quotes of sends _Watch Meeting_ painting to Lincoln Georgia fall of Atlanta Hunter's decree of emancipation in secession of Sherman's "March to the Sea" Gettysburg, Battle of Gettysburg Address glossary of terms gradual emancipation border states on colonization in compensation in Garrison on Lincoln's appeal to states for Lincoln's request to Congress for offered as alternative to emancipation in Pennsylvania Turner on in West Virginia Grant, Gen. Ulysses S. 
"Great Sale of Land, Negroes, Corn, & Other Property" flyer Greeley, Horace on Lincoln's announcement of planned emancipation _New-York Daily Tribune_ photograph of "The Prayer of the Twenty Millions" H habeas corpus, suspension of writ of Haiti, colonization in Hamlin, Hannibal Harper, Frances E. W. on colonization emancipation celebration and engraving of Harpers Ferry attack Harry, John Hayes, G. H. Heywood, J. B. Higginson, Thomas Wentworth Hofstadter, Richard Hope, James Howe, Julia Ward emancipation celebration and Emancipation League and in the Secret Six Howe, Samuel Gridley Hunter, Gen. David I Île à Vache (Haiti), colonization in Illinois final Thirteenth Amendment and proslavery thirteenth amendment and Illinois Colonization Society Indian Territory Israel Bethel J Jackman, W. G. Johnson, Andrew Johnson, Herschel Vespasian Jubilee, Day of Juneteenth celebration K Kansas John Brown in slavery wars in statehood for Kansas-Nebraska Act Keckley, Elizabeth Kentucky as border state neutrality in Civil War pro-Confederate government in Kock, Bernard L Lane, James Henry Lane, Joseph Lee, Gen. Robert E. 
_Lee Surrendering to Grant at Appomattox_ (Chappel) Lester, Charles Edwards _The Lexington of 1851_ (Currier & Ives) _The Liberator_ newspaper Lincoln, Abraham Annual Message to Congress Annual Message to Congress assassination of on civil war solely to preserve the Union on colonization in Haiti on colonization in Panama on colonization of "contrabands" on colonization of emancipated blacks on colonization to shorten the war Confiscation Act of 1861 Confiscation Act of 1862 draft proclamation shared with cabinet election of, first election of, second on Frémont's freedom decree Gettysburg Address on government's power to end slavery on gradual emancipation on Hunter's freedom decree Inaugural Address, first Inaugural Address, second on necessity of racial separation painting of on paying slaveholding states to end the war photographs of praise of, for emancipation Proclamation of Amnesty and Reconstruction on slavery suspension of habeas corpus on voting rights for blacks Louisiana abolition of slavery in secession of Louisiana Purchase, slavery and Lovejoy, Elijah Lovejoy, Owen M Magee, John L. Magoffin, Beriah Maine, as a free state Mallory, Shepard Manassas, Battle of (Battle of Bull Run) "March to the Sea" Martin, J. Sella emancipation celebration and on John Brown photograph and life of purchase of sister and niece by on Union army returning black fugitives Maryland abolition of slavery in approval of proslavery thirteenth amendment by as border state in the Civil War Massachusetts, slavery in McClellan, Gen. George B. Militia Act of 1862 "miscegenation," fear of Mississippi, secession of Missouri abolition of slavery in as border state Frémont's decree to free enslaved in pro-Confederate government in as a slave state Missouri Compromise of 1820 _The Mob Attacking the Warehouse of Godfrey Gilman & Co_. Montgomery, James Moore, Henry P. 
Music Hall, Boston N National Union Party naval blockade of southern ports New Mexico, slavery and _New-York Daily Tribune_ _New York Herald_ No Man's Land North Carolina secession of seizing of federal property in _The North Star_ newspaper Northwest Territory, slavery in O Ohio, proslavery thirteenth amendment and Old Point Comfort, Virginia, _vi. See also_ Fort Monroe (Virginia) Onthank, N. B. P Panama, colonization of freed blacks in Parker, Theodore Pennsylvania emancipation celebrations in gradual emancipation and "The Perils of the Hour" (Dickinson) Phillips, Wendell on continuance of American Anti-Slavery Society emancipation celebration and Emancipation League and photograph of quotes of speaking at an anti-slavery meeting _Planter_ (CSA gunboat) _Political Caricature. No. 4, The Miscegenation Ball_ (Kimmel & Foster) Port Royal emancipation festivities Port Royal Experiment Pratt Street Riot "The Prayer of the Twenty Millions" (Greeley) Proclamation of Amnesty and Reconstruction R Radical Republicans Reid, Whitelaw Republican Party antislavery stance of election of Lincoln Frémont and Dayton campaign in impact of Emancipation Proclamation on as National Union Party reelection of Lincoln Republic of Texas Rothermel, Peter F. runaway enslaved. _See_ "fugitive slave" clause; Fugitive Slave Law Russell, Thomas S "The Sack of Lawrence" (Hayes) Sanborn, Frank Saxton, Gen. Rufus _Scene Around a Bulletin-Board_ Scott, Dred and Harriet _Secession Exploded_ (Wiswell) Second Battle of Bull Run Secret Six Seward, William Seymour, Horatio Sharpsburg, Battle of (Battle of Antietam) Sherman, Gen. William Tecumseh Simms, Thomas _A Slave-Coffle Passing the Capitol_ _A Slave-Hunt_ slave-hunting posses slavery. 
_See also_ "contrabands"; emancipation of enslaved banned in the territories as cause of Civil War chattel Dred Scott decision enslaved auctions and sales "fugitive slave" clause Fugitive Slave Law ( _See_ Fugitive Slave Law) government's right to regulate as helping the Confederacy legalization of Lincoln on Lincoln's inaugural speech on Missouri Compromise on slave trade taxation and U.S. Constitution on slave trade Slemmer, Lt. Smalls, Robert Smith, Caleb Smith, Gerrit South Carolina Hunter's decree of emancipation in secession of _Southern Chivalry–Argument Versus Club's_ (Magee) _Stampede of Slaves from Hampton to Fortress Monroe_ Stanton, Edwin Stearns, George Stearns, Mary Stevens, Thaddeus Still, William St Louis, strategic importance of Stowe, Harriet Beecher sue, right to Sumner, Charles attack on, by Brooks on Fugitive Slave Law on need for Emancipation Proclamation urging Lincoln for immediate emancipation Supreme Court T Taney, Chief Justice Brooke taxation, slavery and Tennessee abolition of slavery in secession of territories map of free and slave slavery banned in slavery to be decided by settlers in Texas secession of as a slave state Thirteenth Amendment approval of Emancipation Proclamation as prologue for ratification of Senate passage of text of thirteenth amendment, proslavery Thomas, Edward M. Thompson, Ambrose timeline Townsend, James Tredegar Iron Works, Richmond Tremont Temple, Boston address by Thomas Simms awaiting the news of emancipation at on day of John Brown's execution photograph of proslavery mob attack on Trumbull, Lyman Tubman, Harriet on the Civil War emancipation celebration and photograph of raid on CSA supply depot led by Turner, Henry McNeal as chaplain in U.S. armed forces emancipation celebration and on emigration engraving of on gradual emancipation U _Uncle Tom's Cabin_ (Stowe) _The Underground Railroad_ Union Army. 
_See also_ Civil War barred from returning fugitives black chaplain in black soldiers in black workers for "contrabands" and ( _See_ "contrabands") equal pay for black soldiers Frémont's decree of emancipation Hunter's decree of emancipation "The Union Is Dissolved!" _The United States Senate, A.D. 1850_ (Rothermel) U.S. Constitution proposed antislavery thirteenth amendment proposed proslavery thirteenth amendment on seizing property on slavery on states' rights Thirteenth Amendment Utah, slavery and V Virginia emancipation celebrations in secession of western counties of Volck, Adalbert Johann W Washington, D.C. _See_ D.C. Emancipation Act _Watch Meeting–Dec. 31st 1862–Waiting for the Hour_ (Carlton) Welles, Gideon Western Anti-Slavery Society West Virginia abolition of slavery in as Restored Government of Virginia statehood and gradual emancipation for White House, slave labor and whites on colonization fear of blacks by fear of miscegenation by at Port Royal emancipation festivities Wilson, Henry Wiswell, William _Writing the Emancipation Proclamation_ (Volck)
{ "redpajama_set_name": "RedPajamaBook" }
637
Whether for clothing, bags or shoes, leather has always been part of society's preferred materials. Shoes, a huge market, were first created from leather; only later, in the 1800s, did other materials begin to replace it. Leather was, and still is, used for reasons that make it genuinely useful, and for greater quality our shoes are often handmade, to give customers better satisfaction. Shoes weren't always fashionable or comfortable, but they have been around for a long time. About 40,000 years ago, during the Middle Paleolithic period to be exact, the first shoes were invented and came into constant use. Made of wraparound leather, the shoes were long-lasting. Thousands of years later, Europe's Baroque era greatly influenced both the trend and the material. It was not until the 18th century that men's and women's shoes began to differ in style, color, heel and toe shape. Around the same time, silk was very common. 1850 is a crucial date, as it was only then that shoes were made for distinct feet. Before that they were made straight, meaning there was no differentiation between left and right shoes. Even though silk was very popular, leather was still seen as one of the best-quality materials. The usefulness of leather is why we adopted it in the first place. Leather is tough, long-lasting, eco-friendly and smells good. Depending on the animal's species, it can be either flexible or rigid, and both qualities serve well in different kinds of products. Take our shoes, for example: they are flexible, so the baby's foot easily adapts to their shape, while the first walking shoes are rigid with a share of flexibility, so that a baby's first walk is agreeable and secure. Both are made similarly, with different soles. Now that we have advanced technology, handmade items are becoming scarce and valued. They are better finished and guarantee a job for someone, which is another reason for our decision. "Having the possibility to provide jobs for those who need them makes me love my job all the more," says Anne. Leather is made from animal skin. From beef to fish, hides are efficiently used for our needs and wants. The shoes are first designed, and a foot cast of a certain size is used for verification, leaving the outline to be traced onto the leather. After the outline is traced out, the pieces are assembled by sewing and gluing and taken to a finishing specialist. It is a long process, with many hands behind it, to bring joy to the customers. We are happy to sell shoes made with such care and quality, and we hope to continue and enlarge our collection for more jobs and more smiles. That's it for this issue. We hope you enjoyed it! We'll see you in a few weeks with more history and new products. Stay updated on upcoming events on our Instagram or Facebook. Thank you again to everyone who bought the shoes and/or other Lisaura products.
{ "redpajama_set_name": "RedPajamaC4" }
3,540
Tiago by Reily Garrett, December 7, 2015. Title: Tiago. Series: Kurupira #1. Author: Reily Garrett. Genre: Paranormal/Romantic Suspense. Jungle: a vicious area of confusion—i.e. the real world. Enter the realm of the Amazon Rainforest, where man is not an apex predator. Twenty-four hours ago, Brielle stood helpless while an intruder murdered her mother. After fleeing to the Amazon Rainforest, she struggles to stay a step ahead of the psychopath searching for the crystal she wears, her birthright. As guardian of the biological hotspot, Tiago guards the maze of rivers, swamps, and forests from those who would rape and plunder his land for its wealth of minerals, trees, and wildlife. His compulsion to protect the foreign, blonde-haired enigma rescued from a poacher leads him face-to-face with a scientific sociopath intent on creating a world where chaos reigns—under his leadership. An exciting blend of action, adventure, and romance between a man not looking for love and a woman trying to survive. AMAZON US / UK. Even if she had the ability and strength to run, she wouldn't get far before one predator or another brought her down. The bleeding from her wrists and ankles issued an open invitation to one and all carnivores. In her mind's eye, she could see the elongated mandibles of the soldier ants chomping through anything in their path, the formic acid venom they emitted dissolving her flesh in a consistent and organized fashion. Though blind, they were sensitive to movement. Even if they only moved forward an inch in several seconds, she didn't stand a chance with the drugs lingering in her system and other predators nearby. Her mom had once told her you could actually hear them coming through the forest—and she did. Though the gentle mist falling was near silent, the sound of the ant swarm's chomping through decaying leaves gained volume with each tick of an imagined clock.
As she awaited death to come in the form of millions of stings and dissolving flesh, she thought of her mom who'd wanted her to live and find contentment in the Amazon. Now she would die here, alone and terrified. Off to her left, the hisses of many leopards morphed into growls and snarls. The staccato spitting of rapid gunfire followed angry curses, which then morphed into pathetic screams. Perhaps Gus wouldn't survive this, after all. To hear the guide beg from the very creatures he'd endeavored to kill lent a primal satisfaction, even if it was the last she'd ever enjoy. At least, the residents of the jungle and women in nearby villages would remain safer. Digging her elbows and toes into the tent's canvas floor, she inched forward, compelled to watch death approach. Swishing and slapping leaves opposite the last tent brought her attention to her final threat. Only minutes existed between this life and feeling her mom's arms around her again. Since fear wouldn't lend enough strength to attempt escape, she'd meet this final threat with as much dignity as she could muster. She'd been drugged, overcome with grief and pain. It was just—all too much. Acid released from the ants' mouths would be painful beyond anything she'd ever endured, yet a calm and peaceful mantle settled about her shoulders. Nothing lasted forever despite what her mom had declared. "One day, you'll join with an incredible man who'll stand by your side for eternity. From that point on, you'll know peace as you never have." How could she believe such nonsense? She was no fairy princess. Deeper in the jungle, Gus's strident voice echoed in the morning stillness. His gun remained silent except for the metallic clicks announcing his lack of ammunition. "Move, damn it! They're coming. You'll die, too, if we don't leave. Why are you surrounding me and not attacking?" Confusion bore down faster than the oncoming ant swarm, compounded by her inability to sort out the facts. 
Perhaps the mind couldn't comprehend some terrors so it instead exuded a sense of euphoric peace in its place. In the distance, a warm glow bathed the drooping durian leaves, not from above but seemingly from within. A glow she associated with Tiago. In the midst of all the chaos, a chestnut-colored arm brushed the small branches aside to reveal Tiago's chiseled and carved form. "Run! For God's sake, run. There's a swarm of army ants…" After a quick inhale, she struggled through the tent opening on bruised hands and skinned knees, hands and feet still bound. Another fast breath and she stood, naked, on shaky legs. The soothing rain washed the air clean even as it drizzled down her face to slide over the slope of her breasts to the valley between. Goose bumps raised along her arms and shoulders multiplied with the slight breeze. Air exploded from her lungs, taking sanity with it in one loud whoosh. Her teeth rattled as she dropped to her butt, shocked speechless, breathless. Thousands, millions of ants had parted for and surrounded Tiago, not attacking, remaining a constant distance from his path. In mesmerized horror, she watched the insect army's destruction of all vegetation in its path on either side of the man who walked between their ranks. From her mother's teachings, she remembered they generally traveled in either some type of fan wave or even a line but never alongside a human being. He isn't human. The thought came unbidden in a moment of blind inspiration, along with the realization that no one would live to tell his secret. The iridescent shimmer surrounding him brightened to near-blinding intensity in the following heartbeats, adding a fluency to his predatory grace. Light seems to originate from the pendant he wears. In a self-protective gesture, she shaded her eyes with cupped hands. 
Even with his glowing form and the distance between them, death approaching in stilted movements, the heat in his penetrating cobalt gaze reminded her that she wore no clothes. Yet, from him, the warmth curled inside to stir the same excitement she felt with his prior touch on her forehead. Though she still heard Gus's angry and panicked shouting in the distance, she no longer feared for herself, despite the fact she could now see the swarm of ants heading directly toward her. What the hell? Their alternating tripod system of angular movement held her in thrall with their approach. One coordinated half-moon shaped body of ants marched to either side of Tiago and seemed to know his path before he moved, maintaining a constant distance from him. "How…?" The thought of holding her breath to still her slightest movements before the blind advancing horde suddenly seemed a ridiculous option in light of his obvious control. She gulped. Thousands of ants stopped two feet away from her, their small, segmented bodies swaying side to side to an unheard beat. Looking beyond Tiago, she realized she'd misjudged their numbers. "There must be millions…" As stunning as they were deadly, astonishing didn't come close to describing the spectacle before her. In unison, the miniature waves undulated in their coordinated and fascinating shift. Each ripple drew her further into their mesmerizing spell. Gus's wailing became white noise in the background. Now, she realized, not only did Tiago command every aspect of her being, he commanded non-human creatures as well. That understanding came on an elemental level, innate knowledge verified by the nightmarish reel playing out in front of her. He also commanded the distant leopards surrounding Gus. Yet here he stood, looming, towering over her, making her feel small yet significant in a way she'd never imagined.
Reily's work as an ICU nurse, private investigator, and military police officer has given her countless experiences in a host of different environments, adding a real-world feel to her fiction. Though her kids are her life, writing is Reily's life after. The one enjoyed after the kids are in bed, or after they're in school and the house is quiet. This is the time to kick back with laptop and lapdog and give your imagination free rein. In life, hobbies can come and go according to our physical abilities, but you can always enjoy a good book. Life isn't perfect, but our imaginations can be. Relax, whether it's in front of a fire or in your own personal dungeon. Take pleasure in a mental pause as you root for your favorite hero/heroine and bask in their accomplishments, then share your opinions of them over a coffee with your best friend (even if he's four-legged). Life is short; cherish your time.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,455
Neoseiulus harrowi is a species of arachnid first described by Elsie Collyer in 1964. Neoseiulus harrowi belongs to the genus Neoseiulus and the family Phytoseiidae. No subspecies are listed in the Catalogue of Life.
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,981
Bougainvillia frondosa is a marine invertebrate, a species of hydroid in the suborder Anthomedusae. It was first described by Mayer in 1900.
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,662
---
title: FAQ
type: faq
layout: page
toc: true
---

## What age range is appropriate for the VR Kit?

The VR headset is not for use by children under age 13, according to the official health and safety warning.

## What hardware is included in the Virtual Reality Kit?

The VR Kit includes the following items:

* 1x Oculus Ready PC tower that meets the Oculus minimum specs and is approved by Oculus IT
* Mouse and keyboard
* 1x Oculus Rift CV1
* 1x 24" monitor
* 1x Ricoh Theta S 360 camera
* 1x camera stand
* 1x laminated getting-started guide

## How will the equipment be used?

We provide instructions to get started experiencing and creating virtual reality. It is up to the teachers themselves to decide and innovate on how they would like to integrate the VR Kit into their classrooms.

## How could we evaluate its use?

It is important to us that teachers evaluate for themselves how successful the VR Kit is for their classrooms as well as for their students. We will send out surveys assessing the experience at various times and welcome as much ongoing feedback as possible.

## Are there any ongoing expenses or upkeep?

There are no ongoing costs to maintain the VR Kit. However, TechStart isn't responsible for damages or repairs to the kit. The PC has a 1-year warranty, and the virtual reality headset can be serviced via the customer service department at Oculus. The equipment we are donating is expensive and sensitive to damage, so it is important to take the necessary precautions to keep it undamaged.
{ "redpajama_set_name": "RedPajamaGithub" }
2,945
The Apostolic Vicariate of Mongo (Latin: Apostolicus Vicariatus Mongensis) is a Roman Catholic apostolic vicariate seated in Mongo, Chad. It comprises the provinces of Guéra, Batha, Ouaddaï, Wadi Fira, Borkou, Ennedi Est, Ennedi Ouest, Tibesti and Salamat.

History: Pope John Paul II founded the apostolic prefecture on 1 December 2001, with the bull Universae Ecclesiae, from territory split off from the Archdiocese of N'Djaména and the Diocese of Sarh. On 3 June 2009 it was elevated to an apostolic vicariate.

Ordinaries. Apostolic prefects: Henri Coudray SJ, 2001–2009. Apostolic vicars: Henri Coudray SJ, 2009–2020; Philippe Abbo Chen, since 2020.

See also: List of Roman Catholic dioceses.
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,363
{"url":"http:\/\/en.wikipedia.org\/wiki\/Locally_finite_measure","text":"# Locally finite measure\n\nJump to: navigation, search\n\nIn mathematics, a locally finite measure is a measure for which every point of the measure space has a neighbourhood of finite measure.\n\n## Definition\n\nLet (X, T) be a Hausdorff topological space and let \u03a3 be a \u03c3-algebra on X that contains the topology T (so that every open set is a measurable set, and \u03a3 is at least as fine as the Borel \u03c3-algebra on X). A measure\/signed measure\/complex measure \u03bc defined on \u03a3 is called locally finite if, for every point p of the space X, there is an open neighbourhood Np of p such that the \u03bc-measure of Np is finite.\n\nIn more condensed notation, \u03bc is locally finite if and only if\n\n$\\forall p \\in X, \\exists N_{p} \\in T \\mbox{ s.t. } p \\in N_{p} \\mbox{ and } \\left| \\mu (N_{p}) \\right| < + \\infty.$\n\n## Examples\n\n1. Any probability measure on X is locally finite, since it assigns unit measure the whole space. Similarly, any measure that assigns finite measure to the whole space is locally finite.\n2. Lebesgue measure on Euclidean space is locally finite.\n3. By definition, any Radon measure is locally finite.\n4. 
Counting measure is sometimes locally finite and sometimes not: counting measure on the integers with their usual discrete topology is locally finite, but counting measure on the real line with its usual Borel topology is not.","date":"2014-09-15 04:45:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 1, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9490159749984741, \"perplexity\": 394.2857342310382}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-41\/segments\/1410657104119.19\/warc\/CC-MAIN-20140914011144-00086-ip-10-196-40-205.us-west-1.compute.internal.warc.gz\"}"}
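The negative half of the last example admits a one-line argument; a sketch in LaTeX, using the neighbourhood notation $N_p$ from the definition above:

```latex
% Counting measure on (R, Borel) is not locally finite:
% every open neighbourhood of a point p contains an open interval,
% which is uncountable and therefore has infinite counting measure.
\[
  p \in N_p \in T
  \;\Longrightarrow\;
  \exists\, \varepsilon > 0 :\ (p - \varepsilon,\, p + \varepsilon) \subseteq N_p
  \;\Longrightarrow\;
  \mu(N_p) \;\ge\; \mu\bigl((p - \varepsilon,\, p + \varepsilon)\bigr) \;=\; +\infty .
\]
```

On the integers with the discrete topology the same argument fails, since {p} is itself an open neighbourhood with counting measure 1.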
I have found a pretty cool tool to extract your Outlook emails to individual PDF files in bulk. If you are like me, you have thousands of emails in Outlook and need a way to extract those emails to a common format for archiving or sharing.

Atlanta, GA - July 17, 2006 – The Laboratory Informatics Institute was established on June 19, 2006. This new organization has been created as a laboratory industry trade association for the education, standardization and promotion of the functional and professional areas of Laboratory Informatics.

07/17/2006 - Learn about ATL's LIMS Solutions at the NYAAEL/PaAAEL Annual Conference and Expo! On July 31st, 2006, ATL will be exhibiting at the NYAAEL/PaAAEL Annual Conference and Expo. In addition, ATL will be presenting an educational session on LIMS.

Bill Purves is in his third decade of providing metal analysis courses for the PACS short course program. Mr. Purves is a highly rated instructor. In just a few days you will obtain practical information to improve your on-the-job performance. Upcoming PACS short courses on metal analysis in 2006 are listed below. The 2007 schedule will be available in September.

Henry Nowicki, president of Activated Carbon Services and PACS: Testing, Consulting and Training, is providing a two-day introductory course on activated carbon adsorption this summer.

Antek HealthWare, LLC, developer of both LabDAQ, the nation's most widely used Laboratory Information System (LIS), and DAQbilling, its innovative practice management system/medical billing software, is pleased to announce the opening of its online supply store.

Scientists at the U.S. Department of Energy's Argonne National Laboratory have discovered new ways that ions interact with mineral surfaces in water, opening a door to new knowledge on how contaminants travel in the environment. The insight, published in today's issue of Physical Review Letters, leads to a better understanding of the factors that determine water quality.
{ "redpajama_set_name": "RedPajamaC4" }
4,949
Q: Multiply defined labels with \usebox

I want to duplicate some content without re-defining its labels. For example,

\newbox{\foobox}
\sbox{\foobox}{\vbox{\section{Foo}\label{sec:foo}}}
\usebox{\foobox}
\usebox{\foobox}

will emit a warning, because sec:foo is defined more than once. Is there any way to prevent the second \usebox from writing to the aux file?

A: I assume you want to keep the first \label intact and disable the others. Then the following uses a trick:

* The box is created with two commands, \BeginIgnoreAux{<id>} before and \EndIgnoreAux after the original contents. These macros write markers into the .aux file, with the result that the label stuff gets in between.
* When the .aux file is read, the label stuff is executed as usual the first time. If \BeginIgnoreAux{<id>} is called more than once with the same <id>, then the label stuff is ignored. This avoids duplicate labels.
* LaTeX reads the .aux file twice: at the end of the document the .toc files are written, for example. Therefore the macro \AuxResetIgnoreStuff clears the <id>s.
\documentclass{article}
\makeatletter
\global\let\AuxResetIgnoreStuff\@empty
\usepackage{auxhook}
\AddLineBeginAux{\string\AuxResetIgnoreStuff}

% Macros inside the `.aux' file
\newcommand*{\AuxBeginIgnore}[1]{%
  \@ifundefined{ignore@#1}{%
    \global\expandafter\let\csname ignore@#1\endcsname\@empty
    \expandafter\g@addto@macro\expandafter\AuxResetIgnoreStuff
    \expandafter{%
      \expandafter\global\expandafter\let\csname ignore@#1\endcsname\relax
    }%
  }\AuxSkip
}
\def\AuxSkip#1\AuxEndIgnore{}
\let\AuxEndIgnore\relax

% User commands
\newcommand*{\BeginIgnoreAux}[1]{%
  \protected@write\@auxout{}{%
    \string\AuxBeginIgnore{#1}%
  }%
}
\newcommand*{\EndIgnoreAux}{%
  \protected@write\@auxout{}{%
    \string\AuxEndIgnore
  }%
}
\makeatother

\begin{document}
\tableofcontents

\newbox{\foobox}
\sbox{\foobox}{%
  \BeginIgnoreAux{foo}%
  \vbox{\section{Foo}\label{sec:foo}}%
  \EndIgnoreAux
}

\noindent
\usebox{\foobox}
\usebox{\foobox}

\noindent
Reference to the first label: \ref{sec:foo}.
\end{document}

The .aux file contains:

\relax
\AuxResetIgnoreStuff
\AuxBeginIgnore{foo}
\@writefile{toc}{\contentsline {section}{\numberline {1}Foo}{1}}
\newlabel{sec:foo}{{1}{1}}
\AuxEndIgnore
\AuxBeginIgnore{foo}
\@writefile{toc}{\contentsline {section}{\numberline {1}Foo}{1}}
\newlabel{sec:foo}{{1}{1}}
\AuxEndIgnore

And the .toc file:

\contentsline {section}{\numberline {1}Foo}{1}
{ "redpajama_set_name": "RedPajamaStackExchange" }
29
\section{Introduction} Within the context of heterotic M-theory \cite{Horava:1995qa,Horava:1996ma,Lukas:1997fg, Lukas:1998yy, Lukas:1998tt}, there have been a number of $N=1$ supersymmetric theories introduced that have a phenomenologically realistic observable sector \cite{Braun:2005nv,Braun:2005bw,Braun:2005ux,Bouchard:2005ag,Anderson:2009mh,Braun:2011ni,Anderson:2011ns,Anderson:2012yf,Anderson:2013xka,Nibbelink:2015ixa,Nibbelink:2015vha,Braun:2017feb}. Various aspects of such theories, such as the spontaneous breaking of their supersymmetry \cite{Lukas:1999kt,Antoniadis:1997xk,Dudas:1997cd,Lukas:1997rb,Nilles:1998sx,Choi:1997cm}, the role of five-branes in the orbifold interval \cite{Lukas:1998hk,Lehners:2006ir,Carlevaro:2005bk,Gray:2003vw,Brandle:2001ts,Lima:2001nh,Grassi:2000fk}, moduli and their stabilization \cite{Anderson:2010mh, Anderson:2011cza, Anderson:2011ty,Correia:2007sv}, their low-energy phenomenology \cite{Deen:2016vyh,Dumitru:2018jyb,Dumitru:2018nct,Dumitru:2019cgf,Ambroso:2009jd,Ovrut:2014rba,Ovrut:2015uea}, non-perturbative superpotentials~\cite{Witten:1999eg,Buchbinder:2002ic,Beasley:2003fx,Basu:2003bq,Braun:2007xh,Braun:2007tp,Braun:2007vy,Bertolini:2014dna,Buchbinder:2016rmw,Buchbinder:2019eal,Buchbinder:2019hyb,Buchbinder:2017azb,Buchbinder:2018hns,Buchbinder:2002pr,Anderson:2015yzz} and so on have been discussed in the literature. Within the context of heterotic M-theory, it was shown that compactifying the observable sector on a specific Schoen Calabi--Yau threefold equipped with a particular holomorphic gauge bundle with structure group $SU(4)$ produces a low-energy $N=1$ supersymmetric theory with precisely the spectrum of the MSSM \cite{Braun:2005bw,Braun:2005zv,Braun:2005nv}; that is, three families of quarks and leptons with three right-handed neutrino chiral supermultiplets, one per family, and a Higgs-Higgs conjugate pair of chiral superfields. There are no vector-like pairs and no exotic fields. 
However, in addition to the gauge group $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y}$ of the MSSM, there is an extra gauged $U(1)_{B-L}$ group. In a series of papers \cite{Ovrut:2014rba,Ovrut:2015uea,Deen:2016vyh}, it was shown that if $N=1$ supersymmetry is softly broken at the unification scale, there exists an extensive set of initial soft breaking parameters -- dubbed ``viable black points'' -- such that, when the theory is scaled down to low energy, all phenomenological requirements are satisfied. More precisely, the $B-L$ symmetry is broken at a sufficiently high scale, the electroweak symmetry is spontaneously broken at the correct scale with the measured values for the $W^{\pm}$ and $Z^{0}$ gauge bosons, the Higgs boson mass is within three sigma of the experimentally measured value, and all sparticle masses exceed their present experimental lower bounds. Remarkably, the initial viable soft supersymmetry breaking parameters appear to be uncorrelated and require no fine-tuning. This realistic theory is referred to as the $B-L$ MSSM or the strongly coupled heterotic standard model. However, in order to be a completely viable vacuum state, it is essential that there exists a holomorphic gauge bundle on the Schoen threefold in the hidden sector. That such a hidden sector gauge bundle can exist and be compatible with the observable sector $SU(4)$ bundle and the Bogomolov inequality was shown in \cite{Braun:2006ae}. In conjunction with the $SU(4)$ observable sector bundle, this hidden sector gauge bundle must be consistent with a number of constraints. 
These are~\cite{Ovrut:2018qog}: 1) the $SU(4)$ holomorphic vector bundle must be slope-stable so that its gauge connection satisfies the Hermitian Yang--Mills equations, 2) allowing for five-branes in the $S^{1}/{\mathbb{Z}}_{2}$ orbifold interval, the entire theory must be {\it anomaly free}, and 3) the squares of the unified gauge coupling parameters in both the observable and hidden sectors of the theory must be {\it positive definite}. Furthermore, the hidden sector gauge bundle should be chosen so that it is 4) slope-stable and, hence, its gauge connection satisfies the Hermitian Yang--Mills equations, and 5) it does not spontaneously break $N=1$ supersymmetry. In a previous paper \cite{Braun:2013wr}, several such hidden sector bundles, composed of both a single line bundle and a direct sum of line bundles, were presented and proven to satisfy all of these constraints. Unfortunately, it was shown that the effective four-dimensional theory corresponded to the {\it weakly coupled} heterotic string. Hence, amongst other problems, the correct value for the observable sector unification scale and gauge coupling could not be obtained. It is the purpose of this paper to rectify this problem. We will provide a hidden sector gauge bundle -- characterised by a line bundle $L$ that gives an induced rank-two bundle $L\oplus L^{-1}$ so as to embed properly into $E_{8}$ -- which not only satisfies all of the above ``vacuum'' constraints, but, in addition, corresponds to the {\it strongly coupled} heterotic string. We will show that there is a substantial region of K\"ahler moduli space for which a) the $S^{1}/{\mathbb{Z}}_{2}$ orbifold length is sufficiently larger than the average Calabi--Yau radius, and b) that the effective strong coupling parameter is large enough to obtain the correct value for the observable sector $SO(10)$ unification mass and gauge coupling. 
We refer to these additional criteria as the ``dimensional reduction'' and ``physical'' constraints respectively. Finally, we will show, via an effective field theory analysis, that this hidden sector line bundle preserves $N=1$ supersymmetry in the effective $D=4$ field theory. The fact that, by necessity, our results are only computed to first order, $\kappa_{11}^{4/3}$, within the context of the strongly coupled heterotic string, leads one to ask what the effect of higher-order corrections might be. To partially address this, we calculate a number of important quantities to next order, that is, order $\kappa_{11}^{6/3}$, and compare the results against our first-order computations. We find that, in all cases, by going to higher order the physical behavior of the quantities analyzed actually improves over the lower-order results. These higher-order results are computed and analyzed in detail in Appendix D. \section{The \texorpdfstring{$B-L$}{B-L} MSSM Heterotic Standard Model} The $B-L$ MSSM vacuum of heterotic M-theory was introduced in \cite{Braun:2005bw,Braun:2005zv,Braun:2005nv} and various aspects of the theory were discussed in detail in \cite{Marshall:2014kea,Marshall:2014cwa,Dumitru:2018jyb,Dumitru:2018nct,Dumitru:2019cgf}. This phenomenologically realistic theory is obtained as follows. First, eleven-dimensional Hořava--Witten theory \cite{Horava:1995qa,Horava:1996ma} -- which is valid to order $\kappa_{11}^{2/3}$, where $\kappa_{11}$ is the eleven-dimensional Planck constant -- is compactified on a specific Calabi--Yau threefold $X$ down to a five-dimensional $M_{4} \times S^{1}/\mathbb{{Z}}_{2}$ effective theory, with $N=1, D=5$ supersymmetry in the bulk space and $N=1, D=4$ supersymmetry on the orbifold boundaries \cite{Lukas:1998yy, Lukas:1998tt}. By construction, this five-dimensional theory is also only valid to order $\kappa_{11}^{2/3}$. A BPS double domain wall vacuum solution of this theory was then presented \cite{Lukas:1998tt}. 
This BPS vacuum of the five-dimensional theory -- which will be discussed in detail in Appendix D -- can, in principle, be computed to all orders as an expansion in $\kappa_{11}^{2/3}$ and used to dimensionally reduce to a four-dimensional, $N=1$ supersymmetric theory on $M_4$. However, since the five-dimensional effective theory is only defined to order $\kappa_{11}^{2/3}$, and since solving the BPS vacuum equations to higher order for the Calabi--Yau threefold associated with the $B-L$ MSSM is very difficult, it is reasonable to truncate the BPS vacuum at order $\kappa_{11}^{2/3}$ as well. Dimensionally reducing with respect to this ``linearized'' solution to the BPS equations then leads to the four-dimensional $N=1$ supersymmetric effective Lagrangian for the $B-L$ MSSM vacuum of heterotic M-theory. By construction, this four-dimensional theory is also only valid to order $\kappa_{11}^{2/3}$ -- except for several quantities, specifically the dilaton, the gauge couplings of both the observable and hidden sectors and the Fayet--Iliopoulos term associated with any $U(1)$ gauge symmetry of the hidden sector, which are well-defined to order $\kappa_{11}^{4/3}$ \cite{Lukas:1997fg, Lukas:1998tt, Lukas:1998hk, Ovrut:2018qog}. All geometric moduli are obtained by averaging the associated five-dimensional fields over the fifth dimension. Having discussed the generic construction of the four-dimensional effective theory, we will, in this section, simply present the basic mathematical formalism and notation required for the analysis in this paper. \subsection{The Calabi--Yau Threefold} The Calabi--Yau manifold $X$ is chosen to be a torus-fibered threefold with fundamental group $\pi_1(X)=\ensuremath{\mathbb{Z}}_3 \times \ensuremath{\mathbb{Z}}_3$. 
More specifically, the Calabi--Yau threefold $X$ is the fiber product of two rational elliptic $\dd\mathbb{P}_{9}$ surfaces, that is, a self-mirror Schoen threefold \cite{MR923487,Braun:2004xv}, quotiented with respect to a freely acting $\mathbb{Z}_{3} \times \mathbb{Z}_{3}$ isometry. Its Hodge data is $h^{1,1}=h^{1,2}=3$, so there are three K\"ahler and three complex structure moduli. The complex structure moduli will play no role in the present paper. Relevant here is the degree-two Dolbeault cohomology group \begin{equation} H^{1,1}\big(X,\ensuremath{{\mathbb{C}}}\big)= \Span_\ensuremath{{\mathbb{C}}} \{ \omega_1,\omega_2,\omega_3 \} \ , \label{1} \end{equation} where $\omega_i=\omega_{ia {\bar{b}}}$ are harmonic $(1,1)$-forms on $X$ with the properties \begin{equation} \omega_3\wedge\omega_3=0 \ ,\quad \omega_1\wedge\omega_3=3\,\omega_1\wedge\omega_1 \ ,\quad \omega_2\wedge\omega_3=3\,\omega_2\wedge\omega_2 \ . \label{2} \end{equation} Defining the intersection numbers as \begin{equation} d_{ijk} = \frac{1}{v} \int_X \omega_i \wedge \omega_j \wedge \omega_k \ , \quad i,j,k=1,2,3\ , \label{3} \end{equation} where $v$ is a reference volume of dimension (length)$^6$, it follows that \begin{equation}\label{4} (d_{ijk}) = \left( \begin{array}{ccc} (0,\tfrac13,0) & (\tfrac13,\tfrac13,1) & (0,1,0) \\ (\tfrac13,\tfrac13,1) & (\tfrac13,0,0) & (1,0,0) \\ (0,1,0) & (1,0,0) & (0,0,0) \end{array} \right) \ . \end{equation} The $(i,j)$-th entry in the matrix corresponds to the triplet $(d_{ijk})_{k=1,2,3}$. The K\"ahler cone is the positive octant \begin{equation} \ensuremath{\mathcal{K}} = H^2_{+}(X,\ensuremath{{\mathbb{R}}}) \subset H^2(X,\ensuremath{{\mathbb{R}}})\ . \label{7} \end{equation} The K\"ahler form, defined to be $\omega_{a {\bar{b}}}=ig_{a {\bar{b}}}$, where $g_{a {\bar{b}}}$ is the Ricci-flat metric on $X$, can be any element of $\ensuremath{\mathcal{K}}$. 
That is, suppressing the Calabi--Yau indices, the K\"ahler form can be expanded as \begin{equation} \omega = a^i\omega_i , \quad \text{where } a^i >0 \ . \label{8} \end{equation} The real, positive coefficients $a^i$ are the three $(1,1)$ K\"ahler moduli of the Calabi--Yau threefold. Here, and throughout this paper, upper and lower $H^{1,1}$ indices are summed unless otherwise stated. The dimensionless volume modulus is defined by \begin{equation} V=\frac{1}{v} \int_X \sqrt{g} \label{9} \end{equation} and, hence, the dimensionful Calabi--Yau volume is ${\bf{V}}=vV$. Using the definition of the K\"ahler form and the intersection numbers \eqref{3}, $V$ can be written as \begin{equation} V=\frac{1}{6v}\int_X \omega \wedge \omega \wedge \omega= \frac{1}{6} d_{ijk} a^i a^j a^k \ . \label{10} \end{equation} It is sometimes useful to express the three $(1,1)$ moduli in terms of $V$ and two additional independent moduli. This can be accomplished by defining the scaled shape moduli \begin{equation} b^i=V^{-1/3}a^i \ , \qquad i=1,2,3 \ . \label{11} \end{equation} It follows from \eqref{10} that they satisfy the constraint \begin{equation} d_{ijk}b^ib^jb^k=6 \label{12} \end{equation} and, hence, represent only two degrees of freedom. \subsection{The Observable Sector Bundle} On the observable orbifold plane, the vector bundle $\ensuremath{V^{(1)}}$ on $X$ is chosen to be a specific holomorphic bundle with structure group $SU(4)\subset E_8$. This bundle was discussed in detail in \cite{Braun:2005nv,Braun:2005bw,Gomez:2005ii,Braun:2006ae}. Here we will present only those properties of this bundle relevant to the present paper. First of all, in order to preserve $N=1$ supersymmetry in the low-energy four-dimensional effective theory on $M_{4}$, this bundle must be both slope-stable and have vanishing slope~\cite{Braun:2005zv,Braun:2006ae}. 
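The moduli relations above lend themselves to a quick numerical sanity check. The following Python sketch (illustrative only; the sample moduli values are arbitrary and not drawn from the text) evaluates the volume \eqref{10} from the intersection numbers \eqref{4} and verifies the shape-moduli constraint \eqref{12}.

```python
import itertools

# Nonzero intersection numbers d_{ijk} of eq. (4), 0-indexed (a^1..a^3 become
# a[0]..a[2]); d is totally symmetric, so all permutations share one value.
d = {}
for idx, val in {(0, 0, 1): 1 / 3, (0, 1, 1): 1 / 3, (0, 1, 2): 1.0}.items():
    for p in set(itertools.permutations(idx)):
        d[p] = val

def volume(a):
    """Dimensionless volume V = (1/6) d_ijk a^i a^j a^k, eq. (10)."""
    return sum(d.get((i, j, k), 0.0) * a[i] * a[j] * a[k]
               for i in range(3) for j in range(3) for k in range(3)) / 6.0

# Arbitrary (hypothetical) sample moduli, chosen only for illustration.
a = [2.0, 3.0, 5.0]
V = volume(a)     # equals (1/6)((a1)^2 a2 + a1 (a2)^2 + 6 a1 a2 a3)

# Shape moduli b^i = V^(-1/3) a^i of eq. (11) must satisfy d_ijk b^i b^j b^k = 6,
# eq. (12), leaving only two independent degrees of freedom.
b = [ai / V ** (1 / 3) for ai in a]
cubic_b = 6.0 * volume(b)
```

For the sample point above the cubic evaluates to $V=35$, and the shape-moduli constraint is recovered identically.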
Recall that the slope of any bundle or sub-bundle ${\cal{F}}$ is defined as \begin{equation} \mu({\cal{F}})= \frac{1}{\rank({\cal{F}})v^{2/3}} \int_X{c_1({\cal{F}})\wedge \omega \wedge \omega} \ , \label{50} \end{equation} where $\omega$ is the K\"ahler form in \eqref{8}. Since the first Chern class $c_1$ of any $SU(N)$ bundle must vanish, it follows immediately that $\mu(\ensuremath{V^{(1)}})=0$, as required. However, demonstrating that our chosen bundle is slope-stable is non-trivial and was proven in detail in several papers \cite{Braun:2005nv,Braun:2005bw,Gomez:2005ii}. The $SU(4)$ vector bundle will indeed be slope-stable in a restricted, but large, region of the positive K\"ahler cone. As proven in detail in~\cite{Braun:2006ae}, this will be the case in a subspace of the K\"ahler cone defined by seven inequalities; in this region, all sub-bundles of $V^{(1)}$ have negative slope. These seven inequalities can be slightly simplified into the statement that the moduli $a^{i}$, $i=1,2,3$, must satisfy at least one of the two inequalities \begin{equation}\label{51} \begin{gathered} \left( a^1 < a^2 \leq \sqrt{\tfrac{5}{2}} a^1 \quad\text{and}\quad a^3 < \frac{ -(a^1)^2-3 a^1 a^2+ (a^2)^2 }{ 6 a^1-6 a^2 } \right) \quad\text{or}\\ \left( \sqrt{\tfrac{5}{2}} a^1 < a^2 < 2 a^1 \quad\text{and}\quad \frac{ 2(a^2)^2-5 (a^1)^2 }{ 30 a^1-12 a^2 } < a^3 < \frac{ -(a^1)^2-3 a^1 a^2+ (a^2)^2 }{ 6 a^1-6 a^2 } \right) \ . \end{gathered} \end{equation} The subspace $\ensuremath{\mathcal{K}}^s$ satisfying \eqref{51} is a full-dimensional subcone of the K\"ahler cone $\ensuremath{\mathcal{K}}$ defined in \eqref{7}. It is a cone because the inequalities are homogeneous. In other words, only the angular part of the K\"ahler moduli (the $b^i$) is constrained, but not the overall volume. 
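For concreteness, the inequalities \eqref{51} can be encoded directly. The short Python sketch below (an illustration, with arbitrarily chosen sample points not taken from the text) checks membership in $\ensuremath{\mathcal{K}}^s$ and the fact that membership is invariant under the overall scaling $a^i \to \mu a^i$.

```python
import math

def in_stability_region(a1, a2, a3):
    """Membership test for the slope-stability subcone K^s of eq. (51).

    Assumes a1 != a2 so the common upper bound on a^3 is well defined.
    """
    upper = (-a1**2 - 3 * a1 * a2 + a2**2) / (6 * a1 - 6 * a2)
    branch1 = (a1 < a2 <= math.sqrt(2.5) * a1) and (a3 < upper)
    branch2 = (math.sqrt(2.5) * a1 < a2 < 2 * a1) and \
              ((2 * a2**2 - 5 * a1**2) / (30 * a1 - 12 * a2) < a3 < upper)
    return branch1 or branch2

# Hypothetical sample points.  The inequalities are homogeneous of degree one,
# so rescaling a point by any mu > 0 cannot change membership.
inside = in_stability_region(1.0, 1.2, 2.0)
scaled = in_stability_region(7.0, 8.4, 14.0)   # the same point scaled by mu = 7
outside = in_stability_region(1.0, 0.5, 2.0)   # violates a1 < a2 in both branches
```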
\begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{fig_starmap_copy.pdf} \caption{The observable sector stability region in the K\"ahler cone.} \label{fig:starmap} \end{figure} Hence, it is best displayed as a two-dimensional ``star map'' as seen by an observer at the origin. This is shown in Figure 1. For K\"ahler moduli restricted to this subcone, the four-dimensional low-energy theory in the observable sector is $N=1$ supersymmetric. Having established that our specific $SU(4)$ holomorphic vector bundle preserves four-dimensional $N=1$ supersymmetry, let us examine the physical content of the effective theory on $M_{4}$. To begin with, $SU(4) \times Spin(10)$ is a maximal-rank subgroup of $E_{8}$. Hence, $SU(4)$ breaks the $E_{8}$ group to \begin{equation} E_8 \to Spin(10) \ . \label{13} \end{equation} However, to proceed further, one must break this $Spin(10)$ ``grand unified'' group down to the gauge group of the MSSM. This is accomplished by turning on two \emph{flat} Wilson lines, each associated with a different $\ensuremath{\mathbb{Z}}_3$ factor of the $\ensuremath{\mathbb{Z}}_3 \times \ensuremath{\mathbb{Z}}_3$ holonomy of $X$. Doing this preserves the $N=1$ supersymmetry of the effective theory, but breaks the observable gauge group down to \begin{equation} Spin(10) \to SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_{B-L} \ . \label{17} \end{equation} As discussed in Section 5 below, the mass scales associated with the two Wilson lines can be approximately the same, or separated by up to one order of magnitude. Be that as it may, for energies below the lightest Wilson line mass, the particle spectrum of the $B-L$ MSSM is exactly that of the MSSM; that is, three families of quarks and leptons, including three right-handed neutrino chiral supermultiplets -- one per family -- and exactly one pair of Higgs-Higgs conjugate chiral superfields. There are no vector-like pairs of particles and no exotics of any kind. 
It follows from \eqref{17}, however, that the gauge group is that of the MSSM plus an additional gauged $U(1)$ associated with the $B-L$ quantum numbers. The physics of this additional gauge symmetry -- which is broken far above the electroweak scale -- is discussed in detail in a number of papers \cite{Deen:2016vyh,Ambroso:2009jd,Ovrut:2012wg,Ovrut:2014rba,Ovrut:2015uea,Barger:2008wn,FileviezPerez:2009gr,FileviezPerez:2012mj} and is phenomenologically acceptable. \subsection{The Hidden Sector Bundle} In \cite{Ovrut:2018qog}, the hidden-sector vector bundle was chosen to have the generic form of a Whitney sum \begin{equation} V^{(2)}={\cal{V}}_{N} \oplus {\cal{L}}\ , \qquad {\cal{L}}=\bigoplus_{r=1}^R L_r \ , \label{dude1} \end{equation} where ${\cal{V}}_{N}$ is a slope-stable, non-abelian bundle and each $L_{r}$, $r=1,\dots,R$, is a holomorphic line bundle with structure group $U(1)$. In Appendix A below, a careful analysis is given of more restrictive vector bundles, consisting of the Whitney sum of line bundles only; that is \begin{equation} {V^{(2)}}={\cal{L}}=\bigoplus_{r=1}^R L_r \ . \label{dude1aA} \end{equation} This is presented to set the context for future work involving hidden sectors with several line bundles. However, in this present paper, we will further simplify the hidden sector vector bundle by requiring it to be defined by a {\it single} holomorphic line bundle $L$, in such a way that its $U(1)$ structure group embeds into $E_{8}$. It follows from the discussion in Appendix A that a line bundle $L$ is associated with a divisor of $X$ and is conventionally expressed as \begin{equation} L=\ensuremath{{\cal O}}_X(l^1, l^2, l^3) \ , \end{equation} where the $l^i$ are integers satisfying the condition \begin{equation} (l^1+l^2) \op{mod} 3 = 0 \ . 
\label{22} \end{equation} This additional constraint is imposed in order for these bundles to arise from $\ensuremath{\mathbb{Z}}_3 \times \ensuremath{\mathbb{Z}}_3$ equivariant line bundles on the covering space of $X$. The structure group of $L$ is $U(1)$. However, there are many distinct ways in which this $U(1)$ subgroup can be embedded into the hidden-sector $E_{8}$ group. The choice of embedding determines two important properties of the effective low-energy theory. First, a specific embedding will define a commutant subgroup of $\Ex8$, which appears as the symmetry group for the four-dimensional effective theory. Second, the explicit choice of embedding will determine a real numerical constant \begin{equation} a=\tfrac{1}{4} \tr _{E_{8}}Q^2 \ , \label{26A} \end{equation} where $Q$ is the generator of the $U(1)$ factor embedded in the $\Rep{248}$ adjoint representation of the hidden sector $E_{8}$, and the trace $\tr$ includes a factor of $1/30$. This coefficient will enter several of the consistency conditions, such as the anomaly cancellation equation, required for an acceptable vacuum solution. \subsection{Bulk Space Five-Branes} In strongly coupled heterotic M-theory, there is a one-dimensional interval $S^{1}/{\mathbb{Z}}_{2}$ separating the observable and hidden orbifold planes. Denoting by $\rho$ an arbitrary reference radius of $S^{1}$, the reference length of this one-dimensional interval is given by $\pi \rho$. A real coordinate on this interval is written as $x^{11} \in [0, \pi \rho]$. As discussed in Appendix A, arbitrary dimensionless functions on $M_{4} \times S^{1}/{\mathbb{Z}}_{2}$ can be averaged over this interval, leading to moduli that are purely functions on $M_{4}$. Averaging the $b$ function in the five-dimensional metric, $\dd s_{5}^{2} = \dots +b^{2}(\dd x^{11})^{2}$, defines a four-dimensional modulus \begin{equation} \frac{\ensuremath{{\widehat R}}}{2}=\langle b \rangle _{11} \ . 
\label{case1} \end{equation} The physical length of this orbifold interval is then given by $\pi \rho \ensuremath{{\widehat R}}$. It is convenient to define a new coordinate $z$ by $z=\frac{x^{11}}{\pi \rho}$, which runs over the interval $z \in [0,1]$. In addition to the holomorphic vector bundles on the observable and hidden orbifold planes, the bulk space between these planes can contain five-branes wrapped on two-cycles ${\cal{C}}_2^{(n)}$, $n=1,\dots,N$ in $X$. Cohomologically, each such five-brane is described by the $(2,2)$-form Poincar\'e dual to ${\cal C}_2^{(n)}$, which we denote by $W^{(n)}$. Note that to preserve $N=1$ supersymmetry in the four-dimensional theory, these curves must be holomorphic and, hence, each $W^{(n)}$ is an effective class. In Appendix A, we present the formalism associated with an arbitrary number $N$ of such five-branes. However, in the main text of this paper, we will consider only a {\it single} five-brane. We denote its location in the bulk space by $z_{1}$, where $z_{1} \in [0,1]$. When convenient, we will re-express this five-brane location in terms of the parameter $\lambda=z_{1}-\frac{1}{2}$, where $\lambda \in [-\frac{1}{2},\frac{1}{2}]$. \section{The Vacuum Constraints} There are three fundamental constraints that any consistent vacuum state of the $B-L$ MSSM must satisfy. These are the following. \subsection{The SU(4) Slope Stability Constraint} The $SU(4)$ holomorphic vector bundle discussed in subsection 2.2 must be slope-stable so that its gauge connection satisfies the Hermitian Yang--Mills equations. 
As presented in \eqref{51}, this constrains the allowed region of K\"ahler moduli space to be \begin{equation}\label{51A} \begin{gathered} \left( a^1 < a^2 \leq \sqrt{\tfrac{5}{2}} a^1 \quad\text{and}\quad a^3 < \frac{ -(a^1)^2-3 a^1 a^2+ (a^2)^2 }{ 6 a^1-6 a^2 } \right) \quad\text{or}\\ \left( \sqrt{\tfrac{5}{2}} a^1 < a^2 < 2 a^1 \quad\text{and}\quad \frac{ 2(a^2)^2-5 (a^1)^2 }{ 30 a^1-12 a^2 } < a^3 < \frac{ -(a^1)^2-3 a^1 a^2+ (a^2)^2 }{ 6 a^1-6 a^2 } \right) \ . \end{gathered} \end{equation} This constraint depends entirely on the phenomenologically acceptable non-abelian vector bundle in the observable sector. However, there are two remaining fundamental constraints that strongly depend on the choice of the hidden sector bundle and on the number of bulk space five-branes. These two constraints, which are required for any consistent $B-L$ MSSM vacuum, were discussed in general in \cite{Ovrut:2018qog}, and are presented in Appendix A for heterotic vacua with any number of bulk space five-branes in which the hidden-sector gauge bundle consists of a Whitney sum of line bundles. In the text of this paper, however, we will limit our analysis to hidden-sector vacua constructed from a {\it single} line bundle $L$ only and to the case of a {\it single} five-brane. Under these restrictions, the fundamental vacuum constraints given in \cite{Ovrut:2018qog} and Appendix A simplify to the following conditions. \subsection{Anomaly Cancellation Constraint} In \eqref{29} of Appendix A, the condition for anomaly cancellation between the observable sector, a hidden sector composed of the Whitney sum of line bundles and an arbitrary number of bulk space five-branes is presented. 
Restricting this to a single hidden-sector line bundle $L$, a single bulk-space five-brane and using the formalism presented in that Appendix, the anomaly cancellation equation can be simplified and then rewritten in the form \begin{equation} W_i= \bigl( \tfrac{4}{3},\tfrac{7}{3},-4\bigr)\big|_i + a \, d_{ijk} l^j l^k \geq 0 \ , \qquad i=1,2,3 \ , \label{33A} \end{equation} where the coefficient $a$ is defined in \eqref{26A}. The positivity constraint on $W$ follows from the requirement that the five-brane wraps an effective class to preserve $N=1$ supersymmetry. \subsection{Gauge Coupling Constraints} The general expressions for the square of the unified gauge couplings in both the observable and hidden sectors -- that is, ${4\pi}/{(g^{(1)})^{2}}$ and ${4\pi}/{(g^{(2)})^{2}}$ respectively -- were presented in \cite{Ovrut:2018qog}. In Appendix A, these are discussed within the context of a hidden-sector bundle \eqref{dude1aA} consisting of the Whitney sum of line bundles, as well as an arbitrary number of five-branes in the bulk-space interval. Here, we restrict those results to the case of a hidden-sector bundle constructed from a single line bundle $L$ and a single five-brane located at $\lambda= z_{1}-\frac{1}{2} \in \left[-\tfrac{1}{2},\tfrac{1}{2}\right]$. The charges $\beta_i^{(0)}$ and $\beta_i^{(1)}$, and the constant coefficient $\epsilon'_S$ are discussed in Appendix A and given by \begin{equation} \beta_i^{(0)}=\left(\tfrac{2}{3},-\tfrac{1}{3},4\right)_{i} \ , \qquad \beta_i^{(1)}=W_{i} \ , \label{ruler1} \end{equation} and \begin{equation} \epsilon'_S = \pi \epsilon_{S} \ , \qquad \epsilon_{S}= \left(\frac{\kappa_{11}}{4\pi} \right)^{2/3}\frac{2\pi\rho}{v^{2/3}} \ . \label{40AA} \end{equation} The parameters $v$ and $\rho$ are defined above and $\kappa_{11}$ is the eleven-dimensional Planck constant. 
Written in terms of the K\"ahler moduli $a^{i}$ using \eqref{11}, the constraints that $(g^{(1)})^{2}$ and $(g^{(2)})^{2}$ be positive definite are then given by \begin{equation} \label{68AA} \begin{split} d_{ijk} a^i a^j a^k- 3 \epsilon_S' \frac{\ensuremath{{\widehat R}}}{V^{1/3}} \bigl( -(\tfrac{8}{3} a^1 + \tfrac{5}{3} a^2 + 4 a^3) \qquad &\\ + 2(a^1+a^2) -(\tfrac{1}{2}-\lambda)^2 a^i \,{W}_i \bigr) &> 0 \ , \end{split} \end{equation} and \begin{equation} \label{69AA} \begin{split} d_{ijk} a^i a^j a^k- 3 \epsilon_S' \frac{\ensuremath{{\widehat R}}}{V^{1/3}} \bigl(a\,d_{ijk}a^i l^j l^k \qquad &\\ + 2(a^1+a^2) -(\tfrac{1}{2}+\lambda)^2 a^i \,{W}_i \bigr) &> 0 \ , \end{split} \end{equation} respectively. The Calabi--Yau volume modulus $V$ is defined in terms of the $a^{i}$ moduli in \eqref{10}, and $\ensuremath{{\widehat R}}$ is the independent $S^{1}/{\mathbb{Z}}_{2}$ length modulus defined in \eqref{case1}. Note that the coefficient $a$ defined in \eqref{26A} enters both expressions via the five-brane class $ {W}_i$ and independently occurs in the second term of \eqref{69AA}. \section{A Solution of the \texorpdfstring{$B-L$}{B-L} MSSM Vacuum Constraints}\label{sec:constraints} In this section, we will present a simultaneous solution to all of the $B-L$ MSSM vacuum constraints listed above -- namely: 1) the slope-stability conditions given in \eqref{51A} for the $SU(4)$ observable sector gauge bundle, 2) the anomaly cancellation condition with an effective five-brane class presented in \eqref{33A}, and, finally, 3) the conditions for positive squared gauge couplings in both the observable and hidden sectors, presented in \eqref{68AA} and \eqref{69AA}. The slope-stability conditions for the $SU(4)$ observable sector gauge bundle is independent of the choice of the hidden-sector gauge bundle and any bulk-space five-branes. 
However, the remaining constraints depend strongly upon the specific choice of the line bundle $L$ in the hidden sector, its exact embedding in the hidden sector $E_{8}$ gauge group and, finally, on the location $\lambda$ and the effective class of the five-brane in the $S^{1}/{\mathbb{Z}}_{2}$ interval. For this reason, we first consider the $SU(4)$ slope-stability conditions. \subsection{The \texorpdfstring{$SU(4)$}{SU(4)} Slope-Stability Solution} The region of K\"ahler moduli space satisfying the slope-stability conditions \eqref{51A} was shown to be a three-dimensional subcone of the positive octant of the full K\"ahler cone. This subcone was displayed above as a two-dimensional ``star map'' in Figure 1. Here, to be consistent with the solution and graphical display of the remaining sets of constraints, we present a portion of the solution space of slope-stability conditions \eqref{51A} as a three-dimensional figure in a positive region of K\"ahler moduli space -- restricted, for specificity, to $0 \leq a^i \leq 10$ for $i=1,2,3$. This is shown in Figure 2. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{SU4.pdf} \caption{The region of slope-stability for the $SU(4)$ observable-sector bundle, restricted to $0 \leq a^i \leq 10$ for $i=1,2,3$.} \label{fig:SU4} \end{figure} Before continuing, we note that the slope-stability constraint regions in \eqref{51A} are invariant under scaling $a^{i}\to \mu a^{i}$, where $\mu$ is a positive real number. Figure 2 includes \emph{all} K\"ahler moduli in the restricted region satisfying the $SU(4)$ slope-stability constraints. \subsection{An Anomaly Cancellation Solution}\label{sec:anomaly_cancellation} Unlike the $SU(4)$ slope-stability constraints, the condition \eqref{33A} for anomaly cancellation depends on the explicit choice of the hidden sector line bundle $L$, as well as on the parameter $a$ defined in \eqref{26A}. 
Hence, one must specify the exact embedding of this line bundle into the hidden sector $E_{8}$ gauge group. Here, for specificity, we will choose the line bundle to be \begin{equation} L=\ensuremath{{\cal O}}_X(2, 1, 3) \ . \label{red1} \end{equation} Note that each entry is an integer and that $l^1=2$ and $l^2=1$ satisfy the equivariance condition \eqref{22}, as they must. The reason for this choice of line bundle, and the presentation of several other line bundles that lead to acceptable results, will be discussed below. Here, we focus exclusively on line bundle \eqref{red1}. Generically, there are numerous distinct embeddings of a given line bundle into an $E_{8}$ gauge group, each with its own commutant subgroup and $a$ parameter. In this section, to be concrete, we will choose a particular embedding of $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ into $E_{8}$ and, having done so, explicitly calculate its $a$ parameter. The explicit embedding of $L$ into $E_8$ is chosen as follows. First, recall that \begin{equation} SU(2) \times E_7 \subset E_8 \label{red2} \end{equation} is a maximal subgroup. With respect to $SU(2) \times E_7$, the $\Rep{248}$ representation of $E_8$ decomposes as \begin{equation} \Rep{248} \to (\Rep{1}, \Rep{133}) \oplus (\Rep{2}, \Rep{56}) \oplus (\Rep{3}, \Rep{1})\ . \label{red3} \end{equation} Now choose the generator of the $U(1)$ structure group of $L$ in the fundamental representation of $SU(2)$ to be $(1,-1)$. It follows that under $SU(2) \rightarrow U(1)$ \begin{equation} \Rep{2} \to 1 \oplus -1 \ , \label{red4} \end{equation} and, hence, under $U(1) \times E_7$ \begin{equation} \Rep{248} \to (0, \Rep{133}) \oplus \bigl( (1, \Rep{56}) \oplus (-1, \Rep{56})\bigr) \oplus \bigl( (2, \Rep{1}) \oplus (0, \Rep{1}) \oplus (-2, \Rep{1}) \bigr) . \label{red5} \end{equation} The generator $Q$ of this embedding of the line bundle $L$ can be read off from expression \eqref{red5}. 
Inserting this into \eqref{26A}, we find that \begin{equation} a=1 . \label{red6} \end{equation} We note in passing that the four-dimensional effective theory associated with choosing this explicit embedding has gauge symmetry \begin{equation} H=E_{7} \times U(1) \ , \label{red7} \end{equation} where the second factor is an ``anomalous'' $U(1)$. It is identical to the structure group of $L$ and arises in the low-energy theory since $U(1)$ commutes with itself. This will be discussed in detail later in this paper. An important consequence of the explicit embedding \eqref{red2}, \eqref{red4} and, hence, \eqref{red5} is the following. To begin with, we note that $ L=\ensuremath{{\cal O}}_X(2, 1, 3)$ is, indeed, a sub-line bundle of the hidden sector $E_{8}$ gauge group. However, ``embedding'' this line bundle into $E_{8}$ means that the single gauge connection associated with the $U(1)$ structure group of $L$ must also be a subset of the $\bf 248$ indexed non-abelian connection of $E_{8}$. Since the slope of this $E_{8}$ representation vanishes, it follows that the slope of the line bundle $L$ must also vanish. Generically, however, this will not be the case. It follows from \eqref{50} and \eqref{23} that the slope of $L$ is proportional to its first Chern class $c_{1}(L)=\frac{1}{v^{1/3}}(2\omega_{1}+\omega_{2}+3\omega_{3})$ and, hence, its slope does not vanish anywhere in K\"ahler moduli space. Therefore, to ``embed'' $L$ into $E_{8}$ as specified by \eqref{red4}, it is necessary to extend the hidden sector gauge bundle to the ``induced'' rank 2 bundle \begin{equation} \mathcal{V}=L \oplus L^{-1} \ . \label{ind} \end{equation} The first Chern class of this induced bundle necessarily vanishes and, hence, the associated abelian connection can be appropriately embedded into the hidden sector {\bf 248}-valued $E_{8}$ gauge connection. 
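The embedding coefficient $a$ in \eqref{red6} follows from a short counting exercise over the decomposition \eqref{red5}. The Python fragment below (a schematic check, not part of the derivation) tallies $\tr_{E_{8}} Q^{2}$, including the conventional factor of $1/30$, and recovers $a=1$.

```python
# U(1) charges and multiplicities in the branching of the 248 of E8 under
# U(1) x E7, read off from eq. (red5):
# (0,133) + (1,56) + (-1,56) + (2,1) + (0,1) + (-2,1).
charges = [(0, 133), (1, 56), (-1, 56), (2, 1), (0, 1), (-2, 1)]

# Sanity check: the pieces fill out the full 248-dimensional adjoint.
dim = sum(mult for _, mult in charges)

# a = (1/4) tr_E8 Q^2, eq. (26A), where the E8 trace carries the
# conventional factor of 1/30.
tr_Q2 = sum(mult * q * q for q, mult in charges) / 30
a_param = tr_Q2 / 4
```

The unreduced trace is $56+56+4+4=120$; dividing by $30$ and then by $4$ gives $a=1$, in agreement with \eqref{red6}.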
We want to emphasize that this induced bundle was implicitly used in both the anomaly constraint \eqref{33A} and the gauge coupling constraints \eqref{68AA} and \eqref{69AA} since the parameter $a=1$ was computed using the generator $Q$ of $L \oplus L^{-1}$ derived from \eqref{red5}. Having discussed this in detail, let us now insert $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ and $a=1$ into the anomaly cancellation constraint \eqref{33A}. Using \eqref{4}, we find that \begin{equation} W_{i}=(9, 17, 0)|_{i} \geq 0 \quad\text{for each } i=1,2,3 \ . \label{red8} \end{equation} Hence, the anomaly cancellation condition is satisfied. \subsection{Moduli Scaling: Simplified Gauge Parameter Constraints} Before presenting solutions to the gauge coupling positivity constraints \eqref{68AA} and \eqref{69AA}, we observe the following important fact. Note, using expression \eqref{10} for the volume modulus $V$, that both of these constraints remain invariant under the scaling \begin{equation} a^{i} \to \mu a^{i} \ ,\qquad \epsilon'_{S}\ensuremath{{\widehat R}} \to \mu^{3} \epsilon'_{S}\ensuremath{{\widehat R}} \ , \label{wall1} \end{equation} where $\mu$ is any positive real number. It follows that the coefficient $\epsilon_S'\ensuremath{{\widehat R}}/V^{1/3}$ in front of the $\kappa_{11}^{4/3}$ terms in each of the two constraint equations can be set to unity by choosing the appropriate constant $\mu$; that is \begin{equation} \epsilon_S'\frac{\ensuremath{{\widehat R}}}{V^{1/3}} \to 1 \ . \label{wall2} \end{equation} We will refer to this choice of $\epsilon_S'\ensuremath{{\widehat R}}/V^{1/3}=1$ as the ``unity'' gauge. 
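The five-brane class \eqref{red8} can likewise be reproduced mechanically from \eqref{33A} and the intersection numbers \eqref{4}. The following sketch (illustrative only; exact rational arithmetic via Python's \texttt{fractions} module) confirms $W_{i}=(9,17,0)$ and its effectiveness for $L=\ensuremath{{\cal O}}_X(2,1,3)$ with $a=1$.

```python
import itertools
from fractions import Fraction as F

# Symmetrized intersection numbers d_{ijk} of eq. (4), as exact rationals,
# 0-indexed so that l^1..l^3 become l[0]..l[2].
d = {}
for idx, val in {(0, 0, 1): F(1, 3), (0, 1, 1): F(1, 3), (0, 1, 2): F(1)}.items():
    for p in set(itertools.permutations(idx)):
        d[p] = val

def five_brane_class(l, a_param):
    """W_i = (4/3, 7/3, -4)|_i + a d_ijk l^j l^k, eq. (33A)."""
    base = [F(4, 3), F(7, 3), F(-4)]
    return [base[i] + a_param * sum(d.get((i, j, k), F(0)) * l[j] * l[k]
                                    for j in range(3) for k in range(3))
            for i in range(3)]

# L = O_X(2,1,3) of eq. (red1), with embedding parameter a = 1.
W = five_brane_class([2, 1, 3], 1)
effective = all(Wi >= 0 for Wi in W)   # effectiveness of the five-brane class
```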
Working in unity gauge, the gauge coupling positivity constraints \eqref{68AA} and \eqref{69AA} simplify to \begin{equation} \label{68A} \begin{split} d_{ijk} a^i a^j a^k- 3\bigl( -(\tfrac83 a^1 + \tfrac53 a^2 + 4 a^3) \qquad& \\ + 2(a^1+a^2) -(\tfrac{1}{2}-\lambda)^2 a^i \,{W}_i \bigr) &> 0 \ , \end{split} \end{equation} and \begin{equation} \label{69A} \begin{split} d_{ijk} a^i a^j a^k- 3 \bigl(a\,d_{ijk}a^i l^j l^k \qquad& \\ + 2(a^1+a^2) -(\tfrac{1}{2}+\lambda)^2 a^i \,{W}_i \bigr) &> 0 \ . \end{split} \end{equation} In the following subsections, we will solve these constraints in unity gauge. Before doing so, however, we wish to emphasize again that the $SU(4)$ slope-stability constraints discussed above are also invariant under $a^{i} \to \mu a^{i}$ scaling and, hence, the results shown in Figure 2 remain unchanged. Of course, to be consistent with the solution of the anomaly cancellation constraint presented in \eqref{red8}, we will solve the gauge coupling positivity constraints for 1) the explicit choice of line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$, 2) the explicit embedding \eqref{red5} and 3) the associated embedding parameter $a=1$. In this specific case, the constraints \eqref{68A} and \eqref{69A} become \begin{equation} \begin{split} ({a^1})^2a^2+a^1({a^2})^2+6a^1a^2a^3&+2a^1-a^2+12a^3\\ &+3\left(\frac{1}{2}-\lambda \right)^2(9a^1+17a^2)>0 \end{split} \label{clip1} \end{equation} and \begin{equation} \begin{split} ({a^1})^2a^2+a^1({a^2})^2+6a^1a^2a^3&-29a^1-50a^2-12a^3\\ &+3\left(\frac{1}{2}+\lambda \right)^2(9a^1+17a^2)>0 \end{split} \label{clip2} \end{equation} where we have used \eqref{4}. \subsection{Five-Brane Location} It is also clear from the gauge coupling constraints \eqref{clip1} and \eqref{clip2} that, even in unity gauge, it is necessary to explicitly fix the location of the bulk space five-brane by choosing its location parameter $\lambda$. 
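The algebraic reduction from \eqref{68A} and \eqref{69A} to \eqref{clip1} and \eqref{clip2} can be spot-checked numerically. The Python sketch below (illustrative only, with randomly sampled moduli and five-brane locations) compares both forms and finds agreement to floating-point precision.

```python
import random

def cubic(a1, a2, a3):
    """d_ijk a^i a^j a^k for the intersection numbers of eq. (4)."""
    return a1**2 * a2 + a1 * a2**2 + 6 * a1 * a2 * a3

def lhs_68A(a1, a2, a3, lam):
    """Unity-gauge constraint (68A), specialized to W = (9,17,0) of eq. (red8)."""
    aW = 9 * a1 + 17 * a2                    # a^i W_i  (W_3 = 0)
    return cubic(a1, a2, a3) - 3 * (-(8/3 * a1 + 5/3 * a2 + 4 * a3)
                                    + 2 * (a1 + a2) - (0.5 - lam)**2 * aW)

def lhs_69A(a1, a2, a3, lam):
    """Unity-gauge constraint (69A) with a = 1 and l = (2,1,3)."""
    d_all = 23/3 * a1 + 44/3 * a2 + 4 * a3   # d_ijk a^i l^j l^k
    aW = 9 * a1 + 17 * a2
    return cubic(a1, a2, a3) - 3 * (d_all + 2 * (a1 + a2) - (0.5 + lam)**2 * aW)

def lhs_clip1(a1, a2, a3, lam):
    """Expanded form, eq. (clip1)."""
    return (cubic(a1, a2, a3) + 2 * a1 - a2 + 12 * a3
            + 3 * (0.5 - lam)**2 * (9 * a1 + 17 * a2))

def lhs_clip2(a1, a2, a3, lam):
    """Expanded form, eq. (clip2)."""
    return (cubic(a1, a2, a3) - 29 * a1 - 50 * a2 - 12 * a3
            + 3 * (0.5 + lam)**2 * (9 * a1 + 17 * a2))

random.seed(0)
pts = [(random.uniform(0.1, 10), random.uniform(0.1, 10),
        random.uniform(0.1, 10), random.uniform(-0.5, 0.5)) for _ in range(200)]
dev1 = max(abs(lhs_68A(*p) - lhs_clip1(*p)) for p in pts)
dev2 = max(abs(lhs_69A(*p) - lhs_clip2(*p)) for p in pts)
```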
As can be seen in \eqref{clip2}, the condition $(g^{(2)})^2>0$ is most easily satisfied when the value of $\lambda$ is as large as possible; that is, for the five-brane to be near the hidden wall. For concreteness, we will take \begin{equation} \lambda=0.49 \ . \label{cup1} \end{equation} Note that we do not simply set $\lambda=\frac{1}{2}$, so as to avoid unwanted ``small instanton'' transitions of the hidden sector~\cite{Ovrut:2000qi}; that is, to keep the five-brane as an independent entity. \subsection{Gauge Couplings Solution} In unity gauge, choosing $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ with $a=1$, and \eqref{red8} and \eqref{cup1}, one can solve the positive gauge coupling constraints \eqref{clip1} and \eqref{clip2} simultaneously. The results are presented in Figure 3, again restricted to the region $ 0 \leq a^i \leq 10~{\rm for}~ i=1,2,3$ of K\"ahler moduli space. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{g_positive.pdf} \caption{Simultaneous solution to both $(g^{(1)})^2 >0$ and $(g^{(2)})^2>0$ gauge coupling constraints \eqref{clip1} and \eqref{clip2} in unity gauge with $\lambda=0.49$, restricted to the region $ 0 \leq a^i \leq 10$ for $i=1,2,3$.} \label{fig:pos_couplings} \end{figure} \subsection{The Simultaneous Solution of All Required Constraints} Intersecting the results of the previous subsections, we can now present a simultaneous solution to all of the $B-L$ MSSM vacuum constraints listed above -- that is, 1) the solution for the $SU(4)$ slope-stability conditions given in Figure 2, 2) the solution for the anomaly cancellation condition with an effective five-brane class presented in \eqref{red8} and 3) the conditions for positive squared gauge couplings in both the observable and hidden sectors shown in Figure 3. 
We reiterate that this is a very specific solution to these constraints, with the hidden sector line bundle chosen to be $L=\ensuremath{{\cal O}}_X(2, 1, 3)$, with its specific embedding into the hidden sector $E_{8}$ given in \eqref{red5} leading to parameter $a=1$ and, finally, the location of the five-brane fixed at $\lambda=0.49$. The intersecting region of K\"ahler moduli space satisfying all of these constraints computed in unity gauge is shown in Figure 4. \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{AllRegion.pdf} \caption{ The region of K\"ahler moduli space where the $SU(4)$ slope-stability conditions, the anomaly cancellation constraint, and the positive squared gauge coupling constraints with $\lambda=0.49$ are simultaneously satisfied in unity gauge, restricted to $ 0 \leq a^i \leq 10$ for $i=1,2,3$. This amounts to the intersection of Figures 2 and 3.} \label{fig:all_constr} \end{figure} To conclude, we must emphasize a subtle but important point. The use of unity gauge, defined by \eqref{wall2}, disguises the fact that the constraint equations actually contain the expression $\epsilon'_{S} \ensuremath{{\widehat R}}$, where $\epsilon'_{S}$ is a coupling parameter and $\ensuremath{{\widehat R}}$ is the modulus for the $S^{1}/{\mathbb{Z}}_{2}$ orbifold interval. Under the scaling $a^{i} \rightarrow \mu a^{i}$ of the K\"ahler moduli, it is the {\it product} $\epsilon'_{S} \ensuremath{{\widehat R}}$ that scales as $\epsilon'_{S} \ensuremath{{\widehat R}} \rightarrow \mu^{3} \epsilon'_{S} \ensuremath{{\widehat R}}$. However, the invariance of the constraint equations under this scaling does {\it not} specify the scaling behavior of either $\epsilon'_{S}$ or $\ensuremath{{\widehat R}}$ individually -- only their product. It follows that, in principle, $\epsilon'_{S} \rightarrow \mu^{A}\epsilon'_{S}$ and $ \ensuremath{{\widehat R}} \rightarrow \mu^{B} \ensuremath{{\widehat R}}$ for any values of $A$ and $B$ as long as $A+B=3$. 
We will therefore interpret Figure 4 to be such that: 1) although every point $a^i$, $i=1,2,3$ contained in it satisfies all vacuum constraints, a given point can be realized for any value of $\epsilon'_{S}$, 2) under $\mu$ scaling, all that is required is that $\epsilon'_{S} \ensuremath{{\widehat R}} \rightarrow \mu^{3} \epsilon'_{S} \ensuremath{{\widehat R}}$, but the degree of scaling of $\epsilon'_{S}$ and $\ensuremath{{\widehat R}}$ individually remains undetermined. This interpretation will become important below, when we impose additional constraints in the following two sections. \section{Dimensional Reduction and Physical Constraints} In addition to the three ``vacuum'' constraints discussed above -- and solved in unity gauge for a specific choice of line bundle, line bundle embedding and five-brane location -- there are three additional conditions that must be satisfied for the $B-L$ MSSM theory to be viable. These can be broken into two categories. First, there is a new ``reduction'' constraint required for the consistency of the $d=11$ to $d=5$ heterotic M-theory dimensional reduction. Second, there are two purely ``phenomenological'' constraints. They are that the $Spin(10)$ grand unification scale, $M_{U}$, and the associated unified gauge coupling, $\alpha_{u}$, in the observable sector be consistent with the phenomenologically viable values for these quantities~\cite{Deen:2016vyh,Ovrut:2015uea,Ovrut:2012wg}. \subsection{The Reduction Constraint} We begin by discussing the constraint required for a consistent dimensional reduction on a Calabi--Yau threefold $X$ from the $d=11$ Hořava--Witten orbifold to five-dimensional heterotic M-theory. In order for this reduction to be viable, the averaged Calabi--Yau radius must, when calculated using the eleven-dimensional M-theory metric, be sufficiently smaller than the physical length of the $S^{1}/ \mathbb{Z}_{2}$ interval. 
That is, one must have \begin{equation} \frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6}} > 1 \ , \label{sun1} \end{equation} where the constant parameters $v$ and $\rho$ were introduced above and the moduli $V$ and $\ensuremath{{\widehat R}}$ are defined in \eqref{10} and \eqref{case1} respectively. The extra factor of $V^{-1/3}$ in the numerator arises because the $S^{1}/\mathbb{Z}_{2}$ interval length must be computed with respect to the eleven-dimensional metric. To see this, recall from \cite{Lukas:1998tt} that the eleven-dimensional metric ansatz for the reduction to five dimensions is given by \begin{equation} \dd s_{11}^{2}=V^{-2/3}g_{\alpha\beta} \dd x^{\alpha} \dd x^{\beta}+g_{AB} \dd x^{A}\dd x^{B}\ ,\label{eq:11d_metric} \end{equation} where $g_{\alpha\beta}$ is the five-dimensional metric and $g_{AB}$ is the metric of the Calabi--Yau threefold. Note that the factor of $V^{-2/3}$ is chosen so that $g_{\alpha\beta}$ is in the five-dimensional Einstein frame. To further reduce to four dimensions, one takes \begin{equation} g_{\alpha\beta} \dd x^{\alpha}\dd x^{\beta}=\ensuremath{{\widehat R}}^{-1}g_{\mu\nu} \dd x^{\mu} \dd x^{\nu}+\ensuremath{{\widehat R}}^{2}(\dd x^{11})^{2} \ ,\label{eq:5d_metric} \end{equation} where $x^{11}$ runs from $0$ to $\pi\rho$ and $g_{\mu\nu}$ is the four-dimensional Einstein frame metric. Note that, following the convention outlined in Appendix A and used throughout the text, we denote all moduli averaged over the $S^{1}/ \mathbb{Z}_{2}$ orbifold interval without the subscript ``$0$''. As measured by the five-dimensional metric, the $S^{1}/ \mathbb{Z}_{2}$ orbifold interval has length $\pi\rho\ensuremath{{\widehat R}}$. However, if one wants to compare the scale of the orbifold interval with that of the Calabi--Yau threefold, one must use the eleven-dimensional metric.
Substituting (\ref{eq:5d_metric}) into (\ref{eq:11d_metric}) and averaging the value of $V$ over the orbifold interval, we find \begin{equation} \dd s_{11}^{2}=V^{-2/3}\ensuremath{{\widehat R}}^{-1}g_{\mu\nu} \dd x^{\mu} \dd x^{\nu}+V^{-2/3}\ensuremath{{\widehat R}}^{2}( \dd x^{11})^{2}+g_{AB} \dd x^{A} \dd x^{B}\ .\label{eq:11d_metric-1} \end{equation} From this we see that, in eleven dimensions, the orbifold interval has length $\pi\rho\ensuremath{{\widehat R}} V^{-1/3}$, as used in \eqref{sun1}. It is helpful to note that \eqref{sun1} can be written as \begin{equation} \frac{\ensuremath{{\widehat R}}}{\epsilon_{R}V^{1/2}} > 1 \ ,\quad \text{where } \epsilon_{R}=\frac{v^{1/6}}{\pi \rho} \ . \label{soc1} \end{equation} \subsection{The Phenomenological Constraints} Thus far, with the exception of subsection 2.2, the main content of this text has been to explore the mathematical constraints required for the theory to be anomaly free with a hidden sector containing a single line bundle $L$ and a single bulk space five-brane. The content of subsection 2.2, however, was more phenomenological. The K\"ahler moduli space constraints were presented so that a specific $SU(4)$ holomorphic vector bundle in the observable sector would be slope-stable and preserve $N=1$ supersymmetry. Furthermore, important phenomenological properties of the resultant effective theory were presented; specifically, that the low-energy gauge group, after turning on both $\mathbb{Z}_{3}$ Wilson lines, is that of the Standard Model augmented by an additional gauge $U(1)_{B-L}$ factor, and that the particle content of the effective theory is precisely that of the MSSM, with three right-handed neutrino chiral multiplets and a single Higgs-Higgs conjugate pair, and no exotic fields. That being said, for the $B-L$ MSSM to be completely realistic there are additional low-energy properties that it must possess.
These are: 1) spontaneous breaking of the gauged $B-L$ symmetry at a sufficiently high scale, 2) spontaneous breaking of electroweak symmetry with the measured values of the $W^{\pm}$ and $Z^0$ masses, 3) a Higgs mass in agreement with its measured value, and 4) sparticle masses exceeding their current experimental lower bounds. In a series of papers \cite{Ovrut:2014rba,Ovrut:2015uea,Deen:2016vyh}, generic soft supersymmetry breaking terms were added to the effective theory, the initial values of their parameters were scattered statistically over various physically interesting regions, and all parameters of the effective theory were run to lower energy using an extensive renormalization group analysis. This analysis showed that there is a wide range of initial conditions satisfying all of the required phenomenological constraints. These physically acceptable initial conditions were referred to as ``viable black points''. Relevant to this paper is the fact that, for two distinct choices of the mass scales of the two $\mathbb{Z}_{3}$ Wilson lines, the four gauge parameters associated with the $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y} \times U(1)_{B-L}$ group were shown to grand unify -- albeit at different mass scales. Let us discuss these two choices in turn. \subsubsection{Split Wilson Lines} The first scenario involved choosing one of the Wilson lines to have a mass scale identical to the $Spin(10)$ breaking scale and fine-tuning the second Wilson line to a somewhat lower scale, chosen so as to give exact gauge coupling unification. The region between the two Wilson line mass scales can exhibit either a ``left-right'' scenario or a Pati--Salam scenario depending on which Wilson line is chosen to be the lighter. We refer the reader to \cite{Ovrut:2012wg} for details. Here, to be specific, we will consider the ``left-right'' split Wilson line scenario.
For a given choice of viable black point, the gauge couplings unify at a specific mass scale $M_{U}$ with a specific value for the unification parameter $\alpha_{u}$. It was shown in \cite{Deen:2016vyh} that there were 53,512 phenomenologically viable black points. The results for $M_{U}$ and the associated gauge parameter $\alpha_{u}$ are plotted statistically over these viable black points in Figures 5 and 6 respectively. The average values for the unification scale and gauge parameter, $\langle M_U\rangle$ and $\langle \alpha_u \rangle$ respectively, are indicated. \begin{figure} \centering \includegraphics[scale=0.9]{thresholdHistogramUnificationScale.png} \caption{A histogram of the unification scale for the 53,512 phenomenologically viable black points in the split Wilson line ``left-right'' unification scheme. The average unification scale is $\langle M_U\rangle=3.15\times10^{16}~\text{GeV}$.} \label{fig:a} \end{figure} \begin{figure} \centering \includegraphics[scale=0.9]{thresholdHistogramAlpha.png} \caption{A histogram of the unified gauge coupling for the 53,512 viable black points in the split Wilson line ``left-right'' unification scheme. The average value of the unified gauge coupling is $\langle \alpha_u\rangle=0.0498=\frac{1}{20.08}$.} \label{fig:b} \end{figure} \indent The results presented in Figures 5 and 6 lead us to postulate two new ``phenomenological'' constraints on our $B-L$ MSSM vacuum. The first constraint, arising from Figure 5, is that \begin{equation} \langle M_{U}\rangle=3.15 \times 10^{16}~\text{GeV} \equiv\frac{1}{\boldsymbol{V}^{1/6}}=\frac{1}{v^{1/6}V^{1/6}} \ . \label{jack1} \end{equation} Hence, given a point in the unity gauge K\"ahler moduli space shown in Figure 4 -- and using \eqref{10} to compute the value of $V$ at that point -- it follows from \eqref{jack1} that one can determine the required value of $v$. Up to this point in the paper, $v$ was unconstrained.
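As a purely illustrative numerical sketch (not part of the derivation), the following evaluates \eqref{jack1} for $v$; the value of the dimensionless volume modulus $V$ used here is hypothetical, since in the text $V$ must be computed from \eqref{10} at a chosen point of Figure 4.

```python
# Illustrative sketch: fixing the Calabi-Yau reference volume v from
# <M_U> = 1/(v^{1/6} V^{1/6}), i.e. v = 1/(<M_U>^6 V), in units of GeV^-6.
M_U = 3.15e16            # GeV, average unification scale <M_U>
V = 30.0                 # dimensionless volume modulus (hypothetical value)

v = 1.0 / (M_U**6 * V)   # reference volume in GeV^-6

# Consistency: v^{1/6} V^{1/6} must reproduce 1/<M_U>
inverse_scale = v**(1.0/6.0) * V**(1.0/6.0)   # in GeV^-1
```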
To elucidate the second constraint, we must present the explicit expression for the $D=4$ effective Lagrangian for the observable and hidden sector gauge field kinetic terms. This was calculated in \cite{Lukas:1997fg,Lukas:1998yy,Lukas:1998tt} and, ignoring gravitation, was found to be \begin{equation} \mathcal{L}=\dots-\frac{1}{16\pi\hat{\alpha}_{GUT}}(\re f_{1} \tr_{E_{8}}F_{1}^{\mu\nu}F_{1\mu\nu}+\re f_{2} \tr_{E_{8}}F_{2}^{\mu\nu}F_{2\mu\nu})+\dots\label{eq:het_lagrangian} \end{equation} where $\hat{\alpha}_{GUT}$ is a parameter given by\footnote{As discussed in \cite{Conrad:1997ww}, the expression for $\hat{\alpha}_{\text{GUT}}$ presented here is two times larger than the result given in \cite{Lukas:1997fg,Lukas:1998yy,Lukas:1998tt}.} \begin{equation} \hat{\alpha}_{GUT}=\frac{\kappa_{11}^{2}}{v}\left( \frac{4\pi}{\kappa_{11}} \right)^{2/3} \ . \label{bag1} \end{equation} For the specific choice of the hidden sector line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ with embedding coefficient $a=1$, the functions $\re f_{1}$ and $\re f_{2}$ in unity gauge are found to be \begin{equation} \re f_{1} = V +\tfrac{1}{3}a^1- \tfrac{1}{6}a^2 +2a^3+\tfrac{1}{2}(\tfrac{1}{2}-\lambda)^2 (9a^1+17a^2) \ , \label{bag2} \end{equation} and \begin{equation} \label{bag2A} \re f_{2}= V-\tfrac{29}{6}a^1-\tfrac{25}{3}a^2-2a^3+\tfrac{1}{2}(\tfrac{1}{2}+\lambda)^2 (9a^1+17a^2) \ , \end{equation} where $\lambda$ is given in \eqref{cup1} and the numerical coefficients are determined by the five-brane class $W_{i}$ of \eqref{red8}. It then follows from Figure 6 and \eqref{eq:het_lagrangian} that \begin{equation} \langle \alpha_{u} \rangle = \frac{1}{20.08}=\frac{\hat{\alpha}_{GUT}}{\re f_{1}} \ . \label{bag3} \end{equation} Hence, given a point in the unity gauge K\"ahler moduli space shown in Figure 4 -- and using \eqref{bag2} to compute the value of $\re f_{1}$ at that point -- it follows from \eqref{bag3} that one can determine the required value of $\hat{\alpha}_{GUT}$, which, up to this point, was unconstrained.
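As an illustrative sketch of this second constraint, the following evaluates $\re f_{1}$ from \eqref{bag2} at a hypothetical moduli point and obtains $\hat{\alpha}_{GUT}$ by inverting \eqref{bag3}; the moduli values $(a^1,a^2,a^3)$ and $V$ are assumptions chosen purely for illustration.

```python
# Illustrative sketch: fixing alpha_hat_GUT from <alpha_u> = alpha_hat_GUT / Re f_1.
# Re f_1 is the unity-gauge expression for L = O_X(2,1,3) with a = 1;
# the moduli point and volume modulus below are hypothetical.
lam = 0.49                       # five-brane location lambda
a1, a2, a3 = 2.0, 3.0, 4.0       # hypothetical Kahler moduli
V = 30.0                         # hypothetical volume modulus

re_f1 = V + a1/3 - a2/6 + 2*a3 + 0.5*(0.5 - lam)**2 * (9*a1 + 17*a2)

alpha_u = 1/20.08                # split Wilson line unification value
alpha_hat_GUT = alpha_u * re_f1  # inverting <alpha_u> = alpha_hat_GUT / Re f_1
```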
Using the relations \begin{equation} \kappa_{4}^{2}= \frac{8\pi}{M_{P}^{2}} =\frac{\kappa_{11}^{2}}{2\pi\rho v} \ , \label{bag4} \end{equation} where $\kappa_{4}$ and $M_{P}=1.221 \times 10^{19}~\text{GeV}$ are the four-dimensional Newton's constant and Planck mass respectively, it follows from \eqref{bag1} that \begin{equation} \rho=\left( \frac{\hat{\alpha}_{GUT}}{16 \pi^{2}} \right)^{3/2}v^{1/2}M_{P}^{2} \ . \label{bag5} \end{equation} Finally, using these relations, the expression for $\epsilon_S'$ in \eqref{40AA} can be rewritten as \begin{equation} \epsilon_S' =\frac{2\pi^{2}\rho^{4/3}}{v^{1/3}M_{P}^{2/3}} \ . \label{soc2} \end{equation} That is, at any given fixed point in the unity gauge K\"ahler moduli space shown in Figure 4, which, by definition, satisfies all of the $B-L$ MSSM ``vacuum constraints'' listed in Sections 3 and 4, as well as the two ``phenomenological'' constraints presented in this subsection, one can determine {\it all} constant parameters of the theory -- that is, $v$, $\hat{\alpha}_{GUT}$, $\rho$, $ \epsilon_S^\prime$ and $\epsilon_{R}$. We again emphasize, as discussed above, that the unity gauge solution space of the vacuum constraints is valid for any arbitrary choice of $ \epsilon_S^\prime$. \subsubsection{Simultaneous Wilson Lines} In the previous subsubsection, we presented the phenomenological constraints for the ``left-right'' split Wilson line scenario. Here, we will again discuss the two phenomenological constraints, but this time in the scenario where the mass scales of the two Wilson lines and the ``unification'' scale are approximately degenerate. Although somewhat less precise than the split Wilson line scenario, this ``simultaneous'' Wilson line scenario is more natural in the sense that less fine-tuning is required. We refer the reader to \cite{Deen:2016vyh} for details.
In this new scenario, we continue to use the previous mass scale $\langle M_{U}\rangle=3.15 \times 10^{16}~\text{GeV}$ as the $SO(10)$ ``unification'' scale -- since its mass is set by the scale of the gauge bundle -- even though when the Wilson lines are approximately degenerate the low-energy gauge couplings no longer unify there. Rather, they are split at that scale by individual ``threshold'' effects. Since the full $B-L$ MSSM low energy theory now exists at $\langle M_{U} \rangle$, we will assume that soft supersymmetry breaking also occurs at that scale. As shown in \cite{Deen:2016vyh}, there are 44,884 valid black points which satisfy all low-energy physical requirements -- including the correct Higgs mass. Rather than presenting statistical plots over the set of all phenomenologically viable black points, as in Figures 5 and 6 for the previous scenario, here we present a single figure showing the running of the inverse $\alpha$ parameters for the $SU(3)_{C}$, $SU(2)_{L}$, $U(1)_{3R}$ and $U(1)^{'}_{B-L}$ gauge couplings. This is presented in Figure 7. % \begin{figure} \centering \includegraphics[scale=0.9]{nonUnification.png} \caption{Running gauge couplings for a sample ``valid black point'' with $M_{SUSY}=2350$ GeV, $M_{B-L}=4670$ GeV and $\sin^{2}\theta_R = 0.6$. In this example, $\alpha_3(\langle M_U \rangle)=0.0377$, $\alpha_2(\langle M_U \rangle )=0.0377$, $\alpha_{3R}(\langle M_U \rangle)=0.0433$, and $\alpha_{BL^\prime}(\langle M_U \rangle)=0.0360$.} \label{fig:c} \end{figure} Note that in the analysis of Figure 7, we use the $U(1)_{3R}$ gauge group instead of $U(1)_{Y}$ and $U(1)^{'}_{B-L}$ instead of $U(1)_{B-L}$ -- a minor redefinition of the charges that simplifies the renormalization group analysis. However, the averages over their gauge thresholds differ only minimally from those in the basis used in this paper. Furthermore, we will augment the results of Figure 7 with a more detailed discussion below which uses our standard basis.
As discussed in the previous paragraph, the first constraint in this new scenario is identical to constraint \eqref{jack1} above. That is, \begin{equation} \langle M_{U}\rangle=3.15 \times 10^{16}~\text{GeV} \equiv\frac{1}{\boldsymbol{V}^{1/6}}=\frac{1}{v^{1/6}V^{1/6}} \ . \label{jack1A} \end{equation} Hence, given a point in the unity gauge K\"ahler moduli space shown in Figure 4 -- and using \eqref{10} to compute the value of $V$ at that point -- it follows from \eqref{jack1A} that one can determine the required value of $v$. To elucidate the second phenomenological constraint in this scenario, however, requires a further analysis. First note from Figure 7, which is computed for a {\it single} initial valid black point, that at $ \langle M_{U} \rangle$ the values of the $\alpha$ parameters for each of the four gauge couplings are given by \begin{gather} \alpha_3(\langle M_U\rangle)=0.0377\ ,\qquad \alpha_2(\langle M_U\rangle)=0.0377\ , \\ \alpha_{3R}(\langle M_U\rangle)=0.0433\ ,\qquad \alpha_{BL^\prime}(\langle M_U\rangle)=0.0360 \ , \label{JFK1} \end{gather} % respectively. Taking the average over these parameters, we find that for that specific valid black point, % \begin{equation} \alpha_{u}^{\rm avg}= \frac{1}{25.87} \ . \label{JFK2} \end{equation} % However, to get a more generic value for the average $\langle \alpha_{u} \rangle$ at the unification scale $\langle M_{U} \rangle$, one can either: 1) repeat the same analysis as in Figure 7, statistically calculating over all 44,884 valid black points and finding the average of the results, or 2) use the following technique, which is unique to a string theory analysis. Since our observable sector comes from an $E_{8}\times E_{8}$ heterotic string theory in ten dimensions, we will use the second analysis for simplicity. 
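As a quick arithmetic check of the averaging step leading to \eqref{JFK2}: a direct average of the four three-decimal couplings quoted in \eqref{JFK1} gives $1/25.86$, which agrees with the quoted $1/25.87$ up to rounding of the underlying (unrounded) couplings.

```python
# Check of the averaging step for the sample valid black point of Figure 7:
# average the four gauge couplings quoted at <M_U> in eq. (JFK1).
alphas = [0.0377, 0.0377, 0.0433, 0.0360]
alpha_avg = sum(alphas) / len(alphas)
inv_alpha_avg = 1.0 / alpha_avg   # about 25.86; quoted as 25.87 in (JFK2)
```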
To do this, we note that, at string tree level, the gauge couplings are expected to grand unify to a single parameter $g_{\rm string}$ at a ``string unification'' scale \begin{equation} M_{\rm string}=g_{\rm string} \times 5.27 \times 10^{17}~\mbox{GeV} \ . \end{equation} The string coupling parameter $g_{\rm string}$ is set by the value of the dilaton, and is typically of ${\cal{O}}(1)$. A common value in the literature, see for example \cite{Dienes:1996du,Bailin:2014nna,Nilles:1998uy}, is $g_{\rm string}= 0.7$, which, for specificity, we will use henceforth. Therefore, we take $\alpha_{\rm string}$ and the string unification scale to be \begin{equation} \alpha_{\rm string}=\frac{g_{\rm string}^{2}}{4\pi} = 0.0389 \ , \qquad M_{\rm string}=3.69 \times 10^{17}~ \mbox{GeV} \ , \label{hani4} \end{equation} respectively. Note that $ M_{\rm string}$ is approximately an order of magnitude larger than $\langle M_{U}\rangle$. Below $M_{\rm string}$, however, the couplings evolve according to the renormalization group equations of the $B-L$ MSSM effective field theory. This adds another scaling regime, $\langle M_{U}\rangle \rightarrow M_{\rm string}$, to those discussed previously. The effective field theory in this regime remains that of the $B-L$ MSSM, with the same renormalization group factors as between the $B-L$ breaking scale and $\langle M_{U}\rangle $. However, the gauge coupling renormalization group equations are now altered to \begin{equation} 4\pi {\alpha_{a}}^{-1}( p)=4\pi \alpha_{\rm string }^{-1}-b_{a}\ln\left(\frac{p^2}{M_{\rm string}^{2}}\right) \ , \label{hani6} \end{equation} where the index $a$ runs over $SU(3), SU(2), 3R, B-L$, the coefficients $b_{a}$ are given in \cite{Ovrut:2015uea} and, for simplicity, we have ignored the ``string threshold'' corrections calculated in \cite{Deen:2016vyh}. Note that the one-loop running gauge couplings do not unify exactly at $\langle M_U\rangle $. Rather, they are ``split'' by dimensionless threshold effects.
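As a check of the numerical inputs in \eqref{hani4}: with $g_{\rm string}=0.7$, direct evaluation gives $\alpha_{\rm string}=0.03899\ldots$ (quoted to three significant figures as $0.0389$) and $M_{\rm string}=3.689\times10^{17}$~GeV.

```python
import math

# Check of the string-scale inputs in eq. (hani4):
# alpha_string = g_string^2 / (4 pi) and M_string = g_string * 5.27e17 GeV.
g_string = 0.7
alpha_string = g_string**2 / (4 * math.pi)   # 0.03899..., quoted as 0.0389
M_string = g_string * 5.27e17                # 3.689e17 GeV, quoted as 3.69e17
```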
Inserting \eqref{hani4} into \eqref{hani6} and taking $p^2=\langle M_U\rangle ^{2}$, one can evaluate the $\alpha_{a}$ parameter for each of the four gauge couplings at the scale $\langle M_U\rangle $. We find that \begin{gather} \alpha_{SU(3)}(\langle M_U\rangle )=0.0430\ ,\qquad\alpha_{SU(2)}(\langle M_U\rangle )=0.0383\ ,\\ \alpha_{3R}(\langle M_U\rangle )=0.0351\ ,\qquad\alpha_{B-L}(\langle M_U\rangle )=0.0356\ , \label{tr1} \end{gather} and, hence, the average ``unification'' parameter at $\langle M_U\rangle $ is given by \begin{equation} \langle \alpha_{u}\rangle =\frac{1}{26.46} \ . \label{tr2} \end{equation} It follows that for the ``simultaneous'' Wilson line scenario, the second phenomenological constraint is altered to become \begin{equation} \langle \alpha _{u}\rangle =\frac{1}{26.46}=\frac{\hat{\alpha}_{GUT}}{\re f_{1}} \ . \label{tr3} \end{equation} Hence, given a point in the unity gauge K\"ahler moduli space shown in Figure 4 -- and using \eqref{bag2} to compute the value of $\re f_{1} $ at that point -- it follows from \eqref{tr3} that one can determine the required value of $\hat{\alpha}_{GUT}$, which, up to this point, was unconstrained. As with the ``left-right'' Wilson line scenario in the previous subsubsection, given the values for $v$ and $\hat{\alpha}_{GUT}$ from \eqref{jack1A} and \eqref{tr3}, one can then compute the parameters $\rho$, $\epsilon_S'$ and $\epsilon_{R}$ using \eqref{bag5}, \eqref{soc2} and \eqref{soc1} respectively. \section{A Solution of All the Constraints} In Section 4, we displayed the solutions to all of the $B-L$ MSSM ``vacuum'' constraints, valid for a hidden sector line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ embedded as in \eqref{red5} with $a=1$ and a single five-brane with its bulk space location fixed to be at $\lambda=0.49$. The intersecting region of K\"ahler moduli space satisfying all of these constraints, computed in unity gauge, is shown in Figure 4. In Section 5, we introduced three {\it additional} constraints.
One of these, which we refer to as the ``reduction'' constraint, is required for the consistency of the dimensional reduction from the eleven-dimensional theory down to five dimensions. The other two, which we call the ``phenomenological'' constraints, are necessary so that the $SO(10)$ grand unified group in the observable sector has the correct unification scale, $\langle M_{U}\rangle $, and the right value of the physical unified gauge parameter, $\langle \alpha_{u}\rangle $, determined from the $B-L$ MSSM via a renormalization group analysis. In this section we want to find the subspace of the solution space presented in Figure 4, which, in addition, is consistent with the reduction constraint and the two phenomenological constraints presented in Section 5. We begin by imposing the physical constraints. We demand that \emph{at every point} in the region of K\"ahler moduli space shown in Figure 4, all parameters of the theory are adjusted so that $\langle M_U \rangle $ and $\langle \alpha_u \rangle $ are fixed at the physical values presented in Section 5 -- that is, the unification scale is always set to $\langle M_U\rangle =3.15\times 10^{16}~\text{GeV}$, with $\langle \alpha_{u}\rangle =1/20.08$ for $SO(10)$ breaking with split Wilson lines and $\langle \alpha_{u}\rangle =1/26.46$ for $SO(10)$ breaking with simultaneous Wilson lines. It follows from the physical constraint equations \eqref{jack1}, \eqref{bag3} and \eqref{tr3} that the values of $v$ and ${\hat{\alpha}}_{GUT}$ -- and, hence, the remaining parameters $\rho$, $\epsilon_S^\prime$ and $\epsilon_R$ -- can always be chosen so as to obtain the required values of $\langle M_U\rangle $ and $\langle \alpha_{u}\rangle $. However, different points in Figure 4 will, in general, require {\it different} values of these parameters to satisfy the physical constraints. In particular, this means that different points will correspond to different values of $\epsilon_S^\prime$.
As discussed at the end of Section 4, this interpretation is completely consistent with the moduli shown in Figure 4 solving all of the ``vacuum'' constraints. To make this explicit, one can invert constraint equations \eqref{jack1}, \eqref{bag3} and \eqref{tr3} so as to express $v$ and ${\hat{\alpha}}_{GUT}$ explicitly as functions of $\langle M_{U}\rangle $, $\langle \alpha_{u}\rangle $ and the K\"ahler moduli. That is, expression \eqref{jack1} can be inverted to give \begin{equation} v=\frac{1}{\langle M_U\rangle ^{6}V} \ , \label{door1} \end{equation} while \eqref{bag3} and \eqref{tr3} give \begin{equation} {\hat{\alpha}}_{GUT}=\langle \alpha_{u} \rangle \re f_{1} \ . \label{door2} \end{equation} As first presented in \eqref{bag2}, the function $\re f_{1}$ is given by \begin{equation} \re f_{1} = V +\tfrac{1}{3}a^1- \tfrac{1}{6}a^2 +2a^3+\tfrac{1}{2}(\tfrac{1}{2}-\lambda)^2(9a^{1}+17a^{2}) \ , \label{door3} \end{equation} where $\lambda=0.49$ and $\langle \alpha_{u} \rangle =\{\frac{1}{20.08}, \frac{1}{26.46}\}$ for split and simultaneous Wilson lines respectively. Inserting these expressions into \eqref{bag5}, \eqref{soc2} and \eqref{soc1}, one obtains the following expressions for $\rho$, $\epsilon_S^\prime$ and $\epsilon_R$ respectively. We find that \begin{equation} \rho= \left(\frac{\langle \alpha_{u} \rangle}{16\pi^2}\right)^{3/2}\frac{M_P^2}{\langle M_{U}\rangle ^3} \frac{ ( \re f_{1} )^{3/2}}{V^{1/2}} \label{seb6} \end{equation} and \begin{equation} \epsilon_S' =\frac{\langle \alpha_{u} \rangle^2}{128\pi^2}\frac{M_P^2}{\langle M_U\rangle ^2}\frac{(\re f_{1})^2}{V^{1/3}}\ ,\qquad \epsilon_{R}=\frac{64\pi^{2}}{\langle \alpha_{u}\rangle ^{3/2} }\frac{\langle M_{U}\rangle^{2}}{M_{P}^{2}}\frac{V^{1/3}}{(\re f_{1})^{3/2}} \ . \label{seb7} \end{equation} Using these expressions, the parameters at any fixed point of the moduli space in Figure 4 can be calculated. Again, we note that these parameters change from point to point in Figure 4.
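As an internal consistency check of these inversions (not part of the derivation), the following sketch evaluates \eqref{door1}, \eqref{seb6} and \eqref{seb7} at a hypothetical unity-gauge moduli point and verifies that the \eqref{seb7} expressions reproduce the defining relations $\epsilon_S'=2\pi^{2}\rho^{4/3}/(v^{1/3}M_{P}^{2/3})$ of \eqref{soc2} and $\epsilon_{R}=v^{1/6}/(\pi\rho)$ of \eqref{soc1}; the moduli values are illustrative assumptions.

```python
import math

# Evaluate (door1), (seb6), (seb7) at a hypothetical moduli point and
# cross-check (seb7) against eps_S' = 2 pi^2 rho^{4/3}/(v^{1/3} M_P^{2/3})
# and eps_R = v^{1/6}/(pi rho).
M_U = 3.15e16                  # GeV, <M_U>
M_P = 1.221e19                 # GeV, Planck mass
alpha_u = 1/20.08              # split Wilson line scenario
lam = 0.49
a1, a2, a3, V = 2.0, 3.0, 4.0, 30.0   # hypothetical point and volume modulus

re_f1 = V + a1/3 - a2/6 + 2*a3 + 0.5*(0.5 - lam)**2 * (9*a1 + 17*a2)

v = 1.0 / (M_U**6 * V)                                                      # (door1)
rho = (alpha_u/(16*math.pi**2))**1.5 * M_P**2/M_U**3 * re_f1**1.5/V**0.5    # (seb6)
eps_S = alpha_u**2/(128*math.pi**2) * M_P**2/M_U**2 * re_f1**2/V**(1/3)     # (seb7)
eps_R = 64*math.pi**2/alpha_u**1.5 * M_U**2/M_P**2 * V**(1/3)/re_f1**1.5    # (seb7)

# Cross-checks against the defining relations (soc2) and (soc1):
eps_S_direct = 2*math.pi**2 * rho**(4/3) / (v**(1/3) * M_P**(2/3))
eps_R_direct = v**(1/6) / (math.pi * rho)
```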
Next, we impose the dimensional reduction constraint. We require that \eqref{sun1} be valid; that is, the length of the $S^{1}/{\mathbb{Z}}_{2}$ orbifold interval should be larger than the average Calabi--Yau radius % \begin{equation} \frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6} } > 1 \ . \label{seb3} \end{equation} Choosing any point $a^{i}$ in the K\"ahler moduli space of Figure 4, one can use \eqref{door1}, \eqref{door2}, \eqref{seb6} and \eqref{seb7} to determine the parameters $v$, ${\hat{\alpha}}_{GUT}$, $\rho$ and $\epsilon'_{S}$ that satisfy the phenomenological constraints at that point. Also note, since we are working in unity gauge \eqref{wall2}, that \begin{equation} \ensuremath{{\widehat R}}=\frac{V^{1/3}}{\epsilon'_{S}} \ . \label{exam1} \end{equation} Now, at that chosen point in Figure 4, insert the calculated values of $v$, ${\hat{\alpha}}_{GUT}$, $\rho$ and $\epsilon'_{S}$, as well as the value of $\ensuremath{{\widehat R}}$ computed from \eqref{exam1}, into the inequality \eqref{seb3} for the ratio of the two length scales. Scanning over all points in Figure 4, we will be able to find the subspace of that region of K\"ahler moduli space in which condition \eqref{seb3} is satisfied. That is, at any such point, not only are all the ``vacuum'' constraints satisfied, but the ``reduction'' and ``phenomenological'' constraints are as well. There will, of course, be two such regions -- one corresponding to the ``split'' Wilson line scenario and a second corresponding to the ``simultaneous'' Wilson line scenario. These regions are shown as the brown subspaces of Figure 8 (a) and (b) respectively.
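As a small consistency sketch for this scan, note that the ratio in \eqref{seb3} may equivalently be evaluated in the form $\ensuremath{{\widehat R}}/(\epsilon_{R}V^{1/2})$ of \eqref{soc1}, with $\epsilon_{R}=v^{1/6}/(\pi\rho)$; the input values below are hypothetical placeholders, not outputs of the actual scan.

```python
import math

# The reduction-constraint ratio (seb3) computed two equivalent ways:
# directly, and via the form R_hat/(eps_R V^{1/2}) of (soc1).
v = 3.4e-101      # GeV^-6 (hypothetical)
rho = 1.2e-15     # GeV^-1 (hypothetical)
V = 30.0          # dimensionless volume modulus (hypothetical)
R_hat = 5.0       # orbifold interval modulus (hypothetical)

ratio_direct = math.pi * rho * R_hat * V**(-1/3) / (v * V)**(1/6)
eps_R = v**(1/6) / (math.pi * rho)
ratio_soc1 = R_hat / (eps_R * V**0.5)
```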
\begin{figure}[t] \centering \begin{subfigure}[c]{0.47\textwidth} \includegraphics[width=1.0\textwidth]{PhysConstraint1.pdf} \caption{$\langle \alpha_{u}\rangle =\frac{1}{20.08}$} \end{subfigure} \begin{subfigure}[c]{0.47\textwidth} \includegraphics[width=1.0\textwidth]{PhysConstraint2.pdf} \caption{$\langle \alpha_{u}\rangle =\frac{1}{26.46}$} \end{subfigure} \caption{The region of K\"ahler moduli space where the $SU(4)$ slope-stability conditions, the anomaly cancellation constraint and the positive squared gauge coupling constraint from Figure 4 are satisfied, in addition to the dimensional reduction and the phenomenological constraints introduced in Section 5. The results are valid for a hidden sector line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ with $a=1$ and for a single five-brane located at $\lambda=0.49$. We study both cases of split Wilson lines, with $\langle \alpha_{u}\rangle =\frac{1}{20.08}$, and simultaneous Wilson lines with $\langle \alpha_{u}\rangle =\frac{1}{26.46}$. Note that reducing the size of $\langle \alpha_u\rangle $ increases the space of solutions.} \label{fig:PhysContraint} \end{figure} One can go further and, by scanning over the brown subspace associated with each Wilson line scenario, find the numerical range of the ratio $\frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6}}$ in each case. We find that \begin{equation}\ 1 \lesssim \frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6}} \lesssim 17.4 \label{er1} \end{equation} for the split Wilson line scenario and \begin{equation}\ 1 \lesssim \frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6}} \lesssim 19.8 \label{er2} \end{equation} for the simultaneous Wilson lines. Finally, but importantly, we want to emphasize that all of the results of this and preceding sections have, thus far, been calculated in ``unity'' gauge; that is, choosing \begin{equation} \frac{\epsilon'_{S}{\ensuremath{{\widehat R}}}}{V^{1/3}}=1 \ . 
\label{er33} \end{equation} This was possible because, as discussed in Section 4, all ``vacuum'' constraints remained form invariant under the scaling \begin{equation} a^{i} \rightarrow \mu a^{i}\ , \qquad\epsilon'_{S} \ensuremath{{\widehat R}} \rightarrow \mu^{3} \epsilon'_{S}\ensuremath{{\widehat R}} \ , \label{er4} \end{equation} where $\mu > 0$. However, it follows from this that if any point \{$a^{i}$\} in Figure 4 satisfies all vacuum constraints, so does any point \{$\mu a^{i}$\}. Do any of these ``scaled'' moduli carry new information concerning both the reduction and the physical constraints? The answer to this is no, as we will now demonstrate. Let us pick any point \{$a^{i}$\} in Figure 4 and assume that the values of the parameters at that point are given by $v$, ${\hat{\alpha}}_{GUT}$, $\rho$ and $\epsilon'_{S}$ obtained from expressions \eqref{door1}, \eqref{door2}, \eqref{seb6} and \eqref{seb7} respectively evaluated at this point. We now want to determine how each of these parameters changes under the $\mu$ scaling given in \eqref{er4}. To do this, one can again use the same equations \eqref{door1}, \eqref{door2}, \eqref{seb6} and \eqref{seb7}, but now scaling the original point as in \eqref{er4}. This requires knowing the scaling behavior of both $V$ and $\re f_{1}$. It follows from \eqref{10} that $V \rightarrow \mu^{3} V$. However, to obtain the scaling behavior of $\re f_{1}$, one must go back to \eqref{69AA} and recall that the terms in $\re f_{1}$ linear in the K\"ahler moduli are, generically, multiplied by the factor $\epsilon'_{S} \ensuremath{{\widehat R}}/ V^{1/3}$. Since this is set to 1 in unity gauge, it does not appear in expression \eqref{door3}. Hence, it follows from \eqref{er4} that under $\mu$ scaling $\re f_{1} \rightarrow \mu^{3} \re f_{1}$.
Using these results, we find that \begin{gather} v \rightarrow \mu^{-3}v\ ,\quad{\hat{\alpha}}_{GUT} \rightarrow \mu^{3}{\hat{\alpha}}_{GUT}\ ,\quad\rho \rightarrow \mu^{3} \rho\ ,\\ \epsilon'_{S} \rightarrow \mu^{5}\epsilon'_{S}\ ,\quad\epsilon_{R} \rightarrow \mu^{-7/2} \epsilon_{R} \ . \label{sf1} \end{gather} Note that, until now, we knew that scaling invariance required $\epsilon'_{S}\ensuremath{{\widehat R}} \rightarrow \mu^{3} \epsilon'_{S}\ensuremath{{\widehat R}}$, but could not specify the scaling of the $\epsilon'_{S}$ parameter and the modulus $\ensuremath{{\widehat R}}$ individually. However, combining the scaling of $\epsilon'_{S}$ in \eqref{sf1} with \eqref{er4}, it follows that \begin{equation} \ensuremath{{\widehat R}} \rightarrow \mu^{-2} \ensuremath{{\widehat R}} \ . \label{sf2} \end{equation} It is now straightforward to insert these results into the expression for the ratio of the orbifold interval length to the average Calabi--Yau radius. We find that the scalings of the individual parameters and moduli {\it exactly cancel}. That is, under the scaling given in \eqref{er4} and \eqref{sf1} \begin{equation} \frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6} } \rightarrow \frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6} } \ . \label{sf3} \end{equation} We conclude from this that the $\mu$-scaled point $\{\mu a^{i}\}$ of any point $\{a^{i}\}$ in the brown regions of Figure 8 (a) and (b) continues to satisfy all the ``vacuum'' and ``phenomenological'' constraints and has an identical value for the ratio of the orbifold interval length to the average Calabi--Yau radius. For this reason, we find it sufficient to display the final results as the brown regions in Figure 8 (a) and (b) only.
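The scaling exponents in \eqref{sf1} can be verified numerically: evaluating \eqref{door1}, \eqref{seb6} and \eqref{seb7} -- with $\hat{\alpha}_{GUT}$ obtained by inverting \eqref{bag3} -- at a point $(V,\re f_{1})$ and at its scaled image $(\mu^{3}V,\mu^{3}\re f_{1})$ reproduces the exponents $(-3,3,3,5,-7/2)$. The input values below are hypothetical.

```python
import math

# Numerical check of the scaling behavior (sf1): under a^i -> mu a^i one has
# V -> mu^3 V and Re f_1 -> mu^3 Re f_1, so the derived parameters should
# scale as v -> mu^-3 v, alpha_hat -> mu^3 alpha_hat, rho -> mu^3 rho,
# eps_S' -> mu^5 eps_S', eps_R -> mu^{-7/2} eps_R.
M_U, M_P, alpha_u = 3.15e16, 1.221e19, 1/20.08   # GeV scales and <alpha_u>

def params(V, f):
    v = 1/(M_U**6 * V)
    alpha_hat = alpha_u * f
    rho = (alpha_u/(16*math.pi**2))**1.5 * M_P**2/M_U**3 * f**1.5/V**0.5
    eps_S = alpha_u**2/(128*math.pi**2) * M_P**2/M_U**2 * f**2/V**(1/3)
    eps_R = 64*math.pi**2/alpha_u**1.5 * M_U**2/M_P**2 * V**(1/3)/f**1.5
    return v, alpha_hat, rho, eps_S, eps_R

mu = 1.7                                  # arbitrary scaling parameter
base = params(30.0, 38.0)                 # hypothetical (V, Re f_1)
scaled = params(mu**3 * 30.0, mu**3 * 38.0)
exponents = [math.log(s/b, mu) for s, b in zip(scaled, base)]
# exponents come out close to [-3, 3, 3, 5, -3.5]
```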
\section{Slope-Stability and Supersymmetry} Thus far, we have found the region of K\"ahler moduli space in which the $SU(4)$ bundle is slope-stable with vanishing slope, the five-brane class is effective, the squares of both gauge couplings are positive, the length of the orbifold is larger than the characteristic length scale of the Calabi--Yau threefold and the vacuum is consistent with both the mass scale and gauge coupling of $SO(10)$ grand unification in the observable sector. Importantly, however, we still must satisfy two remaining conditions. First, it is necessary that the gauge connection associated with a hidden sector line bundle on the Calabi--Yau threefold be a solution of the Hermitian Yang--Mills (HYM) equations~\cite{UY,Donaldson} and, second, that the line bundle be such that the low-energy effective theory admits an $N=1$ supersymmetric vacuum. We will now analyze both of these remaining constraints. First, for specificity, we restrict the analysis to the particular line bundle discussed above; that is, $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ embedded into $SU(2) \subset E_{8}$ as in \eqref{red4} with coefficient $a=1$. In addition, we choose $\lambda=0.49$ as in \eqref{cup1}. Following that, however, we will present a discussion of these constraints for a ``generic'' line bundle with the same embedding \eqref{red2} into $SU(2) \subset E_{8}$ and $\lambda=0.49$. To carry out these analyses, it is first necessary to introduce the Fayet--Iliopoulos term associated with the hidden sector $U(1)$ gauge group and to discuss the $\kappa_{11}^{2/3}$ correction to both the Fayet--Iliopoulos term and the slope.
\subsection{A Fayet--Iliopoulos Term and the \texorpdfstring{$\kappa_{11}^{2/3}$}{kappa 2/3} Slope Correction} It follows from \eqref{59} that the Fayet--Iliopoulos term associated with a generic single line bundle $L=\ensuremath{{\cal O}}_X(l^{1}, l^{2}, l^{3})$ and a single five-brane located at $\lambda\in[-1/2,1/2]$ is given in ``unity'' gauge by \begin{equation} \label{again2A} FI= \frac{a}{2} \frac{ \epsilon_S \epsilon_R^2}{\kappa_{4}^{2}} \frac{1} {\ensuremath{{\widehat R}} V^{2/3}} \bigl(d_{ijk} l^i a^j a^k - a\,d_{ijk}l^il^jl^k - l^i(2,2,0)|_i +(\tfrac{1}{2}+\lambda)^2l^iW_i \bigr) \ , \end{equation} with the volume modulus $V$ and $W_{i}$ presented in \eqref{10} and \eqref{33A} respectively and $\ensuremath{{\widehat R}}$ defined in \eqref{case1}. Note that the coefficient $a$ defined in \eqref{26A} enters this expression both explicitly and via the five-brane class $ {W}_i$. It is important to note -- using \eqref{3}, \eqref{50} and \eqref{23} -- that the ``classical'' slope of the line bundle $L=\ensuremath{{\cal O}}_X(l^{1}, l^{2}, l^{3})$ is given by\footnote{Note that this is not the same as the scaling factor $\mu$ of the previous section. From here onwards, $\mu$ will denote the slope.} \begin{equation} \mu(L)=d_{ijk} l^ia^j a^k \ , \label{river1} \end{equation} that is, the first term in the bracket of \eqref{again2A}. It follows that the remaining terms in the bracket, specifically \begin{equation} - a\,d_{ijk}l^il^jl^k - l^i(2,2,0)|_i+(\tfrac{1}{2}+\lambda)^2l^iW_i \ , \label{river2} \end{equation} are the strong coupling $\kappa_{11}^{4/3}$ corrections to the slope of $L$. For the remainder of this paper, we will take the slope of the line bundle $L=\ensuremath{{\cal O}}_X(l^{1}, l^{2}, l^{3})$ to be the $\kappa_{11}^{4/3}$, genus-one corrected expression \begin{equation} \mu(L)=d_{ijk} l^ia^j a^k- a\,d_{ijk}l^il^jl^k - l^i(2,2,0)|_i+(\tfrac{1}{2}+\lambda)^2l^iW_i \ . 
\label{trenton1} \end{equation} \subsection{Slope-Stability of the Hidden Sector Bundle \texorpdfstring{$L=\ensuremath{{\cal O}}_X(2, 1, 3)$}{L = O(2,1,3)} } Although any line bundle $L$ is automatically slope-stable, since it has no sub-bundles, in order for its gauge connection to ``embed'' into the hidden sector $E_{8}$ gauge connection it is necessary to extend the bundle to $L \oplus L^{-1}$, as discussed in subsection 4.2. However, even though the connection associated with the bundle $L \oplus L^{-1}$ can, in principle, embed properly into the $\bf 248$ gauge connection of the hidden sector $E_{8}$, it remains necessary to show that $L \oplus L^{-1}$ is ``slope-stable''; that is, that its associated connection satisfies the Hermitian Yang--Mills equations. More properly stated, since $L \oplus L^{-1}$ is the Whitney sum of two line bundles, it was shown in \cite{UY, Donaldson} that it will admit a unique connection satisfying the Hermitian Yang--Mills equations if and only if it is ``polystable''; that is, if and only if \begin{equation} \mu(L)=\mu(L^{-1})=\mu(L \oplus L^{-1}) \ . \label{poly1} \end{equation} Since $\mu(L \oplus L^{-1})$ must vanish by construction, it follows that $L \oplus L^{-1}$ is polystable if and only if $\mu(L)=0$. Let us now consider the specific line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ embedded into $SU(2) \subset E_{8}$ as in \eqref{red4} with coefficient $a=1$, and take $\lambda=0.49$. It follows from \eqref{again2A} that in this case \begin{equation} FI= \frac{ \epsilon_S \epsilon_R^2}{2\kappa_{4}^{2}} \frac{1} {\ensuremath{{\widehat R}} V^{2/3}} \left( \tfrac{1}{3}(a^1)^{2}+\tfrac{2}{3}(a^2)^{2} +8a^1a^2+4a^2a^3 +2a^1a^3 -13.35 \right) \label{trenton2} \end{equation} and from \eqref{trenton1} that the associated genus-one corrected slope is \begin{equation} \mu(L)= \tfrac{1}{3}(a^1)^{2}+\tfrac{2}{3}(a^2)^{2} +8a^1a^2+4a^2a^3 +2a^1a^3 -13.35 \ .
\label{trenton3} \end{equation} Hence, this specific hidden sector bundle will be slope polystable -- and, therefore, admit a gauge connection satisfying the corrected Hermitian Yang--Mills equations -- if and only if the K\"ahler moduli $a^{i}, i=1,2,3$ satisfy the condition that \begin{equation} \tfrac{1}{3}(a^1)^{2}+\tfrac{2}{3}(a^2)^{2} +8a^1a^2+4a^2a^3 +2a^1a^3 -13.35 = 0 \ . \label{trenton4} \end{equation} The region of K\"ahler moduli space satisfying this condition is the two-dimensional surface displayed in Figure 9. \begin{figure}[t] \centering \begin{subfigure}[c]{0.6\textwidth} \caption*{} \end{subfigure}\\ \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.0\textwidth]{ZeroSlope.pdf} \end{subfigure} \caption{The surface in K\"ahler moduli space where the genus-one corrected slope of the hidden sector line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ vanishes.} \label{fig:ZeroSlope} \end{figure} Recall that for the hidden sector line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ with $a=1$ and for a single five-brane located at $\lambda=0.49$, the region of K\"ahler moduli space satisfying all {\it previous} constraints -- that is, the $SU(4)$ slope-stability conditions, the anomaly cancellation constraint, the positive squared gauge coupling constraints, in addition to the dimensional reduction and the phenomenological constraints -- is shown as the brown regions in Figure 8 (a) and (b), for the split Wilson lines and the simultaneous Wilson line scenarios respectively. It follows that the intersection of the brown regions of Figure 8 (a) and (b) with the two-dimensional surface in Figure 9 will further constrain our theory so that the hidden sector gauge connection satisfies the corrected Hermitian Yang--Mills equations -- as it must. The regions of intersection are displayed graphically in Figure 10.
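For readers who wish to reproduce the zero-slope surface of Figure 9, the condition \eqref{trenton4} can be checked numerically. The following sketch is our own illustration: the polynomial is taken verbatim from \eqref{trenton3}, while the explicit solver for $a^3$ (possible because the slope is linear in $a^3$) is a convenience we introduce here, not a construction from the text.

```python
# Illustrative sketch (not from the paper's code): the genus-one corrected
# slope of L = O_X(2,1,3) with a = 1 and lambda = 0.49, as a function of the
# Kahler moduli (a1, a2, a3), copied term by term from Eq. (trenton3).
def slope(a1, a2, a3):
    """Genus-one corrected slope mu(L) for L = O_X(2, 1, 3)."""
    return (a1**2) / 3 + 2 * (a2**2) / 3 + 8 * a1 * a2 \
        + 4 * a2 * a3 + 2 * a1 * a3 - 13.35

def a3_on_zero_slope_surface(a1, a2):
    """Solve mu(L) = 0 for a3 at fixed (a1, a2); mu is linear in a3."""
    return (13.35 - (a1**2) / 3 - 2 * (a2**2) / 3 - 8 * a1 * a2) \
        / (4 * a2 + 2 * a1)

a1, a2 = 0.5, 0.5
a3 = a3_on_zero_slope_surface(a1, a2)
print(a3, slope(a1, a2, a3))  # slope vanishes by construction (up to round-off)
```

Scanning such points against the brown regions of Figure 8 is, schematically, how the magenta intersection region of Figure 10 is obtained.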
We emphasize that although the brown regions of Figure 8 (a) and (b) overlap in this region of K\"ahler moduli space, each point in their overlap region has a somewhat different set of parameters associated with it. Hence, in discussing a point in the magenta region of Figure 10, for example, it is necessary to state whether it is arising from the split Wilson line or simultaneous Wilson line scenario. \begin{figure}[t] \centering \begin{subfigure}[c]{0.6\textwidth} \caption*{} \end{subfigure}\\ \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.0\textwidth]{Intersection1.pdf} \end{subfigure} \caption{ The magenta region shows the intersection between the brown regions of Figure 8 (a) and (b) and the two-dimensional cyan surface in Figure 9. Therefore, the magenta region represents the sub-region of the vanishing, genus-one corrected slope surface, each point of which satisfies all the necessary constraints discussed in Section 6. The size of the magenta region is the same for both the split and simultaneous Wilson lines scenarios. However, the values of the coupling parameters differ slightly for these two cases, at any point in this intersection subspace. } \label{fig:Intersection} \end{figure} As we did previously for the brown regions presented in Figure 8 (a) and (b), it is of interest to scan over the magenta subspace of Figure 10 to find the numerical range of the ratio $\frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6}}$. We find that \begin{equation}\ 6.4 \lesssim \frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6}} \lesssim 12.9 \label{er1A} \end{equation} for the split Wilson line scenario and \begin{equation}\ 7.3 \lesssim \frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6}} \lesssim 14.7 \label{er2A} \end{equation} for simultaneous Wilson lines. 
In fact, one can go further and present a histogram of the percentage versus the ratio $\frac{\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}{(vV)^{1/6}}$ for each scenario. These histograms are shown in Figure 11 (a) and (b) for the split Wilson line and simultaneous Wilson line scenarios respectively. \begin{figure}[t] \centering \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.0\textwidth]{Hist1.pdf} \caption{Split Wilson lines: $\langle \alpha_{u}\rangle =\frac{1}{20.08}$.} \end{subfigure} \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.0\textwidth]{Hist2.pdf} \caption{Simultaneous Wilson lines: $\langle \alpha_{u}\rangle =\frac{1}{26.46}$.} \end{subfigure} \caption{Plots of the percentage of occurrence versus the ratio of the orbifold interval length to the average Calabi--Yau radius; for the split Wilson line scenario in (a) and for the simultaneous Wilson line scenario in (b). The results shown in (a) and (b) represent a scan over the magenta region of K\"ahler moduli space displayed in Figure 10, where all vacuum, reduction and physical constraints are satisfied and the line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ is slope polystable.} \label{fig:2Dplots} \end{figure} Before proceeding to the discussion of $N=1$ supersymmetry in the $D=4$ effective theory, it will be useful to present the formalism for computing the low energy matter spectrum associated with a given hidden sector line bundle. We do this in the next subsection, displaying the formalism and low energy spectrum within the context of the line bundle $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ for specificity. 
\subsection{The Matter Spectrum of the \texorpdfstring{$L=\ensuremath{{\cal O}}_X(2, 1, 3)$}{L = O(2,1,3)}, \texorpdfstring{$D=4$}{D = 4} Effective Theory} Having found the explicit sub-region of K\"ahler moduli space that satisfies all required constraints for the $L=\ensuremath{{\cal O}}_X(2, 1, 3)$ line bundle, in this subsection we will discuss the computation of the vector and chiral matter content of the $D=4$ low-energy theory of the hidden sector. Generically, the low-energy matter content depends on the precise hidden sector line bundle under consideration, as well as its embedding into $E_{8}$. In this subsection, we again choose the line bundle to be $L=\mathcal{O}_{X}(2,1,3)$, embedded into $E_{8}$ as in \eqref{red5} with $a=1$. However, the formalism presented is applicable to any line bundle with any embedding into $E_{8}$. The commutant of the $U(1)$ structure group of our specific embedding is $\Uni1 \times E_{7}$. As discussed in Section 4, the $\Rep{248}$ decomposes under $U(1) \times\Ex 7$ as \begin{equation} \Rep{248} \to (0, \Rep{133}) \oplus \bigl( (1, \Rep{56}) \oplus (-1, \Rep{56})\bigr) \oplus \bigl( (2, \Rep{1}) \oplus (0, \Rep{1}) \oplus (-2, \Rep{1}) \bigr)\ . \label{red55} \end{equation} The $(0,\Rep{133})$ corresponds to the adjoint representation of $\Ex 7$, while the $(\pm1,\Rep{56})$ give rise to chiral matter superfields with $\pm1$ $U(1)$ charges transforming in the $\Rep{56}$ representation of $\Ex 7$ in four dimensions. The $(\pm2,\Rep 1)$ are $E_7$ singlet chiral superfields with charges $\pm 2$ under $U(1)$. Finally, the $(0,\Rep{1})$ gives the one-dimensional adjoint representation of the $\Uni 1$ gauge group. The embedding of the line bundle is such that fields with $U(1)$ charge $-1$ are counted by $H^{*}(X,L)$, charge $-2$ fields are counted by $H^{*}(X,L^{2})$ and so on.\footnote{This is due to the form of the gauge transformation of the matter fields specified in \eqref{eq:delta_C}.
This was chosen so as to agree with \cite{Wess:1992cp,Anderson:2009nt}.} The low-energy massless spectrum can be determined by examining the chiral fermionic zero-modes of the Dirac operators for the various representations in the decomposition of the $\Rep{248}$. Generically, the Euler characteristic $\chi(\mathcal{F})$ counts $n_{R}-n_{L}$, where $n_{R}$ and $n_{L}$ are the number of right- and left-chiral zero-modes respectively transforming under the representation associated with the bundle $\mathcal{F}$. With the notable exception of $\mathcal{F}=\mathcal{O}_{X}$, which is discussed below, paired right-chiral and left-chiral zero-modes are assumed to form a massive Dirac fermion and are integrated out of the low-energy theory. Therefore, it is precisely the difference between the number of right-chiral and left-chiral fermions, counted by the Euler characteristic $\chi$, that gives the massless zero-modes of the $D=4$ theory. On a Calabi--Yau threefold $X$, $\chi(\mathcal{F})$ can be computed by the Atiyah--Singer index theorem as \begin{equation} \chi(\mathcal{F})=\sum_{i=0}^{3}(-1)^{i}h^{i}(X,\mathcal{F})=\int_{X}\op{ch}(\mathcal{F})\wedge\op{Td}(X)\ , \end{equation} where $h^{i}$ are the dimensions of the $i$-th cohomology group, $\op{ch}(\mathcal{F})$ is the Chern character of $\mathcal{F}$, and $\op{Td}(X)$ is the Todd class of the tangent bundle of $X$. When $\mathcal{F}=L=\mathcal{O}_{X}(l^{1},l^{2},l^{3})$ is a line bundle, this simplifies to \begin{equation}\label{eq:chi} \chi(L)=\tfrac{1}{3}(l^{1}+l^{2})+\tfrac{1}{6}d_{ijk}l^{i}l^{j}l^{k}\ . \end{equation} Unlike the case of an $\SU N$ bundle, when $L$ is a line bundle with non-vanishing first Chern class, $\chi$ can receive contributions from all \emph{four} $h^i$, $i=0,1,2,3$.
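As a quick cross-check of \eqref{eq:chi}: although the intersection numbers $d_{ijk}$ are not quoted explicitly here, the cubic term is homogeneous of degree three in $l^{i}$, so the index of every power $L^{k}$ follows from the index of $L$ alone. The sketch below infers $d_{ijk}l^{i}l^{j}l^{k}=42$ for $l=(2,1,3)$ from $\chi(L)=8$ (an inference for illustration, not a value quoted in the text) and recovers $\chi(L^{\pm1})=\pm8$ and $\chi(L^{\pm2})=\pm58$.

```python
# Hedged consistency check of Eq. (eq:chi), assuming only the cubic
# homogeneity of d_ijk l^i l^j l^k.  D = 42 is inferred from chi(L) = 8
# for l = (2, 1, 3); it is not a quoted intersection number.
from fractions import Fraction

l1, l2 = 2, 1          # first two entries of L = O_X(2, 1, 3)
D = Fraction(42)       # inferred: chi(L) = (l1 + l2)/3 + D/6 = 8  =>  D = 42

def chi(k):
    """Index of L^k: the linear term scales like k, the cubic term like k^3."""
    return Fraction(k * (l1 + l2), 3) + k**3 * D / 6

print([int(chi(k)) for k in (1, -1, 2, -2)])  # -> [8, -8, 58, -58]
```

These are exactly the index values listed in Table \ref{tab:chiral_spectrum}.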
For example, $h^1(X,L)+h^3(X,L)$ then counts the number of (left-handed) chiral multiplets while $h^0(X,L)+h^2(X,L)$ counts (right-handed) anti-chiral multiplets, both transforming in the $(-1,\Rep{56})$ representation. Note that the multiplets counted by $h^0(X,L)+h^2(X,L)$ are simply the CPT conjugate partners of those already counted by $h^1(X,L^{-1})+h^3(X,L^{-1})$. Since it is conventional to give a supersymmetric matter spectrum in terms of (left-handed) chiral supermultiplets, it is sufficient to compute $h^1+h^3$ for the various bundles under consideration. Using \eqref{eq:chi}, it is straightforward to compute the value of $\chi$ for the powers of $L$ associated with the decomposition \eqref{red55}. These are presented in Table \ref{tab:chiral_spectrum}. Having done this, let us discuss the spectrum in more detail.\footnote{See \cite{Braun:2005ux}, for example, for a similar discussion of the hidden-sector spectrum for an $\SU 2$ bundle.} \begin{table} \noindent \begin{centering} \begin{tabular}{rrr} \toprule $U(1) \times \Ex 7$ & Cohomology & Index $\chi$\tabularnewline \midrule \midrule $(0,\Rep{133})$ & $H^{*}(X,\mathcal{O}_{X})$ & $0$\tabularnewline \midrule $(0,\Rep 1)$ & $H^{*}(X,\mathcal{O}_{X})$ & $0$\tabularnewline \midrule $(-1,\Rep{56})$ & $H^{*}(X,L)$ & $8$\tabularnewline \midrule $(1,\Rep{56})$ & $H^{*}(X,L^{-1})$ & $-8$\tabularnewline \midrule $(-2,\Rep 1)$ & $H^{*}(X,L^{2})$ & $58$\tabularnewline \midrule $(2,\Rep 1)$ & $H^{*}(X,L^{-2})$ & $-58$\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{The chiral spectrum for the hidden sector $\protect\Uni 1\times\protect\Ex 7$ with a single line bundle $L=\mathcal{O}_{X}(2,1,3)$. The Euler characteristic (or index) $\chi$ gives the difference between the number of right- and left-chiral fermionic zero-modes transforming in the given representation. 
We denote the line bundle dual to $L$ by $L^{-1}$ and the trivial bundle $L^{0}$ by $\mathcal{O}_{X}$.\label{tab:chiral_spectrum}} \end{table} \begin{itemize} \item The index of the bundle $\mathcal{O}_X$ associated with the $(0,\Rep{133})$ and $(0,\Rep{1})$ representations vanishes, so the corresponding fermionic zero-modes must be \emph{non-chiral}. As discussed in \cite{Braun:2013wr}, since the trivial bundle $\mathcal{O}_{X}$ has $h^{0}(X,\mathcal{O}_{X})=h^{3}(X,\mathcal{O}_{X})=1$ and zero otherwise, there is a single right-chiral fermionic zero-mode (counted by $h^{0}$) and a single left-chiral fermionic zero-mode (counted by $h^{3}$), which combine to give the conjugate gauginos in a massless vector supermultiplet. In other words, the low-energy theory has one vector supermultiplet transforming in the $(0,\Rep{133})$ adjoint representation of $E_{7}$ and one vector supermultiplet in the $(0,\Rep{1})$ adjoint representation of $\Uni 1$. \item The $(1, \Rep{56})$ multiplets are counted by $H^{*}(X,L^{-1})$. Since $\chi(L^{-1})=-8$, there are 8 unpaired left-chiral fermionic zero-modes that contribute to 8 chiral matter supermultiplets transforming in the $(1,\Rep{56})$ of $U(1) \times E_{7}$. \item Similarly, the $(-1, \Rep{56})$ multiplets are counted by $H^{*}(X,L)$. Since $\chi(L)=8$, there are 8 unpaired right-chiral fermionic zero-modes that contribute to 8 anti-chiral matter supermultiplets transforming in the $(-1,\Rep{56})$ of $U(1) \times E_{7}$. However, these do not give extra fields in the spectrum: they are (right-handed) anti-chiral $(-1, \Rep{56})$ supermultiplets which are simply the CPT conjugate partners of the 8 chiral $(1, \Rep{56})$ supermultiplets already counted above~\cite{Green:1987mn}. \item Since $\chi(L^{-2})=-58$, there are 58 unpaired left-chiral fermionic zero-modes that contribute to 58 chiral matter supermultiplets transforming in the $(2,\Rep{1})$ representation of $U(1) \times E_{7}$.
\item Similarly, the $(-2,\Rep{1})$ multiplets are counted by $H^{*}(X,L^{2})$. Since $\chi(L^{2})=58$, there are 58 unpaired right-chiral fermionic zero-modes that contribute to 58 charged anti-chiral matter supermultiplets transforming in the $(-2,\Rep{1})$ representation of $U(1) \times E_{7}$. However, as discussed above, these do not give extra fields in the spectrum: they are (right-handed) anti-chiral $(-2,\Rep{1})$ supermultiplets which are simply the CPT conjugate partners of the 58 chiral $(2,\Rep{1})$ supermultiplets already counted above. \end{itemize} In summary, the $\Uni 1\times\Ex 7$ hidden sector massless spectrum for $L=\mathcal{O}_{X}(2,1,3)$ is \begin{equation} 1\times(0,\Rep{133})+1\times(0,\Rep{1})+8\times(1,\Rep{56})+58\times(2,\Rep{1})\ ,\label{eq:matter} \end{equation} corresponding to one vector supermultiplet transforming in the adjoint representation of $\Ex7$, one $\Uni1$ adjoint representation vector supermultiplet, eight chiral supermultiplets transforming as $(1,\Rep{56})$ and 58 chiral supermultiplets transforming as $(2,\Rep{1})$. Note that since we have a chiral spectrum charged under $\Uni1$ with all positive charges, the $\Uni1$ gauge symmetry will be anomalous. As we discuss in the next subsection, this anomaly is canceled by the four-dimensional version of the Green--Schwarz mechanism which, in addition, gives a non-zero mass to this ``anomalous'' hidden sector $U(1)$. \subsection{\texorpdfstring{$D=4$}{D = 4} Effective Lagrangian and the Anomalous \texorpdfstring{$U(1)$}{U(1)} Mass} Before proceeding to the discussion of $N=1$ supersymmetry, it will be useful to present the $D=4$ effective theory for the hidden sector and to explicitly compute the anomalous mass of the $U(1)$ gauge boson. We present the results for a generic hidden sector line bundle $L=\mathcal{O}_{X}(l^1,l^2,l^3)$ with an arbitrary embedding into the hidden sector $E_{8}$.
However, we conclude subsection 7.4.2 by computing the anomalous mass associated with the specific line bundle $L=\mathcal{O}_{X}(2,1,3)$ embedded into $E_{8}$ as in \eqref{red5} with $a=1$. \subsubsection{\texorpdfstring{$D=4$}{D = 4} Effective Lagrangian} Following the conventions of \cite{Brandle:2003uya,Freedman:2012zz}, the relevant terms in the four-dimensional effective action for the hidden sector of the strongly coupled heterotic string are \begin{equation} \mathcal{L}=\ldots -G_{LM}D_{\mu}C^{L}D^{\mu}{\bar{C}}^{M}-\tfrac{1}{2}g_{ij}D_{\mu}T^i D^{\mu}\bar{T}^{j}-\frac{4a\re f_{2}}{16\pi\hat{\alpha}_{\text{GUT}}}F_{2}^{\mu\nu}F_{2\mu\nu}-\frac{\pi\hat{\alpha}_{\text{GUT}}}{2a\re f_{2}}D_{\Uni 1}^{2}\ , \label{eq:het_lagrangian-4} \end{equation} where $C^L$ denote the scalar components of the charged zero-mode chiral superfields, generically with different $U(1)$ charges $Q^{L}$ discussed in the previous subsection, $T^{i}$ are the {\it complex} scalar components \begin{equation} T^{i}=t^{i}+\ii\,2\chi^{i}\qquad i=1,2,3 \ , \label{gd1} \end{equation} of the Kähler moduli superfields, where the $t^{i}$ are defined in \eqref{47} and $\chi^{i}$ are the associated axions, and $F_{2\mu\nu}$ is the hidden sector four-dimensional $\Uni 1$ field strength. The K\"ahler metrics $G_{LM}$ and $g_{ij}$ are functions of the dilaton and K\"ahler moduli with positive eigenvalues. As we will see below, the exact form of $G_{LM}$ is not important in this paper, whereas the exact form of $g_{ij}$ will be essential in the calculation of the anomalous $U(1)$ vector superfield mass. An explicit calculation of $g_{ij}$ is presented in Appendix C. Note that we have written the kinetic term for the hidden sector gauge field as a trace over $\Uni1$ instead of $\Ex8$ using \eqref{eq:trace_identity}, so that $\tr_{E_{8}} F_2^{\mu\nu} F_{2\mu\nu}=4a\,F_2^{\mu\nu} F_{2\mu\nu}$. 
The final term in \eqref{eq:het_lagrangian-4} is the potential energy, where $D_{\Uni 1}$ is proportional to the solution of the auxiliary D-field equation of motion and is given by \begin{equation} D_{\Uni 1}=FI-Q^{L}C^{L}G_{LM}{\bar{C}}^{M} \ . \label{again1} \end{equation} The complex scalar fields $C^{L}$ enter the expression for $D_{U(1)}$ since they transform linearly under $U(1)$ with charge $Q^{L}$. Following \eqref{again2A}, $FI$ is the genus-one corrected Fayet--Iliopoulos term, which is associated with a single line bundle and a single five-brane located at $\lambda\in[-1/2,1/2]$. In ``unity'' gauge, where ${\epsilon_S^\prime \hat R}/{V^{1/3}}=1$, it is given by \begin{equation} \label{again2} FI= \frac{a}{2} \frac{ \epsilon_S \epsilon_R^2}{\kappa_{4}^{2}} \frac{1} {\ensuremath{{\widehat R}} V^{2/3}} \bigl(d_{ijk} l^i a^j a^k - a\,d_{ijk}l^il^jl^k - l^i(2,2,0)|_i +(\tfrac{1}{2}+\lambda)^2l^iW_i \bigr) \ , \end{equation} with the volume modulus $V$ and $W_{i}$ presented in \eqref{10} and \eqref{33A} respectively. \subsubsection{The Anomalous \texorpdfstring{$U(1)$}{U(1)} Mass} As is well known, a $\Uni 1$ symmetry that appears in both the internal and four-dimensional gauge groups is generically anomalous~\cite{Dine:1986zy,Dine:1987xk,Lukas:1999nh,Blumenhagen:2005ga}. Hence, there must be a Green--Schwarz mechanism in the original heterotic M-theory which will cancel this anomaly in the effective field theory. Importantly, however, in addition to canceling this anomaly, the Green--Schwarz mechanism will give a mass for the $U(1)$ vector superfield~\cite{Green:1984sg}. This occurs as follows. The Green--Schwarz mechanism leads to a non-linear $U(1)$ action on the $\chi^{i}$ axionic partners of the $a^{i}$ K\"ahler moduli.
That is, under a $U(1)$ gauge transformation, one finds that \begin{equation} \delta\chi^{i}=-a\epsilon_{S}\epsilon_{R}^{2}\varepsilon l^{i} \ , \label{again5} \end{equation} where the $\epsilon_{S}$ and $\epsilon_{R}$ parameters are defined in \eqref{40AA} and \eqref{soc1} respectively, $a$ is the parameter associated with the embedding of the line bundle into $E_{8}$ and $\varepsilon$ is a gauge parameter. It follows that to preserve $U(1)$ gauge invariance, the kinetic energy term for the complex K\"ahler moduli must be written with a covariant derivative of the form \begin{equation} D_{\mu}T^i =\partial_{\mu} T^{i}+i2a\epsilon_{S}\epsilon_{R}^{2}l^{i}A_{\mu} \ . \label{again6} \end{equation} Inserting this into the kinetic energy term for the $T^{i}$ moduli in \eqref{eq:het_lagrangian-4}, and scaling the gauge connection $A_{\mu}$ so that its kinetic energy term is in the canonical form $-\frac{1}{4}F_{\mu \nu}F^{\mu \nu}$, generates a mass for the $U(1)$ vector superfield given by \begin{equation} m_{A}^{2}=\frac{\pi\hat{\alpha}_{\text{GUT}}}{a\re f_{2}}2a^{2}\epsilon_{S}^{2}\epsilon_{R}^{4}g_{ij} l^{i}l^{j} \ . \label{again7} \end{equation} The subscript $A$ refers to the fact that this mass arises from the Green--Schwarz mechanism required to cancel the gauge anomaly in the effective field theory. Using \eqref{sun11}, \eqref{sun2} and \eqref{sun3A}, one can evaluate the metric $g_{ij}$, which is presented in \eqref{pen1}. Inserting this into \eqref{again7} leads to an expression for $m_A^2$ of the form \begin{equation} m_A^2 =\frac{\pi\hat\alpha_{GUT}}{a \re f_2}\frac{a^2\epsilon_S^2\epsilon_R^4}{\kappa_{4}^{2}\ensuremath{{\widehat R}}^{2}} \left(\frac{1}{8V^{4/3}}\mu(L)^{2}-\frac{1}{2V^{1/3}}d_{ijk}l^{i}l^{j}a^{k}\right) \ , \label{eq:anomalous_massA} \end{equation} which is valid for a generic line bundle $L=\mathcal{O}_{X}(l^1,l^2,l^3)$ embedded arbitrarily into the hidden sector $E_{8}$. 
\begin{figure}[t] \centering \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.0\textwidth]{2D_anomaly.pdf} \end{subfigure} \caption{Value of $m_{A}$ versus the ratio ${\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}/{(vV)^{1/6} }$ of the five-dimensional orbifold length to the averaged Calabi--Yau radius at different points across the magenta region shown in Figure 10, for both the split (blue) and simultaneous (orange) Wilson line scenarios.} \label{fig:2DAnomalyPlots} \end{figure} We conclude this subsection by evaluating \eqref{eq:anomalous_massA} for the specific line bundle $L=\mathcal{O}_{X}(2,1,3)$ embedded into $E_{8}$ as in \eqref{red5} with $a=1$. We display in Figure \ref{fig:2DAnomalyPlots} the value of $m_{A}$ versus the ratio ${\pi \rho {\ensuremath{{\widehat R}}} V^{-1/3}}/{(vV)^{1/6} }$ of the five-dimensional orbifold length to the average Calabi--Yau radius at different points across the magenta region shown in Figure \ref{fig:Intersection} for both the split and simultaneous Wilson line scenarios. \subsection{Supersymmetric Vacuum Solutions in Four Dimensions} The generic form of the $U(1)$ D-term in the four-dimensional effective theory for an arbitrary hidden sector line bundle $L=\ensuremath{{\cal O}}_X(l^{1}, l^{2}, l^{3})$ was presented in \eqref{again1}. Using this result, we will now discuss the conditions for unbroken $N=1$ supersymmetry in the four-dimensional theory. $N=1$ supersymmetry will be preserved in the $D=4$ effective theory only if the $D_{\Uni 1}$ term presented in \eqref{again1} vanishes at the minimum of the potential energy; that is, \begin{equation} \langle D_{\Uni 1} \rangle=0 \ . \end{equation} Whether or not the $D=4$ effective theory can satisfy this condition, and the exact details as to how it does so, depends strongly on the value of the Fayet--Iliopoulos term. There are two generic possibilities. \begin{enumerate} \item[](i) The genus-one corrected FI-term vanishes.
In this case, the VEVs of the scalar fields either all vanish or the VEVs of those fields with opposite charge, should they exist, cancel against each other. \item[](ii) The genus-one corrected FI-term is non-vanishing. In this case, non-zero VEVs of the scalar fields $C^L$ with the same sign as $FI$ turn on to cancel the non-vanishing FI-term. \end{enumerate} Each of the two scenarios comes with its own conditions which have to be met. In the first case, in order to obtain a vanishing FI-term, the strong coupling $\kappa_{11}^{2/3}$ corrections to the slope need to cancel the tree-level ``classical'' slope in \eqref{again2}. For that to happen, one needs to be in a very strongly coupled regime, where working only to order $\kappa_{11}^{2/3}$ may be a poor approximation. We provide a detailed discussion about the strong coupling expansion to first and higher order in Appendix D. In the second case, the low-energy spectrum needs to contain scalars $C^L$ with the correct charge $Q^L$ under $U(1)$, such that their VEVs can cancel the non-zero $FI$ contribution. In such a scenario, one can move in K\"ahler moduli space--while still satisfying all the vacuum and phenomenological constraints (that is, generically, outside the magenta region of Figure 10 while remaining within the brown region)--to a less strongly coupled regime in which the first-order expansion to $\kappa_{11}^{2/3}$ is more accurate. However, as we will show in Section 7.5.2, the VEVs of these scalar fields may deform the hidden sector line bundle to an $SU(2)$ bundle, which might or might not be slope-stable. \subsubsection{Vanishing FI}\label{sec:vanishing_FI} Let us start by analyzing the first case. A simple way to ensure unbroken $N=1$ supersymmetry in $D=4$ and slope stability of the hidden sector bundle is to require that \begin{equation} FI=0 \ . \label{lamp1} \end{equation} There are then two scenarios in which supersymmetry can remain unbroken in the low energy theory. 
These are the following: \begin{enumerate} \item The first, and simplest, possibility is that the charges $Q^{L}$ of the scalar fields $C^L$ are all of the same sign. It follows that the potential energy will set all VEVs to zero, $\langle C^{L} \rangle=0$, and hence $D_{\Uni 1}$ will vanish at this {\it stable} vacuum. Thus $N=1$ supersymmetry will be unbroken. \item A second possibility is that some of the $Q^{L}$ signs may differ. This will lead to flat directions along which both $D_{\Uni 1}$ and the potential energy vanish. If one is at an arbitrary point, away from the origin, in a flat direction, then at least two VEVs will be non-vanishing, $\langle C^{L} \rangle \neq 0$, and, hence, although preserving $N=1$ supersymmetry, such a vacuum would also spontaneously break the $U(1)$ symmetry. In such a scenario, the non-zero VEVs of the $C^L$ scalars give a mass to the $\Uni1$ vector field via the super-Higgs effect. Scaling the gauge connection $A_{\mu}$ so that its kinetic energy term is in the canonical form $-\frac{1}{4}F_{\mu \nu}F^{\mu \nu}$, the value of this mass is easily computed and found to be \begin{equation} m_{D}^{2}=\frac{\pi\hat{\alpha}_{\text{GUT}}}{a\re f_{2}} Q^{L} {Q}^{M}G_{LM}\langle C^{L}\rangle\langle {\bar{C}}^{M}\rangle \ . \label{again4A} \end{equation} \end{enumerate} Having discussed this second possibility, we note again that the associated potential energy must have at least one flat direction and, hence, corresponds to an {\it unstable} vacuum state. For this reason, we will ignore such vacua in this paper. However, the first scenario is easily satisfied, as we now demonstrate with an explicit example.
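The first possibility can be illustrated with a small numerical sketch. Here the charged-matter K\"ahler metric $G_{LM}$ is replaced by a positive diagonal toy metric and each multiplet is represented by a single complex scalar; both are illustrative assumptions on our part, not the metric computed in Appendix C.

```python
# Hedged sketch of scenario 1: with FI = 0 and all U(1) charges Q_L of the
# same sign, D_U(1) = FI - sum_L Q_L G_LL |C_L|^2 (toy diagonal metric G,
# one scalar per multiplet -- both assumptions) vanishes only at C_L = 0.
import numpy as np

def d_term(C, Q, G, FI=0.0):
    """U(1) D-term for scalar VEVs C with charges Q and metric weights G."""
    return FI - np.sum(Q * G * np.abs(C)**2)

Q = np.array([1.0] * 8 + [2.0] * 58)   # charges of 8 x (1,56) + 58 x (2,1)
G = np.ones_like(Q)                    # positive-definite toy metric
rng = np.random.default_rng(0)

# Any nonzero VEV configuration makes D strictly negative ...
C = rng.normal(size=Q.size)
assert d_term(C, Q, G) < 0
# ... so the potential ~ D^2 is minimized, with D = 0, only at C = 0.
assert d_term(np.zeros_like(Q), Q, G) == 0.0
```

This is the numerical counterpart of the statement that, for same-sign charges and vanishing FI-term, the potential has a unique stable minimum at the origin.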
\subsubsection*{$FI=0$ Example: $N=1$ Supersymmetry for $L=\mathcal{O}_X(2,1,3)$} We now discuss $N=1$ supersymmetry in the example introduced above, where the line bundle is taken to be $L=\mathcal{O}_X(2,1,3)$ and embedded into $SU(2) \subset E_{8}$ as in \eqref{red4} with coefficient $a=1$, and the location of the single five-brane is at $\lambda=0.49$. Recall that the genus-one corrected FI-term for this line bundle and embedding was presented in \eqref{trenton2} and given by \begin{equation} FI= \frac{ \epsilon_S \epsilon_R^2}{2\kappa_{4}^{2}} \frac{1} {\ensuremath{{\widehat R}} V^{2/3}} \left( \tfrac{1}{3}(a^1)^{2}+\tfrac{2}{3}(a^2)^{2} +8a^1a^2+4a^2a^3 +2a^1a^3 -13.35 \right) \ . \label{bird1} \end{equation} In Figure 10, the region of K\"ahler moduli space which satisfies all of the required constraints, including slope stability, was presented. As discussed in detail in subsection 7.2, in order for the bundle $L \oplus L^{-1}$ to be polystable, it was necessary to restrict this region of moduli space to points that set the genus-one corrected slope \eqref{trenton3} -- and, hence, the FI-term in \eqref{bird1} -- to zero. Furthermore, the low-energy scalar spectrum carrying non-vanishing $U(1)$ charge was determined in subsection 7.3. It was shown there that the fields of the low-energy scalar spectrum of the hidden sector -- specifically $8 \times (1,\Rep{56})+58 \times (2,\Rep{1})$ -- all carry charges $Q^{L}$ of the same sign. It then follows from the above discussion that the potential energy must have a unique minimum where the VEVs vanish, $\langle C^{L} \rangle=0$, such that $\langle D_{\Uni 1} \rangle=0$ at this minimum. Hence, $N=1$ supersymmetry is {\it unbroken} in the vacuum state of the $D=4$ effective theory. Since the VEVs of all light $U(1)$ charged scalar fields vanish for this explicit example, it follows from \eqref{again4A} that \begin{equation} m_{D}=0 \ .
\end{equation} However, as discussed above, since the $U(1)$ symmetry is anomalous, the mass $m_{A}$ presented in \eqref{eq:anomalous_massA} is non-vanishing and, for this explicit example, is plotted in Figure 12. We would like to point out that $L=\mathcal{O}_X(2,1,3)$ is not the only hidden sector line bundle which, if embedded into $SU(2) \subset E_{8}$ as in \eqref{red5} with $a=1$, has a region of K\"ahler moduli space where all required constraints are satisfied, $FI=0$ and the $D=4$ vacuum preserves $N=1$ supersymmetry. However, any such line bundle $L$ must be ``ample''; that is, its defining integers $l^{i}$, $i=1,2,3$, subject to $l^{1}+l^{2}=0 \bmod 3$, must be either all positive or all negative. The reason is that for the Schoen manifold defined in Section 2, one can show that the genus-one corrected Fayet--Iliopoulos term can vanish, that is, $FI=0$, if and only if $L$ is ample. Restricting to ample line bundles, one can indeed find a significant number satisfying all required constraints. However, of these, many have a large number of equal-sign zero-mode chiral multiplets -- some with large charges $Q^{L}$ -- making them incompatible with spontaneous supersymmetry breaking via gaugino condensation. While potentially of physical interest, we wish to focus on the subset of ample line bundles that have a sufficiently small zero-mode chiral spectrum, with sufficiently small charges, to be compatible with supersymmetry breaking via $E_{7}$ gaugino condensation. These line bundles are specified by \begin{gather}\label{many} \mathcal{O}_X(2,1,3)\ , \qquad \mathcal{O}_X(1,2,3)\ ,\qquad \mathcal{O}_X(1,2,2)\ , \qquad\mathcal{O}_X(2,1,2)\ , \nonumber \\ \mathcal{O}_X(2,1,1)\ , \qquad\mathcal{O}_X(1,2,1)\ , \qquad\mathcal{O}_X(2,1,0) \ , \end{gather} and their duals; that is, for example, $\mathcal{O}_X(-2,-1,-3)$. Spontaneous supersymmetry breaking via $E_{7}$ gaugino condensation in this context will be explored in a future publication.
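As a simple sanity check, every line bundle listed in \eqref{many} satisfies the stated quantization condition $l^{1}+l^{2}=0 \bmod 3$, as do their duals. The short sketch below verifies this; it is a check of the stated condition only, not of ampleness or of the constraint scan itself.

```python
# Hedged check: the bundles of Eq. (many) all obey l1 + l2 = 0 mod 3.
# This verifies only the stated quantization condition, not ampleness.
bundles = [(2, 1, 3), (1, 2, 3), (1, 2, 2), (2, 1, 2),
           (2, 1, 1), (1, 2, 1), (2, 1, 0)]

assert all((l1 + l2) % 3 == 0 for l1, l2, l3 in bundles)
# Duals flip all signs, which preserves the condition:
assert all((-l1 - l2) % 3 == 0 for l1, l2, l3 in bundles)
```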
As discussed at the beginning of this section, although this hidden sector vacuum satisfies all required physical and phenomenological constraints, setting the FI-term to zero necessitates exact cancellation of the genus-one corrected slope against the tree-level slope of the hidden sector line bundle. Unsurprisingly, this fine-tuning can only be carried out in a relatively strongly coupled regime of heterotic M-theory -- thus making the validity of the linearized approximation used in this paper uncertain. This is made more explicit and discussed in detail in Appendix D. It is, therefore, of some interest to explore vacua for which the genus-one corrections to the slope are significantly smaller than the tree level slope of the hidden sector bundle. In this case, one expects the effective coupling parameter to be smaller than in the previous scenario and, hence, the linearized results used in this paper to be a better approximation. For this reason, we now introduce the basic criteria required by hidden sector vacua where the FI-term does not vanish. Explicit examples of such hidden sector vacua will be presented in a future publication. \subsubsection{Non-vanishing FI} We now consider what happens when the $\kappa_{11}^{2/3}$ correction to the tree-level slope is small and so cannot be used to set the FI-term to zero. The question then is, given a non-vanishing FI-term \begin{equation} FI\neq0\ , \end{equation} can one still preserve $N=1$ supersymmetry in the four-dimensional effective theory? Recall that the conditions for a supersymmetric vacuum are the vanishing of the F- and D-terms. For consistency with the previous content of this paper, let us continue to assume that the vacuum has an unbroken $U(1) \times \Ex 7$ gauge symmetry, embedded into $E_8$ as discussed in Section 4.2. It follows that the $\Ex 7$ D-terms will vanish by setting the VEVs of the non-abelian $(\pm1,\Rep{56})$ matter fields to be zero. 
Any F-terms involving the non-abelian matter fields will then also vanish. As discussed in \cite{Anderson:2010ty}, the F-term conditions for the $(\pm2,\Rep 1)$ matter fields permit us to give VEVs to only one set of fields, that is, either $(2,\Rep 1)$ or $(-2,\Rep 1)$ but not both. The remaining condition for the vacuum solution to be supersymmetric is the vanishing of the $\Uni 1$ D-term, $\langle D_{\Uni 1}\rangle=0$. Since the $FI$ term does \emph{not} vanish for any choice of line bundle when the $\kappa_{11}^{2/3}$ correction is small, one is forced to cancel the $FI$ term against the VEVs of the charged singlet fields. In other words, we want \begin{equation}\label{eq:vevs} Q^{L}\langle C^{L}\rangle G_{LM}\langle\bar{C}^{M}\rangle=FI\quad\Rightarrow\quad\langle D_{\Uni 1}\rangle=0\ . \end{equation} Obviously, such a cancellation will depend on the relative sign of $FI$ and the charges of the scalars $C^{L}$. For example, if the $FI$ term is \emph{positive}, one needs at least one zero-mode chiral supermultiplet whose scalar component is a singlet under the non-abelian group and has \emph{positive} $\Uni 1$ charge. Whether or not such scalar fields are present will depend on the specific line bundle studied. If one can cancel the FI-term in this way, the non-zero VEVs of the $C^{L}$ scalars give a mass to the $\Uni 1$ vector field via the super-Higgs effect: \begin{equation} m_{D}^{2}=\frac{\pi\hat{\alpha}_{\text{GUT}}}{a\re f_{2}}Q^{L}Q^{M}\langle C^{L}\rangle G_{LM}\langle\bar{C}^{M}\rangle\ , \end{equation} where the subscript $D$ indicates that this mass is due to the non-vanishing VEVs needed to set the D-term to zero. Note that in the case where one gives VEVs to fields of a single charge $Q^{L}$, the mass is related to the $FI$ term as \begin{equation} m_{D}^{2}=\frac{\pi\hat{\alpha}_{\text{GUT}}}{a\re f_{2}}Q^{L}\,FI\ . 
\end{equation} As in the case of vanishing slope in the previous subsection, the $\Uni 1$ vector field mass also receives a contribution from the Green–Schwarz mechanism. Hence the total mass of the vector field is given by the sum of $m_{D}^{2}$ above and $m_{A}^{2}$ from \eqref{eq:anomalous_massA}. As we discussed in Section \ref{sec:anomaly_cancellation}, the embedding of $\Uni 1$ inside $\Ex 8$ that we have considered for much of this paper factors through the $\SU 2$ subgroup of $\Ex 8$ that commutes with $\Ex 7$. The $\Uni 1$ gauge connection $A$ for the line bundle $L$ can be thought of as defining an $\Ex 8$ connection in two equivalent ways: either embedding directly in $\Ex 8$ via the generator $Q$ discussed around \eqref{red5}, or first embedding in $\SU 2$ as $\text{diag}(A,-A)$ and then embedding $\SU 2$ in $\Ex 8$ via \eqref{red3}. The second of these two pictures is helpful for understanding the effect of allowing non-zero VEVs for the charged singlets, $\langle C^{L}\rangle\neq0$. First note that the connection $\text{diag}(A,-A)$ is a connection for an $\SU 2$ bundle which splits as a direct sum \begin{equation} \mathcal{V}=L\oplus L^{-1} \end{equation} of line bundles. How does this relate to supersymmetry? The induced $\Ex 8$ connection will solve the (genus-one corrected) Hermitian Yang–Mills equation, and so give a supersymmetric solution, if the $\SU 2$ connection itself solves the (genus-one corrected) Hermitian Yang–Mills equation. This is guaranteed if the rank two $L\oplus L^{-1}$ bundle is polystable with vanishing slope.\footnote{Here, slope is taken to mean the genus-one corrected slope. The same comments apply if one considers only the tree-level expression.} Since $\mu(\mathcal{V})=0$ by construction, the remaining conditions for polystability are \begin{equation} \mu(L)=\mu(L^{-1})=0\ . 
\end{equation} This is exactly the vanishing $FI$ case studied in Section \ref{sec:vanishing_FI}, where the corrected slope of $L$ is set to zero and the VEVs of the charged singlet matter fields vanish. When $\mu(L)\neq0$, the $\SU 2$ bundle $\mathcal{V}$ is no longer polystable and so its connection does not solve the Hermitian Yang--Mills equation. The four-dimensional consequence of this is that the $FI$ term no longer vanishes. However, we might be able to turn on VEVs for appropriate charged singlet matter fields in order to cancel the $FI$ term and set the D-term to zero, thus preserving supersymmetry. One might wonder: what is the bundle interpretation of turning on VEVs for these charged singlet matter fields? As discussed in \cite{0905.1748,1012.3179,1506.00879}, these VEVs should be seen as deforming the gauge bundle away from its split form $\mathcal{V}=L\oplus L^{-1}$ to a \emph{non-split} $\SU 2$ bundle $\mathcal{V}'$ which admits a connection that \emph{does} solve the Hermitian Yang--Mills equations. Consider the case where $\mu(L)>0$ (equivalent to $FI>0$) in some region of K\"ahler moduli space where the constraints of Section \ref{sec:constraints} are all satisfied. From \eqref{eq:vevs} we see that one can set $\langle D_{\Uni 1}\rangle=0$ provided we have charged scalars $C^{L}$ with positive charge, $Q^{L}>0$. From the generic form of the cohomologies in Table \ref{tab:chiral_spectrum}, the required scalars are those transforming in $(2,\Rep 1)$, with the chiral superfields which contain these scalars counted by $h^{1}(X,L^{-2})+h^{3}(X,L^{-2})$. Hence, giving VEVs to $(2,\Rep 1)$ scalars corresponds to allowing non-trivial elements of $H^{1}(X,L^{-2})\oplus H^{3}(X,L^{-2})$. The first summand has an interpretation as the space of extensions of $L$ by $L^{-1}$, with the exact sequence \begin{equation} 0\to L^{-1}\to\mathcal{V}'\to L\to0 \end{equation} defining an $\SU 2$ bundle $\mathcal{V}'$.
This extension can be non-trivial ($\mathcal{V}\neq\mathcal{V}'$) provided \begin{equation} \op{Ext}^{1}(L,L^{-1}) = H^{1}(X,L^{-2})\neq0\ . \end{equation} Choosing a non-zero element of this space then corresponds to turning on VEVs for some set of $(2,\Rep 1)$ scalars. Thus we see that giving VEVs to positively charged singlet scalars arising from $H^1(X,L^{-2})$ amounts to deforming the induced $L \oplus L^{-1}$ bundle $\mathcal{V}$ to the $\SU 2$ bundle $\mathcal{V}'$. Note that if $H^1(X,L^{-2})=0$, the VEVs of positively charged matter coming from $H^3(X,L^{-2})$ cannot be interpreted as deforming to a new $SU(2)$ bundle. Hence, the bundle remains $L \oplus L^{-1}$, which is unstable and, therefore, its gauge connection does not solve the Hermitian Yang--Mills equation. Assuming one can show for a given line bundle $L$ that $H^1(X,L^{-2}) \neq 0$, it might seem that we are done -- the $\Uni 1$ D-term vanishes and supersymmetry appears to have been restored. However, the four-dimensional analysis is insensitive to whether the new bundle $\mathcal{V}'$ is slope stable and thus actually admits a solution to the Hermitian Yang--Mills equation. Unfortunately, checking slope stability is a difficult calculation that one must do explicitly for each example. As a preliminary check, one can first see whether $\mathcal{V}'$ satisfies some simpler \emph{necessary} conditions for slope stability. First, the obvious subbundle $L^{-1}$ should not destabilise $\mathcal{V}'$. In our case this is guaranteed as we have assumed $\mu(L)>0$, so that $L^{-1}$ has negative slope.\footnote{If instead $\mu(L)<0$, one simply swaps the roles of $L$ and $L^{-1}$ in the above discussion and instead considers the extension of $L^{-1}$ by $L$.} Second, $\mathcal{V}'$ must satisfy the Bogomolov inequality~\cite{huybrechts2010geometry}.
For a bundle with vanishing first Chern class, this states that if $\mathcal{V}'$ is slope stable with respect to some choice of K\"ahler class $\omega=a^{i}\omega_{i}$, then \begin{equation} \int_{X}c_{2}(\mathcal{V}')\wedge\omega\geq0\ . \end{equation} Since $\mathcal{V}'$ is constructed as an extension of line bundles, we have \begin{equation} c_{2}(\mathcal{V}')\equiv c_{2}(L\oplus L^{-1})=-\tfrac{1}{2}c_{1}(L)\wedge c_{1}(L)\ , \end{equation} with $c_{1}(L)={v^{-1/3}}l^{i}\omega_{i}$. Thus if $\mathcal{V}'$ is to be slope stable, we must be in a region of K\"ahler moduli space where \begin{equation}\label{eq:Bolg} -\tfrac{1}{2}\int_{X}c_{1}(L)\wedge c_{1}(L)\wedge\omega\geq0\quad\Rightarrow\quad d_{ijk}l^{i}l^{j}a^{k}\leq0\ . \end{equation} Note that this is a necessary but not sufficient condition. However, it is often the case that the Bogomolov inequality is the only obstruction to finding stable bundles~\cite{Braun:2005zv}. We thus have a new set of necessary conditions on $L$ (in addition to the physically and mathematically required constraints presented in Section \ref{sec:constraints}) for there to be a supersymmetric vacuum after turning on the VEVs to cancel the FI-term. These are \begin{enumerate} \item Singlet matter with the correct charge must be present, so that $FI$ can be canceled and the D-term set to zero. \item $H^1(X,L^{-2})$ must not vanish. \item The Bogomolov inequality, $d_{ijk}l^{i}l^{j}a^{k}\leq0$, must be satisfied. \end{enumerate} Does our previous choice of $L=\mathcal{O}_{X}(2,1,3)$ satisfy these conditions? Note that $\mu(L)>0$ everywhere in the K\"ahler cone for this line bundle. From its low-energy spectrum in Table \ref{tab:chiral_spectrum}, we see we have 58 massless positively charged singlets transforming in the $(2,\Rep 1)$ representation, and so we do indeed have the correct matter to cancel the FI-term and set the D-term to zero. 
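The Bogomolov condition \eqref{eq:Bolg} is straightforward to test numerically for candidate line bundles. The sketch below is our addition, and the symmetric intersection numbers $d_{ijk}$ used in it are illustrative placeholders rather than the actual values for the Schoen manifold; it nevertheless shows the generic behaviour claimed in the text -- with non-negative $d_{ijk}$, an ample bundle (all $l^{i}>0$) always violates the inequality inside the positive K\"ahler cone, while a mixed-sign bundle can satisfy it:

```python
import itertools

# Placeholder symmetric triple intersection numbers d_ijk (h^{1,1} = 3).
# ILLUSTRATIVE ONLY -- substitute the actual Schoen-manifold values.
d = {}
def set_d(i, j, k, val):
    for p in itertools.permutations((i, j, k)):
        d[p] = val

set_d(0, 0, 2, 1)
set_d(1, 1, 2, 1)
set_d(0, 1, 2, 1)

def bogomolov_lhs(l, a):
    """d_ijk l^i l^j a^k; the Bogomolov condition requires this to be <= 0."""
    return sum(d.get((i, j, k), 0) * l[i] * l[j] * a[k]
               for i in range(3) for j in range(3) for k in range(3))

a = (1.0, 1.0, 1.0)  # a sample point in the positive Kaehler cone

# An ample bundle violates the condition for non-negative d_ijk:
assert bogomolov_lhs((2, 1, 3), a) > 0

# A mixed-sign bundle can satisfy it (for these placeholder d_ijk):
assert bogomolov_lhs((1, 2, -1), a) <= 0
```

With real intersection numbers, the same routine scans the listed candidate bundles over a grid of K\"ahler moduli $a^{k}$.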
However, as discussed in \cite{Braun:2013wr}, if $L$ is an {\it ample} line bundle then \begin{equation} H^1(X,L)=0 \ . \label{snow1} \end{equation} % Since $L=\mathcal{O}_{X}(2,1,3)$--and, hence, $L^{-2}$--is ample, it follows that $H^1(X,L^{-2})=0$. Therefore, condition 2 above implies that $L \oplus L^{-1}$ cannot admit an extension to an $SU(2)$ bundle $\mathcal{V}'$. Ignoring this for a moment, and assuming that there did exist an $SU(2)$ extension, we would still have to check whether or not the Bogomolov inequality, a necessary condition for $\mathcal{V}'$ to be slope stable, is satisfied. However, from \eqref{eq:Bolg} and the positivity of the K\"ahler moduli, we see that it is {\it impossible} to satisfy this inequality, implying (again) that the split bundle constructed from $L=\mathcal{O}_{X}(2,1,3)$ cannot be deformed to admit a solution to the Hermitian Yang--Mills equation. Moreover, we see the same will be true for any ample line bundle -- the $l^{i}$ are positive and $d_{ijk}l^{i}l^{j}a^{k}\leq0$ is not satisfied anywhere in the positive K\"ahler cone. What about other choices of line bundle? It turns out that of the three conditions, the Bogomolov inequality is the most difficult to satisfy. Scanning over different choices of $L$, one finds that, in the region of K\"ahler moduli space where the $\SU 4$ bundle is stable, the only equivariant line bundles with $\mu(L)>0$\footnote{We restrict to $\mu(L)>0$ in our scan to match our analysis above. Including bundles with $\mu(L)<0$ would give the reverse extension sequence with the bundle and its dual swapped, leading to the same $\SU 2$ bundles that were already captured by restricting to positive slope.} that allow for anomaly cancellation and satisfy the Bogomolov inequality are \begin{equation} \mathcal{O}_{X}(1,2,-1)\ ,\qquad\mathcal{O}_{X}(2,1,-1)\ ,\qquad\mathcal{O}_{X}(7,2,-2)\ ,\qquad\mathcal{O}_{X}(7,5,-3)\ .
\end{equation} Do any of these have positively charged singlet matter in their low-energy spectrum to allow for a non-trivial extension? That is, do we have $h^{1}(X,L^{-2})>0$ for any of these candidate line bundles? For $\mathcal{O}_{X}(1,2,-1)$ and $\mathcal{O}_{X}(2,1,-1)$, it is simple to show using a Leray spectral sequence that the answer is no. For a definitive answer in the remaining two cases, one must extend the analysis of Appendix A of \cite{Braun:2005zv} to higher degree line bundles on dP$_9$. This is beyond the scope of the present paper. Therefore, for now, we content ourselves with noting that $\chi(L^{-2})$ is positive for both remaining line bundles, which is consistent with $H^{1}(X,L^{-2})=0$ and the absence of an extension to an $SU(2)$ bundle $\mathcal{V}'$. As exploited by a number of other works~\cite{Anderson:2010ty,Anderson:2012yf,Nibbelink:2015ixa}, moving from a single line bundle to two or more such bundles provides a richer low-energy spectrum, making it much easier to find examples which possess the correct charged matter and satisfy both the phenomenological constraints and the Bogomolov inequality. We intend to pursue this in detail in future work. \section{Conclusions} In this paper, we have explicitly chosen the hidden sector line bundle $\mathcal{O}_X(2,1,3)$, embedded in a specific way with embedding coefficient $a=1$ into the $E_{8}$ gauge group, and studied its phenomenological properties. This choice of hidden sector was shown to satisfy all ``vacuum'' constraints required to be consistent with present low-energy phenomenology, as well as both the ``reduction'' and ``physical'' constraints required to be a ``strongly-coupled'' heterotic vacuum consistent with both the mass scale and gauge coupling of a unified $SO(10)$ theory in the observable sector.
Additionally, we showed that the induced $\SU2$ bundle $L \oplus L^{-1}$ is polystable after including genus-one corrections, and that the effective low-energy theory admits an $N=1$ supersymmetric vacuum. We pointed out that there are actually a large number of different line bundles that one could choose, and a large number of inequivalent embeddings of such line bundles into $E_{8}$. An alternative choice of hidden sector bundle could lead to: 1) a different commutant subgroup $H$ and hence a different low-energy gauge group, 2) a different spectrum of zero-mass particles transforming under $H \times U(1)$, 3) a different value for the associated Fayet--Iliopoulos term and, hence, a different D-term mass for the $U(1)$ vector superfield, and so on. Furthermore, a richer zero-mode spectrum could open the door to mechanisms for arbitrary size spontaneous $N=1$ supersymmetry breaking, new dark matter candidates and other interesting phenomena. We will explore all of these issues in several upcoming papers. \subsection*{Acknowledgements} We would like to thank Yang-Hui He and Fabian Ruehle for helpful discussions. Anthony Ashmore and Sebastian Dumitru are supported in part by research grant DOE No.~DESC0007901. Burt Ovrut is supported in part by both the research grant DOE No.~DESC0007901 and SAS Account 020-0188-2-010202-6603-0338.
Genoa is an unincorporated community in Olmsted County, Minnesota, United States. The community is located along 75th Street NW (County 14) near Exchange Avenue. Genoa is located within New Haven Township and Kalmar Township. Nearby places include Oronoco, Pine Island, and Byron. The South Branch Middle Fork of the Zumbro River flows nearby. History Genoa was platted in 1865, and named after Genoa, in Italy. A post office was established at Genoa in 1872, and remained in operation until 1905. References Unincorporated communities in Olmsted County, Minnesota Unincorporated communities in Minnesota
\section{Introduction} \IEEEPARstart{T}{he} new Hamamatsu model R11410-21 Photomultiplier Tube (PMT) was specially designed for the XENON1T direct dark matter search experiment \cite{aprile:12}, which has been under construction in the Gran Sasso National Laboratory (Italy) since June 2013. The detector sensitive volume will accommodate 2.2 ton of liquid xenon monitored by 248 PMTs. The PMT response to various light signals produced in a liquid xenon detector should be well understood, as it directly affects the experimental results. Before instrumenting the detector with the PMTs, each individual PMT has to be carefully characterized by measuring its single photoelectron peak, gain and afterpulse rate. Some properties, like linearity of the response and photocathode uniformity, are common to most of the PMTs. These properties can be measured for a few PMT samples. Gain, dark count rate, linearity of the response, photocathode uniformity and afterpulsing for the older version R11410-10 PMTs were previously measured \cite{lung:12}. Some of the PMT characteristics, including Quantum Efficiency (QE), gain and linearity of the response, could be temperature dependent and, therefore, should be measured at liquid xenon temperature. The performance of the R11410 PMT in a liquid xenon environment was studied elsewhere \cite{baudis:13}, demonstrating the stability of gain for a period of more than 5 months and a stable PMT operation in the vicinity of a strong electric field. Understanding the internal radioactive background in the XENON1T detector is a crucial part of the experiment, as it will define its sensitivity to the dark matter signal. As shown in \cite{aprile:11} and \cite{aprile:13}, most of the gamma background and a significant part of the neutron background comes from the PMTs, as they contain many more radioactive contaminants than the other elements of the detector.
Therefore, it is also crucial to measure the radioactive background from the detector components, and the PMTs in particular \cite{aprile:15}. In the present article we summarize our ongoing campaign on the characterization of the R11410 PMTs that will be employed in the XENON1T detector. We report on the recent measurements of the absolute QE at liquid xenon temperature. The measurement results of the linearity of the PMT response are also shown. We present the experimental setups for the mass production tests both at room temperature and at -100 $^0$C. We also present the experimental setup and the results of the PMT measurements in a liquid xenon environment. \section{PMT arrangement in XENON1T detector} As mentioned above, the XENON1T dark matter experiment will accommodate a total of 248 PMTs, as shown in \ref{fig:pmt_layout}. \begin{figure}[tbp] \begin{center} \subfiguretopcaptrue \subfigure[][] { \label{subfig:x1t_cut} \includegraphics[width=4cm]{figures/x1t_cut.eps} } \subfigure[][] { \label{subfig:pmt_arrays} \includegraphics[width=4cm]{figures/pmt_arrays.eps} } \caption{a) A cross section of XENON1T detector showing its internal structure. b) A 3D rendering of the top and the bottom PMT support structures for XENON1T detector. The bottom PMT array will accommodate 121 PMTs arranged in a hexagonal pattern; the top PMT array will be comprised of 127 PMTs arranged in a circular pattern to improve the resolution of radial event position reconstruction.} \label{fig:pmt_layout} \end{center} \end{figure} These PMTs will be split between the top and the bottom PMT supporting structures shown in \ref{subfig:pmt_arrays}. The top PMT supporting structure (see \ref{subfig:pmt_arrays}) will be instrumented with 127 PMTs arranged in concentric circles in order to improve the resolution of the event position reconstruction. The bottom PMT array (see \ref{subfig:pmt_arrays}) will incorporate 121 PMTs arranged in a hexagonal pattern to optimize the optical coverage.
The gaps between the PMTs in the top and in the bottom PMT arrays will be covered with a single piece PTFE reflector to improve the light collection efficiency. \section{Measurements of the absolute QE for R11410-10 PMTs} The measurements of the absolute QE of Hamamatsu model R11410-10 PMTs have been recently reported \cite{lyashenko:14}. Although the measurements were performed for the older PMT version, they also apply to the newer R11410-21 PMT, as it uses the same type of photocathode. \begin{figure}[tbp] \begin{center} \subfiguretopcaptrue \subfigure[][] { \label{subfig:qe_lt_ka07} \includegraphics[width=9cm]{figures/qe_lowt_ka35.eps} } \subfigure[][] { \label{subfig:qe_lt_ka35} \includegraphics[width=9cm]{figures/qe_vs_t_175nm.eps} } \caption{a) The absolute QE as a function of wavelength measured at various temperatures \protect\cite{lyashenko:14}. The absolute QE was recorded at the following temperatures: Room Temperature (red stars), 0 $^{0}$C (orange triangles), -25 $^{0}$C (dark yellow squares), -50 $^{0}$C (magenta circles), -70 $^{0}$C (cyan left arrows), -90 $^{0}$C (black right arrows) and -110 $^{0}$C (blue diamonds). b) Absolute QE as a function of temperature at a wavelength of 175 nm measured for the five Hamamatsu model R11410-10 PMTs. Solid lines represent linear fits to each of the PMTs.} \label{fig:qe_lowt} \end{center} \end{figure} The absolute QE of five Hamamatsu model R11410-10 PMTs was measured at low temperatures down to -110 $^{0}$C in a spectral range from 154.5 nm to 400 nm. As shown in \ref{fig:qe_lowt}, during the PMT cooldown from room temperature to -110 $^{0}$C (the operation temperature of the PMTs in liquid xenon detectors), the QE increases by 10-15\% at 175 nm. The increase of the QE at low temperatures can be explained as follows. In the photocathode bulk, a dominant energy loss mechanism for a photoelectron prior to its emission into vacuum is collisions with optical phonons.
A decrease of the phonon-photoelectron cross-section \cite{araujo:03} with temperature results in a reduced energy loss by the photoelectron and, therefore, leads to an improved quantum efficiency. The increase of the QE of the PMT at low temperature will result in improved single-photon sensitivity. This will allow for a lowering of the energy threshold and, therefore, an improvement of the detector sensitivity to low-energy WIMPs. \section{PMT voltage divider} The purpose of the PMT voltage divider is to supply proper bias voltages (recommended by the manufacturer) to each of the PMT dynodes and to pick up the signal from the PMT anode. Usually it is realized using a passive resistive linear circuit. In the design of the PMT voltage divider for low background liquid noble gas detectors such as XENON1T, several important requirements have to be met. The first is low power dissipation in the divider. A constant current flowing through the resistive elements of the divider will produce heat that could cause the formation of bubbles or initiate boiling when the divider operates in a noble liquid. These processes have to be avoided, as they can affect the stability of the detector by causing sparking issues in the regions of high electric field in the detector. In order to avoid the formation of bubbles in the liquid xenon, the heat flux (defined as the electric power per surface area) produced in the divider should not exceed $2\cdot10^4$ W/m$^2$, as shown in \cite{haruyama:02}. The second is that the voltage divider has to be made of low radioactivity materials. As indicated in \cite{aprile:11} and \cite{aprile:13}, the components of the PMT divider could make a considerable contribution to the radioactive background. And the third is stable operation at low temperatures. The PMT voltage divider for the XENON1T detector is being designed to fulfill the above requirements.
The electrical scheme of the divider will be presented in the next technical paper by the XENON1T collaboration. \begin{figure}[tbp] \begin{center} \includegraphics[width=4cm]{figures/pmt_base.eps} \caption{A photograph of the voltage divider PCB mounted on the PMT.} \label{fig:pmt_base_photo} \end{center} \end{figure} The Printed Circuit Board (PCB) of the divider was made of Cirlex\footnote{\href{http://www.cirlex.com/}{Cirlex} is a registered trademark of DUPONT} polyimide laminate, known to have significantly lower radioactivity compared to traditional PCB materials. A photograph of the Cirlex PCB of the divider mounted on the PMT is shown in \ref{fig:pmt_base_photo}. The divider is characterised by a very low power consumption of 0.024 W per PMT at -HV = 1500 V (a nominal PMT operation voltage). The heat flux through the surface area of the resistors in the PMT divider can be estimated as 0.024 W / (14 $\cdot$ 2 mm $\cdot$ 1.25 mm) = 760 W/m$^2$, where 14 is the number of Mega Ohm - scale resistors in the divider and 2 mm $\cdot$ 1.25 mm is the surface area of each Mega Ohm - scale resistor. According to \cite{haruyama:02}, this heat flux is low enough not to form any bubbles in the liquid xenon. The linearity of the PMT response to light pulses could also be affected by the voltage divider, as well as by the avalanche charge distribution inside the PMT. When an intense light pulse enters a PMT, a large current flows in the last dynode stages, increasing the space charge density and causing current saturation. The space charge effects depend on the electric field distribution and intensity between each dynode. The large current in the last dynode stages could cause a voltage drop at the resistors of the PMT voltage divider, thus slightly redistributing the voltages applied to the dynodes and affecting the linearity of the PMT response to light pulses.
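The back-of-the-envelope divider numbers above are easy to reproduce. This sketch is our addition; the power and resistor dimensions are taken from the text, and small differences from the quoted flux value come from rounding of the inputs:

```python
# Divider power budget quoted in the text: 0.024 W per PMT at -HV = 1500 V.
power_w = 0.024
hv_volts = 1500.0

# Implied total chain resistance, consistent with fourteen MegaOhm-scale resistors.
r_total_ohm = hv_volts**2 / power_w
print(f"total chain resistance ~ {r_total_ohm / 1e6:.0f} MOhm")

# Heat flux over the surface of the 14 resistors (2 mm x 1.25 mm each).
area_m2 = 14 * (2e-3 * 1.25e-3)
flux = power_w / area_m2
print(f"heat flux ~ {flux:.0f} W/m^2")

# Safely below the ~2e4 W/m^2 bubble-formation threshold of haruyama:02.
assert flux < 2e4
```

The result is roughly a factor of thirty below the bubble-formation threshold, which is the key design margin.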
The linearity of the PMT divider could be improved by adding capacitors at the last amplification stages. The linearity of the PMT response to 1 $\mu$s wide (an approximate pulse width of S2 signals in the XENON1T detector) light pulses as a function of the number of photoelectrons per pulse was measured as described in \cite{hama}. It was measured at pulse repetition rates of 200 Hz, 500 Hz, 1000 Hz and 2000 Hz, as shown in \ref{fig:linearity}. The measurements were performed at a PMT gain of $4.85\cdot10^6$. \begin{figure}[tbp] \begin{center} \includegraphics[width=9cm]{figures/linearity.eps} \caption{Preliminary results of the PMT linearity measurements. The linearity of the PMT response to 1 $\mu$s wide light pulses as a function of the number of photoelectrons per pulse measured at various pulse repetition rates. The PMT gain is $4.85\cdot10^6$.} \label{fig:linearity} \end{center} \end{figure} As seen in \ref{fig:linearity}, the PMT with the voltage divider shows a linear response within a deviation of 5\% up to $5\cdot10^4$, $2.5\cdot10^4$, $1.5\cdot10^4$ and $7.5\cdot10^3$ photoelectrons in the light pulse at a rate of 200 Hz, 500 Hz, 1000 Hz and 2000 Hz, respectively. A detailed description of the main design concepts in the divider development for R11410 PMTs, as well as a comprehensive survey of the linearity measurements, can be found elsewhere \cite{behrens:13}. \section{Radioactive screening} We employed several methods to assess the radioactive contamination of the PMT materials, including High Purity Germanium (HPGe) underground detectors and Glow Discharge Mass Spectrometry (GDMS) \cite{aprile:11:1}. Each of the materials used in the PMT was studied using the above mentioned techniques. The measured contaminations by various radioactive isotopes present in the PMT materials fulfill the requirements for the XENON1T detector, with a total radioactive background budget of 0.5 evts/year/1T.
A comprehensive radioactive screening report for the PMTs and the PMT materials will soon be published by the XENON collaboration. \section{Mass production tests} So far we have received 220 PMTs from the manufacturer. Most of the PMTs showed rather high QE, exceeding 30\%. For the mass production tests at room temperature we adopted the experimental PMT test facility at the Max-Planck-Institut f\"{u}r Kernphysik (MPIK) in Heidelberg. It allows for simultaneous calibration of 12 PMTs by measuring the single photoelectron response, transit time spread, afterpulse rate and spectrum, and high voltage calibration. An example of a single photoelectron spectrum recorded using this setup is shown in \ref{fig:spe}. The PMT was biased at -1400V, corresponding to a gain of about $2\cdot10^6$. \begin{figure}[tbp] \begin{center} \includegraphics[width=7cm]{figures/spe_ka0017.eps} \caption{An example of the single photoelectron spectrum recorded at a bias voltage of -1400V corresponding to a gain of about $2\cdot10^6$} \label{fig:spe} \end{center} \end{figure} In a real experiment one should operate PMTs at the lowest possible bias voltage to avoid possible sparking issues. At the same time, this bias voltage has to provide a high enough gain to ensure a clear single photoelectron (SPE) peak. An SPE spectrum was recorded for all PMTs. These spectra were then analysed to extract the peak-to-valley (P/V) ratios, the PMT amplification factors (gains) and the SPE resolution. We learned from these measurements that the optimal PMT gain to provide a low enough PMT operation voltage, a satisfactory peak-to-valley ratio of about 3.5 and an SPE resolution of about 30\% is around $2\cdot10^6$. A dedicated setup was built at the Max-Planck-Institut f\"{u}r Kernphysik (MPIK) in Heidelberg for the mass production tests in the cold. In this setup, 12 PMTs can be cooled down simultaneously with cold nitrogen gas in order to study the PMT characteristics in the cold and to subject the PMTs to a thermal stress test.
The gas is cooled by convection through a constant flow of liquid nitrogen in a copper coil. The measurements of the dark count rate (noise PMT signals induced by a number of sources, including field emission, leakage current and thermionic emission) at -100 $^0$C were performed for all PMTs. A vast majority of the PMTs demonstrated stable low temperature operation. According to the manufacturer's specifications, the rate of dark counts for this type of PMT should not exceed 200 Hz. It has to be mentioned that each of the tested PMTs was cooled down at least 3 times. \section{Measurements in liquid xenon} A dedicated experimental setup for performing the PMT tests in liquid xenon was built at the University of Zurich. The setup comprises a cryostat that accommodates 5 PMTs to be submerged in liquid xenon. The PMT biasing and signal readout are realized using voltage dividers, cables and cable feedthroughs that are identical to those that will be used in the XENON1T detector. In \ref{fig:gain_vs_time} we present the gain stability measurements in liquid xenon. The gain of a R11410-21 PMT immersed in LXe was monitored for a period of 5 months. \begin{figure}[tbp] \begin{center} \includegraphics[width=9cm]{figures/uzh_results.eps} \caption{Long-term stability of the gain of a R11410 immersed in LXe, as measured over a period of about 5 months \protect\cite{baudis:13}. The gain is stable within 2\%, as indicated by the dashed lines. Periods in which the gain shows larger variations can be correlated to changing experimental conditions. At the end of the run, the gray points show the measurements while the corresponding black points are corrected and take into account a sudden change in the LXe pressure and temperature.
A similar correction cannot be done reliably for the short spikes in the gain measurements, hence they are presented as measured.} \label{fig:gain_vs_time} \end{center} \end{figure} As we learned from \ref{fig:gain_vs_time}, the gain is stable within 2\%, as indicated by the dashed lines. Periods in which the gain shows larger variations can be correlated to changing experimental conditions. The gray points on the top right of \ref{fig:gain_vs_time} show the measurements, while the corresponding black points are corrected and take into account a sudden change in the LXe pressure and temperature. A similar correction cannot be done reliably for the short spikes in the gain measurements, hence they are presented as measured. \section{Conclusions} XENON1T will incorporate 248 Hamamatsu model R11410-21 Photomultiplier Tubes (PMTs) shared between the top and bottom PMT arrays. A comprehensive campaign on the characterization of these PMTs and on the study of some important PMT properties is in full swing. The absolute quantum efficiency (QE) of the PMT was measured at low temperatures down to -110 $^0$C (a typical PMT operation temperature in liquid xenon detectors) in a spectral range from 154.5 nm to 400 nm. At -110 $^0$C the absolute QE increased by 10-15\% at 175 nm compared to that measured at room temperature. We developed a new low power consumption, low radioactivity voltage divider for the PMTs. It is characterised by a very low power consumption of 0.024 W per PMT at -HV = 1500 V (a nominal PMT operation voltage) to avoid the formation of bubbles in the liquid xenon. The preliminary results of the linearity measurement for the current version of the divider showed that a PMT with this voltage divider has a linear response up to $5\cdot10^4$, $2.5\cdot10^4$, $1.5\cdot10^4$ and $7.5\cdot10^3$ photoelectrons in the light pulse at a rate of 200 Hz, 500 Hz, 1000 Hz and 2000 Hz, respectively.
The measured radioactive contamination induced by the PMT and voltage-divider materials fulfills the requirement of the XENON1T detector that the total background not exceed 0.5 events/year/tonne. The 220 PMTs received from Hamamatsu Photonics K.K. showed an average QE exceeding 30\% at 175 nm at room temperature. At room temperature, we measured on a dedicated setup the single-photoelectron peaks for all delivered PMTs, showing that the optimal PMT gain, providing the lowest operation voltage while ensuring a satisfactory peak-to-valley ratio and SPE resolution, is around $2\cdot10^6$. Stable operation of most of the PMTs was also demonstrated at a temperature of $-100\,^{\circ}$C. A dedicated setup was built for testing the PMTs in liquid xenon. The PMTs tested in liquid xenon demonstrated stable operation over a 5-month period. \section*{Acknowledgment} We gratefully acknowledge support from NSF, DOE, SNF, Volkswagen Foundation, FCT, Region des Pays de la Loire, STCSM, NSFC, DFG, MPG, Stichting voor Fundamenteel Onderzoek der Materie (FOM), the Weizmann Institute of Science, the EMG research center and INFN. We are grateful to LNGS for hosting and supporting XENON1T.
Q: Partial Fraction: Already irreducible? I have this partial fraction: $$\frac{3x+7}{(x-4)^2+25}$$ As far as I can tell, I do not think this can be decomposed. Is that a correct assumption? Sorry for the very short question, there isn't much work I could show, I think. Thank you! A: In case the purpose is to find an antiderivative $$\frac{3x+7}{x^2-8x+41} = \frac{3x-12}{x^2-8x+41} + \frac{19}{x^2-8x+41} = \left( \frac{3}{2} \right) \left( \frac{2x-8}{x^2-8x+41} \right) + \left( \frac{19}{(x-4)^2+25} \right) $$ For the 19 part, we would expect to use a substitution $$ x-4 = 5 u $$
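The assumption is correct: $(x-4)^2+25$ has no real roots, so the fraction is already irreducible over the reals. For completeness, carrying the suggested substitution through (a routine check, using $\int\frac{\mathrm{d}u}{u^2+1}=\arctan u + C$) gives:

```latex
\begin{align*}
\int \frac{3x+7}{(x-4)^2+25}\,\mathrm{d}x
  &= \frac{3}{2}\int \frac{2x-8}{x^2-8x+41}\,\mathrm{d}x
   + 19\int \frac{\mathrm{d}x}{(x-4)^2+25} \\
  &= \frac{3}{2}\ln\bigl(x^2-8x+41\bigr)
   + \frac{19}{5}\arctan\!\left(\frac{x-4}{5}\right) + C .
\end{align*}
```

Here the second integral follows from $x-4=5u$, $\mathrm{d}x=5\,\mathrm{d}u$: $\int \frac{19\cdot 5\,\mathrm{d}u}{25(u^2+1)}=\frac{19}{5}\arctan u$.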
\section{\label{sec:Intro}Introduction} Disorder is ubiquitous in nature. The issue of universal properties of disordered systems has sparked much interest, both experimental and theoretical, beginning with Anderson localization in electronic systems, and by now including phenomena observable in a wide range of physical systems. Transport in one-dimensional (1D) disordered systems, being the simplest problem, is naturally the first to be understood. Studies on electronic conductivity by modeling a 1D wire as an array of potentials with random shapes and/or random spacings abound. For an excellent review of the state of affairs for 1D systems in 1982, we refer the reader to the article by Erd{\"o}s and Herndon.\cite{Erdos82} Overviews of subsequent developments can be found in Refs.~\onlinecite{Kramer93}, \onlinecite{Pendry94}, and \onlinecite{Beenakker}. Despite the comprehensive understanding of the 1D problem, owing to the difficulty of the subject, exact analytical results are few and far between. Perturbative treatments, typically in the limit of weak scattering, are the main tools for analytical studies: the scaling theory,\cite{Anderson80} the Born approximation for scattering,\cite{Abrikosov81} the DMPK equation,\cite{DMPK} and many more. Exact results emerge mostly from numerical studies, or from the group-theoretic approaches of Refs.~\onlinecite{Erdos82}, \onlinecite{Kirkman84}, and \onlinecite{Pendry94}, which apply in general situations, but are nevertheless somewhat complicated because of the use of direct products of transfer matrices. In this article, we revisit the simplest problem of a 1D single-scattering-channel system where the disorder in question is one of uniformly distributed phase disorder. A physical realization of a system manifesting such a disorder comprises a stack of identical semi-transparent glass plates, with random spacings between adjacent plates. The task is to explore the transmission of a laser beam through the random stack. 
Because the typical separation between plates is much larger than the wavelength of the light, a uniform distribution of separation will describe the phase disorder well. Such a system was already considered by Stokes \cite{Stokes} in 1862, who gave a ray-optical treatment. Wave-optical investigations came much later; see Ref.~\onlinecite{Buchwald89} for the history of the subject. In fact, the 1984 work by Perel' and Polyakov \cite{PP84} deals with a more general 1D transport problem that is treated with pertinent approximations; the situation considered here is contained as a special case, for which these approximations are not needed. This particular case was also investigated by Berry and Klein \cite{Berry97} in 1997 (which work inspired our current efforts). We employ a different strategy that exploits fully the recurrence relation established in Ref.~\onlinecite{Lu10}, which facilitates an exact yet simple analytical treatment. Disorder average of all moments of the conductance and resistance (that is, the transmission probability and its reciprocal) can be derived easily within a single framework. The same framework further permits direct derivation of the exact probability distributions for the conductance and resistance, which allows for computation of all statistics of the disordered system. This goes beyond past work that reconstructed the distribution for the conductance from its moments,\cite{Pendry94} or derivations that relied on perturbative approaches.\cite{Gert59,Abrikosov81} The simplicity in our solution lies in a relation with Legendre functions, whose properties are well studied and are easily amenable to analytical manipulations. Our exact solution for this simple model can conceivably be used as the starting point for perturbation towards other more realistic systems (see, for instance, the recent experimental proposal of Ref.~\onlinecite{Gavish05}). 
The article will proceed as follows: After setting up the problem in Section \ref{sec:Setup}, we describe (Section \ref{sec:Recur}) how to average over disorder using a recurrence relation, previously derived in Ref.~\onlinecite{Lu10}. This recurrence relation is applicable even for general disorder. Specializing to uniform phase disorder, in Sections \ref{sec:Resistance} and \ref{sec:Conductance}, we make use of a close link to Legendre functions to write down closed-form expressions for the expected values of moments of conductance and resistance. Comparisons with existing results in the weak-scattering, long-chain limit are presented in Section \ref{sec:WeakScat}. The recurrence relation also gives the exact probability distributions for the conductance and the resistance, and these probability distributions are studied in detail in Section \ref{sec:Prob}. We close with a summary in Section \ref{sec:Conc}. \section{\label{sec:Setup}Problem setup: Transfer-matrix description of the scattering process} Consider a 1D chain of scatterers, such as a stack of semi-transparent glass plates, or a string of impurities. Both the scatterer and the scattered particle are assumed to be scalar particles, or at least only scalar degrees of freedom participate in the scattering process. The scatterers are identical, but sit at locations with varying distances between adjacent scatterers. The distances between adjacent scatterers are the random variables in our system---the disorder. The scattering of the particle off the $n$th scatterer can be summarized by a transfer matrix $T_n$ which propagates the wave function of the scattered particle past the $n$th scatterer (see Figure \ref{fig:FigTransfer}). $T_n$ is obtained by solving the scattering problem for the $n$th scatterer, and accounts for multiple scattering off the same scatterer due to reflected waves from adjacent scatterers. 
For our problem, $T_n$ can be decomposed into two parts, \begin{equation} T_n=D(kL_n)\,T\quad\textrm{with }D(\varphi)= {\left(\begin{array}{cc}\mathrm{e}^{\mathrm{i}\varphi}&0\\0&\mathrm{e}^{-\mathrm{i}\varphi}\end{array}\right)}, \end{equation} where $T$ describes the scattering due to the $n$th scatterer itself, and $D(kL_n)$ accounts for the phase acquired by the scattered particle in traveling distance $L_n$ between the $n$th and ($n$+1)th scatterers. $T$ takes the general form \renewcommand{\arraystretch}{1.15} \begin{align} T&=D(\beta)\,\mathcal{T}(\vartheta)\,D(\alpha)\nonumber\\ \textrm{with}\quad\mathcal{T}(\vartheta)&={\left(\begin{array}{cc}\cosh\bigl(\frac{1}{2}\vartheta\bigr)&\sinh\bigl(\frac{1}{2}\vartheta\bigr)\\\sinh\bigl(\frac{1}{2}\vartheta\bigr)&\cosh\bigl(\frac{1}{2}\vartheta\bigr)\end{array}\right)}.\label{eq:T1} \end{align} Here, $\alpha$ and $\beta$ denote overall phases, and $\mathcal{T}(\vartheta)$ contains the transmission amplitude $t= \bigl[\cosh(\frac{1}{2}\vartheta)\bigr]^{-1}$ and the reflection amplitude $r=\sqrt{1-t^2}=\tanh\bigl(\frac{1}{2}\vartheta\bigr)$. Since the scatterers are identical, the same $T$ describes every scatterer. The disorder that stems from the variable separation between adjacent scatterers is characterized by $L_n$, for $n=1,\ldots, N$ (with $L_N= 0$). \renewcommand{\arraystretch}{1.0} The total transfer matrix, for a fixed configuration of $N$ scatterers, is \begin{eqnarray} && T^{(N)}= T_NT_{N-1}\ldots T_1\\ &=&D(\beta)\,\mathcal{T}(\vartheta)\,D{\Bigl(\frac{\varphi_{N-1}}{2}\Bigr)}\,\mathcal{T}(\vartheta)\ldots D{\Bigl(\frac{\varphi_1}{2}\Bigr)}\,\mathcal{T}(\vartheta)\,D(\alpha),\nonumber \end{eqnarray} where $\varphi_n= 2(\alpha+\beta+ k L_n)$. 
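As a numerical aside (a sketch of ours, not part of the original derivation; the parameter values are arbitrary), one can check two structural properties of these transfer matrices: flux conservation, $T^{\dagger}\sigma_z T=\sigma_z$ with $\sigma_z=\mathrm{diag}(1,-1)$, which holds for every factor and hence for the product, and that the transmission probability read off from the total transfer matrix, $\tau_N=1/\lvert T^{(N)}_{11}\rvert^2$, lies between 0 and 1 for any configuration of spacings:

```python
import numpy as np

rng = np.random.default_rng(1)

def D(phi):
    """Free-propagation phase matrix D(phi)."""
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

def Tscat(theta):
    """Scattering matrix T(theta) with t = 1/cosh(theta/2), r = tanh(theta/2)."""
    h = theta / 2.0
    return np.array([[np.cosh(h), np.sinh(h)],
                     [np.sinh(h), np.cosh(h)]], dtype=complex)

theta, N = 0.9, 10               # arbitrary scatterer strength and chain length
sz = np.diag([1.0, -1.0])

# Total transfer matrix for one random configuration of phases.
T_total = np.eye(2, dtype=complex)
for _ in range(N):
    T_total = Tscat(theta) @ D(rng.uniform(0.0, 2.0 * np.pi)) @ T_total

# Flux conservation: T^dagger sigma_z T = sigma_z.
flux_err = np.abs(T_total.conj().T @ sz @ T_total - sz).max()

# Transmission probability from the (1,1) element of the total matrix.
tau_N = 1.0 / abs(T_total[0, 0])**2
print(flux_err, tau_N)
```

The overall phases $\alpha$, $\beta$ are dropped here; they are diagonal unitaries and do not affect $\lvert T^{(N)}_{11}\rvert$.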
$T^{(N)}$ can also be written in the form of Eq.~\eqref{eq:T1}, with an effective $N$-scatterer transmission amplitude $t^{(N)}= \bigl[\cosh\bigl(\frac{1}{2}\vartheta^{(N)}\bigr)\bigr]^{-1}$, so that \begin{equation} T^{(N)}=D\bigl(\beta^{(N)}\bigr)\,\mathcal{T}\bigl(\vartheta^{(N)}\bigr)\,D\bigl(\alpha^{(N)}\bigr) \end{equation} with overall phases $\alpha^{(N)}$ and $\beta^{(N)}$. These phases and $\vartheta^{(N)}$ can be recursively expressed in terms of $\vartheta$ and the individual spacings $L_n$. This can be understood by adding one more scatterer to the end of the chain of $N$ scatterers. The total transfer matrix is now $T^{(N+1)}=T_{N+1}T^{(N)} = D\bigl(\beta^{(N+1)}\bigr)\,\mathcal{T}\bigl(\vartheta^{(N+1)}\bigr)\,D\bigl(\alpha^{(N+1)}\bigr)$, with $\vartheta^{(N+1)}$ obeying the composition law \begin{align} \cosh\vartheta^{(N+1)}&=\cosh\vartheta\,\cosh\vartheta^{(N)}\nonumber\\ &\quad+\sinh\vartheta\,\sinh\vartheta^{(N)}\cos(\phi_N). \end{align} Here, $\phi_N$ is twice the phase sandwiched between the two matrices $\mathcal{T}(\vartheta)$ and $\mathcal{T}(\vartheta^{(N+1)})$, and is given in this case by $\phi_N=2(kL_N+\alpha+\beta^{(N)})$. We can regard $\phi_n$, in place of $L_n$, as the parameter that represents the disorder in our system. Composition laws can also be written down for $\alpha^{(N+1)}$ and $\beta^{(N+1)}$, but they will not enter our analysis. As a convenient shorthand, we will use the notation $C= \cosh\vartheta$, and $S=\sinh\vartheta=\sqrt{C^2-1}$. Similarly, one can define $C^{(N)}= \cosh\vartheta^{(N)}$ and $S^{(N)}= \sinh\vartheta^{(N)}$ so that the composition law appears succinctly as \begin{equation}\label{eq:comp} C^{(N+1)}=CC^{(N)}+SS^{(N)}\cos(\phi_N). \end{equation} The total transmission probability (or dimensionless conductance, in the language of transport on 1D wires) after $N$ scatterers is \begin{equation} \tau_N= \bigl(t^{(N)}\bigr)^2=\frac{2}{C^{(N)}+1}\,. 
\end{equation} The value of $\tau_N$ depends sensitively on the configuration of the scatterers, that is, on the values of the phases $\phi_n$. \begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth]{Fig1.pdf} \caption{\label{fig:FigTransfer}The transfer matrix description of the scattering process. $T$ is the transfer matrix for scattering due to a scatterer by itself (for example, a glass plate); $T_n$ includes the effects of the spacing $L_n$ between the $n$th and ($n$+1)th scatterers. $\psi_n(x)$ is the wave function of the scattered particle just before it hits the $n$th scatterer, and can be written in terms of left- and right-moving waves as $\psi_n(x)=u_ne^{\mathrm{i}kx}+v_ne^{-\mathrm{i}kx}$, where $k$ is the momentum of the scattered particle.} \end{center} \end{figure} \section{\label{sec:Recur}Recurrence relation: Averaging over disorder} Rather than focusing on a particular configuration of the $N$ scatterers, one is usually more interested in statistics of the entire ensemble of configurations. This requires a statement about the nature of the disorder, namely, the probability distribution $\mathrm{d}\mu(\phi_n)$ for $\phi_n$. More generally, one can have correlated disorder, where the $\phi_n$ values are not statistically independent, but this is beyond the scope of our current discussion. The transmission of $N$ scatterers, averaged over the disorder, is then \begin{equation} \langle\tau_N\rangle =\int\mathrm{d}\mu(\phi_{N-1})\ldots\int\mathrm{d}\mu(\phi_2)\int\mathrm{d}\mu(\phi_1)~\frac{2}{C^{(N)}+1}. \end{equation} The dependences on $\phi_1,\phi_2,\ldots,\phi_{N-1}$ are implicit in $C^{(N)}$. Writing out these dependences in full can be very complicated without being enlightening. To simplify the problem, we assume that the disorder is uniformly distributed, which permits the replacement \begin{equation} \int \mathrm{d}\mu(\phi_n)\longrightarrow\int_{(2\pi)}\frac{\mathrm{d}\phi_n}{2\pi} \end{equation} in the disorder average. 
Here, the subscript $(2\pi)$ denotes integration over any $2\pi$-interval of our choice. A uniform distribution is a good description whenever the separation between adjacent scatterers is itself uniformly distributed. It also describes the physical situation whenever $kL\gg 1$ for some typical separation $L$ between scatterers. This often applies for monochromatic light scattered by a stack of semi-transparent glass plates with layers of air between them. In the case of a chain of ions trapped in a lattice, thermal motion of the ions within the trapping potential leads to random separation between adjacent ions. The separation is usually concentrated around some central value $L$, but for an electron incident with large momentum $k$, the phase $kL_n$ will explore many cycles of $2\pi$ over even small deviations of $L_n$ from $L$. A uniform distribution for $\phi_n\sim kL_n$ is then a fitting description. \subsection{A recurrence relation} To proceed with our analysis, we recall that the composition law \eqref{eq:comp} allows us to compute $\langle\tau_N\rangle$ recursively with the aid of the recurrence relation derived in Ref.~\onlinecite{Lu10}. We define a map $\mathcal{M}_C$ that transforms any function $f(C')$ in accordance with \begin{equation} (\mathcal{M}_C f)(C')= \int_{(2\pi)}\frac{\mathrm{d}\phi}{2\pi}~f(CC'+SS'\cos\phi). \end{equation} Here, as before, $C= \cosh\vartheta$ is such that $t=\bigl[\cosh\bigl(\frac{1}{2}\vartheta\bigr)\bigr]^{-1}$ is the transmission amplitude for a single scatterer. $C'$, the variable here, and $S'= \sqrt{C'^2-1}$ are to be viewed as the hyperbolic cosine and sine of some $\vartheta'$. Note that $(\mathcal{M}_C f)(C'=1)=f(C'=C)$. The map $\mathcal{M}_C$ possesses a symmetry that will prove useful later. 
For any two functions $f(C')$ and $g(C')$, \begin{equation}\label{eq:sym} \int_1^\infty\mathrm{d} C'(\mathcal{M}_Cf)(C')\,g(C')=\int_1^\infty\mathrm{d} C' f(C')\,(\mathcal{M}_Cg)(C'), \end{equation} that is, we can consider $\mathcal{M}_C$ as acting on either $f$ or $g$. To see this, observe that \begin{align}\label{eq:sym1} &\quad~\int_1^\infty \mathrm{d} C' (\mathcal{M}_Cf)(C')g(C')\nonumber\\ &=\int_1^\infty \mathrm{d} C'\int_1^\infty\mathrm{d} C'' f(C')\,\mathcal{K}_C(C',C'')\,g(C'') \end{align} with the kernel \begin{align} \mathcal{K}_C(C',C'')&=\int_{(2\pi)}\frac{\mathrm{d}\phi}{2\pi}\,\delta{\bigl(CC'+SS'\cos\phi-C''\bigr)}\nonumber\\ &=\frac{1}{\pi}\bigl[1-C^2-C'^2-C''^2+2C C' C''\bigr]^{-\frac{1}{2}}_+.\label{eq:K} \end{align} Here, $\delta(~)$ is the delta function, and the subscript $+$ is such that $[x]^{-1/2}_+=1/\sqrt{x}$ when $x\geq 0$, and is 0 otherwise. Since $\mathcal{K}_C(C',C'')$ is invariant under permutations of $C,C'$ and $C''$, it follows that the roles of $f$ and $g$ in Eq.~\eqref{eq:sym1} can be interchanged, as expressed by the symmetry rule \eqref{eq:sym}. From Ref.~\onlinecite{Lu10}, the average transmission probability is \begin{equation} \langle\tau_N\rangle=(\mathcal{M}_C^{N-1}f)(C')\Big\vert_{C'=C}~~\textrm{for }f(C')= \frac{2}{C'+1}, \end{equation} a fact that can be verified by writing out $\langle \tau_N\rangle$ in full for $N=1,2,3,\ldots$. More generally, the disorder average of any function of $C^{(N)}$ is given by \begin{equation}\label{eq:avgf} {\left\langle f{\left(C^{(N)}\right)}\right\rangle}=(\mathcal{M}_C^{N-1}f)(C')\Big\vert_{C'=C}. \end{equation} Another way of deriving Eq.~\eqref{eq:avgf} is to consider the probability density $W_N(C')$ that $C^{(N)}$ takes the value $C'$, so that \begin{equation}\label{eq:avgfWN} {\left\langle f{\left(C^{(N)}\right)}\right\rangle}=\int_1^\infty\mathrm{d} C'\, f(C')\,W_N(C'). 
\end{equation} In view of the composition law \eqref{eq:comp}, we have, for $N=m+n$, with $m,n$ positive integers, \begin{widetext} \begin{eqnarray} W_N(C')&=&\int_1^\infty \mathrm{d} C_1\, W_m(C_1)\int_1^\infty \mathrm{d} C_2\, W_n(C_2)\int_{(2\pi)}\frac{\mathrm{d}\phi}{2\pi}\,\delta{\bigl(C_1C_2+S_1S_2\cos\phi-C'\bigr)}\nonumber\\ &=&\int_1^\infty \mathrm{d} C_1 \int_1^\infty \mathrm{d} C_2 \,W_m(C_1) \,\mathcal{K}_{C'}(C_1,C_2)\,W_n(C_2).\label{eq:WNnm}\label{eq:WNnm2} \end{eqnarray} \end{widetext} The first line of Eq.~\eqref{eq:WNnm2} can be understood by splitting the $N$ scatterers into two segments (see Figure \ref{fig:Wmn}), one comprising the left $m$ scatterers, with an overall $C^{(m)}$ value equal to $C_1$, the other comprising the remaining $n$ scatterers, with an overall $C^{(n)}$ value equal to $C_2$. The two segments are separated by a random phase $\phi$, and the composition law gives the overall $C^{(N)}$ value for the $N$ scatterers. For $W_1(C')=\delta(C'-C)$, Eq.~\eqref{eq:WNnm} gives iteratively $W_2(C'),W_3(C'), \ldots$, summarized as \begin{equation}\label{eq:WNrecur} W_N(C')=(\mathcal{M}_C W_{N-1})(C')=(\mathcal{M}_C^{N-1}W_1)(C'). \end{equation} Upon inserting this formula into Eq.~\eqref{eq:avgfWN} and recalling the symmetry Eq.~\eqref{eq:sym} of $\mathcal{M}_C$, we get Eq.~\eqref{eq:avgf} for the disorder average of $f$. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{Fig2.pdf} \caption{\label{fig:Wmn} Illustration of concatenating two sub-chains of length $m$ and $n$ together to form a single chain of $N=m+n$ scatterers.} \end{figure} As an example of the usefulness of Eq.~\eqref{eq:avgf}, let us compute the average of $\log\tau_N$. We set $f(C')=\log{\bigl(\frac{2}{C'+1}\bigr)}$. Performing the integration over $\phi$ in $(\mathcal{M}_Cf)(C')$, we observe that $(\mathcal{M}_Cf)(C')=f(C)+f(C')$. 
Repeated applications of $\mathcal{M}_C$ thus yields \begin{equation}\label{eq:logtau} \langle\log\tau_N\rangle=Nf(C)=N\log\tau_1, \end{equation} where $\tau_1=t^2=\frac{2}{C+1}$ is the transmission probability for a single scatterer. This expression comes as no surprise as $\log\tau_N$ is well known to be additive under disorder averaging, and in the current context, the result \eqref{eq:logtau} is the main observation in the paper by Berry and Klein;\cite{Berry97} Eq.~(\ref{eq:logtau}) can also be found in the paper by Perel' and Polyakov \cite{PP84} as an unnumbered equation in \S4. Note that formula \eqref{eq:avgf} applies even for disorder that is not uniformly distributed, as long as we replace $\int_{(2\pi)} \frac{\mathrm{d}\phi}{2\pi}$ in the definition of $\mathcal{M}_C$ by the more general $\int \mathrm{d}\mu(\phi)$. Equation \eqref{eq:WNnm} also holds for a general $W_1(C')$ not necessarily equal to a delta function. \subsection{Eigenfunctions of the recurrence map} Computing the average of $\log\tau_N$ is easy because $\mathcal{M}_C$ acts in a simple way on the relevant $f(C')$. For more general functions, $\mathcal{M}_C$ can act in a complicated manner, and it becomes difficult to solve the recurrence relation directly to obtain a closed-form expression for the averaged quantity. The properties of the map $\mathcal{M}_C$ thus require thorough understanding before we can proceed further. Since we are to apply $\mathcal{M}_C$ repeatedly, eigenfunctions of $\mathcal{M}_C$---functions that remain invariant (apart from an overall factor) under the action of $\mathcal{M}_C$---will be particularly useful. Consider the Legendre functions, $\Lambda_\nu(C')$, which are functions that satisfy the Legendre differential equation (see, for example, Ref.~\onlinecite{AS}), \begin{equation} \frac{\mathrm{d}}{\mathrm{d} C'}(1-C'^2)\frac{\mathrm{d}}{\mathrm{d} C'}\Lambda_\nu(C')+\nu(\nu+1)\,\Lambda_\nu(C')=0. 
\end{equation} For our current purposes, as the notation already suggests, $C'$ is to be viewed as the hyperbolic cosine of some parameter, and we are interested in $C'$ between 1 and $\infty$. For a given $\nu$, there are two independent solutions to the Legendre equation, $P_\nu(C')$ (Legendre functions of the first kind) and $Q_\nu(C')$ (Legendre functions of the second kind). $Q_\nu(C')$ blows up at $C'=1$ while $P_\nu(C'=1)=1$ for all $\nu$. Suppose we begin with $P_\nu(C')$ and apply the recurrence map $\mathcal{M}_C$. We recall the addition theorem for $P_\nu(C')$ (see, for example, Ref.~\onlinecite{GR}), \begin{align} &\quad~ P_\nu(CC'+SS'\cos\phi)\\ &=P_\nu(C)P_\nu(C')+2\sum_{m=1}^\infty P_\nu^{-m}(C_<)P_\nu^m(C_>)\cos(m\phi)\nonumber \end{align} where $P_\nu^m$ are the associated Legendre functions, and $C_{<(>)}= \min(\max)\{C,C'\}$. Employing this formula in the integrand of $(\mathcal{M}_CP_\nu)(C')$ yields the eigenvalue equation \begin{equation}\label{eq:evalEq} (\mathcal{M}_CP_\nu)(C')=P_\nu(C)P_\nu(C'), \end{equation} and the Legendre functions $P_\nu(C')$ are eigenfunctions of $\mathcal{M}_C$. One can directly verify that $(\mathcal{M}_CP_\nu)(C')$ satisfies the Legendre equation with the same value of $\nu$, which permits writing $(\mathcal{M}_CP_\nu)(C')$ as a linear combination of $P_\nu(C')$ and $Q_\nu(C')$. Considering the value of $(\mathcal{M}_CP_\nu)(C')$ at $C'=1$ leads to the conclusion \eqref{eq:evalEq}. This eigenfunction property results in simple behavior of $P_\nu(C')$ under repeated applications of $\mathcal{M}_C$, \begin{equation}\label{eq:eigen} (\mathcal{M}_C^{n}P_\nu)(C')=P_\nu(C)^nP_\nu(C'). \end{equation} The degree $\nu$ can be any complex number. Of particular importance to us are the cases when $\nu$ is a nonnegative integer, and when $\nu$ takes the form $\nu=-\frac{1}{2}+\mathrm{i}x$. 
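The eigenvalue relation \eqref{eq:evalEq} is easy to check numerically. The sketch below (our own illustration; the values of $C$ and $C'$ are arbitrary choices) averages $P_2(CC'+SS'\cos\phi)$ over a uniform phase grid, which is exact here because the integrand is a trigonometric polynomial, and compares the result with $P_2(C)\,P_2(C')$:

```python
import numpy as np

def P2(x):
    """Legendre polynomial of degree 2, P_2(x) = (3x^2 - 1)/2."""
    return 0.5 * (3.0 * x**2 - 1.0)

theta, theta_p = 0.7, 1.1                          # arbitrary test values
C, Cp = np.cosh(theta), np.cosh(theta_p)
S, Sp = np.sinh(theta), np.sinh(theta_p)

# Uniform-grid average over phi in [0, 2*pi); exact for trigonometric
# polynomials of low degree.
phi = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
lhs = P2(C * Cp + S * Sp * np.cos(phi)).mean()     # (M_C P_2)(C')
rhs = P2(C) * P2(Cp)                               # P_2(C) P_2(C')
print(lhs, rhs)
```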
When $\nu=0,1,2,\ldots$, we have the Legendre polynomials familiar from many areas of physics; the Legendre functions $P_{-\frac{1}{2}+\mathrm{i}x}(C')$ are known as the Mehler (or conical) functions, and have appeared in other physical problems, for example, the solution of Laplace's equation in toroidal coordinates. More relevant to our current subject, the Mehler functions were used in the exact solution of the DMPK equation (see Ref.~\onlinecite{Beenakker} and references therein), as well as in a different eigenvalue problem for the Anderson model.\cite{Abrikosov78,Kirkman84} \section{\label{sec:Resistance}Moments of the resistance} In the language of transport in electronic systems, the analogous quantity for transmission probability is the (dimensionless) conductance. The reciprocal of the conductance is the resistance, $\rho_N=1/\tau_N=\frac{1}{2}(C^{(N)}+1)$ for $N$ scatterers. Our present concern is to compute the disorder-averaged resistance. We begin with $f(C')=\frac{1}{2}(C'+1)=\frac{1}{2}[P_1(C')+P_0(C')]$, which gives \begin{equation}\label{eq:24} \langle\rho_N\rangle=\frac{1}{2}(C^N+1), \end{equation} upon applying Eq.~\eqref{eq:eigen}.\cite{Note:PP84-1} For a long chain of scatterers, $N\gg 1$, we have $\log{\left\langle \rho_N\right\rangle}\simeq N\log C$, consistent with the usual expectation that the resistance grows exponentially as $N$ increases. For averages of moments of the resistance, $\langle \rho_N^m\rangle$, for positive integer $m$, we start with \mbox{$f(C')=\big[\frac{1}{2}(C'+1)\big]^m$}. Noting that $f(C')$ involves only positive integer powers of $C'$, and every positive integer power of $C'$ can be expanded in terms of the Legendre polynomials $P_\ell(C')$ (see, for example, Ref.~\onlinecite{AS} or \onlinecite{GR}), we can similarly obtain $\langle \rho_N^m\rangle$ without great effort by applying Eq.~\eqref{eq:eigen}. To illustrate, let us consider $\langle \rho_N^3\rangle$. 
Here, $f(C')=\frac{1}{8}(C'+1)^3=\frac{1}{20}{\left[P_3(C')+5P_2(C')+9P_1(C')+5P_0(C')\right]}$. From Eq.~\eqref{eq:eigen}, we then get \begin{equation} \langle\rho_N^3\rangle =\frac{1}{20}P_3(C)^N+\frac{1}{4}P_2(C)^N+\frac{9}{20}P_1(C)^N+\frac{1}{4}, \end{equation} an expression that would have been difficult to obtain had we tried to solve the recurrence relation directly. Note that the averages $\langle \rho_N^m\rangle$ can be computed iteratively starting with $m=1$ by employing recurrence formulas for Legendre polynomials that relate $C'P_\ell(C')$ to $P_{\ell\pm 1}(C')$. For large $N$, $\langle\rho_N^m\rangle$ is dominated by the Legendre polynomial with the largest index $\ell = m$, and hence, \begin{equation} \langle\rho_N^m\rangle\simeq \frac{(m!)^2}{(2m)!}P_m(C)^N\quad\textrm{for }N\gg 1. \end{equation} Thus, $\log\langle \rho_N^m\rangle\propto N$ for $N\gg 1$ and any fixed $m$. Also, $\rho_N^m$ has spread $\sqrt{\langle \rho_N^{2m}\rangle-\langle\rho_N^m\rangle^2}$, which, for large $N$, is dominated by $\sqrt{\langle \rho_N^{2m}\rangle}$. \section{\label{sec:Conductance}Moments of the conductance} To compute the average transmission probability (or conductance), we note that the above method of expanding powers of $C'$ in terms of Legendre polynomials no longer works, since the $C'$ dependence in $\tau_N$ occurs in the denominator. Whereas the Legendre polynomials are complete for $-1\leq C'\leq 1$, this completeness is of no help for $C'>1$, which is the range of interest in the current context. Nevertheless, the idea of expanding in terms of Legendre functions still works. For studying the conductance and its moments, we expand, not in terms of Legendre polynomials, but in terms of Legendre functions of the Mehler type $P_{-\frac{1}{2}+\mathrm{i}x}(C')$, through the Mehler-Fock transformation. 
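Before developing the Mehler-Fock machinery, we note that the resistance moments above lend themselves to a brute-force numerical cross-check (a sketch of ours; the scatterer strength is an arbitrary choice). For $N=3$, the disorder average is a double integral over the two phases, which a uniform grid evaluates essentially exactly because the phase averages reduce to trigonometric polynomials:

```python
import numpy as np

theta = 0.8                                   # arbitrary scatterer strength
C, S = np.cosh(theta), np.sinh(theta)

# Legendre polynomials P_1, P_2, P_3 evaluated at C.
P1, P2, P3 = C, 0.5*(3*C**2 - 1), 0.5*(5*C**3 - 3*C)

# Brute-force disorder average for N = 3 over phases phi_1, phi_2.
phi = np.linspace(0.0, 2.0*np.pi, 256, endpoint=False)
p1, p2 = np.meshgrid(phi, phi, indexing="ij")

C2 = C*C + S*S*np.cos(p1)                     # composition law, first step
# Guard against tiny negative round-off where C2 touches 1 (phi_1 = pi).
S2 = np.sqrt(np.maximum(C2**2 - 1.0, 0.0))
C3 = C*C2 + S*S2*np.cos(p2)                   # composition law, second step
rho = 0.5*(C3 + 1.0)                          # resistance rho_3

exact_mean = 0.5*(C**3 + 1.0)                        # <rho_3>   = (C^3+1)/2
exact_cube = P3**3/20 + P2**3/4 + 9*P1**3/20 + 0.25  # <rho_3^3> from above
print(rho.mean() - exact_mean, (rho**3).mean() - exact_cube)
```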
\subsection{The Mehler-Fock transformation} The Mehler-Fock transformation \cite{MehlerFock} is an index transformation that uses the Mehler functions $P_{-\frac{1}{2}+\mathrm{i}x}(C')$ as the basis for expansion. Formally, one defines the Mehler-Fock transform $\hat f(x)$ of $f(C')$ as \begin{equation} \hat f(x)=x\tanh(\pi x)\int_1^\infty \mathrm{d} C'\, P_{-\frac{1}{2}+\mathrm{i}x}(C')\,f(C'). \end{equation} This integral exists whenever $f(C')$ is (weighted) square-integrable \cite{Yaku97} such that \begin{equation}\label{eq:sqInt} \int_1^\infty \mathrm{d} C' \,\sqrt{C'-1}\,\big |f(C')\big|^2<\infty. \end{equation} The inverse transformation expresses the original function in terms of its transform, \begin{equation}\label{eq:MFinv} f(C')=\int_0^\infty \mathrm{d} x \,P_{-\frac{1}{2}+\mathrm{i}x}(C')\,\hat f(x). \end{equation} The Mehler-Fock transform allows us to easily compute averages of physical quantities $f(C^{(N)})$, whenever $f$ is square integrable in the sense of \eqref{eq:sqInt}. We write, \begin{align} \langle f(C^{(N)})\rangle&=(\mathcal{M}_C^{N-1}f)(C')\Big\vert_{C'=C}\nonumber\\ &={\left.\int_0^\infty \mathrm{d} x\,(\mathcal{M}_C^{N-1}P_{-\frac{1}{2}+\mathrm{i}x})(C')\,\hat f(x)\right\vert}_{C'=C}\nonumber\\ &=\int_0^\infty \mathrm{d} x\,P_{-\frac{1}{2}+\mathrm{i}x}(C)^N\hat f(x),\label{eq:MFAvg} \end{align} where, exploiting the linearity of the map $\mathcal{M}_C$, we applied $\mathcal{M}_C$ to the Mehler functions using \eqref{eq:eigen}. This gives a closed-form expression for $\langle f(C^{(N)})\rangle$, with two remaining items to evaluate---the Mehler-Fock transform of $f$, and the final $x$ integral---but otherwise takes care of the complicated recursive composition law for concatenating $N$ scatterers. \subsection{Moments of the conductance} For moments of the transmission probability, or equivalently, moments of the conductance, the Mehler-Fock transform $\hat f(x)$ can be worked out explicitly. 
For $\langle\tau_N^m\rangle$, we set \begin{equation}\label{eq:fMoments} f(C')={\left(\frac{2}{C'+1}\right)}^m\qquad\textrm{for } m> \frac{1}{2}. \end{equation} To compute its Mehler-Fock transform, we note that $P_{-\frac{1}{2}+\mathrm{i}x}(C')$ can be written in terms of the hypergeome\-tric function,\cite{AS} $P_\nu(C')=\bigl(\frac{2}{C'+1}\bigr)^{-\nu}F\bigl(-\nu,-\nu;1;\frac{C'-1}{C'+1}\bigr)$ for $\nu=-\frac{1}{2}+\mathrm{i}x$. Inserting this into the Mehler-Fock transform integral for $f$ gives \begin{align} \hat f(x)&=2x\tanh(\pi x)\\ &\quad\times \int_0^1\mathrm{d} y\,{\left(1-y\right)}^{m-\frac{3}{2}-\mathrm{i}x}F{\left(\frac{1}{2}-\mathrm{i}x,\frac{1}{2}-\mathrm{i}x;1;y\right)},\nonumber \end{align} after making the substitution $y= \frac{C'-1}{C'+1}$. This integral is a standard one,\cite{GR} \begin{align} \hat f(x)&=2x\tanh(\pi x)\,G_m(x)\\ \textrm{with }~G_m(x)&= {\left\vert\frac{\Gamma{\left(m-\frac{1}{2}+\mathrm{i}x\right)}}{\Gamma(m)}\right\vert}^2\nonumber\\ &=\frac{{\left(m-\frac{3}{2}\right)}^2+x^2}{(m-1)^2}\,G_{m-1}(x),\nonumber \end{align} where $\Gamma(z)$ is the familiar Gamma function. For $m=1$, $\hat f(x)=2\pi x\tanh(\pi x)\,\textrm{sech}(\pi x)$, an expression that can be verified against known integrals involving Mehler functions (see, for example, Ref.~\onlinecite{Erdelyi}). With this, we arrive at a compact expression for $\langle\tau_N^m\rangle$, \begin{equation}\label{eq:taum} \langle\tau_N^m\rangle=2\int_0^\infty \mathrm{d} x\,x\tanh(\pi x)\,G_m(x)\,P_{-\frac{1}{2}+\mathrm{i}x}(C)^N, \end{equation} which is valid for $m>1/2$, including noninteger $m$ values.\cite{Note:PP84-2} The remaining integral over $x$ can be estimated in limiting cases (for example, the weak-scattering limit discussed below), or evaluated numerically. 
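As a concrete instance (a numerical sketch of ours, using SciPy; the value of $\vartheta$ is arbitrary), one can evaluate Eq.~\eqref{eq:taum} for $m=1$, $N=2$, computing the Mehler function from the integral representation $P_{-\frac{1}{2}+\mathrm{i}x}(\cosh\vartheta)=\frac{\sqrt2}{\pi}\int_0^\vartheta\mathrm{d} u\,\cos(xu)/\sqrt{\cosh\vartheta-\cosh u}$, and compare with the direct two-scatterer phase average $\langle\tau_2\rangle=\int_{(2\pi)}\frac{\mathrm{d}\phi}{2\pi}\,\frac{2}{C^2+S^2\cos\phi+1}$, which works out in closed form to $1/C$:

```python
import numpy as np
from scipy.integrate import quad

theta = 0.9                          # arbitrary single-scatterer strength
C = np.cosh(theta)

def mehler(x):
    """P_{-1/2 + i x}(cosh(theta)) from its integral representation.

    The substitution u = theta - v**2 removes the inverse-square-root
    endpoint singularity, leaving a smooth integrand."""
    def f(v):
        u = theta - v * v
        return np.cos(x * u) * 2.0 * v / np.sqrt(max(C - np.cosh(u), 1e-300))
    val, _ = quad(f, 0.0, np.sqrt(theta))
    return np.sqrt(2.0) / np.pi * val

def integrand(x):
    # 2 x tanh(pi x) G_1(x) P(x)^N  with  G_1(x) = pi / cosh(pi x),  N = 2
    return (2.0 * x * np.tanh(np.pi * x)
            * (np.pi / np.cosh(np.pi * x)) * mehler(x)**2)

# sech(pi x) cuts the integrand off exponentially, so x up to 10 suffices.
tau2_formula, _ = quad(integrand, 0.0, 10.0, limit=200)

tau2_direct = 1.0 / C                # closed form of the two-scatterer average
print(tau2_formula, tau2_direct)
```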
We note that the Mehler functions are standard special functions for which efficient methods of numerical evaluation are known (for instance, via integral representations) or even built into mathematical software. \section{\label{sec:WeakScat}A long chain of weak scatterers} To connect with previous work on this subject, let us examine the limit of weak scattering, where we take $C$ close to 1. At the same time, we assume a long chain of scatterers, so that $N\gg 1$. We consider $\langle \tau_N^m\rangle$, given by Eq.~\eqref{eq:taum}. For fixed $m$ and $C$, the integrand in Eq.~\eqref{eq:taum} is significant only for small $x$ values. This can be understood from the following integral representation for the Mehler function (see, for example, Ref.~\onlinecite{GR}), \begin{equation}\label{eq:PIntRep} P_{-\frac{1}{2}+\mathrm{i}x}(C)=\frac{\sqrt 2}{\pi}\int_0^{\vartheta} \mathrm{d} u\,\frac{\cos(xu)}{\sqrt{C-\cosh u}}, \end{equation} where $C=\cosh\vartheta$ as usual. Since $x$ enters only in the cosine in Eq.~\eqref{eq:PIntRep}, for fixed $C$, $P_{-\frac{1}{2}+\mathrm{i}x}(C)$ stays bounded as a function of $x$. Furthermore, one can check that the factor $x\tanh(\pi x)\,G_m(x)$ in the integrand of Eq.~\eqref{eq:taum} vanishes for large $x$ due to the suppression from the factorials in $G_m(x)$ as $x$ grows. 
These observations justify the approximation that, for $C$ close enough to 1, or equivalently, for $\vartheta$ close enough to zero, one can expand the cosine in Eq.~\eqref{eq:PIntRep} about $xu=0$, and keep only the low-order terms, \begin{align} P_{-\frac{1}{2}+\mathrm{i}x}(C)&\simeq P_{-\frac{1}{2}}(C)+\frac{x^2}{2}\frac{\partial^2}{\partial x^2}P_{-\frac{1}{2}+\mathrm{i}x}(C)\Big\vert_{x=0}\nonumber\\ &\simeq P_{-\frac{1}{2}}(C)\mathrm{e}^{-bx^2}.\label{eq:Gauss} \end{align} In the second line, we have defined $b>0$ such that \begin{equation} b= -\frac{1}{2}[P_{-\frac{1}{2}}(C)]^{-1}\frac{\partial^2}{\partial x^2}P_{-\frac{1}{2}+\mathrm{i}x}(C)\Big\vert_{x=0}, \end{equation} and approximated $1-bx^2\simeq \mathrm{e}^{-bx^2}$. With this, $\langle\tau_N^m\rangle$ becomes much simpler: \begin{align} \langle\tau_N^m\rangle&=2[P_{-\frac{1}{2}}(C)]^N\mathcal{I}_m,\nonumber\\ \textrm{with}\quad \mathcal{I}_m&=\int_0^\infty \mathrm{d} x \,x\tanh(\pi x)\,G_m(x)\,\mathrm{e}^{-Nbx^2}. \end{align} For $Nb\gg1$, the Gaussian suppression in $\mathcal{I}_m$ allows one to approximate the integral by Taylor-expanding the integrand about $x=0$. This gives \begin{equation} \mathcal{I}_m\simeq \frac{1}{4}{\left(\frac{\pi}{Nb}\right)}^{3/2}{\Big[\frac{\Gamma{\left(m-\frac{1}{2}\right)}}{\Gamma(m)}\Big]}^2, \end{equation} with equality attained when $Nb\rightarrow \infty$. Putting the pieces together, we have \begin{eqnarray} \langle\tau_N^m\rangle&\simeq&{\left[\frac{\Gamma{\left(m-\frac{1}{2}\right)}}{\Gamma(m)}\right]}^2\frac{1}{2}{\left(\frac{\pi}{Nb}\right)}^{3/2}[P_{-\frac{1}{2}}(C)]^N,~\label{eq:taumLimit} \end{eqnarray} valid in the limit of a long chain of weak scatterers.
This expression validates the conjecture of Ref.~\onlinecite{Lu10}, namely, \begin{equation} {\left(\frac{\langle\tau_N\rangle}{\langle\tau_2\rangle}\right)}^{1/(N-2)}\longrightarrow P_{-\frac{1}{2}}(C) \qquad\textrm{as }N\rightarrow\infty, \end{equation} since the quantity \mbox{$\Upsilon(\tau_1)=\int_{(2\pi)}\frac{\mathrm{d}\varphi}{2\pi}(C+S\cos\varphi)^{-1/2}$} in Ref.~\onlinecite{Lu10} is an integral representation of $P_{-\frac{1}{2}}(C)$.\cite{GR} We can further approximate $b$ and $P_{-\frac{1}{2}}(C)$ for $\vartheta\ll1$, \begin{align} P_{-\frac{1}{2}}(C)&=1-\frac{1}{16}\vartheta^2+O(\vartheta^4)\simeq \mathrm{e}^{-\vartheta^2/16}\nonumber\\ \quad\textrm{and}\quad b&=\frac{1}{4}\vartheta^2+O(\vartheta^4). \end{align} From these, $P_{-\frac{1}{2}}(C)\simeq \mathrm{e}^{-b/4}$, which upon inserting into Eq.~\eqref{eq:taumLimit}, gives \begin{equation}\label{eq:taumLimit1} \langle\tau_N^m\rangle\simeq{\left[\frac{\Gamma{\left(m-\frac{1}{2}\right)}}{\Gamma(m)}\right]}^2\frac{1}{2}{\left(\frac{\pi}{Nb}\right)}^{3/2}\mathrm{e}^{-Nb/4}. \end{equation} This expression is identical to that found in Ref.~\onlinecite{Abrikosov81} for the conduction of electrons in a 1D wire, once we make the identification $L/l= Nb$, where $L$ is the length of the wire, and $l$ is the mean-free-path of the electrons. The factor of $1/4$ in the exponential in Eq.~\eqref{eq:taumLimit1} is the familiar ratio of $\log(\langle\tau_N\rangle)$ to $\langle\log\tau_N\rangle=N\log\tau_1$ (see, for example, Ref.~\onlinecite{Kramer93}). From the expressions for $P_{-\frac{1}{2}}(C)$ and $b$ in the weak-scattering limit, one can compute corrections to the standard answer of $1/4$ as a function of $\vartheta$. Observe from Eq.~\eqref{eq:taumLimit} that, under the approximations above, $\langle\tau_N^m\rangle\propto\langle\tau_N\rangle$, and that the proportionality factor depends only on $m$, but not on the scattering strength. 
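Both small-$\vartheta$ expansions are easy to confirm numerically, with $b$ obtained from a central finite difference in $x$ (a sketch; the hypergeometric representation of the Mehler function is used for the evaluation):

```python
import mpmath as mp

mp.mp.dps = 30

def P(x, theta):
    # P_{-1/2+ix}(cosh theta) = 2F1(1/2-ix, 1/2+ix; 1; (1-cosh theta)/2)
    z = mp.cosh(theta)
    return mp.hyp2f1(mp.mpf('0.5') - 1j*x, mp.mpf('0.5') + 1j*x, 1, (1 - z)/2).real

theta = mp.mpf('0.1')
P0 = P(0, theta)

# b = -(1/2) P''(x=0)/P(0); P is even in x, so P''(0) ~ 2 (P(h) - P(0))/h^2
h = mp.mpf('1e-4')
b = -(P(h, theta) - P0) / (h**2 * P0)
```

Both quantities reproduce the leading terms to the expected $O(\vartheta^4)$ accuracy.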
We can derive this statement more directly, using the following identity,\cite{AS} \begin{align} G_m(x)&=\frac{1}{\pi}{\left[\frac{\Gamma{\left(m-\frac{1}{2}\right)}}{\Gamma(m)}\right]}^2\alpha_m(x)\,G_1(x)\nonumber\\ \textrm{with}\quad\alpha_m(x)&=\prod_{k=1}^{m-1}{\left(1+\frac{(2x)^2}{(2k-1)^2}\right)}, \end{align} so that we can write (without any approximation), \begin{align} &\langle\tau_N^m\rangle=\frac{1}{\pi}{\left[\frac{\Gamma{\left(m-\frac{1}{2}\right)}}{\Gamma(m)}\right]}^2 \\ &\qquad\times{\Bigl\{2\int_0^\infty \mathrm{d} x\,x\tanh(\pi x)\alpha_m(x)G_1(x)P_{-\frac{1}{2}+\mathrm{i}x}(C)^N\Bigr\}}.\nonumber \end{align} The expression within the curly braces is exactly $\langle\tau_N\rangle$, if we can replace $\alpha_m(x)$ by~1. Now, $\alpha_m(x)\simeq 1$ whenever $x\ll 1$, and this is a good approximation whenever the integral in the curly braces gets its dominant contribution from small $x$ values. From our analysis above for the limit of a long chain of weak scatterers, the Gaussian suppression from the $\mathrm{e}^{-Nbx^2}$ factor guarantees that, indeed, the integrand is important only for $x\ll 1$. More generally, $P_{-\frac{1}{2}+\mathrm{i}x}(C)^N$ is significant for \mbox{$x\ll1$} only whenever \mbox{$(\sinh\vartheta/\vartheta)^{\frac{N}{2}}\gg1$} (see Appendix \ref{app:Mehler}). This condition reduces to $Nb\gg 1$ in the limit of weak scattering. We thus conclude that \begin{equation} \langle\tau_N^m\rangle\simeq\frac{1}{\pi}{\left[\frac{\Gamma{\left(m-\frac{1}{2}\right)}}{\Gamma(m)}\right]}^2\langle\tau_N\rangle \end{equation} whenever $(\sinh\vartheta/\vartheta)^{\frac{N}{2}}\gg 1$, which holds when either $N$ or $\vartheta$ (or both) are large. This applies beyond the limit of a long chain of weak scatterers. 
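Numerically, the identity and the resulting proportionality can be probed together. The sketch below assumes the explicit form $G_m(x)=\vert\Gamma(m-\tfrac{1}{2}+\mathrm{i}x)/\Gamma(m)\vert^2$, equivalent to the identity above with $G_1(x)=\pi/\cosh(\pi x)$, and evaluates the ratio of the $m=2$ and $m=1$ integrals at $\vartheta=5$, $N=5$, where it should be close to $\tfrac{1}{\pi}[\Gamma(\tfrac{3}{2})/\Gamma(2)]^2=\tfrac{1}{4}$:

```python
import mpmath as mp

mp.mp.dps = 20

def alpha(m, x):
    # alpha_m(x) = prod_{k=1}^{m-1} (1 + (2x)^2/(2k-1)^2)
    return mp.fprod(1 + (2*x)**2/mp.mpf(2*k - 1)**2 for k in range(1, m))

def G(m, x):
    # assumed explicit form, G_m(x) = |Gamma(m-1/2+ix)/Gamma(m)|^2
    return abs(mp.gamma(m - mp.mpf('0.5') + 1j*x))**2 / mp.gamma(m)**2

def G_via_identity(m, x):
    # the quoted identity, with G_1(x) = |Gamma(1/2+ix)|^2 = pi/cosh(pi x)
    pref = (mp.gamma(m - mp.mpf('0.5'))/mp.gamma(m))**2 / mp.pi
    return pref * alpha(m, x) * mp.pi/mp.cosh(mp.pi*x)

theta, N = mp.mpf(5), 5
C = mp.cosh(theta)
suppression = (mp.sinh(theta)/theta)**(mp.mpf(N)/2)   # should be ~ 850

def P(x):
    # Mehler function via 2F1(1/2-ix, 1/2+ix; 1; (1-C)/2)
    return mp.hyp2f1(mp.mpf('0.5') - 1j*x, mp.mpf('0.5') + 1j*x, 1, (1 - C)/2).real

def moment_integral(m):
    # the x-integral entering <tau_N^m>, up to an m-independent prefactor
    f = lambda x: x * mp.tanh(mp.pi*x) * G(m, x) * P(x)**N
    return mp.quad(f, [0, 2, 4, 8])   # integrand exponentially small beyond x ~ 8

# with alpha_2 -> 1, this ratio becomes (1/pi)[Gamma(3/2)/Gamma(2)]^2 = 1/4
ratio = moment_integral(2)/moment_integral(1)
```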
For example, one can numerically verify the excellent accuracy of the above approximation when $\vartheta=5$ and $N=5$, for which $(\sinh\vartheta/\vartheta)^{N/2}\simeq 850$, even though the scatterers are not weak and the chain is not long. This proportionality between moments of the conductance is a specific example of a similar statement applicable to more general types of disorder (but valid only when $N\rightarrow \infty$) previously pointed out in Ref.~\onlinecite{Pendry94}. \section{\label{sec:Prob}The probability distributions for conductance and resistance} Armed with all the positive integer moments $\langle\tau_N^m\rangle$, one expects to be able to reconstruct the probability distribution for the conductance for given $N$ and $C$. In Ref.~\onlinecite{Pendry94}, the probability distribution for the conductance was obtained by writing down an expansion of the Fourier transform of the distribution in terms of $\langle\tau_N^m\rangle$. Approaching the probability distribution from a different angle, one can make use of our recurrence relation to propagate the initial single-scatterer distribution to the distribution for $N$ scatterers. Our approach uncovers features absent from a reconstruction via moments. We already have an expression (Eq.~\eqref{eq:WNrecur}) for $W_N(C')$, which we repeat here, \begin{align} W_N(C')&=(\mathcal{M}_C^{N-1}W_1)(C')\nonumber\\ &=(\mathcal{M}_C^{N-1}\widetilde W_{1,C'})(C'')\Big\vert_{C''=C},\label{eq:WN2} \end{align} where, in the second equality, $\widetilde W_{1,C'}(C'')=\delta(C''-C')$, and we have made use of Eq.~\eqref{eq:sym}. One can verify that $W_N(C')\geq 0$ for $1\leq C'<\infty$ [in fact, \mbox{$W_N(C')=0$} for $C'>\cosh(N\vartheta)$], and that $\int_1^\infty \mathrm{d} C'\,W_N(C')=1$. 
From Eq.~\eqref{eq:WN2}, we immediately have the probability distributions for the conductance $\tau'= \frac{2}{C'+1}$ and the resistance $\rho'=1/\tau'$, related by Jacobian transformations, \begin{align} &\quad~\mathrm{d} C'\,W_N(C')\\ &=\mathrm{d}\tau'\, \frac{2}{\tau'^2}\,(\mathcal{M}_C^{N-1} W_1)(C')\Big\vert_{C'=\frac{2}{\tau'}-1},&0\leq\tau'\leq 1,\nonumber\\ &=\mathrm{d}\rho'\, 2\,(\mathcal{M}_C^{N-1}W_1)(C')\Big\vert_{C'=2\rho'-1},&1\leq \rho'<\infty.\nonumber \end{align} Observe that Eq.~\eqref{eq:WN2} bears resemblance to the expression for $\langle f(C^{(N)})\rangle$ in Eq.~\eqref{eq:MFAvg}, if we set $f(C'')=\widetilde W_{1,C'}(C'')$. It thus seems plausible that our previous method of the Mehler-Fock transform might aid us in simplifying the recurrence formula for $W_N(C')$. The Mehler-Fock transform integral (over $C''$, with $C'$ held constant) gives $\hat f(x)=x\tanh(\pi x)P_{-\frac{1}{2}+\mathrm{i}x}(C')$. Forgetting for the moment that $f(C'')$ is not square-integrable in the sense of \eqref{eq:sqInt}, we employ the inverse-transform formula to yield \begin{equation}\label{eq:WNsqInt} W_N(C')=\int_0^\infty \mathrm{d} x \,x \tanh(\pi x) \,P_{-\frac{1}{2}+\mathrm{i}x}(C)^N \,P_{-\frac{1}{2}+\mathrm{i}x}(C'), \end{equation} which is Eq.~(28) in Ref.~\onlinecite{PP84}. For computing statistics of square-integrable functions of $C'$, we can safely regard Eq.~\eqref{eq:WNsqInt} as a true identity for distributions, since we recover Eq.~\eqref{eq:MFAvg} when we replace $W_N(C')$ in $\int_1^\infty\mathrm{d} C'\, f(C')W_N(C')$ with the right-hand side of Eq.~\eqref{eq:WNsqInt}. For computing the average of non-square-integrable functions of $C'$, like the moments of the resistance, the validity of Eq.~\eqref{eq:WNsqInt} is less clear. 
If we take the long-chain ($s=Nb\rightarrow\infty$), weak-scattering limit ($\vartheta\ll 1$) of the right-hand side of Eq.~\eqref{eq:WNsqInt}, we obtain \begin{align} &\quad~\int_0^\infty \mathrm{d} x\,x\tanh(\pi x)P_{-\frac{1}{2}+\mathrm{i}x}(C')P_{-\frac{1}{2}+\mathrm{i}x}(C)^N\nonumber\\ &\simeq\frac{s^{-3/2}\mathrm{e}^{-s/4}}{2\sqrt{2\pi}}\int_{\cosh^{-1}(C')}^\infty \mathrm{d} u\frac{ue^{-u^2/(4s)}}{\sqrt{\cosh u-C'}}.\label{eq:WrongW} \end{align} Here, we have approximated $P_{-\frac{1}{2}+\mathrm{i}x}(C)^N\simeq e^{-s/4}e^{-sx^2}$ as before, and also made use of an integral representation for $P_{-\frac{1}{2}+\mathrm{i}x}(C')$ of the form \cite{Hobson} \begin{equation} P_{-\frac{1}{2}+\mathrm{i}x}(C')=\frac{\sqrt 2}{\pi}\coth(\pi x)\int_{\cosh^{-1}(C')}^\infty \mathrm{d} u\,\frac{\sin(xu)}{\sqrt{\cosh u-C'}}. \end{equation} The limiting expression in Eq.~\eqref{eq:WrongW} is identical to the distribution for $C^{(N)}$ found in Ref.~\onlinecite{Gert59} by solving the DMPK equation, and also in Ref.~\onlinecite{Abrikosov81} for the resistance in the Anderson model. This lends credibility to Eq.~\eqref{eq:WNsqInt}, even for moments of the resistance. Furthermore, the fact that $f(C')=[(C'+1)/2]^m$ is not square-integrable is perhaps not troublesome because $W_N(C')=0$ for $C'>\cosh(N\vartheta)$ (although not apparent from Eq.~\eqref{eq:WNsqInt}), so that the $C'$ integral in $\langle f(C^{(N)})\rangle$ cuts off at $C'=\cosh(N\vartheta)$ rather than extending to infinity. A different source of concern lies with how the singularity of the delta function in $W_1(C')$ propagates as $N$ grows. For $N=1$, the singularity occurs at $C'=C$; for $N=2$, it occurs at $C'=1$, since $W_2(C') = \mathcal{K}_C(C,C')$; for $N=3$, the singularity is at $C'=C$ again (see Appendix \ref{app:WN}); for $N=4$, it goes back to $C'=1$. 
In fact, for any even $N$, we expect $W_N(C'=1)$ to be particularly large (or even infinite), because, as explained in detail in Appendix \ref{app:WN}, there exists an infinite family of phase configurations that attain $C'=1$ whenever $N$ is even. Since $W_N(C'=1)=W_{N-1}(C'=C)$, this large value of $W_N(C'=1)$ for $N$ even is inherited by $W_N(C'=C)$ for $N$ odd. This again suggests that the large value of $W_N(C')$ occurs at different values of $C'$ for $N$ even and $N$ odd, calling into question the existence of an asymptotic probability distribution for large $N$. Fortunately, using our exact expressions for $W_N(C')$, one can show that beyond $N=4$, $W_N(C')$ becomes finite everywhere, and the singularity present at small $N$ goes away; see Appendix \ref{app:WN} for a detailed analysis. This justifies the validity of the reconstruction of the probability distribution of the conductance via its moments in Ref.~\onlinecite{Pendry94}, which relies on a Fourier transform that comes with an ``equal almost everywhere'' condition that automatically gets rid of singularities with zero measure. \section{\label{sec:Conc}Summary} We have shown how one can analytically derive the statistics of a 1D chain of identical scatterers of any length, with uniform phase disorder. Making use of the fact that Legendre functions are eigenfunctions of the recurrence relation governing the chain statistics, we obtained disorder-averaged moments of the resistance by expanding in terms of Legendre polynomials; disorder-averaged moments of the conductance came from expanding in terms of the Mehler functions via the Mehler-Fock transformation. The probability distributions of the conductance and the resistance can also be written in terms of the recurrence relation, and we pointed out singularities absent in existing derivations. 
These extra features, even though they play little role in the physics in the end, remind us of the importance of exact and analytical solutions, which may be the only way of identifying subtle properties and verifying the validity of common assumptions. \begin{acknowledgments} We thank Dmitry Polyakov for bringing Ref.~\onlinecite{PP84} to our attention. This work is supported by the National Research Foundation and the Ministry of Education, Singapore. We would like to thank Christian Miniatura, Beno{\^i}t Gr{\'e}maud, Cord M{\"u}ller, and Lee Kean Loon for insightful comments. \end{acknowledgments}
\section{Introduction} Ethnic classification from facial images has been studied for the past two decades with the purpose of understanding how humans perceive and determine an ethnic group from a given image. The motivation stems, for example, from the fact that (gender and) ethnicity play an important role in face-related applications, such as advertising, socially sensitive systems, etc. Furthermore, while facial features are subject to change (due to aging, for example), ethnicity is of interest due to its invariance over time. Recent works on demographic classification are divided conceptually into appearance-based methods (using, e.g., eigenface methods, fisherface methods, etc.) and geometry-based methods (relying, e.g., on geometric parameters, such as the distance between the eyes, face width and length, nose thickness, etc.). One of the main challenges of automatic demographic classification is to avoid any ``noise'', such as illumination, background distortion, and a subject's pose. In this paper, we introduce a deep learning-based method that achieves state-of-the-art results for facial image representations and classification for the four ethnic groups: African, Asian, Caucasian, and Indian. \section{Related Work} \subsection{Traditional ML-Based Techniques} During the past two decades, there has been enormous progress on the topic of ethnic group classification, using various classical Machine Learning methods. These approaches are based mainly on feature extraction and training classifiers; see Table \ref{traditionalMethods} below. Hosoi et al.~\cite{Hosoi} were among the first to achieve promising results. They employed Gabor wavelet transformations for extracting key facial features, and then applied SVM classification. They reported classification accuracies of 94.3\%, 96.3\%, and 93.1\%, respectively, for the three ethnic groups: African, Asian, and Caucasian. 
\vspace{-0.3cm} \begin{table}[] \centering \caption{Previous work on ethnic group classification using traditional Machine Learning methods} \resizebox{\textwidth}{!}{% \begin{tabular}{|c||c|c|c||c|} \hline \rowcolor[HTML]{EFEFEF} \textbf{\begin{tabular}[c]{@{}c@{}}Authors \end{tabular}} & \textbf{Approaches} & \textbf{Databases} & \textbf{Ethnic groups} & \textbf{Success rate} \\ \hline \hline \begin{tabular}[c]{@{}c@{}}Hosoi et al.\\ 2004~\cite{Hosoi}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Gabor Wavelet \\ and SVM\end{tabular} & 1,991 face photos & African, Asian, Caucasian & 94.3\%, 96.3\%, 93.1\% \\ \hline \begin{tabular}[c]{@{}c@{}}Lu et al.\\ 2004~\cite{XiaoguangLu}\end{tabular} & LDA & \begin{tabular}[c]{@{}c@{}}Union of DB\\ (2,630 photos of \\ 263 objects)\end{tabular} & Asian, non-Asian & 96.3\% (Avg) \\ \hline \begin{tabular}[c]{@{}c@{}}Yang et al. \\ 2010~\cite{Yang}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Real Adaboost\\ (Haar, LBPH)\end{tabular} & \begin{tabular}[c]{@{}c@{}}FERET and PIE\\ (11,680 Asian and \\ 1,016 non-Asian)\end{tabular} & Asian, non-Asian & 92.1\%, 93.2\% \\ \hline \begin{tabular}[c]{@{}c@{}}Lyle et al. 
\\ 2010~\cite{Lyle}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Periocular regions,\\ LBP, SVM\end{tabular} & \begin{tabular}[c]{@{}c@{}}FRGC \\ (4,232 faces, 404 objects)\end{tabular} & Asian, non-Asian & 92\% (Avg) \\ \hline \begin{tabular}[c]{@{}c@{}}Guo et al.\\ 2010~\cite{Guo}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Biologically inspired\\ features\end{tabular} & \begin{tabular}[c]{@{}c@{}}MORPH-II \\ (10,530 Africans, 10,530 Caucasians)\end{tabular} & African, Caucasian & 99.1\% (Avg) \\ \hline \begin{tabular}[c]{@{}c@{}}Xie et al.\\ 2012~\cite{Xie}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Kernel class-dependent \\ feature analysis\\ (KCFA)\end{tabular} & \begin{tabular}[c]{@{}c@{}}MBGC DB \\ (10,000 African,\\ 10,000 Asian, 20,000 Caucasian)\end{tabular} & African, Asian, Caucasian & 97\%, 95\%, 97\% \\ \hline \end{tabular} } \label{traditionalMethods} \end{table} \vspace{-0.3cm} Lu et al.~\cite{XiaoguangLu} constructed an \textit{ensemble framework}, which integrates LDA applied to the input face images at different scales. The combination strategy in the ensemble uses the product rule~\cite{Kittler} to combine the outputs of individual classifiers at these different scales. Their binary classifier of Asian and non-Asian classes obtained a success rate of 96.3\%, on average. Yang et al.~\cite{Yang} used LBPH\footnote{LBPH is a combination of \emph{local binary pattern} (LBP) with the \emph{histogram of oriented gradients} (HOG) techniques.}~\cite{HOG} to extract texture-description features, in order to considerably enhance the human detection algorithm that was previously suggested by Xiaoyu et al.~\cite{Xiaoyu}. Real AdaBoost was then used iteratively to learn a sequence of best local features to create a strong classifier. Their binary classifier of Asian and non-Asian classes had success rates of 92.1\% and 93.2\%, respectively. 
Lyle et al.~\cite{Lyle} extracted ethnicity information from the \textit{periocular region images}\footnote{A periocular region includes the iris, eyes, eyelids, eye lashes, and part of the eyebrows.} using grayscale pixel intensities and periocular texture features computed by LBP. Their binary SVM classifier of Asian and non-Asian classes yields success rates of 93\% and 91\%, respectively. Guo et al.~\cite{Guo} proposed using \textit{biologically-inspired features} for ethnic classification (by applying a battery of linear filters to an image and using the filtered images as primary features~\cite{Jarrett}). Their binary classifier for Africans and Caucasians achieved a 99.1\% success rate, on average. However, integrating three more ethnic groups (Asian, Hispanic, and Indian) resulted in a sharp decrease in success rates. Specifically, the accuracies recorded were African: 98.3\%, Caucasian: 97.1\%, Hispanic: 59.5\%, Asian: 74.2\%, and Indian: 6.9\%. Xie et al.~\cite{Xie} used \textit{kernel class-dependent feature analysis} for generating nonlinear features (by mapping them onto a higher-dimensional feature space which allows higher order correlations~\cite{XieChunyan}) and facial color-based features to classify large-scale face databases. Their classifier achieved success rates of 97\%, 95\%, and 97\%, respectively, for the three ethnic groups: African, Asian, and Caucasian. To summarize, although some of the surveyed methods yield high classification results, it appears that they are limited to laboratory conditions, i.e., they may not perform as well on a diverse, large-scale database, consisting of face images of different gender, pose, age, illumination conditions, etc. In contrast, we create in this work a diverse face image database for training and testing. 
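Several of the methods in Table \ref{traditionalMethods} build on LBP texture features. For reference, a minimal NumPy sketch of the basic $3\times 3$ LBP code (our illustration, not any of the surveyed implementations; real LBPH pipelines additionally histogram these codes over image cells):

```python
import numpy as np

def lbp_codes(gray):
    # basic 8-neighbor local binary pattern for an H x W grayscale array;
    # each interior pixel gets an 8-bit code comparing its neighbors to itself
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

An LBPH feature vector is then the concatenation of per-cell histograms of these codes.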
\subsection{Recent Deep Learning Techniques} Ethnic group classification has improved significantly in recent years, due to the use of deep learning techniques (e.g., CNN architectures, enhanced feature extraction, etc.); see Table~\ref{previousDL} for an overview. \vspace{-0.3cm} \begin{table}[] \centering \caption{DL-based methods for ethnic group classification} \resizebox{\textwidth}{!}{% \begin{tabular}{|c||c|c|c||c|} \hline \rowcolor[HTML]{EFEFEF} \textbf{Authors} & \textbf{Approach} & \textbf{Databases} & \textbf{Ethnic groups} & \textbf{Success rate} \\ \hline \hline \begin{tabular}[c]{@{}c@{}}Ahmed et al.~\cite{Ahmed} \\ 2008 \end{tabular} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}c@{}}Transfer learning from\\pseudo tasks\\ (CNN + transfer learning)\end{tabular}} & \multicolumn{1}{l|}{FRGC 2.0, FERET} & \multicolumn{1}{l||}{Asian, Caucasian, Other} & \multicolumn{1}{c|}{95.4\% (avg.)} \\ \hline \begin{tabular}[c]{@{}c@{}}Inzamam et al.~\cite{Inzamam}\\ 2017 \end{tabular} & \begin{tabular}[c]{@{}c@{}}Feature extraction via ANN\\ and SVM classification\end{tabular} & 10 different DBs & African, Asian, Caucasian & 99.66\%, 98.28\%, 99.05\% \\ \hline \begin{tabular}[c]{@{}c@{}}Wang et al.~\cite{WeiWang}\\ 2017 \end{tabular} & CNN & Variety of DBs & \begin{tabular}[c]{@{}c@{}}African, Caucasian\\ Chinese, non-Chinese\\ Han, Uyghur, non-Chinese \end{tabular} & \begin{tabular}[c]{@{}c@{}} 99.4\%, 100\%\\ 99.62\%, 99.38\%\\ 99\% (avg.) \end{tabular} \\ \hline \end{tabular} } \label{previousDL} \end{table} \vspace{-0.3cm} Ahmed et al.~\cite{Ahmed} were the first to apply transfer learning for ethnic classification. Their classifier achieved a success rate of 95.4\%, on average, for the ethnic groups: Asian, Caucasian, and ``Other'', using the FRGC 2.0 and FERET databases for training data. 
Inzamam et al.~\cite{Inzamam} performed the classification by extracting features from a deep neural network followed by SVM classification on 10 datasets (13,394 images in total, including different variations of the FERET, CASPEAL, and Yale databases). Their classifier achieved success rates of 99.66\%, 98.28\%, and 99.05\%, respectively, for the ethnic groups: African, Asian, and Caucasian. Wang et al.~\cite{WeiWang} used deep CNNs to extract features and classify them simultaneously. Three different classifiers were created: (1) a binary classifier for African and Caucasian classes, achieving success rates of 99.4\% and 100\%, respectively; (2) a binary classifier for Chinese and non-Chinese classes, achieving success rates of 99.62\% and 99.38\%, respectively; and (3) a 3-way classifier for Han, Uyghur, and non-Chinese classes, achieving an average success rate of 99\%. \section{Proposed Method} \subsection{Data Source} As previously indicated, the purpose of this research is to distinguish between the four ethnic groups: African, Asian, Caucasian, and Indian. We created our dataset by combining 10 different databases, originally proposed for the problem of \emph{face recognition}, and then sorting them into the ethnic groups of interest. The databases included IMFDB~\cite{IMFDB}, CNBC~\cite{CNBC}, Labeled Faces in the Wild (LFW)~\cite{LFW}, the Essex face dataset~\cite{ESSEX}, Face Tracer~\cite{FaceTracer}, the Yale face database~\cite{YALE}, SCUT5000~\cite{SCUT5000}, and additional collected image datasets. We also used the well-known FERET database~\cite{FERET1,FERET2}, which contains facial images collected under the FERET program, sponsored at the time by the U.S. Department of Defense (DoD). Altogether, the collected dataset contains images of various sizes. \subsection{Facial Image Preprocessing} As part of preprocessing, the data should be normalized to be compatible with the network's architecture. 
Also, it is denoised to make it as clean as possible. Thus, we first convert every RGB image to a grayscale one to create a homogeneous dataset of grayscale images. Note that our collection also contains datasets (such as AT\&T and CASPEAL) of only grayscale face images. We then use the Face Cascade detector (part of the OpenCV library) to detect and crop the faces. The cropped images are then downscaled to $80 \times 80$ pixels and are denoised using a non-local means denoising algorithm (implemented by the OpenCV function, fastNlMeansDenoising). Finally, (grayscale) images are duplicated to create an image size of $80 \times 80 \times 3$. This is done to be compatible with the VGG-16 network, which receives three-channel images as input. After preprocessing, the face images were sorted manually into the four ethnic groups of interest (i.e., African, Asian, Caucasian, and Indian), creating a labeled face database for ethnic group classification. See Fig.~\ref{fig:EthnicGroups} for example face images from each ethnic group. \vspace{-0.3cm} \begin{figure}[] \centering \small \includegraphics[width=0.3\textwidth]{Figures/Data/EthnicGroups.png} \caption{\label{fig:EthnicGroups} Examples of preprocessed face images and their ethnic group labels (from left to right): African, Asian, Caucasian, and Indian.} \end{figure} Since the number of images acquired was rather imbalanced over the four ethnic groups, we perturbed each image in the smaller training samples with minor Gaussian noise, so as to augment these training samples with slightly different duplicates. \subsection{Transfer Learning}\label{TransLearning} Due to the challenging problems DL has to solve, it takes enormous resources (mostly training time, but also fast computers, training data storage, and human expertise) to train such models. 
Transfer learning is an ML technique that helps to overcome these issues by reusing a model that was trained on a specific task (without any changes to the weights) to solve other tasks. Yosinski et al.~\cite{Yosinski} showed that transfer learning can produce more efficient and accurate models for solving additional problems. It is important to note that transfer learning only works if the model features learned from the first task are sufficiently generic. A pre-trained model has been previously trained on a dataset and contains the weights and biases that represent the features of the data it has seen during training. The most commonly used pre-trained models are VGG16, VGG19~\cite{Simonyan}, and Inception V3~\cite{Szegedy}, due to their high success rates on the ImageNet classification problem. VGG16 is a classification model with 16 layers, which is based on the ImageNet dataset and can classify 1,000 different image types (including animals, buildings, and humans). The model's weight file size is 528 MB, and it can be easily accessed for free. \subsection{Network Architecture} \begin{figure}[] \centering \subfloat[]{\label{origVG16}\includegraphics[height= 3cm, width=0.8\textwidth]{Figures/Networks/VGG16Orig.png}} \\ \subfloat[]{\label{TransferVGG16}\includegraphics[height=3cm, width=0.8\textwidth]{Figures/Networks/VGG16TransferNew.png}} \caption{(a) Original version of VGG-16, and (b) our modified architecture for partial transfer learning from VGG-16.} \label{fig:TransferNetwork} \end{figure} Fig.~\ref{fig:TransferNetwork} shows the original VGG-16 architecture and our modified architecture, which inputs a preprocessed $80 \times 80 \times 3$ image and outputs its predicted ethnic group. The modified architecture contains the original, previously trained VGG-16 network, without the final three fully-connected layers and the original softmax layer. 
The five layers of the remaining network (i.e., max pooling layer, three convolution layers and activation layers, and the final max pooling layer) were selected after experimenting extensively with a large number of possibilities for retraining the entire network. Note that running on the original network ``as-is'' would have given very poor results. Instead, the idea is to capture universal features (like curves and edges), and further refine these features, via the modified layers above, by retraining the entire network in the context of our problem. Specifically, the classification softmax layer was replaced by a fully-connected layer of size $n = 500$ and a softmax layer (of size $p = 4$), which outputs a probability distribution for the ethnic group classification. We train the network with the purpose of minimizing the cross-entropy loss function. The network is trained using \textit{stochastic gradient descent} (SGD), as part of the backpropagation phase. \section{Experiments and Results} We present the datasets used in the experiments, and give detailed empirical results of the 10-fold cross validation for the four-class ethnic group classification. To increase the classification success rate, we first experimented with different network hyper-parameters, e.g., number of epochs, type of activation function, size of the fully-connected layer to add, type of loss function, etc. After running a grid search on the hyper-parameter options, we selected the following set of hyper-parameters, which provided the best performance: 50 epochs, a ReLU~\cite{Hinton2} activation function (an element-wise operation applied per pixel), an additional fully-connected layer of 500 neurons, and a categorical cross-entropy loss function. 
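For concreteness, the added classification head and its loss amount to the following computation (a NumPy sketch of the generic operations, not the authors' code; the feature size and random weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden, n_classes = 512, 500, 4   # illustrative sizes
W1 = rng.normal(0, 0.01, (n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.01, (n_hidden, n_classes))
b2 = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def head(features):
    # FC(500) + ReLU, then FC(4) + softmax, as in the modified top of the network
    h = np.maximum(features @ W1 + b1, 0.0)   # ReLU
    return softmax(h @ W2 + b2)

def cross_entropy(probs, label):
    # categorical cross-entropy for one example
    return -np.log(probs[label])
```

During backpropagation, the gradient of this loss with respect to the logits is simply the predicted probabilities minus the one-hot label, which is what SGD propagates backward.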
We trained and tested the model for each fold, by allocating each time 75\%, 10\%, and 15\% of the data, respectively, to training, validation, and testing. The training time using TensorFlow and Keras infrastructure on GeForce GTX 1070 was roughly 4.5 hours (compared to nearly 11.5 hours for training from scratch on the same architecture), and the real-time evaluation of an image is about 10 msec. We ran a 10-fold cross validation using the selected base model, and obtained classification accuracies of 99.02\%, 99.76\%, 99.18\%, and 96.72\%, respectively, for the categories African, Asian, Caucasian, and Indian. Bottom-line accuracies and loss for the ethnic classes are summarized in Table~\ref{transferLearning10FoldFinal}. \vspace{-0.5cm} \begin{table}[] \centering \caption{Summary of total success rate and loss over all experiments} \begin{tabular}{|c|c|c|c||c|l|} \hline \rowcolor[HTML]{EFEFEF} African & Asian & Caucasian & Indian & Total Success rate & Total Loss \\ \hline \hline 99.02\% & 99.76\% & 99.18\% & 96.72\% & 99.18\% & 0.03518 \\ \hline \end{tabular} \label{transferLearning10FoldFinal} \end{table} \section{Conclusions} In this paper we presented a novel approach to the ethnic group classification problem. By modifying a previously trained classification network (namely VGG-16) for transfer learning, we achieved state-of-the-art performance with respect to four ethnic classes: African, Asian, Caucasian, and Indian. Specifically, we obtained higher success rate levels for a larger number of classes, while working with a more diverse dataset than previously reported. Also, our derived scheme exhibits faster training time than training from scratch, with similar results. Our future work will focus on extending the number of classes, and improving the robustness of the proposed method to different image conditions, such as different head poses, illumination change, etc.
{"url":"https:\/\/drchristiansalas.com\/2014\/09\/05\/note-on-a-computer-search-for-primes-in-arithmetic-progression-by-weintraub-1977\/","text":"# Note on a Computer Search for Primes in Arithmetic Progression by Weintraub\u00a0(1977)\n\nA paper by Weintraub, S, 1977 (Primes in arithmetic progression, BIT Numerical Mathematics, Vol 17, Issue 2, p. 239-243) implements a computer search for primes in arithmetic progression (PAPs). It refers to a number $N$ which is set to what seems at first sight to be an arbitrary value of 16680. In this note I want to try to bring out some maths underlying this choice of $N$ in Weintraub\u2019s paper, and also record a brute force implementation I carried out in Microsoft Excel of an adjustment factor in an asymptotic formula by Grosswald (1982) which yields the number of PAPs less than or equal to some specified number.\n\nThe number of prime arithmetic sequences of a given length that one can hope to find is determined by the chosen values of $m$ and $N$ in Weintraub\u2019s paper.\n\nOn page 241 of his paper, Weintraub says: \u201c\u2026it is likely that with $m = 510510$ [and $N = 16680$] there exist between 20-30 prime sequences of 16 terms\u2026\u201d\n\nI was pleased to find that Weintraub\u2019s estimate of 20-30 agrees with an asymptotic formula obtained later by Grosswald (in 1982) building on a conjecture by Hardy and Littlewood. The number of $q$-tuples of primes $p_1$, . . . 
, $p_q$ in arithmetic progression, all of whose terms are less than or equal to some number $x$, was conjectured by Grosswald to be asymptotically equal to\n\n$\\frac{D_q x^2}{2(q-1)(\\log x)^q}$\n\nwhere the factor $D_q$ is\n\n$\\prod_{p > q} \\big[\\big( \\frac{p}{p-1}\\big)^{q-1} \\cdot \\frac{p - (q-1)}{p} \\big] \\times \\prod_{p \\leq q} \\frac{1}{p}\\big(\\frac{p}{p-1}\\big)^{q-1}$\n\nWhen $q = 16$ as in Weintraub\u2019s paper, we get $D_{16} = 55651.46255350$ (see below).\n\nUsing $m = 510510$ and $N = 16680$, Weintraub said on page 241 that one gets an upper prime limit of around $8 \\times 10^9$ (Weintraub actually said: \u201capproximately $7.7 \\times 10^9$\u201c).\n\nPlugging in the numbers $q = 16$, $D_{16} = 55651.46255350$ and $x = 8 \\times 10^9$ in the first formula above, we get the answer 22, i.e., there are 22 prime sequences of 16 terms, in line with Weintraub\u2019s estimate of 20-30 on page 241 of his paper.\n\nWeintraub was clearly aware that $N = 16680$ (in conjunction with $m = 510510$) would make approximately 20-30 prime sequences of 16 terms available to his search.\n\nThe adjustment factor of $D_{16} = 55651.46255350$ above can be obtained to a high degree of accuracy using a series approximation involving the zeta function (see, e.g., Caldwell, 2000, preprint, p. 11). However, I wanted to see how accurately one could calculate it by using the first one hundred thousand primes directly in Grosswald\u2019s formula for $D_q$ above. I did this in a Microsoft Excel sheet and got the answer $D_{16} = 55651.76179$. 
Directly using Grosswald\u2019s formula for $D_q$, it was not possible to get an estimate of $D_{16}$ accurate to the first decimal place even with one hundred thousand primes.","date":"2019-04-26 13:57:09","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 29, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8295247554779053, \"perplexity\": 340.6966028897037}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-18\/segments\/1555578806528.96\/warc\/CC-MAIN-20190426133444-20190426155444-00342.warc.gz\"}"}
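The arithmetic behind the post's figure of 22 is easy to reproduce. Below is a minimal sketch of Grosswald's estimate $D_q x^2 / (2(q-1)(\log x)^q)$ using the values quoted in the post ($q = 16$, $D_{16} \approx 55651.46255350$, $x \approx 8 \times 10^9$); the function name is ours, not from the post.

```python
# Grosswald's asymptotic count of q-term prime arithmetic progressions
# whose terms are all <= x:  D_q * x^2 / (2*(q-1)*(log x)^q)
import math

def grosswald_estimate(q, D_q, x):
    return D_q * x**2 / (2 * (q - 1) * math.log(x) ** q)

# Values quoted in the post: q = 16, D_16 ~ 55651.46255350, x ~ 8e9
est = grosswald_estimate(16, 55651.46255350, 8e9)
print(round(est))  # rounds to 22, inside Weintraub's estimated range of 20-30
```
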
While it still isn't the magical replicator envisioned in the sci-fi series Star Trek, Dutch startup byFlow has found an alternative use for 3D printers: food. Going beyond printing plastic figurines, a canal house, or concrete bridges, byFlow's printer can create ingredients for tasty dishes as well as complete meals, from chocolate to beef. The company from the Netherlands is part of High Tech Campus Eindhoven. byFlow started in 2009 as a regular 3D printing company and later specialized in 3D printers for food. What makes their product unique is that their printer is compact, portable, and easy to use and maintain – they refer to themselves as the Apple of the 3D printing world. Why 3D print your food? With a growing world population, there is an urgent need for sustainable, safe and healthy food. Moreover, consumers demand their food to be nutritious, personalized and traceable. byFlow's 3D Food Printer makes it possible to create premade food with recycled ingredients that are not good enough to be sold in supermarkets - and are normally thrown away - but are still usable as a food source, like damaged fruits and vegetables. Not many people realize that printing food can be an important part of the healthcare sector. Elderly people sometimes have problems swallowing their food. They are normally fed with nutrition shakes and drinks that contain the necessary nutritional ingredients in liquid form. However, people tend to forget that the act of eating (chewing, digestion, etc.) is an important process for human health. byFlow's 3D printers make it possible to create good-looking and tasty dishes containing fresh nutritional ingredients. 'The ingredients have to be a paste-like material so it's easy to swallow, but can be printed again in the shape of a tomato or a carrot. Even meat-printing is possible', says byFlow's Nina Hoff.
Another interesting – and fun – feature of 3D printing is that it is possible to create shapes that would normally be impossible to make by hand, with materials that are normally tough to use in a 3D printing environment - like chocolate. Through social media, customers share their dishes, making the designs available for others to download.
{ "redpajama_set_name": "RedPajamaC4" }
3,006
\section{Introduction} Let $n$ be a positive integer and $\GF n$ be the finite field of order $2^n$. The zeros of the polynomial \begin{equation} \label{eq:1} P_a(x) = x^{2^k+1}+x+a,\quad a\in\GF n^\star \end{equation} have been studied in \cite{BLUHER2004,HELLESETH2008,HELLESETH2010}. This polynomial has arisen in several different contexts, including the inverse Galois problem \cite{AbhyankarCohenZieve2000}, the construction of difference sets with Singer parameters \cite{DILLON2004}, the computation of cross-correlations between $m$-sequences \cite{DOBBERTIN2006,HELLESETH2008} and the construction of error-correcting codes \cite{BRACKEN2009}. More general polynomials of the form $x^{2^k+1}+rx^{2^k}+sx+t$ can also be brought into this form by the simple substitution of $(r+s^{\frac 1{2^k}})x+r$ for the variable $x$. It is clear that $P_a(x)$ has no multiple roots. In 2004, Bluher \cite{BLUHER2004} proved the following result. \begin{theorem} \label{thm:number_zeros} For any $a\in\GF n^*$ and any positive integer $k$, the polynomial $P_a(x)$ has either none, one, three or $2^{\gcd(k,n)}+1$ zeros in $\GF n$. \end{theorem} In this paper, we consider the particular case $\gcd(n,k)=1$. In this case, Theorem~\ref{thm:number_zeros} says that $P_a(x)$ has none, one or three zeros in $\GF n$. In 2008, Helleseth and Kholosha \cite{HELLESETH2008} provided a criterion on $a$ for $P_a(x)$ to have exactly one zero in $\GF{n}$, together with an explicit expression of the unique zero, when $\gcd(k,n)=1$. In 2014, Bracken, Tan and Tan \cite{BRACKEN2014} presented a criterion on $a$ for $P_a(x)$ to have no zeros in $\GF{n}$ when $n$ is even and $\gcd(k,n)=1$. In this paper, we explicitly calculate all possible zeros in $\GF n$ of $P_a(x)$ when $\gcd(n,k)=1$. A new criterion for when $P_a(x)$ has none, one or three zeros is a by-product of this result. We begin by showing that we can reduce the study to the case when $k$ is odd.
In the odd $k$ case, one core of our approach is to exploit a recent polynomial identity special to characteristic 2, presented in \cite{BLUHER2016} (Theorem~\ref{thm:identity_dickson}). This polynomial identity enables us to divide the problem of finding the zeros in $\GF n$ of $P_a$ into two independent problems: Problem 1, to find the unique preimage of an element of $\GF n$ under a M\"{u}ller-Cohen-Matthews (MCM) polynomial, and Problem 2, to find the preimages of an element of $\GF n$ under a Dickson polynomial (subsection~\ref{sec:preliminaries}). There are two key stages in solving Problem~\ref{MCMProblem}. One is to establish a relation between the MCM polynomial and the Dobbertin polynomial. The other is to find an explicit solution formula for the affine equation $x^{2^k}+x=b, b\in \GF{n}$. These are done in subsection \ref{sec:probl-refmcmpr}, and Problem~\ref{MCMProblem} is solved by Theorem~\ref{thm:SolvingA}. Problem~\ref{DicksonProblem} is relatively easy and is answered by Theorem~\ref{even_n_odd_k} and Theorem~\ref{odd_n_odd_k} in subsection \ref{sec:probl-refd}. Finally, we collect all these results together to give explicit expressions of all possible zeros of $P_a$ in $\GF n$ in Theorem~\ref{thm:maineven}, Theorem~\ref{thm:mainoddodd} and Theorem~\ref{thm:mainoddeven}. \section{Preliminaries} In this section, we state some results on finite fields and introduce the classical polynomials that we shall need in the sequel. We begin with the following result, which will play an important role in our study. \begin{proposition} \label{prop:decomposition} Let $n$ be a positive integer. Then every element $z$ of $\GF n^*:=\GF{n}\setminus\{0\}$ can be written (in exactly two ways) as $z=c+\frac 1c$, where $c\in\GF n^\star:=\GF{n}\setminus \GF{}$ if $\Tr{n}(\frac{1}{z})=0$ and $c\in \mu_{2^n+1}^{\star}:=\{\zeta\in\GF {2n}\mid \zeta^{2^n+1}=1\}\setminus \{1\}$ if $\Tr{n}(\frac{1}{z})=1$.
\end{proposition} \begin{proof} For $z\in \GF n^*$, $z=c+\frac 1c$ is equivalent to $\frac{1}{z^2}=\frac{c}{z}+\left(\frac{c}{z}\right)^2$, and thus this equation has a solution in $\GF{n}$ if and only if $\Tr{n}(\frac{1}{z})=0$. Hence, the mapping $c\longmapsto c+\frac 1c$ is 2-to-1 from $\GF n$ onto $\{z\in \GF{n}\mid \Tr{n}(\frac{1}{z})=0\}$ with the convention $\frac{1}{0}:=0$. Also, since $\left(c+\frac 1c\right)^{2^n}=c^{2^n}+\left(\frac 1c\right)^{2^n}=\frac 1c+c$ for $c\in \mu_{2^n+1}^{\star}$, the mapping $c\longmapsto c+\frac 1c$ is 2-to-1 from $\mu_{2^n+1}^{\star}$, of cardinality $2^n$, onto $\{z\in \GF{n}\mid \Tr{n}(\frac{1}{z})=1\}$, of cardinality $2^{n-1}$. \end{proof} We shall also need two classical families of polynomials, the Dickson polynomials of the first kind and the M\"{u}ller-Cohen-Matthews polynomials. The Dickson polynomial of the first kind of degree $k$ in the indeterminate $x$ and with parameter $a\in\GF n^*$ is \begin{displaymath} D_k(x,a) = \sum_{i=0}^{\lfloor k/2\rfloor}\frac{k}{k-i}\binom{k-i}{i}a^ix^{k-2i}, \end{displaymath} where $\lfloor k/2\rfloor$ denotes the largest integer less than or equal to $k/2$. In this paper, we consider only the Dickson polynomials of the first kind $D_k(x,1)$, which we shall denote by $D_k(x)$ throughout the paper. A classical property of Dickson polynomials that we shall use extensively is \begin{proposition} For any positive integer $k$ and any $x\in\GF n$, we have \begin{equation} \label{eq:Dickson} D_k\left(x+\frac 1x\right) = x^k + \frac{1}{x^k}. \end{equation} \end{proposition} M\"{u}ller-Cohen-Matthews polynomials are another classical family of polynomials, defined as follows \cite{COHEN94}: \begin{equation*} f_{k,d}(X) := \frac{{T_k(X^c)}^d}{X^{2^k}} \end{equation*} where \begin{equation*} T_k(X) := \sum_{i=0}^{k-1}X^{2^i}\quad\text{and}\quad cd = 2^k+1. \end{equation*} A basic property of such polynomials that we shall need in this paper is the following statement.
\begin{theorem} \label{thm:MCM_permutation} Let $k$ and $n$ be two positive integers with $\gcd(k,n)=1$. \begin{enumerate} \item If $k$ is odd, then $f_{k,2^{k}+1}$ is a permutation of $\GF n$. \item If $k$ is even, then $f_{k,2^{k}+1}$ is $2$-to-$1$ on $\GF n$. \end{enumerate} \end{theorem} \begin{proof} For odd $k$, see \cite{COHEN94}. When $k$ is even, $f_{k,2^k+1}$ is not a permutation of $\GF n$. Indeed, Theorem 10 of \cite{DILLON2004} states that $f_{k,1}$ is $2$-to-$1$, and the statement then follows from the facts that $f_{k,2^k+1}(x^{2^k+1})=f_{k,1}(x)^{2^k+1}$ and $\gcd(2^k+1, 2^n-1)=1$ when $\gcd(k,n)=1$. \end{proof} We will exploit a recent polynomial identity involving Dickson polynomials established in \cite[Theorem 2.2]{BLUHER2016}. \begin{theorem} \label{thm:identity_dickson} In the polynomial ring $\GF[2^k]{}[X,Y]$, we have the identity \begin{equation*} X^{2^{2k}-1}+\left(\sum_{i=1}^kY^{2^k-2^i}\right)X^{2^k-1}+Y^{2^k-1} = \prod_{w\in\GF[2^k]{}^{*}} \left(D_{2^k+1}(wX) -Y\right). \end{equation*} \end{theorem} Finally, we remark that the identity of Abhyankar, Cohen and Zieve \cite[Theorem 1.1]{AbhyankarCohenZieve2000}, which is tantalizingly similar to this identity, holds in any characteristic, while this identity is special to characteristic 2 (this may happen because the Dickson polynomials are ramified at the prime 2). However, the Abhyankar-Cohen-Zieve identity has not led us to a solution of $P_a(x)=0$. \section{Solving $P_a(x)=0$} Throughout this section, $k$ and $n$ are coprime and we set $q=2^k$. \subsection{Splitting the problem} \label{sec:preliminaries} One core of our approach is to apply Theorem \ref{thm:identity_dickson} to the study of the zeros in $\GF n$ of $P_a$. To this end, we observe firstly that \begin{equation*} \begin{split} &X^{q^2-1}+\left(\sum_{i=1}^kY^{q-2^i}\right)X^{q-1}+Y^{q-1}\\ &\qquad = X^{q^2-1} + Y^{q} T_k\left(\frac 1Y\right)^2\,X^{q-1}+Y^{q-1}.
\end{split} \end{equation*} Substituting $tx$ for $X$ in the above identity, with $t^{q^2-q}=Y^{q}T_k\left(\frac 1Y\right)^2$, we get \begin{equation*} \begin{split} &X^{q^2-1}+\left(\sum_{i=1}^kY^{q-2^i}\right)X^{q-1}+Y^{q-1}\\ &\qquad= Y^{q}T_k\left(\frac 1Y\right)^2t^{q-1}\left(x^{q^2-1}+x^{q-1} + \frac{1}{YT_k\left(\frac 1Y\right)^{2}t^{q-1}}\right).\\ \end{split} \end{equation*} Now, $t^{q^2-q}=Y^{q}T_k\left(\frac 1Y\right)^2$ is equivalent to $t^{q-1}=YT_k\left(\frac 1Y\right)^{\frac 2q}$. Therefore \begin{equation*} \begin{split} YT_k\left(\frac 1Y\right)^{2}t^{q-1} = Y^{2}T_k\left(\frac 1Y\right)^{\frac{2(q+1)}q} = \left(f_{k,q+1}\left(\frac 1Y\right)\right)^{\frac 2q}. \end{split} \end{equation*} Putting all these calculations together, we get \begin{equation} \label{eq:A} \begin{split} &x^{q^2-1}+x^{q-1} + \frac{1}{ \left(f_{k,q+1}\left(\frac 1Y\right)\right)^{\frac 2q}} \\ &\qquad= \frac{1}{Y^{q-1} \left(f_{k,q+1}\left(\frac 1Y\right)\right)^{\frac 2q}}\left(X^{q^2-1}+\left(\sum_{i=1}^kY^{q-2^i}\right)X^{q-1}+Y^{q-1}\right). \end{split} \end{equation} If $k$ is odd, $f_{k,q+1}$ is a permutation polynomial of $\GF n$ by Theorem~\ref{thm:MCM_permutation}. Therefore, for any $a\in\GF n^*$, there exists a unique $Y$ in $\GF n^*$ such that $a=\frac{1}{f_{k,q+1}\left(\frac 1Y\right)^{\frac 2q}}$. Hence, by Theorem~\ref{thm:identity_dickson} and equation~(\ref{eq:A}), we have \begin{equation} \label{eq:Dickson1} P_a\left(x^{q-1}\right) = x^{q^2-1}+x^{q-1}+a = \frac{1}{Y^{q-1} \left(f_{k,q+1}\left(\frac 1Y\right)\right)^{\frac 2q}}\prod_{w\in\GF[q]{}^*}\left(D_{q+1}\left(wtx\right)-Y\right) \end{equation} where $Y$ is the unique element of $\GF n^*$ such that $a=\frac{1}{f_{k,q+1}\left(\frac 1Y\right)^{\frac 2q}}$ and $t^{q-1}=YT_k\left(\frac 1Y\right)^{\frac 2q}$. Now, since $\gcd(q-1,2^n-1)=1$, the zeros of $P_a(x)$ are the images of the zeros of $P_a(x^{q-1})$ under the map $x\mapsto x^{q-1}$.
Therefore, when $k$ is odd, equation~(\ref{eq:Dickson1}) shows that finding the zeros of $P_a(x^{q-1})$ amounts to determining the preimages of $Y$ under the Dickson polynomial $D_{q+1}$. When $k$ is even, $f_{k,q+1}$ is no longer a permutation and we cannot repeat the preceding argument (indeed, when $k$ is even, $f_{k,q+1}$ is $2$-to-$1$, see Theorem~\ref{thm:MCM_permutation}). Fortunately, we can go back to the odd case by rewriting the equation. Indeed, for $x\in \GF{n}$, \begin{equation*} \begin{split} P_a(x) &= x^{2^k+1}+x+a = \left(x^{2^{n-k}+1}+x^{2^{n-k}}+a^{2^{n-k}}\right)^{2^k}\\ &= \left((x+1)^{2^{n-k}+1}+(x+1)+a^{2^{n-k}}\right)^{2^k} \end{split} \end{equation*} and so \begin{equation} \label{eq:even_case} \{x\in\GF n\mid P_a(x) = 0\} = \left\{x + 1\mid x^{2^{n-k}+1}+x+a^{2^{n-k}}=0,\,x\in\GF n\right\}. \end{equation} If $k$ is even, then $n$ is odd since $\gcd(k,n)=1$, so $n-k$ is odd and we can reduce to the odd case. We now summarize the above discussion in the following theorem. \begin{theorem} \label{thm:main} Let $k$ and $n$ be two positive integers such that $\gcd(k,n)=1$. \begin{enumerate} \item\label{oddcase} Let $k$ be odd and $q=2^k$. Let $Y\in\GF n^*$ be (uniquely) defined by $a=\frac{1}{f_{k,q+1}\left(\frac 1Y\right)^{\frac 2q}}$. Then, \begin{equation*} \{x\in\GF n\mid P_a(x) = 0\} = \left\{\frac{z^{q-1}}{YT_k\left(\frac 1Y\right)^{\frac 2q}}\,|\, D_{q+1}(z) = Y,\, z\in\GF n\right\}. \end{equation*} \item Let $k$ be even and $q^\prime=2^{n-k}$. Let $Y^\prime\in\GF n^*$ be (uniquely) defined by $a^{q^\prime}=\frac{1}{f_{n-k,q^\prime+1}\left(\frac 1{Y^\prime}\right)^{\frac 2{q^\prime}}}$. Then, \begin{equation*} \{x\in\GF n\mid P_a(x) = 0\} = \left\{1+\frac{z^{q^\prime-1}}{Y^\prime T_{n-k}\left(\frac 1{Y^\prime}\right)^{\frac 2{q^\prime}}}\,|\, D_{q^\prime+1}(z) = Y^\prime,\, z\in\GF n\right\}. \end{equation*} \end{enumerate} \end{theorem} \begin{proof} Suppose that $k$ is odd.
Equation~(\ref{eq:Dickson1}) shows that the zeros of $P_a$ in $\GF n$ are $x^{q-1}$ for the elements $x\in\GF n^\star$ such that $D_{q+1}(wtx)=Y$, where $t^{q-1}=YT_k\left(\frac 1Y\right)^{\frac 2q}$. Set $z=wtx$. Then, since $w\in\GF[q]{}^*$, $x^{q-1}=\left(\frac{z}{wt}\right)^{q-1}=\frac{z^{q-1}}{t^{q-1}}=\frac{z^{q-1}}{YT_k\left(\frac 1Y\right)^{\frac 2q}}$. Item 2 follows from Item \ref{oddcase} and equality \eqref{eq:even_case}. \end{proof} Theorem~\ref{thm:main} shows that, for odd $k$, we can split the problem of finding the zeros in $\GF n$ of $P_a$ into two independent problems. \begin{problem} \label{MCMProblem} For $a\in\GF n^*$, find the unique element $Y$ in $\GF n^*$ such that \begin{equation}\label{eq:ProblemA} a^{\frac q2} = \frac{1}{f_{k,q+1}\left(\frac 1Y\right)}. \end{equation} \end{problem} \begin{problem} \label{DicksonProblem} For $Y\in\GF n^*$, find the preimages in $\GF n$ of $Y$ under the Dickson polynomial $D_{q+1}$, that is, find the elements of the set \begin{equation}\label{eq:ProblemB} D_{q+1}^{-1}(Y) = \{z\in\GF n^\star\mid D_{q+1}(z)= Y\}. \end{equation} \end{problem} In the following two subsections, we shall study these two problems only for odd $k$ since, if $k$ is even, it suffices to replace $k$ by $n-k$, $q$ by $q^\prime=2^{n-k}$, and $a$ by $a^{q^\prime}$ in all the results of the odd case. \subsection{On Problem~\ref{MCMProblem}} \label{sec:probl-refmcmpr} Define \begin{equation} \label{eq:Qkk'} Q_{k,k^\prime}^\prime(x) =\frac{x^{q+1}}{\sum_{i=1}^{k^\prime} x^{q^i}} \end{equation} where $k^\prime < 2n$ is the inverse of $k$ modulo $2n$, that is, such that $kk^\prime\equiv 1 \pmod{2n}$. Note that $k^\prime$ is odd since $\gcd(k^\prime, 2n)=1$. It is known that if $\gcd(2n,k)=1$ and $k^\prime$ is odd, then $Q_{k,k^\prime}^\prime$ is a permutation of $\GF {2n}$ (see \cite{DILLON2004} or \cite{DILLON99}, where $Q_{k,k^\prime}=1/Q_{k,k^\prime}^\prime$ is considered instead).
Indeed, due to \cite{DILLON2004}, defining the following sequences of polynomials \[ A_1(x)=x,\, A_2(x)=x^{q+1},\, A_{i+2}(x)=x^{q^{i+1}} A_{i+1}(x)+x^{q^{i+1}-q^i}A_i(x), \quad i\geq 1, \] \[ B_1(x)=0,\, B_2(x)=x^{q-1},\, B_{i+2}(x)=x^{q^{i+1}} B_{i+1}(x)+x^{q^{i+1}-q^i}B_i(x), \quad i\geq 1, \] the polynomial expression of the inverse $R_{k,k^\prime}$ of the mapping induced by $Q_{k,k^\prime}^\prime$ on $\GF{2n}$ is \begin{equation} \label{eq:Rkk'} R_{k,k'}(x)=\sum_{i=1}^{k'}A_i(x)+B_{k'}(x). \end{equation} Directly from the definitions, it follows that \[f_{k,q+1}(x+x^2) = \frac{(x+x^q)^{q+1}}{x^q+x^{2q}}\] and \[ Q_{k,k^\prime}^\prime\left(x+x^{q}\right)= \frac{(x+x^q)^{q+1}}{x^q+x^{q^{k^\prime+1}}}.\] Since $x^{2q}=x^{q^{k^\prime+1}}\Longleftrightarrow x=x^{2^{kk^\prime-1}}$, it holds that \begin{equation} \label{eq:DillonDobbertin} f_{k,q+1}(x+x^2) = Q_{k,k^\prime}^\prime\left(x+x^{q}\right) \end{equation} for any $x\in \GF{2n}$. Let $x$ be an element of $\GF{2n}$ such that \[\frac{1}{Y}=x+x^2.\] Using \eqref{eq:DillonDobbertin}, we can rewrite \eqref{eq:ProblemA} as \[ a^{-\frac q2} =Q_{k,k^\prime}^\prime\left(x+x^{q}\right).\] Therefore, we have \begin{proposition} \label{thm:MCMLinearEquation} Let $a\in\GF n^*$. Let $x\in\GF{2n}$ be a solution of \begin{displaymath} R_{k,k^\prime}\left(a^{-\frac q2}\right) =x+x^{q}. \end{displaymath} Then, $Y=\frac{1}{x+x^{2}}=\left(1+\frac{1}{x}\right)+\frac{1}{\left(1+\frac{1}{x}\right)}$ is the unique solution in $\GF{n}$ of $a^{\frac q2} = \left(f_{k,q+1}\left(\frac 1Y\right)\right)^{-1}$. \end{proposition} Proposition~\ref{thm:MCMLinearEquation} shows that solving Problem~\ref{MCMProblem} amounts to finding a solution of the affine equation $x+x^q=b$, which is done in the following. \begin{proposition} \label{MCM:Solvingx^q+x=b} Let $k$ be odd and $\gcd(n,k)=1$.
Then, for any $b\in\GF n$, \begin{displaymath} \{x\in\GF {2n}\mid x+x^q=b\}=S_{n,k}\left(\frac{b}{\zeta+1}\right)+\GF{}, \end{displaymath} where $S_{n,k}(x)=\sum_{i=0}^{n-1}x^{q^{i}}$ and $\zeta$ is an element of $\mu_{2^n+1}^{\star}$. \end{proposition} \begin{proof} Since $k$ is odd and $\gcd(n,k)=1$, we have $\gcd(2n,k)=1$, so the linear mapping $x\in\GF{2n}\longmapsto x+x^q$ has a kernel of dimension 1, i.e. the equation $x+x^q=b$ has at most 2 solutions in $\GF{2n}$. Since $S_{n,k}(x) + \left(S_{n,k}(x)\right)^{q} = x + x^{q^{n}}$, we have \begin{eqnarray*} S_{n,k}\left(\frac{b}{\zeta+1}\right)+\left(S_{n,k}\left(\frac{b}{\zeta+1}\right)\right)^{q} + b &=& \frac{b}{\zeta+1} + \left(\frac{b}{\zeta+1}\right)^{q^{n}} + b\\ &=& \frac{b}{\zeta+1} + \frac{b}{\zeta^{q^{n}}+1} + b\\ &=& \frac{b}{\zeta+1} + \frac{b}{1/\zeta+1} + b\\ &=& 0 \end{eqnarray*} and thus $S_{n,k}\left(\frac{b}{\zeta+1}\right)$ and $S_{n,k}\left(\frac{b}{\zeta+1}\right)+1$ are indeed the solutions in $\GF{2n}$ of the equation $x+x^q=b$. \end{proof} By Proposition~\ref{thm:MCMLinearEquation} and Proposition~\ref{MCM:Solvingx^q+x=b}, we can now make the solution of Problem~\ref{MCMProblem} explicit. \begin{theorem} \label{thm:SolvingA} Let $a\in\GF n^*$. Let $k$ be odd with $\gcd(n,k)=1$ and let $k^\prime$ be the inverse of $k$ modulo $2n$. Then, the unique solution of (\ref{eq:ProblemA}) in $\GF n^*$ is \begin{displaymath} Y = \frac{1}{S_{n,k}\left(\frac{R_{k,k^\prime}\left(a^{-\frac q2}\right)}{\zeta+1}\right)+\left(S_{n,k}\left(\frac{R_{k,k^\prime}\left(a^{-\frac q2}\right)}{\zeta+1}\right)\right)^2} \end{displaymath} where $\zeta$ denotes any element of $\GF{2n}^\star$ such that $\zeta^{2^n+1}=1$, $S_{n,k}(x)=\sum_{i=0}^{n-1}x^{q^{i}}$ and $R_{k,k^\prime}$ is defined by (\ref{eq:Rkk'}). Furthermore, we have $Y=T+\frac{1}{T}$ for \begin{displaymath} T=1+\frac{1}{S_{n,k}\left(\frac{R_{k,k^\prime}\left(a^{-\frac q2}\right)}{\zeta+1}\right)}.
\end{displaymath} \end{theorem} \subsection{On Problem~\ref{DicksonProblem}} \label{sec:probl-refd} By Proposition~\ref{prop:decomposition}, one can write $z=c+\frac{1}{c}$ where $c\in\GF{n}^\star$ or $c\in \mu_{2^n+1}^{\star}$. Equation~(\ref{eq:Dickson}) applied to $z$ then leads to \begin{equation} \label{eq:ADickson} D_{q+1}(z) = c^{q+1} + \frac{1}{c^{q+1}}. \end{equation} Thus, we are reduced to solving first the equation $T+\frac{1}{T}=Y$, then the equation $c^{q+1}=T$ in $\GF{n}^\star \cup \mu_{2^n+1}^{\star}$, and finally setting $z=c+\frac{1}{c}$. Here, let us point out that $c^{q+1}=T$ is equivalent to $\left(\frac 1c\right)^{q+1}=\frac 1T$ and that $c$ and $\frac 1c$ define the same element $z=c+\frac 1c$ of $\GF n$. Proposition~\ref{prop:decomposition} says that the equation $T+\frac 1T = Y$ has two solutions in $\GF n^\star$ if $\Tr{n}\left(\frac 1Y\right)=0$ and two solutions in $\mu_{2^n+1}^{\star}$ if $\Tr{n}\left(\frac 1Y\right)=1$. In fact, Proposition~\ref{MCM:Solvingx^q+x=b} gives explicit expressions for the solutions, namely \begin{equation}\label{solution:T+1/T=Y} T=YS_{n,1}\left(\frac{1}{Y^2(\zeta+1)}\right) \text{ and } T=YS_{n,1}\left(\frac{1}{Y^2(\zeta+1)}\right)+Y, \end{equation} where $S_{n,1}(x)=\sum_{i=0}^{n-1}x^{2^i}$ and $\zeta$ is any element of $\mu_{2^n+1}^{\star}$. Now, let us consider the solutions of $c^{q+1}=T$ in $\GF{n}^\star \cup \mu_{2^n+1}^{\star}$. First, note that if $T\in \GF{n}^\star$, then necessarily $c\in\GF n^\star$ (indeed, if $c\in \mu_{2^n+1}^{\star}$, we get $T^2=T\cdot T=T^{2^n}\cdot T=T^{2^n+1}=(c^{2^n+1})^{q+1}=1$, contradicting $T\notin \GF{}$).
Recall that if $k$ is odd and $\gcd(n,k)=1$, then \begin{equation}\label{gcd:+-} \gcd(q+1,2^n-1) = \begin{cases}1, & \mbox{if $n$ is odd}\\ 3, & \mbox{if $n$ is even}\end{cases} \end{equation} and \begin{equation}\label{gcd:++} \gcd(q+1,2^n+1) = \begin{cases}1, & \mbox{if $n$ is even}\\ 3, & \mbox{if $n$ is odd.}\end{cases} \end{equation} Therefore, if $T\in \GF{n}^\star$, then there are $0$ (if $T$ is a non-cube in $\GF{n}^\star$) or $3$ (if $T$ is a cube in $\GF{n}^\star$) elements $c$ in $\GF n^\star$ such that $c^{q+1}=T$ when $n$ is even, while there is a unique such $c$ (namely $T^{(q+1)^{-1}\mod 2^n-1}$) when $n$ is odd. And, if $T\in \mu_{2^n+1}^{\star}$, then there are $0$ (if $T$ is a non-cube in $\mu_{2^n+1}^{\star}$) or $3$ (if $T$ is a cube in $\mu_{2^n+1}^{\star}$) elements $c$ in $\mu_{2^n+1}^{\star}$ such that $c^{q+1}=T$ when $n$ is odd, while there is a unique such $c$ (namely $T^{(q+1)^{-1}\mod 2^n+1}$) when $n$ is even. It remains to show that, when there are three solutions $c$, they define three distinct elements $z\in \GF n^\star$. Let $w$ denote a primitive element of $\GF[4]{}$. Then these three solutions of $c^{q+1}=T$ are of the form $c$, $cw$ and $cw^2$. Now, for distinct $w_1, w_2\in\GF[4]{}^*$, $cw_1+\frac{1}{cw_1}=cw_2+\frac{1}{cw_2}$ implies that $cw_1=cw_2$ or $cw_1=\frac{1}{cw_2}$ (because $A+\frac 1A = B + \frac 1B$ is equivalent to $(A+B)(AB + 1) = 0$). The second case is impossible because it would imply $T=c^{q+1}=\left(\frac{1}{w_1^{\frac12}w_2^{\frac12}}\right)^{q+1}=1$, since $3$ divides $q+1$ when $k$ is odd. We can thus state the following answer to Problem~\ref{DicksonProblem}. \begin{theorem} \label{even_n_odd_k} Let $k$ be odd and $n$ be even. Let $Y\in\GF n^*$. Let $T$ be any element of $\GF {2n}$ such that $T+\frac 1T=Y$ (such a $T$ is given by \eqref{solution:T+1/T=Y}).
\begin{enumerate} \item If $T$ is a non-cube in $\GF{n}^\star$, then \[D_{q+1}^{-1}(Y)=\emptyset.\] \item If $T$ is a cube in $\GF{n}^\star$, then \begin{equation*} D_{q+1}^{-1}(Y)=\left\{cw+\frac{1}{cw}\mid c^{q+1} = T,\,c\in\GF n^\star,\, w\in\GF[4]{}^*\right\}. \end{equation*} \item If $T$ is not in $\GF{n}$, then \begin{equation*} D_{q+1}^{-1}(Y)=\left\{T^{(q+1)^{-1}\mod 2^n+1}+\frac{1}{T^{(q+1)^{-1}\mod 2^n+1}}\right\}. \end{equation*} \end{enumerate} \end{theorem} \begin{remark} Item 1 of Theorem~\ref{even_n_odd_k} recovers \cite[Theorem 2.1]{BRACKEN2014}, which states that, when $n$ is even and $\gcd(n,k)=1$ (so $k$ is odd), $P_a$ has no zeros in $\GF n$ if and only if $a^{-1}=f_{k,q+1}\left(\frac{1}{T+\frac1T}\right)^{\frac 2q}$ for some non-cube $T$ of $\GF n^\star$. Indeed, Theorem 2.1 of \cite{BRACKEN2014} is not stated exactly in this form, but it is worth noticing that the quantity denoted $A(b)$ in \cite{BRACKEN2014} satisfies ${A(b)}^{-1}=f_{k,q+1}\left(\frac{1}{b^{\frac14}+\frac 1{b^{\frac 14}}}\right)^{\frac2q}$. \end{remark} \begin{theorem} \label{odd_n_odd_k} Let $k$ be odd and $n$ be odd. Let $Y\in\GF n^*$. Let $T$ be any element of $\GF {2n}$ such that $T+\frac 1T=Y$ (such a $T$ is given by \eqref{solution:T+1/T=Y}). \begin{enumerate} \item If $T$ is a non-cube in $\mu_{2^n+1}^{\star}$, then \[D_{q+1}^{-1}(Y)=\emptyset.\] \item If $T$ is a cube in $\mu_{2^n+1}^{\star}$, then \begin{equation*} D_{q+1}^{-1}(Y)=\left\{cw+\frac{1}{cw}\mid c^{q+1} = T,\,c\in\mu_{2^n+1}^{\star},\, w\in\GF[4]{}^*\right\}. \end{equation*} \item If $T$ is in $\GF{n}$, then \begin{equation*} D_{q+1}^{-1}(Y)=\left\{T^{(q+1)^{-1}\mod 2^n-1}+\frac{1}{T^{(q+1)^{-1}\mod 2^n-1}}\right\}. \end{equation*} \end{enumerate} \end{theorem} \subsection{On the roots in $\GF{n}$ of $P_a(x)$} \label{sec:zeros-refeq:1} We now sum up the results of the previous subsections to give an explicit expression of the roots in $\GF{n}$ of $P_a(x)$.
Let $k$ denote any positive integer coprime with $n$ and let $a\in\GF n^*$. First, let us consider the case of odd $k$. Let $k^\prime$ be the inverse of $k$ modulo $2n$. Define \begin{displaymath} T=1+\frac{1}{S_{n,k}\left(\frac{R_{k,k^\prime}\left(a^{-\frac q2}\right)}{\zeta+1}\right)}, \end{displaymath} where $\zeta$ is any element of $\GF{2n}^\star$ such that $\zeta^{2^n+1}=1$, $S_{n,k}(x)=\sum_{i=0}^{n-1}x^{q^{i}}$ and $R_{k,k^\prime}$ is defined by (\ref{eq:Rkk'}). According to Theorem~\ref{thm:SolvingA}, Theorem~\ref{even_n_odd_k} and Theorem~\ref{odd_n_odd_k}, we have the following. \begin{theorem} \label{thm:maineven} Let $n$ be even, $\gcd(n,k)=1$ and $a\in \GF{n}^*$. \begin{enumerate} \item If $T$ is a non-cube in $\GF n$, then $P_a(x)$ has no zeros in $\GF n$. \item If $T$ is a cube in $\GF n$, then $P_a(x)$ has three distinct zeros $\frac{\left(cw+\frac 1{cw}\right)^{q-1}}{YT_k\left(\frac 1Y\right)^{\frac 2q}}$ in $\GF n$, where $c^{q+1}=T$, $w\in\GF[4]{}^*$ and $Y=T+\frac 1T$. \item If $T$ is not in $\GF{n}$, then $P_a(x)$ has a unique zero $\frac{\left(c+\frac{1}{c}\right)^{q-1}}{YT_k\left(\frac 1Y\right)^{\frac 2q}}$ in $\GF n$, where $c={T}^{(q+1)^{-1}\mod 2^n+1}$ and $Y=T+\frac 1T$. \end{enumerate} \end{theorem} \begin{remark} When $k=1$, that is, when $P_a(x)=x^3+x+a$, Item (1) of Theorem~\ref{thm:maineven} is exactly Corollary 2.2 of \cite{BRACKEN2014}, which states that, when $n$ is even, $P_a$ is irreducible over $\GF n$ if and only if $a=c+\frac1c$ for some non-cube $c$ of $\GF n$. \end{remark} \begin{theorem} \label{thm:mainoddodd} Let $n$ and $k$ be odd with $\gcd(n,k)=1$ and $a\in \GF{n}^*$. \begin{enumerate} \item If $T$ is a non-cube in $\mu_{2^n+1}^{\star}$, then $P_a(x)$ has no zeros in $\GF n$. \item If $T$ is a cube in $\mu_{2^n+1}^{\star}$, then $P_a(x)$ has three distinct zeros $\frac{\left(cw+\frac 1{cw}\right)^{q-1}}{YT_k\left(\frac 1Y\right)^{\frac 2q}}$ in $\GF n$, where $c^{q+1}=T$, $w\in\GF[4]{}^*$ and $Y=T+\frac 1T$.
\item If $T$ is in $\GF n$, then $P_a(x)$ has a unique zero $\frac{\left(c+\frac{1}{c}\right)^{q-1}}{YT_k\left(\frac 1Y\right)^{\frac 2q}}$ in $\GF n$, where $c={T}^{(q+1)^{-1}\mod 2^n-1}$ and $Y=T+\frac 1T$. \end{enumerate} \end{theorem} When $k$ is even, following Item (2) of Theorem~\ref{thm:main}, we introduce $l=n-k$, $q^\prime=2^{l}$ and $l^\prime$, the inverse of $l$ modulo $2n$. Define \begin{displaymath} T^\prime=1+\frac{1}{S_{n,l}\left(\frac{R_{l,l^\prime}\left(a^{-\frac {(q^\prime)^2}{2}}\right)}{\zeta+1}\right)}, \end{displaymath} where $\zeta$ is any element of $\GF{2n}^\star$ such that $\zeta^{2^n+1}=1$, $S_{n,l}(x)=\sum_{i=0}^{n-1}x^{{q^\prime}^{i}}$ and $R_{l,l^\prime}$ is defined by (\ref{eq:Rkk'}). \begin{theorem} \label{thm:mainoddeven} Let $n$ be odd and $k$ be even with $\gcd(n,k)=1$. Let $a\in \GF{n}^*$. \begin{enumerate} \item If $T^\prime$ is a non-cube in $\mu_{2^n+1}^{\star}$, then $P_a(x)$ has no zeros in $\GF n$. \item If $T^\prime$ is a cube in $\mu_{2^n+1}^{\star}$, then $P_a(x)$ has three distinct zeros $1+\frac{\left(dw+\frac 1{dw}\right)^{q^\prime-1}}{Y^\prime T_{l}\left(\frac 1{Y^\prime}\right)^{\frac 2{q^\prime}}}$ in $\GF n$, where $d^{q^{\prime}+1}=T^\prime$, $w\in\GF[4]{}^*$ and $Y^\prime=T^\prime+\frac 1{T^\prime}$. \item If $T^\prime$ is in $\GF n$, then $P_a(x)$ has a unique zero $1+\frac{{\left(c+\frac{1}{c}\right)}^{q^\prime-1}}{Y^\prime T_{l}\left(\frac 1{Y^\prime}\right)^{\frac 2{q^\prime}}}$ in $\GF n$, where $c={T^\prime}^{(q^\prime+1)^{-1}\mod 2^n-1}$ and $Y^\prime=T^\prime+\frac 1{T^\prime}$. \end{enumerate} \end{theorem} \begin{remark} When $n$ is even, Theorem \ref{thm:maineven} shows that $P_a$ has a unique zero if and only if $T$ is not in $\GF n$. According to Proposition \ref{thm:MCMLinearEquation}, this is equivalent to $\Tr n(R_{k,k^\prime}(a^{-\frac q2}))=1$, that is, $\Tr n(R_{k,k^\prime}(a^{-1}))=1$. When $n$ is odd and $k$ is odd (resp.
even), Theorem \ref{thm:mainoddodd} and Theorem \ref{thm:mainoddeven} show that $P_a$ has a unique zero in $\GF n$ if and only if $T$ (resp. $T^\prime$) is in $\GF n$. According to Proposition \ref{thm:MCMLinearEquation}, this is equivalent to $\Tr n(R_{k,k^\prime}(a^{-1}))=0$ or $\Tr n(R_{l,l^\prime}(a^{-1}))=0$ for odd $k$ or even $k$, respectively. Incidentally, for $x\in \GF{n}$, $Q_{l,l^\prime}^\prime\left(x+x^{q^\prime}\right)=\frac{(x+x^{q^\prime})^{q^\prime+1}}{x^{q^\prime}+x^{2{q^\prime}}}=\left(\frac{(x+x^q)^{q+1}}{x^q+x^{2q}}\right)^{2^{(n-k)^2}}=Q_{k,k^\prime}^\prime\left(x+x^{q}\right)^{2^{(n-k)^2}}$. Hence if $T^\prime \in \GF{n}$, then $R_{l,l^\prime}(a^{-1})=R_{k,k^\prime}(a^{-1})^{\frac{1}{2^{(n-k)^2}}}$, and so $\Tr n(R_{l,l^\prime}(a^{-1}))=0$ is equivalent to $\Tr n(R_{k,k^\prime}(a^{-1}))=0$. In this way, we recover \cite[Theorem 1]{HELLESETH2008}, which states that $P_a$ has a unique zero in $\GF n$ if and only if $\Tr n(R_{k,k^\prime}(a^{-1})+1)=1$. \end{remark} \section{Conclusion} In \cite{BLUHER2004,BLUHER2016,BRACKEN2014,HELLESETH2008,HELLESETH2010}, partial results about the zeros of $P_a(x)=x^{2^k+1}+x+a$ in $\GF n$ were obtained. In this paper, we provided explicit expressions for all possible roots in $\GF n$ of $P_a(x)$ in terms of $a$, thus completing, in the case $\gcd(n,k)=1$, the study initiated in those papers. We showed that the problem of finding the zeros in $\GF n$ of $P_a(x)$ can in fact be divided, for odd $k$, into two independent problems: to find the unique preimage of an element of $\GF n$ under a M\"{u}ller-Cohen-Matthews (MCM) polynomial and to find the preimages of an element of $\GF n$ under a Dickson polynomial. We completely solved these two problems. We also presented an explicit solution formula for the affine equation $x^{2^k}+x=b, b\in \GF{n}$.
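As a quick numerical sanity check of Theorem~\ref{thm:number_zeros} in the case $\gcd(n,k)=1$ (this check is ours and not part of the paper's argument), one can count the zeros of $P_a$ by brute force for small parameters. The sketch below uses a toy pure-Python model of the field $\GF n$; the function names and the chosen irreducible polynomials are ours.

```python
# Brute-force check of Bluher's theorem: for gcd(n, k) = 1, the polynomial
# P_a(x) = x^(2^k + 1) + x + a has 0, 1 or 3 zeros in GF(2^n) for every a != 0.
from math import gcd

def gf_mul(x, y, n, mod):
    """Multiply two elements of GF(2^n), represented as ints, modulo the
    irreducible polynomial 'mod' (bit i = coefficient of X^i)."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x >> n:          # degree reached n: reduce by the modulus
            x ^= mod
    return r

def gf_pow(x, e, n, mod):
    """Square-and-multiply exponentiation in GF(2^n)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, x, n, mod)
        x = gf_mul(x, x, n, mod)
        e >>= 1
    return r

def zero_counts(n, k, mod):
    """Set of zero counts of P_a, taken over all a in GF(2^n)^*."""
    assert gcd(n, k) == 1
    e = (1 << k) + 1
    # image[x] = x^(2^k+1) + x; the zeros of P_a are exactly the x with image[x] = a
    image = [gf_pow(x, e, n, mod) ^ x for x in range(1 << n)]
    return {sum(1 for x in range(1 << n) if image[x] == a)
            for a in range(1, 1 << n)}

# GF(2^5) modelled with the irreducible polynomial X^5 + X^2 + 1; k = 2
print(zero_counts(5, 2, 0b100101))  # a subset of {0, 1, 3}, as the theorem predicts
```
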
Q: How can I hyphenate before capital letters in \ttfamily?

A while ago I asked "How can I allow line-breaks before a double-colon (::) in \texttt?" This question is about extending the line-breaking behaviour to include hyphenation.

The given solution solves the problem as it was originally asked:

\ExplSyntaxOn
\NewDocumentCommand{\cppstring}{m}
{
\tl_set:Nn \l_spraff_cppstring_tl { #1 }
% change _ to a printable underscore
\regex_replace_all:nnN { _ } { \cO\_ } \l_spraff_cppstring_tl
% change :: to \linebreak[0]::
\regex_replace_all:nnN { :: } { \c{linebreak}[0]:: } \l_spraff_cppstring_tl
% print the result
{\normalfont\ttfamily \tl_use:N \l_spraff_cppstring_tl }
}
\tl_new:N \l_spraff_cppstring_tl
\ExplSyntaxOff

(I have tweaked this slightly from the given solution: I use \normalfont\ttfamily instead of \texttt because of this question.)

This solution doesn't always work for my document. I think it would be an improvement to allow hyphenation in some cases, specifically before a capital letter in CamelCase names. I would like to allow hyphenation such as this:

lorem ipsum lorem ipsum lorem ipsum MyNamespaceName::SomeType::Nested-
Type::member_function

I want to retain the line-break-before-double-colon behaviour that I already have. How can I extend this \cppstring command to hyphenate before capital letters?

I don't know what the hyphenation rules are in detail, but this will ideally happen as little as possible -- i.e. only when an overfull hbox would otherwise be drawn. Minimising the hyphenation is desirable; preventing the overfull hboxes is critical.

A: Add to the replacement that a capital letter which is not preceded by a word boundary is changed into \- followed by that capital letter.

\documentclass{article}
\usepackage{xparse,l3regex}

\ExplSyntaxOn
\NewDocumentCommand{\cppstring}{m}
{
\tl_set:Nn \l_spraff_cppstring_tl { #1 }
% change _ to a printable underscore
\regex_replace_all:nnN { _ } { \cO\_ } \l_spraff_cppstring_tl
% change :: to \linebreak[0]::
\regex_replace_all:nnN { :: } { \c{linebreak}[0]:: } \l_spraff_cppstring_tl
% change capital letter X to \-X
\regex_replace_all:nnN { (\B[A-Z]) } { \c{-}\1 } \l_spraff_cppstring_tl
% print the result
{\normalfont\ttfamily \tl_use:N \l_spraff_cppstring_tl }
}
\tl_new:N \l_spraff_cppstring_tl
\ExplSyntaxOff

\begin{document}

lorem ipsum lorem ipsum lorem ipsum
\cppstring{MyNamespaceName::SomeType::NestedType::member_function}

\bigskip

\parbox{0pt}{
lorem ipsum lorem ipsum lorem ipsum
\cppstring{MyNamespaceName::SomeType::NestedType::member_function}
}

\end{document}

The second example shows all added hyphenation points, so that you can see you don't get one between :: and a capital letter.
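The `\B[A-Z]` idea is easy to prototype outside of TeX. The following Python sketch (our own analogue, using Python's `re` in place of l3regex) marks every capital letter that is not preceded by a word boundary; note that no mark is inserted after `::`, matching the behaviour shown above.

```python
# Python analogue of the l3regex rule: insert "\-" before every capital
# letter that is NOT preceded by a word boundary, i.e. inside CamelCase
# words but not at the start of a word or after "::".

import re

s = "MyNamespaceName::SomeType::NestedType::member_function"
marked = re.sub(r"\B([A-Z])", r"\\-\1", s)
print(marked)
```

Running this prints `My\-Namespace\-Name::Some\-Type::Nested\-Type::member_function`: each internal capital gets a discretionary-hyphen marker, while the capitals immediately after `::` are untouched because `::` creates a word boundary.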
A dead metaphor (also: lexical metaphor, "name metaphor", linguistic metaphor (usual, conventional), lexicalized metaphor, fossilized metaphor, etc.) is a metaphor whose inner form and semantic two-sidedness are no longer felt by native speakers: ушная раковина "auricle" (literally "ear shell"; cf. a sea shell), электрический ток "electric current" (cf. a flow of water), электромагнитная волна "electromagnetic wave" (cf. a sea wave). A dead metaphor is, as a rule, the result of a semantic shift, a process called the literalization of a metaphor. A dead metaphor "has lost all connection with the original image" and therefore "no longer displays any stylistic expressiveness". O. M. Loseva notes: "It is believed that any text contains an enormous number of dead or lexicalized metaphors, which are able to act on the reader through all aspects of their semantics. The dead metaphor focuses on the problem of the relation and the difference between literal and transferred meanings, and it is no less important than the living metaphor with respect to the conceptual processes that allow us to develop mental representations across a wide range of semantic fields, relating them to time, speech, and emotions. The action of dead metaphors lies in fixing established systems of meanings and classifications." George Lakoff, in the monograph "Metaphors We Live By", calls into question the division of metaphors into living and dead. In his view, many of the conventional metaphorical expressions counted among the dead are in fact alive. A dead metaphor can be renewed, that is, turned back into a figurative metaphor by means of elaboration, cf. тяжкие воспоминания "painful memories" → тяжкий крест воспоминаний "the heavy cross of memories"; similarly шить дело "to stitch up a (criminal) case" → Пускай в уголовном розыске // бисером шьют мне дело (V. Pavlova); глазунья "fried eggs sunny side up" (from глаз "eye") → Сегодня полнолунье, // я вспоминаю — как гляжу в окно: // воспетая поэтами глазунья // подмигивает мне (M. Volovik); анютины глазки "pansies" (literally "Anyuta's little eyes") → анютины глаза глядят на пруд… (V. Kazakevich). This device is called the renewal of a metaphor.
Literature Москвин В. П. Классификация русских метафор // Языковая личность: культурные концепты: Сборник научных трудов. — Волгоград: Перемена, 1996. — 259 с. Москвин В. П. Русская метафора: параметры классификации // Научные доклады высшей школы: Филологические науки. — 2000. — № 2. — С. 66–74. Москвин В. П. Русская метафора. Очерк семиотической теории. — М.: Издательство ЛКИ, 2007. — 184 с. Москвин В. П. Язык поэзии. Приёмы и стили: Терминологический словарь. — М.: Флинта, 2020. Рикер П. Живая метафора // Теория метафоры / пер. с англ., общ. ред. Н. Д. Арутюновой и М. А. Журинской. — М.: Прогресс, 1990. — С. 435–455.
Vanilla pompona, also called small vanilla or (because of its shape) banana vanilla, is a species of plant in the genus Vanilla in the orchid family (Orchidaceae). The climbing plant's natural range lies in Central America. Description Vanilla pompona is an evergreen climbing plant. The shoot axis is green and fleshy, round in cross-section with a diameter of 1 to 1.5 centimetres. The leaves are elongate-oval and end in a point; the leaf base is abruptly narrowed to slightly heart-shaped, and the petiole is 1 centimetre long and just as wide. The leaves are 15 to 25 centimetres long and 5 to 12 centimetres wide. Their upper side is glossy dark green; the underside is lighter and duller. The leaves are leathery to fleshy (0.3 to 0.5 centimetres thick), and the leaf margin is somewhat translucent and sharply edged. The inflorescence axis grows 2 to 5 centimetres long and bears six to eight large, lemon-scented flowers. The bracts are arranged in two rows and are broadly to narrowly oval, 1 to 2 centimetres long. The curved ovary is 5 to 6 centimetres long and slightly triangular in cross-section. The sepals and lateral petals are lanceolate, widest above the middle, and end bluntly; they are 7.5 to 8.5 centimetres long and 1.2 to 1.6 centimetres wide. The petals are somewhat shorter than the outer tepals; their margins may be slightly wavy, and they are keeled on the outside. The lip is 9 to 9.5 centimetres long, unlobed or faintly three-lobed. Its margins are turned upward and form a tube; only the front part spreads out and is wavy at the margin. The surface of the lip is smooth except for a tuft of backward-pointing hairs. The column grows 6 to 7 centimetres long and is fused with the margins of the lip along half of its length.
The slightly curved, sweetly aromatic fruit reaches 10 to 12, rarely up to 18, centimetres in length and measures 1.6 to 3 centimetres in cross-section. The seeds are oval, glossy black, and 0.4 millimetres across. The chromosome number is 2n = 32. Distribution Vanilla pompona has been recorded from Mexico, Nicaragua, Costa Rica and Panama; populations from Ecuador and Colombia may belong here as well. Its distribution in northern South America is unclear. Because of its aromatic fruits it is occasionally cultivated, above all in the Caribbean (Guadeloupe, the Antilles) but also in Madagascar. In cultivation it is not as delicate as the spice vanilla (Vanilla planifolia) and bears fruit after only a short time. Systematics and botanical history This orchid was described by Schiede in 1829. Within the genus Vanilla, Vanilla pompona is placed in the subgenus Xanata and, within it, in the section Xanata, which contains only species of the Neotropics. Soto Arenas and Cribb place a number of further species in the so-called Vanilla pompona group: Vanilla calyculata, Vanilla chamissonis, Vanilla columbiana, Vanilla grandiflora, Vanilla pseudopompona and Vanilla vellozii. Three subspecies can be distinguished within Vanilla pompona: Vanilla pompona subsp. pompona, the nominate form, whose range in Mexico is separated from the other populations further south; its flowers are relatively small and do not open wide. Vanilla pompona subsp. grandiflora, from the area between Trinidad and tropical South America, which, as the name suggests, has larger flowers that open wide. Vanilla pompona subsp. pittieri, from Honduras, Nicaragua, Costa Rica and Panama.
Q: How to find Joint PDF given PDF of Two Continuous Random Variables What could be a general way to find the joint PDF given two PDFs? For example, let $X$ and $Y$ be two random variables with PDFs: $$f_X(x) = \begin{cases} \frac{1}{40} & \text{if } 0 < x < 10 \\ 0 & \text{if } 10 < x < 30 \\ \frac{1}{40} & \text{if } 30 < x < 60 \end{cases} \qquad f_Y(y) = \begin{cases} 0 & \text{if } 0 < y < 10 \\ \frac{1}{20} & \text{if } 10 < y < 30 \\ 0 & \text{if } 30 < y < 60 \end{cases}$$ What is the way to find $f_Z(z)$, where $Z$ is a continuous random variable made* up of $X$ and $Y$? *made up: if I am making any sense
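Worth noting (this is an assumption, not something the question states): the marginal densities alone never determine the joint density. If one additionally assumes that $X$ and $Y$ are independent, then $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$, and for $Z = X + Y$ the density of $Z$ is the convolution $f_Z(z) = \int f_X(x) f_Y(z-x)\,dx$. A quick numerical sanity check of the product density in Python (grid step chosen arbitrarily):

```python
# The marginals alone do not pin down the joint PDF. ASSUMING X and Y are
# independent, the joint density is the product of the marginals; this
# Riemann sum checks that the product integrates to (approximately) 1.

def f_x(x):
    return 1 / 40 if (0 < x < 10 or 30 < x < 60) else 0.0

def f_y(y):
    return 1 / 20 if 10 < y < 30 else 0.0

def joint(x, y):           # valid only under the independence assumption
    return f_x(x) * f_y(y)

dx = dy = 0.1
total = sum(joint(i * dx, j * dy) * dx * dy
            for i in range(600) for j in range(600))
print(round(total, 2))     # close to 1.0
```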
\section{Introduction} The primary motivation for this paper is the existence of unexpected commutativity properties discovered during the last 10 years for double interchange semigroups. Kock \cite[Proposition 2.3]{Kock2007} presents a $4 \times 4$ configuration for which associativity and the interchange law imply the equality of two monomials, with the same placement of parentheses and operation symbols, but with different permutations of the arguments. We display his result both algebraically and geometrically: \begin{equation} \label{Kockdiagram} \begin{array}{c} \begin{array}{l} ( a \,\Box\, b \,\Box\, c \,\Box\, d ) \,\blacksquare\, ( e \,\Box\, f \,\Box\, g \,\Box\, h ) \,\blacksquare\, ( i \,\Box\, j \,\Box\, k \,\Box\, \ell ) \,\blacksquare\, ( m \,\Box\, n \,\Box\, p \,\Box\, q ) \equiv \\ ( a \,\Box\, b \,\Box\, c \,\Box\, d ) \,\blacksquare\, ( e \,\Box\, g \,\Box\, f \,\Box\, h ) \,\blacksquare\, ( i \,\Box\, j \,\Box\, k \,\Box\, \ell ) \,\blacksquare\, ( m \,\Box\, n \,\Box\, p \,\Box\, q ) \end{array} \\[5mm] \begin{array}{c} \begin{tikzpicture}[ draw = black, x = 6 mm, y = 6 mm ] \node [Square] at ($(0, 0)$) {$a$}; \node [Square] at ($(1, 0)$) {$b$}; \node [Square] at ($(2, 0)$) {$c$}; \node [Square] at ($(3, 0)$) {$d$}; \node [Square] at ($(0,-1)$) {$e$}; \node [Square] at ($(1,-1)$) {$f$}; \node [Square] at ($(2,-1)$) {$g$}; \node [Square] at ($(3,-1)$) {$h$}; \node [Square] at ($(0,-2)$) {$i$}; \node [Square] at ($(1,-2)$) {$j$}; \node [Square] at ($(2,-2)$) {$k$}; \node [Square] at ($(3,-2)$) {$\ell$}; \node [Square] at ($(0,-3)$) {$m$}; \node [Square] at ($(1,-3)$) {$n$}; \node [Square] at ($(2,-3)$) {$p$}; \node [Square] at ($(3,-3)$) {$q$}; \end{tikzpicture} \end{array} \equiv \begin{array}{c} \begin{tikzpicture}[ draw = black, x = 6 mm, y = 6 mm ] \node [Square] at ($(0, 0)$) {$a$}; \node [Square] at ($(1, 0)$) {$b$}; \node [Square] at ($(2, 0)$) {$c$}; \node [Square] at ($(3, 0)$) {$d$}; \node [Square] at ($(0,-1)$) {$e$}; \node [Square] 
at ($(1,-1)$) {$g$}; \node [Square] at ($(2,-1)$) {$f$}; \node [Square] at ($(3,-1)$) {$h$}; \node [Square] at ($(0,-2)$) {$i$}; \node [Square] at ($(1,-2)$) {$j$}; \node [Square] at ($(2,-2)$) {$k$}; \node [Square] at ($(3,-2)$) {$\ell$}; \node [Square] at ($(0,-3)$) {$m$}; \node [Square] at ($(1,-3)$) {$n$}; \node [Square] at ($(2,-3)$) {$p$}; \node [Square] at ($(3,-3)$) {$q$}; \end{tikzpicture} \end{array} \end{array} \end{equation} Note the transposition of $f$ and $g$. We use the symbol $\equiv$ as an abbreviation for the statement that the equation holds for all values of the arguments. This interplay between algebra and geometry underlies all the results in this paper. DeWolf \cite[Proposition 3.2.4]{DeWolf2013} used a similar argument with 10 variables to prove that the operations coincide in every cancellative double interchange semigroup. Bremner \& Madariaga used computer algebra to show that nine variables is the smallest number for which such a commutativity property holds. 
We display one of their results \cite[Theorem 4.1]{BM2016}; note the transposition of $e$ and $g$: \begin{equation} \label{BMdiagram} \begin{array}{c} \begin{array}{l} ( ( a \,\Box\, b ) \,\Box\, c ) \,\blacksquare\, ( ( ( d \,\Box\, ( e \,\blacksquare\, f ) ) \,\Box\, ( g \,\blacksquare\, h ) ) \,\Box\, i ) \equiv \\ ( ( a \,\Box\, b ) \,\Box\, c ) \,\blacksquare\, ( ( ( d \,\Box\, ( g \,\blacksquare\, f ) ) \,\Box\, ( e \,\blacksquare\, h ) ) \,\Box\, i ) \end{array} \\[5mm] \begin{array}{c} \begin{tikzpicture}[ draw = black, x = 0.625 mm, y = 0.625 mm ] \draw ( 0, 0) -- (48, 0) ( 0,24) -- (24,24) ( 0,48) -- (48,48) ( 6,36) -- (24,36) ( 0, 0) -- ( 0,48) (12, 0) -- (12,48) (24, 0) -- (24,48) (48, 0) -- (48,48) ( 6,24) -- ( 6,48) (24,24) -- (48,24) ( 6,12) node {$a$} (18,12) node {$b$} (36,12) node {$c$} ( 3,36) node {$d$} ( 9,30) node {$e$} ( 9,42) node {$f$} (18,30) node {$g$} (18,42) node {$h$} (36,36) node {$i$}; \end{tikzpicture} \end{array} \equiv \begin{array}{c} \begin{tikzpicture}[ draw = black, x = 0.625 mm, y = 0.625 mm ] \draw ( 0, 0) -- (48, 0) ( 0,24) -- (24,24) ( 0,48) -- (48,48) ( 6,36) -- (24,36) ( 0, 0) -- ( 0,48) (12, 0) -- (12,48) (24, 0) -- (24,48) (48, 0) -- (48,48) ( 6,24) -- ( 6,48) (24,24) -- (48,24) ( 6,12) node {$a$} (18,12) node {$b$} (36,12) node {$c$} ( 3,36) node {$d$} (18,30) node {$e$} ( 9,42) node {$f$} ( 9,30) node {$g$} (18,42) node {$h$} (36,36) node {$i$}; \end{tikzpicture} \end{array} \end{array} \end{equation} Even though the operations are associative, we fully parenthesize monomials so that the algebraic equation corresponds exactly with its geometric realization. In this paper, we begin the classification of commutativity properties with 10 variables which are not consequences of the known results with nine variables. Following Loday \& Vallette \cite{LV2012}, we say \emph{algebraic operad} to mean an operad in the symmetric monoidal category of vector spaces over a field $\mathbb{F}$. 
When we say simply \emph{operad}, we mean a symmetric algebraic operad generated by two binary operations. For an earlier reference on operads and their applications, see Markl, Shnider \& Stasheff \cite{MSS2002}. For a more recent reference which emphasizes the algorithmic aspects, see Bremner \& Dotsenko \cite{BD2016}. The most common contemporary application of rectangular partitions is to VLSI (very large scale integration): the process of producing integrated circuits by the combination of thousands of transistors into a single silicon chip \cite{KLMH2011}. In microelectronics, block partitions are called \emph{floorplans}: schematic representations of the placement of the major functional components of an integrated circuit. Finding optimal floorplans subject to physical constraints leads to NP-hard problems of combinatorial optimization. An important subset consists of \emph{sliceable} floorplans; these are similar to our dyadic partitions, except that we require the bisections to be exact. Many NP-hard problems have polynomial time solutions in the sliceable case. However, sliceable floorplans are defined to exclude the possibility that four subrectangles intersect in a point, thus rendering the interchange law irrelevant. \section{Overview of results} We recall basic definitions and results from the theory of algebraic operads. We consider seven algebraic operads, each generated by two binary operations. \subsection{Nonassociative operads} The first four have nonassociative operations. 
\begin{definition} \label{freeoperad} $\mathbf{Free}$ is the free operad generated by operations ${\,\scalebox{.67}{$\vartriangle$}\,}$ and ${\,\scalebox{.67}{$\blacktriangle$}\,}$: we identify the basis monomials in arity $n$ with the set $\mathbb{B}_n$ of all labelled rooted complete binary plane trees with $n$ leaves (\emph{tree monomials} for short), where \emph{labelled} means that we assign an operation to each internal node (including the root), and we choose a bijection between the leaf nodes and the argument symbols $\arg_1, \dots, \arg_n$. For $n = 1$ we have only one tree, with no root and one leaf labelled $x_1$. The partial compositions $\circ_i$ in this operad are defined as usual. (We avoid using $\circ$ as an operation symbol since it conflicts with partial composition.) \end{definition} \begin{algorithm} We recall in the general case the recursive algorithm for \emph{converting a tree $T$ to a word $\mu(T)$}. Let $\mathbf{O}$ be the free nonsymmetric operad generated by operations $\Omega = \{ \, \omega_i \mid i \in I \, \}$ indexed by the set $I$, with arities assigned by the function $a\colon I \to \mathbb{N}$. The tree $\vert$ with one (leaf) node may be identified with the word $x_1$. Each operation $\omega_i$ can be identified with either (i) the word $\mu(T_i) = \omega_i(x_1,\dots,x_{a(i)})$, or (ii) the planar rooted tree $T_i$ with root labelled $\omega_i$ and $a(i)$ leaves labelled $x_1, \dots, x_{a(i)}$ from left to right. This defines $\mu(T)$ for trees with exactly one internal node (the root). Now let $T$ be a tree with at least two internal nodes (counting the root), with root labelled $\omega_i$ and with $a(i)$ children (which are leaves or roots of subtrees) denoted $T_1, \dots, T_{a(i)}$. By induction we may assume that $\mu(T_1), \dots, \mu(T_{a(i)})$ have been defined, and so we set $\mu( T ) = \omega_i( \mu(T_1), \dots, \mu( T_{a(i)} ) )$, with the subscripts of the variables changed to produce the identity permutation.
By induction, this defines $\mu(T)$ for every basis tree $T$ in the free operad $\mathbf{O}$. \end{algorithm} \begin{definition} \label{interoperad} $\mathbf{Inter}$ is the quotient of $\mathbf{Free}$ by the operad ideal $\mathrm{I} = \langle \boxplus \rangle$ generated by the interchange law (also called exchange law, medial law, entropic law): \begin{equation} \label{intlaw} \boxplus\colon\, ( a \,{\,\scalebox{.67}{$\vartriangle$}\,}\, b ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( c \,{\,\scalebox{.67}{$\vartriangle$}\,}\, d ) - ( a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, c ) \,{\,\scalebox{.67}{$\vartriangle$}\,}\, ( b \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, d ) \equiv 0. \end{equation} Example \ref{exampleinterchange} gives the geometrical explanation of our symbol for this relation. \end{definition} \begin{definition} \label{bpoperad} $\mathbf{BP}$ is the operad of \emph{block} (or \emph{rectangular}) partitions of the (open) unit square $I^2$, $I = (0,1)$. To be precise, a block partition $P$ of $I^2$ is determined by a finite set $C$ of open line segments contained in $I^2$ such that $P = I^2 \setminus \bigcup C$ is the disjoint union of open subrectangles $(x_1,x_2) \times (y_1,y_2)$, called \emph{empty blocks}. The segments in $C$, which are called \emph{cuts}, must be either horizontal or vertical, that is $H = (x_1,x_2) \times \{y_0\}$ or $V = \{x_0\} \times (y_1,y_2)$, where $0 \le x_0, x_1, x_2, y_0, y_1, y_2 \le 1$ with $x_1 < x_2$, $y_1 < y_2$. We assume that the cuts are \emph{maximal} in the sense that if two elements $H, V \in C$ intersect then one is horizontal, the other is vertical, and $H \cap V$ is a single point. 
$\mathbf{BP}$ has two binary operations: the \emph{horizontal} (resp.~\emph{vertical}) operation $x \rightarrow y$ (resp.~$x \uparrow y$) translates $y$ one unit to the east (resp.~north), forms the union of $x$ and translated $y$ to produce a block partition of a rectangle of width (resp.~height) two, scales this rectangle horizontally (resp.~vertically) by one-half, and produces another block partition. The operadic analogues are as follows. If $x$ is a block partition with $m$ parts ordered $x_1, \dots, x_m$ in some way, then $x$ is an $m$-ary operation: for any other block partition $y$ with $n$ parts, the partial composition $x \circ_i y$ ($1 \le i \le m$) is the result of scaling $y$ to have the same size as $x_i$, and replacing $x_i$ by the scaled partition $y$, producing a new block partition with $m{+}n{-}1$ parts. Let $\,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\,$ and $\boxminus$ denote the two block partitions with two equal parts: the first has a vertical (resp.~horizontal) cut and represents the horizontal (resp.~vertical) operation; the parts are labelled 1, 2 in the positive direction, namely east (resp.~north). These two operations form a basis for the homogeneous space $\mathbf{BP}(2)$. The original operations are then defined as follows: \begin{equation} \label{bpoperations} \begin{array}{l} x \rightarrow y = ( \, \,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\, \circ_1 x \, ) \circ_{m+1} y = ( \, \,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\, \circ_2 y \, ) \circ_1 x, \\[0.5mm] x \uparrow y = ( \, \boxminus \circ_1 x \, ) \circ_{m+1} y = ( \, \boxminus \circ_2 y \, ) \circ_1 x. \end{array} \end{equation} $\mathbf{BP}$ is a set operad, but we make it into an algebraic operad in the usual way (see \S\ref{vectorsetoperads}): we define operations on elements and extend to linear combinations. 
\end{definition} \begin{definition} \label{dbpoperad} $\mathbf{DBP}$ is the unital suboperad of $\mathbf{BP}$ generated by $\mathbf{BP}(2)$, where unital means we include the unary operation represented by $I^2$, the block partition with one part. Thus $\mathbf{DBP}$ consists of the \emph{dyadic} partitions, meaning that every $P$ in $\mathbf{DBP}$ with $n{+}1$ parts comes from some $Q$ in $\mathbf{DBP}$ with $n$ parts by exact bisection of a part of $Q$ horizontally or vertically. The free double interchange magma is the algebra over $\mathbf{DBP}$ generated by $I^2$. \end{definition} \begin{algorithm} \label{defbinarypartition} For the general dimension $d \ge 1$, a \emph{dyadic block partition} $P$ with $k$ parts of the open unit $d$-cube $I^d$, $I = (0,1)$, is constructed by setting $P_1 \leftarrow \{ I^d \}$ and performing the following steps for $i = 1, \dots, k{-}1$: \begin{itemize}[leftmargin=*] \item Choose an element $B \in P_i$ and a coordinate axis $j \in \{1,\dots,d\}$. \item Set $c \leftarrow \tfrac12(a_j{+}b_j)$ where $(a_j,b_j)$ is the projection of $B$ onto the coordinate $x_j$. \item Set $\{ B', B'' \} \leftarrow B \setminus \{ \, x \in B \mid x_j = c \, \}$: the disjoint open blocks obtained from bisecting $B$ by the hyperplane $x_j = c$. \item Set $P_{i+1} \leftarrow ( \, P_i \setminus \{ B \} \, ) \sqcup \{ B', B'' \}$: in $P_i$, replace block $B$ with blocks $B'$, $B''$. \end{itemize} Finally, set $P \leftarrow P_k$. \end{algorithm} \begin{definition} \label{defsubrectangle} Let $P$ be a block partition of $I^2$ of arity $n$ determined by a set $C$ of line segments. Then $P = I^2 \setminus C$ is the disjoint union of $n$ empty blocks $B_1, \dots, B_n$ and we indicate this by writing $P = \bigsqcup B_i$. Suppose that the open rectangle $R = (x_1,x_2) \times ( y_1,y_2) \subseteq I^2$ admits a block partition (in the obvious sense) into the disjoint union of a subset $B_{i_1}, \dots, B_{i_m}$ of $m$ empty blocks from $P$. 
In this case we say that $R$ is a \emph{subrectangle} of $P$ of arity $m$. Every empty block $B_i$ is a subrectangle of $P$ of arity 1. \end{definition} \begin{definition} \label{defgeomap} The \emph{geometric realization map} $\Gamma\colon \mathbf{Free} \to \mathbf{BP}$ is the morphism of operads defined on tree monomials as follows: $\Gamma( \,\vert\, ) = I^2$, where $\vert$ is the (unique) tree with one node (a leaf), and recursively we define \begin{equation} \begin{array}{c} \Gamma( \, T_1 {\,\scalebox{.67}{$\vartriangle$}\,} T_2 \, ) = \adjustbox{valign=m} {\begin{xy} ( 5, 5 )*+{\Gamma(T_1)}; ( 15, 5 )*+{\Gamma(T_2)}; ( 0, 0 ) = "1"; ( 0, 10 ) = "2"; ( 10, 0 ) = "3"; ( 10, 10 ) = "4"; ( 20, 0 ) = "5"; ( 20, 10 ) = "6"; { \ar@{-} "1"; "2" }; { \ar@{-} "3"; "4" }; { \ar@{-} "5"; "6" }; { \ar@{-} "1"; "5" }; { \ar@{-} "2"; "6" }; \end{xy}} = \Gamma(T_1) \rightarrow \Gamma(T_2), \\[4mm] \Gamma( \, T_1 {\,\scalebox{.67}{$\blacktriangle$}\,} T_2 \, ) = \adjustbox{valign=m} {\begin{xy} ( 5, 5 )*+{\Gamma(T_1)}; ( 5, 15 )*+{\Gamma(T_2)}; ( 0, 0 ) = "1"; ( 0, 10 ) = "2"; ( 0, 20 ) = "3"; ( 10, 0 ) = "4"; ( 10, 10 ) = "5"; ( 10, 20 ) = "6"; { \ar@{-} "1"; "3" }; { \ar@{-} "4"; "6" }; { \ar@{-} "1"; "4" }; { \ar@{-} "2"; "5" }; { \ar@{-} "3"; "6" }; \end{xy}} = \Gamma(T_1) \uparrow \Gamma(T_2). \end{array} \end{equation} \end{definition} \begin{lemma} \label{lemma1} The image of $\Gamma$ is the operad $\Gamma( \mathbf{Free} ) = \mathbf{DBP}$ of dyadic block partitions. The kernel of $\Gamma$ is the operad ideal $\mathrm{ker}(\Gamma) = \langle \boxplus \rangle$ generated by the interchange law. Hence there is an operad isomorphism $\mathbf{Inter} \cong \mathbf{DBP}$. \end{lemma} \begin{proof} The first statement is clear, the second is Lemma \ref{nasrinslemma}, and the third is an immediate consequence of the first and second. 
\end{proof} \begin{notation} \label{gammanotation} We write $\mathrm{I} = \mathrm{ker}(\Gamma) = \langle \boxplus \rangle$, and $\gamma\colon \mathbf{Inter} \rightarrow \mathbf{DBP}$ for the isomorphism of Lemma \ref{lemma1}. Then the geometric realization map $\Gamma = \iota \circ \gamma \circ \chi$ factors through the natural surjection $\chi \colon \mathbf{Free} \twoheadrightarrow \mathbf{Inter}$ and the inclusion $\iota \colon \mathbf{DBP} \hookrightarrow \mathbf{BP}$: \begin{equation} \mathbf{Free} \twoheadrightarrow \mathbf{Free} / \mathrm{I} = \mathbf{Free} / \langle \boxplus \rangle = \mathbf{Inter} \xrightarrow{\;\gamma\;} \mathbf{DBP} \hookrightarrow \mathbf{BP}. \end{equation} See Figure \ref{bigpicture}. \end{notation} \begin{remark} We mention but do not elaborate on the similarity between (i) the straightforward $n$-dimensional generalizations of the operads $\mathbf{BP}$ and $\mathbf{DBP}$, and (ii) the much-studied operads $E_n$ which are weakly equivalent to the topological operads of little $n$-discs and little $n$-cubes. We refer the reader to McClure \& Smith \cite{MS2004} for further details and references. \end{remark} \subsection{Associative operads} The last three operads have associative operations. 
\begin{definition} \label{assocboperad} $\mathbf{AssocB}$ is the quotient of $\mathbf{Free}$ by the operad ideal $\mathrm{A} = \langle \mathrm{A}_{\wedgehor}, \mathrm{A}_{\wedgever} \rangle$ generated by the associative laws for two operations: \begin{equation} \label{associativelaws} \begin{array}{l} \mathrm{A}_{\wedgehor}(a,b,c) = ( \, a \,{\,\scalebox{.67}{$\vartriangle$}\,}\, b \, ) \,{\,\scalebox{.67}{$\vartriangle$}\,}\, c - a \,{\,\scalebox{.67}{$\vartriangle$}\,}\, ( \, b \,{\,\scalebox{.67}{$\vartriangle$}\,}\, c \, ), \\ \mathrm{A}_{\wedgever}(a,b,c) = ( \, a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, b \, ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, c - a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( \, b \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, c \, ). \end{array} \end{equation} This \emph{two-associative} operad is denoted $\mathbf{2as}$ by Loday \& Ronco \cite{LR2006}. It is clumsy to regard the basis elements of $\mathbf{AssocB}$ as cosets of binary tree monomials in $\mathbf{Free}$ modulo the ideal $\mathrm{A} = \langle \mathrm{A}_{\wedgehor}, \mathrm{A}_{\wedgever} \rangle$. We write $\overline{{\,\scalebox{.67}{$\vartriangle$}\,}}$ and $\overline{{\,\scalebox{.67}{$\blacktriangle$}\,}}$ for the operations in $\mathbf{AssocB}$, where the bar indicates the quotient modulo $\mathrm{A}$. To be precise, for tree monomials $x, y \in \mathbf{Free}$, we define $\overline{{\,\scalebox{.67}{$\vartriangle$}\,}}$ and $\overline{{\,\scalebox{.67}{$\blacktriangle$}\,}}$ by these equations: \begin{equation} \begin{array}{l} \left( \, x + \mathrm{A} \, \right) \,\overline{{\,\scalebox{.67}{$\vartriangle$}\,}}\, \left( \, y + \mathrm{A} \, \right) = \left( \, x \,{\,\scalebox{.67}{$\vartriangle$}\,}\, y \, \right) + \mathrm{A}, \\ \left( \, x + \mathrm{A} \, \right) \,\overline{{\,\scalebox{.67}{$\blacktriangle$}\,}}\, \left( \, y + \mathrm{A} \, \right) = \left( \, x \,{\,\scalebox{.67}{$\blacktriangle$}\,} y\, \, \right) + \mathrm{A}. 
\end{array} \end{equation} \end{definition} \begin{definition} \label{assocnboperad} $\mathbf{AssocNB}$ is an isomorphic copy of $\mathbf{AssocB}$ corresponding to the following change of basis. We write $\rho \colon \mathbf{AssocB} \to \mathbf{AssocNB}$ to represent rewriting a coset representative (a binary tree) as a nonbinary (= not necessarily binary) tree. The new basis consists of the disjoint union $\{ x_1 \} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\vartriangle$}\,} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\blacktriangle$}\,}$ of the isolated leaf $x_1$ and two copies of the set $\mathbb{T}$ of all labelled rooted \emph{not necessarily binary} plane trees with at least one internal node (counting the root). We assume that each internal node has at least two children, and so every tree in $\mathbb{T}$ has at least two leaves. If $T$ is a tree in $\mathbb{T}$ with root $r$, then the \emph{level} $\ell(s)$ of any internal node $s$ is the length of the unique path from $r$ to $s$ in $T$. In $\mathbb{T}_{\,\scalebox{.67}{$\vartriangle$}\,}$, the root $r$ of every tree $T$ has label ${\,\scalebox{.67}{$\vartriangle$}\,}$, and the label of an internal node $s$ is ${\,\scalebox{.67}{$\vartriangle$}\,}$ (resp.~${\,\scalebox{.67}{$\blacktriangle$}\,}$) if $\ell(s)$ is even (resp.~odd). In $\mathbb{T}_{\,\scalebox{.67}{$\blacktriangle$}\,}$, the labels of the internal nodes are reversed. If $T$ is in $\mathbb{T}_{\,\scalebox{.67}{$\vartriangle$}\,} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\blacktriangle$}\,}$ (so $n \ge 2$), then we include the $n!$ trees for all bijections between the leaves and the argument symbols $\arg_1, \dots, \arg_n$. Lemma \ref{assoclemma} gives a precise statement of the bijection between these two bases. For further information, see Loday \& Ronco \cite[\S5]{LR2006}. \end{definition} \begin{remark} If the choice of basis is not relevant, then we write $\mathbf{Assoc}$ to represent the operad $\mathbf{AssocB} \cong \mathbf{AssocNB}$. 
\end{remark} \begin{definition} \label{diaoperad} $\mathbf{DIA}$ is the quotient of $\mathbf{Free}$ by the operad ideal $\langle \mathrm{A}_{\wedgehor}, \mathrm{A}_{\wedgever}, \boxplus \rangle$. This is the operad governing double interchange \emph{algebras}, which possess two associative operations satisfying the interchange law. \end{definition} \subsection{Set and vector operads} \label{vectorsetoperads} The operads $\mathbf{Inter}$, $\mathbf{AssocB}$, $\mathbf{AssocNB}$, $\mathbf{DIA}$ are defined by relations of the form $v_1 - v_2 \equiv 0$ (equivalently $v_1 \equiv v_2$) where $v_1, v_2$ are cosets of tree monomials in $\mathbf{Free}$. We could therefore work entirely with set operads, since we never need to consider linear combinations. Vector spaces and sets are connected by a pair of adjoint functors: the forgetful functor sending a vector space $V$ to its underlying set, and the left adjoint sending a set $S$ to the free vector space on $S$ (the vector space with basis $S$). The connection between vector spaces and sets is reflected in the relation between Gr\"obner bases for operads and the theory of rewriting systems: if we compute a syzygy for two tree polynomials $v_1 - v_2$ and $w_1 - w_2$, then the common multiple of the leading terms cancels, and we obtain another difference of tree monomials; similarly, from a critical pair of rewrite rules $v_1 \mapsto v_2$ and $w_1 \mapsto w_2$, we obtain another rewrite rule\footnote{We thank Vladimir Dotsenko for this clarification.}. We state our main results in terms of set operads, but strictly speaking, we work with algebraic operads. A double interchange semigroup is a module over an operad $\mathbf{DIS}$ in the category of sets; the corresponding notion over the algebraic operad $\mathbf{DIA}$ is a double interchange algebra. Our main reason for using algebraic rather than set operads is that the former theory is much better developed. 
For more about the relation between set and algebraic operads, see Giraudo \cite[\S1.1.2]{Giraudo2016}. \subsection{Morphisms between operads} \label{morphisms} Our goal in this paper is to understand the operad $\mathbf{DIA}$, which is the quotient of $\mathbf{Free}$ by the operad ideal generated by the associative and interchange laws. We have no convenient normal form for the basis monomials of $\mathbf{DIA}$ (that is, no convenient way to choose a canonical representative for each equivalence class in the quotient operad). As we have just seen, there is a convenient normal form when we factor out associativity but not interchange. As we will see later (Lemma \ref{nasrinslemma}), there is also a convenient normal form when we factor out interchange but not associativity: the dyadic block partitions. Our approach will be to use the (tree) monomial basis of the operad $\mathbf{Free}$; to these monomials, we apply rewriting rules which express associativity of each operation (from right to left, or the reverse) and the interchange law between the operations (from black to white, or the reverse). These rewritings convert one tree monomial in $\mathbf{Free}$ to another tree monomial which is equivalent to the first modulo associativity and interchange. In other words, given an element $X$ of $\mathbf{DIA}$ represented by a tree monomial $T$ in $\mathbf{Free}$, these rewritings convert $T$ to another tree monomial $T'$ which lies in the same inverse image (fibre) $X$ as $T$ with respect to the natural surjection $\mathbf{Free} \twoheadrightarrow \mathbf{DIA}$. We must allow undirected rewriting because of the complicated way in which associativity and interchange interact: in order to pass from $T$ to $T'$, we may need to apply associativity from left to right, then apply interchange, and then apply associativity from right to left.
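For instance (a small illustration of our own, chosen only to exhibit the pattern): in order to apply the interchange law to the factors $b \,{\,\scalebox{.67}{$\vartriangle$}\,}\, c$ and $d \,{\,\scalebox{.67}{$\vartriangle$}\,}\, e$ in the first monomial below, we must first reassociate the operation ${\,\scalebox{.67}{$\blacktriangle$}\,}$ (twice, from left to right) so that these factors become siblings:
\begin{equation*}
\begin{array}{l}
( \, a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( b \,{\,\scalebox{.67}{$\vartriangle$}\,}\, c ) \, ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( \, ( d \,{\,\scalebox{.67}{$\vartriangle$}\,}\, e ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, f \, ) \\
\quad \longmapsto \;
a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( \, ( \, ( b \,{\,\scalebox{.67}{$\vartriangle$}\,}\, c ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( d \,{\,\scalebox{.67}{$\vartriangle$}\,}\, e ) \, ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, f \, )
\; \longmapsto \;
a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( \, ( \, ( b \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, d ) \,{\,\scalebox{.67}{$\vartriangle$}\,}\, ( c \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, e ) \, ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, f \, ),
\end{array}
\end{equation*}
after which the associativity steps may be reversed (applied from right to left).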
We present a commutative diagram of operads and morphisms in Figure \ref{bigpicture}: \begin{itemize}[leftmargin=*] \item $\alpha$ is the natural surjection from $\mathbf{Free}$ onto $\mathbf{Assoc} = \mathbf{Free} / \mathrm{A}$. \item $\chi$ is the natural surjection from $\mathbf{Free}$ onto $\mathbf{Inter} = \mathbf{Free} / \mathrm{I}$. \item $\overline{\alpha}$ is the natural surjection from $\mathbf{Inter}$ onto $\mathbf{DIA} = \mathbf{Inter} / \langle\mathrm{A}_{\wedgehor}{+}\mathrm{I},\mathrm{A}_{\wedgever}{+}\mathrm{I}\rangle$. \item $\overline{\chi}$ is the natural surjection from $\mathbf{Assoc}$ onto $\mathbf{DIA} = \mathbf{Assoc} / \langle\boxplus{+}\mathrm{A}\rangle$. \item $\overline{\chi} \circ \alpha = \overline{\alpha} \circ \chi$: the diagram commutes. \item For $\gamma$, $\iota$, $\Gamma = \iota \circ \gamma \circ \chi$ see Definition \ref{defgeomap} and Notation \ref{gammanotation}. \item For $\rho$ see Definition \ref{assocnboperad}. \end{itemize} \begin{figure}[ht] \begin{adjustbox}{center} \setlength{\fboxsep}{12pt} \fbox{ \setlength{\fboxsep}{4pt} \setlength\fboxrule{1pt} \begin{xy} ( 0, 0 )*+{\mathbf{Free}} = "free"; ( 48, -36 )*+{\mathbf{BP}} = "bp"; ( 48, -24 )*+{\mathbf{DBP}} = "dbp"; ( 48, -12 )*+{\mathbf{Inter}} = "inter"; ( 48, 0 )*+{\circlearrowleft} = "x"; ( 48, 12 )*+{\mathbf{AssocB}} = "assocb"; ( 48, 24 )*+{\mathbf{AssocNB}} = "assocnb"; ( 96, 0 )*+{\boxed{\mathbf{DIA}}} = "dia"; { \ar@{->>}^{-/\langle\mathrm{A}_{\wedgehor},\mathrm{A}_{\wedgever}\rangle\quad}_{\alpha} "free"; "assocb" }; { \ar@{->>}^{-/\langle\boxplus\rangle}_{\chi} "free"; "inter" }; { \ar@/_6mm/@{.>}_{\Gamma} "free"; "bp" }; { \ar@{->}_{\gamma}^{\text{isomorphism}} "inter"; "dbp" }; { \ar@{->>}^{-/\langle\mathrm{A}_{\wedgehor}{+}\mathrm{I},\mathrm{A}_{\wedgever}{+}\mathrm{I}\rangle\qquad}_{\overline{\alpha}} "inter"; "dia" }; { \ar@{->}^{\rho}_{\text{isomorphism}} "assocb"; "assocnb" }; { \ar@{->>}^{-/\langle\boxplus{+}\mathrm{A}\rangle}_{\overline{\chi}} 
"assocb"; "dia" }; { \ar@{->}^{\text{inclusion}}_{\iota} "dbp"; "bp" }; \end{xy} } \end{adjustbox} \vspace{-10pt} \caption{Big picture of operads and morphisms for rewriting monomials} \label{bigpicture} \end{figure} \subsection{Diagram chasing and commutativity} \label{diagramchasing} By a \emph{monomial} $X$ in $\mathbf{DIA}$, we mean an equivalence class of (tree) monomials in $\mathbf{Free}$, modulo the equivalence relation generated by the relations $\mathrm{A}_{\wedgehor}$, $\mathrm{A}_{\wedgever}$, $\boxplus$. Thus $X$ is a nonempty subset of (tree) monomials in $\mathbf{Free}$, and a representative of $X$ is simply an element of $X$. We start with a monomial $X$ in $\mathbf{DIA}$ and choose a convenient representative (tree) monomial $T \in X$. To the tree monomial $T$, we freely apply any sequence of rewrite rules of the following two types: \begin{itemize}[leftmargin=*] \item \emph{Reassociating} in either direction (left to right or right to left) with respect to either operation, ${\,\scalebox{.67}{$\vartriangle$}\,}$ or ${\,\scalebox{.67}{$\blacktriangle$}\,}$: this means applying $\alpha$ to $T$ to obtain a unique element of $\mathbf{Assoc}$, namely the coset $\alpha(T) = T + \mathrm{A}$; rewriting the binary tree $T$ as a nonbinary tree as explained in Definition \ref{assocnboperad}; and applying $\alpha^{-1}$ by choosing a different binary tree $T'$ representing the same nonbinary tree: $T + \mathrm{A} = T' + \mathrm{A}$, i.e., $\alpha(T) = \alpha(T')$. 
\item \emph{Interchanging} in either direction, left to right or right to left (more precisely horizontal to vertical, or vertical to horizontal, i.e., white root to black root, or black root to white root): this means applying $\chi$ to $T$ to obtain a unique element of $\mathbf{Inter}$, namely the coset $\chi(T) = T + \mathrm{I}$; rewriting the binary tree $T$ as a dyadic block partition (Lemma \ref{nasrinslemma}: interchange may only be applied in an unambiguous way to a \emph{binary} tree); and applying $\chi^{-1}$ by choosing a different binary tree $T'$ representing the same dyadic block partition: $T + \mathrm{I} = T' + \mathrm{I}$, i.e., $\chi(T) = \chi(T')$. \end{itemize} The role played by the nonbinary trees when reassociating is analogous to the role played by the dyadic block partitions when interchanging: the actual rewriting of the coset representatives (the tree monomials in $\mathbf{Free}$) takes place using $\rho$ and $\rho^{-1}$ for associativity, and $\gamma$ and $\gamma^{-1}$ for the interchange law. We point out that: \begin{itemize}[leftmargin=*] \item applying associativity, $T \mapsto T' \in \alpha^{-1}(\alpha(T))$, changes the corresponding dyadic block partition, but does not change the nonbinary tree monomial; \item applying the interchange law, $T \mapsto T' \in \chi^{-1}(\chi(T))$, changes the corresponding nonbinary tree monomial, but does not change the dyadic block partition. \end{itemize} This rewriting process is unavoidable because we do not have a well-defined normal form for elements in $\mathbf{DIA}$, but we do have easily computable normal forms for elements of $\mathbf{Free}$, $\mathbf{Assoc} = \mathbf{Free} / \langle \mathrm{A}_{\wedgehor}, \mathrm{A}_{\wedgever} \rangle$ and $\mathbf{Inter} = \mathbf{Free} / \langle \boxplus \rangle$. 
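To make these two observations concrete, consider the smallest instances (assuming, as in Figure \ref{bigpicture}, that ${\,\scalebox{.67}{$\vartriangle$}\,}$ is realized geometrically by horizontal juxtaposition). Reassociating
\[
( x_1 \,{\,\scalebox{.67}{$\vartriangle$}\,}\, x_2 ) \,{\,\scalebox{.67}{$\vartriangle$}\,}\, x_3 \longmapsto x_1 \,{\,\scalebox{.67}{$\vartriangle$}\,}\, ( x_2 \,{\,\scalebox{.67}{$\vartriangle$}\,}\, x_3 )
\]
does not change the nonbinary tree (a single ${\,\scalebox{.67}{$\vartriangle$}\,}$-node with three children), but it moves the inner vertical bisection of the corresponding dyadic block partition from the left half to the right half of the unit square. On the other hand, the interchange rewriting
\[
( x_1 \,{\,\scalebox{.67}{$\vartriangle$}\,}\, x_2 ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( x_3 \,{\,\scalebox{.67}{$\vartriangle$}\,}\, x_4 ) \longmapsto ( x_1 \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, x_3 ) \,{\,\scalebox{.67}{$\vartriangle$}\,}\, ( x_2 \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, x_4 )
\]
does not change the $2 \times 2$ dyadic block partition, but it changes the root label of the nonbinary tree from ${\,\scalebox{.67}{$\blacktriangle$}\,}$ to ${\,\scalebox{.67}{$\vartriangle$}\,}$.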
We apply any number of these rewrite rules in any order, and stop if and when we obtain a tree monomial $T''$ \emph{identical} to the original monomial $T$ \emph{except} for the permutation of the arguments. The equality in $\mathbf{DIA}$ of the cosets of the tree monomials $T$ and $T''$ in $\mathbf{Free}$ is a multilinear commutativity relation for double interchange algebras, or equivalently for double interchange semigroups (since we have been working exclusively with basis monomials). \begin{figure}[ht] \setlength{\fboxsep}{10pt} \[ \boxed{ \begin{array}{l} ( ( a {\,\scalebox{.67}{$\blacktriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) \;\; [\in \mathbf{Free}] \\ \xrightarrow{\makebox[12mm]{$\alpha$}} \; ( ( a {\,\scalebox{.67}{$\blacktriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocB}] \\ \xrightarrow{\makebox[12mm]{$\rho$}} \; ( a {\,\scalebox{.67}{$\blacktriangle$}\,} b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocNB}] \\ \xrightarrow{\makebox[12mm]{$ \rho^{-1} $}} \; ( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocB}] \\ \xrightarrow{\makebox[12mm]{$\alpha^{-1}$}} \; ( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) \;\; [\in 
\mathbf{Free}] \\ \xrightarrow{\makebox[12mm]{$\chi$}} \; ( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) + \mathrm{I} \;\; [\in \mathbf{Inter}] \\[8pt] \xrightarrow{\makebox[12mm]{$\gamma$}} \; \adjustbox{valign=m} {\begin{xy} ( 0, 0 ) = "1"; ( 0, 12 ) = "2"; ( 0, 24 ) = "3"; ( 12, 0 ) = "4"; ( 12, 12 ) = "5"; ( 12, 24 ) = "6"; ( 24, 0 ) = "7"; ( 24, 12 ) = "8"; ( 24, 24 ) = "9"; ( 0, 18 ) = "10"; ( 12, 18 ) = "11"; ( 18, 0 ) = "12"; ( 18, 12 ) = "13"; { \ar@{-} "1"; "3" }; { \ar@{=} "4"; "6" }; { \ar@{-} "7"; "9" }; { \ar@{-} "1"; "7" }; { \ar@{-} "2"; "8" }; { \ar@{-} "3"; "9" }; { \ar@{-} "10"; "11" }; { \ar@{-} "12"; "13" }; ( 6, 6 )*+{a} = "a"; ( 6, 15 )*+{b} = "b"; ( 6, 21 )*+{c} = "c"; ( 15, 6 )*+{d} = "d"; ( 21, 6 )*+{e} = "e"; ( 18, 18 )*+{f} = "f"; \end{xy}} = \adjustbox{valign=m} {\begin{xy} ( 0, 0 ) = "1"; ( 0, 12 ) = "2"; ( 0, 24 ) = "3"; ( 12, 0 ) = "4"; ( 12, 12 ) = "5"; ( 12, 24 ) = "6"; ( 24, 0 ) = "7"; ( 24, 12 ) = "8"; ( 24, 24 ) = "9"; ( 0, 18 ) = "10"; ( 12, 18 ) = "11"; ( 18, 0 ) = "12"; ( 18, 12 ) = "13"; { \ar@{-} "1"; "3" }; { \ar@{-} "4"; "6" }; { \ar@{-} "7"; "9" }; { \ar@{-} "1"; "7" }; { \ar@{=} "2"; "8" }; { \ar@{-} "3"; "9" }; { \ar@{-} "10"; "11" }; { \ar@{-} "12"; "13" }; ( 6, 6 )*+{a} = "a"; ( 6, 15 )*+{b} = "b"; ( 6, 21 )*+{c} = "c"; ( 15, 6 )*+{d} = "d"; ( 21, 6 )*+{e} = "e"; ( 18, 18 )*+{f} = "f"; \end{xy}} \;\; [\in \mathbf{DBP}] \;\; \left( \!\! \begin{tabular}{c} double line \\ denotes root \\ operation \end{tabular} \!\! 
\right) \\ \xrightarrow{\makebox[12mm]{$\gamma^{-1}$}} \; ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) + \mathrm{I} \;\; [\in \mathbf{Inter}] \\ \xrightarrow{\makebox[12mm]{$\chi^{-1}$}} \; ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) \;\; [\in \mathbf{Free}] \\ \xrightarrow{\makebox[12mm]{$\alpha$}} \; ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocB}] \\ \xrightarrow{\makebox[12mm]{$ \rho $}} \; ( a {\,\scalebox{.67}{$\vartriangle$}\,} d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocNB}] \\ \xrightarrow{\makebox[12mm]{$ \rho^{-1} $}} \; ( ( a {\,\scalebox{.67}{$\vartriangle$}\,} d ) {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocB}] \\ \xrightarrow{\makebox[12mm]{$\alpha^{-1}$}} \; ( ( a {\,\scalebox{.67}{$\vartriangle$}\,} d ) {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) \;\; [\in \mathbf{Free}] \end{array} } \] \vspace{-15pt} \caption{Example of rewriting in free double interchange semigroups} \label{rewritingexample} \end{figure} \begin{example} In Figure \ref{rewritingexample} we present a 
simple exercise in rewriting, which does not lead to a commutativity property (such an example would require at least nine variables), but which should suffice to illustrate the preceding discussion. For clarity, we include in square brackets the position in Figure \ref{bigpicture} of the current object. We emphasize that we never work directly with elements of $\mathbf{DIA}$. \end{example} \section{Background in categorical algebra} Most of the operadic and geometric objects studied in this paper originate in category theory; we mention Mac Lane \cite{MacLane1965}, Kelly \& Street \cite{KS1974}, Street \cite{Street1996} as mathematical references, Kr\"omer \cite{Kromer2007} for historical and philosophical aspects. \subsection{Many binary operations} Many different structures may be regarded as extensions of the notion of semigroup to the case of $d \ge 2$ binary operations. (The results of this paper concern only $d = 2$.) We give definitions for the general case $d \ge 2$ only when this requires no more space than $d = 2$. \begin{definition} \label{defmtuplesemigroup} A $d$-\emph{tuple magma} is a nonempty set $S$ with $d$ binary operations $S \times S \to S$, denoted $(a,b) \mapsto a \star_i b$ for $1 \le i \le d$. A $d$-\emph{tuple semigroup} is a $d$-tuple magma in which every operation satisfies the associative law. A $d$-\emph{tuple interchange magma} is a $d$-tuple magma in which every pair of distinct operations satisfies the interchange law. A $d$-\emph{tuple interchange semigroup} is a $d$-tuple semigroup in which every pair of distinct operations satisfies the interchange law. (Some authors refer to the last structure simply as ``a $d$-tuple semigroup''.) 
\end{definition} \begin{example} \label{exampleinterchange} Double interchange magmas have operations $\rightarrow$ (horizontal) and $\uparrow$ (vertical) related by the interchange law, which expresses the equality of two sequences of bisections which partition a square into four smaller squares: \[ ( a \rightarrow b ) \uparrow ( c \rightarrow d ) \, \equiv \begin{array}{c} \begin{tikzpicture}[draw=black, x=6 mm, y=6 mm] \node [Square] at ($(0, 0.0)$) {$a$}; \node [Square] at ($(1, 0.0)$) {$b$}; \node [Square] at ($(0,-1.2)$) {$c$}; \node [Square] at ($(1,-1.2)$) {$d$}; \end{tikzpicture} \end{array} \equiv \begin{array}{c} \begin{tikzpicture}[draw=black, x=6 mm, y=6 mm] \node [Square] at ($(0, 0)$) {$a$}; \node [Square] at ($(1, 0)$) {$b$}; \node [Square] at ($(0,-1)$) {$c$}; \node [Square] at ($(1,-1)$) {$d$}; \end{tikzpicture} \end{array} \equiv \begin{array}{c} \begin{tikzpicture}[draw=black, x=6 mm, y=6 mm] \node [Square] at ($(0.0, 0)$) {$a$}; \node [Square] at ($(1.2, 0)$) {$b$}; \node [Square] at ($(0.0,-1)$) {$c$}; \node [Square] at ($(1.2,-1)$) {$d$}; \end{tikzpicture} \end{array} \equiv \; ( a \uparrow c ) \rightarrow ( b \uparrow d ). \] \end{example} \subsection{The Eckmann-Hilton argument} Structures with binary operations $\rightarrow$ and $\uparrow$ satisfying the interchange law arose during the late 1950s and early 1960s, in universal algebra, algebraic topology, and double category theory. The operations are usually associative, but even without this assumption, Eckmann \& Hilton \cite{EH1962} showed that if we allow them to possess unit elements $e_\rightarrow$ and $e_\uparrow$ then the interchange law forces the units to be equal: \begin{align*} e_\rightarrow = e_\rightarrow \rightarrow e_\rightarrow &= ( e_\rightarrow \uparrow e_\uparrow ) \rightarrow ( e_\uparrow \uparrow e_\rightarrow ) \\ &= ( e_\rightarrow \rightarrow e_\uparrow ) \uparrow ( e_\uparrow \rightarrow e_\rightarrow ) = e_\uparrow \uparrow e_\uparrow = e_\uparrow. 
\end{align*} From this, it further follows that each operation is the opposite of the other, and that the two operations coincide: if we write $e = e_\rightarrow = e_\uparrow$ then \begin{align*} & x \rightarrow y \equiv ( e \uparrow x ) \rightarrow ( y \uparrow e ) \equiv ( e \rightarrow y ) \uparrow ( x \rightarrow e ) \equiv y \uparrow x, \\ & y \uparrow x \equiv ( y \rightarrow e ) \uparrow ( e \rightarrow x ) \equiv ( y \uparrow e ) \rightarrow ( e \uparrow x ) \equiv y \rightarrow x. \end{align*} Thus there remains \emph{one commutative operation}, which is in fact also \emph{associative}: \[ ( a b ) c \equiv ( a b ) ( e c ) \equiv ( a e ) ( b c ) \equiv a ( b c ). \] Hence we assume that \emph{at most one} of the operations possesses a unit element. \begin{theorem} Let $S$ be a $d$-tuple interchange magma with operations $\star_1, \dots, \star_d$ for $d \ge 2$. If these operations have unit elements, then the units are equal, the operations coincide, and the remaining operation is commutative and associative. \end{theorem} \begin{proof} See the paper of Eckmann \& Hilton \cite{EH1962}, especially Theorem 3.33 (page 236), the definition of $\mathbf{H}$-structure (page 241), and Theorem 4.17 (page 244). \end{proof} \subsection{Double categories} Extension of the notion of semigroup to sets with $d \ge 2$ operations received a strong impetus in the 1960s from different approaches to two-dimensional category theory: see Ehresmann \cite{Ehresmann1963} for double categories, B\'enabou \cite{Benabou1967} for bicategories, Kelly \& Street \cite{KS1974} for 2-categories. The survey by Street \cite{Street1996} also covers higher-dimensional categories; pasting diagrams \cite{Johnson1989,Power1991} and parity complexes \cite{Street1991} arose as extensions of the interchange law to higher dimensions. For our purposes, the most relevant concept is that of double category; we mention in particular the work of Dawson \& Par\'e \cite{DP1993}.
The most natural example is the double category $\mathbf{Cat}$ which has small categories as objects, functors as 1-morphisms, and natural transformations as 2-morphisms; it has two associative operations, horizontal composition of functors and vertical composition of natural transformations, which satisfy the interchange law. \begin{definition} A \emph{double category} $\mathbf{D}$ is an ordered pair of categories $( \mathbf{D}_0, \mathbf{D}_1 )$, together with functors $e \colon \mathbf{D}_0 \to \mathbf{D}_1$ and $s, t \colon \mathbf{D}_1 \to \mathbf{D}_0$. In $\mathbf{D}_0$ we denote objects by capital Latin letters $A$, $A'$, \dots (the 0-cells of $\mathbf{D}$) and morphisms by arrows labelled with lower-case italic letters $u$, $v$, \dots (the vertical 1-cells of $\mathbf{D}$). In $\mathbf{D}_1$ we denote objects by arrows labelled with lower-case italic letters $h$, $k$, \dots (the horizontal 1-cells of $\mathbf{D}$), and the morphisms by lower-case Greek letters $\alpha$, $\beta$, \dots (the 2-cells of $\mathbf{D}$). If $A$ is an object in $\mathbf{D}_0$ then $e(A)$ is the (horizontal) identity arrow on $A$. (Recall by the Eckmann-Hilton argument that identity arrows may exist in only one direction.) The functors $s$ and $t$ are the source and target: if $h$ is a horizontal arrow in $\mathbf{D}_1$ then $s(h)$ and $t(h)$ are its domain and codomain, objects in $\mathbf{D}_0$. These three functors are related by the equation $s(e(A)) = t(e(A)) = A$ for every $A$. 
For 2-cells $\alpha$, $\beta$ horizontal composition $\alpha \,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\, \beta$ and vertical composition $\alpha \boxminus \beta$ are defined by the following diagrams and satisfy the interchange law: \begin{equation*} \adjustbox{valign=m}{% \begin{xy} ( 0, 0 )*+{A} = "a"; ( 12, 0 )*+{C} = "c"; ( 24, 0 )*+{E} = "e"; ( 0, 12 )*+{B} = "b"; ( 12, 12 )*+{D} = "d"; ( 24, 12 )*+{F} = "f"; ( 6, 1 ) = "ac"; ( 6, 11 ) = "bd"; ( 18, 1 ) = "ce"; ( 18, 11 ) = "df"; { \ar@{->}^{u} "a"; "b" }; { \ar@{->}_{v} "c"; "d" }; { \ar@{->}_{w} "e"; "f" }; { \ar@{->}_{h} "a"; "c" }; { \ar@{->}_{k} "c"; "e" }; { \ar@{->}^{\ell} "b"; "d" }; { \ar@{->}^{m} "d"; "f" }; { \ar@{=>}_{\alpha} "ac"; "bd" }; { \ar@{=>}_{\beta} "ce"; "df" }; \end{xy} } \!\!\cong \adjustbox{valign=m}{% \begin{xy} ( 0, 0 )*+{A} = "a"; ( 18, 0 )*+{E} = "e"; ( 0, 12 )*+{B} = "b"; ( 18, 12 )*+{F} = "f"; ( 9, 1 ) = "ae"; ( 9, 11 ) = "bf"; { \ar@{->}^{u} "a"; "b" }; { \ar@{->}_{w} "e"; "f" }; { \ar@{->}_{k \circ h} "a"; "e" }; { \ar@{->}^{m \circ \ell} "b"; "f" }; { \ar@{=>}_{\alpha \,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\, \beta} "ae"; "bf" }; \end{xy} } \quad \adjustbox{valign=m}{% \begin{xy} ( 0, 0 )*+{A} = "a"; ( 0, 12 )*+{B} = "b"; ( 0, 24 )*+{C} = "c"; ( 12, 0 )*+{D} = "d"; ( 12, 12 )*+{E} = "e"; ( 12, 24 )*+{F} = "f"; ( 6, 1 ) = "ad"; ( 6, 11 ) = "be-"; ( 6, 13 ) = "be+"; ( 6, 23 ) = "cf"; { \ar@{->}^{u} "a"; "b" }; { \ar@{->}^{v} "b"; "c" }; { \ar@{->}_{w} "d"; "e" }; { \ar@{->}_{x} "e"; "f" }; { \ar@{->}_{h} "a"; "d" }; { \ar@{->}_{} "b"; "e" }; { \ar@{->}^{\ell} "c"; "f" }; { \ar@{=>}_{\alpha} "ad"; "be-" }; { \ar@{=>}_{\beta} "be+"; "cf" }; \end{xy} } \!\!\!\!\cong \adjustbox{valign=m}{% \begin{xy} ( 0, 0 )*+{A} = "a"; ( 0, 12 )*+{C} = "c"; ( 18, 0 )*+{D} = "d"; ( 18, 12 )*+{F} = "f"; ( 9, 1 ) = "h"; ( 9, 11 ) = "\ell"; { \ar@{->}_{h} "a"; "d" }; { \ar@{->}^{\ell} "c"; "f" }; { \ar@{->}^{v \circ u} "a"; "c" }; { \ar@{->}_{x \circ w} "d"; "f" }; { 
\ar@{=>}_{\alpha \boxminus \beta} "h"; "\ell" }; \end{xy} } \end{equation*} A monoid may be viewed as a category with one object; this restriction guarantees that all morphisms are composable. Similarly, a double interchange semigroup may be viewed as a double category with one object. If we retain associativity and omit interchange, then we obtain a sesquicategory; see Stell \cite{Stell1995}. \end{definition} \subsection{Endomorphism PROPs} In the category of vector spaces and linear maps over a field $\mathbb{F}$, the endomorphism PROP of $V$ is the bigraded direct sum \[ \mathbf{End}(V) = \bigoplus_{p, q \ge 0} \mathbf{End}(V)^{p,q} = \bigoplus_{p, q \ge 0} \mathbf{Lin}( V^{\otimes p}, V^{\otimes q} ), \] where $\mathbf{Lin}( V^{\otimes p}, V^{\otimes q} )$ is the vector space of all linear maps $V^{\otimes p} \longrightarrow V^{\otimes q}$. On $\mathbf{End}(V)$ there are two natural bilinear operations: \begin{itemize}[leftmargin=*] \item The horizontal product: for $f\colon V^{\otimes p} \longrightarrow V^{\otimes q}$ and $g\colon V^{\otimes r} \longrightarrow V^{\otimes s}$ we define the operation $\otimes\colon \mathbf{End}(V)^{p,q} \otimes \mathbf{End}(V)^{r,s} \longrightarrow \mathbf{End}(V)^{p+r,q+s}$ as follows: \[ f \otimes g \colon V^{\otimes (p+r)} \cong V^{\otimes p} \otimes V^{\otimes r} \longrightarrow V^{\otimes q} \otimes V^{\otimes s} \cong V^{\otimes (q+s)}. \] \item The vertical product: for $f\colon V^{\otimes p} \longrightarrow V^{\otimes q}$ and $g\colon V^{\otimes q} \longrightarrow V^{\otimes r}$ we define the operation $\circ \colon \mathbf{End}(V)^{p,q} \otimes \mathbf{End}(V)^{q,r} \longrightarrow \mathbf{End}(V)^{p,r}$ as follows: \[ g \circ f \colon V^{\otimes p} \longrightarrow V^{\otimes r}. \] \end{itemize} These two operations satisfy the interchange law. 
If $f\colon V \rightarrow V'$ and $g\colon W \rightarrow W'$ then $f \otimes g \colon V \otimes W \longrightarrow V' \otimes W'$ is defined by interchange between $\otimes$ and function \emph{evaluation}: $( f \otimes g )( v \otimes w ) \equiv f(v) \otimes g(w)$. If $f'\colon V' \rightarrow V''$ and $g'\colon W' \rightarrow W''$ then composition of tensor products of maps is defined by interchange between $\otimes$ and function \emph{composition}: $( f' \otimes g' ) \circ ( f \otimes g ) \equiv ( f' \circ f ) \otimes ( g' \circ g )$. \subsection{Tree sequences and Thompson's group} We consider the group of symmetries of the set of all dyadic partitions of the open unit interval $I = (0,1)$. \begin{definition} \label{defdyadicsubset} A number $x \in I$ is \emph{dyadic of level $b$} if $x = a 2^{-b}$ for positive integers $a, b$ where $a$ is \emph{odd} and $1 \le a \le 2^b{-}1$. A dyadic subset $C \subset I$ is a \emph{tree sequence} (or \emph{dyadic partition}) if $C$ is obtained from $I$ by a sequence of (exact) bisections of open subintervals. (Thus $C$ is the image of an unlabelled plane rooted complete binary tree under the one-dimensional geometric realization map.) For every $a 2^{-b} \in C$, exactly one of $a{-}1$, $a{+}1$ is twice an odd number, say $2a'$, and the other is divisible by 4. Then a dyadic subset is a tree sequence if and only if $x = a 2^{-b} \in C$ implies $p(x) = a' 2^{-b+1} \in C$; that is, every $x \in C$ has a tree parent in $C$. \end{definition} \begin{definition} Let $f$ be a homeomorphism of $[0,1]$ which fixes the endpoints and is piecewise linear. Assume that the subset of $(0,1)$ at which $f$ is not differentiable is a tree sequence, and that at all other interior points $f'(x)$ is a power of 2. The set of all such $f$ is a group under function composition, called \emph{Thompson's group} $F$. For further information, see Cannon et al.~\cite{CFP1996}. 
\end{definition} Let $A = \{ a_1, \dots, a_n \}$ and $B = \{ b_1, \dots, b_n \}$ be (strictly increasing) tree sequences of size $n$ partitioning $(0,1)$ into $n{+}1$ subintervals. We have $f(A) = B$ where $f \in F$ is linear on each subinterval and satisfies $f(a_i) = b_i$ for $1 \le i \le n$. Thus $F$ describes transformations from one rooted binary tree to another. Plane rooted complete binary trees with $n$ internal nodes are in bijection with association types for nonassociative products of $n{+}1$ factors. Hence, we may also regard $F$ as consisting of transformations from one association type to another; in this case, we call $f \in F$ a \emph{reassociation} of the parentheses. We display the bijection between tree sequences and association types for arities $\le 5$ in Figure \ref{treesequences}. \begin{figure}[ht] {\footnotesize \[ \begin{array}{l@{\;}l@{\quad}l@{\;}l@{\,}} \line(1,0){128} & a & \line(1,0){64}\line(0,1){6}\line(1,0){64} & ab \\ \line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){64} & (ab)c & \line(1,0){64}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32} & a(bc) \\ \line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){64} & ((ab)c)d & \line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){64} & (a(bc))d \\ \line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32} & (ab)(cd) & \line(1,0){64}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32} & a((bc)d) \\ \line(1,0){64}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16} & a(b(cd)) \\ \line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){64} & (((ab)c)d)e & \line(1,0){16}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){64} & ((a(bc))d)e \\ 
\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){64} & ((ab)(cd))e & \line(1,0){32}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){64} & (a((bc)d))e \\ \line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){64} & (a(b(cd)))e & \line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32} & ((ab)c)(de) \\ \line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32} & (a(bc))(de) & \line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32} & (ab)((cd)e) \\ \line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16} & (ab)(c(de)) & \line(1,0){64}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32} & a(((bc)d)e) \\ \line(1,0){64}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){32} & a((b(cd))e) & \line(1,0){64}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16} & a((bc)(de)) \\ \line(1,0){64}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){16} & a(b((cd)e)) & \line(1,0){64}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8} & a(b(c(de))) \end{array} \]} \vspace{-5mm} \caption{Tree sequences and association types} \label{treesequences} \end{figure} \section{Preliminary results on commutativity relations} \subsection{Lemmas on associativity and interchange} For $n \ge 1$, the tree monomial basis of $\mathbf{Free}(n)$ is the set $\mathbb{B}_n$ of all complete rooted binary 
plane trees with $n$ leaves, with internal nodes labelled ${\,\scalebox{.67}{$\vartriangle$}\,}$ or ${\,\scalebox{.67}{$\blacktriangle$}\,}$, and leaves labelled by a permutation of $x_1, \dots, x_n$ (Definition \ref{freeoperad}). For $\mathbf{Assoc}$, either we use equivalence classes (under double associativity) of binary trees as basis monomials, or (more conveniently) we use the basis $\mathbb{NB} = \{ x_1 \} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\vartriangle$}\,} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\blacktriangle$}\,}$ of labelled rooted (not necessarily binary) plane trees with \emph{alternating} labels ${\,\scalebox{.67}{$\vartriangle$}\,}$ and ${\,\scalebox{.67}{$\blacktriangle$}\,}$ on internal nodes (Definitions \ref{assocboperad}, \ref{assocnboperad}). \begin{lemma} \label{assoclemma} A basis for $\mathbf{Assoc}(n)$ is the set $\mathbb{NB}_n$ of all trees in $\mathbb{NB}$ with $n$ leaves. \end{lemma} \begin{proof} We give an algorithm for converting a tree $T \in \mathbb{B}_n$ into a tree $\alpha(T) \in \mathbb{NB}_n$. We omit the (trivial but tedious) details of the proof that $\alpha$ is surjective, and that for any tree $U \in \mathbb{NB}_n$, the inverse image $\alpha^{-1}(U) \subseteq \mathbb{B}_n$ consists of a single equivalence class for the congruence on $\mathbb{B}_n$ defined by the consequences of the associativity relations $\mathrm{A}_{\wedgehor}$, $\mathrm{A}_{\wedgever}$ of equation \eqref{associativelaws}. 
We define $\alpha$ by the following diagrams, which indicate that for every $T \in \mathbb{B}_n$, and \emph{every} internal node labelled ${\,\scalebox{.67}{$\vartriangle$}\,}$, the subtree of $T$ with that node as root is rewritten as indicated, obtaining a tree $\alpha(T) \in \mathbb{NB}_n$: \[ \begin{array}{c@{\qquad\qquad}c} \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \xrightarrow{\;\;\;\alpha\;\;\;} \adjustbox{valign=m}{ \begin{xy} ( 6, 12 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "t3" }; { \ar@{-} "root"; "t4" }; \end{xy} } & \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \xrightarrow{\;\;\;\alpha\;\;\;} \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 10, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 8 )*+{T_1} = "t1"; ( 4, 8 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "r" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} 
} \\[9mm] \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \xrightarrow{\;\;\;\alpha\;\;\;} \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "l"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 8 )*+{T_3} = "t3"; ( 12, 8 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "t3" }; { \ar@{-} "root"; "t4" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; \end{xy} } & \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \xrightarrow[\text{no change}]{\;\alpha\;} \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \end{array} \] Switching ${\,\scalebox{.67}{$\vartriangle$}\,}$ and ${\,\scalebox{.67}{$\blacktriangle$}\,}$ throughout defines $\alpha$ for subtrees 
with roots labelled ${\,\scalebox{.67}{$\blacktriangle$}\,}$. \end{proof} Gu et al.~\cite{GLM2008} study various classes of binary trees whose internal nodes are labelled white or black; however, none of their results coincides with our situation. The Knuth rotation correspondence is similar but not identical to our bijection $\alpha$ between binary and nonbinary trees; see Ebrahimi-Fard \& Manchon \cite[\S2]{EFM2014}. For $n \ge 1$, consider the graph $G_n$ whose vertex set is $\mathbb{NB}_n$, the monomial basis of $\mathbf{Assoc}(n)$; the sizes of these sets are given by the large Schr\"oder numbers (OEIS A006318): 1, 2, 6, 22, 90, 394, 1806, \dots. In $G_n$ there is an edge joining tree monomials $v, w$ if and only if $w$ may be obtained from $v$ by one application of the interchange law. The number of \emph{isolated} vertices in $G_n$ \cite{BM2016} is the sequence (OEIS A078482) 1, 2, 6, 20, 70, 254, 948, \dots. This is also the number of planar guillotine partitions of $I^2$ which avoid a certain nondyadic block partition with four parts (equivalently, there is no way to apply the interchange law); see Asinowski et al.~\cite[\S6.2, Remark 1]{ABBMP2013}. \begin{notation} For monomials $m_1, m_2 \in \mathbf{Free}(n)$ with $n \ge 4$, we write $m_1 \equiv m_2$ if and only if $m_1$ and $m_2$ can be obtained from the two sides of the interchange law \eqref{intlaw} by the same sequence of partial compositions. We write $m_1 \sim m_2$ if and only if $\Gamma( m_1 ) = \Gamma( m_2 )$, where $\Gamma$ is the geometric realization map (Definition \ref{defgeomap}). \end{notation} \begin{lemma} \label{nasrinslemma} The equivalence relations $\sim$ and $\equiv$ coincide. That is, $\sim$ is generated by the consequences in arity $n$ of the interchange law \eqref{intlaw}. \end{lemma} \begin{proof} For $n = 1,2,3$, the map $\Gamma$ is injective, so there is nothing to prove.
Now suppose that $n \ge 4$ and that $m_1, m_2 \in \mathbf{Free}(n)$ satisfy $m_1 \sim m_2$; thus for some dyadic block partition $P \in \mathbf{DBP}(n)$ we have $m_1, m_2 \in \Gamma^{-1}(P)$. For $n = 4$, the dihedral group of symmetries of the square acts on the basis of 40 tree monomials; the generators are replacing ${\,\scalebox{.67}{$\vartriangle$}\,}$ (resp.~${\,\scalebox{.67}{$\blacktriangle$}\,}$) by the opposite operation and transposing the operations. In the following argument, we omit the permutations of the indeterminates, but the reasoning remains valid for a symmetric operad. There are nine orbits, of sizes two (twice), four (five times), eight (twice). For each orbit, we choose an orbit representative and display its image under $\Gamma$ in Figure \ref{rectangularpartitions}. The dihedral group also acts in the obvious way on these nine dyadic block partitions. In every case except the first, the size of the orbit generated by the block partition equals the size of the orbit generated by the tree monomial. The first block partition $\boxplus$ is fixed by all 8 symmetries of the square, and the two monomials in $\Gamma^{-1}(\boxplus)$ are the two terms of the interchange law \eqref{intlaw}. This is the only failure of injectivity in arity 4.
\begin{figure}[ht] \footnotesize \[ \begin{array}{c} \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,0.75) -- (1.5,0.75) (0,0) -- (0,1.5) (0.75,0) -- (0.75,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,1.125) -- (1.5,1.125) (0,0.75) -- (1.5,0.75) (0,0.375) -- (1.5,0.375) (0,0) -- (0,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,1.3125) -- (1.5,1.3125) (0,1.125) -- (1.5,1.125) (0,0.75) -- (1.5,0.75) (0,0) -- (0,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,1.125) -- (1.5,1.125) (0,0.75) -- (1.5,0.75) (0,0) -- (0,1.5) (0.75,1.125) -- (0.75,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,0.375) -- (1.5,0.375) (0,0.5625) -- (1.5,0.5625) (0,0.75) -- (1.5,0.75) (0,1.5) -- (1.5,1.5) (0,0) -- (0,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \\[3mm] \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,0.375) -- (1.5,0.375) (0,0.75) -- (1.5,0.75) (0,1.5) -- (1.5,1.5) (0,0) -- (0,1.5) (0.75,0.375) -- (0.75,0.75) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,0.375) -- (1.5,0.375) (0,0.75) -- (1.5,0.75) (0,1.5) -- (1.5,1.5) (0,0) -- (0,1.5) (0.75,0.75) -- (0.75,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,0.75) -- (1.5,0.75) (0,0) -- (0,1.5) (0.75,0) -- (0.75,0.75) (0.375,0) -- (0.375,0.75) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,0.75) -- (1.5,0.75) (0,0.375) -- (0.75,0.375) (0,0) -- (0,1.5) (0.75,0) -- (0.75,0.75) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \end{array} \] \vspace{-5mm} \caption{Orbit representatives for dihedral group in arity 4} \label{rectangularpartitions} \end{figure} Assume that $n \ge 5$ and 
that $\sim$ and $\equiv$ coincide on $\mathbf{Free}(k)$ for $k < n$. Clearly any monomial $m \in \mathbf{Free}(n)$ has the form $m = m_1 \ast m_2$ for some $m_1 \in \mathbf{Free}(k_1)$, $m_2 \in \mathbf{Free}(k_2)$ where $k_1, k_2 < n$, $k_1 + k_2 = n$, and $\ast \in \{ {\,\scalebox{.67}{$\vartriangle$}\,}, {\,\scalebox{.67}{$\blacktriangle$}\,} \}$. Consider a dyadic block partition $P \in \mathbf{DBP}(n)$. There are three cases: \emph{Case 1}. Assume $P$ contains the horizontal bisection of $I^2$, but the two resulting parts do not both have vertical bisections. Let $x, y \in \mathbf{Free}(n)$ be tree monomials in $\Gamma^{-1}(P)$, so $x \sim y$. By assumption, we have $x = x_1 {\,\scalebox{.67}{$\blacktriangle$}\,} x_2$ where $x_i = x'_i {\,\scalebox{.67}{$\vartriangle$}\,} x''_i$ for at most one $i \in \{1,2\}$; and the same for $y$. Since $\Gamma$ is an operad morphism, it follows from $\Gamma( x_1 {\,\scalebox{.67}{$\blacktriangle$}\,} x_2 ) = \Gamma( y_1 {\,\scalebox{.67}{$\blacktriangle$}\,} y_2 )$ that $\Gamma( x_1 ) \uparrow \Gamma( x_2 ) = \Gamma( y_1 ) \uparrow \Gamma( y_2 )$. It is geometrically clear that $\Gamma( x_i ) = \Gamma( y_i )$ for $i \in \{1,2\}$, and this implies that $x_i$ and $y_i$ have the same arity $k_i$. Hence $x_i \sim y_i$, and by induction $x_i \equiv y_i$. Therefore $x \equiv y$. \emph{Case 2}. Assume $P$ contains the vertical bisection of $I^2$, but the two resulting parts do not both have horizontal bisections. The argument is the same as Case 1 with ${\,\scalebox{.67}{$\vartriangle$}\,}$ and ${\,\scalebox{.67}{$\blacktriangle$}\,}$ transposed; this leaves the interchange law \eqref{intlaw} unchanged. \emph{Case 3}. Assume $P$ contains both horizontal and vertical bisections of $I^2$. In addition to the possibilities in Cases 1 and 2, there are two different factorizations for each monomial $x, y \in \Gamma^{-1}(P)$ into products of four factors.
Using both algebraic and geometric notation, we have: \begin{align*} & x = x_1 {\,\scalebox{.67}{$\blacktriangle$}\,} x_2 = ( z_1 {\,\scalebox{.67}{$\vartriangle$}\,} z_2 ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( z_3 {\,\scalebox{.67}{$\vartriangle$}\,} z_4 ) \stackrel{\boxplus}{=} ( z_1 {\,\scalebox{.67}{$\blacktriangle$}\,} z_3 ) {\,\scalebox{.67}{$\vartriangle$}\,} ( z_2 {\,\scalebox{.67}{$\blacktriangle$}\,} z_4 ) = y_1 {\,\scalebox{.67}{$\vartriangle$}\,} y_2 = y, \\ & \Gamma(x) = \adjustbox{valign=m} {\begin{xy} ( 15, 5 )*+{\Gamma(x_1)}; ( 15, 15 )*+{\Gamma(x_2)}; ( 10, 0 ) = "1"; ( 10, 10 ) = "2"; ( 10, 20 ) = "3"; ( 20, 0 ) = "4"; ( 20, 10 ) = "5"; ( 20, 20 ) = "6"; { \ar@{-} "1"; "3" }; { \ar@{-} "4"; "6" }; { \ar@{-} "1"; "4" }; { \ar@{-} "2"; "5" }; { \ar@{-} "3"; "6" }; \end{xy}} = \adjustbox{valign=m} {\begin{xy} ( 5, 5 )*+{\Gamma(z_1)}; ( 5, 15 )*+{\Gamma(z_3)}; ( 15, 5 )*+{\Gamma(z_2)}; ( 15, 15 )*+{\Gamma(z_4)}; ( 0, 0 ) = "1"; ( 0, 10 ) = "2"; ( 0, 20 ) = "3"; ( 10, 0 ) = "4"; ( 10, 10 ) = "5"; ( 10, 20 ) = "6"; ( 20, 0 ) = "7"; ( 20, 10 ) = "8"; ( 20, 20 ) = "9"; { \ar@{-} "1"; "3" }; { \ar@{-} "4"; "6" }; { \ar@{-} "7"; "9" }; { \ar@{-} "1"; "7" }; { \ar@{-} "2"; "8" }; { \ar@{-} "3"; "9" }; \end{xy}} = \adjustbox{valign=m} {\begin{xy} ( 5, 5 )*+{\Gamma(y_1)}; ( 15, 5 )*+{\Gamma(y_2)}; ( 0, 0 ) = "1"; ( 0, 10 ) = "2"; ( 10, 0 ) = "3"; ( 10, 10 ) = "4"; ( 20, 0 ) = "5"; ( 20, 10 ) = "6"; { \ar@{-} "1"; "2" }; { \ar@{-} "3"; "4" }; { \ar@{-} "5"; "6" }; { \ar@{-} "1"; "5" }; { \ar@{-} "2"; "6" }; \end{xy}} = \Gamma(y). \end{align*} If $x \sim y$ then either (i) the claim follows from the equivalence of factors in lower arity as in Cases 1 and 2, or (ii) the claim follows from an application of the interchange law in arity $n$ as indicated in the last two equations. \end{proof} For a generalization of Lemma \ref{nasrinslemma} to $d \ge 2$ nonassociative operations, see \cite{BD2017}. 
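Lemma \ref{nasrinslemma} can be verified by exhaustive computation in arity 4. The following sketch is our own illustration, not part of the development above: it encodes tree monomials as nested tuples with the hypothetical labels `'H'` for the horizontal product ${\,\scalebox{.67}{$\vartriangle$}\,}$ and `'V'` for the vertical product ${\,\scalebox{.67}{$\blacktriangle$}\,}$, realizes $\Gamma$ by repeated bisection of the unit square, and omits the leaf labels (as in the proof above). It confirms that among the 40 tree monomials of arity 4, the only coincidence of images under $\Gamma$ is the pair formed by the two sides of the interchange law, leaving 39 distinct dyadic block partitions.

```python
from fractions import Fraction
from itertools import product

def trees(n):
    """All complete binary trees with n (unlabelled) leaves and
    internal nodes labelled 'H' (horizontal) or 'V' (vertical)."""
    if n == 1:
        return [None]
    out = []
    for k in range(1, n):
        for op, l, r in product('HV', trees(k), trees(n - k)):
            out.append((op, l, r))
    return out

def realize(t, x0, y0, x1, y1, blocks):
    """Geometric realization by repeated bisection:
    'H' splits the rectangle left/right, 'V' splits it bottom/top."""
    if t is None:
        blocks.add((x0, y0, x1, y1))
        return
    op, l, r = t
    if op == 'H':
        xm = (x0 + x1) / 2
        realize(l, x0, y0, xm, y1, blocks)
        realize(r, xm, y0, x1, y1, blocks)
    else:
        ym = (y0 + y1) / 2
        realize(l, x0, y0, x1, ym, blocks)
        realize(r, x0, ym, x1, y1, blocks)

def gamma(t):
    """Dyadic block partition of the unit square determined by t."""
    blocks = set()
    realize(t, Fraction(0), Fraction(0), Fraction(1), Fraction(1), blocks)
    return frozenset(blocks)

monomials = trees(4)                      # 40 tree monomials in arity 4
images = {gamma(t) for t in monomials}    # 39 distinct block partitions
print(len(monomials), len(images))
```

Exact rational arithmetic (`Fraction`) is used so that equality of block coordinates is tested exactly rather than in floating point.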
\subsection{Cuts and slices} Recall the notions of empty blocks and subrectangles in a block partition from Definitions \ref{bpoperad} and \ref{defsubrectangle}. \begin{definition} Let $P$ be a block partition of $I^2$ and $R$ a subrectangle of $P$. By a \emph{main cut} in $R$ we mean a horizontal or vertical bisection of $R$. Every subrectangle has at most two main cuts; the empty block is the only block partition with no main cut. Suppose that a main cut partitions $R$ into subrectangles $R_1$ and $R_2$. If either $R_1$ or $R_2$ has a main cut parallel to the main cut of $R$, this is called a \emph{primary cut} in $R$. This definition extends as follows: if the subrectangle $S$ of $R$ is one of the subrectangles obtained from a sequence of cuts all of which are parallel to a main cut of $R$ then a main cut of $S$ is a primary cut of $R$. In a given direction, we include the main cut of $R$ as a primary cut. Let $C_1, \dots, C_\ell$ be all the primary cuts of $R$ parallel to a given main cut $C_i$ of $R$ ($1 \le i \le \ell$) in their natural order (bottom to top, or left to right) so that there is no primary cut between $C_j$ and $C_{j+1}$ for $1 \le j \le \ell{-}1$. Define the artificial ``cuts'' $C_0$ and $C_{\ell+1}$ to be the bottom and top (or left and right) sides of $R$. We write $S_j$ for the $j$-th \emph{slice} of $R$ parallel to the given main cut; that is, the subrectangle between $C_{j-1}$ and $C_j$ for $1 \le j \le \ell{+}1$. \end{definition} \begin{definition} Let $m$ be a monomial of arity $n$ in the operad $\mathbf{Free}$. We say that $m$ \emph{admits a commutativity relation} if for some transposition $(ij) \in S_n$ ($i < j$), the following relation holds for the corresponding cosets in $\mathbf{DIA}$: \[ m(x_1,\dots,x_i,\dots,x_j,\dots,x_n) \equiv m(x_1,\dots,x_j,\dots,x_i,\dots,x_n). 
\] \end{definition} We emphasize (referring to the commutative diagram of Figure \ref{bigpicture}) that the proof of a commutativity property for the monomial $m$ consists of a sequence of applications of associativity and the interchange law starting from $m$ and ending with the same pattern of parentheses and operations but with a different permutation of the indeterminates. \begin{proposition} \label{twomaincuts} Let $m$ be a tree monomial in $\mathbf{Free}$ which admits a commutativity relation. Assume that this commutativity relation is not the result of operad partial composition with a commutativity relation of lower arity, either from (i) a commutativity relation holding in a proper factor of $m$, or (ii) a commutativity relation holding in a proper quotient of $m$, by which we mean substitution of the same decomposable factor for the same indecomposable argument in both sides of a commutativity relation of lower arity. If $P = \Gamma(m)$ is the corresponding dyadic block partition of $I^2$ then $P$ contains both of the main cuts (horizontal and vertical); that is, it must be possible to apply the interchange law as a rewrite rule at the root of the tree monomial $m$. \end{proposition} \begin{proof} Any dyadic block partition $P$ of $I^2$ has at least one main cut, corresponding to the root of the tree monomial $m$ for which $P = \Gamma(m)$. Transposing the $x$ and $y$ axes if necessary (this corresponds to switching the horizontal and vertical operation symbols in the monomial $m$), we may assume that $P$ contains the vertical main cut, corresponding to the operation ${\,\scalebox{.67}{$\vartriangle$}\,}$ in the monomial $m = m_1 {\,\scalebox{.67}{$\vartriangle$}\,} m_2$: \[ P = \begin{array}{|c|c|} \hline P_1 & P_2 \\ \hline \end{array} \] Let $P_1 = \Gamma(m_1)$ and $P_2 = \Gamma(m_2)$ be the dyadic block partitions of $I^2$ corresponding to $m_1$ and $m_2$.
If both $P_1$ and $P_2$ have the horizontal main cut, then we are done, since these two cuts combine to produce the horizontal main cut for $P$: \[ P = \begin{array}{|c|c|} \hline P''_1 & P''_2 \\ \hline P'_1 & P'_2 \\ \hline \end{array} \] Otherwise, at most one of $P_1$ and $P_2$ has the horizontal main cut. Reflecting in the vertical line $x = \tfrac12$ if necessary (this corresponds to replacing the horizontal operation ${\,\scalebox{.67}{$\vartriangle$}\,}$ by its opposite throughout the tree monomial $m$), we may assume that $P_1$ does \emph{not} have the horizontal main cut. Then either $P_1$ has the vertical main cut, or $P_1$ has no main cut (so that $P_1$ is an empty block): \[ P = \begin{array}{|c|c|c|} \hline P'_1 & P''_1 & P_2 \\ \hline \end{array} \qquad \text{or} \qquad P = \begin{array}{|c|c|} \hline P_1 & P_2 \\ \hline \end{array} \] In either case, $P_1$ is the union of $k \ge 1$ consecutive vertical slices $S_1, \dots, S_k$ from left to right, where we assume that $k$ is as large as possible so that the slices are as thin as possible. It follows that each of these vertical slices either is an empty block or has only one main cut which is horizontal: \[ P = \begin{array}{|c|c|c|c|} \hline S_1 & \cdots & S_k & P_2 \\ \hline \end{array} \] Therefore, in the monomial $m_1$ for which $P_1 = \Gamma( m_1 )$, each of these vertical slices corresponds either to an indecomposable indeterminate $x_j$ or a decomposable element $t {\,\scalebox{.67}{$\blacktriangle$}\,} u$ whose root operation is the vertical operation (corresponding to the horizontal main cut).
By assumption, $P_1$ does not have the horizontal main cut, and so at least one of the vertical slices $S_j$ does not have the horizontal main cut; we choose $j$ to be as small as possible, thereby selecting the leftmost vertical slice without the horizontal main cut: \[ P = \begin{array}{|c|c|c|c|c|c|} \hline S_1 & \cdots & S_j & \cdots & S_k & P_2 \\ \hline \end{array} \] By maximality of the choice of $k$, the vertical slice $S_j$ does not have the vertical main cut either. Thus $S_j$ has no main cut, and hence $S_j$ is the empty block, and so in the monomial $m_1$, the vertical slice $S_j$ corresponds to an indecomposable indeterminate $x_j$. Therefore $m$ must have the following form, where $v$ and/or $w$ may be absent (that is, $v {\,\scalebox{.67}{$\vartriangle$}\,} x_j {\,\scalebox{.67}{$\vartriangle$}\,} w$ may be $x_j {\,\scalebox{.67}{$\vartriangle$}\,} w$ or $v {\,\scalebox{.67}{$\vartriangle$}\,} x_j$ or simply $x_j$): \begin{equation} \label{vwequation} m = m_1 {\,\scalebox{.67}{$\vartriangle$}\,} m_2 = v {\,\scalebox{.67}{$\vartriangle$}\,} x_j {\,\scalebox{.67}{$\vartriangle$}\,} w {\,\scalebox{.67}{$\vartriangle$}\,} m_2. \end{equation} (We may omit parentheses since the operation ${\,\scalebox{.67}{$\vartriangle$}\,}$ is associative.) If both $v$ and $w$ are absent, then $m_1 = x_j$ and so $m = x_j {\,\scalebox{.67}{$\vartriangle$}\,} m_2$. In this case, it is clear that the only way in which the interchange law can be applied as a rewrite rule to $m$ is within the submonomial $m_2$. But this implies that any commutativity relation which holds for $m$ is a consequence of a commutativity relation for $m_2$, contradicting our assumption. If $x_j$ is not the only argument in $m_1$ then there is at least one factor $v$ or $w$ on the left or right side of $x_j$ in equation \eqref{vwequation}.
We want to be able to apply the interchange law as a rewrite rule in a way which involves all of $m$; otherwise, any commutativity relation which holds for $m$ must be a consequence of a commutativity relation for a proper submonomial, contradicting our assumption. Let us write the monomial \eqref{vwequation} as a tree monomial; it has the form \begin{equation} \label{vwtree1} \adjustbox{valign=m}{ \begin{xy} ( 12, 12 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 0, 0 )*+{\boxed{v}} = "t1"; ( 8, 0 )*+{x_j} = "t2"; ( 16, 0 )*+{\boxed{w}} = "t3"; ( 24, 0 )*+{\boxed{m_2}} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "t3" }; { \ar@{-} "root"; "t4" }; \end{xy} } \end{equation} We can apply the interchange law to this tree only in one of the following ways: within $v$, within $w$, within $m_2$, or (if both $w$ and $m_2$ have ${\,\scalebox{.67}{$\blacktriangle$}\,}$ at the root) using the root of \eqref{vwtree1} with $w$ and $m_2$. In the last case, we first rewrite \eqref{vwtree1} as follows: \begin{equation} \label{vwtree2} \adjustbox{valign=m}{ \begin{xy} ( 12, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 20, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "r"; ( 0, 8 )*+{\boxed{v}} = "t1"; ( 8, 8 )*+{x_j} = "t2"; ( 16, 0 )*+{\boxed{w}} = "t3"; ( 24, 0 )*+{\boxed{m_2}} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "r" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \end{equation} After applying the interchange law, we obtain a tree of the following form: \begin{equation} \label{vwtree3} \adjustbox{valign=m}{ \begin{xy} ( 12, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 20, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 8 )*+{\boxed{v}} = "t1"; ( 8, 8 )*+{x_j} = "t2"; ( 16, 0 )*+{\boxed{w'}} = "t3"; ( 24, 0 )*+{\boxed{m'_2}} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "r" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; 
\end{xy} } \end{equation} Thus, no matter how we apply the interchange law to \eqref{vwtree1}, the fact that $x_j$ is a child of the root ${\,\scalebox{.67}{$\vartriangle$}\,}$ remains unchanged. Hence any commutativity relation for $m$ must be a consequence of a commutativity relation of lower arity (either as factor or as quotient), contradicting our original assumption. Therefore both $P_1$ and $P_2$ have the horizontal main cut, and hence $P$ has both main cuts. \end{proof} \subsection{Border blocks and interior blocks} \begin{definition} Let $m$ be a (tree) monomial in the operad $\mathbf{Free}$ which admits a commutativity relation transposing the indeterminates $x_i$ and $x_j$ ($i \ne j$). If $B = \Gamma(m)$ is the labelled block partition of $I^2$ corresponding to $m$ then $B$ is called a \emph{commutative block partition}. The two empty blocks corresponding to $x_i$ and $x_j$ are called \emph{commuting empty blocks}. \end{definition} \begin{definition} Let $B$ be a block partition of $I^2$ consisting of the empty blocks $R_1$, $\dots$, $R_k$. If the closure of $R_i$ has empty intersection with the four sides of the closure $\overline{I^2}$ then $R_i$ is an \emph{interior block}, otherwise $R_i$ is a \emph{border block}. \end{definition} \begin{lemma} \label{borderlemma} Suppose that $B_1 = \Gamma(m_1)$ and $B_2 = \Gamma(m_2)$ are two labelled dyadic block partitions of $I^2$ such that $m_1 \equiv m_2$ in every double interchange semigroup; hence this equivalence must be the result of applying associativity and the interchange law. (This is more general than a commutativity relation for a dyadic block partition.) Then any interior (respectively border) block of $B_1$ remains an interior (respectively border) block in $B_2$. \end{lemma} \begin{proof} It is clear from the geometric realizations that neither associativity nor the interchange law can change an interior block to a border block or conversely. 
\end{proof} \begin{lemma} \label{interiorlemma} Let $B = \Gamma(m)$ be a commutative block partition. Then the two commuting empty blocks must be interior blocks. \end{lemma} \begin{proof} Let $R_1$, \dots, $R_\ell$ be the empty blocks from left to right along the north side of $I^2$. It is clear from the geometric realization that neither associativity nor the interchange law can change the order of $R_1$, \dots, $R_\ell$. The same applies to the other three sides. Hence the sequence of labelled border blocks along each side of $I^2$ is invariant, so a border block cannot take part in a commutativity relation, either with another border block or (by Lemma \ref{borderlemma}) with an interior block. \end{proof} \section{Commutative block partitions in arity 10} \begin{lemma} \label{arity10slices} Let $B = \Gamma(m)$ be a commutative block partition of arity 10. Then $B$ has at least two and at most four parallel slices in either direction (horizontal or vertical). \end{lemma} \begin{proof} Proposition \ref{twomaincuts} shows that $B$ contains both main cuts; since $B$ contains 10 empty blocks, $B$ has at most five parallel slices (four primary cuts) in either direction. But if there are four primary cuts in one direction and the main cut in the other direction, then we have 10 empty blocks, each of which is a border block, contradicting Lemma \ref{interiorlemma}. \end{proof} \begin{lemma} \label{arity10blocks} Let $B = \Gamma(m)$ be a commutative block partition of any arity. Then $B$ has both main cuts by Proposition \ref{twomaincuts}, and hence $B$ is the union of four square quarters $A_1$, \dots, $A_4$ (in the NW, NE, SW, SE corners respectively). If one of these quarters has an empty block which is interior to $B$, then that quarter contains at least three empty blocks. If one of these quarters has two empty blocks which are both interior to $B$, then that quarter contains at least four empty blocks. Hence $B$ contains at least seven empty blocks. \end{lemma} \begin{proof} If one of the four subrectangles has only two empty blocks then these two blocks were created by a main cut, and hence both of them are border blocks in $B$.
Similarly, if one of the rectangles has only three empty blocks, then either these three blocks are three parallel slices (in which case all three are border blocks in $B$) or these three blocks were created by a main cut in one direction followed by the main cut in the other direction in one of the blocks formed by the first main cut (in which case only one of the three blocks is an interior block in $B$). Lemma \ref{interiorlemma} shows that $B$ has at least two interior blocks, and these can occur either in two different subrectangles or in the same subrectangle. For different subrectangles, $B$ contains at least $3+3+1+1$ empty blocks, and for the same subrectangle, $B$ contains at least $4+1+1+1$ empty blocks. \end{proof} \begin{proposition} \label{atleast8proposition} A commutative block partition $B$ has at least eight empty blocks. \end{proposition} \begin{proof} Proposition \ref{twomaincuts} shows that $B$ must have both horizontal and vertical main cuts. Lemma \ref{borderlemma} shows that an interior block cannot commute with a border block, so $B$ must have at least two interior empty blocks. The proof of Lemma \ref{arity10blocks} shows that the number of empty blocks in $B$ is at least seven, with the minimum occurring if and only if there are two interior blocks in the same quarter $A_1$, \dots, $A_4$. Reflecting in the horizontal and/or vertical axes if necessary, we may assume that the NW quarter $A_1$ contains two empty blocks which are interior to $B$ and contains only the horizontal main cut (otherwise we reflect in the NW-SE diagonal). Figure \ref{atleast8} shows the three partitions with seven empty blocks satisfying these conditions. 
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[ draw = black, x = 10 mm, y = 10 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (1.5,2.25) (0.75,1.9) -- (1.5,1.9) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.75,0.75) node {$a$} (2.25,0.75) node {$b$} (1.1,2.1) node {$e$} (0.375,1.8) node {$c$} (1.1,1.7) node {$d$} (0.75,2.625) node {$f$} (2.25,2.25) node {$g$}; \end{tikzpicture} \qquad \begin{tikzpicture}[ draw = black, x = 10 mm, y = 10 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (0.35,1.5) -- (0.35,2.25) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.7500,0.750) node {$a$} (2.2500,0.750) node {$b$} (0.1875,1.875) node {$c$} (0.5625,1.925) node {$d$} (1.1250,1.875) node {$e$} (0.7500,2.625) node {$f$} (2.2500,2.250) node {$g$}; \end{tikzpicture} \qquad \begin{tikzpicture}[ draw = black, x = 10 mm, y = 10 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (1.1,1.5) -- (1.1,2.25) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.7500,0.750) node {$a$} (2.2500,0.750) node {$b$} (0.3750,1.875) node {$c$} (0.9375,1.925) node {$d$} (1.3125,1.875) node {$e$} (0.7500,2.625) node {$f$} (2.2500,2.25) node {$g$}; \end{tikzpicture} \end{center} \vspace{-4mm} \caption{Three block partitions for proof of Proposition \ref{atleast8proposition}} \label{atleast8} \end{figure} \noindent Consider the monomial corresponding to the first partition in Figure \ref{atleast8}. We may apply the interchange law only where two orthogonal cuts intersect at a point which is interior to both; that is, at a plus $+$ configuration. We may apply associativity only where we have $( - \ast - ) \ast -$ or $- \ast ( - \ast - )$. 
At each step, there is only one possible rewriting that may be applied; we underline the three (associativity) or four (interchange law) factors involved: \begin{align*} ( \underline{a} {\,\scalebox{.67}{$\vartriangle$}\,} \underline{b} ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( \underline{( ( c {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\blacktriangle$}\,} e ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} f )} {\,\scalebox{.67}{$\vartriangle$}\,} \underline{g} ) &\equiv ( \underline{a} {\,\scalebox{.67}{$\blacktriangle$}\,} ( \underline{( c {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\blacktriangle$}\,} e ) )} {\,\scalebox{.67}{$\blacktriangle$}\,} \underline{f} ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} g ) \\ &\equiv ( \underline{( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\blacktriangle$}\,} e ) ) )} {\,\scalebox{.67}{$\blacktriangle$}\,} \underline{f} ) {\,\scalebox{.67}{$\vartriangle$}\,} ( \underline{b} {\,\scalebox{.67}{$\blacktriangle$}\,} \underline{g} ) \\ &\equiv ( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\blacktriangle$}\,} e ) ) {\,\scalebox{.67}{$\vartriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( f {\,\scalebox{.67}{$\vartriangle$}\,} g ). \end{align*} Similar calculations apply to the second and third block partitions. In this way, we have computed the entire equivalence class of the original monomial subject to rewriting using associativity and the interchange law. From this we see that no block partition with seven empty blocks admits a commutativity relation. \end{proof} The method used for the proof of Proposition \ref{atleast8proposition} can be extended to show that a dyadic block partition with eight empty blocks cannot be commutative. 
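The single-step rewriting analysis in the proof of Proposition \ref{atleast8proposition} is easily mechanized. The sketch below is our own illustration, encoding tree monomials as nested tuples with the hypothetical labels `'H'` for ${\,\scalebox{.67}{$\vartriangle$}\,}$ and `'V'` for ${\,\scalebox{.67}{$\blacktriangle$}\,}$. It computes the closure of the monomial for the first partition of Figure \ref{atleast8} under one-step applications of associativity and the interchange law, and confirms that the equivalence class consists of exactly four monomials, none of which is obtained from another by transposing two indeterminates, so no commutativity relation holds.

```python
from itertools import combinations

def rewrites(t):
    """One-step rewrites: associativity in either operation and direction,
    and the interchange law (a H b) V (c H d) <-> (a V c) H (b V d)."""
    if isinstance(t, str):          # a leaf: no rewrites
        return
    op, l, r = t
    for l2 in rewrites(l):          # rewrite inside the left subterm
        yield (op, l2, r)
    for r2 in rewrites(r):          # rewrite inside the right subterm
        yield (op, l, r2)
    if isinstance(l, tuple) and l[0] == op:   # (x op y) op z -> x op (y op z)
        yield (op, l[1], (op, l[2], r))
    if isinstance(r, tuple) and r[0] == op:   # x op (y op z) -> (x op y) op z
        yield (op, (op, l, r[1]), r[2])
    other = 'H' if op == 'V' else 'V'
    if isinstance(l, tuple) and isinstance(r, tuple) \
            and l[0] == other and r[0] == other:  # interchange at this node
        yield (other, (op, l[1], r[1]), (op, l[2], r[2]))

def equivalence_class(t):
    """Closure of {t} under one-step rewrites."""
    seen, todo = {t}, [t]
    while todo:
        for v in rewrites(todo.pop()):
            if v not in seen:
                seen.add(v)
                todo.append(v)
    return seen

def swap(t, a, b):
    """Transpose the leaf labels a and b throughout t."""
    if isinstance(t, str):
        return b if t == a else (a if t == b else t)
    return (t[0], swap(t[1], a, b), swap(t[2], a, b))

# first block partition of Figure atleast8:  (a H b) V (((c H (d V e)) V f) H g)
m = ('V', ('H', 'a', 'b'),
         ('H', ('V', ('H', 'c', ('V', 'd', 'e')), 'f'), 'g'))
C = equivalence_class(m)
commutes = any(swap(t, p, q) in C
               for t in C for p, q in combinations('abcdefg', 2))
print(len(C), commutes)
```

The same computation applied to the second and third partitions of Figure \ref{atleast8} checks the remaining cases of the proof.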
In this case, we have the following subcases: (i) one quarter $A_i$ has five empty blocks, and the other three are empty; (ii) one quarter $A_i$ has four empty blocks, another $A_j$ has two empty blocks, and the other two are empty (here we distinguish two subsubcases, depending on whether $A_i$ and $A_j$ share an edge or only a corner); (iii) two quarters $A_i$ and $A_j$ each have three empty blocks, and the other two are empty (with the same two subsubcases). This provides a completely different proof, independent of machine computation, of one of the main results in \cite{BM2016}; we omit the (rather lengthy) details. In what follows, we write $B$ for a commutative block partition with 10 empty blocks. Lemma \ref{interiorlemma} shows that the commuting blocks must be interior blocks, and Lemma \ref{arity10slices} shows that $B$ has either two, three, or four parallel slices in either direction. Thus, if $B$ has three (respectively four) parallel slices in one direction, then the commuting blocks must be in the middle slice (respectively the middle two slices). Without loss of generality, interchanging horizontal and vertical if necessary, we may assume that these parallel slices are vertical. \subsection{Four parallel vertical slices} In this case we have the vertical and horizontal main cuts, and two additional vertical primary cuts. Applying horizontal associativity if necessary, this gives the following configuration: \[ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00); \end{tikzpicture} \] This configuration has eight empty blocks, all of which are border blocks. We need two more cuts to create two interior blocks. 
Applying vertical associativity if necessary in the second slice from the left, and applying a dihedral symmetry of the square if necessary, we are left with three possible configurations: \begin{equation} \label{configsABC} A\colon \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (1.50,0.75) -- (2.25,0.75); \end{tikzpicture} } \quad\quad B\colon \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,0.75) -- (1.50,0.75); \end{tikzpicture} } \quad\quad C\colon \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0.000,0.00) -- (0.000,3.00) (0.750,0.00) -- (0.750,3.00) (1.500,0.00) -- (1.500,3.00) (2.250,0.00) -- (2.250,3.00) (3.000,0.00) -- (3.000,3.00) (0.000,0.00) -- (3.000,0.00) (0.000,1.50) -- (3.000,1.50) (0.000,3.00) -- (3.000,3.00) (0.750,2.25) -- (1.500,2.25) (1.125,1.50) -- (1.125,2.25); \end{tikzpicture} } \end{equation} \subsubsection{Configuration $A$} We present simultaneously the algebraic and geometric steps in the proof of a new commutativity relation. 
We label the empty blocks in the initial configuration as follows: \[ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,0.75) -- (2.25,0.75) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (2.25,0) -- (2.25,3) (1.5,1.5) -- (3,1.5) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.375) node {$f$} (1.875,1.15) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$d$} (1.125,2.625) node {$e$} (1.875,2.25) node {$i$} (2.625,2.25) node {$j$}; \end{tikzpicture} \] We show that this partition admits a commutativity relation transposing $d$ and $g$. We refer the reader to Figure \ref{bigpicture} and \S\ref{diagramchasing} as an aid to understanding the proof. In the following list of monomials, we indicate the four factors $w, x, y, z$ taking part in each application of the interchange law. We omit parentheses in products using the same operation two or more times; the factors $w, x, y, z$ make clear how we reassociate such products between two consecutive applications of the interchange law. 
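For instance, in the first application of the interchange law below, the four factors are $w = f {\,\scalebox{.67}{$\blacktriangle$}\,} g$, $x = h$, $y = i$, $z = j$, and the second half of the initial monomial is rewritten as
\[
( ( f {\,\scalebox{.67}{$\blacktriangle$}\,} g ) {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} j ) \equiv ( ( f {\,\scalebox{.67}{$\blacktriangle$}\,} g ) {\,\scalebox{.67}{$\blacktriangle$}\,} i ) {\,\scalebox{.67}{$\vartriangle$}\,} ( h {\,\scalebox{.67}{$\blacktriangle$}\,} j ).
\]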
The diagrams which appear after the list of monomials represent the same steps in geometric form; in each application of the interchange law as the rewrite rule $( a \star_2 b ) \star_1 ( c \star_2 d ) \mapsto ( a \star_1 c ) \star_2 ( b \star_1 d )$ where $\{ \star_1, \star_2 \} = \{ {\,\scalebox{.67}{$\vartriangle$}\,}, {\,\scalebox{.67}{$\blacktriangle$}\,} \}$, we indicate the root operation $\star_1$ by a thick line and the child operations $\star_2$ by dotted lines: \begin{align} &\text{factors}\; w, x, y, z & & \text{result of application of interchange law} \notag \\ \midrule &\text{initial configuration} & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (d {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\blacktriangle$}\,} g){\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \notag \\[-2pt] &f {\,\scalebox{.67}{$\blacktriangle$}\,} g, h , i, j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (d {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\blacktriangle$}\,} g {\,\scalebox{.67}{$\blacktriangle$}\,} i ){\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \label{I1} \tag{I1} \\[-2pt] &f, g{\,\scalebox{.67}{$\blacktriangle$}\,} i, h , j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (d {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I2} \tag{I2} \\[-2pt] &a, b, c, d {\,\scalebox{.67}{$\blacktriangle$}\,} e & &((a 
{\,\scalebox{.67}{$\blacktriangle$}\,} c){\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I3} \tag{I3} \\[-2pt] &a, c, b {\,\scalebox{.67}{$\blacktriangle$}\,} d, e & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d)){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I4} \tag{I4} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d), c {\,\scalebox{.67}{$\vartriangle$}\,} e, f {\,\scalebox{.67}{$\vartriangle$}\,} h, (g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d)){\,\scalebox{.67}{$\vartriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} h )){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j) \label{I5} \tag{I5} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d), f {\,\scalebox{.67}{$\vartriangle$}\,} h , c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,}(g {\,\scalebox{.67}{$\blacktriangle$}\,} i), j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d)){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e{\,\scalebox{.67}{$\vartriangle$}\,}(g 
{\,\scalebox{.67}{$\blacktriangle$}\,} i))){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \label{I6} \tag{I6} \\[-2pt] &a, b {\,\scalebox{.67}{$\blacktriangle$}\,} d, c {\,\scalebox{.67}{$\vartriangle$}\,} e, g {\,\scalebox{.67}{$\blacktriangle$}\,} i & &((a {\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g {\,\scalebox{.67}{$\blacktriangle$}\,} i)){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \label{I7} \tag{I7} \\[-2pt] &a, c {\,\scalebox{.67}{$\vartriangle$}\,} e, b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g, i & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g)){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} i)){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \label{I8} \tag{I8} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g), c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} i, f {\,\scalebox{.67}{$\vartriangle$}\,} h , j & &(a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g){\,\scalebox{.67}{$\vartriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} h ) ){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} i {\,\scalebox{.67}{$\vartriangle$}\,} j) \label{I9} \tag{I9} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} (b
{\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g), f {\,\scalebox{.67}{$\vartriangle$}\,} h, c {\,\scalebox{.67}{$\vartriangle$}\,} e, i{\,\scalebox{.67}{$\vartriangle$}\,} j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g)){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e) ){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I10} \tag{I10} \\[-2pt] &a, b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g, c, e & &((a {\,\scalebox{.67}{$\blacktriangle$}\,} c){\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g{\,\scalebox{.67}{$\blacktriangle$}\,} e )){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I11} \tag{I11} \\[-2pt] &a, c, b {\,\scalebox{.67}{$\blacktriangle$}\,} d, g{\,\scalebox{.67}{$\blacktriangle$}\,} e & &((a {\,\scalebox{.67}{$\vartriangle$}\,}(b {\,\scalebox{.67}{$\blacktriangle$}\,} d )){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (g{\,\scalebox{.67}{$\blacktriangle$}\,} e) )){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I12} \tag{I12} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,}(b {\,\scalebox{.67}{$\blacktriangle$}\,} d ), c {\,\scalebox{.67}{$\vartriangle$}\,} (g{\,\scalebox{.67}{$\blacktriangle$}\,} e), f {\,\scalebox{.67}{$\vartriangle$}\,} h, i {\,\scalebox{.67}{$\vartriangle$}\,} j & &(a {\,\scalebox{.67}{$\vartriangle$}\,}(b {\,\scalebox{.67}{$\blacktriangle$}\,} d ){\,\scalebox{.67}{$\vartriangle$}\,} (f
{\,\scalebox{.67}{$\vartriangle$}\,} h ) ){\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} (g{\,\scalebox{.67}{$\blacktriangle$}\,} e) {\,\scalebox{.67}{$\vartriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I13} \tag{I13} \\[-2pt] &a, (b {\,\scalebox{.67}{$\blacktriangle$}\,} d ){\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h, c {\,\scalebox{.67}{$\vartriangle$}\,} (g{\,\scalebox{.67}{$\blacktriangle$}\,} e), i {\,\scalebox{.67}{$\vartriangle$}\,} j & &(a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} (((b {\,\scalebox{.67}{$\blacktriangle$}\,} d ){\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I14} \tag{I14} \\[-2pt] &b {\,\scalebox{.67}{$\blacktriangle$}\,} d, f {\,\scalebox{.67}{$\vartriangle$}\,} h, i , j & &(a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} ((b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \label{I15} \tag{I15} \\[-2pt] & b, d {\,\scalebox{.67}{$\blacktriangle$}\,} i, f {\,\scalebox{.67}{$\vartriangle$}\,} h, j & &(a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} ((b {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I16} \tag{I16} \\[-2pt] &a, c {\,\scalebox{.67}{$\vartriangle$}\,} (g
{\,\scalebox{.67}{$\blacktriangle$}\,} e), b {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h, (d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j & &(a {\,\scalebox{.67}{$\vartriangle$}\,} b {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h){\,\scalebox{.67}{$\blacktriangle$}\,} ((c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e)) {\,\scalebox{.67}{$\vartriangle$}\,} ((d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I17} \tag{I17} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} b, f {\,\scalebox{.67}{$\vartriangle$}\,} h, c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e), (d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,}(c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e))) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I18} \tag{I18} \\[-2pt] &f, h, d {\,\scalebox{.67}{$\blacktriangle$}\,} i, j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,}(c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e))) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \label{I19} \tag{I19} \\[-2pt] &f {\,\scalebox{.67}{$\blacktriangle$}\,} d, i, h, j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,}(c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e))) {\,\scalebox{.67}{$\vartriangle$}\,}
(((f {\,\scalebox{.67}{$\blacktriangle$}\,} d) {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I20} \tag{I20} \end{align} The same sequence of rewritings has the following geometric representation: \begin{align*} & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,0.75) -- (2.25,0.75) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.375) node {$f$} (1.875,1.15) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$d$} (1.125,2.625) node {$e$} (1.875,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (2.25,0) -- (2.25,3); \draw[ very thick] (1.5,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I1}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$d$} (1.125,2.625) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[ very thick] (2.25,0) -- (2.25,3); \draw[dotted] (1.5,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I2}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,2.25) -- (2.25,2.25) (2.25,0) -- (2.25,3) (1.5,1.5) -- (3,1.5) (0,0) -- (0,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$d$} (1.125,2.625) node {$e$} (1.875,2.625) 
node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0.75,0) -- (0.75,3); \draw[very thick] (0,1.5) -- (1.5,1.5); \end{tikzpicture}_{\eqref{I3}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (1.5,1.5) -- (3,1.5) (2.25,0) -- (2.25,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.375) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.125) node {$d$} (1.125,2.25) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0,1.5) -- (1.5,1.5) ; \draw[ very thick] (0.75,0) -- (0.75,3); \end{tikzpicture}_{\eqref{I4}} $} \\ & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (2.25,0) -- (2.25,3) (0.75,0) -- (0.75,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.375) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.125) node {$d$} (1.125,2.25) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0,1.5) -- (3,1.5) ; \draw[ very thick] (1.5,0) -- (1.5,3); \end{tikzpicture}_{\eqref{I5}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,2.25) -- (1.5,2.25) (0.75,0) -- (0.75,1.5) (0,0) -- (0,3) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (0.75,1.5) -- (0.75,3) (0.375,1.5) -- (0.375,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.875) node {$g$} (2.625,0.75) node {$h$} (0.185,2.25) node {$c$} (1.25,1.125) node {$d$} (0.6,2.25) node {$e$} (1.125,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[very thick] (0,1.5) -- (3,1.5); \draw[dotted] (1.5,0) -- (1.5,3); \end{tikzpicture}_{\eqref{I6}} 
$} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,1.5) -- (3,1.5) (0.75,0.75) -- (1.5,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.375,1.5) -- (0.375,3) (1.5,0) -- (1.5,3) (1.5,0) -- (1.5,3) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.875) node {$g$} (2.625,0.75) node {$h$} (0.185,2.25) node {$c$} (1.25,1.125) node {$d$} (0.6,2.25) node {$e$} (1.125,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[ very thick] (0,1.5) -- (1.5,1.5); \draw[ dotted] (0.75,0) -- (0.75,3); \end{tikzpicture}_{\eqref{I7}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,1.15) -- (1.5,1.15) (0,0) -- (0,3) (0.375,1.5) -- (0.375,3) (1.5,0) -- (1.5,3) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (1.5,0) -- (1.5,3) (1.5,1.5) -- (3,1.5) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.3) node {$g$} (2.625,0.75) node {$h$} (0.185,2.25) node {$c$} (1.25,0.9) node {$d$} (0.6,2.25) node {$e$} (1.125,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0,1.5) -- (1.5,1.5); \draw[ very thick] (0.75,0) -- (0.75,3); \end{tikzpicture}_{\eqref{I8}} $} \\ & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,1.15) -- (1.5,1.15) (0,0) -- (0,3) (0.375,1.5) -- (0.375,3) (0.75,1.5) -- (0.75,3) (0.75,0) -- (0.75,1.5) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.3) node {$g$} (2.625,0.75) node {$h$} (0.185,2.25) node {$c$} (1.25,0.9) node {$d$} (0.6,2.25) node {$e$} (1.125,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[very thick] (1.5,0) -- (1.5,3); \draw[ dotted] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I9}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ 
draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,1.15) -- (1.5,1.15) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.3) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.25,0.9) node {$d$} (1.25,2.25) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,0) -- (1.5,3); \draw[very thick] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I10}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,1.5) -- (3,1.5) (0.75,0.75) -- (1.5,0.75) (0.75,1.15) -- (1.5,1.15) (0,0) -- (0,3) (1.5,0) -- (1.5,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.3) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.25,0.9) node {$d$} (1.25,2.25) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0.75,0) -- (0.75,3); \draw[very thick] (0,1.5) -- (1.5,1.5); \end{tikzpicture}_{\eqref{I11}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,1.5) -- (3,1.5) (0.75,0.75) -- (1.5,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (1.5,0) -- (1.5,3) (0.375,0.75) node {$a$} (1.125,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.85) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.25) node {$d$} (1.125,2.555) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[very thick] (0.75,0) -- (0.75,3); \draw[dotted] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I12}} $} \\ & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (2.25,0) -- 
(2.25,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.85) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.25) node {$d$} (1.125,2.555) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[very thick] (1.5,0) -- (1.5,3); \draw[dotted] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I13}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,0.75) -- (1.85,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,3) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (1.85,0) -- (1.85,1.5) (2.25,1.5) -- (2.25,3) (1.5,1.5) -- (3,1.5) (0.75,0.75) node {$a$} (1.65,0.6) node {$b$} (2.15,0.75) node {$f$} (1.125,1.85) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.65,1.25) node {$d$} (1.125,2.555) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,0) -- (1.5,3); \draw[ very thick] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I14}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0,0) -- (0,3) (3,0) -- (3,3) (1.5,0) -- (1.5,3) (1.5,0.75) -- (2.25,0.75) (0,1.5) -- (3,1.5) (0.75,2.25) -- (1.5,2.25) (0.75,1.5) -- (0.75,3) (2.625,0) -- (2.625,1.5) (0.75,0.75) node {$a$} (1.875,0.375) node {$b$} (0.375,2.25) node {$c$} (1.875,1.125) node {$d$} (1.125,2.555) node {$e$} (2.4375,0.75) node {$f$} (1.125,1.85) node {$g$} (2.8125,0.75) node {$h$} (1.875,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (2.25,0) -- (2.25,3); \draw[ very thick] (1.5,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I15}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,2.25) -- (2.25,2.25) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (2.625,0) -- (2.625,1.5) (0,1.5) -- (1.5,1.5) (0.75,0.75) node {$a$} (1.875,0.75) 
node {$b$} (2.4375,0.75) node {$f$} (1.125,1.85) node {$g$} (2.8125,0.75) node {$h$} (0.375,2.25) node {$c$} (1.875,1.85) node {$d$} (1.125,2.555) node {$e$} (1.85,2.555) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,1.5) -- (3,1.5); \draw[ very thick] (2.25,0) -- (2.25,3); \end{tikzpicture}_{\eqref{I16}} $} \\ & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,2.25) -- (2.25,2.25) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (2.625,0) -- (2.625,1.5) (2.25,0) -- (2.25,3) (0.75,0.75) node {$a$} (1.875,0.75) node {$b$} (2.4375,0.75) node {$f$} (1.125,1.85) node {$g$} (2.8125,0.75) node {$h$} (0.375,2.25) node {$c$} (1.875,1.85) node {$d$} (1.125,2.555) node {$e$} (1.85,2.555) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0,1.5) -- (3,1.5); \draw[ very thick] (1.5,0) -- (1.5,3); \end{tikzpicture}_{\eqref{I17}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$d$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$g$} (1.125,2.625) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,0) -- (1.5,3); \draw[ very thick] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I18}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0,1.5) -- (1.5,1.5) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$d$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) 
node {$g$} (1.125,2.625) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (2.25,0) -- (2.25,3); \draw[ very thick] (1.5,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I19}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (2.25,0) -- (2.25,3) (1.5,0.75) -- (2.25,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.375) node {$f$} (1.875,1.1) node {$d$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$g$} (1.125,2.625) node {$e$} (1.875,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,1.5) -- (3,1.5); \draw[ very thick] (2.25,0) -- (2.25,3); \end{tikzpicture}_{\eqref{I20}} $} \end{align*} \begin{theorem} In every double interchange semigroup, the following commutativity relation holds for all values of the arguments $a, \dots, j$: \begin{align*} & ((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (d {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\blacktriangle$}\,} g){\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \equiv {} \\ & ((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\blacktriangle$}\,} d){\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \end{align*} \end{theorem} \subsubsection{Configuration $B$} For configuration $B$ in display \eqref{configsABC}, we label only the two blocks which transpose in the commutativity relation. 
The required applications of associativity and the interchange law can easily be reconstructed from the diagrams: \begin{align*} & \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,0.75) -- (1.50,0.75); \draw (1.125,1.875) node {$c$} (1.125,1.1) node {$g$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,1.50) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (1.50,2.25) -- (2.25,2.25) (0.75,0.75) -- (1.50,0.75) (2.625,1.50) --(2.625,3.00); \draw (1.875,1.875) node {$c$} (1.125,1.1) node {$g$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,1.50) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (1.50,0.750) -- (2.25,0.750) (0.75,0.75) -- (1.50,0.75) (2.625,1.50) --(2.625,3.00); \draw (1.875,1.1) node {$c$} (1.125,1.1) node {$g$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (1.50,0.750) -- (2.25,0.750) (0.75,0.75) -- (1.50,0.75); \draw (1.875,1.1) node {$c$} (1.125,1.1) node {$g$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 
mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (1.50,0.75) -- (2.25,0.75); \draw (1.875,1.1) node {$c$} (1.125,1.875) node {$g$}; \end{tikzpicture} } \\[1mm] & \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,1.50) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,0.75) -- (1.50,0.75) (0.375,0.00) --(0.375,1.50); \draw (1.125,1.875) node {$g$} (1.125,1.1) node {$c$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,1.50) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,1.875) -- (1.50,1.875) (0.375,0.00) --(0.375,1.50); \draw (1.125,2.05) node {\scalebox{.67}{$g$}} (1.125,1.65) node {\scalebox{.67}{$c$}}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,1.875) -- (1.50,1.875); \draw (1.125,2.05) node {\scalebox{.67}{$g$}} (1.125,1.65) node {\scalebox{.67}{$c$}}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- 
(2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,0.75) -- (1.50,0.75); \draw (1.125,1.875) node {$g$} (1.125,1.1) node {$c$}; \end{tikzpicture} } \end{align*} \begin{theorem} In every double interchange semigroup, the following commutativity relation holds for all values of the arguments $a, \dots, j$: \[ \begin{array}{l} ( ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( f {\,\scalebox{.67}{$\vartriangle$}\,} ( g {\,\scalebox{.67}{$\blacktriangle$}\,} h ) ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} j ) ) \equiv {} \\[1mm] ( ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} g ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( f {\,\scalebox{.67}{$\vartriangle$}\,} ( c {\,\scalebox{.67}{$\blacktriangle$}\,} h ) ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} j ) ) \end{array} \] \end{theorem} \subsubsection{Configuration $C$} For configuration $C$ in display \eqref{configsABC}, recall that applying the interchange law does not change the partition (only the monomial representing the partition), and applying associativity can be done only horizontally to the entire configuration or vertically to the second slice from the left. None of these operations transposes the two smallest empty blocks, so we obtain no commutativity relation. \subsection{Three parallel horizontal slices} In this subsection we consider horizontal rather than vertical slices, since this makes it a little easier to follow the discussion. 
We do not claim to have discovered all possible commutativity relations with three parallel slices, since the number of cases is very large. However, we determine 32 commutativity relations, 16 of which are new, and 16 of which follow immediately from one of the known arity nine relations \cite{BM2016}. Moreover, the 16 new relations may all be obtained from a single relation by applying associativity and the automorphism group of the square (the dihedral group of order 8). Without loss of generality, this leaves the following two cases. \emph{Case 1}: The horizontal slices have 2, 6, 2 empty blocks, labelled as follows: \begin{center} \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (3,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (1.5,0) -- (1.5,3) (1.125,1.5) -- (1.125,2.25) (2.25,1.5) -- (2.25,2.25) (1.875,1.5) -- (1.875,2.25) (3,0) -- (3,3) (0.75,0.75) node {$a$} (2.25,0.75) node {$b$} (0.375,1.875) node {$c$} (0.9375,1.875) node {$d$} (1.3125,1.875) node {$e$} (1.6875,1.875) node {$f$} (2.0625,1.875) node {$g$} (2.625,1.875) node {$h$} (0.75,2.625) node {$i$} (2.25,2.625) node {$j$}; \end{tikzpicture} \end{center} The two commuting empty blocks could be any two of $d$, $e$, $f$, $g$. But in this configuration, it is easy to see that no sequence of applications of associativity and the interchange law will change the order of these four blocks. \emph{Case 2}: The horizontal slices have 2, 5, 3 empty blocks. There are two subcases, depending on whether the third horizontal slice has two vertical cuts, or one vertical cut and one horizontal cut.
In the latter subcase, we label the blocks as follows: \begin{equation} \label{oneofeach} \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (3,2.25) (1.5,0.75) -- (3,0.75) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (1.5,0) -- (1.5,3) (2.25,1.5) -- (2.25,2.25) (1.92,1.5) -- (1.92,2.25) (3,0) -- (3,3) (0.75,0.75) node {$a$} (2.25,0.375) node {$b$} (2.25,1.125) node {$c$} (0.375,1.875) node {$d$} (1.6875,1.875) node {$f$} (2.0625,1.875) node {$g$} (2.625,1.875) node {$h$} (1.125,1.875) node {$e$} (0.75,2.625) node {$i$} (2.25,2.625) node {$j$}; \end{tikzpicture} } \end{equation} \begin{theorem} In every double interchange semigroup, the following commutativity relation holds for all values of the arguments $a, \dots, j$: \[ \begin{array}{l} (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)){\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\vartriangle$}\,} g) {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \equiv {} \\[1mm] (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (( d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (((g {\,\scalebox{.67}{$\vartriangle$}\,} f){\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \end{array} \] \end{theorem} \begin{proof} We list applications of interchange; the other details are self-explanatory: \begin{align*} & (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)){\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((f 
{\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)){\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)){\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f) {\,\scalebox{.67}{$\blacktriangle$}\,} i) ) {\,\scalebox{.67}{$\vartriangle$}\,} ((b {\,\scalebox{.67}{$\blacktriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f)) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c) ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv ((a {\,\scalebox{.67}{$\vartriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} c) ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} 
h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e)) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c) ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e)) {\,\scalebox{.67}{$\blacktriangle$}\,} i ) {\,\scalebox{.67}{$\vartriangle$}\,} ( (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g )) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} h)) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j) \\ &\equiv (a 
{\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g ) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} h {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g ) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f ) {\,\scalebox{.67}{$\blacktriangle$}\,} i ) {\,\scalebox{.67}{$\vartriangle$}\,} ((b {\,\scalebox{.67}{$\blacktriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) 
{\,\scalebox{.67}{$\blacktriangle$}\,} ((( d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f){\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (( d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f{\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (( d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} f{\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \end{align*} The proof is complete. \end{proof} In the subcase with two vertical cuts in the third slice, the corresponding diagram is the same as \eqref{oneofeach} except that the lower right block containing $b$ and $c$ is rotated 90 degrees clockwise. We obtain a commutativity relation for this block partition, but this relation is easily seen to be a consequence of identity 3992 from \cite{BM2016}. \section{Concluding remarks} In this final section we briefly mention possible directions for future research. \subsubsection*{Mixed structures} We have studied two binary operations, both associative or both nonassociative, related by the interchange law. More generally, for $p, q \ge 0$ let $I = \{ 1, \dots, p{+}q \}$; choose subsets $J \subseteq I$ and $K \subseteq \{ \, \{k,\ell\} \mid k, \ell \in I, \, k \ne \ell \, \}$.
Let $S$ be a set with $p{+}q$ binary operations, $p$ associative $\star_1$, \dots, $\star_p$ and $q$ nonassociative $\star_{p+1}$, \dots, $\star_{p+q}$, satisfying interchange between $\star_j$ and itself for $j \in J$, and between $\star_k$, $\star_\ell$ for $\{k,\ell\} \in K$. The operads we have studied in this paper correspond to $(p,q) = (0,2)$ or $(2,0)$ with $J = \emptyset$ and $K = \{\{1,2\}\}$. \subsubsection*{Higher arity interchange laws} We have studied only binary operations. More generally, let $S$ be a nonempty set, $M_m(S)$ the set of all $m$-ary operations $f\colon S^m \to S$, and $X = ( x_{ij} )$ an $m \times n$ array with entries in $S$. If $f \in M_m(S)$, $g \in M_n(S)$ then we may apply $f$, $g$ to $X$ either by applying $g$ to each row vector, obtaining an $m \times 1$ column vector, and then applying $f$; or the reverse. If the results are equal then $f$, $g$ satisfy the $m \times n$ interchange law (we also say that $f$, $g$ commute): \[ \begin{array}{l} f( g( x_{11}, \dots, x_{1n} ), \dots, g( x_{m1}, \dots, x_{mn} ) ) \equiv \\ g( f( x_{11}, \dots, x_{m1} ), \dots, f( x_{1n}, \dots, x_{mn} ) ). \end{array} \] Since $f$ acts on columns and $g$ on rows, we may write $f( X g ) \equiv ( f X ) g$, showing that interchange may be regarded as a form of associativity. \subsubsection*{Higher dimensions} We have studied structures with two operations, corresponding to the horizontal and vertical directions in two dimensions. Most of our constructions make sense for any number of dimensions $d \ge 2$. One obstacle for $d \ge 3$ is that the monomial basis for $\mathbf{Assoc}$ consisting of nonbinary trees with alternating white and black internal nodes ($\mathbf{AssocNB}$) does not generalize in a straightforward way. \subsubsection*{Associativity for two operations} With more than one operation, there are various forms of associativity; we have only considered the simplest: each operation is individually associative.
The operations may also associate with each other in various ways: black-white associativity, $( a {\,\scalebox{.67}{$\vartriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} c \equiv a {\,\scalebox{.67}{$\vartriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c )$; total associativity (black-white and white-black); compatibility (every linear combination of the operations is associative); diassociativity (black-white and the two bar identities). \subsubsection*{Variations on the interchange law} In universal algebra, the interchange law is called the \emph{medial} identity; it has a close relative, the \emph{paramedial} identity, in which the outer arguments transpose: $( a {\,\scalebox{.67}{$\vartriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} d ) = ( d {\,\scalebox{.67}{$\blacktriangle$}\,} b ) {\,\scalebox{.67}{$\vartriangle$}\,} ( c {\,\scalebox{.67}{$\blacktriangle$}\,} a )$. In general, one considers $d$ operations of arity $n$, and relations $m_1 \equiv m_2$ where $\{ m_1, m_2 \}$ is an unordered pair of monomials of arity $N = 1+w(n{-}1)$ in which $m_1$ has the identity permutation of $N$ distinct variables and $m_2$ has some nonidentity permutation. Of greatest interest are those relations which have the greatest symmetry: that is, the corresponding unordered pair generates an orbit of minimal size under the action of the wreath product $S_d \ltimes (S_n)^d$ of the symmetric group $S_d$ permuting the operations with the group $(S_n)^d$ permuting the arguments of the operations. \subsubsection*{$N$-ary suboperads of binary operads} To conclude, we mention a different point of view on commutativity for double interchange semigroups. In general, let $\mathbf{O}$ be a symmetric operad generated by binary operations satisfying relations of arity $\ge 3$.
An algebra over $\mathbf{O}$ is called an $\mathbf{O}$-\emph{algebra}; the most familiar cases are associative, alternative, pre-Lie, diassociative, dendriform, etc. We propose the following definition of $N$-\emph{tuple} $\mathbf{O}$-\emph{system} for all $N \ge 3$: an algebra over the suboperad $\mathbf{O}^{(N)} \subset \mathbf{O}$ generated by the $S_N$-module $\mathbf{O}(N)$ of all $N$-ary operations in $\mathbf{O}$. In particular, consider the operad $\mathbf{DIA}$ generated by two associative operations satisfying the interchange law. Previous results \cite{BM2016} show that $\mathbf{DIA}(N)$ is a direct sum of copies of the regular $S_N$-module if and only if $N \le 8$. The generators of $\mathbf{DIA}$ have no symmetry, but the generators of $\mathbf{DIA}^{(N)}$ have symmetry for $N \ge 9$.
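To make the $m \times n$ interchange law from the concluding remarks concrete, the following sketch (ours, not from the paper; the helper name is illustrative) evaluates $f(Xg)$ and $(fX)g$ on a small integer array. Taking both $f$ and $g$ to be iterated sum, or both to be max, gives a commuting pair; mixing sum with max shows that the law is a genuine constraint.

```python
# Numerical check of the m x n interchange law f(Xg) = (fX)g, where f acts
# on columns and g on rows.  Illustrative helper, not code from the paper.

def interchange_holds(f, g, X):
    """True if applying g to rows then f equals applying f to columns then g."""
    lhs = f([g(list(row)) for row in X])        # g on each row, then f
    rhs = g([f(list(col)) for col in zip(*X)])  # f on each column, then g
    return lhs == rhs

X = [[1, 0], [0, 1]]
print(interchange_holds(sum, sum, X))  # True: both sides total all entries
print(interchange_holds(max, max, X))  # True: both sides give the global max
print(interchange_holds(sum, max, X))  # False: sum and max do not commute
```

Sum-with-sum (or max-with-max) commutes for every array, while sum-with-max already fails on a $2 \times 2$ array, so satisfying the interchange law is a real restriction on the pair $(f, g)$.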
Heat exchanger fouling is the main source of energy loss in a refinery, affecting energy use and causing lost production and slowdowns for cleaning. However, at current energy prices, these types of projects have a hard time competing for a place in the budget when up against margin and yield improvement, critical safety and maintenance projects.

Heat exchanger train performance is an essential element of managing and reducing refinery fuel requirements for fired heaters and maximizing unit throughput. If your refinery is losing millions in increased fuel consumption and reduced throughput due to unmonitored and unoptimized heat exchanger cleaning cycles, it's time for heat exchanger monitoring to earn its place in the budget.

In this webinar you can find out more about how you can rigorously model waxes and asphaltenes in oil, predicting with a high degree of accuracy the temperature, pressure and composition conditions favourable to the deposition of such components. The webinar includes case studies of an FPSO facility and a refinery.

Join this webinar where we'll outline how KBC's Petro-SIM dynamics can be used to predict column behavior during relief conditions and the time profile of the relief event available for operator intervention, and to estimate the relief load for a depropanizer column.

Find out how to effectively monitor and optimize heat exchanger cleaning cycles to increase revenue.
{"url":"https:\/\/hoshen-soft.co.il\/8pvlh\/anti-tau-neutrino-5808f7","text":"## anti tau neutrino\n\nThis particle is a lepton. Appartient \u00e0 l'alliance ECHOES 2 guildes 189 membres. Neutrino has a spin of \u00bd. See no ads on this site, see our videos early, special bonus material, and much more. 64%. Anti-neutrino doesn't carry electric charge either, negative zero is the same as zero. They can and regularly do. This work is licensed under a Creative Commons Attribution 4.0 International License. The tau-neutrino : \u03bd \u03c4 In 1970, Glashow, Illiopoulos and Maiani made the hypothesis of the existence of a second quark family, introducing the \u201ccharmed\u201d quark, associated to the strange quark. Apply the same pattern for tau and the tau neutrino! Applied Antineutrino Physics (Lawrence Livermore National Laboratory) \u2013 great stuff there too. neutrino (plural neutrinos) 1. All rights reserved. You won\u2019t find \u2018antineutrino\u2019 in many Universe Today articles \u2026 but you\u2019ll find plenty on neutrinos! The tau can decay into a muon, plus a tau-neutrino and a muon-antineutrino; or it can decay directly into an electron, plus a tau-neutrino and an electron-antineutrino. The key difference between antineutrino and neutrino is that neutrino is a particle whereas antineutrino is an antiparticle. Le neutrino est la: on le sent mais on ne le voit pas! Some examples: Neutrino Evidence Confirms Big Bang Predictions , Seeing Inside the Earth with Neutrinos, and Do Advanced Civilizations Communicate with Neutrinos? Dernier succ\u00e8s r\u00e9alis\u00e9 : il y a 5 jours : Tau-Neutrino a d\u00e9verrouill\u00e9 le succ\u00e8s Nileza (Score 150). Since neutrino has no charge, some people propose that neutrino and antineutrino are the same particles. 
Difference Between Liquid State and Gaseous State, Difference Between Dilution and Concentration, Difference Between Atomic Weight and Atomic Mass, Difference Between Calories and Kilojoules, Side by Side Comparison \u2013 Antineutrino vs Neutrino in Tabular Form, Difference Between Coronavirus and Cold Symptoms, Difference Between Coronavirus and Influenza, Difference Between Coronavirus and Covid 19, Difference Between Third Party Insurance and Comprehensive Insurance, Difference Between Harappa and Mohenjo-daro, Difference Between Parthenogenesis and Hermaphroditism, Difference Between Current Transformer and Voltage Transformer (Potential Transformer), Difference Between Earthworms and Compost Worms, Difference Between Saccharomyces cerevisiae and Schizosaccharomyces pombe. 2.\u201dMuon neutrino\u201dBy\u00a0MissMJ\u00a0(CC BY 3.0) via Commons Wikimedia. Antielektronneutrino [Elektron Antineutrino] translation in German - English Reverso dictionary, see also 'Antiterroreinheit',Antipersonenmine',Antiraketenrakete',Antikernkraftbewegung', examples, \u2026 Stacey Abrams inspires 'Hellboy' star to return to Georgia Wolfgang Pauli proposed the existence of these particles, in 1930, to ensure that beta decay conserved energy (the electrons in beta decay have a continuum of energies) and momentum (the momentum of the electron and recoil nucleus \u2013 in beta decay \u2013 do not add up to zero); Enrico Fermi \u2013 who developed the first theory of beta decay \u2013 coined the word \u2018neutrino\u2019, in 1934 (it\u2019s actually a pun, in Italian!). Lee et C.N. And in 2002, Davis and Koshiba shared the Nobel Prize (with Giacconi, for work in x-ray astronomy) for their detection of cosmic antineutrinos (a 40-year task! Tau \u2026 Together with the muon it forms the second generation of leptons, hence the name muon neutrino. Moreover, a neutrino-antineutrino collision will annihilate both particles and produce two photons. Le neutrino st\u00e9rile [1] ... 
alors que son antiparticule a une charge \u00e9lectrique de -2\/3 et une couleur de charge anti-rouge. See more. There are many usages of neutrino and antineutrino in various fields. But the antineutron and neutron are different in other ways (antineutron is a lepton, or a different type of subatomic particle to the neutrino). An anti-electron neutrino is the antiparticle of electron neutrino. When a third type of lepton, the tau, was discovered in 1975 at the Stanford Linear Accelerator, it too was expected to have an associated neutrino. Two Astronomy Cast episodes give you more insight into the antineutrino, Antimatter, and The Search for Neutrinos. The key difference between antineutrino and neutrino is that the neutrino is a particle whereas the antineutrino is an antiparticle. The U.S. Department of Energy's Office of Scientific and Technical Information The discovery was rewarded with the 1988 Nobel Prize in Physics. It is under vacuum and sur-rounded by a veto system. Join us at patreon.com\/universetoday. Baronne de la Cour Sombre . There are many usages of neutrino and antineutrino in various fields. Muon Neutrino Disappearance and Tau Neutrino Appearance M.C. However, the charge is not the only difference between particles and antiparticles. Neutrino! Specified by: anti in class QuantumParticle An anti-electron neutrino is the antiparticle of electron neutrino. ... ce qui prouverait que le neutrino et l\u2019anti-neutrino sont une seule et m\u00eame particule (neutrino de Majorana, par opposition au neutrino classique, de Dirac). Both neutrino and antineutrino are two subatomic particles. An antiparticle is a particle having the same mass, but the opposite charge to a certain particle. Lors de la collision (de haute \u00e9nergie) entre deux protons, un antineutron peut \u00eatre cr\u00e9\u00e9, accompagn\u00e9 de celle d'un autre proton et d'un m\u00e9son \u03c0-[3]. 
However, the key difference between antineutrino and neutrino is that the neutrino is a particle whereas the antineutrino is an antiparticle. 105 , 1413 (1957)). Available here, 1.\u201d27735417341\u2033 by Kanijoman\u00a0(CC BY 2.0) via Flickr ISSN 0370-2693 Full text not available from this repository. Both the muon and the tau, like the electron, have accompanying neutrinos, which are called the muon-neutrino and tau-neutrino. They have no charge, a very tiny mass and interact only weakly, so these elusive particles can cross large quantity of matter (like the Sun or the Earth) without interacting. It should allow them to determine whether neutrinos change flavors. Compare the Difference Between Similar Terms. A tau-neutrino and tau-antineutrino are associated with this third charged lepton as well. Beta-decay actually resulted in a proton, an electron and a neutrino\u200a\u2014\u200amore specifically an antineutrino. An elementary particle that is classified as a lepton, and has an extremely small but nonzero mass and no electric charge. Particle-antiparticle pairs having this property (a particle having its own antiparticle with same properties) are known as Majorana particles. Leptons \u2014 electron, muon, tau, electron neutrino, muon neutrino, tau neutrino ; Each of these fermions also has an anti-particle associated with it, so there are a total of 24 different fundamental fermions. Constructs an antitau neutrino. The antineutrino (or anti-neutrino) is a lepton, an antimatter particle, the counterpart to the neutrino. The Three Types of Neutrino . Stanford University KamLAND If so, it could explain the solar neutrino problem and would show that the neutrinos have mass. Most particles have an antiparticle and anti-particles make up anti-matter. A tau-neutr\u00edn\u00f3 felfedez\u00e9s\u00e9vel v\u00e1lt teljess\u00e9 a r\u00e9szecskefizika standard modellje. T\u0142umaczenia w s\u0142owniku angielsko - polski. 
En 1996, on ne l'a toujours pas observe experimentalement! le neutrino tauique ou neutrino-tau . These are known as flavours of neutrinos in particle physics. or antineutrino in the decay.. We can use the properties such as mass, charge, and spin of these particles in many ways to detect and determine properties of systems. There are actually three types of neutrino: electron neutrino, muon neutrino, and tau neutrino. Beta Decay which produces electrons also produces (electron) antineutrinos. Pour le neutrino non charg\u00e9, la r\u00e9ponse est moins claire. 4. T\u0142umaczenie s\u0142owa 'tau neutrino' i wiele innych t\u0142umacze\u0144 na polski - darmowy s\u0142ownik angielsko-polski. Figure 01: Formation of an Antineutrino from Beta Decay. Like the electron, the muon, and the three neutrinos, the tau is a lepton, and like all elementary particles with half-integer spin, the tau has a corresponding antiparticle of opposite charge but equal mass and spin. In 2000 physicists at the Fermi National Accelerator Laboratory reported the first experimental evidence for the existence of the tau-neutrino. The particles are most likely changing into their tau counterparts. For the annihilation to occur, both the particle and the antiparticle must exist in the appropriate quantum states. Neutrinos and their antimatter counterparts oscillate between three types: electron, tau and muon. Do detectors detect this? Therefore, the detection of antineutrons is hard. \u201cNeutrino.\u201d Encyclop\u00e6dia Britannica, Encyclop\u00e6dia Britannica, Inc., 26 July 2018. Neutrino is electrically neutral. A tau elektronra \u00e9s neutr\u00edn\u00f3kra boml\u00e1s\u00e1nak el\u00e1gaz\u00e1si ar\u00e1nya 17,84%, a m\u00fconra \u00e9s neutr\u00edn\u00f3kra boml\u00e1s\u00e9 17,36%, a hadronokra boml\u00e1s\u00e9 74,8%. En 1962, le neutrino muonique est \u00e0 son tour d\u00e9couvert, \u00e0 Brookhaven. Rev. 
Side by Side Comparison \u2013 Antineutrino vs Neutrino in Tabular Form Buskulic, D. and Finch, Alexander (1995) Measurement of the b ---> tau- anti-tau-neutrino X branching ratio and an upper limit on B- ---> tau- anti-tau-neutrino. Physics Letters B, 343 (1-4). ISSN 0370-2693. Full text not available from this repository. What is an Antineutrino? The antineutrino (or anti-neutrino) is a lepton, an antimatter particle, the counterpart to the neutrino. Actually, there are three distinct antineutrinos, called types, or flavors: the electron antineutrino (symbol \u0305\u03bde), the muon antineutrino (symbol \u0305\u03bd\u03bc), and the tau antineutrino (symbol \u0305\u03bd\u03c4). Antineutrinos interact through weak forces and gravitational forces only. These particles are emitted when a neutron turns into a proton; the positron and the anti-neutrino produced in such decays are both anti-particles. An anti-particle is similar to the original particle, but with opposite electric charge, so anti-neutrino interactions with ordinary matter can produce \"anti\" particles \u2013 in the tau's case, this is the \"antitau\". The tau particle can also decay into hadrons, in which case an up antiquark, a down quark, and a tau antineutrino are produced. It took a quarter of a century before the (electron) antineutrino was confirmed via direct detection: Cowan and Reines did the experiment, in 1956, and later got a Nobel Prize for it. If the neutrino has mass, its speed is always lower than the speed of light, so in theory an observer could move faster than a left-handed neutrino, overtake it, and see a right-handed anti-neutrino, with the corresponding change of the lepton number L from +1 to -1. What is a Neutrino? Neutrino and antineutrino are elementary particles from the group of leptons. The neutrino is produced in nuclear reactions that involve beta decay; it has spin 1\/2 and therefore belongs to the fermions. Its mass is very small compared with most elementary particles, but recent experiments show that it is nonzero. The muon neutrino is a lepton, an elementary subatomic particle which has the symbol \u03bd\u03bc and no net electric charge, and the tau neutrino is the neutrino associated with the tau lepton. Accelerator experiments have shown that electron, muon, and tau neutrinos are created in the decays of the electron, the muon, and the tau particle, respectively. In 2000, physicists at the Fermi National Accelerator Laboratory reported the first experimental evidence for the existence of the tau-neutrino. Neutrinos interact so weakly with matter and yet are of vital importance in the processes that govern the Universe; a distinctive capability of the heavy-water observatory is that it can measure both the electron-neutrino flux and the total neutrino flux (electron, muon, and tau neutrinos). Moreover, the theory of neutrino oscillations suggests that neutrinos change flavours, or \"oscillate\", between flavours.
We can denote the neutrino by the Greek letter \u03bd (nu). Neutrinos and antineutrinos are created due to certain types of nuclear decays or nuclear reactions: nuclear fusion inside the Sun, nuclear fission inside atomic reactors, and cosmic-ray collisions with atoms are some of the sources of these particles, and the Sun is the main source of the neutrinos reaching the Earth \u2013 approximately 65 billion solar neutrinos pass through every square centimetre. The small mass and the electrical neutrality are the reasons the neutrino has very little or almost no interaction with matter; still, we can use properties such as mass, charge, and spin in many ways to detect these particles and to determine properties of physical systems. In 1930, Wolfgang Pauli proposed that there should be a particle with a very little amount of mass and no charge in order to balance the conservation laws: the first evidence of the neutrino was that the conservation of mass, energy, and momentum appeared not to hold in nuclear decay equations. The detection of the first neutrinos happened in 1956. The muon is a lepton which decays to form an electron or positron, with a lifetime of 2.20 microseconds, and together with the muon neutrino it forms the second generation of leptons. The muon neutrino was first hypothesized in the early 1940s by several people and was discovered in 1962 by Leon Lederman, Melvin Schwartz, and Jack Steinberger; their experimental work in the 1960s, which showed that muon antineutrinos are not the same as electron antineutrinos, was rewarded with the 1988 Nobel Prize in Physics. In 1983, the W boson signalled its presence at the UA1 experiment by decaying into an electron and an anti-neutrino. At about the same time, Martin Perl discovered the tau, of the third family of leptons, and the tau neutrino was finally discovered in 2000 in the DONUT experiment. Parity violation was proposed in 1956 by T.D. Lee and C.N. Yang and confirmed shortly afterwards by Ambler, Hayward, Hoppes, Hudson, and Wu, who observed the asymmetry of the electrons emitted in the beta decay of Cobalt-60; the anti-neutrino is always of right-handed helicity (spin in the direction of motion). Neutrino oscillation is a quantum mechanical phenomenon wherein a neutrino created with a specific lepton family number (\"lepton flavor\": electron, muon, or tau) can later be measured to have a different lepton family number: the three types of neutrinos change into each other over time, so an electron neutrino could turn into a tau neutrino and then back again. Such a dual reaction can occur only if neutrinos and antineutrinos are one and the same particle, as the Italian physicist Ettore Majorana hypothesized in 1937. A sterile neutrino is a hypothetical elementary particle that could make up dark matter; it would belong to the neutrinos \u2013 particles with zero electric charge and very small mass \u2013 and is sometimes treated as a fourth generation alongside the electron, muon, and tau neutrinos. According to the present state of our knowledge, every known particle has an antiparticle with the opposite electric charge: anti-quarks and anti-leptons (the positron, the anti-muon, and the anti-tau). If a particle and an antiparticle come into contact, they annihilate to produce energy. An antineutron decays into an antiproton, a positron, and a neutrino with the same lifetime as a neutron, about 885 s. Unlike for the lighter flavors, tau-neutrino (and anti-tau-neutrino) scattering can contribute to the structure functions F4 and F5 because of the non-negligible tau lepton mass.
The key difference between the antineutrino and the neutrino is that the neutrino is a particle whereas the antineutrino is an antiparticle. Now, the neutrino has no charge, and the opposite of no charge is still no charge, so the anti-neutrino is also neutral; as the neutrino is electrically neutral, it could be the odd one out here and be its own antiparticle. An antineutrino is an elementary particle of half-integer spin (spin 1\/2) that does not undergo strong interactions; a particle having half-integer spin falls into the lepton family. The massless neutrinos of the Standard Model differ from their antiparticles only by their chirality and, thus, their helicity. The tau, also called the tau lepton, tau particle, or tauon, is an elementary particle similar to the electron, with negative electric charge and a spin of 1\/2; tau particles are denoted by \u03c4- and the antitau by \u03c4+. Both the muon and the tau, like the electron, have accompanying neutrinos, which are called the muon-neutrino and the tau-neutrino. So \u03b2- (electron) decay produces antineutrinos (lepton number is conserved: 1 + (-1) = 0), and \u03b2+ (positron) decay produces neutrinos. Therefore, there is an electron antineutrino, a muon antineutrino, and a tau antineutrino, each with opposite spin orientation with respect to the corresponding neutrino. For a long time the tau-neutrino was never detected directly; its existence was only inferred from missing energy and momentum and by analogy to the electron and the muon, each of which has its corresponding neutrino and anti-neutrino to conserve lepton number, energy, and momentum. Later measurements led to the discovery of flavor oscillations, in which an antineutrino of one kind changes into another \u2013 an electron antineutrino to a muon antineutrino, for example.
The neutrino is so named because it is electrically neutral and because its rest mass is so small that it was long thought to be zero. Neutrinos are similar to electrons, but they have no charge, and we know there are three types. The branching ratio for the tau decaying into an electron and neutrinos is 17.84 %, while the branching ratio for its decays into hadrons is 74.8 %. Are neutrinos their own antiparticles? No \u2026 but perhaps there is an as yet undiscovered kind of neutrino that is (called a Majorana neutrino). When a neutrino interacts, it produces a neutrino or a negatively-charged lepton: e-, mu-, or tau-, depending on the \"flavor\" of the neutrino; when an antineutrino interacts, it produces the corresponding antiparticles. A neutrino-antineutrino collision will annihilate both particles and produce two photons. Settling whether the neutrino can, in principle, turn into an anti-neutrino and back would help us explain the reshuffling of matter and anti-matter in the Universe. Summary \u2013 Antineutrino vs Neutrino. Neutrino and antineutrino are two subatomic particles. The antineutrino is the antiparticle of the neutrino, distinguished from the neutrino by having clockwise rather than counterclockwise spin when observed in the direction of motion. Both are created due to certain types of nuclear decays or nuclear reactions, both come in electron, muon, and tau flavors, and both interact only through weak forces and gravitational forces, which is why charge alone cannot tell them apart. No Guide to Space article would be complete without some \u2018Further Reading\u2019, would it? For one of the greatest physics detective stories of the 20th century, check out John Bahcall\u2019s webpage. KamLAND (the Kamioka Liquid-scintillator Anti-Neutrino Detector) is a wonderful place to start, and you will find plenty on neutrinos at the Applied Antineutrino Physics pages (Lawrence Livermore National Laboratory) \u2013 great stuff there too. Two Astronomy Cast episodes give you more insight into the antineutrino, antimatter, and much more.","date":"2021-07-25 02:25:18","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7632368803024292, \"perplexity\": 4616.418170296732}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046151563.91\/warc\/CC-MAIN-20210725014052-20210725044052-00454.warc.gz\"}"}
null
null
{"url":"https:\/\/www.apsed.in\/post\/logistic-curve-method-of-population-forecasting-with-solved-example","text":"# Logistic Curve Method of Population Forecasting with Solved Example\n\nThe logistic curve method of population forecasting is a method to predict the population using the logistic curve of population growth. The concept of the logistic curve and the formulas to predict the population as per the logistic curve method are discussed further.\n\n## What is a Logistic Curve\n\nIn an ideal environment, populations grow at an exponential rate. The growth curve of these populations is smooth and increases steeply over time. However, exponential growth is not possible because of factors such as limitations in food, competition for other resources, disease, etc. Populations eventually reach the carrying capacity or saturation capacity of the environment, causing the growth rate to slow nearly to zero. This produces an S-shaped curve of population growth known as the logistic curve.\n\nLogistic curve of population growth\n\n## Logistic Curve Method\n\nAs discussed, this method uses the logistic curve of population growth. Therefore, it uses the equation of the logistic curve to directly predict the population.
Equation of the logistic curve is given as,\n\nP = Ps\/(1+(m*ln^-1(n*t))), where ln^-1(x) denotes the inverse of the natural logarithm, i.e. e^x; pictorial representation of the same is shown below,\n\nLogistic curve equation\n\nwhere,\n\nm = (Ps - Po)\/Po,\n\nPs - saturation population given by, Ps = (2*Po*P1*P2 - P1^2*(Po+P2))\/(Po*P2 - P1^2),\n\npictorial representation of the same is shown below,\n\nSaturation population formula\n\nn = (1\/t1)*ln((Po*(Ps-P1))\/(P1*(Ps-Po))), pictorial representation of the same is shown below,\n\nn formula in logistic curve equation\n\nPo - population at t0 years,\n\nP1 - population at t1 years,\n\nP2 - population at t2 years,\n\nt1 - number of years between t0 and t1,\n\nt2 - number of years between t0 and t2,\n\nt - number of years between t0 and the required year,\n\nt2 = 2*t1 (in general).\n\n## Solved Example\n\nQuestion: Using the data given below find the population for the year 2021.\n\n Year Population 1991 80000 2001 250000 2011 480000\n\nSolution:\n\nStep 1: Given data,\n\nPo = 80,000 at t0 = 0 years,\n\nP1 = 250,000 at t1 = 10 years,\n\nP2 = 480,000 at t2 = 20 years,\n\nStep 2: Find the saturation population Ps using the formula Ps = (2*Po*P1*P2 - P1^2*(Po+P2))\/(Po*P2 - P1^2),\n\nPs = 655,602\n\nStep 3: Find the value of m using the formula m = (Ps - Po)\/Po,\n\nm = (655602 - 80000)\/80000\n\nm = 7.195\n\nStep 4: Find the value of n using the formula n = (1\/t1)*ln((Po*(Ps-P1))\/(P1*(Ps-Po))),\n\nn = -0.1489\n\nStep 5: Find the population for the required year using the formula P = Ps\/(1+(m*ln^-1(n*t))),\n\nt = 30 (number of years between t0 and the required year 2021)\n\nP = 655602\/(1 + (7.195*ln^-1(-0.1489*30)))\n\nP = 605,641 is the population forecast for the year 2021 (using unrounded values of Ps, m, and n throughout; carrying the rounded values shown above gives approximately 605,575).\n\nFor more insights and to know more about other population forecasting methods refer to the video lecture below.
","date":"2022-07-07 07:23:48","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8301388621330261, \"perplexity\": 2141.6378608997793}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656104683708.93\/warc\/CC-MAIN-20220707063442-20220707093442-00449.warc.gz\"}"}
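The logistic-curve steps above can be checked in a few lines of Python. This is an illustrative sketch, not part of the original article; the article's ln^-1 (inverse natural logarithm) is implemented as math.exp:

```python
import math

def logistic_forecast(p0, p1, p2, t1, t):
    """Logistic-curve population forecast from three equally spaced censuses.

    p0, p1, p2 are the populations at years 0, t1, and 2*t1; t is the
    number of years after the first census for which a forecast is wanted.
    """
    # saturation population Ps
    ps = (2 * p0 * p1 * p2 - p1**2 * (p0 + p2)) / (p0 * p2 - p1**2)
    m = (ps - p0) / p0
    # n; ln^-1 in the article is the inverse natural log, i.e. exp
    n = (1.0 / t1) * math.log((p0 * (ps - p1)) / (p1 * (ps - p0)))
    p = ps / (1 + m * math.exp(n * t))
    return ps, m, n, p

# the solved example: censuses of 1991, 2001, 2011, forecast for 2021
ps, m, n, p = logistic_forecast(80000, 250000, 480000, t1=10, t=30)
print(round(ps))                 # 655602
print(round(m, 3), round(n, 4))  # 7.195 -0.1489
print(round(p))                  # 605641
```

Keeping full precision in Ps, m, and n gives a 2021 forecast of about 605,641; small differences from hand calculation come purely from rounding the intermediate values.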
null
null
\section{Introduction} \label{sec:introduction} According to the 2018 National Retail Security Survey~(NRSS)~\cite{NRSS2018}, inventory shrink, a loss of inventory related to theft, shoplifting, error, or fraud, had an impact of \$46.8 billion on the U.S. retail economy in 2017. A high number of scams occur every day, from distractions and bar-code switching to booster bags and fake-weight strategies, and there is no human power to watch every one of these cases. The surveillance context is overwhelmed. Surveillance camera networks generate vast amounts of video streams, and the monitoring staff cannot process all the available information. The more recording devices become available, the more complex the task of monitoring such devices becomes. Real-time analysis of each camera has become an exhausting task due to human limitations. The primary human limitation is the Visual Focus of Attention (VFOA)~\cite{VisualFocusOfAtenttion}. Human gaze can only concentrate on one specific point at a time. Although there are large screens and high-resolution cameras, a person can only regard a small segment of the image at a time. Optical focus is a significant human-related disadvantage in the surveillance context. A crime can occur in a different segment of the screen or on a different monitor, and the staff may not notice it. Other significant difficulties include the attention paid, boredom, distractions, and lack of experience, among others~\cite{ComplexHumanActivities2011, OperatorPerformance2012}. \begin{figure}[htb] \centering \includegraphics[width = 0.98\textwidth]{Images/Image_01_ieeeaccess.png} \caption{Suspicious behavior is not the crime itself; particular situations may make us distrust a person.} \label{fig:SuspiciousBehavior} \end{figure} Defining what can be considered suspicious behavior is usually tricky, even for psychologists.
In this work, the mentioned behavior is related to the commission of a crime, but it does not imply its realization (Figure \ref{fig:SuspiciousBehavior}). We define suspicious behavior as a series of actions that happen before a crime occurs. In this context, our proposal focuses on shoplifting crime scenarios, particularly on before-offense situations that an average person may consider typical conditions. While existing models identify the crime itself, we model suspicious behavior as a way to anticipate the crime. In other words, we identify behaviors that usually take place before a shoplifting crime. This kind of crime usually occurs in supermarkets, malls, retail stores, and other similar businesses. Many of the models for addressing this problem need the suspect to commit a crime in order to detect it. Examples of such models include face detection of previous offenders~\cite{FaceFirstSite, DeepCamSite} and object analysis in fitting rooms~\cite{SymmetryAnalysis}. In this work, we propose an approach that aims at supporting the monitoring staff so that they can focus their attention on the specific screens where a crime is more likely to happen. By detecting situations in a video that may indicate that a crime is about to occur, we give the surveillance staff more opportunities to act, preventing or even responding to such a crime. In the end, it is the security personnel who will decide how to proceed in each situation. We implement a 3D Convolutional Neural Network~(3DCNN) to process crime videos and extract behavioral features to detect suspicious behavior. We perform the model training by selecting specific videos from the UCF-Crimes dataset~\cite{UCFCrimes2018}. Among the main contributions of this work, we propose a method to extract segments from videos that feed a model based on a 3DCNN, which learns to classify suspicious behavior.
The model achieves an accuracy of 75\% in detecting suspicious behavior before a crime is committed, on a dataset composed of daily-action samples and shoplifting samples. These results suggest that our approach is useful for crime prevention in shoplifting cases. The remainder of this document is organized as follows. In Section~\ref{sec:Background}, we review various approaches, ranging from psychology to deep learning, that tackle behavior detection. Section~\ref{sec:Methodology} presents the methodology followed throughout the experimental process. The results and discussion of the tests are presented in Section~\ref{sec:Experiments}. Finally, Section~\ref{sec:Conclusion} presents the conclusions and future work derived from this investigation. \section{Background and Related work} \label{sec:Background} Every surveillance environment must satisfy a particular set of requirements. Those requirements have promoted the creation of specialized tools, both in equipment and in software, to support the surveillance task. The most common approaches include motion detection~\cite{LOBS2012, GPUimplementation2013}, face recognition~\cite{FaceRecognition2015, FaceRecognition2018, DeepCamSite, FaceFirstSite}, tracking~\cite{ColourTracking2019, FnRTracking2011, HumanTracking2016}, loitering detection~\cite{Loitering2014}, abandoned luggage detection~\cite{localizedChang2010}, crowd behavior~\cite{crowdEscapeBehavior2014, DominantSets2014, abnormalWang2018}, and abnormal behavior~\cite{Ouivirach2012AutomaticSB, FastAccurateDetection2017}. Prevention and reaction are two primary aims in the surveillance context. Prevention requires forestalling and deterring the execution of a crime. The monitoring staff must remain alert, watching as much as they can and alerting the ground personnel. Reaction, on the other hand, involves protocols and measures to respond to a specific event. The security teams take action only after the crime or event has taken place.
Most security-support approaches focus on crime occurrence. \cite{SnatchingBehavior2019} present a snatching-detection algorithm that performs background subtraction and pedestrian tracking to make a decision. That approach divides the frame into eight areas and searches for a speed shift in one of the tracked persons. The algorithm proposed by \cite{SnatchingBehavior2019} can only raise an alert when a person has already lost their belongings. \cite{UCFCrimes2018} present a real-world anomaly detection approach, trained on thirteen anomalies, such as burglary, fighting, shooting, and vandalism. They label the samples into two categories, normal and anomalous, and use a 3DCNN for feature extraction. Their model includes a ranking loss function and trains a fully connected neural network for decision making. \cite{LoiteringZin2019} propose a system to detect loitering people. The system combines several analyses for decision fusion and final detection, including distance, acceleration, direction-based, and grid-based analysis. Convolutional Neural Networks (CNN) have shown remarkable performance in computer vision and other areas in recent years. In particular, 3DCNNs ---an extension of CNNs--- focus on extracting spatial and temporal features from videos. Traditional applications implemented with 3DCNNs include object recognition~\cite{HeObject3DCNN2017}, human action recognition~\cite{3dCNNHumanAction2013}, and gesture recognition~\cite{ZhangGestures3DCNN2017}; as a specific implementation, Cai et al.~\cite{CaiAbnormal3DCNN2016} used a 3DCNN for abnormal behavior detection in classroom examination surveillance. Although all the works mentioned before are based on 3DCNNs, each one has a particular architecture, and many parameters ---such as depth, number of layers, number of filters on each layer, kernel size, padding, and stride--- must be adjusted.
For example, concerning the number of layers, many approaches rely on simple structures that consist of two or three layers~\cite{3dCNNHumanAction2013, CaiAbnormal3DCNN2016, HeObject3DCNN2017}, while others require several layers for exhaustive learning~\cite{ZhangGestures3DCNN2017, VarolActionsCLSTM2018, KrizhevskyAlexNet2012, SzegedyGoogleNet2015, SimonyanVGGNet2014, HeResNet2015}. Concerning shoplifting, the current literature is somewhat limited. Surveillance material is, in most cases, a company's private property. This restricts the amount of data we can get to train and test new surveillance models. For this reason, several approaches focus on training to detect normal behavior; anything that lies outside the normal cluster is considered abnormal. In general, surveillance videos contain only a small fraction of crime occurrences, so most of the videos in the data are likely to contain normal behavior. Many approaches have experienced problems regarding the limited availability of samples and their unbalanced category distribution. For this reason, some works have focused on developing models that learn from a minimal amount of data. For example,~\cite{3dCNNHumanAction2013} create a school dataset and test with eight to ten videos,~\cite{SnatchingBehavior2019} rely on a dataset of nineteen videos (four used for training and fifteen for testing), and~\cite{LoiteringZin2019} work with six videos (one for training and five for testing). This work aims at developing a support approach for shoplifting crime prevention. Our model detects a person who, according to their behavior, is likely to commit a shoplifting crime. We achieve this by analyzing the behavior of the people who appear in the videos before the crime occurs. To the best of our knowledge, this is the first work that analyzes behavior as a means to anticipate a potential shoplifting crime.
\section{Methodology} \label{sec:Methodology} As part of this work, we propose a new methodology to extract segments from videos where people exhibit behaviors that are relevant to the task of preventing shoplifting crime. These behaviors include both normal and suspicious ones, and the task of the network is to classify them. In this section, we describe the dataset used for the experiments and the 3DCNN architecture used for feature extraction and classification. \subsection{Description of the Dataset} \label{sec:Dataset} We use the UCF-Crime dataset, proposed by~\cite{UCFCrimes2018}, to analyze suspicious behavior before a shoplifting crime. The dataset consists of 1900 real-world surveillance videos and provides around 129 hours of video (with a resolution of 320$\times$240 pixels and not normalized in length). The dataset includes scenarios from several locations and persons, grouped into thirteen classes such as `abuse', `burglary', and `explosion', among others. From those classes, we extracted the samples used in this work from the `shoplifting' and `normal' classes. To feed our model, we require videos that show one or more persons whose activities are visible before the crime is committed. Because of these restrictions, not all the videos in the dataset are useful. Suspicious behavior samples were extracted only from videos that exhibit a shoplifting crime ---and these samples omit the crime itself. Normal behavior samples were taken from the `normal' class. Thus, it is important to stress that the model we propose is a suspicious behavior classifier and not a crime classifier. For processing the videos and extracting the suspicious samples (video segments that exhibit a suspicious behavior), we propose a new method, the Pre-Crime Behavior (PCB) analysis, which we explain in the next section. Once we obtain the suspicious samples, we apply some transformations to produce several smaller datasets.
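The composition of the dataset variants summarized in Table~\ref{tab:SamplesSelection} can be sketched as follows. This is an illustrative sketch only: the helper names are hypothetical and the sample identifiers are symbolic, not part of the original pipeline.

```python
# Sketch of how the dataset variants in Table 1 could be assembled from
# sample identifiers (SB = suspicious behavior, NB = normal behavior).
# Helper names below are hypothetical, not from the original pipeline.

def samples(prefix, n):
    """Generate n symbolic sample identifiers, e.g. SB1..SBn."""
    return [f"{prefix}{i}" for i in range(1, n + 1)]

def flipped(ids):
    """Identifiers for the horizontally flipped versions of the samples."""
    return [s + "_flip" for s in ids]

SB, NB = samples("SB", 60), samples("NB", 60)

datasets = {
    "SBT_balanced_60":        SB[:30] + NB[:30],
    "SBT_unbalanced_30s60n":  SB[:30] + NB,
    "SBT_balanced_120":       SB + NB,
    "SBT_balanced_120_flip":  flipped(SB) + flipped(NB),
    "SBT_unbalanced_60s120n": SB + NB + flipped(NB),
    "SBT_balanced_240":       SB + flipped(SB) + NB + flipped(NB),
}
```

Note how the unbalanced variants keep the 1:2 suspicious-to-normal ratio (90 and 180 samples in total), which is used later to simulate realistic class distributions.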
First, to reduce the computational resources required for training, all the frames in the videos were converted to grayscale and resized to four testing resolutions: 160$\times$120, 80$\times$60, 40$\times$30, and 32$\times$24 pixels (that is, the original frame size divided by 2, 4, 8, and 10, respectively), to explore the performance of each configuration. For organization purposes, all the samples extracted from the videos are indexed. The suspicious samples are indexed as SB$_{i}$ (where $i$ ranges from 1 to 60), while the samples of normal behavior are indexed as NB$_{i}$ (where $i$ ranges from 1 to 60). Table~\ref{tab:SamplesSelection} describes how these datasets are composed. For example, \textit{SBT\_balanced\_120} is a set that contains 120 samples with the same number of suspicious and normal samples, 60 of each class (samples SB$_{1}$ to SB$_{60}$ and NB$_{1}$ to NB$_{60}$), while \textit{SBT\_unbalanced\_30s60n} is a dataset that contains fewer suspicious samples than normal ones (samples SB$_{1}$ to SB$_{30}$ and NB$_{1}$ to NB$_{60}$). To increase the number of samples, we applied a flipping procedure that consists of horizontally flipping each frame of the video sample, resulting in a clip where the actions happen in the opposite direction. For example, \textit{SBT\_balanced\_240} contains 240 samples (samples SB$_{1}$ to SB$_{60}$ and NB$_{1}$ to NB$_{60}$, as well as the flipped versions of SB$_{1}$ to SB$_{60}$ and NB$_{1}$ to NB$_{60}$). \begin{table*}[ht!]
\caption{Datasets description.} \label{tab:SamplesSelection} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{ccc} \hline \textbf{Dataset} & \textbf{Suspicious samples} & \textbf{Normal samples} \\ \hline SBT\_balanced\_60 & SB$_{1}$ to SB$_{30}$ & NB$_{1}$ to NB$_{30}$ \\ SBT\_unbalanced\_30s60n & SB$_{1}$ to SB$_{30}$ & NB$_{1}$ to NB$_{60}$ \\ SBT\_balanced\_120 & SB$_{1}$ to SB$_{60}$ & NB$_{1}$ to NB$_{60}$ \\ SBT\_balanced\_120\_flip & Flipped versions of SB$_{1}$ to SB$_{60}$ & Flipped versions of NB$_{1}$ to NB$_{60}$ \\ SBT\_unbalanced\_60s120n & SB$_{1}$ to SB$_{60}$ & NB$_{1}$ to NB$_{60}$ + flipped versions of NB$_{1}$ to NB$_{60}$ \\ SBT\_balanced\_240 & SB$_{1}$ to SB$_{60}$ + flipped versions of SB$_{1}$ to SB$_{60}$ & NB$_{1}$ to NB$_{60}$ + flipped versions of NB$_{1}$ to NB$_{60}$ \\ \hline \end{tabular} } \end{table*} \subsection{Pre-Crime Behavior} \label{sec:PCB} To detect suspicious behavior, the proposed model must focus on what happens before a shoplifting crime is committed. For this purpose, we propose a new method to process surveillance videos. Before we explain our proposal, we introduce the concepts listed below. \begin{itemize} \item \textbf{Strict Crime Moment (SCM).} In a surveillance video, and after review by a human, the SCM is the segment of video where a person commits a shoplifting crime. This moment is the primary evidence to accuse a person of committing a crime. \item \textbf{Comprehensive Crime Moment (CCM).} It is the precise moment when an ordinary person can detect the suspect's intentions: the suspect starts watching out to go unnoticed and looks for the best moment to commit the crime. Other CCM examples are unsuccessful attempts or reordering items to distract attention. If we isolate this moment, we may doubt the suspect in the video, but there is no clear evidence of whether the suspect steals something.
\item \textbf{Crime Lapse (CL).} In a video, the CL is the entire segment where a crime takes place. If we remove the CL from the video, it becomes impossible to determine that there is a criminal act in the video. The CCM marks the beginning of the CL. It is essential not to leave any trace of the crime, to avoid biasing the training. \item \textbf{Pre-crime Behavior (PCB).} The PCB contains what happens from the first appearance of the suspect until the CCM begins. These samples have different sizes, since each video shows many behaviors. A video may contain more than one CL; in that case, the next PCB starts where the previous CL ends and extends until the next CCM. The result is a video segment in which an ordinary person may not detect that a crime will occur, but we are sure that the sample comes from a video where criminal activity was present. \end{itemize} \begin{figure*}[ht!] \centering \includegraphics[width=12cm]{Images/Image_02_ieeeaccess} \caption{Graphical description of the concepts related to the proposed methodology for suspicious sample extraction.} \label{fig:Concepts} \end{figure*} Figure~\ref{fig:Concepts} graphically presents how these concepts interact in one video sample. The sample has two CLs, and each one contains its corresponding SCM and CCM. From this video sample, we can extract two PCB training samples: one from the beginning of the video to the first CCM, and one from the end of the first CL to the second CCM. To extract the samples from the videos, we follow the process depicted in Fig.~\ref{fig:samplesExtraction}. Given a video that contains one or more shoplifting crimes, we identify the precise moment when the offense is committed. After that, we mark the different suspicious moments ---moments where a human observer doubts what the person in the video is doing. Finally, we select the segments where the suspects are preparing to commit the crime. These segments become the training samples for the Deep Learning (DL) model.
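The segment-extraction logic above can be sketched as follows. This is a minimal sketch with hypothetical names, assuming each crime lapse has been manually annotated as a pair of frame indices: the frame where its CCM starts and the frame where the CL ends.

```python
# Sketch of PCB segment extraction from an annotated video. Assumption
# (not from the original pipeline): each crime lapse (CL) is given as a
# (ccm_start, cl_end) pair of frame indices. A PCB segment spans from the
# end of the previous CL (or the start of the video) to the next CCM.

def extract_pcb_segments(crime_lapses):
    """Return PCB segments as (start, end) frame intervals, crime excluded."""
    segments = []
    prev_end = 0  # the first PCB starts at the beginning of the video
    for ccm_start, cl_end in sorted(crime_lapses):
        if ccm_start > prev_end:          # keep only non-empty segments
            segments.append((prev_end, ccm_start))
        prev_end = cl_end                 # next PCB starts where the CL ends
    # Frames after the last CL are discarded in this sketch.
    return segments

# A video with two crime lapses, as in Fig. 2 (frame numbers are made up):
print(extract_pcb_segments([(800, 1200), (2100, 2500)]))
# -> [(0, 800), (1200, 2100)]
```

The two returned intervals correspond to the two PCB training samples described for Figure~\ref{fig:Concepts}: one before the first CCM and one between the end of the first CL and the second CCM.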
\begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{Images/Image_03_ieeeaccess} \caption{Graphical representation of the process for suspicious sample extraction.} \label{fig:samplesExtraction} \end{figure*} In a video sample, each moment carries a different level of information (see Fig.~\ref{fig:Segmentation}). The PCB has less information about the crime itself, but it allows us to analyze the suspect's normal-acting behavior in its first stage, even far from the target. The CCM allows us to have a more precise idea about who may commit the crime, but it is not conclusive. Finally, the SCM is the unquestionable evidence of a person committing a shoplifting crime. If we remove the SCM and CCM from the video, the result will be a video containing only people shopping, with no suspicion of or evidence that someone commits a crime. Hence the importance of accurately segmenting the video. From where a Crime Lapse ends until the next SCM, there is new evidence about how a person behaves before committing a new shoplifting attempt. \begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{Images/Image_04_ieeeaccess} \caption{Video segmentation by critical moments.} \label{fig:Segmentation} \end{figure*} For experimentation purposes, we only use PCB segments. These segments lack specific criminal behavior and carry no information about a transgression. We aim to model an offender's behavior before they try to steal from a store. \subsection{3D Convolutional Neural Networks} \label{sec:3DCNN} We use a 3DCNN for feature extraction and classification. We choose a basic structure to explore the performance of the 3DCNN on the suspicious behavior detection task. The architecture of the model consists of four Conv3D layers, two max-pooling layers, and two fully connected layers. In the default configuration, the first pair of Conv3D layers applies 32 filters each, and the second pair, 64 filters each.
All kernels have a size of 3$\times$3$\times$3, and the model uses the Adam optimizer and cross-entropy for loss calculation. The model ends with two dense layers with 512 and 2 neurons, respectively. The output is binary: 1 for suspicious behavior and 0 for normal behavior. This architecture was selected because it has been used for similar applications~\cite{3DcnnImplementation} and seems suitable as a first approach for behavior detection in surveillance videos. For the model training, we use Google Colaboratory~\cite{GoogleColab}. This free cloud tool allows writing and executing code in cells, runs from a browser, and uses a GPU to train deep learning models. We can upload the datasets to a storage service, link the files, prepare the training environment, and save considerable time during model training by using a virtual GPU. \section{Experiments and Results} \label{sec:Experiments} The 3DCNN is a recent approach for spatio-temporal analysis, showing remarkable performance in processing videos in different areas, such as moving-object action recognition~\cite{HeObject3DCNN2017}, gesture recognition~\cite{ZhangGestures3DCNN2017}, and action recognition~\cite{3dCNNHumanAction2013}. We decided to apply the 3DCNN in a more challenging context: the search for patterns in criminal samples that lack visually evident suspicious and illegal behavior. In this section, we present the proposed experiments and their results. The initial experiment aims at exploring the impact of different values for the parameters of the system. The second experiment focuses on obtaining statistical support that the best configurations obtained from the first experiment are useful for further testing in different situations. \subsection{Exploration of configurations} \label{sub:Exploration} In this experiment, we explore different values for the parameters of the system.
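Before varying the parameters, it helps to make the default architecture concrete. The following shape walk-through is an illustrative sketch under assumptions the text does not state (`same' convolution padding, stride 1, and 2$\times$2$\times$2 max pooling), for a 10-frame grayscale clip at 80$\times$60 resolution.

```python
# Shape walk-through of the described 3DCNN stack (four Conv3D layers with
# 32, 32, 64, 64 filters, two max-pooling layers, then Dense(512), Dense(2)).
# Assumptions not stated in the text: 'same' padding, stride 1, 2x2x2 pooling.

def conv3d_same(shape, filters):
    """Conv3D with 'same' padding keeps (depth, h, w); channels -> filters."""
    d, h, w, _ = shape
    return (d, h, w, filters)

def maxpool3d(shape, k=2):
    """Non-overlapping k x k x k pooling floors each spatio-temporal dim."""
    d, h, w, c = shape
    return (d // k, h // k, w // k, c)

shape = (10, 60, 80, 1)            # 10 grayscale frames at 80x60
shape = conv3d_same(shape, 32)     # Conv3D, 32 filters
shape = conv3d_same(shape, 32)     # Conv3D, 32 filters
shape = maxpool3d(shape)           # -> (5, 30, 40, 32)
shape = conv3d_same(shape, 64)     # Conv3D, 64 filters
shape = conv3d_same(shape, 64)     # Conv3D, 64 filters
shape = maxpool3d(shape)           # -> (2, 15, 20, 64)
flattened = shape[0] * shape[1] * shape[2] * shape[3]
print(shape, flattened)            # the flattened vector feeds Dense(512) -> Dense(2)
```

The same arithmetic explains why higher resolutions and larger depths quickly raise the size of the flattened feature vector and, with it, the training cost.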
Given different values for the parameters, we estimate the changes in the response due to such configurations. The baseline training uses the most common values for this architecture, such as filters, kernel size, depth, and batch size. The rationale behind this first experiment is that, by producing small variations on the input parameters, we expect to improve the model performance. We consider an extensive set of parameters to generate the testing configurations, obtaining a total of 22 configurations. For example, we consider balanced datasets and flipped images. We use unbalanced datasets to simulate real environments, where normal behavior is more likely to be present than suspicious behavior. For these datasets, we use a sample ratio of 1:2; for each suspicious behavior sample, there are two normal behavior samples. The following is a short description of the nomenclature used to name the datasets, so that the reader can understand the differences between them. \begin{itemize} \item \textbf{Balance.} Whether the dataset has the same number of samples of each class. The values this parameter can take are balanced and unbalanced. \item \textbf{Ratio.} The proportion of samples of each class in the dataset. The different configurations for balanced sets include 60, 120, and 240 samples. For unbalanced datasets, the ratio is 1:2 for the suspicious~(s) and normal~(n) classes, respectively, with totals of 90 and 180 samples. \item \textbf{Test size.} The percentage of the dataset assigned to the test set. The possible percentages are 20, 30, or 40 percent. \item \textbf{Depth.} The number of consecutive frames used for a 3D convolution. The values this parameter can take are 10, 30, and 90. \item \textbf{Resolution.} The size of the input images: 160$\times$120, 80$\times$60, 40$\times$30, or 32$\times$24 pixels. \item \textbf{Flip.} If the word `flip' appears in the dataset name, the frames in the videos have been flipped horizontally.
\end{itemize} By following the previous description, a dataset named \textit{SBT\_unbalanced\_60s120n\_30t\_30f\_40x30\_flip} refers to a dataset that has fewer samples of suspicious behavior than of normal behavior (sixty and one hundred and twenty, respectively). It assigns thirty percent of the dataset to the test set and uses thirty frames to perform a 3D convolution. Finally, the input images have a resolution of 40$\times$30 pixels, and they were flipped horizontally. The tests focus on comparing different depths, test set sizes, and balances in the number of samples, determining which image resolution offers the best trade-off between processing time and image detail, and evaluating the data-augmentation technique of flipping the images. The objective of the exploratory experiment is to find a suitable configuration to model suspicious behavior. As previously mentioned, we analyzed 22 configurations, each tested in four different resolutions and run three times. The depth sizes (number of consecutive frames) considered for the test are 10, 30, and 90. Table~\ref{tab:depth} shows the results of these runs with the four resolutions. Based on the results, using 10 and 30 frames achieves the best classification results, 69.4\% to 83.3\% and 69.4\% to 75\%, respectively. Table~\ref{tab:testSize} presents the results of the test set size comparison. We select values of 20\%, 30\%, and 40\% of the complete dataset for testing purposes. Although the first case (20\% test set size) uses more information to train, this proportion did not obtain the best results: it produced outcomes between 47.2\% and 72.2\%. The second case obtained results between 68.5\% and 75.9\%, and the third one between 61.1\% and 70.3\%. Thus, 30\% of the total dataset is the best choice for the test set size. \begin{table*}[ht!]
\caption{Results of depth comparison.} \label{tab:depth} \centering \begin{tabular}{ccccc} \hline \textbf{} & \multicolumn{4}{c}{\textbf{Resolution}} \\ \textbf{Dataset} & \textbf{32x24} & \textbf{40x30} & \textbf{80x60} & \textbf{160x120} \\ \hline SBT\_balanced\_60\_20t\_10f & \textbf{83.3\%} & 72.2\% & \textbf{77.7\%} & \textbf{69.4\%} \\ SBT\_balanced\_60\_20t\_30f & 69.4\% & \textbf{75.0\%} & 69.4\% & \textbf{69.4\%} \\ SBT\_balanced\_60\_20t\_90f & 69.4\% & 63.9\% & 61.1\% & 58.3\% \\ \hline \end{tabular} \end{table*} \begin{table}[ht!] \caption{Results of test set size comparison.} \label{tab:testSize} \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{ccccc} \hline \multirow{2}{*}{\textbf{Dataset}} & \multicolumn{4}{c}{\textbf{Resolution}} \\ & \textbf{32x24} & \textbf{40x30} & \textbf{80x60} & \textbf{160x120} \\ \hline SBT\_balanced\_60\_20t\_10f & \textbf{72.2\%} & 72.2\% & 66.6\% & 47.2\% \\ SBT\_balanced\_60\_30t\_10f & 68.5\% & \textbf{74.0\%} & \textbf{68.5\%} & \textbf{75.9\%} \\ SBT\_balanced\_60\_40t\_10f & 63.9\% & 68.0\% & 61.1\% & 70.3\% \\ \hline \end{tabular}% } \end{table} To deal with unbalanced training, we create three datasets with sixty normal samples, thirty suspicious ones, and three different depths (Table~\ref{tab:unb}). We are aware that our model requires more samples to provide better performance. However, in this test, the results reveal a similar performance, around 80\%, between depths of 30 and 90 frames. The 3DCNN can handle unbalanced datasets; the difference may lie in the training time required for each depth. \begin{table}[ht!]
\caption{Results of unbalanced dataset test.} \label{tab:unb} \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{ccccc} \hline \multirow{2}{*}{\textbf{Dataset}} & \multicolumn{4}{c}{\textbf{Resolution}} \\ & \textbf{32x24} & \textbf{40x30} & \textbf{80x60} & \textbf{160x120} \\ \hline SBT\_unbalanced\_30s60n\_30t\_10f & 66.6\% & 67.8\% & 68.8\% & 79.0\% \\ SBT\_unbalanced\_30s60n\_30t\_30f & \textbf{69.1\%} & \textbf{65.4\%} & 80.2\% & \textbf{80.2\%} \\ SBT\_unbalanced\_30s60n\_30t\_90f & \textbf{69.1\%} & 65.4\% & \textbf{81.4\%} & 66.6\% \\ \hline \end{tabular}% } \end{table} Data augmentation techniques are an option for taking advantage of small datasets. For this reason, we test the model performance using original and flipped images in different runs. The test sets used have sizes of 30\% and 40\%. The tests yield accuracy results between 70\% and 80\% (Table~\ref{tab:flipped}). Therefore, we consider that both orientations can effectively be used as samples to train the model. \begin{table}[ht!] \caption{Results of flipped images comparison.} \label{tab:flipped} \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{ccccc} \hline \multirow{2}{*}{\textbf{Dataset}} & \multicolumn{4}{c}{\textbf{Resolution}} \\ & \textbf{32x24} & \textbf{40x30} & \textbf{80x60} & \textbf{160x120} \\ \hline SBT\_balanced\_120\_40t\_10f & 71.5\% & 71.5\% & 77.0\% & 77.0\% \\ SBT\_balanced\_120\_40t\_10f\_flip & 73.6\% & 77.0\% & 83.3\% & 70.8\% \\ \hline SBT\_balanced\_120\_30t\_10f & 75.9\% & 71.3\% & 71.3\% & 79.6\% \\ SBT\_balanced\_120\_30t\_10f\_flip & 76.8\% & 72.2\% & 81.5\% & 78.6\% \\ \hline \end{tabular}% } \end{table} Finally, we create balanced datasets with 240 samples and unbalanced datasets with 180 samples. Each type, balanced and unbalanced, combines three different depths and four resolutions, for a total of 24 datasets. Table~\ref{tab:resolution2} shows both the best result and the average result from three runs, for each resolution.
It is essential to clarify that the best result for each resolution may be achieved using a different depth. In this table, the value inside the parentheses indicates the depth value. The presented results show that the best results are obtained with higher resolutions and unbalanced datasets. Most of the results were achieved using depths of ten or thirty frames. The next experiments provide a more in-depth analysis of the best configurations, their performance, and statistical validation. \begin{table*}[tb] \centering \caption{\label{tab:resolution2}Best results comparison.} \resizebox{\textwidth}{!}{% \begin{tabular}{ccccccccc} \hline \multirow{3}{*}{\textbf{Dataset}} & \multicolumn{8}{c}{\textbf{Resolution}} \\ & \multicolumn{2}{c}{\textbf{32x24}} & \multicolumn{2}{c}{\textbf{40x30}} & \multicolumn{2}{c}{\textbf{80x60}} & \multicolumn{2}{c}{\textbf{160x120}} \\ \cline{2-9} & Individual & Average & Individual & Average & Individual & Average & Individual & Average \\ \hline Balanced Dataset & 83.3\% (90f) & 75.9\% (30f) & 74.0\% (90f) & 73.4\% (90f) & 87.0\% (30f) & 79.6\% (10f) & 87.0\% (30f) & 79.6\% (10f) \\ Unbalanced Dataset & 84.7\% (10f) & 77.7\% (30f) & 86.1\% (10f) & 76.3\% (30f) & 91.6\% (10f) & 87.0\% (10f) & 90.2\% (30f) & 86.1\% (30f) \\ \hline \end{tabular}% } \end{table*} \subsection{Statistical Validation} \label{sub:StatisticalValidation} Once the exploration tests end, we analyze the results to decide which parameters improve the classification and select the configurations with the best performance. As a second experiment, the prominent configurations were run thirty times, using cross-validation, to give statistical support to the previously presented results. For this experiment, the configurations train with the largest datasets we created (SBT\_balanced\_240\_30t and SBT\_unbalanced\_60s120n\_30t). Complementing this with cross-validation, we use ten folds of the dataset for training and testing.
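Such a ten-fold split can be sketched as follows. This is a minimal illustration with hypothetical names; the exact fold-assignment scheme used in the experiments is not specified beyond the use of ten folds.

```python
# Sketch of a 10-fold split: each sample index appears in exactly one test
# fold, and the remaining indices form the corresponding training set.
# Round-robin assignment is an assumption for illustration only.

def k_fold_indices(n_samples, k=10):
    """Return a list of (train_indices, test_indices) pairs."""
    indices = list(range(n_samples))
    folds = [indices[i::k] for i in range(k)]   # round-robin assignment
    splits = []
    for i in range(k):
        test = folds[i]
        test_set = set(test)
        train = [idx for idx in indices if idx not in test_set]
        splits.append((train, test))
    return splits

splits = k_fold_indices(240, k=10)   # e.g., the 240-sample balanced dataset
print(len(splits), len(splits[0][0]), len(splits[0][1]))
```

Running each configuration thirty times over such folds is what makes the averages and standard deviations in Table~\ref{tab:30runs} meaningful.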
From the previous results, four configurations were trained with the 240-sample balanced and 180-sample unbalanced datasets. The fixed parameters were 100 epochs for training and a split of 70\% of the samples for training and 30\% for testing; both datasets use the original and the flipped images. The ratio and number of samples per class can be inferred from the balance parameter (see Section~\ref{sub:Exploration}, balance and ratio). For this test, we perform thirty runs per configuration and use ten dataset folds for cross-validation. Table~\ref{tab:30runs} presents the average accuracy and the standard deviation of each configuration tested. Most of the results are around 70\% accuracy, and there is no significant deviation within each training group. The results seem very similar among them. We analyzed the confusion matrices to search for biased results; although we found cases where the classification results are biased toward a particular class, we also found good results. \begin{table}[ht!] \caption{Thirty runs training results.} \label{tab:30runs} \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{cccc} \hline \textbf{Resolution} & \textbf{Dataset} & \textbf{Avg Accuracy} & \textbf{Std Deviation} \\ \hline \multirow{4}{*}{160x120} & unb\_60s120n\_30t\_10f & 75.7\% & 0.0638 \\ & unb\_60s120n\_30t\_30f & 73.9\% & 0.0543 \\ & bal\_240\_30t\_10f & 73.1\% & 0.0661 \\ & bal\_240\_30t\_30f & 71.6\% & 0.0999 \\ \hline \multirow{4}{*}{80x60} & unb\_60s120n\_30t\_10f & 75.0\% & 0.0689 \\ & unb\_60s120n\_30t\_30f & 74.8\% & 0.0500 \\ & bal\_240\_30t\_10f & 73.0\% & 0.0717 \\ & bal\_240\_30t\_30f & 73.6\% & 0.0821 \\ \hline \multirow{4}{*}{40x30} & unb\_60s120n\_30t\_10f & 68.7\% & 0.0569 \\ & unb\_60s120n\_30t\_30f & 69.1\% & 0.0576 \\ & bal\_240\_30t\_10f & 71.8\% & 0.0468 \\ & bal\_240\_30t\_30f & 71.9\% & 0.0555 \\ \hline \multirow{4}{*}{32x24} & unb\_60s120n\_30t\_10f & 69.4\% & 0.0686 \\ & unb\_60s120n\_30t\_30f & 71.6\% & 0.0533 \\ & bal\_240\_30t\_10f & 70.3\% & 0.0476 \\ & bal\_240\_30t\_30f &
70.1\% & 0.0574 \\ \hline \end{tabular}% } \end{table} In this investigation, the 80$\times$60 resolution obtains the best results in the suspicious behavior detection task. It achieves accuracy rates above 85\% for both balanced and unbalanced datasets, preferably with a ten-frame depth. The best result in a single run is 92.5\% accuracy. This performance was obtained in the thirtieth run, using the unbalanced dataset and a ten-frame depth. Over the 30 runs, the model obtains an average accuracy of 75\%. Table~\ref{tab:confusionMatrix} exhibits the best results and their confusion matrices. In the confusion matrices, the accuracy per class is above 90\% for the suspicious-behavior class and around 80\% for the normal-behavior class. \begin{table}[htb] \centering \caption{Confusion matrices of the best results.} \label{tab:confusionMatrix} \resizebox{0.5\textwidth}{!}{% \begin{tabular}{cccc} \multicolumn{4}{l}{Dataset: \textbf{unb\_60s120n\_30t\_10f\_80x60}} \\ \multicolumn{4}{l}{Accuracy: \textbf{92.5\%}} \\ \hline \multicolumn{1}{c|}{} & \multicolumn{1}{c}{\textbf{Suspicious}} & \multicolumn{1}{c}{\textbf{Normal}} & \multicolumn{1}{c}{\textbf{Accuracy}} \\ \hline \multicolumn{1}{c|}{\textbf{Suspicious}} & \multicolumn{1}{c}{18} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{100\%} \\ \multicolumn{1}{c|}{\textbf{Normal}} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{88.9\%} \\ \hline \multicolumn{4}{l}{} \\ \multicolumn{4}{l}{Dataset: \textbf{bal\_240\_30t\_10f\_80x60}} \\ \multicolumn{4}{l}{Accuracy: \textbf{91.6\%}} \\ \hline \multicolumn{1}{c|}{\textbf{}} & \multicolumn{1}{c}{\textbf{Suspicious}} & \multicolumn{1}{c}{\textbf{Normal}} & \multicolumn{1}{c}{\textbf{Accuracy}} \\ \hline \multicolumn{1}{c|}{\textbf{Suspicious}} & \multicolumn{1}{c}{36} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{100\%} \\ \multicolumn{1}{c|}{\textbf{Normal}} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{30} & \multicolumn{1}{c}{83.3\%} \\ \hline \multicolumn{4}{l}{} \\
\multicolumn{4}{l}{Dataset: \textbf{unb\_60s120n\_30t\_10f\_80x60}} \\ \multicolumn{4}{l}{Accuracy: \textbf{90.7\%}} \\ \hline \multicolumn{1}{c|}{\textbf{}} & \multicolumn{1}{c}{\textbf{Suspicious}} & \multicolumn{1}{c}{\textbf{Normal}} & \multicolumn{1}{c}{\textbf{Accuracy}} \\ \hline \multicolumn{1}{c|}{\textbf{Suspicious}} & \multicolumn{1}{c}{18} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{100\%} \\ \multicolumn{1}{c|}{\textbf{Normal}} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{31} & \multicolumn{1}{c}{86.0\%} \\ \hline \multicolumn{4}{l}{} \\ \multicolumn{4}{l}{Dataset: \textbf{bal\_240\_30t\_10f\_80x60}} \\ \multicolumn{4}{l}{Accuracy: \textbf{90.2\%}} \\ \hline \multicolumn{1}{c|}{\textbf{}} & \multicolumn{1}{c}{\textbf{Suspicious}} & \multicolumn{1}{c}{\textbf{Normal}} & \multicolumn{1}{c}{\textbf{Accuracy}} \\ \hline \multicolumn{1}{c|}{\textbf{Suspicious}} & \multicolumn{1}{c}{36} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{100\%} \\ \multicolumn{1}{c|}{\textbf{Normal}} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{29} & \multicolumn{1}{c}{80.6\%} \\ \hline \end{tabular} } \end{table} \begin{table}[htb] \caption{Comparison between base model and the proposed one.} \label{tab:FinalComparisonWithMatrx} \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{ccccccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{}} & \multicolumn{4}{c}{\textbf{Balanced}} & \multicolumn{4}{c}{\textbf{Unbalanced}} \\ \multicolumn{1}{c}{} & \multicolumn{2}{c}{\textbf{Base model}} & \multicolumn{2}{c}{\textbf{Proposed}} & \multicolumn{2}{c}{\textbf{Base model}} & \multicolumn{2}{c}{\textbf{Proposed}} \\ \hline \multicolumn{1}{c}{\textbf{Avg Acc}} & \multicolumn{2}{c}{71.7\%} & \multicolumn{2}{c}{\textbf{73.0\%}} & \multicolumn{2}{c}{70.5\%} & \multicolumn{2}{c}{\textbf{75.0\%}} \\ \multicolumn{1}{c}{\textbf{Std Dev}} & \multicolumn{2}{c}{0.06} & \multicolumn{2}{c}{0.07} & \multicolumn{2}{c}{0.05} & \multicolumn{2}{c}{0.07} \\ \multicolumn{1}{c}{\textbf{Best Result}} & 
\multicolumn{2}{c}{81.9\%} & \multicolumn{2}{c}{91.6\%} & \multicolumn{2}{c}{88.8\%} & \multicolumn{2}{c}{92.5\%} \\ \hline \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{3}{c}{\textbf{Base Model}} & \multicolumn{1}{l}{} & \multicolumn{3}{c}{\textbf{Proposed}} \\ \multirow{3}{*}{\textbf{Balanced}} & & \multicolumn{1}{c}{} & \multicolumn{1}{|c}{Susp} & \multicolumn{1}{c}{Norm} & & \multicolumn{1}{c}{} & \multicolumn{1}{|c}{Susp} & \multicolumn{1}{c}{Norm} \\ \cline{3-5} \cline{7-9} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{Susp} & \multicolumn{1}{c}{34} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{Susp} & \multicolumn{1}{c}{36} & \multicolumn{1}{c}{0} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{Norm} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{25} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{Norm} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{30} \\ \cline{3-5} \cline{7-9} & & & & & & & & \\ \multirow{3}{*}{\textbf{Unbalanced}} & & \multicolumn{1}{c}{} & \multicolumn{1}{|c}{Susp} & \multicolumn{1}{c}{Norm} & & \multicolumn{1}{c}{} & \multicolumn{1}{|c}{Susp} & \multicolumn{1}{c}{Norm} \\ \cline{3-5} \cline{7-9} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{Susp} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{Susp} & \multicolumn{1}{c}{18} & \multicolumn{1}{c}{0} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{Norm} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{Norm} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{32} \\ \cline{3-5} \cline{7-9} \end{tabular} } \end{table} \subsection{Discussion} \label{sub:Discussion} As the first experiment in this work, we select a 3D Convolutional Neural Network with a basic configuration as a 
base model, and then we perform parameter tuning in search of model improvements. From the parameter exploration, we found that the 80x60 and 160x120 resolutions deliver better results than the commonly used lower resolutions. This experiment was limited to a maximum resolution of 160x120 due to processing resources. Another significant aspect is the ``depth'' parameter, which describes the number of consecutive frames used to perform the 3D convolution. After testing different values, we observed that low values, between 10 and 30 frames, offer a good trade-off between detail and processing time. These two factors affect both the training of the network model and the correct classification of the samples. The proposed model also handles flipped images and unbalanced datasets correctly. We performed a more realistic simulation in which the dataset has more normal-behavior samples than suspicious-behavior ones, and these unbalanced datasets were also correctly classified.

For the second experiment, we took the best-performing configurations and tested them on bigger datasets. We performed 30 runs for each configuration, applying cross-validation, with depths of 10 and 30 frames, using a 240-sample balanced dataset and a 180-sample unbalanced dataset. From this experimentation, we found that the 80x60 resolution reports better accuracy in all four scenarios we tested. Table~\ref{tab:30runs} presents the average accuracy for each configuration. The average performance across the four scenarios is 74.1\% at 80x60 resolution, while 160x120 obtains 73.5\%. Also, in a single training, the 80x60 resolution achieves over 90\%: 92.5\% for the unbalanced dataset and 91.6\% for the balanced one.
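For reference, the overall accuracy of the best single run follows directly from its confusion matrix in Table~\ref{tab:confusionMatrix}: with 18 suspicious and 32 normal samples classified correctly out of 54,
\[
\mathrm{Accuracy} = \frac{18 + 32}{18 + 0 + 4 + 32} = \frac{50}{54} \approx 92.5\%.
\]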
Finally, when comparing the base model against the proposed one (Table~\ref{tab:FinalComparisonWithMatrx}), we find that our model improves the classification results by 1.3\% and 4.5\% on average for the balanced and unbalanced datasets, respectively. In the best-single-training comparison, the proposed architecture exceeds 90\% accuracy in both cases. The confusion matrices show that, for both balanced and unbalanced datasets, the proposed architecture classifies 100\% of the suspicious samples correctly and produces few false positives among the normal-behavior samples.

\subsection{Processing Time}
\label{ProcessingTime}

As mentioned before, we use Google Colaboratory to perform the experiments. This tool is based on Jupyter Notebooks and allows free use of a GPU. The speed of each training depends on the demand on the tool. Most network trainings finish in less than an hour, but higher GPU demand may lengthen the training time. We were not able to establish a relationship between resolution and training time, but we observed an approximate correlation between different depths. Table~\ref{tab:TrainingTime} shows the average training times of the final tests described in section~\ref{sub:Exploration}, running one hundred epochs (accuracy results of these experiments are shown in Table~\ref{tab:resolution2}). Comparing training with a 10-frame depth against 30 frames, the training time increases approximately threefold; we find the same ratio when comparing 30-frame and 90-frame trainings. Some 90-frame trainings, with a hundred epochs and high resolution, have lasted up to four hours. Training duration is an essential factor because the dataset used is considerably small. Another point to consider is the system's accuracy relative to the training time.
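The approximately threefold increase in training time is consistent with the per-sample input volume, which grows linearly with the depth parameter; at 80x60 resolution, for example,
\[
\frac{80 \times 60 \times 30}{80 \times 60 \times 10} = \frac{144{,}000}{48{,}000} = 3.
\]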
Although the training time roughly triples for the same number of epochs, the accuracy is usually lower when using a 90-frame depth. In some instances we get higher accuracy using 90 frames, but the accuracy reached with smaller depths is not far from the best one, and the training time is considerably lower.

\begin{table}[ht!]
\centering
\caption{Comparison of average training times in seconds between different depths and resolutions.}
\label{tab:TrainingTime}
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{c|c|rrrr}
\hline
\multicolumn{2}{c}{\multirow{2}{*}{\textbf{Dataset}}} & \multicolumn{4}{c}{\textbf{Resolution}} \\
\multicolumn{2}{c}{} & \multicolumn{1}{c}{\textbf{32x24}} & \multicolumn{1}{c}{\textbf{40x30}} & \multicolumn{1}{c}{\textbf{80x60}} & \multicolumn{1}{c}{\textbf{160x120}} \\ \hline
\multirow{3}{*}{\textbf{SBT\_unbalanced\_60120...}} & \textbf{10f} & 96 & 126 & 369 & 1,356 \\
 & \textbf{30f} & 196 & 279 & 1,027 & 3,918 \\
 & \textbf{90f} & 518 & 758 & 2,929 & 11,655 \\ \hline
\multirow{3}{*}{\textbf{SBT\_balanced\_240...}} & \textbf{10f} & 118 & 157 & 475 & 1,714 \\
 & \textbf{30f} & 257 & 364 & 1,304 & 4,952 \\
 & \textbf{90f} & 688 & 1,011 & 3,879 & 15,415 \\ \hline
\end{tabular}%
}
\end{table}

\section{Conclusion}
\label{sec:Conclusion}

In this work, we focus on the behavior a person performs before committing a shoplifting crime. The neural network model identifies this prior conduct, looking for suspicious behavior rather than recognizing the crime itself. This focus on behavior is the principal reason why we remove the committed-crime segment from the video samples: it allows the model to concentrate on the decisive conduct and not on the offense. We implement a 3D Convolutional Neural Network because of its capability to obtain abstract features from signals and images, building on previous approaches to action recognition and movement detection.
Based on the results obtained from the conducted experimentation, an average accuracy of 75\% in suspicious behavior detection, we can state that it is possible to model the suspicious behavior of a person in the shoplifting context. Through the presented experimentation, we found which parameters fit behavior analysis best, particularly in the shoplifting context. We explored different parameters and configurations and, in the end, compared our results against a reference 3D Convolutional architecture. The proposed model demonstrates better performance on both balanced and unbalanced datasets when using the particular configuration obtained from the previous experiments. The final goal of this experimentation is the development of a tool capable of supporting surveillance staff by presenting visual behavioral cues, and this work is a first step toward that goal. From this point, we will explore different aspects that will contribute to the project's development, such as bigger datasets, additional criminal contexts that present suspicious behavior, and real-time tests.

\subsection{Future Work}
\label{sub:FutureWork}

For these experiments, we used a selected number of videos from the UCF-Crimes dataset. To test a more realistic simulation, we have to increase the number of samples, preferably the normal-behavior ones, to create a greater class imbalance. Another interesting direction for this project is to expand our behavior detection model to other contexts. There are many situations in which suspicious behavior can be found, such as stealing, arson attempts, and burglary, among others. We will gather videos of different contexts to strengthen the capability to detect suspicious behavior.

\bibliographystyle{unsrt}
# If Earth's Moon Were Ganymede-Like, Part I: Rotation

I know that in the past I posted questions on specific bodies larger than the moon--Mars in one and Titan in another--but this series deals with how Earth is affected by a natural satellite like Ganymede in two ways: a diameter of 3,274 miles and an orbit around its parent at a distance of about 665,000 miles. The major fundamental difference is that this moon has a five-mile-deep mantle separating its all-iron core from its rocky crust.

In Episode 1, we look at how the moon shapes Earth's rotation. Before the Theia impact event, Earth spun at a daily rate of six hours. After Theia, the moon started life orbiting Earth at a distance of 15,000 miles and has been receding at a speed of 1.6 inches--four centimeters--per year. As a result, a day has been getting longer by 2.3 milliseconds per century since the eighth century before the Common Era. Whereas sea scorpions and trilobites swarmed the oceans under a 21-hour day, Stegosaurus may have been munching on cycads under a 23-hour day. Now that this 2,159-mile-wide body of rock orbits Earth at a distance of 238,900 miles, we walk and run under a day lasting 24 hours.

Can an alternate Earth still have a 24-hour day with the moon described in the first paragraph above? And how will its larger size affect the rate of disintegration?

This question asks for hard science. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information.

## 1 Answer

Short answer: "All else being equal", it looks to me like making Bizarro-Moon twice as big and three times as far away as the actual Moon can't happen -- not even if you spin the Earth down completely (i.e. to the point where the Earth is tidally locked to the Moon, the way the Moon is already locked to the Earth).

So all else can't be equal. But there are a lot of things you could change, and you can get just about any answer you want. So 24-hour days are just fine.

Long answer: Bizarro-Moon is about twice as big and three times as far away^a, so we're talking three or four times the orbit angular momentum (which is proportional to mass and to the square root of distance^c).

The actual Earth lost about three quarters of its spin angular momentum (which is inversely proportional to spin period^d): days went from 6 hours^b to 24 hours. And the Moon also lost its own spin completely and is tidally locked -- but the Moon is a lot smaller than the Earth (angular momentum proportional to mass^d), so the Earth's spin angular momentum is a lot bigger. Bizarro-Moon is somewhat bigger but still a lot smaller than the Earth.

So: most of the Moon's orbit angular momentum came from the Earth's spin, most of that spin is gone now, and the result only got the actual-sized Moon to its actual position. You need three or four times more of that to get Bizarro-Moon where you want it -- so it won't get there, even if the Earth becomes completely tidally locked.

But maybe Bizarro-Earth is somewhat bigger than the actual Earth, and maybe the collision that produced Bizarro-Moon imparted a faster initial spin to Bizarro-Earth and/or a greater initial orbital angular momentum to Bizarro-Moon.

There's also the problem that Bizarro-Moon might be orbiting so far from Earth that its orbit is unstable due to Jupiter perturbing it over time^e. But again, maybe Bizarro-Earth is somewhat bigger, so objects can have larger stable orbits.

You also talked about the rate of disintegration^f, which I take to mean how much slower the Earth spins each year.

"All else being equal", if Bizarro-Moon is twice as massive, the rate of slowdown will be four times as fast (quadratic relation). If it's three times as far away, the rate of spin slowdown will be hundreds of times slower (negative sixth power)^g.

There's also an effect from the internal composition of Bizarro-Earth. Maybe Bizarro-Earth has a different composition. Note that the internal composition of Bizarro-Moon won't matter to the Earth's spin^h, but it would have mattered to how fast Bizarro-Moon lost its own spin (e.g. how big are Ganymede's oceans really^i?)

The dominant effect for the current rate is the fact that it is three times further away now, so the transfer of momentum is going to be much, much slower today. But in the past, when Bizarro-Moon was closer to Bizarro-Earth, the other things mattered.

Notes:

^a Using OP's figures... After writing all of this, I later realized this wasn't clear. Just looking at OP's figures for diameter, you might conclude Bizarro-Moon is about three times more massive than the actual Moon, not two (about 50% bigger diameter than the actual Moon, and mass is proportional to diameter cubed). But I was also assuming that Bizarro-Moon has a density similar to Ganymede's, which (not stated by OP) is about 2/3 the density of the Moon.

^b Again using OP's figures, for the impact event -- I am not endorsing the giant impact hypothesis.

^c Using the basic definition of angular momentum and Kepler's 3rd law: $$L = mr^2\omega$$ and $$\omega^2r^3 = GM$$.

^d For spin angular momentum, $$L = \alpha mr^2\omega$$, which is the same as orbit angular momentum, other than the fudge factor $$\alpha$$, which will be somewhat less than 0.4 (exactly 2/5 for a sphere of uniform density), but not much less, for a rocky moon or planet (it can be quite small for a gas giant or star). See moment of inertia factor.

^e Basically, outside 1.5 million km, you are orbiting the Sun, not the Earth. OP's figure for Bizarro-Moon is still inside this sphere. However, it is not 'comfortably' so (about 2/3 of the way) -- this sphere is a theoretical limit based on three bodies (Moon orbiting Earth orbiting Sun), while in reality other planets (especially Jupiter) perturb things in a complicated way. It is believed that anything 1/2 of the way or more to the theoretical limit will end up being unstable over long periods of time. See Hill sphere.

^f Although the Moon's orbit is spiraling outwards, it is not really "disintegrating" in the sense that it will eventually cease to orbit the Earth. As previously explained, it has already gotten most of the Earth's spin angular momentum, so in the future, when the Earth becomes tidally locked to the Moon, it won't be all that much further away than it is already, i.e. still comfortably inside the Hill sphere of the Earth (see previous note).

^g For tidal torque see here, esp. formulas 6.92 and 6.87.

^h See previous note and link for tidal torque. The formulas cited contain fudge factors: these are the "tidal phase angle" $$\delta$$ and the "effective rigidity" $$\tilde{\mu}$$, and these relate to the structure of the spinning object that is slowing down, not the structure of the object producing the tides that cause the slowdown.

^i Oceans matter! These have an effect on the "tidal phase angle". See previous two notes. Also see here for Ganymede specifically.

- Welcome to Worldbuilding! As this is a 'hard-science' question, do you have any references for the conclusions you've got here? – Mithrandir24601 Nov 1 '18 at 22:26
- @Mithrandir24601 -- thank you, glad to be here! For a lot of it I was just taking OP's figures and assumptions uncritically. I am not endorsing his or anyone else's version of the giant impact hypothesis. For the rest, I have added some inline notes to be a bit clearer as to how I got to that conclusion. Hope this helps! – TimeTravellyParadoxySciFiSmeg Nov 2 '18 at 20:32
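As a rough check of the answer's "three or four times the orbit angular momentum" claim, here is a minimal sketch using the question's figures. It assumes a Ganymede-like density for Bizarro-Moon (per note a) and the scaling L ∝ m√r (per note c); the density values are rounded textbook numbers, not taken from the post.

```python
import math

# Figures from the question (miles)
d_moon, d_biz = 2159.0, 3274.0        # diameters
r_moon, r_biz = 238_900.0, 665_000.0  # orbital distances

# Assumed mean densities in g/cm^3 (not in the post; rounded textbook values)
rho_moon, rho_ganymede = 3.34, 1.94

# Mass scales with diameter cubed times density (note a)
mass_ratio = (d_biz / d_moon) ** 3 * (rho_ganymede / rho_moon)

# Orbit angular momentum L = m * r^2 * omega with omega^2 * r^3 = G*M
# gives L proportional to m * sqrt(r) (note c)
L_ratio = mass_ratio * math.sqrt(r_biz / r_moon)

# Hill-sphere check (note e): distance as a fraction of the ~1.5M km limit
hill_fraction = r_biz * 1.60934 / 1.5e6

print(f"mass ratio    ~ {mass_ratio:.2f}")     # ~2: "twice as big"
print(f"L ratio       ~ {L_ratio:.2f}")        # ~3.4: "three or four times"
print(f"Hill fraction ~ {hill_fraction:.2f}")  # ~0.71: "about 2/3 of the way"
```

The numbers line up with the answer's prose: roughly twice the mass, between three and four times the orbital angular momentum, and an orbit about two thirds of the way to the Hill-sphere limit.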
package v1 import ( fmt "fmt" v1 "github.com/census-instrumentation/opencensus-proto/gen-go/resource/v1" proto "github.com/golang/protobuf/proto" timestamp "github.com/golang/protobuf/ptypes/timestamp" wrappers "github.com/golang/protobuf/ptypes/wrappers" math "math" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package // The kind of metric. It describes how the data is reported. // // A gauge is an instantaneous measurement of a value. // // A cumulative measurement is a value accumulated over a time interval. In // a time series, cumulative measurements should have the same start time, // increasing values and increasing end times, until an event resets the // cumulative value to zero and sets a new start time for the following // points. type MetricDescriptor_Type int32 const ( // Do not use this default value. MetricDescriptor_UNSPECIFIED MetricDescriptor_Type = 0 // Integer gauge. The value can go both up and down. MetricDescriptor_GAUGE_INT64 MetricDescriptor_Type = 1 // Floating point gauge. The value can go both up and down. MetricDescriptor_GAUGE_DOUBLE MetricDescriptor_Type = 2 // Distribution gauge measurement. The count and sum can go both up and // down. Recorded values are always >= 0. // Used in scenarios like a snapshot of time the current items in a queue // have spent there. MetricDescriptor_GAUGE_DISTRIBUTION MetricDescriptor_Type = 3 // Integer cumulative measurement. The value cannot decrease, if resets // then the start_time should also be reset. MetricDescriptor_CUMULATIVE_INT64 MetricDescriptor_Type = 4 // Floating point cumulative measurement. 
The value cannot decrease, if // resets then the start_time should also be reset. Recorded values are // always >= 0. MetricDescriptor_CUMULATIVE_DOUBLE MetricDescriptor_Type = 5 // Distribution cumulative measurement. The count and sum cannot decrease, // if resets then the start_time should also be reset. MetricDescriptor_CUMULATIVE_DISTRIBUTION MetricDescriptor_Type = 6 // Some frameworks implemented Histograms as a summary of observations // (usually things like request durations and response sizes). While it // also provides a total count of observations and a sum of all observed // values, it calculates configurable percentiles over a sliding time // window. This is not recommended, since it cannot be aggregated. MetricDescriptor_SUMMARY MetricDescriptor_Type = 7 ) var MetricDescriptor_Type_name = map[int32]string{ 0: "UNSPECIFIED", 1: "GAUGE_INT64", 2: "GAUGE_DOUBLE", 3: "GAUGE_DISTRIBUTION", 4: "CUMULATIVE_INT64", 5: "CUMULATIVE_DOUBLE", 6: "CUMULATIVE_DISTRIBUTION", 7: "SUMMARY", } var MetricDescriptor_Type_value = map[string]int32{ "UNSPECIFIED": 0, "GAUGE_INT64": 1, "GAUGE_DOUBLE": 2, "GAUGE_DISTRIBUTION": 3, "CUMULATIVE_INT64": 4, "CUMULATIVE_DOUBLE": 5, "CUMULATIVE_DISTRIBUTION": 6, "SUMMARY": 7, } func (x MetricDescriptor_Type) String() string { return proto.EnumName(MetricDescriptor_Type_name, int32(x)) } func (MetricDescriptor_Type) EnumDescriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{1, 0} } // Defines a Metric which has one or more timeseries. type Metric struct { // The descriptor of the Metric. // TODO(issue #152): consider only sending the name of descriptor for // optimization. MetricDescriptor *MetricDescriptor `protobuf:"bytes,1,opt,name=metric_descriptor,json=metricDescriptor,proto3" json:"metric_descriptor,omitempty"` // One or more timeseries for a single metric, where each timeseries has // one or more points. 
Timeseries []*TimeSeries `protobuf:"bytes,2,rep,name=timeseries,proto3" json:"timeseries,omitempty"` // The resource for the metric. If unset, it may be set to a default value // provided for a sequence of messages in an RPC stream. Resource *v1.Resource `protobuf:"bytes,3,opt,name=resource,proto3" json:"resource,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Metric) Reset() { *m = Metric{} } func (m *Metric) String() string { return proto.CompactTextString(m) } func (*Metric) ProtoMessage() {} func (*Metric) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{0} } func (m *Metric) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Metric.Unmarshal(m, b) } func (m *Metric) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Metric.Marshal(b, m, deterministic) } func (m *Metric) XXX_Merge(src proto.Message) { xxx_messageInfo_Metric.Merge(m, src) } func (m *Metric) XXX_Size() int { return xxx_messageInfo_Metric.Size(m) } func (m *Metric) XXX_DiscardUnknown() { xxx_messageInfo_Metric.DiscardUnknown(m) } var xxx_messageInfo_Metric proto.InternalMessageInfo func (m *Metric) GetMetricDescriptor() *MetricDescriptor { if m != nil { return m.MetricDescriptor } return nil } func (m *Metric) GetTimeseries() []*TimeSeries { if m != nil { return m.Timeseries } return nil } func (m *Metric) GetResource() *v1.Resource { if m != nil { return m.Resource } return nil } // Defines a metric type and its schema. type MetricDescriptor struct { // The metric type, including its DNS name prefix. It must be unique. Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // A detailed description of the metric, which can be used in documentation. Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"` // The unit in which the metric value is reported. 
Follows the format // described by http://unitsofmeasure.org/ucum.html. Unit string `protobuf:"bytes,3,opt,name=unit,proto3" json:"unit,omitempty"` Type MetricDescriptor_Type `protobuf:"varint,4,opt,name=type,proto3,enum=opencensus.proto.metrics.v1.MetricDescriptor_Type" json:"type,omitempty"` // The label keys associated with the metric descriptor. LabelKeys []*LabelKey `protobuf:"bytes,5,rep,name=label_keys,json=labelKeys,proto3" json:"label_keys,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MetricDescriptor) Reset() { *m = MetricDescriptor{} } func (m *MetricDescriptor) String() string { return proto.CompactTextString(m) } func (*MetricDescriptor) ProtoMessage() {} func (*MetricDescriptor) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{1} } func (m *MetricDescriptor) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MetricDescriptor.Unmarshal(m, b) } func (m *MetricDescriptor) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MetricDescriptor.Marshal(b, m, deterministic) } func (m *MetricDescriptor) XXX_Merge(src proto.Message) { xxx_messageInfo_MetricDescriptor.Merge(m, src) } func (m *MetricDescriptor) XXX_Size() int { return xxx_messageInfo_MetricDescriptor.Size(m) } func (m *MetricDescriptor) XXX_DiscardUnknown() { xxx_messageInfo_MetricDescriptor.DiscardUnknown(m) } var xxx_messageInfo_MetricDescriptor proto.InternalMessageInfo func (m *MetricDescriptor) GetName() string { if m != nil { return m.Name } return "" } func (m *MetricDescriptor) GetDescription() string { if m != nil { return m.Description } return "" } func (m *MetricDescriptor) GetUnit() string { if m != nil { return m.Unit } return "" } func (m *MetricDescriptor) GetType() MetricDescriptor_Type { if m != nil { return m.Type } return MetricDescriptor_UNSPECIFIED } func (m *MetricDescriptor) GetLabelKeys() []*LabelKey { if m != nil { return 
m.LabelKeys } return nil } // Defines a label key associated with a metric descriptor. type LabelKey struct { // The key for the label. Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` // A human-readable description of what this label key represents. Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *LabelKey) Reset() { *m = LabelKey{} } func (m *LabelKey) String() string { return proto.CompactTextString(m) } func (*LabelKey) ProtoMessage() {} func (*LabelKey) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{2} } func (m *LabelKey) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LabelKey.Unmarshal(m, b) } func (m *LabelKey) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LabelKey.Marshal(b, m, deterministic) } func (m *LabelKey) XXX_Merge(src proto.Message) { xxx_messageInfo_LabelKey.Merge(m, src) } func (m *LabelKey) XXX_Size() int { return xxx_messageInfo_LabelKey.Size(m) } func (m *LabelKey) XXX_DiscardUnknown() { xxx_messageInfo_LabelKey.DiscardUnknown(m) } var xxx_messageInfo_LabelKey proto.InternalMessageInfo func (m *LabelKey) GetKey() string { if m != nil { return m.Key } return "" } func (m *LabelKey) GetDescription() string { if m != nil { return m.Description } return "" } // A collection of data points that describes the time-varying values // of a metric. type TimeSeries struct { // Must be present for cumulative metrics. The time when the cumulative value // was reset to zero. Exclusive. The cumulative value is over the time interval // (start_timestamp, timestamp]. If not specified, the backend can use the // previous recorded value. 
StartTimestamp *timestamp.Timestamp `protobuf:"bytes,1,opt,name=start_timestamp,json=startTimestamp,proto3" json:"start_timestamp,omitempty"` // The set of label values that uniquely identify this timeseries. Applies to // all points. The order of label values must match that of label keys in the // metric descriptor. LabelValues []*LabelValue `protobuf:"bytes,2,rep,name=label_values,json=labelValues,proto3" json:"label_values,omitempty"` // The data points of this timeseries. Point.value type MUST match the // MetricDescriptor.type. Points []*Point `protobuf:"bytes,3,rep,name=points,proto3" json:"points,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *TimeSeries) Reset() { *m = TimeSeries{} } func (m *TimeSeries) String() string { return proto.CompactTextString(m) } func (*TimeSeries) ProtoMessage() {} func (*TimeSeries) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{3} } func (m *TimeSeries) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_TimeSeries.Unmarshal(m, b) } func (m *TimeSeries) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_TimeSeries.Marshal(b, m, deterministic) } func (m *TimeSeries) XXX_Merge(src proto.Message) { xxx_messageInfo_TimeSeries.Merge(m, src) } func (m *TimeSeries) XXX_Size() int { return xxx_messageInfo_TimeSeries.Size(m) } func (m *TimeSeries) XXX_DiscardUnknown() { xxx_messageInfo_TimeSeries.DiscardUnknown(m) } var xxx_messageInfo_TimeSeries proto.InternalMessageInfo func (m *TimeSeries) GetStartTimestamp() *timestamp.Timestamp { if m != nil { return m.StartTimestamp } return nil } func (m *TimeSeries) GetLabelValues() []*LabelValue { if m != nil { return m.LabelValues } return nil } func (m *TimeSeries) GetPoints() []*Point { if m != nil { return m.Points } return nil } type LabelValue struct { // The value for the label. 
Value string `protobuf:"bytes,1,opt,name=value,proto3" json:"value,omitempty"`
	// If false the value field is ignored and considered not set.
	// This is used to differentiate a missing label from an empty string.
	HasValue             bool     `protobuf:"varint,2,opt,name=has_value,json=hasValue,proto3" json:"has_value,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}

func (m *LabelValue) Reset()         { *m = LabelValue{} }
func (m *LabelValue) String() string { return proto.CompactTextString(m) }
func (*LabelValue) ProtoMessage()    {}
func (*LabelValue) Descriptor() ([]byte, []int) {
	return fileDescriptor_0ee3deb72053811a, []int{4}
}

func (m *LabelValue) XXX_Unmarshal(b []byte) error {
	return xxx_messageInfo_LabelValue.Unmarshal(m, b)
}
func (m *LabelValue) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	return xxx_messageInfo_LabelValue.Marshal(b, m, deterministic)
}
func (m *LabelValue) XXX_Merge(src proto.Message) {
	xxx_messageInfo_LabelValue.Merge(m, src)
}
func (m *LabelValue) XXX_Size() int {
	return xxx_messageInfo_LabelValue.Size(m)
}
func (m *LabelValue) XXX_DiscardUnknown() {
	xxx_messageInfo_LabelValue.DiscardUnknown(m)
}

var xxx_messageInfo_LabelValue proto.InternalMessageInfo

func (m *LabelValue) GetValue() string {
	if m != nil {
		return m.Value
	}
	return ""
}

func (m *LabelValue) GetHasValue() bool {
	if m != nil {
		return m.HasValue
	}
	return false
}

// A timestamped measurement.
type Point struct {
	// The moment when this point was recorded. Inclusive.
	// If not specified, the timestamp will be decided by the backend.
	Timestamp *timestamp.Timestamp `protobuf:"bytes,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
	// The actual point value.
	//
	// Types that are valid to be assigned to Value:
	//	*Point_Int64Value
	//	*Point_DoubleValue
	//	*Point_DistributionValue
	//	*Point_SummaryValue
	Value                isPoint_Value `protobuf_oneof:"value"`
	XXX_NoUnkeyedLiteral struct{}      `json:"-"`
	XXX_unrecognized     []byte        `json:"-"`
	XXX_sizecache        int32         `json:"-"`
}

func (m *Point) Reset()         { *m = Point{} }
func (m *Point) String() string { return proto.CompactTextString(m) }
func (*Point) ProtoMessage()    {}
func (*Point) Descriptor() ([]byte, []int) {
	return fileDescriptor_0ee3deb72053811a, []int{5}
}

func (m *Point) XXX_Unmarshal(b []byte) error {
	return xxx_messageInfo_Point.Unmarshal(m, b)
}
func (m *Point) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	return xxx_messageInfo_Point.Marshal(b, m, deterministic)
}
func (m *Point) XXX_Merge(src proto.Message) {
	xxx_messageInfo_Point.Merge(m, src)
}
func (m *Point) XXX_Size() int {
	return xxx_messageInfo_Point.Size(m)
}
func (m *Point) XXX_DiscardUnknown() {
	xxx_messageInfo_Point.DiscardUnknown(m)
}

var xxx_messageInfo_Point proto.InternalMessageInfo

func (m *Point) GetTimestamp() *timestamp.Timestamp {
	if m != nil {
		return m.Timestamp
	}
	return nil
}

type isPoint_Value interface {
	isPoint_Value()
}

type Point_Int64Value struct {
	Int64Value int64 `protobuf:"varint,2,opt,name=int64_value,json=int64Value,proto3,oneof"`
}

type Point_DoubleValue struct {
	DoubleValue float64 `protobuf:"fixed64,3,opt,name=double_value,json=doubleValue,proto3,oneof"`
}

type Point_DistributionValue struct {
	DistributionValue *DistributionValue `protobuf:"bytes,4,opt,name=distribution_value,json=distributionValue,proto3,oneof"`
}

type Point_SummaryValue struct {
	SummaryValue *SummaryValue `protobuf:"bytes,5,opt,name=summary_value,json=summaryValue,proto3,oneof"`
}

func (*Point_Int64Value) isPoint_Value() {}

func (*Point_DoubleValue) isPoint_Value() {}

func (*Point_DistributionValue) isPoint_Value() {}

func (*Point_SummaryValue) isPoint_Value() {}

func (m *Point) GetValue() isPoint_Value {
	if m != nil {
		return m.Value
	}
	return nil
}

func (m *Point) GetInt64Value() int64 {
	if x, ok := m.GetValue().(*Point_Int64Value); ok {
		return x.Int64Value
	}
	return 0
}

func (m *Point) GetDoubleValue() float64 {
	if x, ok := m.GetValue().(*Point_DoubleValue); ok {
		return x.DoubleValue
	}
	return 0
}

func (m *Point) GetDistributionValue() *DistributionValue {
	if x, ok := m.GetValue().(*Point_DistributionValue); ok {
		return x.DistributionValue
	}
	return nil
}

func (m *Point) GetSummaryValue() *SummaryValue {
	if x, ok := m.GetValue().(*Point_SummaryValue); ok {
		return x.SummaryValue
	}
	return nil
}

// XXX_OneofWrappers is for the internal use of the proto package.
func (*Point) XXX_OneofWrappers() []interface{} {
	return []interface{}{
		(*Point_Int64Value)(nil),
		(*Point_DoubleValue)(nil),
		(*Point_DistributionValue)(nil),
		(*Point_SummaryValue)(nil),
	}
}

// Distribution contains summary statistics for a population of values. It
// optionally contains a histogram representing the distribution of those
// values across a set of buckets.
type DistributionValue struct {
	// The number of values in the population. Must be non-negative. This value
	// must equal the sum of the values in bucket_counts if a histogram is
	// provided.
	Count int64 `protobuf:"varint,1,opt,name=count,proto3" json:"count,omitempty"`
	// The sum of the values in the population. If count is zero then this field
	// must be zero.
	Sum float64 `protobuf:"fixed64,2,opt,name=sum,proto3" json:"sum,omitempty"`
	// The sum of squared deviations from the mean of the values in the
	// population. For values x_i this is:
	//
	//     Sum[i=1..n]((x_i - mean)^2)
	//
	// Knuth, "The Art of Computer Programming", Vol. 2, page 323, 3rd edition
	// describes Welford's method for accumulating this sum in one pass.
	//
	// If count is zero then this field must be zero.
SumOfSquaredDeviation float64 `protobuf:"fixed64,3,opt,name=sum_of_squared_deviation,json=sumOfSquaredDeviation,proto3" json:"sum_of_squared_deviation,omitempty"` // Don't change bucket boundaries within a TimeSeries if your backend doesn't // support this. // TODO(issue #152): consider not required to send bucket options for // optimization. BucketOptions *DistributionValue_BucketOptions `protobuf:"bytes,4,opt,name=bucket_options,json=bucketOptions,proto3" json:"bucket_options,omitempty"` // If the distribution does not have a histogram, then omit this field. // If there is a histogram, then the sum of the values in the Bucket counts // must equal the value in the count field of the distribution. Buckets []*DistributionValue_Bucket `protobuf:"bytes,5,rep,name=buckets,proto3" json:"buckets,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DistributionValue) Reset() { *m = DistributionValue{} } func (m *DistributionValue) String() string { return proto.CompactTextString(m) } func (*DistributionValue) ProtoMessage() {} func (*DistributionValue) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{6} } func (m *DistributionValue) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DistributionValue.Unmarshal(m, b) } func (m *DistributionValue) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DistributionValue.Marshal(b, m, deterministic) } func (m *DistributionValue) XXX_Merge(src proto.Message) { xxx_messageInfo_DistributionValue.Merge(m, src) } func (m *DistributionValue) XXX_Size() int { return xxx_messageInfo_DistributionValue.Size(m) } func (m *DistributionValue) XXX_DiscardUnknown() { xxx_messageInfo_DistributionValue.DiscardUnknown(m) } var xxx_messageInfo_DistributionValue proto.InternalMessageInfo func (m *DistributionValue) GetCount() int64 { if m != nil { return m.Count } return 0 } func (m *DistributionValue) GetSum() 
float64 { if m != nil { return m.Sum } return 0 } func (m *DistributionValue) GetSumOfSquaredDeviation() float64 { if m != nil { return m.SumOfSquaredDeviation } return 0 } func (m *DistributionValue) GetBucketOptions() *DistributionValue_BucketOptions { if m != nil { return m.BucketOptions } return nil } func (m *DistributionValue) GetBuckets() []*DistributionValue_Bucket { if m != nil { return m.Buckets } return nil } // A Distribution may optionally contain a histogram of the values in the // population. The bucket boundaries for that histogram are described by // BucketOptions. // // If bucket_options has no type, then there is no histogram associated with // the Distribution. type DistributionValue_BucketOptions struct { // Types that are valid to be assigned to Type: // *DistributionValue_BucketOptions_Explicit_ Type isDistributionValue_BucketOptions_Type `protobuf_oneof:"type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DistributionValue_BucketOptions) Reset() { *m = DistributionValue_BucketOptions{} } func (m *DistributionValue_BucketOptions) String() string { return proto.CompactTextString(m) } func (*DistributionValue_BucketOptions) ProtoMessage() {} func (*DistributionValue_BucketOptions) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{6, 0} } func (m *DistributionValue_BucketOptions) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DistributionValue_BucketOptions.Unmarshal(m, b) } func (m *DistributionValue_BucketOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DistributionValue_BucketOptions.Marshal(b, m, deterministic) } func (m *DistributionValue_BucketOptions) XXX_Merge(src proto.Message) { xxx_messageInfo_DistributionValue_BucketOptions.Merge(m, src) } func (m *DistributionValue_BucketOptions) XXX_Size() int { return xxx_messageInfo_DistributionValue_BucketOptions.Size(m) } func (m 
*DistributionValue_BucketOptions) XXX_DiscardUnknown() { xxx_messageInfo_DistributionValue_BucketOptions.DiscardUnknown(m) } var xxx_messageInfo_DistributionValue_BucketOptions proto.InternalMessageInfo type isDistributionValue_BucketOptions_Type interface { isDistributionValue_BucketOptions_Type() } type DistributionValue_BucketOptions_Explicit_ struct { Explicit *DistributionValue_BucketOptions_Explicit `protobuf:"bytes,1,opt,name=explicit,proto3,oneof"` } func (*DistributionValue_BucketOptions_Explicit_) isDistributionValue_BucketOptions_Type() {} func (m *DistributionValue_BucketOptions) GetType() isDistributionValue_BucketOptions_Type { if m != nil { return m.Type } return nil } func (m *DistributionValue_BucketOptions) GetExplicit() *DistributionValue_BucketOptions_Explicit { if x, ok := m.GetType().(*DistributionValue_BucketOptions_Explicit_); ok { return x.Explicit } return nil } // XXX_OneofWrappers is for the internal use of the proto package. func (*DistributionValue_BucketOptions) XXX_OneofWrappers() []interface{} { return []interface{}{ (*DistributionValue_BucketOptions_Explicit_)(nil), } } // Specifies a set of buckets with arbitrary upper-bounds. // This defines size(bounds) + 1 (= N) buckets. The boundaries for bucket // index i are: // // [0, bucket_bounds[i]) for i == 0 // [bucket_bounds[i-1], bucket_bounds[i]) for 0 < i < N-1 // [bucket_bounds[i], +infinity) for i == N-1 type DistributionValue_BucketOptions_Explicit struct { // The values must be strictly increasing and > 0. 
Bounds []float64 `protobuf:"fixed64,1,rep,packed,name=bounds,proto3" json:"bounds,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DistributionValue_BucketOptions_Explicit) Reset() { *m = DistributionValue_BucketOptions_Explicit{} } func (m *DistributionValue_BucketOptions_Explicit) String() string { return proto.CompactTextString(m) } func (*DistributionValue_BucketOptions_Explicit) ProtoMessage() {} func (*DistributionValue_BucketOptions_Explicit) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{6, 0, 0} } func (m *DistributionValue_BucketOptions_Explicit) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DistributionValue_BucketOptions_Explicit.Unmarshal(m, b) } func (m *DistributionValue_BucketOptions_Explicit) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DistributionValue_BucketOptions_Explicit.Marshal(b, m, deterministic) } func (m *DistributionValue_BucketOptions_Explicit) XXX_Merge(src proto.Message) { xxx_messageInfo_DistributionValue_BucketOptions_Explicit.Merge(m, src) } func (m *DistributionValue_BucketOptions_Explicit) XXX_Size() int { return xxx_messageInfo_DistributionValue_BucketOptions_Explicit.Size(m) } func (m *DistributionValue_BucketOptions_Explicit) XXX_DiscardUnknown() { xxx_messageInfo_DistributionValue_BucketOptions_Explicit.DiscardUnknown(m) } var xxx_messageInfo_DistributionValue_BucketOptions_Explicit proto.InternalMessageInfo func (m *DistributionValue_BucketOptions_Explicit) GetBounds() []float64 { if m != nil { return m.Bounds } return nil } type DistributionValue_Bucket struct { // The number of values in each bucket of the histogram, as described in // bucket_bounds. Count int64 `protobuf:"varint,1,opt,name=count,proto3" json:"count,omitempty"` // If the distribution does not have a histogram, then omit this field. 
Exemplar *DistributionValue_Exemplar `protobuf:"bytes,2,opt,name=exemplar,proto3" json:"exemplar,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DistributionValue_Bucket) Reset() { *m = DistributionValue_Bucket{} } func (m *DistributionValue_Bucket) String() string { return proto.CompactTextString(m) } func (*DistributionValue_Bucket) ProtoMessage() {} func (*DistributionValue_Bucket) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{6, 1} } func (m *DistributionValue_Bucket) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DistributionValue_Bucket.Unmarshal(m, b) } func (m *DistributionValue_Bucket) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DistributionValue_Bucket.Marshal(b, m, deterministic) } func (m *DistributionValue_Bucket) XXX_Merge(src proto.Message) { xxx_messageInfo_DistributionValue_Bucket.Merge(m, src) } func (m *DistributionValue_Bucket) XXX_Size() int { return xxx_messageInfo_DistributionValue_Bucket.Size(m) } func (m *DistributionValue_Bucket) XXX_DiscardUnknown() { xxx_messageInfo_DistributionValue_Bucket.DiscardUnknown(m) } var xxx_messageInfo_DistributionValue_Bucket proto.InternalMessageInfo func (m *DistributionValue_Bucket) GetCount() int64 { if m != nil { return m.Count } return 0 } func (m *DistributionValue_Bucket) GetExemplar() *DistributionValue_Exemplar { if m != nil { return m.Exemplar } return nil } // Exemplars are example points that may be used to annotate aggregated // Distribution values. They are metadata that gives information about a // particular value added to a Distribution bucket. type DistributionValue_Exemplar struct { // Value of the exemplar point. It determines which bucket the exemplar // belongs to. Value float64 `protobuf:"fixed64,1,opt,name=value,proto3" json:"value,omitempty"` // The observation (sampling) time of the above value. 
Timestamp *timestamp.Timestamp `protobuf:"bytes,2,opt,name=timestamp,proto3" json:"timestamp,omitempty"` // Contextual information about the example value. Attachments map[string]string `protobuf:"bytes,3,rep,name=attachments,proto3" json:"attachments,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *DistributionValue_Exemplar) Reset() { *m = DistributionValue_Exemplar{} } func (m *DistributionValue_Exemplar) String() string { return proto.CompactTextString(m) } func (*DistributionValue_Exemplar) ProtoMessage() {} func (*DistributionValue_Exemplar) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{6, 2} } func (m *DistributionValue_Exemplar) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_DistributionValue_Exemplar.Unmarshal(m, b) } func (m *DistributionValue_Exemplar) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_DistributionValue_Exemplar.Marshal(b, m, deterministic) } func (m *DistributionValue_Exemplar) XXX_Merge(src proto.Message) { xxx_messageInfo_DistributionValue_Exemplar.Merge(m, src) } func (m *DistributionValue_Exemplar) XXX_Size() int { return xxx_messageInfo_DistributionValue_Exemplar.Size(m) } func (m *DistributionValue_Exemplar) XXX_DiscardUnknown() { xxx_messageInfo_DistributionValue_Exemplar.DiscardUnknown(m) } var xxx_messageInfo_DistributionValue_Exemplar proto.InternalMessageInfo func (m *DistributionValue_Exemplar) GetValue() float64 { if m != nil { return m.Value } return 0 } func (m *DistributionValue_Exemplar) GetTimestamp() *timestamp.Timestamp { if m != nil { return m.Timestamp } return nil } func (m *DistributionValue_Exemplar) GetAttachments() map[string]string { if m != nil { return m.Attachments } return nil } // The start_timestamp only applies to the count and sum in the SummaryValue. 
type SummaryValue struct { // The total number of recorded values since start_time. Optional since // some systems don't expose this. Count *wrappers.Int64Value `protobuf:"bytes,1,opt,name=count,proto3" json:"count,omitempty"` // The total sum of recorded values since start_time. Optional since some // systems don't expose this. If count is zero then this field must be zero. // This field must be unset if the sum is not available. Sum *wrappers.DoubleValue `protobuf:"bytes,2,opt,name=sum,proto3" json:"sum,omitempty"` // Values calculated over an arbitrary time window. Snapshot *SummaryValue_Snapshot `protobuf:"bytes,3,opt,name=snapshot,proto3" json:"snapshot,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SummaryValue) Reset() { *m = SummaryValue{} } func (m *SummaryValue) String() string { return proto.CompactTextString(m) } func (*SummaryValue) ProtoMessage() {} func (*SummaryValue) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{7} } func (m *SummaryValue) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SummaryValue.Unmarshal(m, b) } func (m *SummaryValue) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SummaryValue.Marshal(b, m, deterministic) } func (m *SummaryValue) XXX_Merge(src proto.Message) { xxx_messageInfo_SummaryValue.Merge(m, src) } func (m *SummaryValue) XXX_Size() int { return xxx_messageInfo_SummaryValue.Size(m) } func (m *SummaryValue) XXX_DiscardUnknown() { xxx_messageInfo_SummaryValue.DiscardUnknown(m) } var xxx_messageInfo_SummaryValue proto.InternalMessageInfo func (m *SummaryValue) GetCount() *wrappers.Int64Value { if m != nil { return m.Count } return nil } func (m *SummaryValue) GetSum() *wrappers.DoubleValue { if m != nil { return m.Sum } return nil } func (m *SummaryValue) GetSnapshot() *SummaryValue_Snapshot { if m != nil { return m.Snapshot } return nil } // The values in this message 
can be reset at arbitrary unknown times, with // the requirement that all of them are reset at the same time. type SummaryValue_Snapshot struct { // The number of values in the snapshot. Optional since some systems don't // expose this. Count *wrappers.Int64Value `protobuf:"bytes,1,opt,name=count,proto3" json:"count,omitempty"` // The sum of values in the snapshot. Optional since some systems don't // expose this. If count is zero then this field must be zero or not set // (if not supported). Sum *wrappers.DoubleValue `protobuf:"bytes,2,opt,name=sum,proto3" json:"sum,omitempty"` // A list of values at different percentiles of the distribution calculated // from the current snapshot. The percentiles must be strictly increasing. PercentileValues []*SummaryValue_Snapshot_ValueAtPercentile `protobuf:"bytes,3,rep,name=percentile_values,json=percentileValues,proto3" json:"percentile_values,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SummaryValue_Snapshot) Reset() { *m = SummaryValue_Snapshot{} } func (m *SummaryValue_Snapshot) String() string { return proto.CompactTextString(m) } func (*SummaryValue_Snapshot) ProtoMessage() {} func (*SummaryValue_Snapshot) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{7, 0} } func (m *SummaryValue_Snapshot) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SummaryValue_Snapshot.Unmarshal(m, b) } func (m *SummaryValue_Snapshot) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SummaryValue_Snapshot.Marshal(b, m, deterministic) } func (m *SummaryValue_Snapshot) XXX_Merge(src proto.Message) { xxx_messageInfo_SummaryValue_Snapshot.Merge(m, src) } func (m *SummaryValue_Snapshot) XXX_Size() int { return xxx_messageInfo_SummaryValue_Snapshot.Size(m) } func (m *SummaryValue_Snapshot) XXX_DiscardUnknown() { xxx_messageInfo_SummaryValue_Snapshot.DiscardUnknown(m) } var 
xxx_messageInfo_SummaryValue_Snapshot proto.InternalMessageInfo func (m *SummaryValue_Snapshot) GetCount() *wrappers.Int64Value { if m != nil { return m.Count } return nil } func (m *SummaryValue_Snapshot) GetSum() *wrappers.DoubleValue { if m != nil { return m.Sum } return nil } func (m *SummaryValue_Snapshot) GetPercentileValues() []*SummaryValue_Snapshot_ValueAtPercentile { if m != nil { return m.PercentileValues } return nil } // Represents the value at a given percentile of a distribution. type SummaryValue_Snapshot_ValueAtPercentile struct { // The percentile of a distribution. Must be in the interval // (0.0, 100.0]. Percentile float64 `protobuf:"fixed64,1,opt,name=percentile,proto3" json:"percentile,omitempty"` // The value at the given percentile of a distribution. Value float64 `protobuf:"fixed64,2,opt,name=value,proto3" json:"value,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *SummaryValue_Snapshot_ValueAtPercentile) Reset() { *m = SummaryValue_Snapshot_ValueAtPercentile{} } func (m *SummaryValue_Snapshot_ValueAtPercentile) String() string { return proto.CompactTextString(m) } func (*SummaryValue_Snapshot_ValueAtPercentile) ProtoMessage() {} func (*SummaryValue_Snapshot_ValueAtPercentile) Descriptor() ([]byte, []int) { return fileDescriptor_0ee3deb72053811a, []int{7, 0, 0} } func (m *SummaryValue_Snapshot_ValueAtPercentile) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_SummaryValue_Snapshot_ValueAtPercentile.Unmarshal(m, b) } func (m *SummaryValue_Snapshot_ValueAtPercentile) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_SummaryValue_Snapshot_ValueAtPercentile.Marshal(b, m, deterministic) } func (m *SummaryValue_Snapshot_ValueAtPercentile) XXX_Merge(src proto.Message) { xxx_messageInfo_SummaryValue_Snapshot_ValueAtPercentile.Merge(m, src) } func (m *SummaryValue_Snapshot_ValueAtPercentile) XXX_Size() int { return 
xxx_messageInfo_SummaryValue_Snapshot_ValueAtPercentile.Size(m) } func (m *SummaryValue_Snapshot_ValueAtPercentile) XXX_DiscardUnknown() { xxx_messageInfo_SummaryValue_Snapshot_ValueAtPercentile.DiscardUnknown(m) } var xxx_messageInfo_SummaryValue_Snapshot_ValueAtPercentile proto.InternalMessageInfo func (m *SummaryValue_Snapshot_ValueAtPercentile) GetPercentile() float64 { if m != nil { return m.Percentile } return 0 } func (m *SummaryValue_Snapshot_ValueAtPercentile) GetValue() float64 { if m != nil { return m.Value } return 0 } func init() { proto.RegisterEnum("opencensus.proto.metrics.v1.MetricDescriptor_Type", MetricDescriptor_Type_name, MetricDescriptor_Type_value) proto.RegisterType((*Metric)(nil), "opencensus.proto.metrics.v1.Metric") proto.RegisterType((*MetricDescriptor)(nil), "opencensus.proto.metrics.v1.MetricDescriptor") proto.RegisterType((*LabelKey)(nil), "opencensus.proto.metrics.v1.LabelKey") proto.RegisterType((*TimeSeries)(nil), "opencensus.proto.metrics.v1.TimeSeries") proto.RegisterType((*LabelValue)(nil), "opencensus.proto.metrics.v1.LabelValue") proto.RegisterType((*Point)(nil), "opencensus.proto.metrics.v1.Point") proto.RegisterType((*DistributionValue)(nil), "opencensus.proto.metrics.v1.DistributionValue") proto.RegisterType((*DistributionValue_BucketOptions)(nil), "opencensus.proto.metrics.v1.DistributionValue.BucketOptions") proto.RegisterType((*DistributionValue_BucketOptions_Explicit)(nil), "opencensus.proto.metrics.v1.DistributionValue.BucketOptions.Explicit") proto.RegisterType((*DistributionValue_Bucket)(nil), "opencensus.proto.metrics.v1.DistributionValue.Bucket") proto.RegisterType((*DistributionValue_Exemplar)(nil), "opencensus.proto.metrics.v1.DistributionValue.Exemplar") proto.RegisterMapType((map[string]string)(nil), "opencensus.proto.metrics.v1.DistributionValue.Exemplar.AttachmentsEntry") proto.RegisterType((*SummaryValue)(nil), "opencensus.proto.metrics.v1.SummaryValue") proto.RegisterType((*SummaryValue_Snapshot)(nil), 
"opencensus.proto.metrics.v1.SummaryValue.Snapshot") proto.RegisterType((*SummaryValue_Snapshot_ValueAtPercentile)(nil), "opencensus.proto.metrics.v1.SummaryValue.Snapshot.ValueAtPercentile") } func init() { proto.RegisterFile("opencensus/proto/metrics/v1/metrics.proto", fileDescriptor_0ee3deb72053811a) } var fileDescriptor_0ee3deb72053811a = []byte{ // 1098 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x56, 0xdd, 0x6e, 0x1b, 0xc5, 0x17, 0xcf, 0xda, 0x8e, 0xe3, 0x9c, 0x75, 0xdb, 0xf5, 0xa8, 0xed, 0xdf, 0xda, 0xfc, 0x15, 0xc2, 0x22, 0x20, 0x15, 0xca, 0x5a, 0x31, 0xa5, 0xad, 0x2a, 0x54, 0x14, 0xc7, 0x6e, 0x62, 0xc8, 0x87, 0x35, 0xb6, 0x2b, 0xd1, 0x1b, 0x6b, 0xbd, 0x9e, 0x24, 0x4b, 0xbc, 0x1f, 0xdd, 0x99, 0x35, 0xf8, 0x05, 0x78, 0x04, 0xae, 0xb9, 0x45, 0x3c, 0x07, 0x57, 0x3c, 0x01, 0x4f, 0x81, 0x78, 0x03, 0xb4, 0x33, 0xb3, 0x1f, 0x89, 0xc1, 0xd4, 0x45, 0xe2, 0xee, 0x9c, 0x33, 0xe7, 0xfc, 0xfc, 0x3b, 0x9f, 0x5e, 0x78, 0xe4, 0x07, 0xc4, 0xb3, 0x89, 0x47, 0x23, 0xda, 0x08, 0x42, 0x9f, 0xf9, 0x0d, 0x97, 0xb0, 0xd0, 0xb1, 0x69, 0x63, 0xb6, 0x9f, 0x88, 0x26, 0x7f, 0x40, 0x5b, 0x99, 0xab, 0xb0, 0x98, 0xc9, 0xfb, 0x6c, 0x5f, 0x7f, 0xef, 0xd2, 0xf7, 0x2f, 0xa7, 0x44, 0x60, 0x8c, 0xa3, 0x8b, 0x06, 0x73, 0x5c, 0x42, 0x99, 0xe5, 0x06, 0xc2, 0x57, 0xdf, 0xbe, 0xed, 0xf0, 0x6d, 0x68, 0x05, 0x01, 0x09, 0x25, 0x96, 0xfe, 0xc9, 0x02, 0x91, 0x90, 0x50, 0x3f, 0x0a, 0x6d, 0x12, 0x33, 0x49, 0x64, 0xe1, 0x6c, 0xfc, 0xa1, 0x40, 0xf9, 0x94, 0xff, 0x38, 0x7a, 0x0d, 0x35, 0x41, 0x63, 0x34, 0x21, 0xd4, 0x0e, 0x9d, 0x80, 0xf9, 0x61, 0x5d, 0xd9, 0x51, 0x76, 0xd5, 0xe6, 0x9e, 0xb9, 0x84, 0xb1, 0x29, 0xe2, 0xdb, 0x69, 0x10, 0xd6, 0xdc, 0x5b, 0x16, 0x74, 0x04, 0xc0, 0xd3, 0x20, 0xa1, 0x43, 0x68, 0xbd, 0xb0, 0x53, 0xdc, 0x55, 0x9b, 0x1f, 0x2f, 0x05, 0x1d, 0x38, 0x2e, 0xe9, 0x73, 0x77, 0x9c, 0x0b, 0x45, 0x2d, 0xa8, 0x24, 0x19, 0xd4, 0x8b, 0x9c, 0xdb, 0x47, 0x8b, 0x30, 0x69, 0x8e, 0xb3, 0x7d, 0x13, 0x4b, 0x19, 0xa7, 0x71, 0xc6, 0x0f, 0x45, 
0xd0, 0x6e, 0x73, 0x46, 0x08, 0x4a, 0x9e, 0xe5, 0x12, 0x9e, 0xf0, 0x26, 0xe6, 0x32, 0xda, 0x01, 0x35, 0x29, 0x85, 0xe3, 0x7b, 0xf5, 0x02, 0x7f, 0xca, 0x9b, 0xe2, 0xa8, 0xc8, 0x73, 0x18, 0xa7, 0xb2, 0x89, 0xb9, 0x8c, 0x5e, 0x42, 0x89, 0xcd, 0x03, 0x52, 0x2f, 0xed, 0x28, 0xbb, 0x77, 0x9b, 0xcd, 0x95, 0x4a, 0x67, 0x0e, 0xe6, 0x01, 0xc1, 0x3c, 0x1e, 0xb5, 0x01, 0xa6, 0xd6, 0x98, 0x4c, 0x47, 0xd7, 0x64, 0x4e, 0xeb, 0xeb, 0xbc, 0x66, 0x1f, 0x2e, 0x45, 0x3b, 0x89, 0xdd, 0xbf, 0x22, 0x73, 0xbc, 0x39, 0x95, 0x12, 0x35, 0x7e, 0x52, 0xa0, 0x14, 0x83, 0xa2, 0x7b, 0xa0, 0x0e, 0xcf, 0xfa, 0xbd, 0xce, 0x61, 0xf7, 0x65, 0xb7, 0xd3, 0xd6, 0xd6, 0x62, 0xc3, 0xd1, 0xc1, 0xf0, 0xa8, 0x33, 0xea, 0x9e, 0x0d, 0x9e, 0x3c, 0xd6, 0x14, 0xa4, 0x41, 0x55, 0x18, 0xda, 0xe7, 0xc3, 0xd6, 0x49, 0x47, 0x2b, 0xa0, 0x87, 0x80, 0xa4, 0xa5, 0xdb, 0x1f, 0xe0, 0x6e, 0x6b, 0x38, 0xe8, 0x9e, 0x9f, 0x69, 0x45, 0x74, 0x1f, 0xb4, 0xc3, 0xe1, 0xe9, 0xf0, 0xe4, 0x60, 0xd0, 0x7d, 0x95, 0xc4, 0x97, 0xd0, 0x03, 0xa8, 0xe5, 0xac, 0x12, 0x64, 0x1d, 0x6d, 0xc1, 0xff, 0xf2, 0xe6, 0x3c, 0x52, 0x19, 0xa9, 0xb0, 0xd1, 0x1f, 0x9e, 0x9e, 0x1e, 0xe0, 0xaf, 0xb5, 0x0d, 0xe3, 0x05, 0x54, 0x92, 0x14, 0x90, 0x06, 0xc5, 0x6b, 0x32, 0x97, 0xed, 0x88, 0xc5, 0x7f, 0xee, 0x86, 0xf1, 0x9b, 0x02, 0x90, 0xcd, 0x0d, 0x3a, 0x84, 0x7b, 0x94, 0x59, 0x21, 0x1b, 0xa5, 0x1b, 0x24, 0xc7, 0x59, 0x37, 0xc5, 0x0a, 0x99, 0xc9, 0x0a, 0xf1, 0x69, 0xe3, 0x1e, 0xf8, 0x2e, 0x0f, 0x49, 0x75, 0xf4, 0x25, 0x54, 0x45, 0x17, 0x66, 0xd6, 0x34, 0x7a, 0xcb, 0xd9, 0xe5, 0x49, 0xbc, 0x8a, 0xfd, 0xb1, 0x3a, 0x4d, 0x65, 0x8a, 0x9e, 0x43, 0x39, 0xf0, 0x1d, 0x8f, 0xd1, 0x7a, 0x91, 0xa3, 0x18, 0x4b, 0x51, 0x7a, 0xb1, 0x2b, 0x96, 0x11, 0xc6, 0x17, 0x00, 0x19, 0x2c, 0xba, 0x0f, 0xeb, 0x9c, 0x8f, 0xac, 0x8f, 0x50, 0xd0, 0x16, 0x6c, 0x5e, 0x59, 0x54, 0x30, 0xe5, 0xf5, 0xa9, 0xe0, 0xca, 0x95, 0x45, 0x79, 0x88, 0xf1, 0x4b, 0x01, 0xd6, 0x39, 0x24, 0x7a, 0x06, 0x9b, 0xab, 0x54, 0x24, 0x73, 0x46, 0xef, 0x83, 0xea, 0x78, 0xec, 0xc9, 0xe3, 0xdc, 0x4f, 0x14, 0x8f, 0xd7, 0x30, 
0x70, 0xa3, 0x60, 0xf6, 0x01, 0x54, 0x27, 0x7e, 0x34, 0x9e, 0x12, 0xe9, 0x13, 0x6f, 0x86, 0x72, 0xbc, 0x86, 0x55, 0x61, 0x15, 0x4e, 0x23, 0x40, 0x13, 0x87, 0xb2, 0xd0, 0x19, 0x47, 0x71, 0xe3, 0xa4, 0x6b, 0x89, 0x53, 0x31, 0x97, 0x16, 0xa5, 0x9d, 0x0b, 0xe3, 0x58, 0xc7, 0x6b, 0xb8, 0x36, 0xb9, 0x6d, 0x44, 0x3d, 0xb8, 0x43, 0x23, 0xd7, 0xb5, 0xc2, 0xb9, 0xc4, 0x5e, 0xe7, 0xd8, 0x8f, 0x96, 0x62, 0xf7, 0x45, 0x44, 0x02, 0x5b, 0xa5, 0x39, 0xbd, 0xb5, 0x21, 0x2b, 0x6e, 0xfc, 0x5a, 0x86, 0xda, 0x02, 0x8b, 0xb8, 0x21, 0xb6, 0x1f, 0x79, 0x8c, 0xd7, 0xb3, 0x88, 0x85, 0x12, 0x0f, 0x31, 0x8d, 0x5c, 0x5e, 0x27, 0x05, 0xc7, 0x22, 0x7a, 0x0a, 0x75, 0x1a, 0xb9, 0x23, 0xff, 0x62, 0x44, 0xdf, 0x44, 0x56, 0x48, 0x26, 0xa3, 0x09, 0x99, 0x39, 0x16, 0x9f, 0x68, 0x5e, 0x2a, 0xfc, 0x80, 0x46, 0xee, 0xf9, 0x45, 0x5f, 0xbc, 0xb6, 0x93, 0x47, 0x64, 0xc3, 0xdd, 0x71, 0x64, 0x5f, 0x13, 0x36, 0xf2, 0xf9, 0xb0, 0x53, 0x59, 0xae, 0xcf, 0x57, 0x2b, 0x97, 0xd9, 0xe2, 0x20, 0xe7, 0x02, 0x03, 0xdf, 0x19, 0xe7, 0x55, 0x74, 0x0e, 0x1b, 0xc2, 0x90, 0xdc, 0x9b, 0xcf, 0xde, 0x09, 0x1d, 0x27, 0x28, 0xfa, 0x8f, 0x0a, 0xdc, 0xb9, 0xf1, 0x8b, 0xc8, 0x86, 0x0a, 0xf9, 0x2e, 0x98, 0x3a, 0xb6, 0xc3, 0xe4, 0xec, 0x75, 0xfe, 0x4d, 0x06, 0x66, 0x47, 0x82, 0x1d, 0xaf, 0xe1, 0x14, 0x58, 0x37, 0xa0, 0x92, 0xd8, 0xd1, 0x43, 0x28, 0x8f, 0xfd, 0xc8, 0x9b, 0xd0, 0xba, 0xb2, 0x53, 0xdc, 0x55, 0xb0, 0xd4, 0x5a, 0x65, 0x71, 0xa6, 0x75, 0x0a, 0x65, 0x81, 0xf8, 0x37, 0x3d, 0xec, 0xc7, 0x84, 0x89, 0x1b, 0x4c, 0xad, 0x90, 0x37, 0x52, 0x6d, 0x3e, 0x5d, 0x91, 0x70, 0x47, 0x86, 0xe3, 0x14, 0x48, 0xff, 0xbe, 0x10, 0x33, 0x14, 0xca, 0xcd, 0x65, 0x56, 0x92, 0x65, 0xbe, 0xb1, 0xa5, 0x85, 0x55, 0xb6, 0xf4, 0x1b, 0x50, 0x2d, 0xc6, 0x2c, 0xfb, 0xca, 0x25, 0xd9, 0xad, 0x39, 0x7e, 0x47, 0xd2, 0xe6, 0x41, 0x06, 0xd5, 0xf1, 0x58, 0x38, 0xc7, 0x79, 0x70, 0xfd, 0x05, 0x68, 0xb7, 0x1d, 0xfe, 0xe2, 0x74, 0xa7, 0x19, 0x16, 0x72, 0xe7, 0xea, 0x79, 0xe1, 0x99, 0x62, 0xfc, 0x5e, 0x84, 0x6a, 0x7e, 0xef, 0xd0, 0x7e, 0xbe, 0x09, 0x6a, 0x73, 0x6b, 0x21, 
0xe5, 0x6e, 0x7a, 0x6b, 0x92, 0x0e, 0x99, 0xd9, 0x96, 0xa9, 0xcd, 0xff, 0x2f, 0x04, 0xb4, 0xb3, 0xc3, 0x23, 0x76, 0xf0, 0x0c, 0x2a, 0xd4, 0xb3, 0x02, 0x7a, 0xe5, 0x33, 0xf9, 0x0d, 0xd1, 0x7c, 0xeb, 0xbb, 0x60, 0xf6, 0x65, 0x24, 0x4e, 0x31, 0xf4, 0x9f, 0x0b, 0x50, 0x49, 0xcc, 0xff, 0x05, 0xff, 0x37, 0x50, 0x0b, 0x48, 0x68, 0x13, 0x8f, 0x39, 0xc9, 0x99, 0x4d, 0xba, 0xdc, 0x5e, 0x3d, 0x11, 0x93, 0xab, 0x07, 0xac, 0x97, 0x42, 0x62, 0x2d, 0x83, 0x17, 0xff, 0x5c, 0x7a, 0x17, 0x6a, 0x0b, 0x6e, 0x68, 0x1b, 0x20, 0x73, 0x94, 0xc3, 0x9b, 0xb3, 0xdc, 0xec, 0x7a, 0x32, 0xd7, 0xad, 0x19, 0x6c, 0x3b, 0xfe, 0x32, 0x9a, 0xad, 0xaa, 0xf8, 0x2a, 0xa2, 0xbd, 0xf8, 0xa1, 0xa7, 0xbc, 0x6e, 0x5f, 0x3a, 0xec, 0x2a, 0x1a, 0x9b, 0xb6, 0xef, 0x36, 0x44, 0xcc, 0x9e, 0xe3, 0x51, 0x16, 0x46, 0xf1, 0xcc, 0xf1, 0xeb, 0xd8, 0xc8, 0xe0, 0xf6, 0xc4, 0x27, 0xef, 0x25, 0xf1, 0xf6, 0x2e, 0xf3, 0x9f, 0xe0, 0xe3, 0x32, 0x7f, 0xf8, 0xf4, 0xcf, 0x00, 0x00, 0x00, 0xff, 0xff, 0x8e, 0xfc, 0xd7, 0x46, 0xa8, 0x0b, 0x00, 0x00, }
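The generated `Point` type above encodes its proto `oneof` as an unexported marker interface (`isPoint_Value`) plus one wrapper struct per case, with getters that type-assert and fall back to the zero value. A minimal self-contained sketch of that same pattern, using a hypothetical `Sample` type that is not part of the generated file:

```go
package main

import "fmt"

// isSampleValue mirrors the generated isPoint_Value interface:
// each wrapper type marks exactly one oneof case.
type isSampleValue interface{ isSampleValue() }

type SampleInt struct{ Int int64 }
type SampleDouble struct{ Double float64 }

func (SampleInt) isSampleValue()    {}
func (SampleDouble) isSampleValue() {}

type Sample struct{ Value isSampleValue }

// GetInt mirrors the generated GetInt64Value accessor: a type
// assertion that returns the zero value when a different case is set.
func (s *Sample) GetInt() int64 {
	if x, ok := s.Value.(SampleInt); ok {
		return x.Int
	}
	return 0
}

func main() {
	s := &Sample{Value: SampleInt{Int: 42}}
	fmt.Println(s.GetInt()) // 42
	s.Value = SampleDouble{Double: 3.5}
	fmt.Println(s.GetInt()) // 0: a different case is set, so the accessor yields the zero value
}
```

Only one case can be set at a time, which is why the generated getters never need to check more than one wrapper type.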
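The `SumOfSquaredDeviation` field comment cites Knuth Vol. 2 and Welford's method for accumulating the sum of squared deviations in one pass. A small illustrative sketch of that accumulation (the `welford` type here is hypothetical, not part of this package):

```go
package main

import "fmt"

// welford accumulates count, mean, and M2 (the sum of squared
// deviations from the running mean) in a single pass over the data.
type welford struct {
	count int64
	mean  float64
	m2    float64 // Sum[i=1..n]((x_i - mean)^2)
}

func (w *welford) add(x float64) {
	w.count++
	delta := x - w.mean
	w.mean += delta / float64(w.count)
	w.m2 += delta * (x - w.mean) // uses old and new mean; numerically stable
}

func main() {
	var w welford
	for _, x := range []float64{2, 4, 4, 4, 5, 5, 7, 9} {
		w.add(x)
	}
	// For this sample: count 8, mean 5, sum of squared deviations 32
	// (up to floating-point rounding).
	fmt.Println(w.count, w.mean, w.m2)
}
```

The resulting `count`, `sum` (recoverable as `count*mean`), and `m2` map directly onto the `Count`, `Sum`, and `SumOfSquaredDeviation` fields of `DistributionValue`.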
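`DistributionValue_BucketOptions_Explicit` defines `len(bounds)+1` buckets from its strictly increasing upper bounds: bucket 0 is `[0, bounds[0])`, interior buckets are `[bounds[i-1], bounds[i])`, and the last bucket is unbounded above. A self-contained sketch of mapping a value to its bucket index under those rules (`bucketIndex` is an illustrative helper, not generated code):

```go
package main

import "fmt"

// bucketIndex returns the histogram bucket for v given explicit,
// strictly increasing upper bounds: bucket i holds values below
// bounds[i] (and at or above bounds[i-1] for i > 0); values at or
// above the last bound land in the final, unbounded bucket.
func bucketIndex(bounds []float64, v float64) int {
	for i, b := range bounds {
		if v < b {
			return i
		}
	}
	return len(bounds) // overflow bucket [bounds[len-1], +inf)
}

func main() {
	bounds := []float64{1, 5, 10} // 4 buckets
	fmt.Println(bucketIndex(bounds, 0.5)) // 0: [0, 1)
	fmt.Println(bucketIndex(bounds, 5))   // 2: [5, 10) -- upper bounds are exclusive
	fmt.Println(bucketIndex(bounds, 42))  // 3: [10, +inf)
}
```

Incrementing `Buckets[bucketIndex(bounds, v)].Count` per observation keeps the histogram consistent with the `Count` invariant described on `DistributionValue`.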
\section*{Acknowledgement}}{}
\newenvironment{romenumerate}[1][-10pt]
  {\addtolength{\leftmargini}{#1}\begin{enumerate}%
   \renewcommand{\labelenumi}{\textup{(\roman{enumi})}}%
   \renewcommand{\theenumi}{\textup{(\roman{enumi})}}%
  }{\end{enumerate}}
\newenvironment{PXenumerate}[1]
  {\begin{enumerate}%
   \renewcommand{\labelenumi}{\textup{(#1\arabic{enumi})}}%
   \renewcommand{\theenumi}{\labelenumi}%
  }{\end{enumerate}}
\newenvironment{PQenumerate}[1]
  {\begin{enumerate}%
   \renewcommand{\labelenumi}{\textup{(#1)}}%
   \renewcommand{\theenumi}{\labelenumi}%
  }{\end{enumerate}}
\newcounter{oldenumi}
\newenvironment{romenumerateq}
  {\setcounter{oldenumi}{\value{enumi}}
   \begin{romenumerate}
   \setcounter{enumi}{\value{oldenumi}}}
  {\end{romenumerate}}
\newcounter{thmenumerate}
\newenvironment{thmenumerate}
  {\setcounter{thmenumerate}{0}%
   \renewcommand{\thethmenumerate}{\textup{(\roman{thmenumerate})}}%
   \def\item{\par
     \refstepcounter{thmenumerate}\textup{(\roman{thmenumerate})\enspace}}
  }
  {}
\newcounter{xenumerate}
\newenvironment{xenumerate}
  {\begin{list}
    {\upshape(\roman{xenumerate})}
    {\setlength{\leftmargin}{0pt}
     \setlength{\rightmargin}{0pt}
     \setlength{\labelwidth}{0pt}
     \setlength{\itemindent}{\labelsep}
     \setlength{\topsep}{0pt}
     \usecounter{xenumerate}}
  }
  {\end{list}}
\newcommand\xfootnote[1]{\unskip\footnote{#1}$ $}
\newcommand\pfitem[1]{\par(#1):}
\newcommand\pfitemx[1]{\par#1:}
\newcommand\pfitemref[1]{\pfitemx{\ref{#1}}}
\newcommand\pfcase[2]{\smallskip\noindent\emph{Case #1: #2} \noindent}
\newcommand\step[2]{\smallskip\noindent\emph{Step #1: #2} \noindent}
\newcommand\stepx{\smallskip\noindent\refstepcounter{steps}%
  \emph{Step \arabic{steps}:}\noindent}
\newcommand{\refT}[1]{Theorem~\ref{#1}}
\newcommand{\refTs}[1]{Theorems~\ref{#1}}
\newcommand{\refC}[1]{Corollary~\ref{#1}}
\newcommand{\refCs}[1]{Corollaries~\ref{#1}}
\newcommand{\refL}[1]{Lemma~\ref{#1}}
\newcommand{\refLs}[1]{Lemmas~\ref{#1}}
\newcommand{\refR}[1]{Remark~\ref{#1}}
\newcommand{\refRs}[1]{Remarks~\ref{#1}}
\newcommand{\refS}[1]{Section~\ref{#1}} \newcommand{\refSs}[1]{Sections~\ref{#1}} \newcommand{\refSS}[1]{Section~\ref{#1}} \newcommand{\refProp}[1]{Proposition~\ref{#1}} \newcommand{\refP}[1]{Problem~\ref{#1}} \newcommand{\refD}[1]{Definition~\ref{#1}} \newcommand{\refE}[1]{Example~\ref{#1}} \newcommand{\refEs}[1]{Examples~\ref{#1}} \newcommand{\refF}[1]{Figure~\ref{#1}} \newcommand{\refApp}[1]{Appendix~\ref{#1}} \newcommand{\refTab}[1]{Table~\ref{#1}} \newcommand{\refand}[2]{\ref{#1} and~\ref{#2}} \newcommand\marginal[1]{\marginpar[\raggedleft\tiny #1]{\raggedright\tiny#1}} \newcommand\SJ{\marginal{SJ} } \newcommand\kolla{\marginal{KOLLA! SJ} } \newcommand\ms[1]{\texttt{[ms #1]}} \newcommand\XXX{XXX \marginal{XXX}} \newcommand\REM[1]{{\raggedright\texttt{[#1]}\par\marginal{XXX}}} \newcommand\XREM[1]{\relax} \newcommand\rem[1]{{\texttt{[#1]}\marginal{XXX}}} \newenvironment{OLD}{\Small \REM{Old stuff to be edited:}\par}{} \newcommand\linebreakx{\unskip\marginal{$\backslash$linebreak}\linebreak} \begingroup \count255=\time \divide\count255 by 60 \count1=\count255 \multiply\count255 by -60 \advance\count255 by \time \ifnum \count255 < 10 \xdef\klockan{\the\count1.0\the\count255} \else\xdef\klockan{\the\count1.\the\count255}\fi \endgroup \newcommand\nopf{\qed} \newcommand\noqed{\renewcommand{\qed}{}} \newcommand\qedtag{\eqno{\qed}} \DeclareMathOperator*{\sumx}{\sum\nolimits^{*}} \DeclareMathOperator*{\sumxx}{\sum\nolimits^{**}} \newcommand{\sum_{i=0}^\infty}{\sum_{i=0}^\infty} \newcommand{\sum_{j=0}^\infty}{\sum_{j=0}^\infty} \newcommand{\sum_{k=0}^\infty}{\sum_{k=0}^\infty} \newcommand{\sum_{m=0}^\infty}{\sum_{m=0}^\infty} \newcommand{\sum_{n=0}^\infty}{\sum_{n=0}^\infty} \newcommand{\sum_{i=1}^\infty}{\sum_{i=1}^\infty} \newcommand{\sum_{j=1}^\infty}{\sum_{j=1}^\infty} \newcommand{\sum_{k=1}^\infty}{\sum_{k=1}^\infty} \newcommand{\sum_{m=1}^\infty}{\sum_{m=1}^\infty} \newcommand{\sum_{n=1}^\infty}{\sum_{n=1}^\infty} \newcommand{\sum_{r=1}^\infty}{\sum_{r=1}^\infty} 
\newcommand{\sum_{i=1}^k}{\sum_{i=1}^k} \newcommand{\sum_{i=1}^n}{\sum_{i=1}^n} \newcommand{\sum_{j=1}^n}{\sum_{j=1}^n} \newcommand{\sum_{k=1}^n}{\sum_{k=1}^n} \newcommand{\prod_{i=1}^n}{\prod_{i=1}^n} \newcommand{\prod_{i=1}^k}{\prod_{i=1}^k} \newcommand\set[1]{\ensuremath{\{#1\}}} \newcommand\bigset[1]{\ensuremath{\bigl\{#1\bigr\}}} \newcommand\Bigset[1]{\ensuremath{\Bigl\{#1\Bigr\}}} \newcommand\biggset[1]{\ensuremath{\biggl\{#1\biggr\}}} \newcommand\lrset[1]{\ensuremath{\left\{#1\right\}}} \newcommand\xpar[1]{(#1)} \newcommand\bigpar[1]{\bigl(#1\bigr)} \newcommand\Bigpar[1]{\Bigl(#1\Bigr)} \newcommand\biggpar[1]{\biggl(#1\biggr)} \newcommand\lrpar[1]{\left(#1\right)} \newcommand\bigsqpar[1]{\bigl[#1\bigr]} \newcommand\Bigsqpar[1]{\Bigl[#1\Bigr]} \newcommand\biggsqpar[1]{\biggl[#1\biggr]} \newcommand\lrsqpar[1]{\left[#1\right]} \newcommand\xcpar[1]{\{#1\}} \newcommand\bigcpar[1]{\bigl\{#1\bigr\}} \newcommand\Bigcpar[1]{\Bigl\{#1\Bigr\}} \newcommand\biggcpar[1]{\biggl\{#1\biggr\}} \newcommand\lrcpar[1]{\left\{#1\right\}} \newcommand\abs[1]{\lvert#1\rvert} \newcommand\bigabs[1]{\bigl\lvert#1\bigr\rvert} \newcommand\Bigabs[1]{\Bigl\lvert#1\Bigr\rvert} \newcommand\biggabs[1]{\biggl\lvert#1\biggr\rvert} \newcommand\lrabs[1]{\left\lvert#1\right\rvert} \def\rompar(#1){\textup(#1\textup)} \newcommand\xfrac[2]{#1/#2} \newcommand\xpfrac[2]{(#1)/#2} \newcommand\xqfrac[2]{#1/(#2)} \newcommand\xpqfrac[2]{(#1)/(#2)} \newcommand\parfrac[2]{\lrpar{\frac{#1}{#2}}} \newcommand\bigparfrac[2]{\bigpar{\frac{#1}{#2}}} \newcommand\Bigparfrac[2]{\Bigpar{\frac{#1}{#2}}} \newcommand\biggparfrac[2]{\biggpar{\frac{#1}{#2}}} \newcommand\xparfrac[2]{\xpar{\xfrac{#1}{#2}}} \newcommand\innprod[1]{\langle#1\rangle} \newcommand\expbig[1]{\exp\bigl(#1\bigr)} \newcommand\expBig[1]{\exp\Bigl(#1\Bigr)} \newcommand\explr[1]{\exp\left(#1\right)} \newcommand\expQ[1]{e^{#1}} \def\xexp(#1){e^{#1}} \newcommand\ceil[1]{\lceil#1\rceil} \newcommand\lrceil[1]{\left\lceil#1\right\rceil} 
\newcommand\floor[1]{\lfloor#1\rfloor} \newcommand\lrfloor[1]{\left\lfloor#1\right\rfloor} \newcommand\frax[1]{\{#1\}} \newcommand\setn{\set{1,\dots,n}} \newcommand\setnn{[n]} \newcommand\ntoo{\ensuremath{{n\to\infty}}} \newcommand\Ntoo{\ensuremath{{N\to\infty}}} \newcommand\asntoo{\text{as }\ntoo} \newcommand\ktoo{\ensuremath{{k\to\infty}}} \newcommand\mtoo{\ensuremath{{m\to\infty}}} \newcommand\stoo{\ensuremath{{s\to\infty}}} \newcommand\ttoo{\ensuremath{{t\to\infty}}} \newcommand\xtoo{\ensuremath{{x\to\infty}}} \newcommand\bmin{\land} \newcommand\bmax{\lor} \newcommand\norm[1]{\lVert#1\rVert} \newcommand\bignorm[1]{\bigl\lVert#1\bigr\rVert} \newcommand\Bignorm[1]{\Bigl\lVert#1\Bigr\rVert} \newcommand\lrnorm[1]{\left\lVert#1\right\rVert} \newcommand\downto{\searrow} \newcommand\upto{\nearrow} \newcommand\thalf{\tfrac12} \newcommand\punkt{\xperiod} \newcommand\iid{i.i.d\punkt} \newcommand\ie{i.e\punkt} \newcommand\eg{e.g\punkt} \newcommand\viz{viz\punkt} \newcommand\cf{cf\punkt} \newcommand{a.s\punkt}{a.s\punkt} \newcommand{a.e\punkt}{a.e\punkt} \renewcommand{\ae}{\vu} \newcommand\whp{w.h.p\punkt} \newcommand\ii{\mathrm{i}} \newcommand{\longrightarrow}{\longrightarrow} \newcommand\dto{\overset{\mathrm{d}}{\longrightarrow}} \newcommand\pto{\overset{\mathrm{p}}{\longrightarrow}} \newcommand\asto{\overset{\mathrm{a.s.}}{\longrightarrow}} \newcommand\lito{\overset{L^1}{\longrightarrow}} \newcommand\eqd{\overset{\mathrm{d}}{=}} \newcommand\neqd{\overset{\mathrm{d}}{\neq}} \newcommand\op{o_{\mathrm p}} \newcommand\Op{O_{\mathrm p}} \newcommand\bbR{\mathbb R} \newcommand\bbC{\mathbb C} \newcommand\bbN{\mathbb N} \newcommand\bbT{\mathbb T} \newcommand\bbQ{\mathbb Q} \newcommand\bbZ{\mathbb Z} \newcommand\bbZleo{\mathbb Z_{\le0}} \newcommand\bbZgeo{\mathbb Z_{\ge0}} \newcounter{CC} \newcommand{\CC}{\stepcounter{CC}\CCx} \newcommand{\CCx}{C_{\arabic{CC}}} \newcommand{\CCdef}[1]{\xdef#1{\CCx}} \newcommand{\CCname}[1]{\CC\CCdef{#1}} \newcommand{\CCreset}{\setcounter{CC}0} 
\newcounter{cc} \newcommand{\cc}{\stepcounter{cc}\ccx} \newcommand{\ccx}{c_{\arabic{cc}}} \newcommand{\ccdef}[1]{\xdef#1{\ccx}} \newcommand{\ccname}[1]{\cc\ccdef{#1}} \newcommand{\ccreset}{\setcounter{cc}0} \renewcommand\Re{\operatorname{Re}} \renewcommand\Im{\operatorname{Im}} \newcommand\E{\operatorname{\mathbb E{}}} \renewcommand\P{\operatorname{\mathbb P{}}} \newcommand\PP{\operatorname{\mathbb P{}}} \newcommand\Var{\operatorname{Var}} \newcommand\Cov{\operatorname{Cov}} \newcommand\Corr{\operatorname{Corr}} \newcommand\Exp{\operatorname{Exp}} \newcommand\Po{\operatorname{Po}} \newcommand\Bi{\operatorname{Bi}} \newcommand\Bin{\operatorname{Bin}} \newcommand\Be{\operatorname{Be}} \newcommand\Ge{\operatorname{Ge}} \newcommand\NBi{\operatorname{NegBin}} \newcommand\Res{\operatorname{Res}} \newcommand\fall[1]{^{\underline{#1}}} \newcommand\rise[1]{^{\overline{#1}}} \newcommand\supp{\operatorname{supp}} \newcommand\sgn{\operatorname{sgn}} \newcommand\diam{\operatorname{diam}} \newcommand\Tr{\operatorname{Tr}} \newcommand\degg{\ensuremath{^\circ}} \newcommand\ga{\alpha} \newcommand\gb{\beta} \newcommand\gd{\delta} \newcommand\gD{\Delta} \newcommand\gf{\varphi} \newcommand\gam{\gamma} \newcommand\gG{\Gamma} \newcommand\gk{\varkappa} \newcommand\kk{\kappa} \newcommand\gl{\lambda} \newcommand\gL{\Lambda} \newcommand\go{\omega} \newcommand\gO{\Omega} \newcommand\gs{\sigma} \newcommand\gS{\Sigma} \newcommand\gss{\sigma^2} \newcommand\gth{\theta} \newcommand\eps{\varepsilon} \newcommand\ep{\varepsilon} \renewcommand\phi{\xxx} \newcommand\cA{\mathcal A} \newcommand\cB{\mathcal B} \newcommand\cC{\mathcal C} \newcommand\cD{\mathcal D} \newcommand\cE{\mathcal E} \newcommand\cF{\mathcal F} \newcommand\cG{\mathcal G} \newcommand\cH{\mathcal H} \newcommand\cI{\mathcal I} \newcommand\cJ{\mathcal J} \newcommand\cK{\mathcal K} \newcommand\cL{{\mathcal L}} \newcommand\cM{\mathcal M} \newcommand\cN{\mathcal N} \newcommand\cO{\mathcal O} \newcommand\cP{\mathcal P} 
\newcommand\cQ{\mathcal Q} \newcommand\cR{{\mathcal R}} \newcommand\cS{{\mathcal S}} \newcommand\cT{{\mathcal T}} \newcommand\cU{{\mathcal U}} \newcommand\cV{\mathcal V} \newcommand\cW{\mathcal W} \newcommand\cX{{\mathcal X}} \newcommand\cY{{\mathcal Y}} \newcommand\cZ{{\mathcal Z}} \newcommand\tA{\tilde A} \newcommand\tB{\tilde B} \newcommand\tC{\tilde C} \newcommand\tD{\tilde D} \newcommand\tE{\tilde E} \newcommand\tF{\tilde F} \newcommand\tG{\tilde G} \newcommand\tH{\tilde H} \newcommand\tI{\tilde I} \newcommand\tJ{\tilde J} \newcommand\tK{\tilde K} \newcommand\tL{{\tilde L}} \newcommand\tM{\tilde M} \newcommand\tN{\tilde N} \newcommand\tO{\tilde O} \newcommand\tP{\tilde P} \newcommand\tQ{\tilde Q} \newcommand\tR{{\tilde R}} \newcommand\tS{{\tilde S}} \newcommand\tT{{\tilde T}} \newcommand\tU{{\tilde U}} \newcommand\tV{\tilde V} \newcommand\tW{\widetilde W} \newcommand\tX{{\tilde X}} \newcommand\tY{{\tilde Y}} \newcommand\tZ{{\tilde Z}} \newcommand\bJ{\bar J} \newcommand\bW{\overline W} \newcommand\indic[1]{\boldsymbol1\xcpar{#1}} \newcommand\bigindic[1]{\boldsymbol1\bigcpar{#1}} \newcommand\Bigindic[1]{\boldsymbol1\Bigcpar{#1}} \newcommand\etta{\boldsymbol1} \newcommand\smatrixx[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)} \newcommand\limn{\lim_{n\to\infty}} \newcommand\limN{\lim_{N\to\infty}} \newcommand\qw{^{-1}} \newcommand\qww{^{-2}} \newcommand\qq{^{1/2}} \newcommand\qqw{^{-1/2}} \newcommand\qqq{^{1/3}} \newcommand\qqqb{^{2/3}} \newcommand\qqqw{^{-1/3}} \newcommand\qqqbw{^{-2/3}} \newcommand\qqqq{^{1/4}} \newcommand\qqqqc{^{3/4}} \newcommand\qqqqw{^{-1/4}} \newcommand\qqqqcw{^{-3/4}} \newcommand\intoi{\int_0^1} \newcommand\intoo{\int_0^\infty} \newcommand\intoooo{\int_{-\infty}^\infty} \newcommand\intpipi{\int_{-\pi}^\pi} \newcommand\oi{\ensuremath{[0,1]}} \newcommand\ooi{(0,1]} \newcommand\ooo{[0,\infty)} \newcommand\ooox{[0,\infty]} \newcommand\oooo{(-\infty,\infty)} \newcommand\setoi{\set{0,1}} \newcommand\dtv{d_{\mathrm{TV}}} 
\newcommand\dd{\,\mathrm{d}} \newcommand\ddx{\mathrm{d}} \newcommand\ddd[1]{\frac{\ddx}{\ddx#1}} \newcommand{probability generating function}{probability generating function} \newcommand{moment generating function}{moment generating function} \newcommand{characteristic function}{characteristic function} \newcommand{$\gs$-field}{$\gs$-field} \newcommand{uniformly integrable}{uniformly integrable} \newcommand\rv{random variable} \newcommand\lhs{left-hand side} \newcommand\rhs{right-hand side} \newcommand\GW{Galton--Watson} \newcommand\GWt{\GW{} tree} \newcommand\cGWt{conditioned \GW{} tree} \newcommand\GWp{\GW{} process} \newcommand\gnp{\ensuremath{G(n,p)}} \newcommand\gnm{\ensuremath{G(n,m)}} \newcommand\gnd{\ensuremath{G(n,d)}} \newcommand\gnx[1]{\ensuremath{G(n,#1)}} \newcommand\etto{\bigpar{1+o(1)}} \newcommand\Uoi{U(0,1)} \newcommand\xoo{_1^\infty} \newcommand\xooo{_0^\infty} \newcommand\xx[1]{^{#1}} \newcommand\fS{\mathfrak{S}} \newcommand\nux{n} \newcommand\nnx{N} \newcommand\ttt{{\mathbf t}} \newcommand\ntt{\nnx_{\ttt}} \newcommand\nut{\nux_\ttt} \newcommand\nnqr{\nnx_{t_{q,r}}} \newcommand\nuqr{\nux_{t_{q,r}}} \newcommand\nttm{N_\ttt^M} \newcommand\nutm{\nu_\ttt^M} \newcommand\ctn{\cT_n} \newcommand\ct{\cT} \newcommand\df{depth first} \newcommand\dfo{depth first order} \newcommand\gdt{\gD(\ttt)} \newcommand\bm{\mathbf{m}} \newcommand\bbzzk{\bbZgeo^k} \newcommand\sumbm{\sum_{\bm}} \newcommand\ddt{\frac{\ddx}{\ddx t}} \newcommand\px[1]{{\mathsf{P}_{#1}}} \newcommand\pk{\px{k}} \newcommand\pl{\px{\ell}} \newcommand\tqr{t_{q,r}} \newcommand\vv{\varpi} \newcommand\vvl{\vv_\ell} \newcommand\hT{\widehat T} \newcommand{H\"older}{H\"older} \newcommand{P\'olya}{P\'olya} \newcommand\CS{Cauchy--Schwarz} \newcommand\CSineq{\CS{} inequality} \newcommand{L\'evy}{L\'evy} \newcommand\ER{Erd\H os--R\'enyi} \newcommand{Lov\'asz}{Lov\'asz} \newcommand{Fr\'echet}{Fr\'echet} \newcommand{\texttt{Maple}}{\texttt{Maple}} \newcommand\citex{\REM} \newcommand\refx[1]{\texttt{[#1]}} 
\newcommand\xref[1]{\texttt{(#1)}} \hyphenation{Upp-sala} \begin{document} \begin{abstract} We show that the number of copies of a given rooted tree in a conditioned Galton--Watson tree satisfies a law of large numbers under a minimal moment condition on the offspring distribution. \end{abstract} \maketitle \section{Introduction}\label{S:intro} Let $\ctn$ be a random \cGWt{} with $n$ nodes, defined by an offspring distribution $\xi$ with mean $\E\xi=1$, and let $\ttt$ be a fixed ordered rooted tree. We are interested in the number of copies of $\ttt$ as a (general) subtree of $\ctn$, which we denote by $\ntt(\ctn)$. For details of these and other definitions, see \refS{Snot}. Note that we consider subtrees in a general sense. (Thus, e.g., not just fringe trees; for them, see similar results in \cite{SJ285}.) The purpose of the present paper is to show the following law of large numbers under minimal moment assumptions. Let $\nut(T)$ be the number of rooted copies of $\ttt$ in a tree $T$, \ie, copies with the root at the root of $T$. Further, let $\gdt$ be the maximum outdegree in $\ttt$. \begin{theorem}\label{T1} Let $\ttt$ be a fixed ordered tree, and let $\ctn$ be a \cGWt{} defined by an offspring distribution $\xi$ with $\E\xi=1$ and $\E\xi^{\gdt}<\infty$. Also, let $\cT$ be a \GWt{} with the same offspring distribution. Then, as \ntoo, \begin{align} \label{t1} \ntt(\ctn)/n \lito \E\nut(\cT), \end{align} where the limit is finite and given explicitly by \eqref{l1a} below. 
Equivalently,
\begin{align}
\ntt(\ctn)/n &\pto \E\nut(\cT),\label{t1p}
\end{align}
and
\begin{align}
\E\ntt(\ctn)/n &\to \E\nut(\cT).\label{t1e}
\end{align}
\end{theorem}

The fact that \eqref{t1} is equivalent to \eqref{t1p}--\eqref{t1e} is an instance of the general fact that for any random variables, convergence in $L^1$ is equivalent to convergence in probability together with convergence of the means of the absolute values (\ie, in this case, with non-negative variables, the means); see \eg{} \cite[Theorem 5.5.4]{Gut}. We nevertheless state both versions for convenience.

\citet{Drmotaetal} (see also \cite[Section 3.3]{Drmota}) considered patterns in random trees; their patterns differ from the subgraph counts above in that some external vertices are added to $\ttt$, and that one only considers copies of $\ttt$ in a tree $T$ such that each internal vertex in the copy has the same degree in $T$ as in $\ttt$ (counting also edges to external vertices); equivalently, each vertex in $\ttt$ is equipped with a number, and one considers only copies of $\ttt$ where the vertex degrees match these numbers. (Another difference is that \cite{Drmotaetal} consider unrooted trees, but the proof proceeds by first considering rooted [planted] trees. Furthermore, only uniformly random labelled trees are considered in \cite{Drmotaetal}, but the proofs extend to suitable more general \cGWt{s}, as remarked in \cite{Drmotaetal} and shown explicitly in \cite{Kok1,Kok2}.) It was shown in \citet{Drmotaetal} that the number of occurrences of such a pattern is asymptotically normal, with asymptotic mean and variance both of the order $n$ (except that the variance might be smaller in at least one exceptional degenerate case), which of course entails a law of large numbers.
Moreover, \cite{Drmotaetal} briefly discuss generalizations, including subtrees without further degree conditions as in the present paper; they expect asymptotic normality to hold in this case too, but it seems that their method, which is based on setting up and analyzing a system of functional equations for generating functions, would in general require extensions to infinite systems, which as far as we know have not been pursued. (See \cite{DrmotaRR} for a related problem.) See further \refS{Sfurther}. Our method is probabilistic, and quite different from the analysis of generating functions in \cite{Drmotaetal}.

\section{Notation}\label{Snot}

All trees are rooted and ordered. The root of a tree $T$ is denoted $o=o_T$. The size $|T|$ of a tree $T$ is defined as the number of vertices in $T$. The \emph{degree} $d(v)$ of a vertex $v\in T$ always means the outdegree, \ie, the number of children of $v$. The \emph{degree sequence} of $T$ is the sequence of all degrees $d(v)$, $v\in T$, for definiteness in \dfo. Let $\gD(T):=\max_{v\in T}d(v)$ be the maximum (out)degree in $T$.

A (general) \emph{subtree} $T'$ of a tree $T$ is a non-empty connected subgraph of $T$; we regard a subtree as a rooted tree in the obvious way, with the root being the vertex in $T'$ that is closest to the root in $T$. Note that for any vertex $v\in T'$, its set of children in $T'$ is a subset of its set of children in $T$; the order of the children of $v$ in $T'$ is (by definition) the same as their relative order in $T$. If $v\in T$, the \emph{fringe subtree} $T^v$ is the subtree of $T$ consisting of $v$ and all its descendants; this is thus a subtree with root $v$.

If $\ttt$ and $T$ are ordered rooted trees, let $\ntt(T)$ be the number of (general) subtrees of $T$ that are isomorphic to $\ttt$ (as ordered trees), and let $\nut(T)$ be the number of such subtrees that furthermore have root $o_T$.
Then $\nut(T^v)$ is the number of subtrees with root $v$ isomorphic to $\ttt$, and thus
\begin{align}\label{ntt}
\ntt(T) = \sum_{v\in T}\nut(T^v).
\end{align}
In other words, $\ntt(T)$ is an additive functional with toll function $\nut(T)$, see \eg{} \cite{SJ285}.

Let $\cT$ be a random \GWt{} defined by an offspring distribution $(p_i)\xooo$, and let $\ctn$ be the \cGWt{} defined as $\cT$ conditioned on $|\cT|=n$ (tacitly considering only $n$ such that $\P\bigpar{|\cT|=n}>0$); see \eg{} \cite{SJ264} for a survey. We let $\xi$ be a random variable with the distribution $(p_i)\xooo$; we call both $(p_i)\xooo$ and (with a minor abuse) $\xi$ the \emph{offspring distribution}. We will only consider offspring distributions with $\E\xi=1$ (\ie, $\xi$ is \emph{critical}). (We often repeat this for emphasis.) Let $\gss:=\Var\xi\le\infty$; we tacitly assume $\gss>0$, but do not require $\gss<\infty$ unless we say so.

$C$ and $c$ denote unspecified constants that may vary from one occurrence to the next. They may depend on parameters such as the offspring distribution or the fixed tree $\ttt$, but they never depend on $n$. Convergence in probability and distribution is denoted $\pto$ and $\dto$, respectively. Unspecified limits are as \ntoo.

\section{Proof}\label{Spf}

We begin by finding the expectation of $\nut$ for both unconditioned and conditioned \GWt{s}. Let
\begin{align}
S_n:=\sum_{i=1}^n \xi_i,
\end{align}
where $\xi_1,\xi_2,\dots$ are \iid{} copies of $\xi$.

\begin{lemma}
\label{L1}
Let $\ttt$ be a fixed ordered tree with degree sequence $d_1,\dots,d_k$, where thus $k=|\ttt|$.
\begin{romenumerate}
\item \label{L1a}
Then
\begin{align}\label{l1a}
\E \nut(\ct)=\prod_{i=1}^k \E\binom \xi{d_i}
=\prod_{i=1}^k \sum_{m_i=d_i}^\infty p_{m_i}\binom{m_i}{d_i}.
\end{align}
\item \label{L1b}
If $n>k$, then, with $m:=\sum_{i=1}^k m_i$,
\begin{align}\label{l1b}
\E \nut(\ctn)
=\frac{n}{n-k}
\sum_{m_1,\dots,m_k\ge0}\prod_{i=1}^k p_{m_i}\binom{m_i}{d_i}
\cdot \frac{(m-k+1)\P(S_{n-k}=n-m-1)}{\P(S_n=n-1)}.
\end{align}
\end{romenumerate}
\end{lemma}

\begin{proof}
\pfitemref{L1a}
We try to construct a copy $t'$ of $\ttt$ in $\ct$, with the given root $o$. Let $m_1$ be the root degree of $\ct$. Then there are $\binom{m_1}{d_1}$ ways to choose the $d_1$ children of the root that belong to $t'$. Fix one of these choices, say $v_{11},\dots,v_{1d_1}$. Next, let $m_2$ be the number of children of $v_{11}$ in $\ct$. Given $m_2$, there are $\binom{m_2}{d_2}$ ways to choose the $d_2$ children of $v_{11}$ that belong to $t'$. Fix one of these choices. Continuing in the same way, taking the vertices of $t'$ in \df{} order, we find for every sequence $m_1,\dots,m_k$ of non-negative integers, a total of $\prod_{i=1}^k\binom{m_i}{d_i}$ choices, and each of these gives a tree $t'\cong\ttt$ provided the selected vertices in $\cT$ have degrees $m_1,\dots,m_k$, which occurs with probability $\prod_{i=1}^k p_{m_i}$. Hence,
\begin{align}
\E \nut(\ct)&
= \sum_{m_1,\dots,m_k\ge0} \prod_{i=1}^k p_{m_i} \prod_{i=1}^k \binom{m_i}{d_i}
= \sum_{m_1,\dots,m_k\ge0} \prod_{i=1}^k \Bigpar{p_{m_i} \binom{m_i}{d_i}}
\notag\\&
=\prod_{i=1}^k \sum_{m_i=0}^\infty p_{m_i}\binom{m_i}{d_i},
\end{align}
and \eqref{l1a} follows.

\pfitemref{L1b}
Consider again $\ct$. We have just shown that each sequence $m_1,\dots,m_k$ gives $\prod_{i=1}^k\binom{m_i}{d_i}$ choices of possible subtrees $t'\cong\ttt$ in $\cT$, where the vertices of $t'$ are supposed to have degrees $m_1,\dots,m_k$ in $\ct$. This gives a total of $m=\sum_{i=1}^k m_i$ children, of which $k-1$ are the non-root vertices in $t'$, and thus $m-(k-1)$ are unaccounted children. Then, $|\ct|=n$ if and only if these $m-k+1$ children and their descendants yield exactly $n-k$ vertices.
Condition on $m_1,\dots,m_k$ and one of the corresponding choices of $t'$. The probability that the $m-k+1$ children above and their descendants are $n-k$ vertices is the probability that a \GWp{} (with offspring distribution $\xi$) started with $m-k+1$ individuals has total progeny $n-k$, which by the Otter--Dwass formula \cite{Dwass} (see also \cite{Pitman:enum} and the further references there) is given by
\begin{align}
\frac{m-k+1}{n-k}\P\bigpar{S_{n-k}=n-k-(m-k+1)}.
\end{align}
Multiplying by $\prod_{i=1}^k p_{m_i}$, the probability that the vertices in $t'$ have the right degrees in $\ct$, and summing over all possibilities, we obtain
\begin{align}\label{lab}
&
\E\bigsqpar{ \nut(\ctn)}\P\bigpar{|\ct|=n}
= \E\bigsqpar{\nut(\ct)\mid |\ct|=n}\P\bigpar{|\ct|=n}
= \E\bigsqpar{\nut(\ct)\indic{|\ct|=n}}
\notag\\&\qquad
= \sum_{m_1,\dots,m_k\ge0}\prod_{i=1}^k p_{m_i}\binom{m_i}{d_i}
\cdot \frac{m-k+1}{n-k}\P(S_{n-k}=n-m-1)
.\end{align}
By the Otter--Dwass formula again (this time the original case in \cite{Otter}),
\begin{align}
\P\bigpar{|\ct|=n} = \frac{1}n \P\bigpar{S_n=n-1}
\end{align}
and \eqref{l1b} follows. (Cf.\ \cite[Lemma 15.9]{SJ264} for a related result.)
\end{proof}

We need estimates of the probabilities $\P\bigpar{S_n=n-m}$. The estimate \eqref{erika0} below is standard; we expect that also \eqref{erika1} is known, but we have not found a reference, so we give a proof. (It is related to more difficult estimates in \eg{} \cite{Petrov} assuming more moments, see \refR{RP} below.)

\begin{lemma}
\label{Lsn}
Suppose that $\E\xi=1$ and $\E\xi^2<\infty$. Then, uniformly for all $n\ge1$ and $m\in\bbZ$,
\begin{align}\label{erika0}
\P\bigpar{S_n=n-m} &\le C n\qqw,
\\\label{erika1}
\P\bigpar{S_n=n-m} &\le C |m|\qw.
\end{align}
\end{lemma}

\begin{proof}
\pfitem{\ref{erika0}}
This is well-known.
In fact, the classical local limit theorem, see \eg{} \cite[Theorem VII.1]{Petrov}, gives the much more precise result that, uniformly in $m\in\bbZ$ as \ntoo,
\begin{align}\label{local}
\P\bigpar{S_n=n-m}
=\frac{h}{\gs\sqrt{n}}\Bigpar{\frac{1}{\sqrt{2\pi}}e^{-m^2/(2\gss n)}+o(1)},
\end{align}
where $h$ is the span of the offspring distribution. (Provided $h|(n-m)$; otherwise the probability is 0.)

\pfitem{\ref{erika1}}
Let $\gf(t):=\E e^{\ii t (\xi-1)}$ be the characteristic function of $\xi-1=\xi-\E\xi$; note that $\gf(t)$ is twice differentiable because $\E\xi^2<\infty$. Then, by Fourier inversion,
\begin{align}\label{sofie}
\P\bigpar{S_n=n-m}
=\frac{1}{2\pi}\intpipi e^{\ii mt}\gf(t)^n\dd t.
\end{align}
Hence, using an integration by parts,
\begin{align}
2\pi\ii m \P\bigpar{S_n=n-m}
=\intpipi\Bigpar{\ddt e^{\ii mt}}\gf(t)^n\dd t
=-\intpipi e^{\ii mt}\ddt\bigpar{\gf(t)^n}\dd t
\end{align}
and thus
\begin{align}\label{magnus}
| m| \P\bigpar{S_n=n-m}
\le \intpipi \Bigabs{\ddt\bigpar{\gf(t)^n}}\dd t
= n\intpipi \abs{\gf'(t)}\abs{\gf(t)}^{n-1}\dd t.
\end{align}
The assumptions yield $\gf'(0)=\E(\xi-1)=0$ and $\sup|\gf''(t)|=|\gf''(0)|=\Var\xi=C<\infty$, and thus
\begin{align} \label{emma}
|\gf'(t)|\le C|t|.
\end{align}
Assume for simplicity that the span of $\xi$ is 1 (the general case is similar, with standard modifications). Then, as is well-known, there exists $c>0$ such that
\begin{align}\label{jesper}
|\gf(t)|\le e^{-ct^2},\qquad |t|\le\pi.
\end{align}
Using \eqref{emma} and \eqref{jesper} in \eqref{magnus} we obtain
\begin{align}\label{wilhelm}
| m| \P\bigpar{S_n=n-m}
\le n C\intpipi |t| e^{-c(n-1)t^2}\dd t
\le Cn\intoo t e^{-cnt^2}\dd t
=C,
\end{align}
which proves \eqref{erika1}.
\end{proof} \begin{remark}\label{RP} In the same way, taking two derivatives inside \eqref{sofie}, one obtains \begin{align}\label{erika2} \P\bigpar{S_n=n-m} &\le C n\qq m\qww, \end{align} which is stronger for large $m$; note that \eqref{erika0} and \eqref{erika2} imply \eqref{erika1}. Furthermore, even stronger estimates hold if we assume more moments; see \cite[Theorem VII.16]{Petrov} for a precise asymptotic estimate assuming $\E\xi^k<\infty$ for some $k\ge3$. In fact, \cite[Theorem VII.16]{Petrov} holds for $k=2$ too, which can be seen by refining the argument above; this is an asymptotic estimate that is more precise than \eqref{erika2} (and implies it). \end{remark} \begin{lemma} \label{Lon} Let $\ttt$ be a fixed ordered tree and suppose that $\E\xi=1$, $\E\xi^2<\infty$ and $\E\xi^{\gdt}<\infty$. Then $\E\nut(\ctn)=o\bigpar{n\qq}.$ \end{lemma} \begin{proof} Let again the degree sequence of $\ttt$ be $d_1,\dots,d_k$. For a vector $\bm=(m_1,\dots,m_k)\in \bbzzk$, let \begin{align}\label{amma} a_\bm:= \prod_{i=1}^k p_{m_i}\binom{m_i}{d_i}. \end{align} Then, \eqref{l1a}--\eqref{l1b} and the assumption $\E\xi^\gdt<\infty$ yield \begin{align}\label{cecilia} \sumbm a_\bm=\E\nut(\cT)<\infty \end{align} and for $n>k$, with as above $m:=\sum_im_i =:|\bm|$ (and $C=1$, actually), \begin{align}\label{selma} \E\nut(\ctn) \le C \sumbm a_\bm \cdot \frac{m\P(S_{n-k}=n-m-1)}{\P(S_n=n-1)}. \end{align} Denote the summand in \eqref{selma} by $b_{\bm,n}$. By the local limit theorem \eqref{local}, as is well-known, \begin{align} \label{sigrid} \P(S_n=n-1)\sim c n\qqw, \end{align} and thus \begin{align}\label{winston} b_{\bm,n}/n\qq \le C ma_\bm \P(S_{n-k}=n-m-1). \end{align} Hence, \eqref{erika0} implies that for every fixed $\bm$, as \ntoo, \begin{align}\label{eleonora} b_{\bm,n}/n\qq \le C ma_\bm n\qqw \to 0. \end{align} Furthermore, \eqref{winston} and \eqref{erika1} yield \begin{align}\label{anna} b_{\bm,n}/n\qq \le C a_\bm, \end{align} which is summable by \eqref{cecilia}. 
Consequently, dominated convergence shows that
\begin{align}\label{lina}
n\qqw \sumbm b_{\bm,n}= \sumbm b_{\bm,n}/n\qq \to0,
\end{align}
which together with \eqref{selma} yields the result $n\qqw\E\nut(\ctn)\to0$.
\end{proof}

We will see in \refE{Ebad} below that the estimate $o(n\qq)$ in \refL{Lon} is best possible in general. However, if we assume one further moment on $\xi$, we can improve the estimate to $O(1)$, and furthermore show that $\E\nut(\ctn)$ converges. We show this next, although it is not required for our main result.

\begin{lemma}\label{L3}
Let $\ttt$ be a fixed tree with degree sequence $d_1,\dots,d_k$, and suppose that $\E\xi=1$ and $\E\xi^2<\infty$. Then, as \ntoo,
\begin{align}\label{l3}
\E\nut(\ctn)\to \sum_{i=1}^k (d_i+1)\E\binom{\xi}{d_i+1} \prod_{j\neq i} \E\binom\xi{d_j}.
\end{align}
In particular, $\E\nut(\ctn)=O(1)$ if\/ $\E\xi^{\gdt+1}<\infty$, while $\E\nut(\ctn)\to\infty$ if\/ $\E\xi^{\gdt+1}=\infty$.
\end{lemma}

\begin{proof}
Define again $a_\bm$ by \eqref{amma}, and denote the summand in \eqref{l1b} by $b'_{\bm,n}$, where as above $\bm=(m_1,\dots,m_k)\in\bbzzk$. It follows from the local limit theorem \eqref{local} that for every fixed $\bm$, as \ntoo,
\begin{align}
\frac{\P(S_{n-k}=n-m-1)}{\P(S_n=n-1)}
=\frac{h\xpar{2\pi\gss( n-k)}\qqw\bigpar{1+o(1)}}
{h\xpar{2\pi\gss n}\qqw\bigpar{1+o(1)}}
\to 1.
\end{align}
(This holds also if the span $h>1$, assuming as we may that all $p_{m_i}>0$, so $h|m$.)
Hence,
\begin{align}\label{elfbrink}
b'_{\bm,n}\to a_\bm(m-k+1).
\end{align}
Furthermore, by \eqref{erika0} and \eqref{sigrid},
\begin{align}
\frac{\P(S_{n-k}=n-m-1)}{\P(S_n=n-1)}
\le\frac{C n\qqw}{c n\qqw}
=C,
\end{align}
and thus
\begin{align}\label{helene}
b'_{\bm,n}\le C a_\bm(m-k+1).
\end{align}
Consequently, if $\sum_\bm a_\bm(m-k+1)<\infty$, then
\begin{align}\label{sumb}
\sum_\bm b'_{\bm,n}\to\sum_\bm a_\bm(m-k+1)
\end{align}
by \eqref{elfbrink}, \eqref{helene} and dominated convergence.
On the other hand, if $\sum_\bm a_\bm(m-k+1)=\infty$, then $\sum_\bm b'_{\bm,n}\to\infty$ by \eqref{elfbrink} and Fatou's lemma, and thus \eqref{sumb} holds in this case too. Recalling \eqref{l1b}, this shows that in any case,
\begin{align}\label{duncan}
\E\nut(\ctn)\to\sum_\bm a_\bm(m-k+1),
\end{align}
and it remains only to evaluate the limit. Since $\ttt$ is a tree, we have $\sum_{i=1}^k d_i=k-1$, and thus $m-k+1=\sum_{i=1}^k(m_i-d_i)$. Recalling the definition \eqref{amma} of $a_\bm$, we thus have
\begin{align}\label{duff}
\sum_\bm a_\bm(m-k+1)
&=\sum_\bm\sum_{i=1}^k (m_i-d_i) p_{m_i}\binom{m_i}{d_i} \prod_{j\neq i} p_{m_j}\binom{m_j}{d_j}
\notag\\&
=\sum_{i=1}^k\sum_{m_i=0}^\infty p_{m_i}(m_i-d_i)\binom{m_i}{d_i} \prod_{j\neq i} \sum_{m_j=0}^\infty p_{m_j}\binom{m_j}{d_j},
\end{align}
which equals the \rhs{} of \eqref{l3} because $(m_i-d_i)\binom{m_i}{d_i}=(d_i+1)\binom{m_i}{d_i+1}$. This completes the proof by \eqref{duncan}.
\end{proof}

\begin{remark}\label{Rht}
Assume only $\E\xi=1$. If $\hT$ is the infinite size-biased \GWt{} defined by \citet{Kesten}, see also \cite[Section 5]{SJ264}, then $\ctn\dto\hT$ in a local topology (\ie, close to the root), see \cite[Theorem 7.1]{SJ264}, and it follows that
\begin{align} \label{alla}
\nut(\ctn)\dto\nut(\hT).
\end{align}
It is not difficult to see that $\E\nut(\hT)$ equals the \rhs{} of \eqref{l3}, which thus says that $\E\nut(\ctn)\to\E\nut(\hT)$. (This could presumably be used to give an alternative proof of \refL{L3}, but we prefer the direct proof above.) In particular, if $\E\xi^{\gdt+1}=\infty$, then $\E\nut(\hT)=\infty$, and thus \eqref{alla} and Fatou's lemma yield $\E\nut(\ctn)\to\infty$. Hence, the last sentence in \refL{L3} holds also without the assumption $\E\xi^2<\infty$.
\end{remark}

We proceed to the proof of \refT{T1}.
The case $\gdt\le1$ is special, since we then do not assume $\E\xi^2<\infty$, but on the other hand this case is simple and rather trivial, so we discuss it separately in the following example. \begin{example}\label{Ep} Consider the case $\gdt\le1$. This means that $\ttt$ is a path $\pk$ with $k\ge1 $ vertices, and thus length $k-1$. A copy of $\ttt$ in a tree $T$ is thus a path consisting of $k$ vertices $v_1,\dots,v_k$ such that $v_{i+1}$ is a child of $v_i$; such a path is determined by its endpoint $v_k$, and every vertex of depth (= distance from the root) at least $k-1$ is the endpoint of a copy of $\ttt$. Hence, if $\nu_i(T)$ is the number of vertices in $T$ of depth $i$, then \begin{align}\label{epa} \nnx_{\pk}(T)=\sum_{i\ge k-1}\nu_i(T) = |T|-\sum_{i=0}^{k-2}\nu_i(T). \end{align} In particular, $\nnx_{\px1}(\ctn)=n$ and $\nnx_{\px2}(\ctn)=n-1$ are deterministic; these are trivially just the numbers of vertices and edges. Moreover, as said in \refR{Rht}, assuming $\E\xi=1$, the random tree $\ctn$ converges locally in distribution as \ntoo, see \cite[Theorem 7.1]{SJ264}; in particular each $\nu_i(\ctn)$ converges in distribution (to $\nu_i(\hT)$) and thus $\nu_i(\ctn)=\Op(1)$ (\ie, is bounded in probability). Hence, for every $k\ge1$, \eqref{epa} implies \begin{align}\label{epb} \nnx_{\pk}(\ctn)=n+\Op(1). \end{align} In particular, $\nnx_{\pk}(\ctn)$ is more strongly concentrated than the dispersion of order $n\qq$ typically seen in similar statistics, see \eg{} \refE{E11} and \refS{Sfurther}. \end{example} \begin{proof} [Proof of \refT{T1}] Suppose first $\gdt\le1$. Then $\ttt=\pk$ for some $k\ge1$ and \refE{Ep} shows that \eqref{epb} holds, and thus $\nnx_\pk(\ctn)/n\pto1$. Furthermore, \eqref{l1a} yields \begin{align} \E\nux_{\pk}(\ct)=(\E \xi)^{k-1}=1, \end{align} and thus \eqref{t1p} holds. 
Moreover, $\nnx_\pk(\ctn)/n\le1$ by \eqref{epa}, and thus dominated convergence applies to \eqref{t1p} and yields \eqref{t1e} and \eqref{t1}, see \eg{} \cite[Theorems 5.5.4 and 5.5.5]{Gut}.

In the remainder of the proof we may thus assume $\gdt\ge2$, and thus, in particular, $\E\xi^2<\infty$. (The arguments below use $\E\xi^2<\infty$, but apply to any $\gdt$.) \refL{L1}\ref{L1a} and the assumption $\E\xi^{\gdt}<\infty$ show that $\E\nut(\ct)<\infty$, and \refL{Lon} shows $\E\nut(\ctn)=o\bigpar{n\qq}$. Hence \eqref{t1p} and \eqref{t1e} follow by \cite[Remark 5.3]{SJ285}. However, since only a sketch of the proof is given in that remark, let us add some details.

First, \eqref{t1e} follows by the argument in the proof of \cite[Theorem 1.5(i)]{SJ285}, adding the factor $n\qq$ in some places. Next, define for $M>0$ the truncation $\nutm(T):=\nut(T)\bmin M$ and let $\nttm(T):=\sum_{v\in T} \nutm(T^v)$ be the corresponding additive functional, \cf{} \eqref{ntt}. Let $\eps>0$. Since $\nutm(\cT)\upto\nut(\ct)$ as $M\to\infty$, we can, by monotone convergence and the fact that $\E\nut(\ct)<\infty$, choose $M$ such that
\begin{align}\label{othello}
\E\nut(\cT)-\E\nutm(\ct)<\eps^2.
\end{align}
We have proved \eqref{t1e}, and similarly $\E\nttm(\ctn)/n\to\E\nutm(\cT)$ by \cite[Theorem 1.3]{SJ285}, since $\nutm$ is bounded. Hence, \eqref{othello} implies that for all sufficiently large $n$,
\begin{align}\label{desdemona}
\E\bigabs{\ntt(\ctn)/n-\nttm(\ctn)/n}= \E\ntt(\ctn)/n-\E\nttm(\ctn)/n<\eps^2.
\end{align}
Furthermore, \cite[Theorem 1.3]{SJ285} also yields $\nttm(\ctn)/n\pto\E\nutm(\cT)$. Consequently, using also \eqref{othello} again, \eqref{desdemona} and Markov's inequality, if $n$ is large,
\begin{align}
&
\P\bigpar{\bigabs{\ntt(\ctn)/n-\E\nut(\ct)}>3\eps}
\notag\\&
\le \P\bigpar{\bigabs{\ntt(\ctn)/n-\nttm(\ctn)/n}>\eps}
+ \P\bigpar{\bigabs{\nttm(\ctn)/n-\E\nutm(\ct)}>\eps}
\notag\\&
\le2\eps.
\end{align}
Hence, \eqref{t1p} holds.
Finally, as said earlier, \eqref{t1p} and \eqref{t1e} are together equivalent to the $L^1$ convergence \eqref{t1}. \end{proof} \section{Examples}\label{Sex} We give some simple but illuminating examples. Recall also \refE{Ep}. \begin{example}\label{Eqr} Let $t=t_{q,r}$ consist of two paths with $q+1$ and $r+1$ vertices, joined at the root; here $q,r\ge1$. We have $k=1+q+r$ and $d_1=2$ while $d_i=1$ for $i>1$; thus $\gdt=2$. Since $\E\xi=1$, \eqref{l1a} yields \begin{align} \E\nuqr(\ct)=\E\binom{\xi}2=\frac{\E\xi^2-1}{2}=\frac{\gss}2. \end{align} Hence, \refT{T1} yields, for any $q,r\ge1$, \begin{align}\label{eqr2} \nnqr(\ctn)/n\lito \gss/2. \end{align} \end{example} \begin{example}\label{E11} Consider the special case $q=r=1$ of \refE{Eqr}. Then $t_{1,1}$ is a cherry, \ie, a root with two children. If a vertex $v$ in a tree $T$ has degree $d(v)$, then the number of cherries rooted at $v$ is $\binom{d(v)}2$, and thus \begin{align}\label{e11} \nnx_{t_{1,1}}(T)=\sum_{v\in T}\binom{d(v)}2 =\sum_{r=1}^\infty \binom r2 X_r(T), \end{align} where $X_r(T)$ is the number of vertices of degree $r$ in $T$. It is known that $X_r(\ctn)/n\pto p_r$, see \eg{} \cite[Theorem 7.11]{SJ264}. Hence, \eqref{eqr2} (with $q=r=1$) is what we would get by dividing \eqref{e11} by $n$ and taking the limit inside the sum; if the degree distribution is bounded, the sum is finite so this is rigorous and \eqref{eqr2} (still with $q=r=1$) follows from \eqref{e11}. In this case we can say much more than \eqref{eqr2}. It was proved in \cite{Kolchin}, see also \cite{DrmotaG99}, that $X_r(\ctn)$ is asymptotically normal, with \begin{align} \frac{X_r(\ctn)-np_r}{\sqrt n}\dto N\bigpar{0,\gam_r^2} \end{align} for some explicit $\gam_r^2$. This was extended to joint convergence for all $r$ in \cite{SJ132}, provided $\E\xi^3<\infty$. 
Hence, at least if $\xi$ is bounded, it follows from \eqref{e11} that $\nnx_{t_{1,1}}(\ctn)$ is asymptotically normal, with \begin{align}\label{asn} \frac{\nnx_{t_{1,1}}(\ctn)-n\gss/2}{\sqrt n}\dto N\bigpar{0,\gam^2} \end{align} for some explicit $\gam^2\ge0$. There are degenerate cases where $\gam^2=0$. For example, for full binary trees ($\P(\xi=2)=\P(\xi=0)=\frac12$), all degrees are 0 or 2, and then each $X_r(T)$ is a deterministic function of $|T|$; hence \eqref{e11} shows that $\nnx_{t_{1,1}}(\ctn)$ is deterministic. More generally, the same happens for full $m$-ary trees, with $\xi\in\set{0,m}$ a.s\punkt, for any $m\ge2$. But it can be seen from the covariances given in \cite{SJ132} that $\gam^2>0$ in all other cases with bounded $\xi$. See further \refS{Sfurther}. \end{example} \begin{example} \label{El} Let $\ell\ge1$, and let $\vvl(T)$ be the number of (undirected) paths of length $\ell$ in $T$. For definiteness, we count undirected paths, so this equals the number of unordered pairs $(v,w)$ of vertices of distance $\ell$. There are two cases: \begin{romenumerate} \item $v$ is an ancestor of $w$, or conversely; the number of such pairs is $\nnx_{\pl}(T)$. \item Neither $v$ nor $w$ is an ancestor of the other. Then $v$ and $w$ are the two leaves in a copy of $\tqr$ with $q,r\ge1$ and $q+r=\ell$. For given $q$ and $r$, the number of such pairs equals $\nnqr(T)$ \end{romenumerate} Consequently, \begin{align} \vvl(T)=\nux_{\pl}(T)+\sum_{q=1}^{\ell-1}\nnx_{t_{q,\ell-q}}(T). \end{align} Hence, \refEs{Ep} and \ref{Eqr} yield \begin{align}\label{julie} \vvl(\ctn)/n\lito 1+(\ell-1)\frac{\gss}2. \end{align} For example, taking $\xi\sim\Po(1)$ we obtain (forgetting the ordering) a uniformly random unordered labelled tree; we have $\gss=1$ and thus \eqref{julie} yields \begin{align}\label{vvpo} \vvl(\ctn)\lito (\ell+1)/2. 
\end{align} Similarly, taking $\xi\sim\Ge(1/2)$ we obtain a uniformly random ordered tree; we have $\gss=2$ and thus \eqref{julie} then yields \begin{align}\label{vvge} \vvl(\ctn)\lito \ell. \end{align} Taking $\xi\sim\Bi(2,1/2)$ we obtain a uniformly random binary tree; we have $\gss=1/2$ and thus \eqref{julie} now yields \begin{align}\label{vvbi} \vvl(\ctn)\lito (\ell+3)/4. \end{align} \end{example} The following example shows that the estimate $o\bigpar{n\qq}$ in \refL{Lon} is best possible. \begin{example} \label{Ebad} For simplicity, let the tree $\ttt$ be a star, where the root has degree $\gD\ge2$ and its children are leaves with degree 0. (The argument is easily modified to any tree $\ttt$ with $\gdt\ge2$.) Thus $k:=|t|=\gD+1$. Assume that the span of $\xi$ is 1. The local limit theorem \eqref{local} implies that if $n$ is large and $m\le n\qq$, then \begin{align} \P(S_{n-k}=n-m-1)\ge cn\qqw ,\end{align} and thus, using \eqref{sigrid}, \begin{align} \P(S_{n-k}=n-m-1)/\P(S_n=n-1)\ge c. \end{align} Hence, by \eqref{l1b} and considering there only terms with $m_2=\dots=m_k=0$, \begin{align}\label{nutt} \E\nut(\ctn) \ge c\sum_{\gD<m_1\le n\qq} p_{m_1}\binom{m_1}{\gD} m_1 \ge c\sum_{\gD<m\le n\qq} p_{m}{m}^{\gD+1}. \end{align} If $\eps>0$, and we let $p_m=m^{-\gD-1-\eps}$ for large $m$, then $\E\xi^\gD<\infty$, and \eqref{nutt} yields, for large $n$, \begin{align}\label{nutts} \E\nut(\ctn) \ge c\sum_{\gD<m\le n\qq} {m}^{-\eps} \ge c n^{(1-\eps)/2}. \end{align} Hence, for any $\eps>0$, $\E\nut (\ctn)$ can grow faster than $n^{1/2-\eps}$. Similarly, we can find an offspring distribution $(p_m)\xooo$ satisfying the conditions such that $\E\nut(\ctn)=n^{1/2-o(1)}$; we omit the details. Moreover, for any given sequence $\gd(n)\downto0$, we can find $(p_m)\xooo$ such that $\E\nut(\ctn)\ge\gd(n)n\qq$, at least for a subsequence. To see this, take an increasing sequence $(m_j)\xoo$ with $\sum_{j=1}^\infty j\gd(m_j^2)<1$.
Let $p_{m_j}:=j\gd(m_j^2)m_j^{-\gD}$, and $p_m=0$ for all other $m\ge2$, choosing $p_0$ and $p_1$ such that $\sum_i p_i=\sum_ii p_i=1$. Also, let $n_j:=m_j^2$. Then \eqref{nutt} implies that, for large $j$, \begin{align} \E\nut(\ct_{n_j}) \ge c p_{m_j}m_j^{\gD+1} =cj m_j \gd(m_j^2) \ge m_j \gd(m_j^2) = n_j\qq \gd(n_j) . \end{align} \end{example} \section{Asymptotic normality?}\label{Sfurther} We showed in \refE{E11} that if $\xi$ is bounded, then $\nnx_{t_{1,1}}(\ctn)$ is asymptotically normal in the sense that \eqref{asn} holds (although $\gam^2=0$ is possible). In fact, this holds for any fixed tree $\ttt$. \begin{proposition}\label{P5} Assume that $\xi$ is bounded. Then, for any fixed tree $\ttt$, \begin{align}\label{asn5} \frac{\ntt(\ctn)-n\mu_\ttt}{\sqrt n}\dto N\bigpar{0,\gam_\ttt^2}, \end{align} for $\mu_\ttt:=\E\nut(\ct)$ and some $\gam^2_\ttt\ge0$. \end{proposition} \begin{proof} This follows from the result by \citet{Drmotaetal} on patterns discussed in \refS{S:intro} (extended to \cGWt{s} \cite{Drmotaetal,Kok1,Kok2}); the assumption on $\xi$ means that vertex degrees are bounded by some constant, and thus there is a finite number of patterns that correspond to subtrees isomorphic to $\ttt$; hence $\ntt(\ctn)$ is a linear combination of pattern counts, and the result follows from the joint asymptotic normality of the latter. (See also \cite{LiLi} for a special case.) Alternatively, this is an application of \cite[Theorem 1.13]{SJ285}: the functional $\nut$ is local (as defined in \cite{SJ285}) and for trees with degrees bounded by some constant $K$, $\nut$ is bounded. Hence \eqref{asn5} follows from \cite[Theorem 1.13]{SJ285}. \end{proof} We conjecture that this behaviour is typical, and that \refProp{P5} holds for every $\xi$ with $\E\xi=1$ that satisfies a suitable moment condition. However, it seems that substantial additional work would be required to show this. 
As said in the introduction, this was briefly discussed in \cite{Drmotaetal}, but it seems that the method there requires extensions to infinite systems of functional equations. Similarly, the application of \cite[Theorem 1.13]{SJ285} requires $\nut(\ctn)$ to be bounded, which is not the case when $\xi$ is unbounded. It is possible that this may be overcome by truncations and some variance estimates, but again more work is needed. (The extension in \cite{WagnerRS2020} applies to the case when $\ttt$ is a star with root degree $\gD$ (including \refE{E11} with $\gD=2$) and $\E\xi^{2\gD+1}<\infty$; this might suggest further extensions.) This problem is thus left for future research. Note also that there are degenerate cases when the asymptotic variance $\gam^2_\ttt$ in \eqref{asn5} vanishes; see \refEs{Ep} and \ref{E11}. (Then \eqref{asn5} does not give asymptotic normality, only a concentration result.) However, we conjecture that this is an exception, occurring only in a few special cases. \subsection*{Acknowledgement} I thank Stephan Wagner for helpful comments. \newcommand\AAP{\emph{Adv. Appl. Probab.} } \newcommand\JAP{\emph{J. Appl. Probab.} } \newcommand\JAMS{\emph{J. \AMS} } \newcommand\MAMS{\emph{Memoirs \AMS} } \newcommand\PAMS{\emph{Proc. \AMS} } \newcommand\TAMS{\emph{Trans. \AMS} } \newcommand\AnnMS{\emph{Ann. Math. Statist.} } \newcommand\AnnPr{\emph{Ann. Probab.} } \newcommand\CPC{\emph{Combin. Probab. Comput.} } \newcommand\JMAA{\emph{J. Math. Anal. Appl.} } \newcommand\RSA{\emph{Random Structures Algorithms} } \newcommand\DMTCS{\jour{Discr. Math. Theor. Comput. Sci.} } \newcommand\AMS{Amer. Math. Soc.} \newcommand\Springer{Springer-Verlag} \newcommand\Wiley{Wiley} \newcommand\vol{\textbf} \newcommand\jour{\emph} \newcommand\book{\emph} \newcommand\inbook{\emph} \def\no#1#2,{\unskip#2, no. #1,} \newcommand\toappear{\unskip, to appear} \newcommand\arxiv[1]{\texttt{arXiv}:#1} \newcommand\arXiv{\arxiv} \def\nobibitem#1\par{}
<?php

namespace app\models\db;

use yii\db\ActiveRecord;

class Modelt3 extends ActiveRecord
{
    public static function tableName()
    {
        return 'modelt3';
    }
}
Publishing
==========

::

    # Create some distributions
    $ python setup.py sdist bdist_wheel

    # Register your project (if necessary):
    # One needs to be explicit here, globbing dist/* would fail.
    # $ twine register dist/project_name-x.y.z.tar.gz
    $ twine register dist/mypkg-0.1-py2.py3-none-any.whl

    # Upload with Twine
    $ twine upload dist/*
\section{Introduction}\label{sec:introduction} \paragraph{Improving upon the error correction performance of Reed-Solomon codes.} Reed-Solomon codes are among the most extensively used error correcting codes. It has long been known how to decode them up to half the minimum distance. This gives a decoding algorithm that is able to correct a fraction $\frac{1-R}{2}$ of errors in a Reed-Solomon code of rate $R$. However, it is only in the late nineties that a breakthrough was obtained in this setting with Sudan's algorithm \cite{S97b} and its improvement in \cite{GS99}, which showed how to go beyond this barrier with an algorithm that in its \cite{GS99} version decodes any fraction of errors smaller than $1 - \sqrt{R}$. This exceeds the minimum distance bound $\frac{1-R}{2}$ in the whole region of rates $[0,1)$. Later on, it was shown that this decoding algorithm could also be modified slightly in order to cope with soft information on the errors \cite{KV03}. A few years later, it was also realized by Parvaresh and Vardy in \cite{PV05} that by a slight modification of Reed-Solomon codes and by an increase of the alphabet size it was possible to beat the $1-\sqrt{R}$ decoding radius. Their new family of codes is list decodable beyond this radius for low rates. Then, Guruswami and Rudra \cite{GR06} improved on these codes by presenting a new family of codes, namely {\em folded Reed-Solomon codes}, with a polynomial time decoding algorithm achieving the list decoding capacity $1-R-\epsilon$ for every rate $R$ and every $\epsilon >0$. The initial motivation of this paper is to present another modification of Reed-Solomon codes that improves the fraction of errors that can be corrected. It consists in using them in a $\left(U\mid U+V\right)$ construction. In other words, we choose in this construction $U$ and $V$ to be Reed-Solomon codes.
We will show that, in the low rate regime, this class of codes slightly outperforms a Reed-Solomon code decoded with the Guruswami-Sudan decoder. The point is that this $\left(U\mid U+V\right)$ code can be decoded in two steps: \begin{enumerate} \item First, subtract the left part $y_1$ from the right part $y_2$ of the received vector $(y_1|y_2)$ and decode the result with respect to $V$. In such a case, we are left with decoding a Reed-Solomon code with about twice as many errors. \item Second, once we have recovered the right part $v$ of the codeword, we can form a word $(y_1,y_2-v)$ which should match two copies of the same word $u$ of $U$. We can model this decoding problem by having some soft information on the received word when we have sent $u$. \end{enumerate} It turns out that this channel error model is much less noisy than the original $q$-ary symmetric channel we started with. This soft information can be used in Koetter and Vardy's decoding algorithm. By this means we can choose $U$ to be a Reed-Solomon code of much higher rate than $V$. All in all, it turns out that by choosing $U$ and $V$ with appropriate rates we can beat the $1 - \sqrt{R}$ bound of Reed-Solomon codes in the low-rate regime. It should be noted, however, that beating this $1- \sqrt{R}$ bound comes at a cost: unlike the algorithms of the aforementioned papers \cite{S97b,GS99,PV05,GR06}, which work for every error of a given weight (the so-called adversarial error model), our algorithm succeeds with probability $1-o(1)$ for errors of a given weight. However, contrary to \cite{PV05,GR06}, which result in a significant increase of the alphabet size of the code, our alphabet size actually decreases when compared to a Reed-Solomon code: it can be half of the code length and can be even smaller when we apply this construction recursively.
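The two-step decoding just described involves only simple word manipulations around the two constituent decoders. The following sketch illustrates them over a prime field $\F_q$, with symbols represented as integers modulo $q$; the Reed-Solomon decoders themselves (Guruswami-Sudan for $V$, Koetter-Vardy for $U$) are not shown, and all function names are ours, for illustration only.

```python
# Word manipulations of the two-step (U | U+V) decoding, over a prime
# field F_q represented as integers mod q. Illustrative sketch only;
# the actual V- and U-decoders are not included.

q = 7  # toy field size (prime); an assumption for this sketch


def split(y):
    """Split a received word (y1 | y2) into its two halves."""
    n = len(y) // 2
    return y[:n], y[n:]


def step1_word(y):
    """Word handed to the V-decoder: y2 - y1 (mod q).
    If (u | u+v) was sent, this equals v plus the difference of the error
    patterns on the two halves, i.e. roughly twice as many errors."""
    y1, y2 = split(y)
    return [(b - a) % q for a, b in zip(y1, y2)]


def step2_words(y, v):
    """After v is recovered, both y1 and y2 - v are noisy copies of u;
    a soft-information decoder for U can combine them."""
    y1, y2 = split(y)
    return y1, [(b - c) % q for b, c in zip(y2, v)]
```

Step 2 hands the $U$-decoder two independently noisy copies of $u$, which is what makes the resulting channel much less noisy than the original one.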
Indeed, we will show that we can even improve the error-correction performance by applying this construction again to the $U$ and $V$ components, i.e.\ we can choose $U$ to be a $(U_1|U_1+V_1)$ code and we replace in the same way the Reed-Solomon code $V$ by a $(U_2|U_2+V_2)$ code, where $U_1,U_2,V_1$ and $V_2$ are Reed-Solomon codes (we will say that these codes $U_i$ and $V_i$ are the constituent codes of the iterated $\left(U\mid U+V\right)$-construction). This again slightly improves the decoding performance in the low rate regime. \paragraph{Attaining the capacity by letting the depth of the construction go to infinity with an exponential decay of the probability of error after decoding.} The first question raised by these results is to understand what happens when we apply this iterative construction a number of times which goes to infinity with the codelength. In this case, the channels faced by the constituent Reed-Solomon codes polarize: they become either very noisy channels or very clean channels of capacity close to $1$. This is precisely the polarization phenomenon discovered by Ar{\i}kan in \cite{A09}. Indeed this iterated $\left(U\mid U+V\right)$-construction is nothing but a standard polar code when the constituent codes are Reed-Solomon codes of length $1$ (i.e. just a single symbol). The polarization phenomenon, together with a result proving that the Koetter-Vardy decoder is able to operate successfully at rates close to $1$ for channels of capacity close to $1$, can be used to show that it is possible to choose the rates of the constituent Reed-Solomon codes in such a way that the code construction together with the Koetter-Vardy decoder is able to attain the capacity of symmetric channels.
On a theoretical level, proceeding in this way would, however, not change the asymptotics of the decay of the probability of error after decoding: the codes obtained in this way would still behave as polar codes and would in particular have a probability of error which decays exponentially with respect to (essentially) the square root of the codelength. The situation changes completely, however, when we allow ourselves to change the input alphabet of the channel and/or to use Algebraic Geometry (AG) codes. The first point can be achieved by grouping symbols together and viewing them as a single symbol of a larger alphabet. The second point is also relevant here since the Koetter-Vardy decoder also applies to AG codes (see \cite{KV03a}) with only a rather mild penalty in the error-correction capacity related to the genus of the curve used for constructing the code. Both approaches can be used to overcome the limitation of having constituent codes in the iterated $\left(U\mid U+V\right)$-construction whose length is upper-bounded by the alphabet size. When we are allowed to choose long enough constituent codes, the asymptotic behavior changes radically. We will indeed show that if we insist on using Reed-Solomon codes in the code construction we obtain a quasi-exponential decay of the probability of error in terms of the codelength (i.e. exponential if we forget about the logarithmic terms in the exponent) and an exponential decay if we use the right AG codes. This improves very significantly upon polar codes. Not only are we able to attain the channel capacity with a polynomial-time decoding algorithm with this approach, but we are also able to do so with an exponential decay of the probability of error after decoding.
In essence, this sharp decay of the probability of error after decoding is due to a result of this paper (see Theorems \ref{th:exponential} and \ref{th:exponentialAG}) showing that even if the Koetter-Vardy decoder is not able to attain the capacity with a probability of error going to zero as the codelength goes to infinity its probability of error decays like $2^{-K \epsilon^2 n}$ where $n$ is the codelength and $\epsilon$ is the difference between a quantity which is strictly smaller than the capacity of the channel and the code-rate. {\bf Notation.} Throughout the paper we will use the following notation. \begin{itemize} \item A linear code of length $n$, dimension $k$ and distance $d$ over a finite field $\mathbb F_q$ is referred to as an $[n,k,d]_q$-code. \item The concatenation of two vectors $\word{x}$ and $\word{y}$ is denoted by $(\word{x}|\word{y})$. \item For a vector $\word{x}$ we either denote by $x(i)$ or by $x_i$ the $i$-th coordinate of $\word{x}$. We use the first notation when the subscript is already used for other purposes or when there is already a superscript for $\word{x}$. \item For a vector $\word{x}=(x_\alpha)_{\alpha \in \F_{q}}$ we denote by $\word{x}^{+\beta}$ the vector $(x_{\alpha+\beta})_{\alpha \in \F_{q}}$. \item For a matrix $\mat{M}$ we denote by $\mat{M}^j$ the $j$-th column of $\mat{M}$. \item By some abuse of terminology, we also view a discrete memoryless channel $W$ with input alphabet $\mathcal{X}$ and output alphabet $\mathcal{Y}$ as an $\mathcal{X} \times \mathcal{Y}$ matrix whose $(x,y)$ entry is denoted by $W(y|x)$ which is defined as the probability of receiving $y$ given that $x$ was sent. We will identify the channel with this matrix later on. \end{itemize} \section{The code construction and the link with polar codes} \label{sec:polar} \noindent {\bf Iterated $\left(U\mid U+V\right)$ codes.} This section details the code construction we deal with. 
It can be seen as a variation of polar codes and is nothing but an iterated $\left(U\mid U+V\right)$ code construction. We first recall the definition of a $\left(U\mid U+V\right)$ code. We refer to \cite[Th.33]{MS86} for the statements on the dimension and minimum distance that are given below. \begin{definition}[$\left(U\mid U+V\right)$ code] Let $U$ and $V$ be two codes of the same length and defined over the same finite field $\F_{q}$. We define the $\left(U\mid U+V\right)$-construction of $U$ and $V$ as the linear code: $$\left(U\mid U+V\right) = \left\{ (\mathbf u\mid\mathbf u + \mathbf v); \mathbf u \in U \hbox{ and } \mathbf v \in V\right\}.$$ The dimension of the $\left(U\mid U+V\right)$ code is $k_U+k_V$ and its minimum distance is $\min(2d_U,d_V)$ when the dimensions of $U$ and $V$ are $k_U$ and $k_V$ respectively, the minimum distance of $U$ is $d_U$ and the minimum distance of $V$ is $d_V$. \end{definition} The codes we are going to consider here are iterated $\left(U\mid U+V\right)$ constructions defined by \begin{definition}[iterated $\left(U\mid U+V\right)$-construction of depth $\ell$] \label{def:iterated_uv} An iterated $\left(U\mid U+V\right)$-code $U_\epsilon$ of depth $\ell$ is defined from a set of $2^\ell$ codes $\left\{U_\word{x};\word{x} \in \{0,1\}^\ell\right\}$ which have all the same length and are defined over the same finite field $\F_{q}$ by using the recursive definition \begin{eqnarray*} U_\epsilon & \stackrel{\text{def}}{=} & (U_0\mid U_0+U_1)\\ U_\word{x} &\stackrel{\text{def}}{=} &(U_{\word{x}\mid0}\mid U_{\word{x}\mid0}+ U_{\word{x}\mid1}) \;\;\text{for $\word{x} \in \{0,1\}^i$, $i \in \{1,\dots,\ell-1\}$}. \end{eqnarray*} The codes $U_\word{x}$ for $\word{x} \in \{0,1\}^\ell$ are called the {\em constituent codes} of the construction. 
\end{definition} In other words, an iterated $\left(U\mid U+V\right)$-code of depth $1$ is nothing but a standard $\left(U\mid U+V\right)$-code and an iterated $\left(U\mid U+V\right)$-code of depth $2$ is a $\left(U\mid U+V\right)$-code where $U$ and $V$ are themselves $\left(U\mid U+V\right)$-codes. \noindent{\bf Graphical representation of an iterated $\left(U\mid U+V\right)$ code.} Iterated $\left(U\mid U+V\right)$-codes can be represented by complete binary trees in which each node has exactly two children except the leaves. A $\left(U\mid U+V\right)$-code is represented by a node with two children, the left child representing the $U$ code and the right child representing the $V$ code. The simplest case is given in Figure \ref{fig:uv}. Another example is given in Figure \ref{fig:example} and represents an iterated $\left(U\mid U+V\right)$-code $U_\epsilon$ of depth $3$ with a binary tree of depth $3$ whose leaves are the $8$ constituent codes of this construction. \begin{figure}[h!] \caption{Graphical representation of a $\left(U\mid U+V\right)$-code. \label{fig:uv}} \centering \includegraphics[width=4cm]{uv.pdf} \end{figure} \begin{figure}[h!] \caption{Example of an iterated $\left(U\mid U+V\right)$ code of depth $3$. \label{fig:example}} \centering \includegraphics[width=8cm]{example.pdf} \end{figure} \begin{remark} Standard polar codes (i.e. the ones that were constructed by Ar{\i}kan in \cite{A09}) are clearly a special case of the iterated $\left(U\mid U+V\right)$ construction. Indeed such a polar code of length $2^\ell$ can be viewed as an iterated $\left(U\mid U+V\right)$-code of depth $\ell$ where the set $\left\{U_\word{x};\word{x} \in \{0,1\}^\ell\right\}$ of constituent codes are just codes of length $1$. In other words, standard polar codes correspond to binary trees where all leaves are just single bits.
\end{remark} \noindent{\bf Recursive soft decoding of an iterated $\left(U\mid U+V\right)$-code.} As explained in the introduction, our approach is to use the same decoding strategy as for Ar{\i}kan polar codes (that is, his successive cancellation decoder), but now with leaves that are codes much longer than single symbols. This will have the effect of lowering rather significantly the probability of error after decoding when compared to standard polar codes. It will be helpful to change slightly the way the successive cancellation decoder is generally explained. Indeed this decoder can be viewed as an iterated decoder for a $\left(U\mid U+V\right)$-code, where decoding the $\left(U\mid U+V\right)$-code consists in first decoding the $V$ code and then the $U$ code with a decoder using soft information in both cases. This decoder was actually considered before the invention of polar codes and has been used, for instance, for decoding Reed-Muller codes based on the fact that they are $\left(U\mid U+V\right)$ codes \cite{D06a,DS06}. Let us recall how such a $\left(U\mid U+V\right)$-decoder works. Suppose we transmit the codeword $\left(\mathbf u \mid \mathbf u+\mathbf v\right)\in \left(U\mid U+V\right)$ over a noisy channel and we receive the vector $\word{y} = (\mathbf y_1 \mid \mathbf y_2)$. We denote by $p(b\mid a)$ the probability of receiving $b$ when $a$ was sent and assume a memoryless channel here. We also assume that all the codeword symbols $u(i)$ and $v(i)$ are uniformly distributed. \begin{itemize} \item[\bf{Step 1.}] We first decode $V$. We compute the probabilities $\ensuremath{\textsf{prob}}(v(i)=\alpha|y_1(i),y_2(i))$ for all positions $i$ and all $\alpha$ in $\F_{q}$.
Under the assumption that we use a memoryless channel and that the $u(i)$'s and the $v(i)$'s are uniformly distributed for all $i$, it is straightforward to check that this probability is given by \begin{equation} \label{eq:sum} \ensuremath{\textsf{prob}}(v(i)=\alpha|y_1(i),y_2(i)) = \sum_{\beta \in \F_{q}} p(y_1(i)|\beta) p(y_2(i)|\alpha+\beta) \end{equation} \item[\bf{Step 2.}] We use now Ar{\i}kan's successive decoding approach and assume that the $V$ decoder was correct and thus we have recovered $\word{v}$. We compute now for all $\alpha \in \F_{q}$ and all coordinates $i$ the probabilities $\ensuremath{\textsf{prob}}(u(i)=\alpha|y_1(i),y_2(i),v(i))$ by using the formula \begin{equation} \label{eq:product} \ensuremath{\textsf{prob}}(u(i)=\alpha|y_1(i),y_2(i),v(i)) = \frac{p(y_1(i) \mid \alpha) p(y_2(i) \mid \alpha + v(i))} {\sum_{\beta\in \mathbb F_q} p(y_1(i) \mid \beta) p(y_2(i) \mid \beta + v(i))} \end{equation} This can be considered as soft-information on $\word{u}$ which can be used by a soft information decoder for $U$. \end{itemize} This decoder can then be used recursively for decoding an iterated $\left(U\mid U+V\right)$-code. For instance if we denote by $U_\epsilon$ an iterated $\left(U\mid U+V\right)$-code of depth $2$ derived from the set of codes $\{U_{00},U_{01},U_{10},U_{11}\}$, the decoding works as follows (we used here the same notation as in Definition \ref{def:iterated_uv}). \begin{itemize} \item {\bf Decoder for $U_1 = \left(U_{10} \mid U_{10}+U_{11}\right)$}. We first compute the probabilities for decoding $U_{11}$, this code is decoded with a soft information decoder. Once we have recovered the $U_{11}$ part (we denote the corresponding codeword by $\word{u}_{11}$), we can compute the relevant probabilities for decoding the $U_{10}$ code. This code is also decoded with a soft information decoder and we output a codeword $\word{u}_{10}$. 
All this allows us to recover the $U_1$ codeword, denoted by $\word{u}_1$, by combining the $U_{10}$ and $U_{11}$ parts as $\word{u}_1 = (\word{u}_{10}\mid \word{u}_{10}+\word{u}_{11})$. \item {\bf Decoder for $U_0 = \left(U_{00} \mid U_{00}+U_{01}\right)$}. Once the $U_1$ codeword is recovered we can compute the probabilities for decoding the code $U_0$ and we decode this code in the same way as we decoded the code $U_1$. \end{itemize} Figure \ref{fig:order} gives the order in which we recover each codeword during the decoding process. \begin{figure}[h!] \caption{This figure summarizes in which order we recover each codeword of a $\left(U\mid U+V\right)$ code of depth $2$. Nodes in red represent codes that are decoded with a soft information decoder, nodes in black correspond to codes that are not decoded directly and whose decoding is accomplished by first recovering the two descendants of the node and then combining them to recover the codeword we are looking for at this node.\label{fig:order}} \centering \includegraphics[width=8cm]{order.pdf} \end{figure} When the constituent codes of this recursive $\left(U\mid U+V\right)$ construction are just codes of length $1$, it is readily seen that this decoding simply amounts to the successive cancellation decoder of Ar{\i}kan. We will be interested in the case where these constituent codes are longer than this. In such a case, we have to use as constituent codes ones for which we have an efficient but possibly suboptimal decoder which can make use of soft information. Reed-Solomon codes or algebraic geometry codes with the Koetter-Vardy decoder are precisely codes with this kind of property. \noindent {\bf Polarization.} The probability computations made during the $\left(U\mid U+V\right)$ decoding \eqref{eq:sum} and \eqref{eq:product} correspond in a natural way to changing the channel model for the $U$ code and for the $V$ code.
These two channels really correspond to the two channel combining models considered for polar codes. More precisely, if we consider a memoryless channel of input alphabet $\F_{q}$ and output alphabet $\mathcal{Y}$ defined by a transition matrix $W= (W(y|u))_{\substack{u \in \F_{q} \\ y \in \mathcal{Y}}}$, then the channel viewed by the $U$ decoder, respectively the $V$ decoder, is a memoryless channel with transition matrix $W^0$, respectively $W^1$, given by \begin{eqnarray*} W^0(y_1,y_2,u_2|u_1) & \stackrel{\text{def}}{=} & \frac{1}{q} W(y_1|u_1) W(y_2|u_1 \oplus u_2) \\ W^1(y_1,y_2|u_2) & \stackrel{\text{def}}{=} & \frac{1}{q} \sum_{u_1 \in \ensuremath{\mathbb{F}}_q} W(y_1|u_1) W(y_2|u_1 \oplus u_2) \end{eqnarray*} Here the $y_i$'s belong to $\mathcal{Y}$ and the $u_i$'s belong to $\F_{q}$. If we define the channel $W^x$ for $x=(x_1\dots x_n) \in \{0,1\}^n$ recursively by $$ W^{x_1 \dots x_{n-1} x_n} = \left(W^{x_1 \dots x_{n-1} } \right)^{x_n} $$ then the channel viewed by the decoder for one of the constituent codes $U_{x_1 \dots x_n}$ of an iterated $\left(U\mid U+V\right)$ code of depth $n$ (with the notation of Definition \ref{def:iterated_uv}) is nothing but the channel $W^{x_1 \dots x_n}$. The key result used for showing that polar codes attain the capacity is that these channels polarize in the following sense. \begin{theorem} [{\cite[Theorem 1]{STA09} and \cite[Theorem 4.10]{S11g}}]\label{th:polarization} Let $q$ be an arbitrary prime. Then for a discrete $q$-ary input channel $W$ of symmetric capacity \footnote{Recall that the symmetric capacity of such a channel is defined as the mutual information between a uniform input and the corresponding output of the channel, that is $C \stackrel{\text{def}}{=} \frac{1}{q}\sum_{\alpha \in \F_{q}}\sum_{y \in \mathcal{Y}} W(y|\alpha) \log_q \frac{W(y|\alpha)}{\sum_{\beta \in \F_{q}} \frac{1}{q} W(y|\beta)}$, where $\mathcal{Y}$ denotes the output alphabet of the channel.
} $C$ we have for all $0 < \beta < \frac{1}{2}$ $$ \lim_{\ell \rightarrow \infty} \frac{1}{n} \left| \left\{ i \in \{0,1\}^\ell : \Bha{W^i} \leq 2^{-n^\beta} \right\} \right| = C, $$ where $n \stackrel{\text{def}}{=} 2^\ell$. \end{theorem} Here $\Bha{W}$ denotes the Bhattacharyya parameter of $W$ which is assumed to be a memoryless channel with $q$-ary inputs and outputs in an alphabet $\mathcal{Y}$. It is given by \begin{equation} \label{eq:Bha0} \Bha{W} \stackrel{\text{def}}{=} \frac{1}{q(q-1)} \sum_{x,x' \in \ensuremath{\mathbb{F}}_q, x' \neq x} \sum_{y \in \mathcal{Y}} \sqrt{W(y|x) W(y|x')} \end{equation} Recall that this Bhattacharyya parameter quantifies the amount of noise in the channel. It is close to $0$ for channels with very low noise (i.e. channels of capacity close to $1$) whereas it is close to $1$ for very noisy channels (i.e. channels of capacity close to $0$). \section{Soft decoding of Reed-Solomon codes with the Koetter-Vardy decoding algorithm} \label{Section3} It had been a long-standing open problem to obtain an efficient soft-decision decoding algorithm for Reed-Solomon codes until Koetter and Vardy showed in \cite{KV03} how to modify appropriately the Guruswami-Sudan decoding algorithm in order to achieve this purpose. The complexity of this algorithm is polynomial and we will show here that the probability of error decreases exponentially in the codelength when the noise level is below a certain threshold. Let us first review a few basic facts about this decoding algorithm.
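As a small numerical illustration of the Bhattacharyya parameter \eqref{eq:Bha0} defined above, the following sketch (ours, purely illustrative) evaluates it for an arbitrary transition matrix and, in particular, for the $q$-ary symmetric channel that plays a central role in this section; for $q=2$ it reduces to the familiar value $2\sqrt{p(1-p)}$.

```python
import math


def bhattacharyya(W):
    """Bhattacharyya parameter Z(W) of a q-ary input DMC, per eq. (Bha0).
    W is a dict of dicts: W[x][y] = probability of receiving y given
    that x was sent."""
    xs = list(W)
    q = len(xs)
    total = 0.0
    for x in xs:
        for xp in xs:
            if x == xp:
                continue
            for y in W[x]:
                total += math.sqrt(W[x][y] * W[xp].get(y, 0.0))
    return total / (q * (q - 1))


def qsc(q, p):
    """Transition matrix of the q-ary symmetric channel with error
    probability p: the sent symbol with prob. 1-p, each other symbol
    with prob. p/(q-1)."""
    return {x: {y: (1 - p if y == x else p / (q - 1)) for y in range(q)}
            for x in range(q)}
```

A noiseless channel gives $Z(W)=0$, in line with the remark that the parameter is close to $0$ for channels of capacity close to $1$.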
\paragraph{\bf The reliability matrix.} The Koetter-Vardy decoder \cite{KV03} is based on a {\em reliability matrix} $\mat{\Pi}_{\word{y}}$ of the codeword symbols $x(1),\dots,x(n)$, computed from the knowledge of the received word $\word{y}$ and defined by $$ \mat{\Pi}_{\word{y}} = \left(\ensuremath{\textsf{prob}}(x(j)=\alpha|y(j))\right)_{\substack{\alpha \in \F_{q} \\ 1 \leq j \leq n}} $$ Recall that the $j$-th column of this matrix $\mat{\Pi}_{\word{y}}$ is denoted by $\mat{\Pi}_{\word{y}}^j$. It gives the a posteriori probabilities (APP) that the $j$-th codeword symbol is equal to $\alpha$, where $\alpha$ ranges over $\F_{q}$. We will be particularly interested in the $q$-ary symmetric channel model. The $q$-ary symmetric channel with error probability $p$, denoted by $q\hbox{-SC}_{p}$, takes a $q$-ary symbol at its input and outputs either the unchanged symbol, with probability $1-p$, or each of the other $q-1$ symbols, with probability $\tfrac{p}{q-1}$. Therefore, if the channel input symbols are uniformly distributed, the reliability matrix $\mat{\Pi}_{\mathbf y}$ for ${q\text{-SC}_p}$ is given by $$\mat{\Pi}_{\mathbf y}^j (\alpha) = \ensuremath{\textsf{prob}}\left(x(j)=\alpha\mid y(j)\right) = \left\{ \begin{array}{ll} 1-p & \hbox{ if } \alpha = y(j)\\ \frac{p}{q-1} & \hbox{ if } \alpha \neq y(j) \end{array}\right.$$ Thus, all columns of $\mat{\Pi}_{\mathbf y}$ are identical up to permutation: $$\mat{\Pi}_{\mathbf y}^i = \left(\begin{array}{c} 1-p \\ \frac{p}{q-1} \\ \vdots \\ \frac{p}{q-1} \end{array}\right) \hbox{ (up to permutation)} $$ with $i=1, \ldots, n$. This matrix is used by the Koetter-Vardy decoder to compute a multiplicity matrix that serves as the input to its soft interpolation step. When used in a $\left(U\mid U+V\right)$ construction and decoded as mentioned before, we will need to understand how the reliability matrix behaves through the $\left(U\mid U+V\right)$ decoding process. This is what we will do now.
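The reliability matrix of the $q$-ary symmetric channel just described is straightforward to build; the following sketch (ours, for illustration) returns it as a list of columns, one per received symbol, each column indexed by $\alpha\in\{0,\dots,q-1\}$.

```python
def reliability_matrix(y, q, p):
    """Reliability matrix Pi for the q-ary symmetric channel q-SC_p,
    assuming uniformly distributed input symbols: column j holds
    prob(x(j) = alpha | y(j)) for alpha = 0, ..., q-1.
    Represented column by column, matching the Pi^j notation."""
    return [[(1 - p) if alpha == yj else p / (q - 1) for alpha in range(q)]
            for yj in y]
```

As stated above, every column is the same vector $(1-p, \tfrac{p}{q-1},\dots,\tfrac{p}{q-1})^{T}$ up to permutation, and each column sums to $1$.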
\paragraph{\bf Reliability matrix for the $V$-decoder.} We denote the reliability matrix of the $V$ decoder by $(\mat{\Pi} \oplus \mat{\Pi})_{\word{y}}$ when $\mat{\Pi}_{\word{y}_1}$ and $\mat{\Pi}_{\word{y}_2}$ are the initial reliability matrices corresponding to the two halves of the received word $\word{y}=(\word{y}_1,\word{y}_2)$. From the definition of the reliability matrix and \eqref{eq:sum} we readily obtain \begin{equation} \label{eq:pi_oplus} (\mat{\Pi}\oplus \mat{\Pi})_{\mathbf y}^i (\alpha) \stackrel{\text{def}}{=} \sum_{\beta \in \mathbb F_q} \mat{\Pi}_{\mathbf y_1}^i (\beta) \cdot \mat{\Pi}_{\mathbf y_2}^i (\alpha - \beta). \end{equation} \paragraph{\bf Reliability matrix for the $U$-decoder.} Similarly, by using \eqref{eq:product} we see that the reliability matrix of the $U$ decoder, which we denote by $(\mat{\Pi} \times \mat{\Pi})_{\word{y},\word{v}}$, is given by \begin{equation} \label{eq:pi_otimes} (\mat{\Pi}\times \mat{\Pi})_{\word{y}, \word{v}}^i(\alpha) = \frac{\mat{\Pi}_{\word{y}_1}^i (\alpha) \cdot \mat{\Pi}_{\word{y}_2}^i (\alpha+v(i))}{\sum_{\beta \in \mathbb F_q} \mat{\Pi}_{\word{y}_1}^i (\beta ) \cdot \mat{\Pi}_{\word{y}_2}^i(\beta+v(i))}. \end{equation} To simplify notation we will generally drop the dependency on $\word{y}$ and $\word{v}$ and simply write $\mat{\Pi} \oplus \mat{\Pi}$ and $\mat{\Pi}\times \mat{\Pi}$. \paragraph{\bf When does the Koetter-Vardy decoding algorithm succeed?} Let us recall how the Koetter-Vardy soft decoder \cite{KV03} can be analyzed.
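For concreteness, here is a minimal sketch (ours) of the two column-wise combination rules \eqref{eq:pi_oplus} and \eqref{eq:pi_otimes}, identifying $\F_{q}$ with the integers modulo $q$ — which only models the additive structure correctly when $q$ is prime.

```python
def app_sum(pi1, pi2):
    """V-decoder column: additive convolution of the two APP vectors
    (eq. pi_oplus), i.e. the APP vector of the sum of the two symbols."""
    q = len(pi1)
    return [sum(pi1[b] * pi2[(a - b) % q] for b in range(q)) for a in range(q)]

def app_prod(pi1, pi2, v):
    """U-decoder column: normalized pointwise product after shifting the
    second half by the already-decoded symbol v (eq. pi_otimes)."""
    q = len(pi1)
    unnorm = [pi1[a] * pi2[(a + v) % q] for a in range(q)]
    total = sum(unnorm)
    return [x / total for x in unnorm]
```

Both outputs are again probability vectors; for two q-SC columns peaked at $0$, the convolution stays peaked at $0$.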
By \cite[Theorem 12]{KV03} their decoding algorithm outputs a list that contains the codeword $\mathbf c\in C$ if \begin{equation} \label{eq:success} \frac{\left\langle \mat{\Pi}, \lfloor \mathbf c\rfloor \right\rangle}{\sqrt{\left\langle \mat{\Pi}, \mat{\Pi}\right\rangle}} \geq \sqrt{k-1}+o(1) \end{equation} as the codelength $n$ tends to infinity, where $\lfloor \mathbf c\rfloor$ represents the $q\times n$ matrix whose entry in row $\alpha$ and column $i$ is $1$ if $c_i = \alpha$, and $0$ otherwise; and $\left\langle \mat{A}, \mat{B} \right\rangle$ denotes the inner product of the two $q\times n$ matrices $\mat{A}$ and $\mat{B}$, i.e. $$\left\langle \mat{A}, \mat{B} \right\rangle \stackrel{\text{def}}{=} \sum_{i=1}^q\sum_{j=1}^n a_{i,j}b_{i,j}.$$ The algorithm uses a parameter $s$ (the total number of interpolation points counted with multiplicity). The little-O $o(1)$ depends on the choice of this parameter and the parameters $n$ and $q$. We need a more precise formulation of the little-O of \eqref{eq:success} to understand that we can get arbitrarily close to the lower bound $\sqrt{k-1}$ with polynomial complexity. In order to do so, let us provide more details about the Koetter-Vardy decoding algorithm. Basically, this algorithm starts by computing, with Algorithm A of \cite[p.~2814]{KV03}, a $q \times n$ nonnegative integer matrix $\mat{M}(s)$ whose entries sum up to $s$, from the knowledge of the reliability matrix $\mat{\Pi}$ and of the aforementioned integer parameter $s$. When $s$ goes to infinity, $\mat{M}(s)$ becomes proportional to $\mat{\Pi}$.
The cost of this matrix (we will drop the dependency on $s$) $C(\mat{M})$ is defined as \begin{equation} \label{eq:cost} C(\mat{M}) \stackrel{\text{def}}{=} \frac{1}{2} \sum_{i=1}^q \sum_{j=1}^n m_{ij}(m_{ij}+1) = \frac{1}{2} \left( \left\langle \mat{M}, \mat{M} \right\rangle + \left\langle \mat{M}, \mat{1} \right\rangle \right) \end{equation} where $m_{ij}$ denotes the entry of $\mat{M}$ at row $i$ and column $j$ and $\mat{1}$ is the all-one matrix. The complexity of the Koetter-Vardy decoding algorithm is dominated by solving a system of $C(\mat{M})$ linear equations. Then, the number of codewords on the list produced by the Koetter-Vardy decoder for a given multiplicity matrix $\mat{M}$ does not exceed $$ \mathcal{L}(\mat{M}) \stackrel{\text{def}}{=} \sqrt{\frac{2C(\mat{M})}{k-1}}. $$ It is straightforward to obtain from these considerations a soft-decision list decoder whose list size does not exceed some prescribed quantity $L$. Indeed, it suffices to increase the value of $s$ in \cite[Algorithm A]{KV03} until we get a matrix $\mat{M}$ such that $$ L \leq \mathcal{L}(\mat{M}) < L+1 $$ and to use this multiplicity matrix $\mat{M}$ in the Koetter-Vardy decoding algorithm. Following the terminology of \cite{KV03}, we refer to this decoding procedure as {\em algebraic soft-decoding with list size limited to $L$}.
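The pieces above fit together as follows; the sketch below is a naive rendition (ours, not the optimized version of \cite{KV03}) of the greedy rule behind Algorithm A, together with the cost \eqref{eq:cost} and the list-size bound $\mathcal{L}(\mat{M})$. The matrix $\mat{\Pi}$ is again stored as a list of its $n$ columns.

```python
import math

def multiplicity_matrix(Pi, s):
    """Assign s interpolation points greedily: at each step, increment the
    entry (a, j) maximizing Pi[j][a] / (M[j][a] + 1).  As s grows, M becomes
    proportional to Pi."""
    n, q = len(Pi), len(Pi[0])
    M = [[0] * q for _ in range(n)]
    for _ in range(s):
        a, j = max(((x, y) for y in range(n) for x in range(q)),
                   key=lambda t: Pi[t[1]][t[0]] / (M[t[1]][t[0]] + 1))
        M[j][a] += 1
    return M

def cost(M):
    """C(M) = (1/2) sum_ij m_ij (m_ij + 1)."""
    return sum(m * (m + 1) for row in M for m in row) // 2

def list_size_bound(M, k):
    """L(M) = sqrt(2 C(M) / (k - 1))."""
    return math.sqrt(2 * cost(M) / (k - 1))
```

On a noiseless channel (one-hot columns) the $s$ points are spread evenly over the received symbols, and increasing $s$ increases both the cost and the list-size bound.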
\cite[Theorem 17]{KV03} explains that convergence to the $\sqrt{k-1}$ lower bound is at least as fast as $\OO{1/L}$. \begin{theorem}[Theorem 17, \cite{KV03}] \label{th:loose} Algebraic soft-decoding with list size limited to $L$ produces a list containing the codeword $\word{c}$ if \begin{equation} \label{eq:loose} \frac{\left\langle \mat{\Pi}, \lfloor \mathbf c\rfloor \right\rangle}{\sqrt{\left\langle \mat{\Pi}, \mat{\Pi}\right\rangle}} \geq \frac{\sqrt{k-1}}{1 - \frac{1}{L}\left( \frac{1}{R^*}+\frac{\sqrt{q}}{2\sqrt{R^*}} \right)} = \sqrt{k-1} \left( 1 + \OO{\tfrac{1}{L}} \right) \end{equation} where $R^* \stackrel{\text{def}}{=} \frac{k-1}{n}$ and the constant in $\OO{\cdot}$ depends only on $R^*$ and $q$. \end{theorem} \begin{remark} \begin{enumerate} \item This theorem shows that the size of the list required to approach the asymptotic performance does not depend (directly) on the length of the code; it may depend on the rate of the code and the cardinality of the alphabet though. \item As observed in \cite{KV03}, this theorem gives a very loose bound. The actual performance of algebraic soft-decoding with list size limited to $L$ is usually orders of magnitude better than that predicted by \eqref{eq:loose}. A somewhat better bound is given by \cite[(44) p. 2819]{KV03}, where the condition for successfully decoding $\word{c}$ is \begin{eqnarray} \frac{\left\langle \mat{\Pi}, \lfloor \mathbf c\rfloor \right\rangle}{\sqrt{\left\langle \mat{\Pi}, \mat{\Pi}\right\rangle}} & \geq & \frac{\sqrt{k-1}}{1 - \frac{1}{L}\left( \frac{1}{R^*}+\frac{\sqrt{n}}{2\sqrt{R^* \left\langle \mat{\Pi}, \mat{\Pi}\right\rangle }} \right)} \nonumber \\ & \approx & \frac{\sqrt{k-1}}{1 - \frac{1}{L}\left( \frac{1}{R^*}+\frac{1}{2\sqrt{R^*}} \right)}, \end{eqnarray} where the approximation assumes that $\left\langle \mat{\Pi}, \mat{\Pi}\right\rangle \approx n$, which holds for noise levels of practical interest.
Note that this slightly strengthens the constant in $\OO{\cdot}$ that appears in Theorem \ref{th:loose}, since it would not depend on $q$ anymore. \end{enumerate} \end{remark} \paragraph{\bf Decoding capability of the Koetter-Vardy decoder when the channel is symmetric.} The previous formula does not directly explain under which condition on the rate of the Reed-Solomon code decoding typically succeeds (in some sense this would be a ``capacity'' result for the Koetter-Vardy decoder). We will now derive such a result, which appears to be new (but see the discussion at the end of this section). It will be convenient to restrict a little bit the class of memoryless channels we will consider; this will simplify formulas a great deal. The idea underlying this restriction is to make the behavior of the quantity $\left\langle \mat{\Pi}, \lfloor \mathbf c\rfloor \right\rangle$ which appears in the condition of successful decoding \eqref{eq:success} independent of the codeword $\word{c}$ which is sent. This is readily obtained by restricting the channel to be {\em weakly symmetric}. \begin{definition}[weakly symmetric channel] \label{def:weakly_symmetric} A discrete memoryless channel $W$ with input alphabet $\mathcal{X}$ and output alphabet $\mathcal{Y}$ is said to be weakly symmetric if and only if there is a partition of the output alphabet $\mathcal{Y}= Y_1 \cup \dots \cup Y_n$ such that all the submatrices $W_i \stackrel{\text{def}}{=} (\Wc{x}{y})_{\substack{x \in \mathcal{X} \\ y \in Y_i}}$ are symmetric. A matrix is said to be symmetric if all of its rows are permutations of each other, and all of its columns are permutations of each other. \end{definition} \noindent {\em Remarks.}\\ \begin{itemize} \item Such a channel is called {\em symmetric} in \cite[p.94]{G68}.
We avoid using the same terminology as Gallager since ``symmetric channel'' is now generally used to denote a channel for which any row is a permutation of each other row and the same property also holds for the columns. \item This notion is a generalization (when the output alphabet is discrete) of what is called a binary input symmetric channel in \cite{RU08}. It also generalizes the notion of a cyclic symmetric channel in \cite{BB06}. \item It is shown in \cite[Th. 4.5.2]{G68} that for such channels a uniform distribution on the inputs maximizes the mutual information between the output and the input of the channel and therefore achieves its capacity. In this case, linear codes attain the capacity of the channel. \item This notion captures the notion of symmetry of a channel in a very broad sense. In particular the erasure channel is weakly symmetric (for many definitions of ``symmetric channels'' an erasure channel is not symmetric). \end{itemize} \begin{notation} We denote for such a channel and for a given output $y$ by $\pi_y=(\pi(\alpha))_{\alpha \in \F_{q}}$ the associated APP vector, that is $\pi(\alpha) = \ensuremath{\textsf{prob}}(x=\alpha|y)$, where we denote by $x$ the input symbol to the channel. \end{notation} To compute this APP vector we will make throughout the paper the following assumption. \begin{assumption} The input of the communication channel is assumed to be uniformly distributed over $\F_{q}$. \end{assumption} We now give the asymptotic behavior of the Koetter-Vardy decoder for a weakly symmetric channel, but before doing this we will need a few lemmas.
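Definition \ref{def:weakly_symmetric} is easy to check mechanically for small channels. The sketch below (our own helper, not from the original) tests whether each submatrix of a given output partition has mutually permuted rows and columns, and confirms the remark that the $q$-ary erasure channel is weakly symmetric: the erased output forms one class, the unerased outputs another.

```python
def is_symmetric_submatrix(sub):
    """All rows are permutations of each other, and all columns as well."""
    rows = [sorted(r) for r in sub]
    cols = [sorted(c) for c in zip(*sub)]
    return all(r == rows[0] for r in rows) and all(c == cols[0] for c in cols)

def is_weakly_symmetric(W, partition):
    """W[x][y]: transition matrix; partition: list of lists of output indices."""
    return all(
        is_symmetric_submatrix([[W[x][y] for y in Y] for x in range(len(W))])
        for Y in partition
    )

# q-ary erasure channel: outputs 0..q-1 plus an erasure symbol at index q.
q, eps = 3, 0.25
W = [[(1 - eps) if y == x else (eps if y == q else 0.0) for y in range(q + 1)]
     for x in range(q)]
```

Note that the partition matters: lumping the erasure output together with the others breaks the column-permutation property.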
\begin{lemma}\label{lem:simple_formula} Assume that the communication channel is weakly symmetric and that $x$ is the input symbol that was sent. Then, by viewing $\pi$ as a function of the random variable $y$, we have for any $x \in \F_{q}$: $$ {\mathbb{E}}_y(\pi(x)) = {\mathbb{E}}_y\left(\norm{\pi}^2\right), ~~~~\hbox{ with } \norm{\pi}^2 \stackrel{\text{def}}{=} \sum_{\alpha \in \F_{q}} \pi(\alpha)^2.$$ \end{lemma} \begin{proof} To prove this result, let us introduce some notation. Let us denote by \begin{itemize} \item $\mathcal{Y}$ the output alphabet and $ Y_1 \cup \dots \cup Y_n = \mathcal{Y}$ a partition of $\mathcal{Y}$ such that all the submatrices $W_i \stackrel{\text{def}}{=} (\Wc{x}{y})_{\substack{x \in \mathcal{X} \\ y \in Y_i}}$ are symmetric for $i=1, \ldots, n$; \item $C_i \stackrel{\text{def}}{=} \sum_{x \in \F_{q}} \Wc{x}{y}$ and $C_i^{(2)} \stackrel{\text{def}}{=} \sum_{x \in \F_{q}} \Wc{x}{y}^2$ where $y$ is arbitrary in $Y_i$ (these quantities do not depend on the element $y$ chosen in $Y_i$); \item $R_i \stackrel{\text{def}}{=} \sum_{y \in Y_i} \Wc{x}{y}$ and $R_i^{(2)} \stackrel{\text{def}}{=} \sum_{y \in Y_i} \Wc{x}{y}^2$ where $x$ is arbitrary in $\F_{q}$ (again, these quantities do not depend on the choice of $x$). \end{itemize} We first observe that, from the assumption that $x$ is uniformly distributed, \begin{equation} \label{eq:begin} \pi_y(\alpha) = \frac{\frac{1}{q} \Wc{\alpha}{y}}{\ensuremath{\textsf{prob}}(\text{receiving } y)} = \frac{\frac{1}{q} \Wc{\alpha}{y} }{ \frac{1}{q}\sum_{\beta \in \F_{q}} \Wc{\beta}{y} } = \frac{\Wc{\alpha}{y} }{\sum_{\beta \in \F_{q}} \Wc{\beta}{y} }. \end{equation} We now observe that \begin{equation} {\mathbb{E}}_y(\pi(x)) = \sum_{y \in \mathcal{Y}} \pi_y(x) \Wc{x}{y} = \sum_{y \in \mathcal{Y}} \frac{\Wc{x}{y} }{\sum_{\beta \in \F_{q}} \Wc{\beta}{y} } \Wc{x}{y} = \sum_{i=1}^n \sum_{y \in Y_i} \frac{\Wc{x}{y}^2 }{\sum_{\beta \in \F_{q}} \Wc{\beta}{y} } = \sum_{i=1}^n \frac{R_i^{(2)} }{C_i } \label{eq:pi_x} \end{equation} where the second equality is due to \eqref{eq:begin}. On the other hand \begin{eqnarray} {\mathbb{E}}_y\left(\norm{\pi}^2\right) &= & \sum_{y \in \mathcal{Y}} \sum_{\alpha \in \F_{q}} \pi_y(\alpha)^2 \Wc{x}{y} = \sum_{y \in \mathcal{Y}} \sum_{\alpha \in \F_{q}} \frac{\Wc{\alpha}{y}^2 }{\left(\sum_{\beta \in \F_{q}} \Wc{\beta}{y}\right)^2 } \Wc{x}{y} \nonumber\\ & = & \sum_{i=1}^n \sum_{y \in Y_i} \sum_{\alpha \in \F_{q}} \frac{\Wc{\alpha}{y}^2 }{\left(\sum_{\beta \in \F_{q}} \Wc{\beta}{y}\right)^2 } \Wc{x}{y} = \sum_{i=1}^n \sum_{y \in Y_i} \sum_{\alpha \in \F_{q}} \frac{\Wc{\alpha}{y}^2 }{(C_i)^{2} } \Wc{x}{y} \nonumber \\ & = & \sum_{i=1}^n \sum_{y \in Y_i} \frac{C_i^{(2)}}{(C_i)^2 } \Wc{x}{y} = \sum_{i=1}^n \frac{R_i C_i^{(2)}}{(C_i)^2 } \label{eq:pi2} \end{eqnarray} where the second equality is due to \eqref{eq:begin}. By summing all the entries (or all the squared entries) of the symmetric matrix $W_i$ either by rows or by columns, and using that all its row sums (resp. all its column sums) are equal, we obtain that \begin{equation*} \sum_{\alpha \in \F_{q}, y \in Y_i} \Wc{\alpha}{y} = |Y_i| C_i = q R_i \end{equation*} and \begin{equation*} \sum_{\alpha \in \F_{q}, y \in Y_i} \Wc{\alpha}{y}^2 = |Y_i| C_i^{(2)} = q R^{(2)}_i. \end{equation*} By using these two equalities in \eqref{eq:pi2} we obtain \begin{equation*} {\mathbb{E}}_y\left(\norm{\pi}^2\right) = \sum_{i=1}^n \frac{\frac{C_i |Y_i|}{q} \frac{R_i^{(2)} q}{|Y_i|}}{(C_i)^2 } = \sum_{i=1}^n \frac{R_i^{(2)} }{C_i }. \end{equation*} This yields the same expression as the one for ${\mathbb{E}}_y(\pi(x))$ given in \eqref{eq:pi_x}. \end{proof} As we will now show, the quantity ${\mathbb{E}}\left(\norm{\pi}^2\right)$ turns out to be the limit of the rate for which the Koetter-Vardy decoder succeeds in decoding when the alphabet gets large. For this reason, we will call this quantity the {\em Koetter-Vardy capacity} of the channel.
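The identity of Lemma \ref{lem:simple_formula} can be confirmed numerically; the sketch below (an illustration of ours) does so for the $q\hbox{-SC}_{p}$, computing both expectations directly from the transition probabilities under uniform inputs. For this channel a direct calculation gives ${\mathbb{E}}(\norm{\pi}^2) = (1-p)^2 + \frac{p^2}{q-1}$.

```python
def qsc(q, p):
    """Transition matrix W[x][y] of the q-ary symmetric channel q-SC_p."""
    return [[1 - p if y == x else p / (q - 1) for y in range(q)]
            for x in range(q)]

def app_vector(W, y):
    """pi_y(alpha) = W(y|alpha) / sum_beta W(y|beta) (uniform inputs)."""
    q = len(W)
    total = sum(W[b][y] for b in range(q))
    return [W[a][y] / total for a in range(q)]

def expectations(W, x):
    """E_y[pi(x)] and E_y[||pi||^2], given that symbol x was sent."""
    ys = range(len(W[0]))
    e_pix = sum(W[x][y] * app_vector(W, y)[x] for y in ys)
    e_norm2 = sum(W[x][y] * sum(t * t for t in app_vector(W, y)) for y in ys)
    return e_pix, e_norm2
```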
\begin{definition}[Koetter-Vardy capacity] Consider a weakly symmetric channel and denote by $\pi$ the associated probability vector. The Koetter-Vardy capacity of this channel, which we denote by $C_{\text{KV}}$, is defined by $$ C_{\text{KV}} \stackrel{\text{def}}{=} {\mathbb{E}}(\norm{\pi}^2). $$ \end{definition} To prove that this quantity captures the rate at which the Koetter-Vardy decoder is successful (at least for large lengths and therefore large field sizes), let us first prove concentration results around the expectation for the numerator and the denominator appearing in the left-hand side of \eqref{eq:success}. \begin{lemma}\label{lem:concentration} Let $\epsilon > 0$ and $\mu \stackrel{\text{def}}{=} {\mathbb{E}}( \norm{\pi}^2)$. We have \begin{eqnarray} \ensuremath{\textsf{prob}}\left( \left\langle \mat{\Pi}, \lfloor \mathbf 0 \rfloor \right\rangle \leq (1-\epsilon) \mu n \right) &\leq & e^{-2n \mu^2\epsilon^2}\label{eq:concentration_num}\\ \ensuremath{\textsf{prob}}\left( \left\langle \mat{\Pi}, \mat{\Pi} \right\rangle \geq (1+\epsilon) n \mu \right) &\leq& e^{-2n \mu^2\epsilon^2}\label{eq:concentration_den} \end{eqnarray} \end{lemma} \begin{proof} Let us first prove \eqref{eq:concentration_den}. We can write the left-hand side as a sum of $n$ i.i.d. random variables $$ \left\langle \mat{\Pi}, \mat{\Pi}\right\rangle = \sum_{j=1}^n X_j, $$ where $X_j \stackrel{\text{def}}{=} \norm{\mat{\Pi}^j}^2$. Note that (i) ${\mathbb{E}} \left(X_j\right) = {\mathbb{E}}(\norm{\pi}^2)$, (ii) $0 \leq X_j \leq 1$. By using Hoeffding's inequality we obtain that for any $\epsilon>0$ we have \begin{equation} \ensuremath{\textsf{prob}}\left( \sum_{j=1}^n X_j \geq n \mu (1+\epsilon) \right) \leq e^{-2n \mu^2\epsilon^2}. \end{equation} Now \eqref{eq:concentration_num} can be dealt with in a similar way by writing $$ \left\langle \mat{\Pi}, \lfloor \mat{0} \rfloor \right\rangle = \sum_{j=1}^n Y_j $$ where $Y_j \stackrel{\text{def}}{=} \mat{\Pi}^j(0)$.
The channel is assumed to be weakly symmetric and we can therefore use Lemma \ref{lem:simple_formula}, from which we deduce that ${\mathbb{E}} (Y_j ) = {\mathbb{E}}( \norm{\pi}^2) = \mu$. We also have $0 \leq Y_j \leq 1$ and by applying Hoeffding's inequality we obtain that for any $\epsilon>0$ we have \begin{equation} \ensuremath{\textsf{prob}}\left( \sum_{j=1}^n Y_j \leq n \mu (1-\epsilon) \right) \leq e^{-2n \mu^2\epsilon^2}. \end{equation} \end{proof} This result can be used to derive a rather tight upper bound on the probability of error of the Koetter-Vardy decoder. \begin{theorem}\label{th:exponential} Consider a weakly symmetric $q$-ary input channel of Koetter-Vardy capacity $C_{\text{KV}}$. Consider a Reed-Solomon code over $\F_{q}$ of length $n$ and dimension $k$ such that its rate $R= \frac{k}{n}$ satisfies $R < C_{\text{KV}}$. Let \begin{eqnarray*} \delta & \stackrel{\text{def}}{=} & \frac{C_{\text{KV}} - R}{R} \\ R^* & \stackrel{\text{def}}{=} & \frac{k-1}{n} \\ L & \stackrel{\text{def}}{=} & \left\lceil \frac{3 \left( \frac{1}{R^*}+\frac{\sqrt{q}}{2\sqrt{R^*}} \right)(1+\tfrac{\delta}{3})} {\delta} \right\rceil \end{eqnarray*} The probability that the Koetter-Vardy decoder with list size bounded by $L$ does not output in its list the right codeword is upper-bounded by $\OO{e^{-K \delta^2 n}}$ for some constant $K$. \end{theorem} \begin{proof} Without loss of generality we can assume that the all-zero codeword $\mat{0}$ was sent. From Theorem \ref{th:loose}, we know that the Koetter-Vardy decoder succeeds if the following condition is met $$ \frac{\left\langle \mat{\Pi}, \lfloor \mathbf 0 \rfloor \right\rangle}{\sqrt{\left\langle \mat{\Pi}, \mat{\Pi}\right\rangle}} \geq \frac{\sqrt{k-1}}{1 - \frac{1}{L}\left( \frac{1}{R^*}+\frac{\sqrt{q}}{2\sqrt{R^*}} \right)}.
$$ Notice that the right-hand side satisfies \begin{equation} \label{eq:lower-bound-rhs} \frac{\sqrt{k-1}}{1 - \frac{1}{L}\left( \frac{1}{R^*}+\frac{\sqrt{q}}{2\sqrt{R^*}} \right) } \leq \frac{\sqrt{k-1}}{ 1 - \frac{\delta \left( \frac{1}{R^*}+\frac{\sqrt{q}}{2\sqrt{R^*}}\right)}{3 \left( \frac{1}{R^*}+\frac{\sqrt{q}}{2\sqrt{R^*}} \right)\left(1+\tfrac{\delta}{3}\right)}} = \frac{\sqrt{k-1}}{1- \frac{\delta}{3+\delta}} = \sqrt{k-1}\left(1+\tfrac{\delta}{3}\right). \end{equation} Let $\epsilon$ be a positive constant that we are going to choose afterward. Define the events $\mathcal{E}_1$ and $\mathcal{E}_2$ by \begin{eqnarray*} \mathcal{E}_1 & \stackrel{\text{def}}{=} & \{\mat{\Pi}: \left\langle \mat{\Pi}, \mat{\Pi}\right\rangle \leq n C_{\text{KV}} (1+\epsilon)\}\\ \mathcal{E}_2 & \stackrel{\text{def}}{=} & \{\mat{\Pi}: \left\langle \mat{\Pi}, \lfloor \mat{0} \rfloor \right\rangle \geq n C_{\text{KV}} (1-\epsilon)\}. \end{eqnarray*} Note that by Lemma \ref{lem:concentration} the events $\mathcal{E}_1$ and $\mathcal{E}_2$ both have probability $\geq 1 - \epsilon'$ where $\epsilon' \stackrel{\text{def}}{=} e^{-2n C_{\text{KV}}^2\epsilon^2}$. Thus, the probability that $\mathcal{E}_1$ and $\mathcal{E}_2$ both occur is $$\ensuremath{\textsf{prob}}(\mathcal E_1 \cap \mathcal E_2) = \ensuremath{\textsf{prob}}(\mathcal E_1) + \ensuremath{\textsf{prob}}(\mathcal E_2) - \ensuremath{\textsf{prob}}(\mathcal E_1 \cup \mathcal E_2) \geq 1 - \epsilon' + 1 - \epsilon' -1 = 1-2\epsilon'.$$ When $\mathcal{E}_1$ and $\mathcal{E}_2$ both hold, we have \begin{equation}\label{eq:final2} \frac{\left\langle \mat{\Pi}, \lfloor \mathbf 0 \rfloor \right\rangle}{\sqrt{ \left\langle \mat{\Pi}, \mat{\Pi} \right\rangle}} \geq \frac{1-\epsilon}{\sqrt{1+\epsilon}} \sqrt{ C_{\text{KV}} n}. \end{equation} A straightforward computation shows that for any $x>0$ we have $$ \frac{1-x}{\sqrt{1+x}} \geq 1-\frac{3}{2}x.
$$ Therefore for $\epsilon >0$ we have, in the aforementioned case, \begin{equation*} \frac{\left\langle \mat{\Pi}, \lfloor \mathbf 0 \rfloor \right\rangle} {\sqrt{\left\langle \mat{\Pi}, \mat{\Pi} \right\rangle}} \geq \left(1- \tfrac{3}{2} \epsilon\right) \sqrt{C_{\text{KV}} n} = \left(1- \tfrac{3}{2} \epsilon\right) \sqrt{(1+\delta)R n} = \left(1- \tfrac{3}{2} \epsilon\right) \sqrt{k (1+ \delta) }. \end{equation*} Let us now choose $\epsilon$ such that \begin{equation} \label{eq:choice-epsilon} \left(1-\tfrac{3}{2}\epsilon \right)\sqrt{1+\delta} = 1 + \tfrac{\delta}{3}. \end{equation} Note that $\epsilon = \Theta(\delta)$. This choice implies that \begin{equation*} \frac{\left\langle \mat{\Pi}, \lfloor \mathbf 0 \rfloor \right\rangle} {\sqrt{\left\langle \mat{\Pi}, \mat{\Pi} \right\rangle}} \geq \sqrt{k} \left(1+\tfrac{\delta}{3}\right) \geq \sqrt{k-1} \left(1+\tfrac{\delta}{3}\right) \geq \frac{\sqrt{k-1}}{1 - \frac{1}{L}\left( \frac{1}{R^*}+\frac{\sqrt{q}}{2\sqrt{R^*}} \right) } \end{equation*} where we used in the last inequality the bound given in \eqref{eq:lower-bound-rhs}. In other words, the Koetter-Vardy decoder outputs the codeword $\mat{0}$ in its list. The probability that this does not happen is at most $2e^{-2n C_{\text{KV}}^2\epsilon^2}=e^{-n \Theta(\delta^2)}$. \end{proof} An immediate corollary of this theorem is the following result, which gives a (tight) lower bound on the error-correction capacity of the Koetter-Vardy decoding algorithm over a discrete memoryless channel. \begin{corollary}\label{cor:KVP} Let $(\code{C}_n)_{n \geq 1}$ be an infinite family of Reed-Solomon codes of rate $\leq R$. Denote by $q_n$ the alphabet size of $\code{C}_n$, which is assumed to be a nondecreasing sequence that goes to infinity with $n$. Consider an infinite family of $q_n$-ary weakly symmetric channels with associated APP vectors $\pi_n$ such that ${\mathbb{E}}\left( \norm{\pi_n}^2 \right)$ has a limit as $n$ tends to infinity.
Denote by $C_{\text{KV}}^\infty$ the asymptotic Koetter-Vardy capacity of these channels, i.e. $$ C_{\text{KV}}^\infty \stackrel{\text{def}}{=} \lim_{n \rightarrow \infty} {\mathbb{E}}\left( \norm{\pi_n}^2 \right). $$ This infinite family of codes can be decoded correctly by the Koetter-Vardy decoding algorithm with probability $1-o(1)$ as $n$ tends to infinity as soon as there exists $\epsilon >0$ such that $$ R \leq C_{\text{KV}}^\infty -\epsilon. $$ \end{corollary} \begin{remark} \label{Rem-UV-1} Let us observe that for the $q\hbox{-SC}_{p}$ we have $$ {\mathbb{E}}\left( \norm{\pi}^2 \right) = (1-p)^2 +(q-1)\frac{p^2}{(q-1)^2} = (1-p)^2 + \mathcal O\left(\frac{1}{q}\right). $$ By letting $q$ go to infinity, we recover in this way the performance of the Guruswami-Sudan algorithm, which works as soon as $R < (1-p)^2$. \end{remark} \paragraph{\bf Link with the results presented in \cite{KV03} and \cite{KV03a}.} In \cite[Sec. V.B eq. (32)]{KV03} an arbitrarily small upper bound on the error probability $P_e$ is given: it is explained there that $P_e \leq \epsilon$ as soon as the rate $R$ and the length $n$ of the Reed-Solomon code satisfy $\sqrt{R} \leq {\mathbb{E}}(\mathcal{Z}^*) - \frac{1}{\sqrt{\epsilon n}}$ (where the expectation is taken with respect to the {\em a posteriori} probability distribution of the codeword). Here $\mathcal{Z}^*$ is some function of the multiplicity matrix, which itself depends on the received word. This is not a bound of the same form as the one given in Theorem \ref{th:exponential}, whose upper bound on the error probability only depends on some well-defined quantities which govern the complexity of the algorithm (such as the size $q$ of the field over which the Reed-Solomon code is defined and a bound on the list size) and the Koetter-Vardy capacity of the channel. However, many more details are given in Section 9 of the preprint version \cite{KV03a} of \cite{KV03}.
For instance, the proof of Theorem 27 in \cite[Sec. 9]{KV03a} implicitly contains an upper bound on the error probability of decoding a Reed-Solomon code with the Koetter-Vardy decoder which goes to zero polynomially fast in the length as long as the rate is less than $C \stackrel{\text{def}}{=} \frac{\trace \left(W (\diag P_{Y})^{-1} W^T \right)}{q^2}$, where $W$ is the transition probability matrix of the channel and $\diag P_Y$ is the $|\mathcal{Y}| \times |\mathcal{Y}|$ matrix which is zero except on the diagonal, where the diagonal elements give the probability distribution of the output of the channel when the input is uniformly distributed. It is readily verified that in the case of a weakly symmetric channel $C$ is nothing but the Koetter-Vardy capacity of the channel defined here. $C$ can be viewed as a more general definition of the ``capacity'' of a channel adapted to the Koetter-Vardy decoding algorithm. However, it should be said that ``error probability'' in \cite{KV03,KV03a} should be understood here as ``average probability of error'', where the average is taken over the set of codewords of the code. This average may vary wildly among the codewords in the case of a non-symmetric channel. In order to avoid this, we have chosen a different route here and have assumed some weak form of symmetry for the channel, which ensures that the probability of error does not depend on the codeword which is sent. The authors of \cite{KV03a} use a second moment method to bound the error probability; this can only give polynomial upper bounds on the error probability. This is why we have also used a slightly different route in Theorem \ref{th:exponential} to obtain stronger (i.e. exponentially small) upper bounds on the error probability. \section{Algebraic soft-decision decoding of AG codes} \label{SectionAG} The problem with Reed-Solomon codes is that their length is limited by the alphabet size.
To overcome this limitation it is possible to proceed as in \cite{KV03a} and use instead Algebraic-Geometric codes (AG codes for short), which can also be decoded by an extension of the Koetter-Vardy algorithm and which have roughly the same error-correction capacity as Reed-Solomon codes under this decoding strategy. The extension of this decoding algorithm to AG codes is sketched in Section \ref{sec:KV_AG}. Let us first recall how these codes are defined. An AG code is constructed from a triple $(\mathcal{X}, \P, mQ)$ where: \begin{itemize} \item $\mathcal{X}$ denotes an algebraic curve over a finite field $\F_{q}$ (we refer to \cite{S93a} for more information about algebraic geometry codes); \item $\mathcal P=\left\{ P_1, \ldots, P_n\right\}$ denotes a set of $n$ distinct points of $\mathcal{X}$ with coordinates in $\F_{q}$; \item $mQ$ is a divisor of the curve; here $Q$ denotes another point in $\mathcal{X}$ with coordinates in $\F_{q}$ which is not in $\mathcal P$, and $m$ is a nonnegative integer. \end{itemize} We define $\mathcal L(mQ)$ as the vector space of rational functions on $\mathcal{X}$ that have no pole outside $Q$ and whose pole order at $Q$ is at most $m$. Then, the \emph{algebraic geometry} code associated to the above triple, denoted by $\C{X}{P}{mQ}$, is the image of $\mathcal L(mQ)$ under the evaluation map $\mathrm{ev}_{\mathcal P}: \mathcal L(mQ) \longrightarrow \mathbb F_q^n$ defined by $\mathrm{ev}_{\mathcal P}(f) = \left(f(P_1), \ldots , f(P_n) \right)$, i.e. $$\C{X}{P}{mQ} \stackrel{\text{def}}{=} \left\{ \mathrm{ev}_{\mathcal P}(f)=\left(f(P_1), \ldots, f(P_n) \right) \mid f\in \mathcal L(mQ)\right\}.$$ Since the evaluation map is linear, the code $\C{X}{P}{mQ}$ is a linear code of length $n$ over $\F_{q}$ and dimension $k=\dim(\mathcal L(mQ))$. This dimension can be lower bounded by $k \geq m-g+1$, where $g$ is the genus of the curve.
Recall that this quantity is defined by $$ g \stackrel{\text{def}}{=} \max_{m \geq 0} \{ m - \dim \mathcal{L} (mQ) \} +1. $$ Moreover the minimum distance $d$ of this code satisfies $d \geq n - m$. Reed-Solomon codes are a particular case of the family of AG codes and correspond to the case where $\mathcal{X}$ is the affine line over $\F_{q}$, $\P$ is a set of $n$ distinct elements of $\F_{q}$ and $\mathcal{L}(mQ)$ is the vector space of polynomials of degree at most $k-1$ with coefficients in $\F_{q}$. Recall that it is possible to obtain for any designed rate $R=\tfrac{k}{n}$ and any square prime power $q$ an infinite family of AG codes over $\F_{q}$ of rate $ \geq R$ of increasing length $n$ and minimum distance $d$ meeting ``asymptotically'' the MDS bound as $q$ goes to infinity: $$ \frac{d}{n} \geq (1-R) - O\left(\frac{1}{\sqrt{q}}\right).$$ This follows directly from the two aforementioned lower bounds $k \geq m-g+1$ and $d \geq n-m$ and the well-known result of Tsfasman, Vl{\u{a}}duts and Zink \cite{TVZ82}. \begin{theorem}[\cite{TVZ82}] \label{th:TVZ} For any number $R \in [0,1]$ and any square prime power $q$ there exists an infinite family of AG codes over $\F_{q}$ of rate $ \geq R$ of increasing length $n$ such that the normalized genus $\gamma \stackrel{\text{def}}{=} \frac{g}{n}$ of the underlying curve satisfies $$ \gamma \leq \frac{1}{\sqrt{q}-1}. $$ \end{theorem} We will call such codes {\em Tsfasman-Vl{\u{a}}duts-Zink} AG codes in what follows. As is done in \cite{KV03a}, it will be helpful to assume that $2g-1 \leq m < n$. This implies among other things that the dimension of the code is given by $k=m-g+1$. We will make this assumption from now on. As in \cite{KV03} it is possible to obtain a soft-decision list decoder with a list whose size does not exceed some prescribed quantity $L$.
Similarly to the Reed-Solomon case considered in \cite{KV03}, it suffices to increase the value of $s$ in \cite[Algorithm A]{KV03} until we get a matrix $\mat{M}$ such that $L \leq L_m(\mat{M}) < L+1$, where $L_m(\mat{M})$ is a bound on the size of the list of codewords output by the algorithm, given in Lemma \ref{AG-1}, and then to use this matrix $\mat{M}$ in the Koetter-Vardy decoding algorithm. The following result is similar to \cite[Th. 17]{KV03}. \begin{theorem}\label{th:listsize_AG} Algebraic soft-decoding for AG codes with list size limited to $L$ produces a list that contains a codeword $\mathbf c\in \C{X}{P}{mQ}$ if \begin{equation} \label{eq:listsize_AG} \frac{\left\langle \mat{\Pi}, \lfloor \mathbf c\rfloor\right\rangle}{\sqrt{\left\langle \mat{\Pi}, \mat{\Pi}\right\rangle}} \geq \sqrt{m} \frac{1+ \frac{\tilde{\gamma}+\sqrt{2 \tilde{\gamma}}}{L \sqrt{1-\tfrac{2\tilde{\gamma}}{L}\left( 1+\tfrac{2}{L}\right)}}}{1 - \frac{1}{L \sqrt{1-\tfrac{2 \tilde{\gamma}}{L}\left( 1+\tfrac{2}{L}\right)}} \left( \frac{\sqrt{q }}{2 \sqrt{\tilde{R}}} + \frac{1}{\tilde{R}} \right)} = \sqrt{m}\left(1 + \OO{\frac{1}{L}} \right) \end{equation} where $\tilde{R}=\frac{m}{n}$, $\tilde{\gamma} = \frac{g}{m}$ and the constant in $\mathcal O(\cdot)$ depends only on $\tilde{R}$, $\tilde{\gamma}$ and $q$. \end{theorem} The proof of this theorem can be found in Section \ref{sec:KV_AG} of the appendix. It heavily relies on results proved in the preprint version \cite{KV03a} of \cite{KV03}. \begin{theorem}\label{th:exponentialAG} Consider a weakly symmetric $q$-ary input channel of Koetter-Vardy capacity $C_{\text{KV}}$, where $q$ is a square prime power. Consider a Tsfasman-Vl{\u{a}}duts-Zink AG code over $\F_{q}$ of length $n$ and dimension $k$ such that its rate $R= \frac{k}{n}$ satisfies $R < C_{\text{KV}} - \gamma$ where $\gamma \stackrel{\text{def}}{=} \frac{1}{\sqrt{q}-1}$.
Let \begin{eqnarray*} \delta & \stackrel{\text{def}}{=} & \frac{C_{\text{KV}} - R - \gamma}{R} \\ \tilde{R} & \stackrel{\text{def}}{=} & \frac{m}{n} \\ \tilde{\gamma} & \stackrel{\text{def}}{=} & \frac{g}{m}\\ f(\ell) & \stackrel{\text{def}}{=} & \frac{1+ \frac{\tilde{\gamma}+\sqrt{2 \tilde{\gamma}}}{\ell \sqrt{1-\tfrac{2\tilde{\gamma}}{\ell}\left( 1+\tfrac{2}{\ell}\right)}}}{1 - \frac{1}{\ell \sqrt{1-\tfrac{2 \tilde{\gamma}}{\ell}\left( 1+\tfrac{2}{\ell}\right)}} \left( \frac{\sqrt{q }}{2 \sqrt{\tilde{R}}} + \frac{1}{\tilde{R}} \right)} \\ L & \stackrel{\text{def}}{=} & f^{-1}\left(1+\frac{\delta}{3} \right) \end{eqnarray*} The probability that the Koetter-Vardy decoder with list size bounded by $L$ does not output in its list the right codeword is upper-bounded by $$\OO{e^{-K \delta^2 n}} $$ for some constant $K$. Moreover $L = \Th{1/\delta}$ as $\delta$ tends to zero. \end{theorem} \begin{proof} The proof follows word for word the proof of Theorem \ref{th:exponential}, with the only difference that $k-1$ is replaced by $m=k+g-1$. The only new ingredient is that we use Theorem \ref{th:listsize_AG} instead of \eqref{eq:loose}, which explains the new form chosen for the list size $L$. The last part, namely that $L = \Th{1/\delta}$, is a simple consequence of the fact that $f(L) = 1 + \Th{\frac{1}{L} }$ as $L$ tends to infinity. \end{proof} \section{Correcting errors beyond the Guruswami-Sudan bound} \label{sec:iteratedUV} The purpose of this section is to show that the $\left(U\mid U+V\right)$ construction significantly improves the noise level that the Koetter-Vardy decoder is able to tolerate. To be more specific, consider the $q$-ary symmetric channel. The asymptotic Koetter-Vardy capacity of a family of $q$-ary symmetric channels of crossover probability $p$ is equal to $(1-p)^2$. It turns out that this is also the maximal rate that the Guruswami-Sudan decoder is able to sustain at crossover probability $p$ when the alphabet and the length go to infinity.
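In terms of tolerable crossover probability, the threshold $(1-p)^2$ means that a rate-$R$ code is decodable up to $p = 1 - \sqrt{R}$, to be compared with decoding up to half the minimum distance at $p = (1-R)/2$; the short sketch below (our own comparison, not from the text) checks that the former always wins.

```python
import math

def kv_max_crossover(R):
    """Largest p with R < (1 - p)^2: the Guruswami-Sudan radius 1 - sqrt(R)."""
    return 1.0 - math.sqrt(R)

def half_distance_max_crossover(R):
    """Unique decoding up to half the relative MDS distance: (1 - R) / 2."""
    return (1.0 - R) / 2.0

# Since (1 - R)/2 = (1 - sqrt(R))(1 + sqrt(R))/2 and (1 + sqrt(R))/2 < 1,
# the soft/list-decoding threshold is strictly larger for every 0 < R < 1.
```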
We will prove here that the $\left(U\mid U+V\right)$ construction with Reed-Solomon components already performs a bit better than $(1-p)^2$ when the rate is small enough. By using iterated $\left(U\mid U+V\right)$ constructions we will be able to improve the performance rather significantly, even for a moderate number of levels. Our analysis of the Koetter-Vardy decoding is done for weakly symmetric channels. When we want to analyze a $\left(U\mid U+V\right)$ code based on Reed-Solomon codes used over a channel $W$ it will be helpful that the channels $W^0$ and $W^1$ viewed by the decoder of $U$ and $V$ respectively are also weakly symmetric. Simple examples show that this is not necessarily the case. However, a slight restriction of the notion of weakly symmetric channel considered in \cite{BB06} does the job. It consists in the notion of a cyclic-symmetric channel whose definition is given below. \begin{definition}[cyclic-symmetric channel] \label{def:strictly_cyclic_symmetric} For a vector $\word{y}=(y_i)_{i \in \F_{q}}$ with coordinates indexed by the finite field $\F_{q}$, we denote by $\word{y}^{+g}$ the vector $\word{y}^{+g}=(y_{i+g})_{i \in \F_{q}}$, by $n(\word{y})$ the number of $g$'s in $\F_{q}$ such that $\word{y}^{+g} = \word{y}$ and by $\word{y}^*$ the set $\{\word{y}^{+g}, g \in \F_{q}\}$. A $q$-ary input channel is a cyclic-symmetric channel if and only if there exists a probability function $Q$ defined over the set of possible values of $\pi^*$ such that for any $i \in \F_{q}$ we have $$ \ensuremath{\textsf{prob}}(\pi = \word{y}|x=i) = y_i n(\word{y}) Q(\word{y}^*). $$ \end{definition} The point about this notion is that $W^0$ and $W^1$ stay cyclic-symmetric when $W$ is cyclic-symmetric and that a cyclic-symmetric channel is also weakly symmetric. This will allow us to analyze the asymptotic error correction capacity of iterated $\left(U\mid U+V\right)$ constructions. \begin{proposition}[\cite{BB06}] \label{pr:csc} Let $W$ be a cyclic-symmetric channel.
Then $W$ is weakly symmetric and $W^0$ and $W^1$ are also cyclic-symmetric. \end{proposition} \subsection{The $\left(U\mid U+V\right)$-construction} We study here how a $\left(U\mid U+V\right)$ code performs when $U$ and $V$ are both Reed-Solomon codes decoded with the Koetter-Vardy decoding algorithm, when the communication channel is a $q$-ary symmetric channel of error probability $p$. \begin{proposition} \label{UV-1} For any real $p$ in $[0,1]$ and real $R$ such that $$R<C^\infty_{\left(U\mid U+V\right)}(p) \stackrel{\text{def}}{=} \frac{(p^3-4p^2+4p-4)(1-p)^2}{2(p-2)},$$ there exists an infinite family of $\left(U\mid U+V\right)$-codes of rate $ \geq R$ based on Reed-Solomon codes whose alphabet size $q$ increases with the length and whose probability of error on the ${q\text{-SC}_p}$ when decoded by the iterated $\left(U\mid U+V\right)$-decoder based on the Koetter-Vardy decoding algorithm goes to $0$ with the alphabet size. \end{proposition} \begin{proof} The $(U_0|U_0+U_1)$-construction can be decoded correctly by the Koetter-Vardy decoding algorithm if it decodes both $U_0$ and $U_1$ correctly. Let $\pi_i$ be the APP probability vector seen by the decoder for $U_i$ for $i \in \{0,1\}$. A ${q\text{-SC}_p}$ is clearly a cyclic-symmetric channel and therefore the channels viewed by the $U_0$ decoder and the $U_1$ decoder are also cyclic-symmetric by Proposition \ref{pr:csc}. A cyclic-symmetric channel is weakly symmetric and therefore by Corollary \ref{cor:KVP}, decoding succeeds with probability $1-o(1)$ when we choose the rate $R_i$ of $U_i$ to be any positive number below $\lim_{q\rightarrow \infty}{\mathbb{E}}\left( \norm{\pi_{i}}^2 \right)$ for $i \in \{0,1\}$.
In Section \ref{Appendix-UV} of the appendix it is proved in Lemmas \ref{L12} and \ref{L11} that \begin{eqnarray*} {\mathbb{E}}\left( \norm{\pi_{0}}^2 \right) &=&\frac{(p+2)(p-1)^2}{2-p} + \mathcal O\left(\frac{1}{q}\right) \\ {\mathbb{E}}\left( \norm{\pi_{1}}^2 \right) &=& (1-p)^4 + \mathcal O\left(\frac{1}{q}\right) \end{eqnarray*} Since the rate $R$ of the $\left(U\mid U+V\right)$ construction is equal to $\frac{R_0+R_1}{2}$, decoding succeeds with probability $1 - o(1)$ if $$R< \lim_{q \rightarrow \infty} \frac{ {\mathbb{E}} \left(\norm{\pi_{0}}^2 \right) + {\mathbb{E}} \left(\norm{\pi_{1}}^2 \right) }{2} = \frac{(p^3-4p^2+4p-4)(1-p)^2}{2(p-2)}. $$ \end{proof} From Figure \ref{TwiceUV} we deduce that the $\left(U\mid U+V\right)$ decoder outperforms the Reed-Solomon code decoded with the Guruswami-Sudan or Koetter-Vardy decoders as soon as $R<0.17$. \subsection{Iterated $\left(U\mid U+V\right)$-construction} Now we will study what happens over a $q$-ary symmetric channel with error probability $p$ if we apply the iterated $\left(U\mid U+V\right)$-construction with Reed-Solomon codes as constituent codes. In particular, the following result handles the cases of the iterated $\left(U\mid U+V\right)$-construction of depth $2$ and $3$. \begin{proposition} \label{UV-2-3} For any real $p$ in $[0,1]$ we define \begin{equation} \label{Depth2} C^{\infty,(2)}_{\left(U\mid U+V\right)}(p) \stackrel{\text{def}}{=} \frac{Q(p)(1-p)^2}{4(p^2- 2p + 2)(3p - 4)(2-p)^2} \end{equation} and \begin{equation} \label{Depth3} C^{\infty,(3)}_{\left(U\mid U+V\right)}(p) \stackrel{\text{def}}{=} \frac{S(p)(1-p)^2}{T_1(p) T_2(p)T_3(p) \left(3p^2 - 6p + 4\right)\left(p^2 - 2p + 2\right)^2\left(7p - 8\right)\left(3p - 4\right)^2\left(2-p\right)^4} \end{equation} Then, for any real $R$ such that $R<C^{\infty,(2)}_{\left(U\mid U+V\right)}(p)$ (resp. $R<C^{\infty,(3)}_{\left(U\mid U+V\right)}(p)$) there exists an infinite family of iterated $\left(U\mid U+V\right)$-codes of depth $2$ (resp.
of depth $3$) and rate $\geq R$ based on Reed-Solomon codes whose alphabet size $q$ increases with the length and whose probability of error with the Koetter-Vardy decoding algorithm goes to $0$ with the alphabet size, where {\tiny \begin{eqnarray*} Q(p) &\stackrel{\text{def}}{=} & 3p^{11} - 40p^{10} + 243p^9 - 890p^8 + 2192p^7 - 3800p^6 + 4702p^5 - 4148p^4 + 2624p^3 - 1248p^2 + 480p - 128,\\ T_1(p) & \stackrel{\text{def}}{=} & p^4 - 4p^3 + 6p^2 - 4p + 2,\\ T_2(p)& \stackrel{\text{def}}{=} & 7p^2 - 18p + 12, \\ T_3(p)& \stackrel{\text{def}}{=} & 5p^2 - 12p+ 8 \hbox{ and }\\ S(p) &\stackrel{\text{def}}{=} & 6615p^{35} - 269766p^{34} + 5348715p^{33} - 68697432p^{32} + 642499307p^{31} - 4663447618p^{30} + 27338551153p^{29} \\ & - & 133009675740p^{28}+ 547673160274p^{27} - 1936548054764p^{26} + 5946432348816p^{25} -15994984917120p^{24} \\ & + & 37947048851166p^{23} - 79831430926900p^{22} + 149553041935846p^{21} - 250287141028584p^{20} + 375085789739404p^{19} \\ & - & 504157479736392p^{18} + 608316727420536p^{17} - 659027903954592p^{16} + 640716590979968p^{15} - 558310438932224p^{14} \\ & + & 435164216863552p^{13} - 302519286136704p^{12} + 186871196449024p^{11} - 102093104278528p^{10} + 49062052366336p^9 \\ & - & 20617356455936p^8 + 7534906109952p^7 - 2386429566976p^6 + 655237726208p^5 - 156569829376p^4 \\ &+& 32471121920p^3 - 5628755968p^2 + 723517440p - 50331648 \end{eqnarray*} } \end{proposition} \begin{proof} The proof is given in Appendices \ref{Appendix-UV2} and \ref{Appendix-UV3} for the iterated $\left(U\mid U+V\right)$ construction of depth $2$ and $3$, respectively. \end{proof} Figure \ref{TwiceUV} summarizes the performance of these iterated $\left(U\mid U+V\right)$-constructions. From this figure we see that if we apply the iterated $\left(U\mid U+V\right)$-construction of depth $2$ we get better performance than decoding a classical Reed-Solomon code with the Guruswami-Sudan decoder for low-rate codes, specifically for $R<0.325$.
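As a sanity check (ours, not part of the paper), one can verify numerically that the closed form of Proposition \ref{UV-1} is the average of the two leading terms of Lemmas \ref{L12} and \ref{L11}, and that it crosses the Guruswami-Sudan curve $(1-p)^2$ at $p = 2-\sqrt{2}$, i.e. at rate $3-2\sqrt{2} \approx 0.17$, which corroborates the threshold quoted above:

```python
import math

def c_uv(p):
    """Closed form of Proposition UV-1 for the depth-1 (U|U+V) capacity."""
    return (p**3 - 4*p**2 + 4*p - 4) * (1 - p)**2 / (2 * (p - 2))

def average_form(p):
    """Average of the two leading terms of Lemmas L12 and L11."""
    e0 = (p + 2) * (1 - p)**2 / (2 - p)   # E||pi_0||^2 up to O(1/q)
    e1 = (1 - p)**4                        # E||pi_1||^2 up to O(1/q)
    return (e0 + e1) / 2

# The two expressions agree on a grid of crossover probabilities.
for k in range(100):
    p = k / 100
    assert abs(c_uv(p) - average_form(p)) < 1e-12

# Crossing point with the Guruswami-Sudan curve (1-p)^2:
# c_uv(p) = (1-p)^2  <=>  p (p^2 - 4p + 2) = 0, i.e. p = 2 - sqrt(2),
# corresponding to the rate (1-p)^2 = 3 - 2*sqrt(2) ~ 0.17.
p_star = 2 - math.sqrt(2)
assert abs(c_uv(p_star) - (1 - p_star)**2) < 1e-12
assert abs((1 - p_star)**2 - 0.17) < 2e-3
```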
Moreover, if we apply the iterated $\left(U\mid U+V\right)$-construction of depth $3$ we get even better results: we beat the Guruswami-Sudan decoder for codes of rate $R< 0.475$. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{U+V-Asymptotic-Depth3} \caption{Rate plotted against the crossover error probability $p$ for four code-constructions. The black line refers to standard Reed-Solomon codes decoded by the Guruswami-Sudan algorithm, the red line to the $\left(U\mid U+V\right)$-construction, the blue line to the iterated $\left(U\mid U+V\right)$-construction of depth $2$ and the green line to the iterated $\left(U\mid U+V\right)$-construction of depth $3$. \label{fig:infinite}} \label{TwiceUV} \end{figure} \subsection{Finite length capacity} \label{ss:finite_capacity} For finite alphabet size $q$, the Koetter-Vardy capacity cannot be understood as a capacity in the usual sense: no family of codes is known that could be decoded with the Koetter-Vardy decoding algorithm and whose probability of error would go to zero as the codelength goes to infinity at any rate below the Koetter-Vardy capacity. Such a statement is only approximately true for AG codes when the size of the alphabet is a square prime power and if we are willing to pay an additional term of $\frac{1}{\sqrt{q}-1}$ in the gap between the Koetter-Vardy capacity and the actual code rate. Actually, we can even be sure that for certain rates this result cannot hold, since the Koetter-Vardy capacity can be above the Shannon capacity for very noisy channels. Consider for instance the ``completely-noisy'' $q$-ary symmetric channel of crossover probability $\frac{q-1}{q}$. Its capacity is $0$ whereas its Koetter-Vardy capacity is equal to $\frac{1}{q}$.
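The completely-noisy example can be verified directly: with crossover probability $\frac{q-1}{q}$ every transition probability of the channel equals $\frac{1}{q}$, so the posterior is uniform, the mutual information vanishes and ${\mathbb{E}}\left(\norm{\pi}^2\right) = \frac{1}{q}$. A small numerical check (ours, not part of the paper):

```python
import numpy as np

def kv_capacity(W):
    """Koetter-Vardy capacity E[||pi||^2] under a uniform input."""
    q = W.shape[0]
    p_y = W.mean(axis=0)                 # prob(y)
    post = W / (q * p_y)                 # post[x, y] = prob(x | y)
    return float(np.sum(p_y * np.sum(post**2, axis=0)))

def shannon_capacity_bits(W):
    """I(X;Y) for a uniform input (achieves capacity for symmetric W)."""
    q = W.shape[0]
    p_y = W.mean(axis=0)
    h_y = -np.sum(p_y * np.log2(p_y))
    h_y_given_x = -np.mean(np.sum(W * np.log2(W), axis=1))  # assumes W > 0
    return h_y - h_y_given_x

q = 8
W = np.full((q, q), 1.0 / q)   # q-SC with crossover (q-1)/q
assert abs(shannon_capacity_bits(W)) < 1e-12    # Shannon capacity is 0
assert abs(kv_capacity(W) - 1.0 / q) < 1e-12    # Koetter-Vardy capacity is 1/q
```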
Nevertheless it is still insightful to consider $f(W,\ell) \stackrel{\text{def}}{=} \frac{1}{2^\ell} \sum_{i=0}^{2^\ell -1} C_{\text{KV}}(W^i)$ where $W^i$ is the channel viewed by the constituent $U_i$ code for an iterated-$UV$ construction of depth $\ell$ for a given noisy channel. This can be considered as the limit beyond which we cannot hope to have small probabilities of error after decoding when using Reed-Solomon constituent codes and the Koetter-Vardy decoding algorithm. We have plotted these functions in Figure \ref{fig:finite_capacity} for $q=256$ and $\ell =0$ up to $\ell=6$ and a ${q\text{-SC}_p}$. It can be seen that for $\ell=5,6$ we get rather close to the actual capacity of the channel in this way. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{U+V-256} \caption{Average Koetter-Vardy capacity plotted against the crossover error probability $p$ for seven code constructions. The noise model is a ${q\text{-SC}_p}$. \label{fig:finite_capacity}} \end{figure} \section{Attaining the capacity with an iterated $\left(U\mid U+V\right)$ construction} \label{sec:capacity} When the number of levels for which we iterate this construction tends to infinity, we attain the capacity of any $q$-ary symmetric channel, at least when the cardinality $q$ is prime. This is a straightforward consequence of the fact that polar codes attain the capacity of any $q$-ary symmetric channel. Moreover the probability of error after decoding can be made almost exponentially small with respect to the overall codelength. More precisely, the aim of this section is to prove the following results about the probability of error. \begin{theorem}\label{th:errprob1} Let $W$ be a cyclic-symmetric $q$-ary channel where $q$ is prime. Let $C$ be the capacity of this channel.
There exists $\epsilon_0 >0$ such that for any $\epsilon$ in the range $(0,\epsilon_0)$ and any $\beta$ in the range $(0,1/2)$ there exists a sequence of iterated $\left(U\mid U+V\right)$ codes with Reed-Solomon constituent codes of arbitrarily large length which have rate $\geq C - \epsilon$ when the codelength is sufficiently large and whose probability of error $P_e$ is upper bounded by $$ P_e \leq n e^{-\frac{K(\epsilon,\beta) N}{n \log N}} $$ when decoded with the iterated $\left(U\mid U+V\right)$ decoder based on decoding the constituent codes with the Koetter-Vardy decoder with list size bounded by $\OO{\frac{1}{\epsilon}}$ and where $N$ is the codelength, $n = O\left((\log \log N)^{1/\beta}\right)$, and $K(\epsilon, \beta)$ is some positive function of $\epsilon$ and $\beta$. \end{theorem} For the iterated $\left(U\mid U+V\right)$-construction with algebraic geometry codes as constituent codes we obtain an even stronger result, which is \begin{theorem}\label{th:errprob2} Let $W$ be a cyclic-symmetric $q$-ary channel where $q$ is prime. Let $C$ be the capacity of this channel. There exists $\epsilon_0 >0$ such that for any $\epsilon$ in the range $(0,\epsilon_0)$ there exists a sequence of iterated $\left(U\mid U+V\right)$ codes of arbitrarily large length, with AG constituent codes, of rate $\geq C - \epsilon$ when the codelength is sufficiently large and whose probability of error $P_e$ is upper bounded by $$ P_e \leq e^{-K(\epsilon) N} $$ when decoded with the iterated $\left(U\mid U+V\right)$ decoder based on the Koetter-Vardy algorithm with list size bounded by $\OO{\frac{1}{\epsilon}}$ and where $K$ is some positive function of $\epsilon$. \end{theorem} \noindent {\em Remarks:} \begin{itemize} \item In other words, the exponent of the error probability is, in the first case (that is, with Reed-Solomon codes), almost of the form $-\frac{K(\epsilon) N}{\log N (\log \log N)^{2+\epsilon}}$ where $\epsilon$ is an arbitrary positive constant.
This is significantly better than the concatenation of polar codes with Reed-Solomon codes (see \cite[Th. 1]{BJE10} and also \cite{MELK14} for a more practical variation of this construction) which leads to an exponent of the form $- K(\epsilon) \frac{N}{\log^{27/8} N}$. \item The second case leads to a linear exponent and is therefore optimal up to the dependency in $\epsilon$. \item Both results are heavily based on the fact that when the depth of the construction tends to infinity the channels viewed by the decoders at the leaves of the iterated construction polarize: they have either capacity close to $1$ or close to $0$. This follows from a generalization of Ar{\i}kan's polarization result on binary input channels. This requires $q$ to be prime. However, it is possible to change the $\left(U\mid U+V\right)$ structure slightly in order to have polarization for all alphabet sizes. For instance, when $q$ is a prime power, taking at each node, instead of the $\left(U\mid U+V\right)$ construction, a random $(U|U+\alpha V) = \{(\word{u}|\word{u}+\alpha \word{v}):\word{u} \in U,\;\word{v} \in V\}$ where $\alpha$ is chosen randomly in $\F_{q}^\times$ would be sufficient to ensure polarization of the corresponding channels, and our results on the probability of error of the iterated construction would carry over to this case. \item The reason why these results do not capture the dependency in $\epsilon$ of the exponent comes from the fact that only rather rough results on polarization are used (namely, we rely on Theorem \ref{th:polarization}). Capturing the dependency on $\epsilon$ really needs much more precise results on polarization, such as finite-length scaling results for polar codes. This will be discussed in the next section. \end{itemize} \par{\bf Overview of the proof of these theorems.} The proof of these theorems uses four ingredients.
\begin{enumerate} \item The first ingredient is the polarization theorem \ref{th:polarization}. It shows that when the number of levels of the recursive $\left(U\mid U+V\right)$ construction tends to infinity, the fraction of the decoders of the constituent codes that face an almost noiseless channel tends to the capacity of the original channel. Here the measure for being noisy is the Bhattacharyya parameter of the channel. \item We then show that when the Bhattacharyya parameter is close to $0$ the Koetter-Vardy capacity of the channel is close to $1$, meaning that we can use Reed-Solomon codes or AG codes of rate close to $1$ for those almost noiseless constituent channels (see Proposition \ref{pr:Bha_KV}). \item When we use Tsfasman-Vl{\u{a}}duts-Zink AG codes and if $q$ were allowed to be a square prime power, the situation would be straightforward. For the codes in our construction that face an almost noiseless channel, we use as constituent AG codes Tsfasman-Vl{\u{a}}duts-Zink AG codes of rate of the form $1 - \epsilon - \frac{1}{\sqrt{q}-1}$. This gives an exponentially small (in the length of the constituent code) probability of error for each of those constituent codes by using Theorem \ref{th:exponentialAG}. For the other codes, we just use the zero code (i.e. the code with only the all-zero codeword). Now in order to get an exponentially small probability of error, it suffices to take the number of levels to be large (but fixed!) so that the fraction of almost noiseless channels is close enough to capacity and to let the length of the constituent codes go to infinity. This gives an exponentially small probability of error when the rate is bounded away from capacity by a term of order $\frac{1}{\sqrt{q}-1}$. \item In order to get rid of this term, and also in order to be able to use Tsfasman-Vl{\u{a}}duts-Zink AG codes for the case we are interested in, namely an alphabet which is prime, we use another argument.
Instead of using a $q$-ary code over a $q$-ary input channel we will use a $q^m$-ary code over this $q$-ary input channel. In other words, we are going to group the received symbols by packets of size $m$ and view this as a channel with $q^m$-ary input symbols. This changes the Koetter-Vardy capacity of the channel. It turns out that the Koetter-Vardy capacity of this new channel is the Koetter-Vardy capacity of the original channel raised to the power $m$ (see Proposition \ref{pr:KVm}). This implies that when the Koetter-Vardy capacity was close to $1 - \epsilon$, the new Koetter-Vardy capacity is at least close to $1 - m \epsilon$ and we do not lose much in terms of capacity when moving to a higher alphabet. This allows us to use AG codes over a higher alphabet in order to get arbitrarily close to capacity while still keeping an exponentially small probability of error (we can indeed take $m$ fixed but sufficiently large here). For Reed-Solomon codes, the same trick works and allows us to use constituent codes of arbitrarily large length by making the alphabet grow with the length of the code. However in this case, we cannot take $m$ fixed anymore and this is the reason why we lose a little bit in the behavior of the error exponent. Moreover the number of levels also has to increase in this last case in order to make the Bhattacharyya parameter sufficiently small at the almost noiseless constituent codes, so that the Koetter-Vardy capacity stays sufficiently close to $1$ after grouping symbols together. \end{enumerate} \par{\bf Link between the Bhattacharyya parameter and the Koetter-Vardy capacity.} We will provide here a proposition showing that for a fixed alphabet size the Koetter-Vardy capacity of a channel $W$ is at least $1 - (q-1)\Bha{W}$.
For this purpose, it will be helpful to use an alternate form of the Bhattacharyya parameter \begin{equation} \label{eq:Bha} \Bha{W} = \sum_{y \in \mathcal{Y}} \ensuremath{\textsf{prob}}(Y=y) Z(X|Y=y) \end{equation} where $X$ is here a uniformly distributed random variable, $Y$ is the output corresponding to sending $X$ over the channel $W$ and \begin{equation} \label{eq:Z} Z(X|Y=y) \stackrel{\text{def}}{=} \frac{1}{q-1} \sum_{x,x' \in \ensuremath{\mathbb{F}}_q, x' \neq x} \sqrt{\ensuremath{\textsf{prob}}(X=x|Y=y)} \sqrt{\ensuremath{\textsf{prob}}(X=x'|Y=y)}. \end{equation} \begin{proposition}\label{pr:Bha_KV} For a symmetric channel $$C_{\text{KV}}(W) \geq 1 - (q-1)\Bha{W}.$$ \end{proposition} \begin{proof} To simplify formulas, we will write $p(x|y)$ for $\ensuremath{\textsf{prob}}(X=x|Y=y)$, $p(x)$ for $\ensuremath{\textsf{prob}}(X=x)$ and $p(y)$ for $\ensuremath{\textsf{prob}}(Y=y)$. The proposition is essentially a consequence of the well-known fact that the R\'enyi entropy, which is defined for all $\alpha >0$, $\alpha \neq 1$, by $$ H_\alpha(X) \stackrel{\text{def}}{=} \frac{1}{1-\alpha} \log_q \sum_x p(x)^\alpha $$ and $$H_1(X) = \lim_{\alpha \rightarrow 1} H_\alpha(X)$$ (which turns out to be equal to the usual Shannon entropy taken to the base $q$), is decreasing in $\alpha$. This also holds of course for the ``conditional'' R\'enyi entropy which is defined by $$ H_\alpha(X|Y=y) \stackrel{\text{def}}{=} \frac{1}{1-\alpha} \log_q \sum_x p(x|y)^\alpha. $$ Consider now a random variable $X$ which is uniformly distributed over $\F_{q}$ and let $Y$ be the corresponding output of the memoryless channel $W$.
By using the definition of the Bhattacharyya parameter given by \eqref{eq:Bha} we can write $$ \Bha{W} = \sum_{y \in \mathcal{Y}} p(y) Z(X|Y=y) $$ where $$ Z(X|Y=y) = \frac{1}{q-1} \sum_{x,x' \in \ensuremath{\mathbb{F}}_q, x' \neq x} \sqrt{p(x|y)} \sqrt{p(x'|y)} $$ We observe that we can relate this quantity to the R\'enyi entropy of order $\frac{1}{2}$ through \begin{eqnarray} H_{1/2}(X|Y=y) & = & 2 \log_q \sum_x \sqrt{p(x|y)} \nonumber \\ & = & \log_q \left( \sum_x \sqrt{p(x|y)} \sum_{x'} \sqrt{p(x'|y)} \right) \nonumber\\ & = & \log_q \left( \sum_x p(x|y) + \sum_{x,x' \in \ensuremath{\mathbb{F}}_q, x' \neq x} \sqrt{p(x|y)} \sqrt{p(x'|y)} \right) \nonumber\\ & = & \log_q \left( 1 + (q-1) Z(X|Y=y) \right) \label{eq:one} \end{eqnarray} On the other hand we know that $H_2(X|Y=y) \leq H_{1/2}(X|Y=y)$. Recall that $$ H_2(X|Y=y) = -\log_q \sum_x p(x|y)^2 $$ Using this together with \eqref{eq:one} we obtain that \begin{equation} \label{eq:two} - \log_q \sum_x p(x|y)^2 \leq \log_q \left( 1 + (q-1) Z(X|Y=y) \right) \end{equation} Let $$ S \stackrel{\text{def}}{=} 1- \sum_x p(x|y)^2 $$ Observe that \begin{equation} - \log_q \sum_x p(x|y)^2 = \log_q \left( \frac{1}{ \sum_x p(x|y)^2} \right) = \log_q \frac{1}{1-S} \geq \log_q (1+S) \label{eq:three} \end{equation} Finally by using \eqref{eq:two} together with \eqref{eq:three} we deduce that $\log_q (1+S) \leq \log_q \left( 1 + (q-1) Z(X|Y=y) \right)$ which implies that \begin{equation} 1- \sum_x p(x|y)^2 = S \leq (q-1) Z(X|Y=y) \end{equation} By averaging over all $y$'s we get \begin{equation} \sum_y p(y) \left( 1- \sum_x p(x|y)^2 \right) \leq (q-1) \sum_y p(y) Z(X|Y=y) \end{equation} This implies the proposition by noticing that \begin{eqnarray*} \sum_y p(y) \left( 1- \sum_x p(x|y)^2 \right) & = & 1 - C_{\text{KV}}\\ (q-1) \sum_y p(y) Z(X|Y=y) & = & (q-1) \Bha{W}. \end{eqnarray*} \end{proof} \par{\bf Changing the alphabet.} The problem with Reed-Solomon codes is that their length is bounded by their alphabet size. 
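Proposition \ref{pr:Bha_KV} is easy to test numerically. The sketch below (ours, not part of the paper) computes $C_{\text{KV}}(W)$ and $\Bha{W}$ from a transition matrix, using the identity $\sum_{x \neq x'} \sqrt{p(x|y) p(x'|y)} = \left(\sum_x \sqrt{p(x|y)}\right)^2 - 1$, and checks the bound on $q$-ary symmetric channels:

```python
import numpy as np

def posteriors(W):
    """p(y) and p(x|y) for a uniform input; W[x, y] = prob(y | x)."""
    q = W.shape[0]
    p_y = W.mean(axis=0)
    return p_y, W / (q * p_y)

def kv_capacity(W):
    p_y, post = posteriors(W)
    return float(np.sum(p_y * np.sum(post**2, axis=0)))

def bhattacharyya(W):
    """B(W) computed from the alternate form (eq:Bha)/(eq:Z)."""
    q = W.shape[0]
    p_y, post = posteriors(W)
    s = np.sum(np.sqrt(post), axis=0)      # sum_x sqrt(p(x|y))
    z_y = (s**2 - 1.0) / (q - 1)           # sum over pairs x != x'
    return float(np.sum(p_y * z_y))

def qsc(q, p):
    """q-ary symmetric channel with crossover probability p."""
    W = np.full((q, q), p / (q - 1))
    np.fill_diagonal(W, 1 - p)
    return W

for q in [3, 5, 16]:
    for p in [0.05, 0.2, 0.5]:
        W = qsc(q, p)
        assert kv_capacity(W) >= 1 - (q - 1) * bhattacharyya(W) - 1e-12
```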
It would be desirable to have more freedom in choosing their length. There is a way to overcome this difficulty by grouping together transmitted symbols into packets and viewing each packet as a symbol over a larger alphabet. In other words, assume that we have a memoryless communication channel $W$ with input alphabet $\F_{q}$. Instead of looking for codes defined over $\F_{q}$ we will group input symbols in packets of size $m$ and view them as symbols in the extension field $\F_{q^m}$. This will allow us to consider codes defined over $\F_{q^m}$ and gives much more freedom in choosing the length of the Reed-Solomon components (or more generally the AG components). There is one caveat to this approach: we change the channel model. In such a case the channel is $W^{\otimes m} \stackrel{\text{def}}{=} \underbrace{W \otimes W \otimes \dots \otimes W}_{m \text{ times}}$ where we define the tensor product of two channels by \begin{definition}[Tensor product of two channels] Let $W$ and $W'$ be two memoryless channels with respective input alphabets $\mathcal{X}$ and $\mathcal{X}'$ and respective output alphabets $\mathcal{Y}$ and $\mathcal{Y}'$. Their tensor product $W \otimes W'$ is a memoryless channel with input alphabet $\mathcal{X} \times \mathcal{X}'$ and output alphabet $\mathcal{Y} \times \mathcal{Y}'$ where the transition probabilities are given by $$ W \otimes W'(y,y'|x,x') = W(y|x) W'(y'|x') $$ for all $(x,x',y,y') \in \mathcal{X} \times \mathcal{X}' \times \mathcal{Y} \times \mathcal{Y}'$. \end{definition} The Koetter-Vardy capacity of this tensor product is easily related to the Koetter-Vardy capacity of the initial channel through \begin{proposition}\label{pr:KVm} $C_{\text{KV}}(W^{\otimes m}) = C_{\text{KV}}(W)^m$.
If $C_{\text{KV}}(W) = 1 - \epsilon$ then $C_{\text{KV}}(W^{\otimes m}) \geq 1 - m \epsilon.$ \end{proposition} \begin{proof} Let $\word{x} = (x_1, \dots, x_m) \in \F_{q}^m$ be the sent symbol for channel $W^{\otimes m}$ and $\word{y} = (y_1,\dots,y_m)$ be the received vector. Let $\pi^m_{\word{y}}$ be the APP probability vector after receiving $\word{y}$, that is $\pi^m_{\word{y}} = \left(\ensuremath{\textsf{prob}}(\word{x}=(\alpha_1, \dots, \alpha_m)|\word{y})\right)_{(\alpha_1, \dots, \alpha_m) \in \F_{q}^m}$. We denote the $(\alpha_1, \dots, \alpha_m)$ component of this vector by $\pi^m_{\word{y}}(\alpha_1, \dots, \alpha_m)$. Let $\pi_{y_i}=(\ensuremath{\textsf{prob}}(x_i = \alpha|y_i))_{\alpha \in \F_{q}}$ be the APP vector for the $i$-th use of the channel. We denote by $\pi_{y_i}(\alpha)$ the $\alpha$ component of this vector. Observe that \begin{equation} \pi^m_{\word{y}}(\alpha_1 \dots \alpha_m) = \pi_{y_1}(\alpha_1) \dots \pi_{y_m}(\alpha_m). \end{equation} This implies that $$ \norm{\pi^m_{\word{y}}}^2 = \norm{\pi_{y_1}}^2 \dots \norm{\pi_{y_m}}^2. $$ This together with the fact that the channel is memoryless implies that \begin{eqnarray*} C_{\text{KV}}(W^{\otimes m}) & = & {\mathbb{E}}\left( \norm{\pi^m_{\word{y}}}^2\right) \\ & = & {\mathbb{E}}\left(\norm{\pi_{y_1}}^2\right) \dots {\mathbb{E}}\left(\norm{\pi_{y_m}}^2 \right) \\ & = & C_{\text{KV}}(W)^m \end{eqnarray*} The last statement follows easily from this identity and the convexity inequality $(1-x)^m \geq 1 - mx$ which holds for $x$ in $[0,1]$ and $m \geq 1$. \end{proof} \par{\bf Proof of Theorem \ref{th:errprob1}.} We have now all ingredients at hand for proving Theorem \ref{th:errprob1}.
We use Theorem \ref{th:polarization} to claim that there exists a lower bound $\ell_0$ on the number of levels $\ell$ in a recursive $\left(U\mid U+V\right)$ construction such that \begin{equation}\label{eq:weak_polarization} \frac{1}{n} \left| \left\{ i \in \{0,1\}^\ell : \Bha{W^i} \leq 2^{-n^\beta} \right\} \right| \geq C - \epsilon/2 \end{equation} for all $\ell \geq \ell_0$ where $n \stackrel{\text{def}}{=} 2^\ell$. We call the channels that satisfy this condition the {\em good channels}. We choose our code to be a recursive $\left(U\mid U+V\right)$-code of depth $\ell$ with Reed-Solomon constituent codes that are of length $q^m$ and defined over $\F_{q^m}$. The overall length (over $\F_{q}$) of the recursive $\left(U\mid U+V\right)$ code is then \begin{equation} N \stackrel{\text{def}}{=} 2^\ell m q^m. \end{equation} The constituent codes $U^i$ that face a good channel $W^i$ are chosen as Reed-Solomon codes of dimension $k$ given by $$ k = \left\lfloor q^m \left(1 - m (q-1)2^{-n ^\beta} - \frac{\epsilon}{4} \right) \right\rfloor $$ whereas all the other codes are chosen to be zero codes. By using Proposition \ref{pr:Bha_KV} we know that $$ C_{\text{KV}}\left(W^i\right) \geq 1- (q-1)2^{-n^\beta}. $$ From this we deduce that the channel corresponding to grouping together $m$ symbols in $\F_{q}$ has a Koetter-Vardy capacity that satisfies $$C_{\text{KV}}\left[\left(W^i\right)^{\otimes m} \right] \geq 1 - m(q-1)2^{-n^\beta}.$$ Now we can invoke Theorem \ref{th:exponential} and deduce that the probability of error of the Reed-Solomon codes that face these good channels when decoding them with the Koetter-Vardy decoding algorithm with list size bounded by $\OO{\frac{1}{\epsilon}}$ is upper-bounded by a quantity of the form $e^{-K q^m \epsilon^2}$. The overall probability of error is bounded by $n e^{-K q^m \epsilon^2}$. We now choose $n$ such that it is the smallest power of two for which the inequality $$ m(q-1)2^{-n^\beta} \leq \epsilon/4 $$ holds.
This implies $n = O(\log ^{1 / \beta} m)$ as $m$ tends to infinity. Since $\frac{k}{q^m} = 1 - \epsilon/2 -o(1)$ as $m$ tends to infinity, the rate of the iterated $\left(U\mid U+V\right)$ code is of order $(C - \epsilon/2)(1- \epsilon/2-o(1)) = C - \frac{1+C}{2} \epsilon + \epsilon^2/4 + o(1) \geq C - \epsilon$ for $\epsilon$ sufficiently small and $n$ sufficiently large when $C<1$. When $C=1$ the theorem is obviously true. This together with the previous upper-bound on the probability of a decoding error implies directly our theorem since $N = n m q^m$ and $n = O(\log ^{1 / \beta} m)$ imply that $m = \frac{\log N(1+o(1))}{\log q}$ as $m$ tends to infinity. \par{\bf Proof of Theorem \ref{th:errprob2}.} Theorem \ref{th:errprob2} uses similar arguments, the only difference being that now the number of levels $\ell$ in the construction and the parameter $m$ only depend on the gap to capacity we are looking for. We fix $\beta$ to be an arbitrary constant in $(0,1/2)$ and choose $m$ to be the smallest even integer $m$ for which $\frac{1}{\sqrt{q^m}-1}$ is smaller than $\epsilon/4$ and the number of levels $\ell$ to be the smallest integer such that we have at the same time \begin{eqnarray} \frac{1}{n} \left| \left\{ i \in \{0,1\}^\ell : \Bha{W^i} \leq 2^{-n^\beta} \right\} \right| & \geq &C - \epsilon/4 \label{eq:polar1}\\ & \text{and} & \nonumber \\ m(q-1)2^{-n^\beta}& \leq &\epsilon/4 \label{eq:mne} \end{eqnarray} where $n \stackrel{\text{def}}{=} 2^\ell$. Such an $\ell$ necessarily exists by Theorem \ref{th:polarization}. We choose our code to be a recursive $\left(U\mid U+V\right)$-code of depth $\ell$ with Tsfasman-Vl{\u{a}}duts-Zink AG constituent codes that are of length $N_0$ and defined over $\F_{q^m}$. Such codes exist by the Tsfasman-Vl{\u{a}}duts-Zink construction for arbitrarily large lengths because $m$ is even. The overall length (over $\F_{q}$) of the recursive $\left(U\mid U+V\right)$ code is then \begin{equation} N \stackrel{\text{def}}{=} 2^\ell m N_0.
\end{equation} For the constituent codes $U^i$ that face a good channel $W^i$, we choose the rate of the AG code to be $\frac{k}{N_0}$ where $$ k = \left\lfloor N_0 \left(1 - m (q-1)2^{-n ^\beta} - \frac{1}{\sqrt{q^m}-1} - \frac{\epsilon}{4} \right) \right\rfloor $$ whereas all the other codes are chosen to be zero codes. The rate of the codes that face a good channel is clearly greater than or equal to a quantity of the form $1-3\epsilon/4 +o(1)$ as $N_0$ goes to infinity by using \eqref{eq:mne} and $\frac{1}{\sqrt{q^m}-1} \leq \epsilon/4$. The overall rate $R$ of the iterated $\left(U\mid U+V\right)$ code satisfies therefore $R \geq (C- \epsilon/4) (1-3\epsilon/4 +o(1)) \geq C - \frac{3C+1}{4}\epsilon + 3\epsilon^2/16 + o(1) \geq C - \epsilon $ for $\epsilon$ sufficiently small and $N_0$ sufficiently large when $C<1$. We can make the assumption $C<1$ from now on, since when $C=1$ the theorem is trivially true. On the other hand, the error probability of decoding a code $U^i$ facing a good channel $W^i$ with the Koetter-Vardy decoding algorithm with list size bounded by $\OO{\frac{1}{\epsilon}}$ is upper-bounded by a quantity of the form $e^{-K N_0 \epsilon^2}$ by using Theorem \ref{th:exponentialAG} since the rate $R_0$ of such a code satisfies \begin{eqnarray*} R_0 & \leq & 1- m (q-1)2^{-n ^\beta} - \frac{1}{\sqrt{q^m}-1} - \frac{\epsilon}{4} +o(1) \\ & \leq & C_{\text{KV}}({(W^i)}^{\otimes m}) - \frac{1}{\sqrt{q^m}-1} - \frac{\epsilon}{4} +o(1) \end{eqnarray*} by using the lower bound on the Koetter-Vardy capacity of a good channel that follows from Propositions \ref{pr:Bha_KV} and \ref{pr:KVm}. The overall probability of error is therefore bounded by $n e^{-K N_0 \epsilon^2}$. This probability is of the form announced in Theorem \ref{th:errprob2} since $m$ and $n$ are quantities that only depend on $q$ and $\epsilon$.
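Both proofs rely on Proposition \ref{pr:KVm}; as a standalone numerical check (ours, not part of the paper), the identity $C_{\text{KV}}(W^{\otimes m}) = C_{\text{KV}}(W)^m$ and the bound $C_{\text{KV}}(W^{\otimes m}) \geq 1 - m\left(1-C_{\text{KV}}(W)\right)$ can be tested by building the tensor-product channel as a Kronecker product of transition matrices:

```python
import numpy as np

def kv_capacity(W):
    """Koetter-Vardy capacity E[||pi||^2] under a uniform input."""
    q = W.shape[0]
    p_y = W.mean(axis=0)
    post = W / (q * p_y)
    return float(np.sum(p_y * np.sum(post**2, axis=0)))

def qsc(q, p):
    """q-ary symmetric channel with crossover probability p."""
    W = np.full((q, q), p / (q - 1))
    np.fill_diagonal(W, 1 - p)
    return W

W = qsc(4, 0.15)
c = kv_capacity(W)
Wm = W   # transition matrix of W^{tensor m}, built as a Kronecker product
for m in range(2, 5):
    Wm = np.kron(Wm, W)
    assert abs(kv_capacity(Wm) - c**m) < 1e-10          # Proposition pr:KVm
    assert kv_capacity(Wm) >= 1 - m * (1 - c) - 1e-10   # convexity bound
```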
\section{Conclusion} \par{\bf A variation on polar codes that is much more flexible.} We have given here a variation of polar codes that allows one to attain capacity with a polynomial-time decoding complexity in a more flexible way than standard polar codes. It consists in taking an iterated-$\left(U\mid U+V\right)$ construction based on Reed-Solomon codes or more generally AG codes. Decoding consists in computing the APP of each position in the same way as for polar codes and then decoding the constituent codes with a soft-information decoder, the Koetter-Vardy list decoder in our case. Polar codes are indeed a special case of this construction, obtained by taking constituent codes that consist of a single symbol. However, when we take longer constituent codes we benefit from the fact that we do not face a binary alternative as for polar codes, i.e. putting information or not in the symbol depending on the noise model for this symbol, but can freely choose the length (at least in the AG case) and the rate of the constituent code that faces this noise model. {\bf An exponentially small probability of error.} This allows us to control the rate and error probability in a much finer way than for standard polar codes. Indeed the failure probability of polar codes is essentially governed by the error probability of an information symbol of the polar code facing the noisiest channel (among all information symbols). In our case, this error probability can be decreased significantly by choosing a long enough code and a rate below the noise value that our decoder is able to sustain (which is more or less the Koetter-Vardy capacity of the noisy channel in our case). Furthermore, now we can also put information in channels that were not used for sending information in the polar code case.
When using Reed-Solomon codes with this approach we obtain a quasi-exponential decay of the error probability, which is significantly smaller than for the standard concatenation of an inner polar code with an outer Reed-Solomon code. When we use AG codes we even obtain an exponentially fast decay of the probability of error after decoding. The whole work raises a certain number of intriguing questions. {\bf Dependency of the error probability on the gap to capacity.} Even if the exponential decay with respect to the code length of the iterated $\left(U\mid U+V\right)$-construction is optimal, the result says nothing about the behavior of the exponent in terms of the gap to capacity. The best we can hope for is a probability of error which behaves as $e^{-K \epsilon^2 n}$ where $\epsilon$ is the gap to capacity, that is $\epsilon = C - R$, $C$ being the capacity and $R$ the code rate. We may observe that Theorem \ref{th:exponential} gives a behavior of this kind, with the caveat that $\epsilon$ is not the gap to capacity there but the gap to the Koetter-Vardy capacity. To obtain a better understanding of the behavior of this exponent, we need a much finer understanding of the speed of polarization than the one given in Theorem \ref{th:polarization}. What we really need is a result of the following form \begin{equation}\label{eq:strong_polarization} \frac{1}{n} \left| \left\{ i \in \{0,1\}^\ell : \Bha{W^i} \leq \epsilon \right\} \right| \geq C - f(\epsilon,\ell) \end{equation} which expresses the fraction of ``$\epsilon$-good'' channels in terms of the gap to capacity with sharp estimates for the ``gap'' function $f(\epsilon,\ell)$. The problem in our case is that our understanding of the speed of polarization is far from complete. Even for binary input channels, the information we have on the function $f(\epsilon,\ell)$ is only partial, as shown by \cite{HAU14,GB14,GX15,MHU16}. 
A better understanding of the speed of polarization could then be used to get a better understanding of the decay of the error probability in terms of the gap to capacity. A tantalizing issue is whether or not we get a better scaling than for polar codes. {\bf Choosing other kernels.} The iterated $\left(U\mid U+V\right)$-construction can be viewed as choosing the original polar codes of Arikan associated to the kernel $\mat{G} = \begin{pmatrix} 1 & 0 \\1 & 1 \end{pmatrix}$. In the binary case, taking larger kernels does not improve the error probability of polar codes after decoding unless the kernels are very large, as shown in \cite{K09b}; however, this is not the case for non-binary kernels. Even a ternary kernel, such as for instance the ternary ``Reed-Solomon'' kernel $\mat{G}_{\text{RS}} = \begin{pmatrix} 1 & 1 &0 \\1 & -1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$, results in a better behavior of the probability of error after decoding (see \cite{MT14}). This raises the issue of whether other kernels would allow us to obtain better results in our case. In other words, would other generalized concatenated code constructions do better in our case? Interestingly enough, it is not necessarily a Reed-Solomon kernel that gives the best results in our case. Preliminary results seem to show that it should be better to take the kernel $\mat{G} = \begin{pmatrix} 1 & 0 & 0 \\1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}$ rather than the aforementioned Reed-Solomon kernel. This last kernel corresponds to a $\left(U\mid U+V \mid U+V+W\right)$ construction, defined by $$ \left(U\mid U+V \mid U+V+W\right) = \left\{ (\mathbf u\mid\mathbf u + \mathbf v\mid \word{u} + \word{v} + \word{w}): \mathbf u \in U,\;\mathbf v \in V\text{ and } \word{w} \in W\right\}. $$ With one more level of concatenation, it outperforms the $\left(U\mid U+V\right)$ construction on the ${q\text{-SC}_p}$. 
In particular, it allows us to increase the slope at the origin of the ``infinite Koetter-Vardy capacity curve'' (when compared to the curve for one level in Figure \ref{fig:infinite}). This seems to be the key for choosing good kernels. This issue requires further study. {\bf Practical constructions.} We have explored here the theoretical behavior of this coding/decoding strategy. What is suggested by the experimental evidence shown in Subsection \ref{ss:finite_capacity} is that these codes do not only have theoretical significance: they should also yield interesting codes for practical applications. Indeed, Figure \ref{fig:finite_capacity} shows that it should be possible to get very close to the channel capacity by using only a construction of small depth, say $5$-$6$, together with constituent codes of moderate length that can be chosen to be Reed-Solomon codes (say, codes of length at most a few hundred). This raises many issues that we did not cover here, such as for instance \begin{itemize} \item choosing appropriately the code rate of each constituent code in order to maximize the overall rate with respect to a certain target error probability; \item choosing the multiplicities for each constituent code in order to attain a good overall complexity vs. performance tradeoff; \item choosing other constituent codes such as AG codes, especially in cases where the channel is an $\F_{q}$-input channel for small values of $q$. It might also be worthwhile to study the use of subfield subcodes of Reed-Solomon codes in this setting (for instance BCH codes). \end{itemize} The whole strategy leads to using Koetter-Vardy decoding for Reed-Solomon/AG codes in a regime where the rate gets either very close to $0$ or to $1$. This could be exploited to lower the complexity of generic Koetter-Vardy decoding.
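To make the constructions concrete, here is a minimal sketch of the $\left(U\mid U+V\right)$ and $\left(U\mid U+V\mid U+V+W\right)$ maps discussed above. It is not taken from the paper: plain Python over a prime field $\F_q$ (component-wise addition mod $q$), with all names our own.

```python
def u_u_plus_v(u, v, q):
    """(U | U+V) concatenation of two length-n words over the prime field F_q."""
    assert len(u) == len(v)
    return u + [(a + b) % q for a, b in zip(u, v)]

def u_uv_uvw(u, v, w, q):
    """(U | U+V | U+V+W) concatenation of three length-n words over F_q."""
    assert len(u) == len(v) == len(w)
    return (u
            + [(a + b) % q for a, b in zip(u, v)]
            + [(a + b + c) % q for a, b, c in zip(u, v, w)])

# Example over F_5: (U | U+V) maps two length-2 words to one length-4 word.
print(u_u_plus_v([1, 2], [3, 4], 5))            # → [1, 2, 4, 1]
print(u_uv_uvw([1, 2], [3, 4], [1, 1], 5))      # → [1, 2, 4, 1, 0, 2]
```

Applying the map recursively to the two (or three) halves gives the iterated construction; with single-symbol constituent codes it reduces to the Arikan kernel acting coordinate-wise.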
Little, Brown Books for Young Readers Author: Lê, Minh Brand: Little, Brown Books for Young Readers Format: Picture Book Details: Product Description Handpicked by Amazon kids' books editor, Seira Wilson, for Prime Book Box – a children's subscription that inspires a love of reading. When a young boy visits his grandfather, their lack of a common language leads to confusion, frustration, and silence. But as they sit down to draw together, something magical happens-with a shared love of art and storytelling, the two form a bond that goes beyond words. With spare, direct text by Minh Lê and luminous illustrations by Caldecott Medalist Dan Santat, this stirring picture book about reaching across barriers will be cherished for years to come. Review PRAISE FOR THE COOKIE FIASCO "[B]e prepared for this one to fly off the shelves."-- School Library Journal APALA Asian/Pacific American Award for Literature Winner * " Drawn Together is a testament to the strength of a shared love to overcome barriers of age, language and culture, and will leave readers, like Grandpa and his grandson, 'happily . . . 
SPEECHLESS.'"-- Shelf Awareness, starred review * "[A] visual language that speaks profoundly to readers."-- Bulletin of the Center for Children's Books, starred review * "[P]erfectly paced to express universal emotions that connect generations separated by time, experience, and even language."-- School Library Journal, starred review * "Focus on an underrepresented culture; highly accessible emotions; concise, strong storytelling; and artistic magnificence make this a must have."-- Booklist, starred review * "Phantasmagoric."-- Publishers Weekly, starred review * "The power of art takes center stage."-- Kirkus Reviews, starred review "Dynamic."-- The Wall Street Journal "A beautifully told and illustrated story about a grandson and grandfather struggling to communicate across divides of language, age, and culture."-- Viet Thanh Nguyen, Pulitzer Prize winner for The Sympathizer About the Author Minh Lê is a writer but, like his grandfather, is a man of few words. He is a national early childhood policy expert, author of Let Me Finish! (illustrated by Isabel Roxas), and has written for the New York Times, the Horn Book, and the Huffington Post. A first-generation Vietnamese-American, he went to Dartmouth College and has a master's in education from Harvard University. Outside of spending time with his beautiful wife and sons in their home near Washington, DC, Minh's favorite place to be is in the middle of a good book. Visit Minh online at minhlebooks.com or on Twitter @bottomshelfbks. Dan Santat is the author and illustrator of the Caldecott Award-winning The Adventures of Beekle: The Unimaginary Friend, as well as The Cookie Fiasco, After the Fall, and others. He is also the creator of Disney's animated hit, The Replacements. Dan lives in Southern California with his wife, two kids, and two dogs. Visit him at dantat.com.
Tribal Payday Advance "Www Paydaygift Co Promo Code". There are times when an extra infusion of cash is a must. There are instances when cash is needed right away, such as an illness in the family, a car that breaks down when it is the car you need to get to work, someone having to go into the hospital when the hospital wants a down payment, or having to travel out of town because of a severe illness of a relative. You can get short-term cash with a low credit score by using Www Paydaygift Co Promo Code, and read reviews. Looking for Www Paydaygift Co Promo Code. $1000 advance loan inside a 30-minute period. Immediately transferred within 24+ hours. Straightforward approval in a few minutes. Get started now. Www Paydaygift Co Promo Code, When you realise you are looking for a short-term loan, you will find not many options. While you can borrow from family or friends, that may not be a possibility and, honestly, it may be a bit uncomfortable. Yes, you are able to sign up for a payday loan, but we all know that the fees for those can be high. Luckily, a new loan is in town that has all the ease of a payday advance, but with lower fees. PaydayGift payday loans are a great option for most people in several situations. Continue reading for additional details on the most effective payday loan alternative there is. PaydayGift.com payday loans are a fantastic option. They allow consumers the chance to access many different kinds of installment loans. The APR is based on the PaydayGift.com Ladder, and as you make loan payments on time, you move up the ladder. The higher on the ladder you are, the lower your rates are and the longer the loan term you will be offered. Getting a Payday Gift loan is actually very easy. The very first thing you have to do is visit the website. Once you do, it is possible to apply immediately, or spend time reading about the loan and anything else they can provide. 
When you choose to learn more, you are making a good decision, as it is very important to know as much as you can about any loan you take out. Once you have gotten to the website and learned about PaydayGift.com, you can start applying for a loan. You will be required to input your personal information, because this is what they will use to determine whether you qualify and at what rate. Make sure you are thorough in your application; if you aren't, it might hold up the loan process and cause a delay. Also, as you complete the application, it is important to be truthful. Much of the information that you share will likely be validated in some way. Some of this can be done by the lender, but some of the information you will be asked to prove yourself. You may need to supply Payday Gift with pay stubs, your driver's license, or several other documents to prove the information that you include on your application. When your application is done, you can then submit it. It doesn't take much time at all for you to hear back from PaydayGift concerning their decision to approve your loan. Once the loan is approved, you will be notified and will need to sign loan documents. These papers will include the loan amount, the APR, the length of the loan, and other important details. It is essential that you understand what you are signing, as it is a legal document.
package ru.job4j.generic;

import ru.job4j.service.SimpleList;

public class RoleStore extends AbstractStore<Role> {
    //SimpleList<Role> simpleList = new SimpleList<>(10);
}
class GamesController < ApplicationController
  def start
    init_game
    redirect_to play_path
  end

  def play
    refresh
    @q = choose_question
    set_current_question(@q) if @q
    redirect_to end_path if complete?(@q)
  end

  def answer
    refresh
    unless answer_question(session[:current_question], params[:answer])
      redirect_to end_path
    else
      redirect_to play_path
    end
  end

  def end
    refresh
    @guesses = session[:possibilities].map { |name| Animal.where(name: name).first }
    session[:history] = []
  end
end
Q: Coordinate transformation in Tensor Calculus I am doing a problem from Schutz, Introduction to general relativity.The question asks you to find a coordinate transformation to a local inertial frame from a weak field newtonian metric tensor $$ds^2=-(1+2\phi)dt^2+(1-2\phi)(dx^2+dy^2+dz^2).$$ I looked at the solution from a manual and it has the following equation,$$x^{\alpha '} = (\delta^\alpha_\beta + L^\alpha_\beta)x^\beta$$ $L^\alpha_\beta$ is a function of the Newtonian potential $\phi$. My question is: Is this a valid tensor equation? The transformation is motivated by the idea that when $\phi$ is zero then you already have a locally inertial frame hence $L^\alpha_\beta$ are all zero and $x^{\alpha '} = x^{\alpha}$ which is very understandable. But is the equation, $x^{\alpha '} = (\delta^\alpha_\beta + L^\alpha_\beta)x^\beta$, symbolically correct (because the superscripts don't balance out like they normally do in tensor calculus)? But may be if $L^\alpha_\beta$ is not a tensor then it does not have to obey those principles of tensor calculus. A: Firstly, yes, the indices do work properly, although there is confusion about primes. I normally put primes on the object itself (ie I would write $x'^\alpha$, not on the index), but this is just notation and I don't actually know what the normal convention is (and my notation is even more confusing in this case I think). If you want to prime the indices then the expression should really be: $$x^{\alpha'} = \left(\delta^{\alpha'}{}_\beta + L^{\alpha'}{}_\beta\right)x^\beta$$ And this is fine. However, there is an important point here, which I think you have realised. The things that drive coordinate / basis transformations are not tensors: they are, rather, just nonsingular matrices, and what this is is just matrix multiplication. It's easy to see that they are not (the components of) tensors because they have indices in two different bases, and that makes no sense at all. 
And if you do a suitable change of basis to get their indices all in one basis, you find that the result is always $\delta^\alpha{}_\beta$!
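To see this concretely, one can exhibit an $L^{\alpha'}{}_\beta$ that does the job to first order. The following is a sketch of our own (treating $\phi$ as locally constant, the usual local-inertial-frame approximation), not taken from the solution manual:

```latex
% Choose  L^{0'}{}_{0} = \phi  and  L^{i'}{}_{j} = -\phi\,\delta^{i}_{j},
% i.e.  t' = (1+\phi)\,t  and  x'^{i} = (1-\phi)\,x^{i}.  Then
\begin{align*}
dt   &= (1-\phi)\,dt' + O(\phi^2), \qquad
dx^{i} = (1+\phi)\,dx'^{i} + O(\phi^2),\\
ds^2 &= -(1+2\phi)(1-2\phi)\,dt'^2
        + (1-2\phi)(1+2\phi)\,\delta_{ij}\,dx'^{i}\,dx'^{j} + O(\phi^2)\\
     &= -dt'^2 + \delta_{ij}\,dx'^{i}\,dx'^{j} + O(\phi^2).
\end{align*}
```

So the metric is Minkowski to first order in $\phi$, and the indices on $L^{\alpha'}{}_\beta$ balance exactly as in the expression above: one primed index up, one unprimed index down, just like a Jacobian matrix rather than a tensor.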
namespace coding
{
double CalculateLength(vector<TrafficGPSEncoder::DataPoint> const & path)
{
  double res = 0;
  for (size_t i = 1; i < path.size(); ++i)
  {
    auto p1 = MercatorBounds::FromLatLon(path[i - 1].m_latLon.lat, path[i - 1].m_latLon.lon);
    auto p2 = MercatorBounds::FromLatLon(path[i].m_latLon.lat, path[i].m_latLon.lon);
    res += MercatorBounds::DistanceOnEarth(p1, p2);
  }
  return res;
}

void Test(vector<TrafficGPSEncoder::DataPoint> & points)
{
  double const kEps = 1e-5;
  for (uint32_t version = 0; version <= TrafficGPSEncoder::kLatestVersion; ++version)
  {
    vector<uint8_t> buf;
    MemWriter<decltype(buf)> memWriter(buf);
    UNUSED_VALUE(TrafficGPSEncoder::SerializeDataPoints(version, memWriter, points));

    vector<TrafficGPSEncoder::DataPoint> result;
    MemReader memReader(buf.data(), buf.size());
    ReaderSource<MemReader> src(memReader);
    TrafficGPSEncoder::DeserializeDataPoints(version, src, result);

    TEST_EQUAL(points.size(), result.size(), ());
    for (size_t i = 0; i < points.size(); ++i)
    {
      TEST_EQUAL(points[i].m_timestamp, result[i].m_timestamp,
                 (points[i].m_timestamp, result[i].m_timestamp));
      TEST(my::AlmostEqualAbsOrRel(points[i].m_latLon.lat, result[i].m_latLon.lat, kEps),
           (points[i].m_latLon.lat, result[i].m_latLon.lat));
      TEST(my::AlmostEqualAbsOrRel(points[i].m_latLon.lon, result[i].m_latLon.lon, kEps),
           (points[i].m_latLon.lon, result[i].m_latLon.lon));
    }

    if (version == TrafficGPSEncoder::kLatestVersion)
    {
      LOG(LINFO, ("path length =", CalculateLength(points), "num points =", points.size(),
                  "compressed size =", buf.size()));
    }
  }
}

UNIT_TEST(Traffic_Serialization_Smoke)
{
  vector<TrafficGPSEncoder::DataPoint> data = {
      {0, ms::LatLon(0.0, 1.0)},
      {0, ms::LatLon(0.0, 2.0)},
  };
  Test(data);
}

UNIT_TEST(Traffic_Serialization_EmptyPath)
{
  vector<TrafficGPSEncoder::DataPoint> data;
  Test(data);
}

UNIT_TEST(Traffic_Serialization_StraightLine100m)
{
  vector<TrafficGPSEncoder::DataPoint> path = {
      {0, ms::LatLon(0.0, 0.0)},
      {0, ms::LatLon(0.0, 1e-3)},
  };
  Test(path);
}
UNIT_TEST(Traffic_Serialization_StraightLine50Km)
{
  vector<TrafficGPSEncoder::DataPoint> path = {
      {0, ms::LatLon(0.0, 0.0)},
      {0, ms::LatLon(0.0, 0.5)},
  };
  Test(path);
}

UNIT_TEST(Traffic_Serialization_Zigzag500m)
{
  vector<TrafficGPSEncoder::DataPoint> path;
  for (size_t i = 0; i < 5; ++i)
  {
    double const x = i * 1e-3;
    double const y = i % 2 == 0 ? 0 : 1e-3;
    path.emplace_back(TrafficGPSEncoder::DataPoint(0, ms::LatLon(y, x)));
  }
  Test(path);
}

UNIT_TEST(Traffic_Serialization_Zigzag10Km)
{
  vector<TrafficGPSEncoder::DataPoint> path;
  for (size_t i = 0; i < 10; ++i)
  {
    double const x = i * 1e-2;
    double const y = i % 2 == 0 ? 0 : 1e-2;
    path.emplace_back(TrafficGPSEncoder::DataPoint(0, ms::LatLon(y, x)));
  }
  Test(path);
}

UNIT_TEST(Traffic_Serialization_Zigzag100Km)
{
  vector<TrafficGPSEncoder::DataPoint> path;
  for (size_t i = 0; i < 1000; ++i)
  {
    double const x = i * 1e-1;
    double const y = i % 2 == 0 ? 0 : 1e-1;
    path.emplace_back(TrafficGPSEncoder::DataPoint(0, ms::LatLon(y, x)));
  }
  Test(path);
}

UNIT_TEST(Traffic_Serialization_Circle20KmRadius)
{
  vector<TrafficGPSEncoder::DataPoint> path;
  size_t const n = 100;
  for (size_t i = 0; i < n; ++i)
  {
    double const alpha = 2 * math::pi * i / n;
    double const radius = 0.25;
    double const x = radius * cos(alpha);
    double const y = radius * sin(alpha);
    path.emplace_back(TrafficGPSEncoder::DataPoint(0, ms::LatLon(y, x)));
  }
  Test(path);
}

UNIT_TEST(Traffic_Serialization_ExtremeLatLon)
{
  vector<TrafficGPSEncoder::DataPoint> path = {
      {0, ms::LatLon(-90, -180)},
      {0, ms::LatLon(90, 180)},
  };
  Test(path);
}
}  // namespace coding
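The role `CalculateLength` plays above — summing great-circle distances over consecutive path points — can be sketched independently of the project's `MercatorBounds` helpers. A hedged Python equivalent using the haversine formula (the Earth-radius constant and function names are our own assumptions, not taken from the library):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres (assumed constant)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def path_length_m(points):
    """Sum of great-circle segment lengths for a polyline of (lat, lon) tuples."""
    return sum(haversine_m(*points[i - 1], *points[i]) for i in range(1, len(points)))

# One degree of longitude on the equator is roughly 111 km:
print(path_length_m([(0.0, 0.0), (0.0, 1.0)]))
```

As in the C++ tests, an empty or single-point path has length zero, and the `StraightLine100m` case (a 1e-3 degree step) comes out at roughly 111 metres.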
<?php
namespace Magento\Sales\Model\Resource;

/**
 * Sales resource helper interface
 *
 * @author Magento Core Team <core@magentocommerce.com>
 */
interface HelperInterface
{
    /**
     * Update rating position
     *
     * @param string $aggregation One of \Magento\Sales\Model\Resource\Report\Bestsellers::AGGREGATION_XXX constants
     * @param array $aggregationAliases
     * @param string $mainTable
     * @param string $aggregationTable
     * @return $this
     */
    public function getBestsellersReportUpdateRatingPos(
        $aggregation,
        $aggregationAliases,
        $mainTable,
        $aggregationTable
    );
}
Greek police clash with protesters at property foreclosure auction Reuters, Wednesday 29 Nov 2017 Greek police used teargas on Wednesday to disperse protesters from an Athens courtroom where they had been trying to halt dozens of planned foreclosure auctions of property. The foreclosure auctions are a key condition of the country's international bailout but have been repeatedly disrupted by leftist activists who say they are unfair and target the poor. TV footage showed protesters hurling objects at police in a corridor filled with smoke and police pushing them back. Notaries, who handle the auctions, had been boycotting them due to safety concerns, but returned to work on Wednesday after the leftist-led government said it would improve the process and increase security. Greece has also agreed to launch foreclosure e-auctions, which began on Wednesday after being pushed back by two months. Prime Minister Alexis Tsipras' government swept to power two years ago on promises to protect primary homes and cancel unpopular reforms and austerity. It later signed up to a new bailout and agreed to more belt-tightening, though it has pledged to protect people's primary homes. However, Greek banks are saddled with more than one hundred billion euros in bad loans after years of financial crisis, mainly due to people's inability to repay mortgages. An austerity-induced recession has cut jobs and shut businesses. Non-performing loans top the agenda of Greece's talks with its creditors. They are currently reviewing Greece's progress on energy, labour and public sector reforms agreed under its third international bailout, which is worth 86 billion euros. Athens wants to speed up the implementation of some agreed reforms to wrap up its bailout review soon. It hopes to start talks with lenders on the terms of exiting the bailout next year and on further debt relief - a long-standing Greek demand.
Intern Dies at Bank of America, London

It is being reported today (August 20, 2013) that an intern working at the Bank of America, in the London office, has died. 21-year-old Moritz Erhardt, a University of Michigan student from Germany, was found dead in his flat after working three straight days for the bank. Erhardt, who had epilepsy, was found in his shower. Strange thing here is that Mr. Erhardt was only one week away from the end of his internship, which meant he would have (possibly) been put on the payroll. The Head of International Communications for B of A stated: "I'm not going to comment on what hours people choose to spend in the office voluntarily. But if you think about it logically, what we're trying to do is something that happens all across the big firms. We're looking to get to know them better." Another anonymous source said that typically, these interns work 100 to 110 hours a week. (So, if you think about this story logically, we see a young man who was hell-bent on impressing the front office. So hell-bent that he worked three straight days or, literally worked himself to death (without getting one cent in return). All for a bank that could care less about him. Like Clint Eastwood said in a movie: "Dying ain't much of a living." Yes, there are other possible motives for this young man's death. In fact, the "possibles" are limitless.) WAKE UP, PEOPLE.

lurk: He earned 3500€ a month. And dying from exhaustion happens. To some after 72 hours of playing starcraft, to others that aspire to become another patrick bateman like burden to humanity.

littleghost (replying to lurk): TRY GETTING A RESERVATION AT DORSIA NOW, MORITZ! (I'm sorry, it was left there, I picked it up...)
"We have really enjoyed your chicken. It's juicy and flavorful...and beautifully wrapped! Thank you for working hard to provide healthy food for our family!" "Yum! Yum! Yum! So far, our family has tried the pastured chicken and eggs and we are SO pleased. Additionally, we used to have to get 'real food' from over an hour away, but we are so glad that now we can have access to high quality, delicious food so much closer to home! Thank you for your work! We are blessed by your efforts! Keep it up!" "We love coming to the Bonnie View Farm and the meat we bought is of outstanding quality. We could not believe the difference in the taste of the chicken soup and roasted chicken we prepared at home. You will be surprised how much difference it makes when the animals are fed only good feed. We highly recommend joining the buying club. You will not regret it. Your order is always ready and Steve and Justine make sure that you are well taken care of. You can see the quality and dedication that goes into raising the animals and preparing the meat for sale. The farm is conveniently located off Banister Road close to the I-435 and 35 highways." "What a privilege we consider it to know and live close to this dear family who knowing and loving our Gracious Creator seek to be good and faithful stewards of His creation. We have thoroughly enjoyed all of your products which we have tried so far. Recently we had family over for dinner and your delicious chicken was the biggest hit of the evening. They said it was the "best bird" they had ever had and they all loved it! Of course we let them in on where this amazing bird came from. But the taste is not the only plus, better yet is knowing that it is actually good for us because of your good stewardship of God's creation. We also enjoy your eggs, bacon, sausage, and wonderful goat's milk. In our opinion they are the best ever. 
During my pregnancy my midwife encouraged me to always have a boiled egg on hand for a nutritious snack and before your eggs became available to us I would have never imagined one egg could taste any different from any other egg (although I knew the nutrient benefits could be drastically different), until I tasted one of your eggs. Even a simple boiled egg from your farm tastes so much better than the best eggs we could buy at the store. We haven't tried your ham yet but we look forward to it because we have already heard that it is the BEST and we aren't surprised at all having completely enjoyed everything else. We are truly thankful to God for you and the wonderful provisions He graciously makes available to us through your farm!"
\section{Introduction} The last decade has represented a golden era for Cosmology: a flood of data with unprecedented accuracy has become available on such diverse fields as Cosmic Microwave Background (WMAP, Planck, SPT, ACT, see for instance \cite{WMAP, das, hanson, planck23} and references therein), Ultra High Energy Cosmic Rays (cfr. \cite{auger}), Gamma rays (Fermi, Agile, ARGO-YBJ+, cfr. \cite{ARGO, fermi, AGILE}), neutrinos (cfr. \cite{aguilar}) and many others. Many of these experiments have produced full-sky surveys, and basically all of them have been characterized by fields of view covering thousands of square degrees. In such circumstances, data analysis methods based on flat-sky approximations have become unsatisfactory, and a large amount of effort has been devoted to the development of procedures which take fully into account the spherical nature of collected data. As is well known, Fourier analysis is an extremely powerful method for data analysis and computation; in a spherical context, Fourier analysis corresponds to the spherical harmonics dictionary, which is now fully implemented in very efficient and complete packages such as HealPix, see \cite{gorski}. For most astrophysical applications, however, standard Fourier analysis may often prove inadequate due to the lack of localization properties in the real domain; because of this, spherical harmonics cannot handle easily the presence of huge regions of masked data, nor can they be used to investigate local features such as asymmetries and anisotropies or the search for point sources. As a consequence, several methods based on spherical wavelets have become quite popular in astrophysical data analysis, see for instance \cite{gonzamart1, donzelli,fay11, Wiaux, cmbneed, pbm, pietrobon1,rudjord1,starck1, starck2} and also \cite{starcklibro} for a review. 
These procedures have been applied to a huge variety of different problems, including for instance point source detection in Gamma ray data (cfr. \cite{iuppa1, starckbis}), testing for nonGaussianity (cfr. \cite{lanm,donzelli, regan, rudjord1}), searching for asymmetries and local features (cfr. \cite{cayon, pietrobon1, vielva}), point source subtraction on CMB data (see \cite{scodeller}), map-making and component separation (cfr. \cite{basak, delabruille2, delabrouille1, cardoso}), and several others. The next decade will probably experience an even more amazing improvement in observational data. Huge surveys are being planned or are already at the implementation stage, many of them aimed at the investigation of the large scale structure of the Universe and the investigation of Dark Energy and Dark Matter; for instance, a large international collaboration is fostering the implementation of the Euclid satellite mission, aimed at a deep analysis of weak gravitational lensing on nearly half of the celestial sky (see for instance \cite{EUCLID}). These observational data are also complemented by N-body simulation efforts (cfr. \cite{Millennium}) aimed at the generation of realistic three-dimensional models of the current large scale structure of the Universe. From the point of view of data analysis, these data naturally entail a three-dimensional structure, which calls for suitable techniques of data analysis. In view of the previous discussion, it is easy to understand the motivation to develop wavelet systems on the three-dimensional ball, extending those already available on the sphere. Indeed, some important efforts have already been spent in this direction, especially in the last few years. Some attempts outside the astrophysical community have been provided by \cite{fmm, michel, pencho}; however the first two proposals are developed in a continuous setting and do not seem to address discretization issues and the implementation of an exact reconstruction formula. 
On the other hand, in \cite{pencho} the authors proceed by projecting the three-dimensional ball onto a unit sphere in four dimensions, and then developing the corresponding spherical needlet construction in the latter space. While this approach is mathematically intriguing, to the best of our knowledge it has not led to a practical implementation, at least in an astrophysical context. This may be due to some difficulties in handling the required combination of Jacobi polynomials, and the lack of explicit recipes for cubature points in this context; moreover the projection of the unit ball on a unit sphere in higher dimensions may induce some local anisotropies, whose effect still needs to be investigated in an astrophysical context. Within the astrophysical community, some important proposals for the construction of three-dimensional wavelets have been advocated by \cite{starck} and \cite{mcewen}. In the former paper, the authors propose to use a frequency filter on the Fourier-Bessel transform of the three-dimensional field. The proposal by \cite{mcewen} also concentrates on Fourier-Bessel transforms, and is mainly aimed at the construction of a proper set of cubature points and weights on the radial part. This is in practice a rather difficult task: while it is theoretically known that the cubature points can be taken to be the zeroes of Bessel functions of increasing degrees, in practice these points are not available explicitly and the related computations may be quite challenging. To overcome this issue, in \cite{mcewen} a very interesting solution is advocated: more precisely, the authors start by constructing an exact transform on the radial part using damped Laguerre polynomials, which allow for an exact quadrature rule. Combining this procedure with the standard spherical transform, they obtain an exact 3-dimensional decomposition named the Fourier-Laguerre transform. 
Their final proposal, the so-called flaglet transform, is then obtained by an explicit projection onto the Bessel family (e.g., a form of harmonic tiling on the Fourier-Laguerre transform); this approach is computationally feasible and exhibits very good accuracy properties from the numerical point of view. Our starting point here is to some extent related, and quite explicitly rooted in the astrophysical applications we have in mind. In particular, we envisage a situation where an observer located at the centre of a ball is collecting data, e.g., we assume that she/he is observing a family of concentric spheres centred at the origin. At a given resolution level, the pixelization on each of these spheres is assumed to be the same, no matter their radial distance from the origin - this seems a rather realistic representation of astrophysical experiments, although of course it implies that with respect to Euclidean distance the sampling is finer for points located closer to the observer. In this sense, our construction has an implicit radial symmetry which we exploit quite fully: in particular, we view the ball of radius $R$ as a manifold $M=[0,R]\times S^{2},$ and we modify the standard spherical Laplacian so that the distance between two points on the same spherical shell depends only on the angular component and not on the radius of the shell. The corresponding eigenfunctions have very simple expressions in terms of trigonometric polynomials and spherical harmonics; our system (which we label \emph{3D radial needlets}) is then built out of the same procedures as for needlets on the sphere, namely convolution of a projection operator by means of a smooth window function $b(\cdot),$ and discretization by means of an explicitly provided set of cubature points. 
Concerning the latter, cubature points and weights arise very simply from the tensor products of cubature points on the sphere (as provided by HealPix in \cite{gorski}, for instance) and a uniform discretization on the radial part, which is enough for exact integration of trigonometric polynomials. We believe the present proposal enjoys some important advantages, such as \begin{enumerate} \item very good localization properties (in the suitable distance, as motivated before); these properties can be established in a fully rigorous mathematical way, exploiting previous results on the construction of wavelets for general compact manifolds in \cite{gm2}, see also \cite{pesenson2}; \item an exact reconstruction formula for band-limited functions, a consequence of the so-called tight frame property; the latter property has independent interest, for instance for the estimation of a binned spectral density by means of needlet coefficients (see \cite{bkmpAoS2} for analogous results in the spherical case); \item a computationally simple and effective implementation scheme, entailing uniform discretization and the exploitation of existing packages; \item a natural embedding into experimental designs which appear quite realistic from an astrophysical point of view, as discussed earlier. \end{enumerate} The construction and these properties are discussed in more detail in the rest of this paper; we note that the same ideas can be simply extended to cover the case of spin-valued functions, along the same lines as done for standard 2D needlets by \cite{gm3,gm4}: these extensions may be of interest to cover forthcoming data on weak gravitational lensing (e.g. \cite{EUCLID}).
The paper is organized as follows: Section 2 presents the background material on our embedding of the three-dimensional ball, related Fourier analysis and discretization issues; Section 3 presents the 3D radial needlet construction in detail; Section 4 discusses the comparison with possible alternative proposals; Section 5 presents our numerical evidence, while some technical computations are collected in the Appendix. \section{The Basic Framework} As mentioned in the Introduction and discussed at length also in other papers (\cite{mcewenloc, mcewen}), in an astrophysical framework data collection on the ball is characterized by a marked asymmetry between the radial part and the spherical component. Indeed, it is well-known that for astrophysical datasets observations at a growing radial distance correspond to events at higher redshift, which have hence occurred further away in time, not only in space; data at different redshifts correspond to different epochs of the Universe and are hence the outcome of different physical conditions. From the experimental point of view, the signal-to-noise ratio is strongly influenced by radial distance; for instance, a strong selection bias is introduced as higher and higher intrinsic luminosity is needed to observe objects at growing redshift. The asymmetry between the radial and spherical components is also reproduced in data storing mechanisms, which typically adopt independent discretization/pixelization schemes for the two components.
In view of these considerations, it seems natural and convenient to represent functions/observations on the three-dimensional ball $\mathcal{B}_{R}=\left\{ (x_{1},x_{2},x_{3}):x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\leq R^{2}\right\} $ as being defined on a family of concentric spheres (shells), indexed by a continuous radial parameter (i.e., a growing redshift); here, the radius $R$ of the ball can be taken to represent the highest redshift value $z$ in the catalogue being analyzed, $R=z_{\max }.$ In the sequel, we shall work with spherical coordinates $(r,\theta ,\varphi );$ for notational convenience, we take $r=2\pi z/R,$ so that $r\in \lbrack 0,2\pi ].$ Formally, this means we shall focus on the product space \begin{equation*} L^{2}(M,d\mu )=L^{2}((0,2\pi ],dr)\otimes L^{2}(S^{2},\>d\sigma )\text{ ,} \end{equation*} where $d\mu =dr\,d\sigma $, $d\sigma =\left( 4\pi \right) ^{-1}\sin \theta \,d\theta \,d\varphi $, and $dr$ denotes standard Lebesgue measure on $(0,2\pi ]$. This simplifying step is at the basis of our construction; indeed, for our purposes it will hence be sufficient to construct a tight and localized frame on $L^{2}(M,d\mu ),$ a task which can be easily accomplished as follows. Recall first that for square-integrable functions on the sphere, i.e. on $L^{2}(S^{2},d\sigma ),$ a standard orthonormal basis is provided by the set of spherical harmonics \begin{equation*} \left\{ Y_{\ell ,m}\left( \theta ,\varphi \right) \right\} \text{ , }\ell =0,1,2,\ldots ,\;m=-\ell ,\ldots ,\ell \text{ ,} \end{equation*} where $\theta \in \left[ 0,\pi \right] $ and $\varphi \in \left[ 0,2\pi \right) $.
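As a minimal illustration of this coordinate convention, the map $r=2\pi z/R$ with $R=z_{\max }$ can be coded directly; the function names and the value of $z_{\max }$ below are ours, chosen only for the sketch:

```python
import math

Z_MAX = 2.0  # hypothetical catalogue depth R = z_max (illustrative value, not from the paper)

def z_to_r(z, z_max=Z_MAX):
    # r = 2*pi*z/R maps redshifts z in [0, z_max] onto the radial coordinate r in [0, 2*pi]
    if not 0.0 <= z <= z_max:
        raise ValueError("redshift outside the catalogue range")
    return 2.0 * math.pi * z / z_max

def r_to_z(r, z_max=Z_MAX):
    # inverse map: r in [0, 2*pi] back to redshift
    return z_max * r / (2.0 * math.pi)
```

The boundary values map as expected: $z=0\mapsto r=0$ and $z=z_{\max }\mapsto r=2\pi $.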
As is well-known, the spherical harmonics provide a complete set of eigenfunctions for the spherical Laplacian \begin{equation*} \Delta _{S^{2}}=\frac{1}{\sin \theta }\frac{\partial }{\partial \theta }\left( \sin \theta \frac{\partial }{\partial \theta }\right) +\frac{1}{\sin ^{2}\theta }\frac{\partial ^{2}}{\partial \varphi ^{2}}\text{ ;} \end{equation*} indeed \begin{equation*} \Delta _{S^{2}}Y_{\ell ,m}=-\ell (\ell +1)Y_{\ell ,m}\text{ , }\ell =0,1,2,.... \end{equation*} Hence, for any $f\in L^{2}\left( S^{2},d\sigma \right) $, we have \begin{equation*} f\left( \omega \right) =\sum_{\ell \geq 0}\sum_{m=-\ell }^{\ell }a_{\ell ,m}Y_{\ell ,m}\left( \omega \right) \text{ , }\omega \in S^{2}\text{ ,} \end{equation*} where the coefficients $\left\{ a_{\ell ,m}\right\} $ are evaluated by \begin{equation*} a_{\ell ,m}=\int_{S^{2}}\overline{Y}_{\ell ,m}\left( \omega \right) f\left( \omega \right) \sigma \left( d\omega \right) \text{ .} \end{equation*} On the other hand, for the radial part we consider the standard Laplacian operator $\frac{\partial ^{2}}{\partial r^{2}}$, for which an orthonormal family of eigenfunctions is well-known to be given, for $n=0,1,2,\ldots $, by \begin{equation*} \frac{\partial ^{2}}{\partial r^{2}}\left( 2\pi \right) ^{-\frac{1}{2}}\exp \left( inr\right) =-n^{2}\left( 2\pi \right) ^{-\frac{1}{2}}\exp \left( inr\right) \text{ .} \end{equation*} We can hence define a Laplacian on $M$ by \begin{equation*} \Delta _{M}:=\frac{\partial ^{2}}{\partial r^{2}}+\Delta _{S^{2}}\text{ ,} \end{equation*} so that \begin{equation} \Delta _{M}\left( \exp \left( inr\right) Y_{\ell ,m}\left( \omega \right) \right) =-e_{\ell ,n}\exp \left( inr\right) Y_{\ell ,m}\left( \omega \right) \text{ ,} \label{ourlap} \end{equation} where \begin{equation*} e_{\ell ,n}=n^{2}+\ell (\ell +1)\text{ .} \end{equation*} It is interesting to compare the action of $\Delta _{M}$ with the standard Laplacian in spherical coordinates, which is given by \begin{equation} \Delta =\frac{1}{r^{2}}\frac{\partial }{\partial r}r^{2}\frac{\partial }{\partial r}+\frac{1}{r^{2}}\Delta _{S^{2}}\text{ ;} \label{standlap} \end{equation} it can be checked that $\Delta _{M}$ is the Laplace-Beltrami operator corresponding to the metric tensor \begin{equation*} g_{M}=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \sin ^{2}\theta \end{array} \right) \text{ ,} \end{equation*} as opposed to the usual Euclidean metric in spherical coordinates \begin{equation*} g=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & r^{2} & 0 \\ 0 & 0 & r^{2}\sin ^{2}\theta \end{array} \right) \text{ .} \end{equation*} Likewise, the intrinsic distance between points $x_{1}=\left( r_{1},\omega _{1}\right) =\left( r_{1},\theta _{1},\varphi _{1}\right) $ and $x_{2}=\left( r_{2},\omega _{2}\right) =\left( r_{2},\theta _{2},\varphi _{2}\right) ,$ $\omega _{1},\omega _{2}\in S^{2},$ $r_{1},r_{2}\in \lbrack 0,2\pi ],$ $x_{1},x_{2}\in M,$ is provided by \begin{equation} d_{M}(x_{1},x_{2})=\sqrt{(r_{1}-r_{2})^{2}+d_{S^{2}}^{2}(\omega _{1},\omega _{2})}\text{ ,} \label{Mdistance} \end{equation} as opposed to the Euclidean distance in spherical coordinates \begin{equation} d(x_{1},x_{2})=\sqrt{(r_{1}-r_{2})^{2}+r_{1}r_{2}d_{S^{2}}^{2}(\omega _{1},\omega _{2})}\text{ .} \label{Edistance} \end{equation} In words, in our setting the distance between two points at a given redshift is simply equal to their angular separation, whatever the redshift; on the contrary, under the Euclidean distance, for a given angular separation the actual distance grows with the radial component. It can be argued that the metric $d_{M}(\cdot ,\cdot )$ is a natural choice for any wavelet construction where the radial component is decoupled from the spherical one.
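The contrast between the two distances can be made concrete numerically: for a fixed angular separation, the distance of (\ref{Mdistance}) is the same on every shell, while that of (\ref{Edistance}) grows with the radius. A minimal sketch (helper names are ours):

```python
import math

def angular_distance(theta1, phi1, theta2, phi2):
    # great-circle distance d_{S^2} between two points on the unit sphere
    c = (math.sin(theta1) * math.sin(theta2) * math.cos(phi1 - phi2)
         + math.cos(theta1) * math.cos(theta2))
    return math.acos(max(-1.0, min(1.0, c)))

def d_M(x1, x2):
    # the intrinsic distance of eq. (Mdistance): radial part decoupled from the angular part
    r1, t1, p1 = x1
    r2, t2, p2 = x2
    return math.sqrt((r1 - r2) ** 2 + angular_distance(t1, p1, t2, p2) ** 2)

def d_euclid(x1, x2):
    # the distance of eq. (Edistance) in spherical coordinates
    r1, t1, p1 = x1
    r2, t2, p2 = x2
    return math.sqrt((r1 - r2) ** 2 + r1 * r2 * angular_distance(t1, p1, t2, p2) ** 2)
```

For two points separated by $0.3$ radians along the equator, $d_{M}=0.3$ on every shell, whereas the Euclidean-type distance doubles when moving from the shell $r=1$ to the shell $r=2$.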
Given this choice of metric, our construction can be advocated as optimal, in the sense that it is based on the eigenfunctions of the associated Laplacian, and hence can be shown to enjoy excellent localization properties in the real and harmonic domains. As a consequence of the previous discussion, the family of functions \begin{equation} u_{\ell ,m,n}\left( r,\vartheta ,\varphi \right) =\left( 2\pi \right) ^{-\frac{1}{2}}\exp \left( inr\right) Y_{\ell ,m}\left( \vartheta ,\varphi \right) \label{Mbasis} \end{equation} provides an orthonormal basis for $L^{2}(M,\>d\mu )$, i.e., for any $F\in L^{2}\left( M,d\mu \right) $, the following expansion holds in $L^{2}(M,d\mu ):$ \begin{equation} F\left( r,\vartheta ,\varphi \right) =\sum_{\ell \geq 0}\sum_{m=-\ell }^{\ell }\sum_{n\geq 0}a_{\ell ,m,n}u_{\ell ,m,n}\left( r,\vartheta ,\varphi \right) \text{ ,} \label{usum} \end{equation} where \begin{eqnarray} \nonumber a_{\ell ,m,n}&:=&\left\langle F,u_{\ell ,m,n}\right\rangle _{L^{2}(M,d\mu )}\\ &=&\int_{M}F\left( x\right) \overline{u}_{\ell ,m,n}\left( x\right) d\mu \left( x\right) \text{ .} \label{ureconst} \end{eqnarray} Of course, we can also rewrite (\ref{ourlap}) more compactly as \begin{equation} \Delta _{M}u_{\ell ,m,n}=-e_{\ell ,n}u_{\ell ,m,n}\text{ .} \label{D} \end{equation} It may be noted that by taking a trigonometric basis for the radial part, we are implicitly assuming that the functions to reconstruct satisfy periodic boundary conditions. For astrophysical applications, this does not seem to pose any problem. Indeed, we envisage circumstances where catalogues are provided within some band of redshift values $0<z_{min}<z_{max}$; periodicity is then obtained by simply padding zero observations at the boundaries. The final step we need to complete our frame construction is discretization; the procedure is standard, and can be outlined as follows. Let $\Pi _{\Lambda }$ be the set of band-limited functions of order smaller than $\Lambda $, i.e.
the linear span of the basis elements $\left\{ u_{\ell ,m,n}\right\} $ for which the corresponding eigenvalues are such that $e_{\ell ,n}\leq \Lambda $. Given an integer $j$, there exists a set of cubature points $\aleph _{j}:=\left\{ \xi _{j,q,k}=\left( r_{j,q},\>\theta _{j,k},\>\varphi _{j,k}\right) \right\} $ and positive weights $\left\{ \lambda _{j,q,k}\right\} ,$ $1\leq q\leq Q_{j}$, $1\leq k\leq K_{j}$, such that for all $P\in \Pi _{B^{2j+2}}$ the following exact cubature formula holds: \begin{eqnarray} \nonumber \int_{M}P\left( x\right) d\mu \left( x\right) &=&\int_{S^{2}}\int_{0}^{2\pi }P\left( r,\theta ,\varphi \right) dr\,d\sigma \left( \theta ,\varphi \right) \\ &=&\sum_{q=1}^{Q_{j}}\sum_{k=1}^{K_{j}}P\left( r_{j,q},\>\theta _{j,k},\>\varphi _{j,k}\right) \lambda _{j,q,k}\text{ ,} \label{cubature} \end{eqnarray} where the cubature points and weights satisfy \begin{equation*} \lambda _{j,q,k}\approx B^{-3j}\text{ , }K_{j}\approx B^{2j}\text{ , }Q_{j}\approx B^{j}\text{ ,} \end{equation*} and the notation $x_{1}\approx x_{2}$ means that there exists $c>0$ such that $c^{-1}x_{1}\leq x_{2}\leq cx_{1}$. More explicitly, $K_{j}$ denotes the pixel cardinality on the spherical part, and $Q_{j}$ the pixel cardinality on the radial part, for a given resolution level $j$. In words, this means that for such functions integrals can be evaluated by finite sums over suitable points without any loss of accuracy. The existence of cubature points with the required properties follows immediately from the tensor construction described above: in particular, the spherical component $\left( \theta _{j,k},\varphi _{j,k}\right) $ can be provided along the same scheme as in \cite{npw2}, while for practical applications the highly popular pixelization scheme provided by HealPix (cfr.
\cite{gorski}) may be used; on the radial part, cubature points may simply be taken as $r_{j,q}:=\frac{2\pi q}{B^{j}},\;q=0,\ldots ,\left[ B^{j}\right] -1$, where $\left[ \cdot \right] $ denotes the integer part, cfr. Section \ref{sec:algorithm}. \section{3D Radial Needlets and their Main Properties} Having set the basic framework for Fourier analysis and discretization on $L^{2}(M,d\mu ),$ the construction of 3D radial needlets can proceed along very much the same lines as on the sphere or other manifolds (compare \cite{npw1, npw2, pes1, pes2, pg1, pesenson2}). More precisely, let us fix a scale parameter $B>1$, and let $b\left( u\right) $, $u\in \mathbb{R}$, be a positive kernel satisfying the following three assumptions: \begin{enumerate} \item $b\left( \cdot \right) $ has compact support in $\left[ 1/B,B\right] $; \item $b\left( \cdot \right) $ is infinitely differentiable in $\left( 0,\infty \right) $; \item the following partition of unity property holds: \begin{equation*} \sum_{j=-\infty }^{\infty }b^{2}\left( \frac{u}{B^{j}}\right) =1\text{ , for all }u>0\text{ .} \end{equation*} \end{enumerate} In \reffig{fig:bfunc} we show a visualization of $b\left( \frac{\sqrt{e_{\ell ,n}}}{B^{j}}\right) $ for different needlet frequencies $j$ and $\ell _{\max }=n_{\max }=200$. \begin{figure*}[] \begin{center} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{gln_3d_lmax200_nmax200_j4.pdf}} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{gln_3d_lmax200_nmax200_j5.pdf}} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{gln_3d_lmax200_nmax200_j6.pdf}} \end{center} \caption{3D needlet window functions $b\left( \frac{\sqrt{e_{\ell ,n}}}{B^{j}}\right) $: $j=4$ (left panel), $j=5$ (middle panel) and $j=6$ (right panel).
\label{fig:bfunc}} \end{figure*} Numerical recipes for the construction of window functions satisfying the three conditions above are now well-known in the literature; for instance, in \cite{pbm} (see also \cite{marpecbook}), the following procedure is introduced: \begin{itemize} \item STEP 1: Construct the $C^{\infty }$-function \begin{equation*} \phi _{1}\left( t\right) =\left\{ \begin{array}{c} \exp \left( -\frac{1}{1-t^{2}}\right) \\ 0 \end{array} \right. \begin{array}{c} t\in \left[ -1,1\right] \\ \text{otherwise} \end{array} \text{ ,} \end{equation*} compactly supported in $\left[ -1,1\right] $; \item STEP 2: Implement the non-decreasing $C^{\infty }$-function \begin{equation*} \phi _{2}\left( u\right) =\frac{\int_{-1}^{u}\phi _{1}\left( t\right) dt}{\int_{-1}^{1}\phi _{1}\left( t\right) dt}\text{ ,} \end{equation*} normalized so that $\phi _{2}\left( -1\right) =0$ and $\phi _{2}\left( 1\right) =1$; \item STEP 3: Construct the function \begin{equation*} \phi _{3}\left( t\right) =\left\{ \begin{array}{c} 1 \\ \phi _{2}\left( 1-\frac{2B}{B-1}\left( t-\frac{1}{B}\right) \right) \\ 0 \end{array} \right. \begin{array}{c} t\in \left[ 0,1/B\right] \\ t\in \left( 1/B,1\right] \\ t\in \left( 1,\infty \right) \end{array} \text{ ;} \end{equation*} \item STEP 4: Define, for $u\in \mathbb{R}$, \begin{equation*} b^{2}\left( u\right) =\phi _{3}\left( \frac{u}{B}\right) -\phi _{3}\left( u\right) \text{ .} \end{equation*} \end{itemize} Now recall that $e_{\ell ,n}=n^{2}+\ell (\ell +1)$, and let the symbol $\left[ \ell ,n\right] _{j}$ denote the pairs of $\ell $ and $n$ such that $e_{\ell ,n}$ is bounded above and below respectively by $B^{2\left( j+1\right) }$ and $B^{2\left( j-1\right) }$, i.e. \begin{equation*} \left[ \ell ,n\right] _{j}=\left\{ \ell ,n:B^{2\left( j-1\right) }\leq e_{\ell ,n}\leq B^{2\left( j+1\right) }\right\} \text{ }.
\end{equation*} We have the following \begin{definition} The radial 3D-needlet basis is defined by \begin{eqnarray} \nonumber \Phi _{j,q,k}\left( r,\vartheta ,\varphi \right) &=&\sqrt{\lambda _{j,q,k}}\sum_{\left[ \ell ,n\right] _{j}}\sum_{m=-\ell }^{\ell }b\left( \frac{\sqrt{e_{\ell ,n}}}{B^{j}}\right) \\ &\times &\overline{u}_{\ell ,m,n}\left( \xi _{j,q,k}\right) u_{\ell ,m,n}\left( x\right) \text{ ,} \label{3dneed} \end{eqnarray} where $\lambda _{j,q,k}$ and $\xi _{j,q,k}$ denote respectively the pixel volume and the pixel centre. \end{definition} Analogously to the related constructions on the sphere or on other manifolds, radial 3D-needlets can be viewed as the convolution of the projection operator \begin{equation*} Z_{\ell n}(\xi _{j,q,k},x)=\sum_{m}\overline{u}_{\ell ,m,n}\left( x\right) u_{\ell ,m,n}\left( \xi _{j,q,k}\right) \end{equation*} with the window function $b\left( \cdot \right) $. The properties of this system are to some extent analogous to those of the related construction on the sphere, as illustrated below. \subsection{The tight frame property} Let us first recall the notion of a tight frame, which is defined as a countable set of functions $\left\{ e_{i}\right\} $ such that \begin{equation*} \sum_{i}\beta _{i}^{2}(f)=\int_{M}f^{2}\left( x\right) d\mu \left( x\right) \text{ ,} \end{equation*} where the coefficients $\beta _{i}(f)$ are defined by \begin{equation} \beta _{i}(f)=\int f(x)e_{i}(x)d\mu \left( x\right) \text{ ,} \end{equation} so that the ``energy'' of the function $f$ is fully conserved in the collection of $\beta _{i}$'s; we refer for instance to \cite{hernandezweiss, pesensongeophysical} and the references therein for more details and discussion. In words, a tight frame can basically be seen as a (possibly redundant) basis; indeed, we recall that tight frames enjoy the same reconstruction property as standard orthonormal systems, i.e. \begin{equation*} f=\sum_{i}\beta _{i}(f)e_{i}\text{ },\end{equation*} the equality holding in the $L^{2}$ sense.
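The window-function recipe of STEPs 1-4 above translates directly into code. The sketch below is ours (the primitive in STEP 2 is evaluated by a simple midpoint rule); it also makes visible why the partition-of-unity property holds: the sum over $j$ telescopes to $\phi _{3}(0)-\phi _{3}(\infty )=1$.

```python
import math

def phi1(t):
    # STEP 1: C-infinity bump, compactly supported in [-1, 1]
    return math.exp(-1.0 / (1.0 - t * t)) if -1.0 < t < 1.0 else 0.0

def _midpoint(f, a, b, n=2000):
    # simple midpoint quadrature; accuracy controlled by n
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

_PHI1_TOTAL = None  # lazily cached normalizing constant of STEP 2

def phi2(u):
    # STEP 2: normalized primitive of phi1 (non-decreasing, phi2(-1)=0, phi2(1)=1)
    global _PHI1_TOTAL
    if _PHI1_TOTAL is None:
        _PHI1_TOTAL = _midpoint(phi1, -1.0, 1.0)
    if u <= -1.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    return _midpoint(phi1, -1.0, u) / _PHI1_TOTAL

def phi3(t, B):
    # STEP 3: equal to 1 on [0, 1/B], smooth decay to 0 at t = 1
    if t <= 1.0 / B:
        return 1.0
    if t >= 1.0:
        return 0.0
    return phi2(1.0 - (2.0 * B / (B - 1.0)) * (t - 1.0 / B))

def b2(u, B):
    # STEP 4: squared window; the support of b^2 is [1/B, B]
    return phi3(u / B, B) - phi3(u, B)
```

A finite sum over $j$ suffices to check the partition of unity numerically, since only the scales with $u/B^{j}\in (1/B,B)$ contribute.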
It is a straightforward consequence of the previous construction that the set $\left\{ \Phi _{j,q,k}\right\} _{j,q,k}$ describes a tight frame over $L^{2}\left( M,d\mu \right) $, so that an exact reconstruction formula holds in this space; the details of the derivation of this result are collected in the Appendix. Indeed, let $F\in L^{2}\left( M,d\mu \right) $, i.e. the space of functions with finite norm $\Vert \cdot \Vert _{L^{2}\left( M,d\mu \right) }^{2}$, where \begin{equation} \Vert F\Vert _{L^{2}\left( M,d\mu \right) }^{2}:=\int_{0}^{2\pi }\int_{S^{2}}F^{2}(r,\vartheta ,\varphi )\,d\sigma \left( \vartheta ,\varphi \right) dr\text{ .} \label{norm} \end{equation} The 3D-needlet coefficients are defined as \begin{equation*} \beta _{j,q,k}:=\beta _{j,q,k}(F)=\int_{M}F\,\Phi _{j,q,k}\,d\mu \text{ ,} \end{equation*} or, more explicitly, \begin{eqnarray} \nonumber \beta _{j,q,k}&=&\sqrt{\lambda _{j,q,k}}\sum_{\left[ \ell ,n\right] _{j}}\sum_{m=-\ell }^{\ell }b\left( \frac{\sqrt{e_{\ell ,n}}}{B^{j}}\right) \\ &\times &a_{\ell ,m,n}u_{\ell ,m,n}\left( \xi _{j,q,k}\right) \text{ ,} \label{coeff} \end{eqnarray} where $a_{\ell ,m,n}$ is given by (\ref{ureconst}). The tight frame property then gives \begin{equation*} \Vert F\Vert _{L^{2}\left( M,d\mu \right) }^{2}=\sum_{j\geq 0}\sum_{q=1}^{Q_{j}}\sum_{k=1}^{K_{j}}|\beta _{j,q,k}|^{2}\text{ ;} \end{equation*} this property also implies the reconstruction formula \begin{equation} \label{needRecon} F\left( x\right) =\sum_{j\geq 0}\sum_{k=1}^{K_{j}}\sum_{q=1}^{Q_{j}}\beta _{j,q,k}\Phi _{j,q,k}\left( x\right) \text{ ,} \end{equation} see the Appendix for more discussion and some technical details. There are some important statistical applications of the tight frame property.
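The energy conservation underlying the tight frame property can be illustrated in the radial component alone, where the frame reduces to the orthonormal system $\exp (inr)/\sqrt{2\pi }$: for band-limited inputs, coefficients computed from uniform samples are exact, and the discrete energy matches $\sum_{n}|a_{n}|^{2}$. A self-contained sketch (function names are ours):

```python
import cmath
import math

def synth(a, r):
    # band-limited radial function f(r) = sum_n a_n exp(i n r) / sqrt(2 pi)
    return sum(an * cmath.exp(1j * n * r) for n, an in enumerate(a)) / math.sqrt(2.0 * math.pi)

def analyze(f_vals, Q, N):
    # recover a_0 .. a_{N-1} from Q uniform samples r_q = 2 pi q / Q;
    # the quadrature is exact whenever Q >= N (discrete Fourier orthogonality)
    out = []
    for n in range(N):
        s = sum(f_vals[q] * cmath.exp(-1j * n * 2.0 * math.pi * q / Q) for q in range(Q))
        out.append(s * (2.0 * math.pi / Q) / math.sqrt(2.0 * math.pi))
    return out
```

With $Q\geq 2N$ the same uniform samples also reproduce the Parseval identity $\int_{0}^{2\pi }|f|^{2}dr=\sum_{n}|a_{n}|^{2}$ exactly, which is the radial shadow of the tight-frame equality above.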
Firstly, the reconstruction property allows the implementation of denoising and image reconstruction techniques, for instance on the basis of the universally known thresholding paradigm (see for instance \cite{bkmpAoS2, donoho1, WASA}); in view of the localization properties discussed in the following paragraph, such denoising techniques will enjoy statistical optimality properties, in the sense of minimizing the expected value of the reconstruction error, defined by \begin{equation*} \mathbb{E}\left[ \left\Vert \widehat{F}-F\right\Vert _{L^{2}(M,d\mu )}^{2}\right] =\mathbb{E}\left[ \int_{M}\left( \widehat{F}(x)-F(x)\right) ^{2}d\mu \left( x\right) \right] \text{ .} \end{equation*} Here $\mathbb{E}\left[ \cdot \right] $ denotes expected value and $\widehat{F}$ the reconstructed function in the presence of additive noise with standard properties. It is important to stress that this reconstruction error is measured according to the norm introduced in (\ref{norm}), rather than the usual Euclidean measure in spherical coordinates, where integration is performed with respect to the factor $r^{2}\sin \vartheta \,dr\,d\vartheta \,d\varphi .$ In practical terms, this means that the observations at lower redshift are given a higher weight when performing image denoising; this appears a rather reasonable strategy, as most astrophysical catalogues are more complete and less noisy at lower redshift. We also note that the tight frame property allows an estimator for the averaged power spectrum of random fields to be constructed by means of the squared needlet coefficients, along the same lines as for instance in \cite{bkmpAoS} in the spherical case. More details and further investigations on all these issues are left for future research.
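As an illustration of the thresholding paradigm referenced above, a plain soft-threshold rule applied to needlet coefficients might look as follows; this is a generic sketch, not the specific estimator of the cited works, and the threshold choice is deliberately left unspecified:

```python
def soft_threshold(beta, lam):
    # shrink a single coefficient towards zero by the threshold lam
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

def denoise(coeffs, lam):
    # apply soft thresholding to a flat list of needlet coefficients beta_{j,q,k};
    # reconstruction then proceeds through the (tight frame) synthesis formula
    return [soft_threshold(b, lam) for b in coeffs]
```

Coefficients below the threshold are set to zero, while the survivors are shrunk, which is the mechanism behind the optimality results alluded to in the text.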
\subsection{Localization Properties} It is immediately seen that the functions $\left\{ \Phi _{j,q,k}\left( \cdot \right) \right\} $ are compactly supported in the harmonic domain; indeed, for any fixed $j,$ as argued before, $b(\cdot )$ is non-zero for $u\in \left( B^{-1},B\right) $, and hence it follows that $b\left( \frac{\sqrt{e_{\ell ,n}}}{B^{j}}\right) \neq 0$ only for $e_{\ell ,n}\in \left[ B^{2(j-1)},B^{2(j+1)}\right] $. For instance, for $B=\sqrt{2}$ and $j=4$ we have $8\leq e_{\ell ,n}\leq 32,$ allowing for the pairs \begin{equation*} (\ell ,n)=(1,3),(1,4),(1,5),(2,2),(2,3),(2,4),\ldots ,(5,1)\text{ .} \end{equation*} It is also easy to establish localization in the real domain by means of general results on the localization of needlet-type constructions for smooth manifolds. In particular, it follows from Theorem 2.2 in \cite{gm2}, see also \cite{pg1}, that for all $\tau \in \mathbb{N},$ there exist constants $c_{\tau }$ such that \begin{equation} \left\vert \Phi _{j,q,k}\left( x\right) \right\vert \leq \frac{c_{\tau }B^{\frac{3}{2}j}}{\left( 1+B^{j}d_{M}\left( x,\xi _{j,q,k}\right) \right) ^{\tau }}\text{ ,} \label{localineq} \end{equation} uniformly over $j,q,k$ and $x.$ It is very important to notice that the distance in the denominator is the one provided by equation (\ref{Mdistance}). An important consequence of localization concerns the behaviour of the $L^{p}$ norms of the functions $\left\{ \Phi _{j,q,k}\left( \cdot \right) \right\} .$ In particular, it can be proved by standard arguments (as for instance in \cite{npw2}) that, for all $1\leq p<\infty $, \begin{equation} \Vert \Phi _{j,q,k}\Vert _{L^{p}\left( M,d\mu \right) }^{p}:=\int_{M}\left\vert \Phi _{j,q,k}\right\vert ^{p}(x)\,d\mu \left( x\right) \approx B^{\frac{3}{2}\left( p-2\right) j}\text{ ,} \label{Lpbound} \end{equation} while \begin{equation*} \Vert \Phi _{j,q,k}\Vert _{L^{\infty }\left( M,d\mu \right) }\approx B^{\frac{3}{2}j}\text{ .} \end{equation*} This result is consistent with the general characterization of the $L^{p}$-norm of spherical needlets on $S^{d},$ which is well-known to be given by $\Vert \psi _{j,k}\Vert _{L^{p}\left( S^{d}\right) }^{p}\approx B^{\frac{d}{2}\left( p-2\right) j};$ here of course $d=3.$ The proof is completely standard, and hence omitted; we only remark that this characterization of $L^{p}$ properties plays a fundamental role when investigating the optimality of denoising and image reconstruction techniques based on wavelet thresholding, see again \cite{bkmpAoS2} for further references and discussion. \begin{figure*}[] \begin{center} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{fr_orig_recon.pdf}} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{an_orig_recon.pdf}} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{an_diff_orig_recon.pdf}} \end{center} \caption{Radial line synthesis and analysis: input (black curve) and reconstructed (red curve) function $f(r)$ (left panel), original and reconstructed radial harmonic coefficients $a_{n}$ (middle panel), difference between input and reconstructed $a_{n}$ (right panel). \label{fig:fr2an}} \end{figure*} \section{A comparison with alternative constructions} The ingredients for the construction of localized tight frames on a compact manifold are now well-understood: one starts from a family of eigenfunctions and the associated projection kernel, considers a window function to average these projectors over a bounded subset of frequencies, and then proceeds to discretization by means of a suitable set of cubature points and weights, see for instance \cite{gm1, gm2, npw1, npw2, pes1, pes2, pesenson2}. The localization and tight frame properties are then easy consequences of general results.
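The band $8\leq e_{\ell ,n}\leq 32$ quoted above for $B=\sqrt{2}$, $j=4$ can be verified by brute-force enumeration; the code below is ours (note that it also returns the boundary cases with $\ell =0$ or $n\leq 1$, which the abbreviated list in the text elides):

```python
import math

def pairs_in_band(B, j, l_max=100, n_max=100):
    # all (l, n) with B**(2*(j-1)) <= e_{l,n} = n**2 + l*(l+1) <= B**(2*(j+1))
    lo, hi = B ** (2 * (j - 1)), B ** (2 * (j + 1))
    return [(l, n) for l in range(l_max + 1) for n in range(n_max + 1)
            if lo <= n * n + l * (l + 1) <= hi]

band = pairs_in_band(math.sqrt(2.0), 4)  # B = sqrt(2), j = 4: band 8 <= e_{l,n} <= 32
```

Every pair quoted in the displayed list indeed falls inside the band, while neighbouring pairs such as $(1,2)$ (with $e_{1,2}=6$) are excluded.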
A natural question then arises, namely: what would be the properties of a construction based on a different choice of eigenfunctions, corresponding for instance to the standard Laplacian in spherical coordinates (i.e., (\ref{standlap}) rather than (\ref{ourlap}))? Indeed, a full system of eigenfunctions for the Laplacian in spherical coordinates is well-known to be given by \begin{equation*} E_{\ell ,m,k}(r,\omega )=\sqrt{\frac{2}{\pi }}\frac{J_{\ell +\frac{1}{2}}(kr)}{\sqrt{kr}}Y_{\ell ,m}(\omega )\text{ , }r\in \lbrack 0,1]\text{ , }\omega \in S^{2}, \end{equation*} which can be discretized by imposing the boundary conditions $E_{\ell ,m,k}(1,\omega )\equiv 0,$ yielding the family $\left\{ E_{\ell ,m,k_{\ell ,p}}(r,\omega )\right\} ,$ for $k_{\ell ,p}$ the zeroes of the Bessel function $J_{\ell +\frac{1}{2}}(\cdot ).$ Writing $e(\ell ,k_{\ell ,p})$ for the corresponding set of eigenvalues, a needlet-type construction would then lead to the following proposal: \begin{eqnarray*} \Psi _{j,q,k}(r,\omega )&=&\sqrt{\lambda _{j,q,k}}\sum_{\ell ,m}\sum_{k_{\ell ,p}}b\left( \frac{\sqrt{e(\ell ,k_{\ell ,p})}}{B^{j}}\right) \\ &\times &E_{\ell ,m,k_{\ell ,p}}(r,\omega )E_{\ell ,m,k_{\ell ,p}}(r_{q},\omega _{j,k})\text{ ,} \end{eqnarray*} which is close to the starting point of the construction in \cite{mcewen}, the main difference being that their weight function is actually a product of a radial and a spherical part, and its argument is not immediately related to the eigenvalues of the summed eigenfunctions (see also \cite{starck} for 3D isotropic wavelets based on Bessel functions).
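To make concrete the earlier point that Fourier-Bessel cubature points are not explicit, note that the zeroes $k_{\ell ,p}$ must be found numerically. For half-integer order, $J_{\ell +1/2}$ is proportional to $\sqrt{x}\,j_{\ell }(x)$ with $j_{\ell }$ the spherical Bessel function, so a simple bisection sketch (ours) already illustrates the procedure; the first zero of $j_{0}$ is exactly $\pi $, while that of $j_{1}$ solves the transcendental equation $\tan x=x$:

```python
import math

def j0(x):
    # spherical Bessel j_0(x) = sin(x)/x, sharing its zeroes with J_{1/2}
    return math.sin(x) / x

def j1(x):
    # spherical Bessel j_1(x) = sin(x)/x**2 - cos(x)/x, sharing its zeroes with J_{3/2}
    return math.sin(x) / (x * x) - math.cos(x) / x

def bisect_zero(f, a, b, n_iter=200):
    # plain bisection on a sign-changing bracket [a, b]
    fa = f(a)
    for _ in range(n_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)
```

Already for $\ell =1$ the zeroes are transcendental, which is the practical difficulty with Fourier-Bessel cubature alluded to in the Introduction.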
It is then easy to show that $\Psi _{j,q,k}(\cdot ,\cdot )$ enjoys a related form of localization property: namely, for all $\tau \in \mathbb{N},$ there exist constants $c_{\tau }$ such that \begin{equation} \left\vert \Psi _{j,q,k}(x)\right\vert \leq \frac{c_{\tau }B^{\frac{3}{2}j}}{\left( 1+B^{j}d\left( x,\xi _{j,q,k}\right) \right) ^{\tau }}\text{ ,} \label{loc3d} \end{equation} where $x=(r,\omega )$, $\xi _{j,q,k}=(r_{q},\xi _{jk})$ and $d(\cdot ,\cdot )$ denotes, as before, the standard Euclidean distance. It is then important to stress the different merits of this construction with respect to the one we focussed on earlier: 1) for the system $\left\{ \Psi _{j,q,k}\right\} $ distances are evaluated with a standard Euclidean metric, and the centre of the ball represents a mere choice of coordinates, i.e., the radial part depends just on the choice of coordinates and does not necessarily correspond to a specific physical meaning; 2) for the system $\left\{ \Phi _{j,q,k}\right\} $ distances are very much determined by the choice of the centre, and the radial part does have a specific interpretation, i.e., the distance to the observer. The rationale underlying 2) was already explained earlier in this paper: we envisage a situation where an observer located at the centre of a ball observes a surrounding Universe made up of concentric spheres sharing the same pixelization. As a consequence, ``closer'' spheres, i.e. those corresponding to smaller radii, are observed at the same angular resolution as more distant ones - in Euclidean coordinates, this obviously means that the sampling is finer. Our construction of radial needlets simply reflects this basic feature: as a consequence, for any given frequency, needlets are more localized in proper distance for points located on spheres closer to the observer.
This may appear quite rational, to the extent that for these closer regions the signal-to-noise ratio is higher and hence the reconstruction may proceed to finer details than in the outer regions. The practical performance of these ideas is tested in the Section to follow. \section{Numerical Evidence\label{sec:algorithm}} In this Section we describe the numerical implementation of the algorithm developed in this paper. As far as the spherical component is concerned, our code exploits the well-tested HEALpix~ pixelization code (cfr. \cite{gorski}), while for the radial part we use an equidistant pixelization, which is extensively tested below. To investigate the numerical stability and precision properties of the construction we propose, we shall analyze an input band-limited function on the ball, which we simulate using radial and angular test power spectra. \begin{figure*}[] \begin{center} \includegraphics[width=0.30\textwidth,angle=0]{\figdir{ball_r51.pdf}} \includegraphics[width=0.30\textwidth,angle=0]{\figdir{almn2ball_r51.pdf}} \bigskip \includegraphics[width=0.30\textwidth,angle=0]{\figdir{ball_almn2ball_diff_r51.pdf}} \\ \includegraphics[width=0.30\textwidth,angle=0]{\figdir{fr_ball_almn2ball_ipix201.pdf}} \includegraphics[width=0.30\textwidth,angle=0]{\figdir{fr_ball_almn2ball_diff_ipix201.pdf}} \end{center} \caption{Harmonic space synthesis and analysis: input function at the 51st shell (top left panel), reconstructed function at the 51st shell (top middle panel), difference between input and reconstructed functions at the 51st shell (top right panel), input and reconstructed radial function at HEALpix~ pixel 201 (bottom left panel), and difference between input and reconstructed radial function at HEALpix~ pixel 201 (bottom right panel).
\label{fig:ball2almn}} \end{figure*} \subsection{Radial reconstruction} Given a function $f\in L^{2}\left( \left[ 0,2\pi \right] ,dr\right) $, we obtain its radial harmonic coefficients by decomposing it in terms of the radial eigenfunctions, i.e., for $n=0,1,2,\dots $, \begin{equation} a_{n}=\int_{0}^{2\pi }f(r)\frac{\exp (-inr)}{\sqrt{2\pi }}dr\text{ .} \label{eqn:fr2an} \end{equation} The reconstruction of the input function can then be obtained via \begin{equation} \label{eqn:an2fr} f(r)=\sum_{n=0}^{\infty }a_{n}\frac{\exp (inr)}{\sqrt{2\pi }}\text{ .} \end{equation} By the standard Euler identity $\exp (inr)=\cos nr+i\sin nr,$ the integral in \eqref{eqn:fr2an} involves just sine and cosine functions. The Filon-Simpson algorithm developed by \cite{rosenfeld} can evaluate such integrals with arbitrary precision as the radial grid is refined. To test our radial integration, we used as input the $a_{n}$ coefficients shown as black curves in \figref{fig:fr2an} and the reconstruction formula \eqref{eqn:an2fr} to obtain $f(r)$, which we evaluate at $N_{r}=256$ points. After performing the forward and backward transformations, the reconstructed $f(r)$ and $a_{n}$, as well as their differences with respect to the input values, are shown as red curves in \figref{fig:fr2an}. The differences in the radial spectra are smaller than $10^{-4}$ at all radial multipole values. This validates our radial analysis and synthesis routines. \subsection{Discretization of functions on the unit ball} To describe the reconstruction of a band-limited function through the pixelization introduced here, we start from an angular power spectrum $C_{\ell }$, derived from the CAMB \cite{lewis} $\Lambda $CDM 3D matter power spectrum at redshift $z=0.5$ and projected to 2D through the Limber approximation; using HEALpix, we then generated a set of random spherical harmonic coefficients from this power spectrum. We arbitrarily fixed the maximum angular multipole to $\ell _{\mathrm{max}}=65$.
For the radial component, we chose $n_{\mathrm{max}}=25$ and generated a set of Fourier coefficients $a_n$, which are plotted in the middle box of \figref{fig:fr2an}. We then convolved the corresponding radial and spherical components to obtain the desired function on the 3D ball. Our test function $f(r)$ is evaluated at $N_r=256$ radial points, while the angular part is defined on HEALpix $N_{\mathrm{side}}=64$ pixels. Our numerical 3D grid has a total of $N_r\times 12N_{\mathrm{side}}^2$ pixels. We stress again that our main interest here is to test the accuracy of the codes, so at this stage we are not concerned with the physical interest of the functions to be reconstructed; the analysis of more physically motivated models, such as maps from $N$-body simulations, will be reported in another paper. \begin{figure*}[] \begin{center} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{ball2beta_r51_jmap2.pdf}} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{ball2beta_r51_jmap4.pdf}} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{ball2beta_r51_jmap6.pdf}}\\[0pt] \includegraphics[width=0.3\textwidth,angle=0]{\figdir{fr_ball2beta_ipix201_jmap2.pdf}} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{fr_ball2beta_ipix201_jmap4.pdf}} \includegraphics[width=0.3\textwidth,angle=0]{\figdir{fr_ball2beta_ipix201_jmap6.pdf}} \end{center} \caption{3D needlet component maps: the upper panels show the angular part of the corresponding needlet component map at the 51st shell, while the lower panels show the radial part at the 201st HEALpix pixel. The $j^{\mathrm{th}}$ component has compact support over the multipoles $B^{j-1}<\protect\sqrt{\ell(\ell+1)+n^{2}}<B^{j+1}$. \label{fig:jmaps}} \end{figure*} In a similar spirit to the highly popular spherical algorithms proposed by \cite{gorski}, we have developed the ball equivalents of the HEALpix \emph{alm2map} and \emph{map2alm} codes, which are named \emph{almn2ball} and \emph{ball2almn}.
These two routines, respectively, solve the analysis and synthesis equations \eqref{ureconst} and \eqref{usum}. Similarly, we have \emph{ball2beta} and \emph{beta2ball} to solve equations \eqref{coeff} and \eqref{needRecon}, performing needlet space analysis and synthesis respectively. We have optimized these codes so that they are fast and run either in serial or in parallel mode; we have fully exploited the rigorously tested and well-known HEALpix routines, so that researchers familiar with HEALpix should find our code rather intuitive. The modularity of our code, moreover, means that adapting our routines to other pixelization packages will be straightforward. To test the accuracy of our code, both in harmonic and in real space, in \figref{fig:ball2almn} we show the original and reconstructed function on the unit ball sliced at the 50th radial shell, together with the radial function at the 200th HEALpix pixel. It can be checked from the real-space difference plots in the same figure that the residuals are very small and comparable to the HEALpix accuracy. The difference in harmonic space is such that both the radial and angular spectra are recovered with accuracy $<10^{-4}$ at all multipoles. \subsection{Radial 3D needlet synthesis and analysis} Here we start from the $a_{\ell,m,n}$ coefficients obtained from the analysis of the previous section and we compute the needlet coefficients $\beta_{j,q,k}$ through our routine \emph{almn2beta}, which implements \eqref{3dneed}. Optionally, one can start directly from the discretization of the function on the ball and call \emph{ball2beta} to get $\beta_{j,q,k}$ directly. The needlet parameters we use in this analysis are $j=0,1,\dots,N_{j}-1$, $q=0,1,\dots,Q_{j}$, $n_{k}=0,1,\dots,K_{j}-1$, where $N_{j}=7$, $Q_{j}=N_{r}=256$, and $K_{j}=12\times 64^{2}$. To reconstruct the function on the ball from the needlet coefficients we call \emph{beta2ball}.
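The compact harmonic support quoted in the figure caption above, $B^{j-1}<\sqrt{\ell(\ell+1)+n^2}<B^{j+1}$, determines which $(\ell,n)$ pairs contribute to the $j$-th needlet map. A small illustrative helper in Python (the function name and the choice $B=2$ are our assumptions, not part of the paper's code):

```python
import math

def needlet_band(j, B=2.0, lmax=65, nmax=25):
    """Return the (ell, n) pairs with B**(j-1) < sqrt(ell(ell+1) + n^2) < B**(j+1)."""
    lo, hi = B ** (j - 1), B ** (j + 1)
    pairs = []
    for ell in range(lmax + 1):
        for n in range(nmax + 1):
            x = math.sqrt(ell * (ell + 1) + n * n)
            if lo < x < hi:
                pairs.append((ell, n))
    return pairs

# Adjacent scales share multipoles, as expected for overlapping needlet bands.
band2, band3 = needlet_band(2), needlet_band(3)
print(len(band2), len(set(band2) & set(band3)))
```

Note that adjacent bands overlap by construction, which is a generic feature of needlet windows.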
The pixel-by-pixel and harmonic-space accuracy of the function reconstructed from needlet coefficients is almost identical to that of the function reconstructed from the $a_{\ell,m,n}$ coefficients alone, thus providing very reassuring evidence on the accuracy of the algorithm. As mentioned earlier, further evidence on simulations with more realistic experimental conditions is left to future research. In \figref{fig:jmaps}, we show the needlet component maps. The $j^{\mathrm{th}}$ needlet component map has compact support on the range of combined radial and angular multipoles such that $B^{j-1}<\sqrt{\ell(\ell+1)+n^2}<B^{j+1}$.
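As a minimal numerical check of the radial transform pair of Eqs. (\ref{eqn:fr2an})--(\ref{eqn:an2fr}), one can synthesize a band-limited $f(r)$ from random coefficients and re-analyze it. The Python sketch below (ours, not the paper's code) replaces the Filon-Simpson quadrature with the plain periodic trapezoidal rule, which is already effectively exact for band-limited integrands on an equidistant grid:

```python
import numpy as np

rng = np.random.default_rng(0)
n_max, N_r = 25, 256                      # band limit and radial grid size
a_in = rng.normal(size=n_max + 1) + 1j * rng.normal(size=n_max + 1)

r = 2.0 * np.pi * np.arange(N_r) / N_r    # equidistant grid on [0, 2*pi)
n = np.arange(n_max + 1)

# Synthesis (an2fr): f(r) = sum_n a_n exp(i n r) / sqrt(2 pi)
f = (a_in[None, :] * np.exp(1j * n[None, :] * r[:, None])).sum(axis=1) / np.sqrt(2 * np.pi)

# Analysis (fr2an): a_n = int_0^{2 pi} f(r) exp(-i n r) / sqrt(2 pi) dr,
# approximated by the periodic trapezoidal rule (weight 2*pi/N_r per node).
a_out = (f[:, None] * np.exp(-1j * n[None, :] * r[:, None])).sum(axis=0) \
        * (2 * np.pi / N_r) / np.sqrt(2 * np.pi)

print(np.max(np.abs(a_out - a_in)))       # round-trip error, machine precision
```

Since $n_{\mathrm{max}}\ll N_r/2$, discrete orthogonality of the exponentials on the equidistant grid makes this round trip exact up to floating-point error, comfortably below the $10^{-4}$ level quoted in the text.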
\section{Introduction} Suspensions of semi-flexible polymers exhibit a variety of dynamical phenomena, of great importance to both physics and biology, that are still only partially understood. Advances over the past decade include direct visual evidence for a reptation-like diffusion of individual polymers in a highly entangled isotropic solution and shape anisotropy of a single polymer~\cite{Perkins94,Smith95,Kas96,Haber00}. If the concentration of the polymers is increased, a suspension undergoes a first order phase transition to a nematic phase, which has long range orientational order but no long range positional order. As a result of the broken orientational symmetry, it is expected that the diffusion of polymers in nematic liquid crystals will be drastically different from that in concentrated isotropic solutions. While the static phase behavior of semi-flexible nematic polymers is well understood in terms of the Onsager theory and its extensions by Khokhlov and Semenov~\cite{Onsager49,Khokhlov81}, the dynamics of semi-flexible polymers in the nematic phase is much less explored~\cite{Dogic04}. In this paper, we determine the concentration dependence of the anisotropic diffusion of semi-flexible viruses in a nematic phase and compare it to the diffusion in the isotropic phase. Experimentally, the only data on the translational diffusion of colloidal rods in the nematic phase were taken in a mixture of labelled and unlabelled polydisperse boehmite rods using fluorescence recovery after photobleaching (FRAP)~\cite{Bruggen98}. Theoretically, molecular dynamics simulations were performed on hard spherocylinder and ellipsoidal systems, from which anisotropic diffusion data were extracted~\cite{Allen90,Hess91,Lowen99}. Anisotropic diffusion has also been studied in low molecular weight thermotropic liquid crystals using NMR spectroscopy or inelastic scattering of neutrons~\cite{deGennes93}.
Real space microscopy is a powerful method that can reveal dynamics of colloidal and polymeric liquid systems that are inaccessible to other traditional techniques~\cite{Kas96,Dogic04}. We use digital microscopy to directly visualize the dynamics of fluorescently labelled {\it fd} in a nematic background of unlabelled {\it fd}. The advantages of this method are an easy interpretation of the data and no need to obtain macroscopically aligned monodomains in magnetic fields. The advantages of using {\it fd} are its large contour length, which can be easily visualized with an optical microscope, and its phase behavior, which can be quantitatively described with the Onsager theory extended to account for electrostatic repulsive interactions and semi-flexibility~\cite{Tang95,Purdy03}. Viruses such as {\it fd} and TMV have been used earlier to study rod dynamics in the isotropic phase~\cite{Cush02}. \section{Experimental methods} The physical characteristics of the bacteriophage {\it fd} are its length L=880 nm, diameter D=6.6 nm, persistence length of 2200 nm and surface charge of 10 e$^-$/nm at pH 8.2~\cite{Dogic01}. A bacteriophage {\it fd} suspension forms isotropic, cholesteric and smectic phases with increasing concentration~\cite{Dogic97,Dogic00c,Dogic01}. The free energy difference between the cholesteric and nematic phases is very small and locally the cholesteric phase is identical to the nematic. We expect that at short time scales the diffusion of the rods in these two cases would be the same. Hereafter, we refer to the liquid crystalline phase at intermediate concentration as a nematic instead of a cholesteric. The {\it fd} virus was prepared according to a standard biological protocol using the XL1-Blue strain of {\it E. coli} as the host bacteria~\cite{Maniatis89}. The yield is approximately 50 mg of {\it fd} per liter of infected bacteria, and the virus is typically grown in 6 liter batches.
Subsequently, the virus is purified by repetitive centrifugation (108,000 g for 5 hours) and re-dispersed in a 20 mM phosphate buffer at pH=7.5. The first order isotropic-nematic (I-N) phase transition for {\it fd} under these conditions takes place at a rod concentration of 15.5 mg/ml. Fluorescently labelled {\it fd} viruses were prepared by mixing 1 mg of {\it fd} with 1 mg of succinimidyl ester Alexa-488 (Molecular Probes) for 1 hour. The dye reacts with free amine groups on the virus surface to form irreversible covalent bonds. The reaction is carried out in a small volume (100 $\mu$l, 100 mM phosphate buffer, pH=8.0) to ensure a high degree of labelling. Excess dye was removed by repeated centrifugation steps. Absorbance spectroscopy indicates that there are approximately 300 dye molecules per {\it fd} virus. Viruses labelled with fluorescein isothiocyanate, a dye very similar to Alexa 488, exhibit phase behavior identical to that of unlabelled virus. Since liquid crystalline phase behavior is a sensitive test of the interaction potential, it is reasonable to assume that the interaction potential between labelled viruses is very similar to that between unlabelled viruses. The samples were prepared by mixing one unit of anti-oxygen solution (2 mg/ml glucose oxidase, 0.35 mg/ml catalase, 30 mg/ml glucose and 5\% $\beta$-mercaptoethanol), one unit of a dilute dispersion of Alexa 488 labelled viruses and eight units of the concentrated {\it fd} virus suspension at the desired concentration. Under these conditions the fluorescently labelled viruses are relatively photostable and it is possible to continuously observe rods for 3-5 minutes without significant photobleaching. The ratio of labelled to unlabelled particles is kept at roughly 1:30000. The samples were prepared by placing 4 $\mu$l of solution between a No. 1.5 coverslip and a glass slide. The thickness of the samples is about 10 $\mu$m. Thin samples are important to reduce the signal of out-of-focus particles.
Samples are equilibrated for half an hour, allowing flows to subside and liquid crystalline defects to anneal. We have analyzed data at various distances from the wall and have not been able to observe a significant influence of the wall on the diffusion of the viruses. For imaging we used an inverted Nikon TE-2000 microscope equipped with a $100\times$ 1.4 NA PlanApo oil immersion objective, a 100 W mercury lamp and a fluorescence cube for the Alexa 488 fluorescent dye. The images were taken with a cooled CCD camera (CoolSnap HQ, Roper Scientific) set to an exposure time of 60 ms, running in overlap mode at a rate of 16 frames per second with $2\times2$ binning. The pixel size was 129 nm and the field of view was 89 $\mu$m $\times$ 66 $\mu$m. Typically there were around one hundred fluorescently labelled rods in the field of view. For each {\it fd} concentration ten sequences of four hundred images were recorded. \begin{figure} \centerline{\epsfig{file=figure1.eps,width=3in}} \caption{\label{rodsinrods} (a) Image of fluorescently labelled rods dissolved in a background nematic phase of unlabelled rods. Scale bar is 5 $\mu$m. (b) Two-dimensional gaussian fit to an individual rod. Arrows indicate the long and short axes. The circle indicates the center of mass. From this fit it is possible to obtain the orientation of an individual {\it fd} rod. Pixel size is 129 nm.} \end{figure} \section{Analysis method} Figure \ref{rodsinrods}a shows a typical image of fluorescently labelled rods in a background nematic of unlabelled rods. Due to the limited spatial and temporal resolution of the optical microscope, a labelled {\it fd} appears as a slightly anisotropic rod, although the actual aspect ratio is larger than 100. To measure the anisotropic diffusion in the nematic phase, it is first necessary to determine the nematic director, which has to be uniform within the field of view. Spatial distortion of the nematic would significantly affect our results.
The centers of mass and orientations of the rods are obtained sequentially. In a first step, a smoothed image is used to identify the rods and obtain the coordinates of their centers of mass using image processing code written in IDL~\cite{IDLnote}. Subsequently, a two dimensional gaussian fit around the center of mass of each rod is performed (Fig.~\ref{rodsinrods}b). From this fit the orientation of each rod-like virus is obtained. This procedure is then repeated for a sequence of images. The length of a trajectory is usually limited to a few seconds, after which the particles diffuse out of focus. In Fig.~\ref{coorcloud}a and b we plot the trajectories of an ensemble of particles for both an isotropic and a nematic sample. As expected, the trajectories in the isotropic phase are spherically symmetric (Fig.~\ref{coorcloud}a) while those in the nematic phase exhibit a pronounced anisotropy (Fig.~\ref{coorcloud}b). The symmetric nature of the distribution indicates that there is no drift or flow in our samples. We obtain the orientation of the nematic director using two independent methods. One method is to measure the main axis of the distribution shown in Fig.~\ref{coorcloud}b. This procedure assumes that the diffusion is largest along the nematic director. An alternative method is to plot a histogram of rod orientations, which are obtained from 2D gaussian fits to each rod (Fig.~\ref{rodsinrods}b). The resulting orientational distribution function (ODF) is shown in Fig.~\ref{coorcloud}c. In principle, it should be possible to obtain both the nematic director and the order parameter from the ODF shown in Fig.~\ref{coorcloud}c. We find that the order parameter obtained in such a way is systematically higher than the order parameter obtained from more reliable x-ray experiments~\cite{Purdy03}. This is due to the significant rotational diffusion each rod undergoes during the exposure time of 60 ms.
The difference in the orientation of the nematic director obtained using these two methods is always less than 5 degrees. For the example shown in Figs. \ref{coorcloud}b and c, we obtain a nematic director at an angle of $31.2\;^{\circ}$, while the peak of the orientational distribution function lies at $30.2\;^{\circ}$. The director can be ``placed'' along one of the two main axes by rotating the lab-frame. \begin{figure} \centerline{\epsfig{file=figure2.eps,width=5.5in}} \caption{\label{coorcloud} (a) A collection of trajectories of fluorescently labelled virus particles in the isotropic phase. All trajectories are translated so that the first point is located at the origin. For clarity we only show the centers of mass and not lines connecting subsequent points in a particle trajectory. The concentration of virus in this sample was 14 mg/ml. (b) Anisotropic trajectories of the fluorescently labelled viruses diffusing in the nematic phase. The concentration of the background virus in this sample was 21 mg/ml. x' and y' indicate the new lab-frame in which the director is aligned along the y' axis. (c) The orientational distribution function obtained by plotting the probability distribution of the virus orientation for the isotropic (open circles) and nematic phase (full squares). The orientation of the virus is obtained from two-dimensional gaussian fits, an example of which is shown in Fig.~\ref{rodsinrods}b. The angles of the nematic director obtained from (b) and (c) are almost identical. } \end{figure} The diffusion coefficients of the rods parallel ($D_{\parallel}$) and perpendicular ($D_{\perp}$) to the director are calculated from the x'- and y'-components of the mean square displacement.
When the director lies along the y'-axis, $D_{\parallel}$ and $D_{\perp}$ are given by: \begin{eqnarray}\label{Eqmsd} D_{\parallel}=\frac{1}{N}\frac{1}{2}\sum\{y'_i(t)-y'_i(0)\}^2\\ D_{\perp}=\frac{1}{N}\frac{1}{\sqrt{2}}\sum\{x'_i(t)-x'_i(0)\}^2, \end{eqnarray} \noindent where $N$ is the number of traced particles. To obtain $D_{\perp}$, $D_{x}$ is multiplied by $\sqrt{2}$ since only one component of the diffusion perpendicular to the director is measured. The underlying assumption of our analysis is that the nematic director lies within the plane of the field of view. For 10 $\mu$m thin samples this is reasonable. \section{Results and discussion} Typical mean square displacements (MSD) are shown in Fig. \ref{correlations} for samples in the isotropic and nematic phases. On average the mean square displacement was linear over fifty frames in the nematic phase, but only over about twenty five frames in the isotropic phase. The diffusion perpendicular to the director is slower in the nematic phase as compared to the isotropic phase. Therefore in the nematic phase the particles stay in focus longer and can be tracked for a longer time. Since the MSD is linear over the entire time range and displacements are up to a few times the particle length, we are measuring pure long-time self-diffusion. Visual inspection of the trajectories in the concentrated isotropic phase, just below I-N coexistence, shows no characteristics of the reptation observed in suspensions of long DNA fragments or actin filaments~\cite{Smith95,Kas96}. This points to the fact that {\it fd} is very weakly entangled in a concentrated isotropic suspension, in agreement with recent microrheology measurements of {\it fd} suspensions~\cite{Addas04}. We note that MSDs obtained from a few hundred trajectories within a single field of view are very accurate. However, if we move to another region of the sample we obtain an MSD with a slightly different slope.
This leads to the conclusion that the largest source of error in measuring the anisotropic diffusion coefficients is the uniformity of the nematic director within the field of view. \begin{figure} \centerline{\epsfig{file=figure3.eps,width=3in}} \caption{\label{correlations} The mean square displacement of rods along the director (full cubes) and perpendicular to the director (full triangles) for a nematic sample at a virus concentration of 21 mg/ml. The isotropic data are given by the open points and were taken just below the I-N phase transition at a virus concentration of 14 mg/ml. The diffusion along the director is significantly enhanced when compared to the diffusion in the isotropic phase, while the diffusion perpendicular to the director is significantly suppressed. The mean square displacements shown in this figure are measured from a single field of view.} \end{figure} The concentration dependence of the anisotropic diffusion constants is shown in Fig.~\ref{diffvscon}a. The nematic phase melts into an isotropic phase at low concentrations and freezes into a smectic phase at high concentrations. We made an attempt to measure the diffusion of rods in the smectic phase, but have not seen any appreciable diffusion on optical length scales over a time period of minutes. The most striking feature of our data is a strong discontinuity in the behavior of the diffusion constants at the I-N phase transition. Compared to the diffusion constant in the isotropic case, $D_{\mbox{\scriptsize iso}}$, $D_{\parallel}$ is larger by a factor of four, while $D_{\perp}$ is smaller by a factor of two. The concentration dependences of $D_{\parallel}$ and $D_{\perp}$ exhibit different behavior. With increasing concentration, for $D_{\parallel}$ we measure an initial plateau, which is followed by a broad region where the diffusion rate decreases monotonically. $D_{\perp}$, however, shows a monotonic decrease of the diffusion constant over the whole concentration range where the nematic phase is stable.
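The analysis pipeline described above -- rotating trajectories into the director frame and reading $D_{\parallel}$ and $D_{\perp}$ off the slopes of the mean square displacements -- can be sketched on simulated Brownian tracks. All names and parameter values below are our illustrative assumptions, and we use the standard one-dimensional relation MSD $=2Dt$ per component:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, n_rods = 1 / 16, 400, 100              # 16 fps, 400-frame sequences
D_par, D_perp, theta = 0.4, 0.05, np.deg2rad(31.2)  # um^2/s; director angle

# Simulate anisotropic Brownian steps in the director frame (x' = perp, y' = par).
steps = np.stack([rng.normal(0, np.sqrt(2 * D_perp * dt), (n_rods, n_steps)),
                  rng.normal(0, np.sqrt(2 * D_par * dt), (n_rods, n_steps))], axis=-1)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])            # director frame -> lab frame
xy = np.cumsum(steps @ R.T, axis=1)        # lab-frame trajectories

# Rotate back so the director lies along y', then fit MSD slopes vs lag time.
xy_p = xy @ R
lags = np.arange(1, 50)
msd_x = np.array([np.mean((xy_p[:, l:, 0] - xy_p[:, :-l, 0]) ** 2) for l in lags])
msd_y = np.array([np.mean((xy_p[:, l:, 1] - xy_p[:, :-l, 1]) ** 2) for l in lags])
D_perp_est = np.polyfit(lags * dt, msd_x, 1)[0] / 2
D_par_est = np.polyfit(lags * dt, msd_y, 1)[0] / 2
print(f"D_par ~ {D_par_est:.3f}, D_perp ~ {D_perp_est:.3f} um^2/s")
```

With a few hundred trajectories the fitted slopes recover the input constants to within a few percent, consistent with the statement that MSDs from a single field of view are very accurate.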
It is useful to compare our results to previous theoretical and experimental work, especially the measurements of the diffusion coefficients for silica coated boehmite rods~\cite{Bruggen98}. In that work the authors measure $D_{\parallel}/D_{\perp}\approx 2$ for monodomain nematic samples which are in coexistence with the isotropic phase. This is significantly different from $D_{\parallel}/D_{\perp}\approx 7.5$ for the {\it fd} virus. Another significant difference is that the results on boehmite indicate that both $D_{\parallel}$ and $D_{\perp}$ are smaller than $D_{\mbox{\scriptsize iso}}$, in contrast to our measurements, where $D_{\parallel}$ is much larger and $D_{\perp}$ is much smaller than $D_{\mbox{\scriptsize iso}}$. When comparing our data to simulations of the diffusion of hard spherocylinders and ellipsoids~\cite{Lowen99,Allen90}, one needs to compare equivalent samples. Scaling to the rod concentration where the I-N transition takes place would be erroneous, since the {\it fd} virus is a semi-flexible rod. The semi-flexibility of the virus drives the isotropic-nematic phase transition to higher concentrations and significantly decreases the order parameter of the nematic phase in coexistence with the isotropic phase~\cite{Tang95,Purdy03}. We choose to compare data and simulations at the same value of the nematic order parameter, which is determined independently~\cite{Purdy03}. For {\it fd}, the nematic order parameter is 0.65 at I-N coexistence, increases monotonically with increasing rod concentration and saturates at high rod concentration. Experiment and simulation qualitatively agree and both show a rapid increase of the $D_{\parallel}/D_{\perp}$ ratio with increasing nematic order parameter (Fig.~\ref{diffvscon}b). We note that there is a discrepancy between the simulation results obtained in references~\cite{Lowen99,Allen90}, which might be due to the different systems studied in these two papers.
Interestingly, simulations predict that upon increasing the rod concentration beyond I-N coexistence $D_{\parallel}$ first increases and subsequently, upon approaching the smectic phase, decreases. The authors argue that the non-monotonic behavior of $D_{\parallel}$ is the result of the interplay between two effects. First, with increasing rod concentration the nematic order parameter increases, which enhances $D_{\parallel}$. Second, with increasing rod concentration there is less free volume, which leads to a decrease of $D_{\parallel}$. The authors further argue that the first effect dominates at low rod concentrations, where the nematic order parameter rapidly increases, while the second effect dominates at high rod concentrations, where the nematic order parameter is almost saturated. In contrast, both of these effects contribute to a monotonic decrease in $D_{\perp}$ with increasing concentration, which is observed in simulations. Due to the relatively large error in our experimental data, it is not clear if the behavior of $D_{\parallel}$ is non-monotonic. There is an initial plateau, but $D_{\parallel}$ decreases over most of the concentration range. This difference between simulations and experiment might arise because we compare experiments on semi-flexible {\it fd} to simulations of perfectly rigid rods. Compared to semi-flexible rods, the order parameter of rigid rods increases much faster with increasing rod concentration~\cite{Purdy03}. \begin{figure} \centerline{\epsfig{file=figure4.eps,width=5.5in}} \caption{\label{diffvscon} (a) The concentration dependence of the translational diffusion parallel ($D_{\parallel}$) and perpendicular ($D_{\perp}$) to the nematic director, indicated by squares and triangles respectively. The nematic phase in coexistence with the isotropic phase occurs at $c_{fd}$=15.5 mg/ml and is indicated by a vertical line. The x-axis is rescaled so that the I-N transition takes place at $[fd]_N$=1.
(b) The dimensionless ratio of the parallel to perpendicular diffusion constants, $D_{\parallel}/D_{\perp}$, as a function of the nematic order parameter. The concentration dependence of the nematic order parameter is taken from ref.~\protect\cite{Purdy03}. Open triangles are data for hard spherocylinders with aspect ratio 10 taken from ref.~\protect\cite{Lowen99}, while open circles are data for ellipsoids with aspect ratio 10 taken from~\protect\cite{Allen90}.} \end{figure} It would be of interest to extend our measurements to rotational diffusion in the isotropic and nematic phases. At present the rod undergoes significant rotational diffusion during each exposure, which reduces resolution and prevents accurate determination of the instantaneous orientation of a rod. It might be possible to significantly reduce the exposure time by either using a more sensitive CCD camera or a more intense laser as an illumination source. \section{Conclusions} Using fluorescence microscopy we have visualized rod-like viruses and measured the anisotropic long-time self-diffusion coefficients in the isotropic and nematic phases. In the nematic phase both the diffusion along the director and the diffusion perpendicular to the director decrease monotonically with increasing rod concentration. The ratio of parallel to perpendicular diffusion increases monotonically with increasing rod concentration. The results agree qualitatively with simulations on hard rods with moderate aspect ratios. \acknowledgments Pavlik Lettinga is supported in part by Transregio SFB TR6, ``Physics of colloidal dispersions in external fields''. Zvonimir Dogic is supported by a Junior Fellowship from the Rowland Institute at Harvard.
Bladensburg High School. Motto: "Where Excellence Is A Deliberate Practice". Principal: B. Aisha Mahoney. Address: 4200 57th Ave, Bladensburg, Maryland, U.S. Coordinates: 38°56′27″N 76°55′3″W. Colors: maroon, white, and black. Website: http://www.pgcps.org/~blade/ Bladensburg High School is a public high school located in Bladensburg, Maryland. The school, which serves grades 9 through 12, is part of the Prince George's County Public Schools district. Bladensburg High School's history dates back to 1936, when the high school became a four-year school. Bladensburg's old facility was torn down after school let out for summer 2001; a new facility was built in its place and opened in 2005. On June 11, 2009, the class of 2009 graduated from the school at the Show Place Arena in Upper Marlboro, Md., making them the first class to study all four years in the new facility. Bladensburg High School serves the towns of Bladensburg, Cheverly, Colmar Manor, and Edmonston.
The school also serves students from across the county who are selected to enroll in the school's prestigious Biomedical Program. Bladensburg has seen significant historical changes in the students it has served. The school was 100% Caucasian when it opened in the mid-1930s; due to the changing population of Prince George's County, it is now 49% African-American and 49% Hispanic/Latino. Over the last three years, the school's enrollment has increased from 1,600 to nearly 2,000, with programs of study in Biomedicine, Culinary Arts, Cosmetology, Agriculture, and Nursing. Over the last two years, the school, along with its feeder middle school and elementary school, participated in the national CSTEM (Communication, Science, Technology, and Math) competition in Houston, Texas, with schools from all across the country. In April 2011, the school won three first place awards, making the schools in Bladensburg, Maryland the number one contender in CSTEM. In 2011, the school appeared in several media showcases for its work with the Dream Act, its CSTEM awards, its Secondary School Reform courses, a physics presentation at the school by Nobel Laureate William Phillips, and winning the Grammy Foundation Award for excellence in music. Over the last three years, graduating seniors have earned nearly 10 million dollars in combined scholarships, with students progressing to Dartmouth College, the University of Maryland, and Spelman College, to name a few. The school has also graduated its first POSSE Scholar (2011) and Gates Millennium Scholar (2009). The school won Maryland State Basketball Championships in 1960 and 1973. Two brothers, Jay Buckley and Bruce Buckley, were the centers of those two respective teams. Danny Elliott '70 was the Maryland state basketball Player of the Year in 1970. Former Utah Jazz star Thurl Bailey attended Bladensburg, as did former NFL defensive end Ebenezer Ekuban.
Brian Davis, a member of the Duke University 1991 and 1992 NCAA Championship basketball teams, is a member of the class of 1988. In 2011, senior Jamal Saunders won the regional and state championships in 3A/4A wrestling (the first since 1972). Other notable alumni include Maryland attorneys John Lally, Dawn Veltman, and Marnitta King, and author/professor J. Madison Davis '69. His younger brother, Thomas Preston Davis '71 (PG cross country and two mile champion), a US Navy trauma surgeon, oversaw care for all of the survivors of the USS Cole bombing upon their return from Yemen. Abdul Kallon '86 is a United States district judge on the United States District Court for the Northern District of Alabama. Starting in the 2006-2007 school year, Bladensburg requires school uniforms.[1] ^ http://www.pgcps.org/~blade/files/BHS%20U.doc
This page is based on the copyrighted Wikipedia article Bladensburg High School; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
\section{Introduction} \label{intro} X-ray pulsars are highly magnetized neutron stars in binary systems, accreting matter from a companion star. The companion may be a low-mass star overfilling its Roche lobe in which case an accretion disc is formed. In the case of a high-mass companion, the neutron star may also accrete from the strong stellar wind and depending on the conditions a disc may be formed or accretion may take place quasi-spherically. The strong magnetic field of the neutron star disrupts the accretion flow at some distance from the neutron star surface and forces the accreted matter to funnel down on the polar caps of the neutron star creating hot spots that, if misaligned with the rotational axis, make the neutron star pulsate in X-rays. Most accreting pulsars show stochastic variations in their spin frequencies as well as in their luminosities. Many sources also exhibit long-term trends in their spin-behaviour with the period more or less steadily increasing or decreasing and in some sources spin-reversals have been observed. (For a thorough review, see e.g. Bildsten 1997 and references therein.) The best-studied case of accretion is that of thin disc accretion (Shakura \& Sunyaev 1973). Here the spin-up/spin-down mechanisms are rather well understood. For disc accretion the spin-up torque is determined by the specific angular momentum at the inner edge of the disc and can be written in the form $ K_{su}\approx \dot M\sqrt{GMR_A}\, $ (Pringle and Rees, 1972). For a pulsar the inner radius of the accretion disc is determined by the Alfven radius $R_A$, $R_A\sim \dot M^{-2/7}$, so $K_{su}\sim \dot M^{6/7}$, i.e. for disc accretion the spin-up torque is weakly (almost linearly) dependent on the accretion rate (X-ray luminosity). 
In contrast, the spin-down torque for disc accretion in the first approximation is independent of $\dot M$: $K_{sd}\sim -\mu^2/R_c^3$, where $R_c=(GM/(\omega^*)^2)^{1/3}$ is the corotation radius, $\omega^*$ is the neutron star angular frequency and $\mu$ is the neutron star's dipole magnetic moment. In fact, accretion torques in disc accretion are determined by a complicated disc-magnetospheric interaction, see, e.g., Ghosh \& Lamb (1979), Lovelace et al. (1995) and the discussion in Klu\'zniak \& Rappaport (2007), and correspondingly can have a more complicated dependence on the mass accretion rate and other parameters. Measurements of spin-up/spin-down in X-ray pulsars can be used to evaluate a very important parameter of the neutron star -- its magnetic field. The period of the pulsar is usually close to the equilibrium value $P_{eq}$, which is determined by the zero total torque applied to the neutron star, $K=K_{su}+K_{sd}=0$. So assuming the observed value $\omega^*=2\pi/P_{eq}$, the magnetic field of the neutron star in disc-accreting X-ray pulsars can be estimated if $\dot M$ is known. In the case of quasi-spherical accretion, which can take place in systems where the optical star underfills its Roche lobe and no accretion disc is formed, the situation is more complicated. Clearly, the amount and sign of the angular momentum supplied to the neutron star from the captured stellar wind are important for spin-up or spin-down. To within a numerical factor (which can be positive or negative, see numerical simulations by Fryxell \& Taam (1988), Ruffert (1997), Ruffert (1999), etc.), the torque applied to the neutron star in this case should be proportional to $\dot M \omega_B R_B^2$, where $\omega_B=2\pi/P_B$ is the binary orbital angular frequency, $R_B=2GM/(V_w^2+v_{orb}^2)$ is the gravitational capture (Bondi) radius, $V_w$ is the stellar wind velocity at the neutron star orbital distance, and $v_{orb}$ is the neutron star orbital velocity. 
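For orientation, the two characteristic radii just introduced can be evaluated for a fiducial slow wind-fed pulsar; all parameter values in the sketch below (spin period, wind and orbital velocities) are assumptions chosen for illustration:

```python
# Fiducial values of the corotation and Bondi radii introduced above.
from math import pi

G = 6.674e-8
M = 1.4 * 1.989e33            # neutron star mass, g

P_spin = 100.0                # s (assumed spin period)
omega_star = 2.0 * pi / P_spin
R_c = (G * M / omega_star**2) ** (1.0 / 3.0)   # corotation radius, cm

v_w = 5.0e7                   # wind velocity, cm/s (500 km/s, assumed)
v_orb = 2.0e7                 # orbital velocity, cm/s (assumed)
R_B = 2.0 * G * M / (v_w**2 + v_orb**2)        # Bondi radius, cm

print(f"R_c ~ {R_c:.2e} cm, R_B ~ {R_B:.2e} cm")  # ~3.6e9 and ~1.3e11 cm
```

For such parameters the capture radius exceeds the corotation radius by nearly two orders of magnitude, which is why the angular momentum actually reaching the magnetosphere is so model-dependent.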
In real high-mass X-ray binaries the orbital eccentricity is non-zero, the stellar wind is variable and can be inhomogeneous, etc., so $K_{su}$ can be a complicated function of time. The spin-down torque is even more uncertain, since it is impossible to write down a simple equation like $-\mu^2/R_c^3$ any more ($R_c$ has no meaning for quasi-spherical accretion; for slowly rotating pulsars it is much larger than the Alfven radius where the angular momentum transfer from the accreting matter to the magnetosphere actually occurs). For example, if one uses for the braking torque $-\mu^2/R_c^3$, the magnetic field in long-period X-ray pulsars turns out to be very high. We think this is a result of underestimating the braking torque. The matter captured from the stellar wind can accrete onto the neutron star in different ways. Indeed, if the X-ray flux from the accreting neutron star is sufficiently high, the shocked matter rapidly cools down due to Compton processes and freely falls down toward the magnetosphere. The velocity of motion rapidly becomes supersonic, so a shock is formed above the magnetosphere. This regime was considered, e.g., by Burnard et al. (1983). Depending on the sign of the specific angular momentum of falling matter (prograde or retrograde), the neutron star can spin up or spin down. However, if the X-ray flux at the Bondi radius is below some value, the shocked matter remains hot, the radial velocity of the plasma is subsonic, and the settling accretion regime sets in. A hot quasi-static shell forms around the magnetosphere (Davies \& Pringle, 1981). Due to additional energy release (especially near the base of the shell), the temperature gradient across the shell becomes superadiabatic, so large-scale convective motions inevitably appear. The convection initiates turbulence, and the motion of a fluid element in the shell becomes quite complicated. 
If the magnetosphere allows plasma entry via instabilities (and subsequent accretion onto the neutron star), the actual accretion rate through such a shell is controlled by the magnetosphere (for example, the shell can exist, but the accretion through it can be weak or even absent altogether). So on top of the convective motions, the mean radial velocity of matter toward the magnetosphere, a subsonic settling, appears. This picture of accretion is realized at relatively small X-ray luminosities, $L_x<4\times 10^{36}$~erg/s (see below), and is totally different from what was considered in the numerical simulations cited above. If the shell is present, its interaction with the rotating magnetosphere can lead to spin-up or spin-down of the neutron star, depending on the sign of the angular velocity difference between the accreting matter and the magnetospheric boundary. So in the settling accretion regime, both spin-up and spin-down of the neutron star are possible, even if the sign of the specific angular momentum of captured matter is prograde. The shell here mediates the angular momentum transfer to or from the rotating neutron star. One can find several models in the literature (see especially Illarionov \& Kompaneets 1990 and Bisnovatyi-Kogan 1991), from which the expression for the spin-down torque for quasi-spherically accreting neutron stars in the form $K_{sd}\sim -\dot M R_A^2 \omega^*\sim -\dot M^{3/7}$ can be derived. Moreover, the expression for the Alfven radius $R_A$ in the case of settling accretion is found to have a different dependence on the mass accretion rate $\dot M$ and neutron star magnetic moment $\mu$, $\sim \dot M^{-2/11}\mu^{6/11}$, than the standard expression for disc accretion, $\sim \dot M^{-2/7}\mu^{4/7}$, so the spin-down torque for quasi-spherical settling accretion depends on the accretion rate as $K_{sd}\sim -\dot M^{3/11}$ (see below). 
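The critical luminosity $L_x\sim 4\times 10^{36}$~erg/s quoted above corresponds to a mass accretion rate of a few $\times 10^{16}$~g/s, if the standard $\sim 10$\% accretion efficiency onto a neutron star is assumed:

```python
# Accretion rate corresponding to the critical X-ray luminosity quoted
# above, assuming L_x = 0.1 * mdot * c^2 (standard NS accretion efficiency).
c = 2.998e10                  # speed of light, cm/s
L_crit = 4.0e36               # erg/s (value quoted in the text)
mdot_crit = L_crit / (0.1 * c**2)
print(f"mdot_crit ~ {mdot_crit:.1e} g/s")  # ~4.5e16 g/s
```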
To stress the difference between quasi-spherical and disc accretion, it is also instructive to rewrite the expression for the spin-down torque using the corotation and Alfven radii as $K_{sd}\sim -\mu^2/\sqrt{R_c^3 R_A^3}\sim -\mu^2/R_c^3(R_c/R_A)^{3/2}$ (see more detail below in Section 4). Since the factor $(R_c/R_A)^{3/2}\sim (\omega_K(R_A)/\omega^*)$ can be of the order of 10 or higher in real systems, using a braking torque in the form $\mu^2/R_c^3$ leads to a strong overestimation of the magnetic field. The dependence of the braking torque on the accretion rate in the case of quasi-spherical settling accretion suggests that variations of the mass accretion rate (and X-ray luminosity) must lead to a transition from spin-up (at high accretion rates) to spin-down (at small accretion rates) at some critical value of $\dot M$ (or $R_A$), that differs from source to source. This phenomenon (also known as torque reversal) is actually observed in wind-fed pulsars like Vela X-1, GX 301-2 and GX 1+4, which we shall consider below in more detail. The structure of this paper is as follows. In Section 2, we present an outline of the theory for quasi-spherical accretion onto a neutron star magnetosphere. We show that it is possible to construct a hot envelope around the neutron star through which accretion can take place and act to either spin up or spin down the neutron star. In Section 3, we discuss the structure of the interchange instability region which determines whether the plasma can enter the magnetosphere of the rotating neutron star. In Section 4 we consider how the spin-up/spin-down torques vary with a changing accretion rate. In Section 5, we show how to determine the parameters of quasi-spherical accretion and the neutron star magnetic field from observational data. In Section 6, we apply our methods to the specific pulsars GX 301-2, Vela X-1 and GX 1+4. In Section 7 we discuss our results and, finally, in Section 8 we present our conclusions. 
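Both statements of the previous paragraphs are easy to check mechanically. The sketch below verifies the scaling exponents with exact rational arithmetic and evaluates the $(R_c/R_A)^{3/2}$ factor for an assumed slow pulsar ($R_A\sim 10^9$~cm, $P^*=300$~s; both values are illustrative):

```python
# 1) Exponent bookkeeping: each quantity is represented by its exponents
#    (a, b) in mdot^a * mu^b; the corotation radius R_c carries neither.
from fractions import Fraction as F
from math import pi

RA_settling = (F(-2, 11), F(6, 11))   # settling Alfven radius (from the text)
# K_sd ~ mu^2 / sqrt(R_c^3 R_A^3):
Ksd = (F(-3, 2) * RA_settling[0], 2 + F(-3, 2) * RA_settling[1])
print(Ksd)   # mdot^(3/11) * mu^(13/11), reproducing the quoted 3/11 scaling

# 2) Size of the (R_c/R_A)^(3/2) factor for assumed fiducial parameters.
G, M = 6.674e-8, 1.4 * 1.989e33
R_A = 1.0e9                            # cm (assumed)
P_spin = 300.0                         # s (assumed)
R_c = (G * M / (2.0 * pi / P_spin) ** 2) ** (1.0 / 3.0)
print((R_c / R_A) ** 1.5)              # ~20, i.e. "of order 10 or higher"
```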
A detailed gas-dynamic treatment of the problem is presented in four appendices, which are very important for understanding the physical processes involved. \section{Quasi-spherical accretion} \label{s_qsaccr} \subsection{The subsonic Bondi accretion shell} \label{s_shell} We shall here consider the torques applied to a neutron star in the case of quasi-spherical accretion from a stellar wind. Wind matter is gravitationally captured by the moving neutron star and a bow-shock is formed at a characteristic distance $R\sim R_B$, where $R_B$ is the Bondi radius. Angular momentum can be removed from the neutron star magnetosphere in two ways --- either with matter expelled from the magnetospheric boundary without accretion (the propeller regime, Illarionov \& Sunyaev 1975), or via convective motions, which bring away angular momentum in a subsonic quasi-static shell around the magnetosphere, with accretion (the settling accretion regime). In such a quasi-static shell, the temperature will be high (of the order of the virial temperature, see Davies and Pringle (1981)), and the important point is whether hot matter from the shell can in fact enter the magnetosphere. Two-dimensional calculations by Elsner and Lamb (1977) have shown that hot monoatomic ideal plasma is stable relative to the Rayleigh-Taylor instability at the magnetospheric boundary, and plasma cooling is thus needed for accretion to begin. However, a closer inspection of the 3-dimensional calculations by Arons and Lea (1976a) reveals that the hot plasma is only marginally stable at the magnetospheric equator (to within 5\% accuracy of their calculations). Compton cooling and the possible presence of dissipative phenomena (magnetic reconnection etc.) facilitate the plasma entering the magnetosphere. We will show that both accretion of matter from a hot envelope and spin-down of the neutron star are indeed possible. 
\subsection{The structure of the shell around a neutron star magnetosphere} To a zeroth approximation, we can neglect both rotation and radial motion (accretion) of matter in the shell and consider only its hydrostatic structure. The radial velocity of matter falling through the shell $u_R$ is lower than the sound velocity $c_s$. Under these assumptions, the characteristic cooling/heating time-scale is much larger than the free-fall time-scale. In the general case where both gas pressure and anisotropic turbulent motions are present, Pascal's law is violated. Then the hydrostatic equilibrium equation can be derived from the equation of motion \Eq{v_R1} with stress tensor components \Eq{W_RR} - \Eq{W_pp} and zero viscosity (see Appendix A for more detail): \beq{e1} -\frac{1}{\rho}\frac{dP_g}{dR}- \frac{1}{\rho R^2}\frac{d(P_\parallel^t R^2)}{dR}+\frac{2P_\perp^t}{\rho R}-\frac{GM}{R^2}=0 \end{equation} Here $P_g=\rho c_s^2/\gamma$ is the gas pressure, and $P^t$ stands for the pressure due to turbulent motions: \beq{ppar} P_\parallel^t =\rho <u_\parallel^2>=\rho m_\parallel^2 c_s^2=\gamma P_g m_\parallel^2 \end{equation} \beq{pperp} P_\perp^t =\rho <u_\perp^2>=\rho m_\perp^2 c_s^2 =\gamma P_g m_\perp^2 \end{equation} ($<u_t^2>=<u_\parallel^2>+2<u_\perp^2>$ is the turbulent velocity dispersion, $m_\parallel^2$ and $m_\perp^2$ are turbulent Mach numbers squared in the radial and tangential directions, respectively; for example, in the case of isotropic turbulence $m_\parallel^2=m_\perp^2=(1/3)m_t^2$ where $m_t$ is the turbulent Mach number). The total pressure is the sum of the gas and turbulence terms: $P_g+P_t=P_g(1+\gamma m_t^2)$. We shall consider, to a first approximation, that the entropy distribution in the shell is constant. 
Integrating the hydrostatic equilibrium equation \Eq{e1}, we readily get \beq{hse_sol} \frac{{\cal R} T}{\mu_m} = \myfrac{\gamma-1}{\gamma}\frac{GM}{R}\myfrac{1} {1+\gamma m_\parallel^2-2(\gamma-1)(m_\parallel^2-m^2_\perp)}=\frac{\gamma-1}{\gamma}\frac{GM}{R}\psi(\gamma, m_t)\,. \end{equation} (In this solution we have neglected the integration constant, which is not important deep inside the shell. It is important in the outer part of the shell, but since the outer region close to the bow shock at $\sim R_B$ is not spherically symmetric, its structure can be found only numerically). Note that taking turbulence into account somewhat decreases the temperature within the shell. Most important, however, is that the anisotropy of turbulent motions, caused by convection in the stationary case, changes the distribution of the angular velocity in the shell. Below we will show that in the case of isotropic turbulence, the angular velocity distribution within the shell is close to the quasi-Keplerian one, $\omega(R) \sim R^{-3/2}$. In the case of strongly anisotropic turbulence caused by convection, $m_\parallel^2\gg m_\perp^2$, an approximately iso-angular-momentum distribution, $\omega(R) \sim R^{-2}$, is realized within the shell. Below we shall see that the analysis of real X-ray pulsars favors the iso-angular-momentum rotation distribution. Now, let us write down how the density varies inside the quasi-static shell for $R\ll R_B$. For a fully ionized gas with $\gamma=5/3$ we find: \beq{rho(R)} \rho(R)=\rho(R_A)\myfrac{R_A}{R}^{3/2} \end{equation} and for the gas pressure: \beq{P(R)} P(R)=P(R_A) \myfrac{R_A}{R}^{5/2}\,. \end{equation} The above equations describe the structure of an ideal static adiabatic shell above the magnetosphere. Of course, at $R\sim R_B$ the problem is essentially non-spherically symmetric and numerical simulations are required. 
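The exponents $-3/2$ and $-5/2$ follow from the adiabatic relation combined with $T\propto 1/R$, and the turbulence factor $\psi(\gamma,m_t)$ is elementary to evaluate. A sketch of this bookkeeping (isotropic turbulence with $m_t=1$ is an assumed example value, not a fit):

```python
from fractions import Fraction as F

gamma = F(5, 3)                  # fully ionized monoatomic gas
# Constant-entropy shell with T ~ R^(-1) (from the hydrostatic solution):
# rho ~ T^(1/(gamma-1)) and P_g ~ rho * T.
T_exp = F(-1)
rho_exp = T_exp / (gamma - 1)    # -> -3/2, as in the density profile above
P_exp = rho_exp + T_exp          # -> -5/2, as in the pressure profile above
print(rho_exp, P_exp)

def psi(g, m_par2, m_perp2):
    """Turbulence factor from the hydrostatic solution hse_sol."""
    return 1.0 / (1.0 + g * m_par2 - 2.0 * (g - 1.0) * (m_par2 - m_perp2))

print(psi(5.0 / 3.0, 0.0, 0.0))              # no turbulence: 1
print(psi(5.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0))  # isotropic, m_t = 1: 9/14
```

The $\psi<1$ values confirm the remark that turbulence somewhat lowers the shell temperature.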
Corrections to the adiabatic temperature gradient due to convective energy transport through the shell are calculated in Appendix C. \subsection{The Alfven surface} At the magnetospheric boundary (the Alfven surface), the total pressure (including isotropic gas pressure and the possibly anisotropic turbulent pressure) is balanced by the magnetic pressure $B^2/(8\pi)$ \beq{} P_g+P_t=P_g(R_A)(1+\gamma m_t^2)=\frac{B^2(R_A)}{8\pi}\,. \end{equation} The magnetic field at the Alfven radius is determined by the dipole magnetic field and by electric currents flowing on the Alfvenic surface \beq{P(RA)} P_g(R_A)=\frac{K_2}{(1+\gamma m_t^2)}\frac{B_0^2}{8\pi} \myfrac{R_0}{R_A}^6 =\frac{\rho{\cal R}T}{\mu_m} \end{equation} where the dimensionless coefficient $K_2$ takes into account the contribution from these currents and the factor $1/(1+\gamma m_t^2)$ is due to the turbulent pressure term. For example, in the model by Arons and Lea (1976a, their Eq. 31), $K_2=(2.75)^2\approx 7.56$. At the magnetospheric cusp (where the magnetic force line is branched), the radius of the Alfven surface is about 0.51 times that of the equatorial radius (Arons and Lea, 1976a). Below we shall assume that $R_A$ is the equatorial radius of the magnetosphere, unless stated otherwise. Due to the interchange instability, the plasma can enter the neutron star magnetosphere. In the stationary regime, let us introduce the accretion rate $\dot M$ onto the neutron star surface. From the continuity equation in the shell we find \beq{rho_cont} \rho(R_A)=\frac{\dot M}{4\pi u_R(R_A) R_A^2} \end{equation} Clearly, the velocity of absorption of matter by the magnetosphere is smaller than the free-fall velocity, so we introduce a dimensionless factor $f(u)=u_R/\sqrt{2GM/R}<1$. Then the density at the magnetospheric boundary is \beq{rho_RA} \rho(R_A)=\frac{\dot M}{4\pi f(u) \sqrt{2GM/R_A} R_A^2}\,. 
\end{equation} For example, in the model calculations by Arons \& Lea (1976a) $f(u)\approx 0.1$; in our case, at high X-ray luminosities the value of $f(u)$ can attain $\approx 0.5$. It is possible to imagine that the shell is impenetrable and that there is no accretion through it, $\dot M \to 0$. In this case $u_R\to 0$, $f(u)\to 0$, while the density in the shell remains finite. In some sense, the matter leaks from the magnetosphere down to the neutron star, and the leakage can be either small ($\dot M \to 0$) or large ($\dot M \ne 0$). Plugging $\rho(R_A)$ into \Eq{P(RA)} and using \Eq{hse_sol} and the definition of the dipole magnetic moment \[ \mu=\frac{1}{2}B_0R_0^3 \] (where $R_0$ is the neutron star radius), we find \beq{RA_def} R_A=\left[\frac{4\gamma}{(\gamma-1)}\frac{f(u) K_2}{\psi(\gamma, m_t)(1+\gamma m_t^2)} \frac{\mu^2}{\dot M\sqrt{2GM}}\right]^{2/7}\,. \end{equation} It should be stressed that in the presence of a hot shell the Alfven radius is determined by the static gas pressure at the magnetospheric boundary, which is non-zero even for a zero mass accretion rate through the shell, so the appearance of $\dot M$ in the above formula is strictly formal. \subsection{Angular momentum transfer} We now consider a quasi-stationary subsonic shell in which accretion proceeds onto the neutron star magnetosphere. We stress that in this regime, i.e. the settling regime, the accretion rate onto the neutron star is determined by the density at the bottom of the shell (which is directly related to the density downstream of the bow shock in the gravitational capture region) and the ability of the plasma to enter the magnetosphere through the Alfven surface. The rotation law in the shell depends on the treatment of the turbulent viscosity (see Appendix A for the Prandtl law and isotropic turbulence) and the possible anisotropy of the turbulence due to convection (see Appendix B). 
In the latter case the anisotropy leads to more powerful turbulence in the radial direction than in the perpendicular directions. Thus, as shown in Appendices A and B, there is a set of quasi-power-law solutions for the radial dependence of the angular rotation velocity in a convective shell. We shall consider a power-law dependence of the angular velocity on radius, \beq{rotation_law} \omega(R)\sim R^{-n} \end{equation} We will study in detail the quasi-Keplerian law with $n=3/2$, and the iso-angular-momentum distribution with $n=2$, which in some sense are limiting cases among the possible solutions. When approaching the bow shock, $R \to R_B$, $\omega\to \omega_B$. Near the bow shock the problem is not spherically symmetric any more since the flow is more complicated (part of the flow bends across the shell), and the structure of the flow can be studied only using numerical simulations. In the absence of such simulations, we shall assume that the iso-angular-momentum distribution is valid up to the smallest distance from the neutron star to the bow shock, which we shall take to be the gravitational capture radius $R_B$, \[ R_B\simeq 2GM/(V_w^2+v_{orb}^2) \] where $V_w$ is the stellar wind velocity at the neutron star orbital distance, and $v_{orb}$ is the neutron star orbital velocity. This means that the angular velocity of rotation of matter near the magnetosphere $\omega_m$ will be related to $\omega_B$ via \beq{omega_m1} \omega_m= \tilde\omega\omega_B\myfrac{R_B}{R_A}^{n}. \end{equation} (Here the numerical factor $\tilde\omega>1$ takes into account the deviation of the actual rotational law from the value obtained by using the assumed power-law dependence near the Alfven surface; see Appendix A for more detail.) Let the NS magnetosphere rotate with an angular velocity $\omega^*=2\pi/P^*$ where $P^*$ is the neutron star spin period. The matter at the bottom of the shell rotates with an angular velocity $\omega_m$, in general different from $\omega^*$. 
If $\omega^*>\omega_m$, coupling of the plasma with the magnetosphere ensures transfer of angular momentum from the magnetosphere to the shell, or from the shell to the magnetosphere if $\omega^*<\omega_m$. In the general case, the coupling of matter with the magnetosphere can be moderate or strong. In the strong coupling regime the toroidal magnetic field component $B_t$ is proportional to the poloidal field component $B_p$ as $B_t\sim -B_p (\omega_m-\omega^*)t$, and $|B_t|$ can grow to $\sim |B_p|$. This regime can be expected for rapidly rotating magnetospheres when $\omega^*$ is comparable to or even greater than the Keplerian angular frequency $\omega_K(R_A)$; in the latter case the propeller regime sets in. In the moderate coupling regime, the plasma can enter the magnetosphere due to instabilities on a timescale shorter than that needed for the toroidal field to grow to the value of the poloidal field, so $B_t < B_p$. \subsubsection{The case of strong coupling} \label{s:strongcoupling} Let us first consider the strong coupling regime. In this regime, powerful large-scale convective motions can lead to turbulent magnetic field diffusion accompanied by magnetic field dissipation. This process is characterized by the turbulent magnetic field diffusion coefficient $\eta_t$. In this case the toroidal magnetic field (see e.g. Lovelace et al. 1995 and references therein) is \beq{bt} B_t=\frac{R^2}{\eta_t}(\omega_m-\omega^*)B_p\,. \end{equation} The turbulent magnetic diffusion coefficient is related to the kinematic turbulent viscosity as $\eta_t\simeq \nu_t$. The latter can be written as \beq{nut} \nu_t=<u_t l_t>\,. \end{equation} According to the phenomenological Prandtl law, which relates the average characteristics of a turbulent flow (the velocity $u_t$, the characteristic scale of turbulence $l_t$ and the shear $\omega_m-\omega^*$) \beq{Prandtl} u_t\simeq l_t |\omega_m-\omega^*|\,. 
\end{equation} In our case, the turbulent scale must be determined by the largest scale of the energy supply to turbulence from the rotation of a non-spherical magnetospheric surface. This scale is determined by the velocity difference of the solidly rotating magnetosphere and the accreting matter that is still not interacting with the magnetosphere, i.e. $l_t\simeq R_A$, which determines the turn-over velocity of the largest turbulence eddies. At smaller scales a turbulent cascade develops. Substituting this scale into equations \Eq{bt}-\Eq{Prandtl} above, we find that in the strong coupling regime $B_t\simeq B_p$. The moment of forces due to plasma-magnetosphere interactions is applied to the neutron star and causes spin evolution according to: \beq{} I\dot \omega^*=\int\frac{B_tB_p}{4\pi}\varpi dS = \pm \tilde K(\theta)K_2\frac{\mu^2}{R_A^3} \end{equation} where $I$ is the neutron star's moment of inertia, $\varpi$ is the distance from the rotational axis and $\tilde K(\theta)$ is a numerical coefficient depending on the angle between the rotational and magnetic dipole axis. The coefficient $K_2$ appears in the above expression for the same reason as in \Eq{P(RA)}. The positive sign corresponds to positive flux of angular momentum to the neutron star ($\omega_m>\omega^*$). The negative sign corresponds to negative flux of angular momentum across the magnetosphere ($\omega_m<\omega^*$). At the Alfven radius, the matter couples with the magnetosphere and acquires the angular velocity of the neutron star. It then falls onto the neutron-star surface and returns the angular momentum acquired at $R_A$ back to the neutron star via the magnetic field. As a result of this process, the neutron star spins up at a rate determined by the expression: \beq{suz} I\dot \omega^*=+z \dot M R_A^2\omega^* \end{equation} where $z$ is a numerical coefficient which takes into account the angular momentum of the falling matter. 
If all matter falls in the equatorial plane, $z=1$; if matter falls strictly along the spin axis, $z=0$. If all matter were to fall across the entire magnetospheric surface, then $z=2/3$. Ultimately, the total torque applied to the neutron star in the strong coupling regime yields \beq{sd_eq_strong} I\dot \omega^*=\pm \tilde K(\theta)K_2 \frac{\mu^2}{R_A^3} +z \dot M R_A^2\omega^* \end{equation} Using \Eq{RA_def}, we can eliminate $\dot M$ in the above equation to obtain in the spin-up regime ($\omega_m>\omega^*$) \beq{su} I\dot \omega^*=\frac{\tilde K(\theta)K_2\mu^2}{R_A^3} \left[1+z\frac{4\gamma f(u)}{\sqrt{2}(\gamma-1)(1+\gamma m_t^2)\psi(\gamma, m_t)\tilde K(\theta)}\myfrac{R_A}{R_c}^{3/2}\right] \end{equation} where $R_c^3=GM/(\omega^*)^2$ is the corotation radius. In the spin-down regime ($\omega_m<\omega^*$) we find \beq{sd} I\dot \omega^*=-\frac{\tilde K(\theta)K_2\mu^2}{R_A^3} \left[1-z\frac{4\gamma f(u)}{\sqrt{2}(\gamma-1)(1+\gamma m_t^2)\psi(\gamma, m_t)\tilde K(\theta)}\myfrac{R_A}{R_c}^{3/2}\right]\,. \end{equation} Note that in both cases $R_A$ must be smaller than $R_c$, otherwise the propeller effect prohibits accretion. In the propeller regime $R_A>R_c$, matter does not fall onto the neutron star, there are no accretion-generated X-rays from the neutron star, the shell rapidly cools down and shrinks, and the standard Illarionov and Sunyaev (1975) propeller, with matter outflow from the magnetosphere, is established. In both accretion regimes (spin-up and spin-down), the neutron star angular velocity $\omega^*$ closely approaches the angular velocity of matter at the magnetospheric boundary, $\omega^*\to \omega_m(R_A)$. The difference between $\omega^*$ and $\omega_m$ is small when the second term in the square brackets in \Eq{su} and \Eq{sd} is much smaller than unity. 
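The size of this second term is easy to estimate. In the sketch below all dimensionless factors ($z=2/3$, $f(u)=0.5$, $\tilde K(\theta)=1$, turbulence neglected) are assumptions chosen for illustration only:

```python
# Relative size of the accreted-angular-momentum term in the square
# brackets of the strong-coupling torque formulae above:
#   z * 4*gamma*f(u) / (sqrt(2)*(gamma-1)*(1+gamma*m_t^2)*psi*K) * (R_A/R_c)^(3/2)
gamma = 5.0 / 3.0

def bracket_term(RA_over_Rc, z=2.0 / 3.0, f_u=0.5, mt2=0.0, psi=1.0, K=1.0):
    coef = z * 4.0 * gamma * f_u / (2.0 ** 0.5 * (gamma - 1.0)
                                    * (1.0 + gamma * mt2) * psi * K)
    return coef * RA_over_Rc ** 1.5

print(bracket_term(0.1))   # well inside corotation: term is small
print(bracket_term(1.0))   # approaching corotation: term is of order unity
```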
Also note that when approaching the propeller regime ($R_A\to R_c$), the accretion rate decreases, $f(u)\to 0$, the second term in the square brackets vanishes, and the spin evolution is determined solely by the spin-down term $-\tilde K(\theta)K_2 \mu^2/R_A^3$. (In the propeller regime, $\omega_m< \omega_K(R_A)$, $\omega_m<\omega^*$, $\omega^*>\omega_K(R_A)$). So the neutron star spins down to the Keplerian frequency at the Alfven radius. In this regime, the specific angular momentum of the matter that flows in and out from the magnetosphere is, of course, conserved. Near the equilibrium accretion state ($\omega^*\sim \omega_m$), relatively small fluctuations in $\dot M$ across the shell would lead to very strong fluctuations in $\dot \omega^*$ since the toroidal field component can change its sign by changing from $+B_p$ to $-B_p$. This property, if realized in nature, could be the distinctive feature of the strong coupling regime. It is known (see e.g. Bildsten et al. 1997, Finger et al. 2011) that real X-ray pulsars sometimes exhibit rapid spin-up/spin-down transitions not associated with X-ray luminosity changes, which may be evidence for them temporarily entering the strong coupling regime. It cannot be excluded that the triggering of the strong coupling regime may be due to the magnetic field frozen into the accreting plasma that has not yet entered the magnetosphere. \subsubsection{The case of moderate coupling} The strong coupling regime considered above can be realized in the extreme case where the toroidal magnetic field $B_t$ attains a maximum possible value $\sim B_p$ due to magnetic turbulent diffusion. Usually, the coupling of matter with the magnetosphere is mediated by different plasma instabilities whose characteristic times are too short for substantial toroidal field growth. 
As we discussed above in Section \ref{s_shell}, the shell is very hot near the bottom, so without cooling at the magnetospheric boundary it is marginally stable with respect to the interchange instability, according to the calculations by Arons and Lea (1976a). Due to Compton cooling by X-ray emission from the neutron star poles, plasma enters the magnetosphere with a characteristic time-scale determined by the instability increment. Since the most likely instability is the Rayleigh-Taylor instability, the growth rate scales as the Keplerian angular frequency $\omega_K=\sqrt{GM/R^3}$. The time-scale of the instability can be normalized as \beq{} t_{inst}=\frac{1}{\omega_K(R_A)}\,, \end{equation} The toroidal magnetic field increases with time as \beq{BtBp} B_t=K_1(\theta) B_p(\omega_m-\omega^*)t_{inst} \end{equation} where $K_1(\theta)$ is a numerical coefficient which takes into account the degree and angular dependence of the coupling of matter with the magnetosphere in which the angle between the neutron star spin axis and the magnetic dipole axis is $\theta$. Then, the neutron star spin frequency change in this regime reads \beq{sd1} I\dot\omega^*=K_1(\theta)K_2\frac{\mu^2}{R_A^3}\frac{\omega_m-\omega^*}{\omega_K(R_A)}\,. \end{equation} Using the definition of $R_A$ [\Eq{RA_def}] and $\omega_K$, the spin-down formula can be recast to the form \beq{sd_om} I\dot \omega^*=Z \dot M R_A^2(\omega_m-\omega^*) \end{equation} Here the dimensionless coefficient $Z$ is \beq{Zdef} Z=\frac{ K_1(\theta)}{f(u)}\frac{\sqrt{2}(\gamma-1)}{4\gamma}\psi(\gamma, m_t) (1+\gamma m_t^2)\,. \end{equation} Taking into account that the matter falling onto the neutron star surface brings angular momentum $z\dot M R_A^2\omega^*$ (see \Eq{suz} above), we arrive at \beq{sd_eq} I\dot \omega^*=Z \dot M R_A^2(\omega_m-\omega^*)+z \dot M R_A^2\omega^* \end{equation} Clearly, to remove angular momentum from the neutron star through this kind of a static shell, $Z$ must be larger than $z$. 
Then the neutron star can spin-down episodically (we shall make this statement more precise below). Conversely, if $Z<z$, the neutron star can only spin up. When a hot shell cannot be formed (at high accretion rates or small relative wind velocities, see e.g. Sunyaev (1978)), free-fall Bondi accretion with low angular momentum is realized. No angular momentum can be removed from the neutron star magnetosphere. Then $Z=z$ and \Eq{sd_eq} takes the simple form $ I\dot \omega^*=Z \dot M R_A^2\omega_m $, and the neutron star in this regime can spin up to about $\omega_K(R_A)$ independent of the sign of the difference of the angular frequencies $\omega_m-\omega^*$ at the magnetospheric boundary. Due to conservation of specific angular momentum, $\omega_m=\omega_B (R_B/R_A)^2$, so in this case the spin evolution of the NS is described by the equation \beq{} I\dot \omega^*=Z \dot M \omega_BR_B^2\,, \end{equation} where $Z$ plays the role of the dimensionless specific angular momentum of the captured matter. For example, in the early work by Illarionov \& Sunyaev (1975), $Z\simeq 1/4$. However, detailed numerical simulations of Bondi-Hoyle-Lyttleton accretion in 2D (e.g. Fryxell and Taam, 1988, Ho et al. 1989) and in 3D (e.g. Ruffert, 1997, 1999) revealed that due to inhomogeneities in the incoming flow, a non-stationary regime with an alternating sign of the captured matter angular momentum can be realized. So the sign of $Z$ can be negative as well, and alternating spin-up/spin-down regimes can be observed. Such a scenario is frequently invoked to explain the observed torque reversals in X-ray pulsars (see the discussion in Nelson et al. 1997). We repeat that this could indeed be the case for large X-ray luminosities $>4\times 10^{36}$~erg/s when a convective quasi-hydrostatic shell cannot exist due to strong Compton cooling near the magnetospheric boundary. 
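For this free-fall regime, the implied spin-up rate can be estimated from $I\dot\omega^*=Z\dot M\omega_B R_B^2$; all parameter values in the sketch below are assumed fiducial numbers, not fits to any particular source:

```python
# Order-of-magnitude spin-up rate in the free-fall (no-shell) regime,
# I * domega*/dt = Z * mdot * omega_B * R_B^2, with assumed fiducial values.
from math import pi

G = 6.674e-8
M = 1.4 * 1.989e33
I = 1.0e45                    # moment of inertia, g cm^2
Z = 0.25                      # Illarionov & Sunyaev (1975) value
mdot = 1.0e16                 # g/s (assumed)
P_orb = 10.0 * 86400.0        # 10-day orbit, s (assumed)
omega_B = 2.0 * pi / P_orb
v_w, v_orb = 5.0e7, 2.0e7     # cm/s (assumed wind and orbital velocities)
R_B = 2.0 * G * M / (v_w**2 + v_orb**2)

omega_dot = Z * mdot * omega_B * R_B**2 / I
print(f"domega*/dt ~ {omega_dot:.1e} rad/s^2")  # a few 1e-13 rad/s^2
```

The resulting few $\times 10^{-13}$~rad~s$^{-2}$ is comparable to the spin-up rates typically measured in wind-fed pulsars, which is why this simple estimate is often used as a first check.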
When a hot shell is formed (at moderate X-ray luminosities below $\sim 4\times 10^{36}$~erg/s, see \Eq{M*} below), the angular momentum from the neutron star magnetosphere can be transferred away through the shell by turbulent viscosity and convective motions. So we substitute $\omega_m$ from \Eq{omega_m1} into \Eq{sd_eq} to obtain \beq{sd_eq1} I\dot \omega^*= Z\dot M \tilde\omega\omega_B R_B^2\myfrac{R_A}{R_B}^{2-n}-Z(1-z/Z)\dot M R_A^2\omega^*\,. \end{equation} This is the main formula which we shall use below. To proceed further, however, we need to determine the dimensionless coefficients of this equation. In the next section we shall find the important factor $f(u)$ that enters the formulae for both $Z$ and $R_A$, so the only unknown dimensionless parameter of the problem will be the coefficient $K_1(\theta)$. \section{Approximate structure of the interchange instability region} The plasma enters the magnetosphere of the slowly rotating neutron star due to the interchange instability. The boundary between the plasma and the magnetosphere is stable at high temperatures $T>T_{cr}$, but becomes unstable at $T<T_{cr}$, and remains in a neutral equilibrium at $T=T_{cr}$ (Elsner and Lamb, 1977). The critical temperature is: \beq{Tcr} {\cal R}T_{cr}=\frac{1}{2(1+\gamma m_t^2)}\frac{\cos\chi}{\kappa R_A}\frac{\mu_mGM}{R_A} \end{equation} Here $\kappa$ is the local curvature of the magnetosphere, $\chi$ is the angle the outer normal makes with the radius-vector at a given point, and the contribution of turbulent pulsations in the plasma to the total pressure is taken into account by the factor $(1+\gamma m_t^2)$. The effective gravity acceleration can be written as \beq{g_eff} g_{eff}=\frac{GM}{R_A^2}\left(1-\frac{T}{T_{cr}}\right)\,. 
\end{equation} The temperature in the quasi-static shell is given by \Eq{hse_sol}, so the condition for the magnetosphere instability can then be rewritten as: \beq{m_inst} \frac{T}{T_{cr}}=\frac{2(\gamma-1)(1+\gamma m_t^2)} {\gamma}\psi(\gamma, m_t) \frac{\kappa R_A}{\cos\chi}<1\,. \end{equation} According to Arons and Lea (1976a), when the external gas pressure decreases with radius as $P\sim R^{-5/2}$, the form of the magnetosphere far from the polar cusp can be described to within 10\% accuracy as $(\cos\lambda)^{0.2693}$ (here $\lambda$ is the polar angle counted from the magnetospheric equator). The instability first appears near the equator, where the curvature is minimal. Near the equatorial plane ($\lambda=0$), for a poloidal dependence of the magnetosphere $\approx (\cos \lambda)^{0.27}$ we get for the poloidal curvature $k_pR_A=1+0.27=1.27$. The toroidal field curvature at the magnetospheric equator is $k_tR_A=1$. The tangent sphere at the equator cannot have a radius larger than the inverse poloidal curvature, hence $\kappa R_A=1.27$ at $\lambda=0$. This is somewhat larger than the value $\kappa R_A=\gamma/(2(\gamma-1))=5/4=1.25$ (for $\gamma=5/3$ in the absence of turbulence or for fully isotropic turbulence), but within the accuracy limit\footnote{In Arons and Lea (1976b), the curvature is calculated to be $\kappa R_A\approx 1.34$, still within the accuracy limit}. The contribution from anisotropic turbulence decreases the critical temperature; for example, for $\gamma=5/3$, in the case of strongly anisotropic turbulence $m_\parallel=1$, $m_\perp=0$, at $\lambda=0$ we obtain $T/T_{cr}\sim 2$, i.e. anisotropic turbulence increases the stability of the magnetosphere. So initially the plasma-magnetosphere boundary is stable, and after cooling to $T<T_{cr}$ the plasma instability sets in, starting in the equatorial zone, where the curvature of the magnetospheric surface is minimal.
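The equatorial curvature quoted above can be checked directly: for a magnetospheric surface $r(\lambda)=R_A(\cos\lambda)^{\epsilon}$, the standard polar-curve formula gives $\kappa R_A=1+\epsilon$ at $\lambda=0$. A minimal numerical sketch (finite differences, no external libraries):

```python
import math

def curvature_at_equator(eps, h=1e-5):
    """Curvature kappa*R_A of the polar curve r(lam) = cos(lam)**eps at lam = 0,
    via kappa = |r^2 + 2 r'^2 - r r''| / (r^2 + r'^2)^1.5 with finite differences."""
    r = lambda lam: math.cos(lam) ** eps
    r0 = r(0.0)
    r1 = (r(h) - r(-h)) / (2 * h)        # r'(0), ~0 by symmetry
    r2 = (r(h) - 2 * r0 + r(-h)) / h**2  # r''(0)
    return abs(r0**2 + 2 * r1**2 - r0 * r2) / (r0**2 + r1**2) ** 1.5

kappa_arons_lea = curvature_at_equator(0.2693)  # Arons & Lea (1976a) shape -> ~1.27
gamma = 5.0 / 3.0
kappa_adiabatic = gamma / (2 * (gamma - 1))     # = 1.25 for gamma = 5/3

print(kappa_arons_lea, kappa_adiabatic)
```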
Let us consider the development of the interchange instability when cooling (predominantly Compton cooling) is present. The temperature changes as (Kompaneets, 1956; Weymann, 1965) \beq{dTdt} \frac{dT}{dt}=-\frac{T-T_x}{t_C} \end{equation} where the Compton cooling time is \beq{t_comp} t_{C}=\frac{3}{2\mu_m}\frac{\pi R_A^2 m_e c^2}{\sigma_T L_x}\approx 10.6 [\hbox{s}] R_{9}^2 \dot M_{16}^{-1}\,. \end{equation} Here $m_e$ is the electron mass, $\sigma_T$ is the Thomson cross section, $L_x=0.1 \dot M c^2$ is the X-ray luminosity, $T$ is the electron temperature (equal to the ion temperature, since the timescale of electron-ion energy exchange is the shortest one), $T_x$ is the X-ray temperature and $\mu_m=0.6$ is the molecular weight. The photon temperature is $T_x=(1/4) T_{cut}$ for a bremsstrahlung spectrum with an exponential cut-off at $T_{cut}$, typically $T_x=3-5$~keV. The solution of equation \Eq{dTdt} reads: \beq{} T=T_x+(T_{cr}-T_x)e^{-t/t_C}\,. \end{equation} It is seen that for $t\approx 2t_C$ the temperature decreases to $T_x$. In the linear approximation, and noticing that $T_{cr}\sim 30\,\hbox{keV}\gg T_x\sim 3$~keV, the effective gravity acceleration increases linearly with time: \beq{} g_{eff}\approx \frac{GM}{R_A^2}\frac{t}{t_C}\,. \end{equation} Correspondingly, the velocity of the instability growth increases with time as \beq{} u_{i}=\int g_{eff} dt=\frac{1}{2}\frac{GM}{R_A^2}\frac{t^2}{t_C}\,. \end{equation} Let us introduce the mean rate of the instability growth \beq{} <u_i>=\frac{\int u_i dt}{t}=\frac{1}{6}\frac{GM}{R_A^2}\frac{t^2}{t_C}= \frac{1}{6}\frac{GM}{R_A^2t_C}\myfrac{\zeta R_A}{<u_i>}^2\,. \end{equation} Here $\zeta\lesssim 1$ and $\zeta R_A$ is the characteristic scale of the instability that grows with the rate $<u_i>$. So for the mean rate of the instability growth in the linear stage we find \beq{ui} <u_i>=\myfrac{\zeta^2GM}{6t_C}^{1/3}=\frac{\zeta^{2/3}}{12^{1/3}}\sqrt{\frac{2GM}{R_A}} \myfrac{t_{ff}}{t_C}^{1/3}\,.
\end{equation} Here we have introduced the free-fall time as \beq{} t_{ff}=\frac{R_A^{3/2}}{\sqrt{2GM}}\,. \end{equation} Clearly, later in the non-linear stage the rate of instability growth approaches the free-fall velocity. We consider the linear stage first of all, since at this stage the temperature is not too low (although the entropy starts decreasing with radius), and it is in this zone that the effective angular momentum transfer from the magnetosphere to the shell occurs. At later stages of the instability development, the entropy drop is too strong for convection to begin. Let us estimate the accuracy of our approximation by retaining the second-order terms in the exponent expansion. Then the mean instability growth rate is \beq{2dorder} <u_i>=\myfrac{\zeta^2GM}{6t_C}^{1/3}\left[1-2\zeta^{1/3}\myfrac{t_{ff}}{t_C}^{2/3}\right]\,. \end{equation} Clearly, the smaller the accretion rate, the smaller the ratio $t_{ff}/t_C$, and the better our approximation. Now we are in a position to specify the important dimensionless factor $f(u)$: \beq{fu1} f(u)=\frac{<u_i>}{u_{ff}(R_A)} \end{equation} Substituting \Eq{ui} and \Eq{fu1} into \Eq{RA_def}, we find for the Alfven radius in this regime: \beq{RA} R_A\approx 0.9\times 10^9[\hbox{cm}] \left(\frac{4\gamma\zeta}{(\gamma-1)(1+\gamma m_t^2)\psi(\gamma, m_t)}\frac{\mu_{30}^3}{\dot M_{16}}\right)^{2/11}\,. \end{equation} We stress the difference between the obtained expression for the Alfven radius and the standard one, $R_A\sim \mu^{4/7}\dot M^{-2/7}$, which is obtained by equating the dynamical pressure of the falling gas to the magnetic field pressure; this difference comes from the dependence of $f(u)$ on the magnetic moment and mass accretion rate in the settling accretion regime. Plugging \Eq{RA} into \Eq{fu1}, we obtain an explicit expression for $f(u)$: \beq{fu} f(u)\approx 0.33\myfrac{(\gamma-1)(1+\gamma m_t^2)\psi(\gamma, m_t)}{4\gamma\zeta}^{1/33}\dot M_{16}^{4/11}\mu_{30}^{-1/11}\,.
\end{equation} \section{Spin-up/spin-down transitions} Now let us consider how the spin-up/spin-down torques vary with changing $\dot M$. We stress again that we consider the shell through which matter accretes to be essentially subsonic. It is the leakage of matter through the magnetospheric boundary that in fact determines the actual accretion rate onto the neutron star. This is mostly dependent on the density at the bottom of the shell. On the other hand, the density structure of the shell is directly related to the density of captured matter at the bow shock region, so density variations downstream of the shock are rapidly translated into density variations near the magnetospheric boundary. This means that the actual accretion rate variations must be essentially independent (for circular or low-eccentricity orbits) of the orbital phase but are mostly dependent on variations of the wind density. In contrast, possible changes in $R_B$ (for example, due to wind velocity variations or to the orbital motion of the neutron star) do not affect the accretion rate through the shell but strongly affect the value of the spin-up torque (see \Eq{sd_eq1}). \Eq{sd_eq1} can be rewritten in the form \beq{sd_eq2} I\dot \omega^*=A\dot M^{\frac{3+2n}{11}} - B\dot M^{3/11}\,. \end{equation} For the fiducial value $\dot M_{16}\equiv \dot M/10^{16}$~g/s, the accretion-rate independent coefficients are (in CGS units) \beq{A(Z)} A\approx 5.325\times 10^{31} (0.034)^{2-n}K_1(\theta)\tilde \omega \delta^n (1+(5/3) m_t^2)\myfrac{\zeta}{(1+(5/3) m_t^2)\psi(5/3,m_t)}^{\frac{13-6n}{33}} \mu_{30}^{\frac{13-6n}{11}}v_8^{-2n}\myfrac{P_b}{10\hbox{d}}^{-1} \end{equation} \beq{B} B=5.4\times 10^{32}(1-z/Z)K_1(\theta)\myfrac{\zeta}{(1+(5/3) m_t^2)\psi(5/3,m_t)}^{\frac{13}{33}}\mu_{30}^{\frac{13}{11}}\myfrac{P^*}{100\hbox{s}}^{-1} \end{equation} (from now on in numerical estimates we assume $\gamma=5/3$).
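The torque balance of \Eq{sd_eq2} is straightforward to explore numerically. In the sketch below $A$ and $B$ are purely illustrative placeholders (with $\dot M$ measured in units of $10^{16}$ g/s, as in the text), chosen only so that the torque reversal lands at a plausible $\dot M_{16}\sim 3$; they are not the fitted values of Table 1:

```python
# Sketch of the torque balance I*domega*/dt = A*Mdot^((3+2n)/11) - B*Mdot^(3/11),
# with Mdot16 = Mdot / 1e16 g/s and illustrative coefficients (CGS).
n = 2        # iso-angular-momentum rotation law
A = 2.0e33   # spin-up coefficient (illustrative assumption)
B = 3.0e33   # spin-down coefficient (illustrative assumption)

def torque(Mdot16):
    """Net torque I*domega*/dt (g cm^2 s^-2) for accretion rate Mdot16."""
    return A * Mdot16 ** ((3 + 2 * n) / 11) - B * Mdot16 ** (3 / 11)

# The torque reversal (equilibrium) accretion rate, where torque() vanishes:
Mdot16_eq = (B / A) ** (11 / (2 * n))

print(f"Mdot16_eq = {Mdot16_eq:.3f} (units of 1e16 g/s)")
```

With these placeholder coefficients the star spins down at $\dot M_{16}=1$ and spins up at $\dot M_{16}=10$, with the reversal near $\dot M_{16}\approx 3$.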
The dimensionless factor $\delta<1$ takes into account the actual location of the gravitational capture radius, which is smaller than the Bondi value for a cold stellar wind (Hunt, 1971). This radius can also be smaller due to radiative heating of the stellar wind by X-ray emission from the neutron star surface (see below). In the numerical coefficients we have used the expression for $Z$ with account for \Eq{fu}, and \Eq{RA} for the Alfvenic radius. Below we shall consider the case $Z-z>0$, i.e. $B>0$; otherwise only spin-up of the neutron star is possible. First of all, we note that the function $\dot\omega^*(\dot M)$ reaches a minimum at some $\dot M_{cr}$. By differentiating \Eq{sd_eq2} with respect to $\dot M$ and equating to zero, we find \beq{dotMcr} \dot M_{cr}=\left[\frac{B}{A}\frac{3}{(3+2n)}\right]^{\frac{11}{2n}} \end{equation} At $\dot M=\dot M_{cr}$ the value of $\dot\omega^*$ reaches an absolute minimum (see Fig. \ref{f:y}). It is convenient to introduce the dimensionless parameter \beq{y_def} y\equiv \frac{\dot M}{\dot M_{cr}} \end{equation} and rewrite \Eq{sd_eq2} in the form \beq{sdy} I\dot \omega^*=A\dot M_{cr}^{\frac{3+2n}{11}}y^{\frac{3+2n}{11}} \left(1- \myfrac{y_0}{y}^\frac{2n}{11}\right)\,, \end{equation} where the frequency derivative vanishes at $y=y_0$: \beq{y0} y_0=\myfrac{3+2n}{3}^{\frac{11}{2n}}\,. \end{equation} The qualitative behaviour of $\dot\omega^*$ as a function of $y$ is shown in Fig. \ref{f:y}. \begin{figure*} \includegraphics[width=0.5\textwidth]{domegadmdot1.eps} \caption{A schematic plot of $\dot\omega^*$ as a function of $y$ [\Eq{sdy}].
In fact, as $y\to 0$, $\dot\omega^*$ approaches a finite negative value, since the neutron star enters the propeller regime at small accretion rates.} \label{f:y} \end{figure*} Let us then vary \Eq{sdy} with respect to $y$: \beq{variations} I(\delta\dot\omega^*)=I\frac{\partial \dot \omega^*}{\partial y}(\delta y)= \frac{3+2n}{11}A\dot M_{cr}^\frac{3+2n}{11}y^{-\frac{8-2n}{11}} \left(1- \frac{1}{y^{\frac{2n}{11}}}\right)(\delta y)\,. \end{equation} We see that depending on whether $y>1$ or $y<1$, \textit{correlated changes} of $\delta \dot\omega^*$ with X-ray flux should have different signs. Indeed, for GX 1+4, Gonz\'alez-Gal\'an et al. (2011) found a positive correlation of the observed $\delta P$ with $\delta \dot M$ using \textit{Fermi} data. This means that there is a negative correlation between $\delta\dot\omega^*$ and $\delta\dot M$, suggesting $y<1$ in this source. \section{Determination of the neutron star magnetic field and other parameters in the settling accretion regime} \label{s:magfield} Most X-ray pulsars rotate close to their equilibrium periods, i.e. the average $\dot\omega^*=0$. Near the equilibrium, in the settling accretion regime, from \Eq{sd_eq2} we obtain: \beq{mu_eq} \mu_{30}^{(eq)}\approx \left[\frac{0.0986\cdot (0.034)^{(2-n)}\tilde\omega (1+(5/3)m_t^2)}{1-z/Z}\right]^\frac{11}{6n} \myfrac{P_*/100s}{P_b/10d}^\frac{11}{6n}\myfrac{\dot M_{16}(1+(5/3) m_t^2)\psi(5/3,m_t)}{\zeta}^\frac{1}{3} \myfrac{\sqrt{\delta}}{v_8}^\frac{11}{3} \end{equation} Once the magnetic field of the neutron star is estimated for any specific system, we can calculate the value of the Alfven radius $R_A$ [\Eq{RA}] and the important numerical coefficient $f(u)$ [\Eq{fu}].
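The dimensionless torque of \Eq{sdy} can be verified numerically: it should vanish at $y=y_0=((3+2n)/3)^{11/2n}$ and have zero derivative (the minimum of $\dot\omega^*$) at $y=1$. A small self-check for $n=2$:

```python
n = 2
y0 = ((3 + 2 * n) / 3) ** (11 / (2 * n))   # torque reversal point, Eq. (y0)

def g(y):
    """Dimensionless torque y^((3+2n)/11) * (1 - (y0/y)^(2n/11)) of Eq. (sdy)."""
    return y ** ((3 + 2 * n) / 11) * (1 - (y0 / y) ** (2 * n / 11))

# Numerical derivative at y = 1, where the torque should be at its minimum:
eps = 1e-6
dg_at_1 = (g(1 + eps) - g(1 - eps)) / (2 * eps)

print(f"y0 = {y0:.3f}, g(y0) = {g(y0):.2e}, g'(1) = {dg_at_1:.2e}")
```

For $n=2$ this gives $y_0=(7/3)^{11/4}\approx 10.3$, with the torque crossing zero there and its derivative vanishing at $y=1$.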
The coupling constant $K_1(\theta)$ is evaluated from \Eq{A(Z)}, in which the left-hand side can be independently calculated using \Eq{variations} measured at $y=y_0$ (where $\dot \omega^*=0$): \beq{Adet} A=\frac{I\left.\myfrac{\partial \dot \omega^*}{\partial y}\right|_{y_0}} {\myfrac{3+2n}{11}\frac{\dot M^\frac{3+2n}{11}}{y_0} \left(1-y_0^{-\frac{2n}{11}}\right)}\,. \end{equation} The coefficient $Z$ is then determined from \Eq{Zdef}. The dimensionless factor relating the toroidal and poloidal magnetic field is also important. Near the equilibrium we have $\omega_m-\omega^*=-(z/Z)\omega^*$, so \Eq{BtBp} can be written as \beq{BtBpnum} \frac{B_t}{B_p}=K_1(\theta)\myfrac{z}{Z}\myfrac{\omega^*}{\omega_K(R_A)}= \frac{10f(u)z}{\sqrt{2}(1+(5/3)m_t^2)\psi(5/3,m_t)}\myfrac{\omega^*}{\omega_K(R_A)}\,. \end{equation} Calculated values for all parameters and coefficients discussed above are listed for specific wind-fed pulsars in Table 1 below. \subsection{Low-luminosity X-ray pulsars with torque reversal} Let us consider X-ray pulsars with persistent spin-up and spin-down episodes. We shall assume that a convective shell is present during both spin-up (with higher luminosity) and spin-down (with lower luminosity), provided that at the spin-up stage the mass accretion rate does not exceed $\dot M_*$ as derived above. Suppose that in such pulsars we can measure the average values of $\dot \omega^*|_{su}$ and $\dot \omega^*|_{sd}$ as well as the average X-ray flux during spin-up and spin-down, respectively. Let the source be observed to spin down at some specific value $y_1<y_0$: \beq{suy1} I\dot \omega^*|_{sd}=A\dot M_{cr}^{\frac{3+2n}{11}}y_1^{\frac{3+2n}{11}} \left(1- \myfrac{y_0}{y_1}^{2n/11}\right)<0\,, \quad y_1<y_0=\myfrac{3+2n}{3}^{\frac{11}{2n}}\,.
\end{equation} At some $y_2>y_0$, the neutron star starts to spin up: \beq{suy2} I\dot \omega^*|_{su}=A\dot M_{cr}^{\frac{3+2n}{11}}y_2^{\frac{3+2n}{11}} \left(1- \myfrac{y_0}{y_2}^{2n/11}\right)>0\,, \quad y_2>y_0 \end{equation} Using the observed spin-down/spin-up ratio $\dot\omega^*|_{sd}/\dot\omega^*|_{su}=X$ and the corresponding X-ray luminosity ratio $L_x(sd)/L_x(su)=y_1/y_2=x<1$, we find by dividing \Eq{suy1} by \Eq{suy2} and substituting $y_2=y_1/x$ \beq{} |X|=x^{\frac{3+2n}{11}} \left| \frac{1-\myfrac{y_0}{y_1}^{2n/11}} {1-\myfrac{y_0}{(y_1/x)}^{2n/11}} \right|\,. \end{equation} Solving this equation, we obtain $y_1$ and $y_2$ through the observed quantities $|X|$ and $x$. We stress that so far we have not used the absolute values of the X-ray luminosity (the mass accretion rate), only the ratio of the X-ray fluxes during spin-up and spin-down. Here we should emphasize an important point. When the accretion rate through the shell exceeds some critical value $\dot M>\dot M^*$, the flow near the Alfven surface may become supersonic, a free-fall gap appears above the magnetosphere, and the angular momentum cannot be removed from the magnetosphere. In that case the settling accretion regime is no longer realized, and a shock is formed in the flow near the magnetosphere (the case studied by Burnard et al. 1982). Depending on the character of the inhomogeneities in the captured stellar wind, the specific angular momentum of the accreting matter can be prograde or retrograde, so alternating spin-up and spin-down episodes are possible. Thus the transition from the settling accretion regime (at low X-ray luminosities) to Bondi-Hoyle-Lyttleton accretion (at higher X-ray luminosities) can actually occur before $y$ reaches $y_0$.
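In practice, recovering $y_1$ and $y_2$ from the observed ratios $|X|$ and $x$ requires a numerical root solve. The sketch below (for $n=2$) uses simple bisection on the interval $xy_0<y_1<y_0$, where the predicted $|X|$ decreases monotonically from large values to zero; the demonstration values ($y_1=8$, $x=0.5$) are made up for a self-consistency check, not taken from any source:

```python
n = 2
y0 = ((3 + 2 * n) / 3) ** (11 / (2 * n))

def ratio_X(y1, x):
    """Predicted |spin-down / spin-up| torque ratio for flux ratio x = y1/y2 < 1."""
    num = 1 - (y0 / y1) ** (2 * n / 11)
    den = 1 - (y0 * x / y1) ** (2 * n / 11)   # uses y2 = y1 / x
    return x ** ((3 + 2 * n) / 11) * abs(num / den)

def solve_y1(X_abs, x, tol=1e-10):
    """Bisection for y1 in (x*y0, y0), where ratio_X decreases from +inf to 0."""
    lo, hi = x * y0 * (1 + 1e-9), y0 * (1 - 1e-9)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ratio_X(mid, x) > X_abs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Self-consistency check with a made-up spin-down state y1 = 8, x = 0.5:
X_demo = ratio_X(8.0, 0.5)
y1_rec = solve_y1(X_demo, 0.5)
print(f"|X| = {X_demo:.4f}, recovered y1 = {y1_rec:.4f}, y2 = {y1_rec / 0.5:.4f}")
```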
Indeed, by assuming a maximum possible value of the dimensionless settling velocity of matter, $f(u)=0.5$ (for angular momentum removal from the magnetosphere to be possible, see Appendix D for more detail), we find from \Eq{fu}: \beq{M*} \dot M^*_{16}\approx 3.7 \myfrac{\zeta}{(1+(5/3) m_t^2)\psi(5/3,m_t)}^{1/12}\mu_{30}^{1/4}\,. \end{equation} A similar estimate for the critical mass accretion rate for settling accretion can be obtained from a comparison of the characteristic Compton cooling time with the convective time at the Alfven radius. \section{Specific X-ray pulsars} In this Section, as an illustration of the possible applicability of our model to real sources, we consider three particular slowly rotating, moderately luminous X-ray pulsars: GX 301-2, Vela X-1, and GX 1+4. The first two pulsars are close to the equilibrium rotation of the neutron star, showing spin-up/spin-down excursions near the equilibrium frequency (apart from the spin-up/spin-down jumps, which, we think, may be due to episodic switch-ons of the strong coupling regime when the toroidal magnetic field component becomes comparable to the poloidal one, see Section \ref{s:strongcoupling}). The third one, GX 1+4, is a typical example of a pulsar displaying long-term spin-up/spin-down episodes. Over the last 30 years, it has shown a steady spin-down with luminosity-(anti)correlated frequency fluctuations (see Gonz\'alez-Gal\'an et al. 2011 for a more detailed discussion). Clearly, this pulsar cannot be considered to be in equilibrium. \subsection{GX 301-2} GX 301--2 (also known as 4U 1223--62) is a high-mass X-ray binary, consisting of a neutron star and an early-type B optical companion with mass $\simeq 40 M_\odot$ and radius $\simeq 60 R_\odot$. The binary period is 41.5 days (Koh et al. 1997). The neutron star is a $\sim680$ s X-ray pulsar (White et al. 1976), accreting from the strong wind of its companion ($\dot M_{loss} \sim 10^{-5} M_\odot$/yr, Kaper et al. 2006).
The photospheric escape velocity of the wind is $v_{esc}\approx 500$~km/s. The semi-major axis of the binary system is $a\approx 170 R_\odot$ and the orbital eccentricity $e\approx 0.46$. The wind terminal velocity was found by Kaper et al. (2006) to be about 300 km/s, smaller than the photospheric escape velocity. GX 301-2 shows strong short-term pulse period variability, which, as in many other wind-accreting pulsars, can be well described by a random walk model (de Kool \& Anzer 1993). Earlier observations between 1975 and 1984 showed a period of $\sim 700$~s, while in 1984 the source started to spin up (Nagase 1989). BATSE observed two rapid spin-up episodes, each lasting about 30 days, and it was suggested that the long-term spin-up trend may have been due entirely to similar brief episodes, as virtually no net changes in the frequency were found on long time-scales (Koh et al. 1997; Bildsten et al. 1997). The almost 10 years of spin-up were followed by a torque reversal in 1993 (Pravdo \& Ghosh 2001), after which the source has been continuously spinning down (La Barbera et al. 2005; Kreykenbohm et al. 2004; Doroshenko et al. 2010). Rapid spin-up episodes sometimes appear in the Fermi/GBM data on top of the long-term spin-down trend (Finger et al. 2011). It cannot be excluded that these rapid spin-up episodes, as well as the ones observed in the BATSE data, reflect a temporary entrance into the strong coupling regime, as discussed in Section 2.4.1. Cyclotron line measurements (La Barbera et al. 2005) yield the magnetic field estimate near the neutron star surface $B_0\approx 4.4\times 10^{12}$~G ($\mu=(1/2) B_0 R_0^3=2.2\times 10^{30}$~G cm$^3$ for the assumed neutron star radius $R_0=10$~km). \begin{figure*} \includegraphics{gx_kp1.eps} \caption{Torque-luminosity correlation in GX 301-2, $\dot\omega^*$ as a function of BATSE data (20-40 keV pulsed flux) near the equilibrium frequency, see Doroshenko et al. (2010).
The assumed X-ray flux at equilibrium (in terms of the dimensionless parameter $y$) is also shown by the vertical dotted line.} \label{f:gx301} \end{figure*} In Fig. \ref{f:gx301} we have plotted $\dot\omega^*$ as a function of the observed pulsed flux (20-40~keV) according to BATSE data (see Doroshenko et al. 2010 for more detail). To obtain the magnetic field estimate and other parameters (first of all, the coefficient $A$, see \Eq{Adet}) as described above in Section \ref{s:magfield}, we need to know the value of $\dot M$ and the derivative $\partial \dot \omega^*/\partial \dot M$ or $\partial \dot \omega^*/\partial y$. The estimate of $\dot M$ can be inferred from the X-ray flux provided the distance to the source is known, and generally this is a major uncertainty. We shall assume that near equilibrium a hot quasi-spherical shell exists in this pulsar, i.e. that the accretion rate is $\simeq 3\times 10^{16}$~g/s, not higher than the critical value $\dot M_*\simeq 4\times 10^{16}$~g/s [\Eq{M*}]. While the absolute value of the mass accretion rate is necessary to estimate the magnetic field according to \Eq{mu_eq} (the dependence is, however, rather weak, $\sim \dot M^{1/3}$), the derivative $\partial \dot \omega^*/\partial y$ can be derived from the $\dot \omega^*$ -- X-ray flux plot, since in the first approximation the accretion rate is proportional to the observed pulsed X-ray flux. Near the equilibrium (the torque reversal point with $\dot \omega^*=0$), we find from a linear fit in Fig. \ref{f:gx301} $\partial \dot \omega^*/\partial y \approx 4 \times 10^{-13}$~rad/s$^2$. The obtained parameters ($\mu$, $Z$, $K_1(\theta)$, etc.) for this pulsar are listed in Table 1. We note that the magnetic field estimate resulting from our model for $n=2$ (boldfaced in Table 1) is fairly close to the value inferred from the cyclotron line measurements.
We also note that for the case $n=3/2$ the coupling constants $Z$ and $K_1(\theta)$ turn out to be unrealistically large and the derived magnetic field is very small, suggesting that assuming anisotropic turbulence is more realistic than using isotropic turbulence with the viscosity described by the Prandtl law. \subsection{Vela X-1} \begin{figure*} \includegraphics{vela_kp1.eps} \caption{The same as in Fig. \ref{f:gx301} for Vela X-1 (Doroshenko 2011, private communication). } \label{f:velaX1} \end{figure*} Vela X-1 (=4U 0900-40) is the brightest persistent accretion-powered pulsar in the 20-50 keV energy band, with an average luminosity of $L_{x} \approx 4\times10^{36}$~erg/s (Nagase 1989). It consists of a massive neutron star (1.88 $M_\odot$, Quaintrell et al. 2003) and the B0.5Ib supergiant HD 77581, which eclipses the neutron star every orbital cycle of $\sim 8.964$ d (van Kerkwijk et al. 1995). The neutron star was discovered as an X-ray pulsar with a spin period of $\sim$283 s (Rappaport 1975), which has remained almost constant since the discovery of the source. The optical companion has a mass and radius of $\sim 23$ $M_\odot$ and $\sim 30$ $R_\odot$, respectively (van Kerkwijk et al. 1995). The photospheric escape velocity is $v_{esc}\approx 540$~km/s. The orbital separation is $a\approx 50 R_\odot$ and the orbital eccentricity $e\approx 0.1$. The primary almost fills its Roche lobe (as also evidenced by the presence of elliptical variations in the optical light curve, Bochkarev et al. (1975)). The mass-loss rate from the primary star is $10^{-6}$ $M_\odot$/yr (Nagase et al. 1986) via a fast wind with a terminal velocity of $\sim$1100 km/s (Watanabe et al. 2006), which is typical for this class.
Despite the fact that the terminal velocity of the wind is rather large, the compactness of the system makes it impossible for the wind to reach this velocity before interacting with the neutron star, so the relative velocity of the wind with respect to the neutron star is rather low, $\sim 700$~km/s. Cyclotron line measurements (Staubert 2003) yield the magnetic field estimate $B_0\approx 3\times 10^{12}$~G ($\mu=1.5\times 10^{30}$~G cm$^3$ for the assumed neutron star radius 10~km). We shall assume that in this pulsar $\dot M\simeq 3\times 10^{16}$~g/s (again for the existence of the shell to be possible). In Fig. \ref{f:velaX1} we have plotted $\dot\omega^*$ as a function of the observed pulsed flux (20-40~keV) according to BATSE data (Doroshenko 2011, private communication). As in the case of GX 301-2, from a linear fit we find at the spin-up/spin-down transition point $\partial \dot \omega^*/\partial y \approx 5.5\times 10^{-13}$~rad/s$^2$. The obtained parameters for Vela X-1 are listed in Table 1. Again as for GX 301-2, the magnetic field estimate given by our model for an almost iso-angular-momentum rotation law ($n=2$, boldfaced in Table 1) is close to the value inferred from the cyclotron line measurements. \subsection{GX 1+4} GX 1+4 was the first source to be identified as a symbiotic binary containing a neutron star (Davidsen, Malina \& Bowyer 1977). The pulse period is $\sim 140$~s and the donor is an MIII giant (Davidsen et al. 1977). The system has an orbital period of 1161 days (Hinkle et al. 2006), making it the widest known LMXB by at least one order of magnitude. The donor is far from filling its Roche lobe and accretion onto the neutron star is by capture of the stellar wind of the companion. The system has a very interesting spin history. During the 1970's it was spinning up at the fastest rate ($\dot \omega_{su} \sim 3.8\cdot 10^{-11}$~rad/s$^2$) among the known X-ray pulsars at the time (e.g. Nagase 1989).
After several years of non-detections in the early 1980's, it reappeared again, now spinning down at a rate similar in magnitude to that of the previous spin-up. This spin reversal has been interpreted in terms of a retrograde accretion disc forming in the system (Makishima et al. 1988, Dotani et al. 1989, Chakrabarty et al. 1997). A detailed spin-down history of the source is discussed in the recent paper by Gonz\'alez-Gal\'an et al. (2011). Using our model, this behavior can, however, be readily explained by quasi-spherical accretion. \begin{figure*} \includegraphics[width=0.8\textwidth]{fig14sep.eps} \caption{Pulsar frequency (upper panel), deviation of the frequency from the linear fit (middle panel) and pulsed flux in GX 1+4 from \textit{Fermi}/GBM data (M. Finger, private communication; see also Gonz\'alez-Gal\'an et al. (2011)).} \label{f:gx14_f} \end{figure*} As the pulsar in GX 1+4 is not in equilibrium, we cannot directly use our method to estimate the magnetic field of the neutron star as described in Section \ref{s:magfield}. However, as GX 1+4 is currently experiencing a long-term spin-down trend, the first (spin-up) term in \Eq{sd_eq2} must be smaller than the second (spin-down) term, which yields a lower limit on the value of the magnetic field (see Table 1). Let us use \Eq{variations} to quantitatively explain the observed anti-correlation between the pulsar frequency fluctuations and the X-ray flux, as observed in GX 1+4 \citep{g1}. We use a fragment of four-day average {\it Fermi}/GBM data on GX 1+4 (M. Finger, private communication; see also Gonz\'alez-Gal\'an et al. (2011)). The pulsar is currently observed at the steady spin-down stage with a mean $\dot \omega_{sd}\approx -2.34\times 10^{-11}$~rad/s$^2$ (the upper panel). In the middle panel of Fig. \ref{f:gx14_f}, deviations from the linear fit are shown. Note that the frequency excursions around the mean value are a few $\mu$Hz, while the pulsar frequency itself is much higher, a few mHz.
This means that the frequency derivative is negative at all points within the time interval shown, i.e. no occasional spin-ups were observed, even at the highest X-ray flux levels. Specifically, let us consider the prominent pulsar frequency change observed between MJD 55100 and MJD 55200, over around $\Delta t=80$ days. During this time period, the frequency of the pulsar decreased by $\Delta\omega^*\approx -3.6\times 10^{-5}$~rad/s (see Fig. \ref{f:gx14_f}). From here we find $\delta \dot \omega^*(obs)=\Delta \omega^*/\Delta t\approx -5.2\times 10^{-12}$~rad/s$^2$. Thus, the observed fractional change in the pulsar spin-down rate is $(\delta \dot \omega^*/|\dot\omega^*|)_{obs}\approx -0.2$. On the other hand, by dividing \Eq{variations} by \Eq{sdy} we find the expected relative fluctuations of $\dot \omega^*$ at a given mean accretion rate (or dimensionless X-ray luminosity $y$): \beq{} \frac{\delta \dot \omega^*}{|\dot\omega^*|}_{theor}= \frac{3+2n}{11}\frac{\delta \dot M}{\dot M} \left[ \frac{1-y^{-\frac{2n}{11}}}{1-\myfrac{y_0}{y}^{\frac{2n}{11}}} \right] \end{equation} From Fig. \ref{f:gx14_f} (bottom panel), the range of the fluctuations relative to the mean value is $(\delta \dot M/\dot M)\simeq (0.6-0.2)/0.4\simeq 1$. Then, for the assumed $n=2$, the expected amplitude of the relative frequency derivative fluctuation would match the observed value if $y\simeq 0.2-0.3$. Importantly, we find $y<1$, as must be the case for the observed negative sign of the torque-luminosity correlations. Note here that the drop in flux level during about 20 days at around MJD 55140-55160 should be translated into a decrease in the spin-down rate $|\dot \omega^*|$, which is clearly seen in the upper panel of Fig. \ref{f:gx14_f}. A more detailed analysis using the entire {\it Fermi} data-set should be performed. Further, note that the short-term spin-up episodes, sometimes observed on top of the steady spin-down behaviour (at about MJD 49700, see Fig. 2 in Chakrabarty et al.
(1997)) are correlated with an enhancement of the X-ray flux, in contrast to the negative frequency-flux correlation discussed above. During these short spin-ups, $\dot\omega^*$ is about half the average $\dot\omega^*_{su}$ observed during the steady spin-up state of GX 1+4. The X-ray luminosity during these episodic spin-ups is approximately five times larger than the mean X-ray luminosity during the steady spin-down. We remind the reader that once $\dot M>\dot M_*$, a free-fall gap appears above the magnetosphere, and the neutron star can only spin up. When the X-ray flux drops again, the settling accretion regime is reestablished and the neutron star resumes spinning down. \begin{table*} \label{T2} \centering \caption{Parameters for the pulsars in Section 6. References for the observed spin periods, binary periods and spin-down rate (for GX 1+4) are given in the text, as well as discussions of the values used for the wind velocities with respect to the neutron star. The parameters $Z$, K$_1(\Theta)$ and $f(u)$ are defined in Section 2.4.2. Numerical estimates are given for dimensionless parameters $\delta=1, \zeta=1$, $\tilde\omega=1$, $\gamma=5/3$ and without turbulence ($m_t=0$).
The numbers in boldface are the preferred values for a near iso-angular-momentum rotation law.} $$ \begin{array}{lcccccc} \hline Pulsar & \multicolumn{2}{c}{\rm GX 301-2} & \multicolumn{2}{c}{\rm Vela X-1} & \multicolumn{2}{c}{\rm GX 1+4} \\ \hline \multicolumn{7}{c}{\hbox{Measured parameters}}\\ \hline P_* {\rm(s)} & \multicolumn{2}{c}{680} & \multicolumn{2}{c}{283} & \multicolumn{2}{c}{140}\\ P_B {\rm(d)} & \multicolumn{2}{c}{41.5} & \multicolumn{2}{c}{8.96} & \multicolumn{2}{c}{1161}\\ v_{w} {\rm(km/s)} & \multicolumn{2}{c}{300} & \multicolumn{2}{c}{700} & \multicolumn{2}{c}{200}\\ \frac{\partial \dot \omega}{\partial y} \arrowvert_{y_{0}} {\rm(rad/s^2)} & \multicolumn{2}{c}{4\cdot10^{-13}} & \multicolumn{2}{c}{5.5\cdot10^{-13}} & \\ \dot M_{16}\ (\dot M/10^{16}\,{\rm g/s}) &\multicolumn{2}{c}{3} & \multicolumn{2}{c}{3} & \multicolumn{2}{c}{1}\\ \hline \multicolumn{7}{c}{\hbox{Derived parameters}}\\ \hline & {\bf n=2} & n=3/2 & {\bf n=2} & n=3/2 & {\bf n=2} & n=3/2\\ \hline \mu_{30} & {\bf 2.7} & 0.1 & {\bf 1.8} & 0.16 & {\bf >1.17} &>0.02 \\ f(u) & {\bf 0.42} & 0.57 & {\bf 0.43} & 0.54\\ K_1(\Theta) & {\bf 39} & 3700 & {\bf 36} & 1150\\ Z& {\bf 13} & 910 & {\bf 12} & 300\\ B_t/B_p & {\bf 0.1} & 0.01 & {\bf 0.2} & 0.03\\ R_A{\rm(cm)}& {\bf 2\cdot 10^9} & 3\cdot 10^8 &{\bf 1.6\cdot 10^9}& 4.2\cdot 10^8\\ \omega^*/\omega_K(R_A)& {\bf 0.06} & 0.004 &{\bf 0.1}& 0.01\\ \hline \end{array} $$ \end{table*} \section{Discussion} \subsection{Physical conditions inside the shell} For an accretion shell to be formed around the neutron star magnetosphere, it is necessary that the matter crossing the bow shock does not cool down too rapidly (otherwise it starts to fall freely). This means that the radiative cooling time $t_{cool}$ must be longer than the characteristic time of plasma motion. The plasma heats up in the strong shock to the temperature \beq{T_ps} T_{ps}=\frac{3}{16}\mu_m\frac{ v_w^2}{\cal R}\approx 1.36\times 10^5 [\hbox{K}]\myfrac{v_w}{100 \hbox{km/s}}^2\,.
\end{equation} The radiative cooling time of the plasma is \beq{t_cool} t_{cool}=\frac{3kT}{2\mu_m n_e \Lambda} \end{equation} where $\rho$ is the plasma density, $n_e=Y_e\rho/m_p$ is the electron number density ($\mu_m=0.6$ and $Y_e\approx 0.8$ for fully ionized plasma with solar abundance); $\Lambda$ is the cooling function, which can be approximated as \begin{equation} \Lambda (T)=\left\{ \begin{array}{ll} 0, & T<10^4 \, {\rm K} \\ 1.0\times 10^{-24}\, T^{0.55}, & 10^4 \, {\rm K} <T<10^5 \, {\rm K} \\ 6.2\times 10^{-19}\, T^{-0.6}, & 10^5 \, {\rm K} <T<4\times 10^7 \, {\rm K} \\ 2.5\times 10^{-27}\, T^{0.5}, & T>4\times 10^7 \, {\rm K} \end{array} \right. \label{eqn:lam} \end{equation} (Raymond, Cox \& Smith 1976; Cowie, McKee \& Ostriker 1981). Compton cooling becomes effective from the radius where the gas temperature $T$, determined by the hydrostatic formula \Eq{hse_sol}, is lower than the X-ray Compton temperature $T_x$. The Compton cooling time (see \Eq{t_comp}) is: \beq{t_C1} t_{C}\approx 1060[\hbox{s}] \dot M_{16}^{-1}\myfrac{R}{10^{10}\hbox{cm}}^2\,. \end{equation} Above the radius where $T_x=T$, Compton heating dominates. Taking the actual temperature close to the adiabatic one [\Eq{hse_sol}], we find $R_x\approx 2\times 10^{10}$~cm. We note that both the Compton and photoionization heating processes are controlled by the photoionization parameter $\xi$ (Tarter et al. 1969, Hatchett et al. 1976) \beq{ksi} \xi=\frac{L_x}{n_eR^2}\,. \end{equation} Over most of the accretion flow, $n\sim R^{-3/2}$, so $\xi\sim R^{-1/2}$, and by the mass continuity equation $\xi$ is independent of the X-ray luminosity. For characteristic values we find: \beq{kxi_n} \xi\approx 5\times 10^5 f(u) R_{10}^{-1/2}\,. \end{equation} If Compton processes were effective everywhere, this high value of the parameter $\xi$ would imply that the plasma is Compton-heated up to keV temperatures out to very large distances $\sim 10^{12}$~cm.
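The piecewise cooling function \Eq{eqn:lam} and the post-shock temperature \Eq{T_ps} are easy to encode for quick estimates; a minimal sketch (CGS units for $\Lambda$, wind velocity in km/s):

```python
def Lambda(T):
    """Cooling function, erg cm^3 s^-1: the piecewise fit quoted in the text
    (Raymond, Cox & Smith 1976; Cowie, McKee & Ostriker 1981)."""
    if T < 1e4:
        return 0.0
    if T < 1e5:
        return 1.0e-24 * T ** 0.55
    if T < 4e7:
        return 6.2e-19 * T ** -0.6
    return 2.5e-27 * T ** 0.5

def T_postshock(v_w_kms):
    """Post-shock temperature (K) behind a strong shock, with v_w in km/s."""
    return 1.36e5 * (v_w_kms / 100.0) ** 2

T_300 = T_postshock(300.0)   # a 300 km/s wind shocks to ~1.2e6 K
print(T_300, Lambda(T_300))
```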
However, at large distances the Compton heating time becomes longer than the characteristic time of gas accretion: \beq{} \frac{t_{C}}{t_{accr}}=\frac{t_{C}f(u)u_{ff}}{R}\approx 20 f(u) \dot M_{16}^{-1}R_{10}^{1/2}\,, \end{equation} which shows that Compton heating is ineffective. The gas temperature is determined by photoionization heating only and the gas can only be heated up to $T_{max}\approx 5\times 10^5$~K (Tarter et al. 1969), which is substantially lower than $T_x\sim 3$~keV. The sound velocity corresponding to $T_{max}$ is approximately 80 km/s. The effective gravitational capture radius corresponding to the sound velocity of the gas in the photoionization-heated zone is \beq{R_BC} R_{B}^*=\frac{2GM}{c_s^2}=\frac{2GM}{\gamma{\cal R} T_{max}/\mu_m}\approx 3.5\times 10^{12}\hbox{cm} \myfrac{T_{max}}{5\times 10^5\hbox{K}}^{-1}\,. \end{equation} Everywhere up to the bow shock photoionization keeps the temperature at a value $\simeq T_{max}$. If the stellar wind velocity exceeds $80$~km/s, a standard bow shock is formed at the Bondi radius with a post-shock temperature given by \Eq{T_ps}. If the stellar wind velocity is lower than this value, the shock disappears and quasi-spherical accretion occurs from $R_B^*$. The photoionization heating time at the effective Bondi radius $3\times 10^{12}$~cm is \beq{} t_{pi}\approx \frac{(3/2)kT_{max}/\mu_m}{(h\nu_{eff}-\zeta_{eff})n_\gamma \sigma_{eff}c} \approx 2\times 10^4 [\hbox{s}] \dot M_{16}^{-1}\,. \end{equation} (here $h\nu_{eff}\sim 10$~keV is the characteristic photon energy, $\zeta$ is the effective photoionization potential, $\sigma_{eff}\sim 10^{-24}$~cm$^2$ is the typical photoionization cross-section, $n_\gamma=L/(4\pi R^2 h\nu_{eff} c)$ is the photon number density). The photoionization to accretion time ratio at the effective Bondi radius is then \beq{} \frac{t_{pi}}{t_{accr}}\approx 0.07 f(u) \dot M_{16}^{-1}\,. 
\end{equation} At wind velocities $v_w>80$ km/s the bow shock stands at the classical Bondi radius $R_B$ inside the effective Bondi radius $R_B^*$ determined by \Eq{R_BC}. The cooling time of the shocked plasma at $R_B$ expressed through the wind velocity $v_w$ is: \beq{t_cool1} t_{cool}\approx 4.7\times 10^4 [\hbox{s}]\dot M_{16}^{-1} v_7^{0.2}\,. \end{equation} The photoionization heating time in the post-shock region can also be expressed through the stellar wind velocity: \beq{} t_{pi}\approx 3.5 \times 10^4 [\hbox{s}]\dot M_{16}^{-1} v_7^{-4}\,. \end{equation} The comparison of these two timescales implies that at low velocities radiative cooling is important and the regime of free-fall accretion with conservation of specific angular momentum is realized. So, at low wind velocities the plasma cools down and starts to fall freely. As the cold plasma approaches the gravitating center, photoionization heating becomes important and rapidly heats up the plasma to $T_{max}\approx 5\times 10^5$~K. Should this occur at a radius where $T_{max}<GM/({\cal R}R)$, the plasma continues its free fall down to the magnetosphere, still with the temperature $T_{max}$, with the subsequent formation of a shock above the magnetosphere. However, if $T_{max}$ is above the adiabatic temperatures at this radius, the settling accretion regime will be established even for low wind velocities. For high stellar wind velocities $v_w\gtrsim 100$~km/s, the post-shock temperature is higher than $T_{max}$, photoionization is unimportant, and the settling accretion regime is established if the radiation cooling time is longer than the accretion time. From a comparison of these timescales, we find the critical accretion rate as a function of the wind velocity below which the settling accretion regime is possible: \beq{} \dot M_{16}^{**}\lesssim 0.12 v_7^{3.2}\,. \end{equation} Here we stress that the critical accretion rate $\dot M^{**}$ differs from $\dot M^*$ derived earlier.
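The regime selection just described can be condensed into a few lines. The scalings are those quoted above, with $v_7 = v_w/(100$ km/s$)$ and $\dot M_{16} = \dot M/10^{16}$ g/s (the function names are mine, and this is an order-of-magnitude sketch only):

```python
# Which accretion regime? Compare the radiative cooling and
# photoionization heating timescales in the post-shock region,
# using the scalings quoted in the text. Times in seconds.
def t_cool(v7, mdot16=1.0):
    """Radiative cooling time of the shocked plasma at the Bondi radius."""
    return 4.7e4 * v7**0.2 / mdot16

def t_pi(v7, mdot16=1.0):
    """Photoionization heating time in the post-shock region."""
    return 3.5e4 * v7**(-4.0) / mdot16

def mdot16_settling_limit(v7):
    """Critical Mdot/1e16 g/s below which settling accretion is possible."""
    return 0.12 * v7**3.2

# Slower winds: cooling wins and the matter falls freely.
# Faster winds: cooling is slow and a settling shell can form.
print(t_cool(0.9) < t_pi(0.9))    # v_w = 90 km/s
print(t_cool(2.0) > t_pi(2.0))    # v_w = 200 km/s
print(mdot16_settling_limit(1.0))
```

The two timescales cross near $v_7\approx 1$, consistent with the statement that settling accretion requires winds of roughly 100 km/s and faster at moderate accretion rates.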
At $\dot M>\dot M^{**}$, the plasma rapidly cools down in the gravitational capture region and free-fall accretion begins (unless photoionization heats up the plasma above the adiabatic value at some radius), while at $\dot M>\dot M^* \simeq 4\times 10^{16}$~g/s determined by \Eq{M*} a free-fall gap appears immediately above the neutron star magnetosphere. \subsection{On the possibility of the propeller regime} The very slow rotation of the neutron stars in GX 1+4, GX 301-2 and Vela X-1 ($\omega^*<\omega_K(R_A)$) makes it hard to establish the propeller regime, in which matter is ejected with parabolic velocities from the magnetosphere during spin-down episodes. Let us therefore start by estimating the important ratio of viscous tensions ($\sim B_tB_p$) to the magnetic pressure ($\sim B_p^2$) at the magnetospheric boundary. This ratio is proportional to $B_t/B_p$ (see \Eq{BtBpnum}) and is always much smaller than 1 (see Table 1), i.e. only large-scale convective motions with the characteristic hierarchy of eddies scaled with radius can be established in the shell. When $\omega^*>\omega_K(R_A)$, the propeller regime (without accretion) must set in. In that case the maximum possible braking torque is $\sim -\mu^2/R_A^3$ due to the strong coupling between the plasma and the magnetic field. Note that in the propeller state, interaction of the plasma with the magnetic field is in the strong coupling regime, i.e. the toroidal magnetic field component $B_t$ is comparable to the poloidal one $B_p$. It cannot be excluded that a hot iso-angular-momentum envelope could exist in this case as well, which would then remove angular momentum from the rotating magnetosphere. If the characteristic cooling time of the gas in the envelope is short in comparison to the infall time of the matter, the shell disappears and one can expect the formation of a `storaging' thin Keplerian disc around the neutron star magnetosphere (Sunyaev \& Shakura 1977).
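The claim that these pulsars rotate far too slowly for the propeller regime can be verified directly from the Table 1 values. A sketch in CGS units (a neutron star mass of 1.4 $M_\odot$ is assumed here; the function name is mine):

```python
import math

G = 6.674e-8              # gravitational constant, cm^3 g^-1 s^-2
M_NS = 1.4 * 1.989e33     # assumed neutron star mass, g

def omega_ratio(p_spin, r_alfven):
    """omega*/omega_K at the Alfven radius; > 1 would mean propeller."""
    omega_star = 2.0 * math.pi / p_spin
    omega_kepler = math.sqrt(G * M_NS / r_alfven**3)
    return omega_star / omega_kepler

# GX 301-2: P* = 680 s and R_A ~ 2e9 cm (Table 1) give ~0.06,
# matching the tabulated omega*/omega_K(R_A), so the propeller
# condition omega* > omega_K(R_A) is far from satisfied.
print(omega_ratio(680.0, 2e9))
```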
There is no accretion of matter through such a disc. It only serves to remove angular momentum from the magnetosphere. \subsection{Effects of the hot shell on the X-ray energy and power spectrum} The spectra of X-ray pulsars are dominated by emission generated in the accretion column. The hot optically thin shell produces its own thermal emission, but even if all gravitational energy were released in the shell, the ratio of the X-ray luminosity from the shell to that of the accretion column would be about the ratio of the magnetosphere radius to the NS radius, i.e. one percent or less. In reality, it is much smaller. The shell should scatter X-ray radiation from the accretion column, but for this effect to be substantial, the Comptonization parameter $y$ must be of the order of one. The Thomson depth in the shell is, however, very small. Indeed, from the mass continuity equation, \Eq{RA} for the Alfv\'en radius, and \Eq{fu} for the factor $f(u)$, we get: $$ \tau_T=\int_{R_A}^{R_B}n_e(R)\sigma_T dR \approx 3.2\times 10^{-3} \dot M_{16}^{8/11}\mu_{30}^{-2/11}\,. $$ Therefore, for the temperature near the magnetosphere [\Eq{hse_sol}] the parameter $y$ is $$ y=\frac{4kT}{m_ec^2}\tau_T\approx 2.4\times 10^{-3}\,. $$ This means that the X-ray spectrum of the accretion column should not be significantly affected by scattering in the hot shell. The large-scale convective motions in the shell introduce an intrinsic time-scale of the order of the free-fall time that could give rise to features (e.g. QPOs) in the power spectrum of variability. QPOs were reported in some X-ray pulsars (see Marykutty et al. 2010 and references therein). However, the expected frequency of the QPOs arising in our model would be of the order of mHz, much lower than those reported. A stronger effect can be the appearance of a dynamical instability of the shell on this time scale due to increased Compton cooling and hence increased mass accretion rate in the shell.
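Returning to the scattering estimate above, the smallness of the effect follows directly from the two quoted scalings. In the sketch below the ratio $kT/(m_ec^2)$ near the magnetosphere is back-solved from the quoted $y\approx 2.4\times 10^{-3}$, i.e. it is an assumption of the sketch rather than an independent estimate:

```python
# Thomson depth of the shell and the Comptonization parameter y,
# using the two scalings quoted in the text.
def tau_thomson(mdot16, mu30):
    """Thomson optical depth of the shell (scaling from the text)."""
    return 3.2e-3 * mdot16**(8.0 / 11.0) * mu30**(-2.0 / 11.0)

def compton_y(kT_over_mec2, tau):
    """y = 4 kT/(m_e c^2) * tau for an optically thin non-relativistic gas."""
    return 4.0 * kT_over_mec2 * tau

tau = tau_thomson(1.0, 1.0)    # ~3.2e-3 for Mdot16 = mu30 = 1
y = compton_y(0.1875, tau)     # kT/(m_e c^2) = 0.1875 assumed (see lead-in)
print(tau, y)                  # y ~ 2.4e-3 << 1: scattering negligible
```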
This may result in a complete collapse of the shell, producing an X-ray outburst with duration similar to the free-fall time scale of the shell ($\sim 1000$~s). Such a transient behaviour is observed in supergiant fast X-ray transients (SFXTs) (see Ducci et al. 2010). This interesting issue depends on the specific parameters of the shell and needs to be further investigated. \subsection{Can accretion discs (prograde or retrograde) be present in these pulsars?} The analysis of real pulsars carried out earlier suggested that in a convective shell an iso-angular-momentum distribution is the most plausible. Therefore, we shall below consider only this case, i.e. the rotation law $\omega\sim R^{-2}$. As follows from \Eq{sd_eq1}, at $\dot \omega^*=0$ the equilibrium angular frequency of the neutron star is \beq{equilib} \omega^*_{eq}=\omega_B\frac{1}{1-z/Z}\myfrac{R_B}{R_A}^2\,. \end{equation} We stress that such an equilibrium in our model is possible only when a shell is present. At high accretion rates $\dot M>\dot M_*\simeq 4\times 10^{16}$~g/s accretion proceeds in the free-fall regime (with no shell present). Using \Eq{mu_eq}, the equilibrium period for quasi-spherical settling accretion can be recast in the form \beq{P_eq} P_{eq}\approx 1000 [\hbox{s}]\mu_{30}^{12/11}\myfrac{P_b}{10\hbox{d}} \myfrac{\zeta}{(1+(5/3) m_t^2)\psi(5/3,m_t)\dot M_{16}}^{4/11}\myfrac{v_8}{\sqrt{\delta}}^4 \frac{(1-z/Z)}{\tilde \omega(1+(5/3)m_t^2)}\,. \end{equation} For standard disc accretion, the equilibrium period is \beq{P_eqd} P_{eq,d}\approx 7\hbox{s} \mu_{30}^{6/7}\dot M_{16}^{-3/7}\,, \end{equation} so within the disc picture the long periods observed in some X-ray pulsars can be explained only by assuming a very high magnetic field of the neutron star. Retrograde accretion discs are also discussed in the literature (see, e.g., Nelson et al. (1997) and references therein).
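The contrast between the two equilibrium periods above is the key point. A sketch keeping only the leading scalings; all the order-unity dimensionless factors of the full quasi-spherical formula (the $\zeta$, $\psi$, $m_t$, $z/Z$, $\delta$, $\tilde\omega$ terms) are lumped into a single `factor` argument, which is an assumption of this sketch:

```python
# Equilibrium spin periods (seconds): standard disc accretion versus the
# leading scaling of quasi-spherical settling accretion.
def p_eq_disc(mu30, mdot16):
    """Equilibrium period for standard disc accretion."""
    return 7.0 * mu30**(6.0 / 7.0) * mdot16**(-3.0 / 7.0)

def p_eq_settling(mu30, p_orb_days, v8, factor=1.0):
    """Leading scaling of the settling-accretion equilibrium period.
    All order-unity dimensionless factors are lumped into `factor`."""
    return 1000.0 * factor * mu30**(12.0 / 11.0) * (p_orb_days / 10.0) * v8**4

# For the same standard field (mu30 = 1) the settling equilibrium period
# is orders of magnitude longer than the disc one, which is why long spin
# periods do not require magnetar-strength fields in this model.
print(p_eq_disc(1.0, 1.0))
print(p_eq_settling(1.0, 10.0, 1.0))
```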
Torque reversals produced by prograde/retrograde discs can in principle lead to very long periods for X-ray pulsars even with standard magnetic fields. Retrograde discs can be formed due to inhomogeneities in the captured stellar wind (Ruffert 1997, 1999). This might be the case at high accretion rates when a hot quasi-spherical shell cannot exist. In the case of GX 1+4, however, it is highly unlikely to observe a retrograde disc on a time scale much longer than the orbital period (see a more detailed discussion of this issue in Gonz\'alez-Gal\'an et al. (2011)). In the case of GX 301-2 and Vela X-1, the observed positive torque-luminosity correlation (see Figs. \ref{f:gx301} and \ref{f:velaX1}) rules out a retrograde disc as well. To conclude the discussion, we should mention that real systems (including those considered here) demonstrate a complex quasi-stationary behaviour with dips, outbursts, etc. Such phenomena are beyond the scope of this paper and definitely deserve further observational and theoretical studies. \section{Conclusions} In this paper we have presented a theoretical model for quasi-spherical subsonic accretion onto slowly rotating magnetized neutron stars. In this model the accreting matter is gravitationally captured from the stellar wind of the optical companion and subsonically settles down onto the rotating magnetosphere forming an extended quasi-static shell. This shell mediates the angular momentum removal from the rotating neutron star magnetosphere during spin-down states by large-scale convective motions. A detailed analysis and comparison with observations of two specific X-ray pulsars, GX 301-2 and Vela X-1, demonstrating torque-luminosity correlations near the equilibrium neutron star spin period shows that most likely strongly anisotropic convective motions are established, with an almost iso-angular-momentum distribution of rotational velocities $\omega\sim R^{-2}$.
A statistical analysis of long-period X-ray pulsars with Be companions in the SMC (Chashkina \& Popov 2011) also favored the rotation law $\omega\sim R^{-2}$. The accretion rate through the shell is determined by the ability of the plasma to enter the magnetosphere. The settling regime of accretion, which allows angular momentum removal from the neutron star magnetosphere, can be realized for moderate accretion rates $\dot M< \dot M_*\simeq 4\times 10^{16}$~g/s. At higher accretion rates a free-fall gap above the neutron star magnetosphere appears due to rapid Compton cooling, and accretion becomes highly non-stationary. From observations of the spin-up/spin-down rates (the angular rotation frequency derivative $\dot \omega^*$, or $\partial\dot\omega^*/\partial\dot M$ near the torque reversal) of long-period X-ray pulsars with known orbital periods it is possible to determine the main dimensionless parameters of the model, as well as to estimate the magnetic field of the neutron star. Such an analysis revealed good agreement between the magnetic field estimates for the pulsars GX 301-2 and Vela X-1 obtained using our model and those derived from cyclotron line measurements. In our model, long-term spin-up/spin-down as observed in some X-ray pulsars can be quantitatively explained by a change in the mean mass accretion rate onto the neutron star (and the corresponding mean X-ray luminosity). Clearly, these changes are related to the stellar wind properties. The model also predicts the specific behaviour of the variations in $\delta \dot \omega^*$, observed on top of a steady spin-up or spin-down, as a function of mass accretion rate fluctuations $\delta \dot M$.
There is a critical accretion rate $\dot M_{cr}$ below which an anti-correlation of $\delta \dot \omega^*$ with $\delta \dot M$ should occur (the case of GX 1+4 at the steady spin-down state currently observed), and above which $\delta \dot\omega^*$ should correlate with $\delta \dot M$ fluctuations (the case of Vela X-1, GX 301-2, and GX 1+4 in the steady spin-up state). The model explains quantitatively the relative amplitude and the sign of the observed frequency fluctuations in GX 1+4.
Quest for Celestia – Steven James May 19, 2019 Josh Olds Quest for Celestia: A Reimagining of the Pilgrim's Progress by Steven James Published by Living Ink Books on March 28, 2012 After a chance encounter with a man whom 16-year-old Kadin believes is a wizard, his life and health begin to deteriorate. Out of desperation he embarks on an epic journey to a land no one in his village believes exists, to be healed of a disease no one thinks he has. His quest for truth leads him to a fantastical world of witches, dragons, giants, danger, and deception. Tracked by an evil lord and accompanied by only one friend, Kadin must face his greatest fear to find the healing he longs for most. Quest for Celestia is a complete re-imagining of one of the bestselling books of all time, first published more than 300 years ago—The Pilgrim's Progress. If you think you know the story, think again. Twists lurk around every corner. The journey begins now. Destiny awakes in the hills. Kadin's life used to be normal. After all, that's what life was like in the city of Abaddon. But after a strange encounter with a wizard, Kadin's life will never be the same. First thing, the wizard told him that all the fairy stories about Celestia were actually true. Second, the wizard gave him a Book of Blood that was supposed to prove it. Both were anathema to those living in Abaddon. Kadin tried to ignore it, tried to get rid of it, tried to forget about his encounter with the wizard, but the lure of the Book proved too strong. Now he has a black lump on the base of his neck, a scaly reptilian growth he can't ignore. Finally, the truth comes out. Kadin is kicked out of his home—but is it his home anymore?—and begins his trek toward Celestia. With Quest for Celestia, Steven James has undertaken a noble and weighty task.
This complete reimagining of Pilgrim's Progress seeks to update the symbolism and allusion, the writing style and sense of suspense, all while keeping the heart and form of the original story. A daunting task, considering Pilgrim's Progress is one of the most bestselling and beloved books of all time. Yet James manages it masterfully. I can think of no other author I would trust more to update this story. James' writing ability, storytelling prowess, knowledge and reverence for Story, strong faith, and creative mind make him the perfect person for this reimagining. Those familiar with Pilgrim's Progress will undoubtedly see the parallels in the journey, yet there are some differences. For instance, the character of Leira provides an amalgam of Bunyan's characters Faithful and Hopeful, adding the element of a potential love interest. Apollyon is represented by the Baron Dorjan and, if memory serves, Dorjan plays a more prominent role in Quest for Celestia than his Bunyan counterpart. Far more than just a paraphrase or an update, Quest for Celestia maintains the heart of Pilgrim's Progress while forging its own story. New themes are added to bring relevance to modern day struggles, the back story of the Celestial City is fleshed out more thoroughly, and as a whole the story is treated more as a fantasy and less as an allegory. The result is a work that manages to be fresh, relevant, and vital. Those who have read Pilgrim's Progress will find themselves enjoying the connections while being engaged with James' unique story. Those unfamiliar will find themselves immersed into an epic fantasy quest as two young people seek their salvation. I would especially recommend this book to preteens and young teens, especially boys. It's a great story for getting them involved in reading, in learning about faith, and possibly even laying foundations for introducing them to classic literature.
Steven James is one of my favorite storytellers because of his diversity and creativity, and Quest for Celestia is proof that I'm justified in that sentiment.
Bob Bucko Jr. – You Deserve a Name The lone sax pierces the night like it's in a Shane Black action noir, and "You Deserve a Name" kicks off just right. It's gotta be this way, because over the next hour of this 2xC32 cassette release (housed in a clamshell case), Bob Bucko Jr. rakes the muck, gums the shoes, honks the horn, and presses buttons on various devices and keyboards, thereby ensuring – ensuring! – that tension is ratcheted and threads of storyline are tugged and followed to their logical conclusions. All of this while perfecting the dialogue between his instruments. Cheeky AND efficient! "Stay busy or die trying," quoth BBJr. on the back of the clamshell, and truer words have not been recently spoken. Becoming somewhat of a mantra for 2020, this sentiment is a rallying cry for the quarantined, and in April 2020, when this beast was recorded, we were all a little stir crazy. But never fear, Bucko set the table with a spread that included effects pedals, samplers, a child's toy xylophone, a bunch of other stuff, and then set about trying to make sense of this whole mess with the tools he had at his disposal. Even several months down the road, 2020 has remained a mystery, although one with distinct characteristics; you could probably call it a mystery with big, hairy, stinky, stupid, obvious questions that are easily answered but remain obscured because we're all a bunch of big, fat, hairy, stupid apes. Thank god for BBJr.'s nuance to all that. Thank god for his restraint too – we need some of that up in here, what with all our stumbling and shouting and dribbling liquids from our mouths and heads. "You Deserve a Name" is an exquisitely slow burn, with BBJr. teasing out atmosphere and tones that hover in conscious reach like there's always a gradual realization of something good just around the bend of the next minute. And while it's all spectacular and often sublime, I'm still a sucker for those lonesome sax salutes. 
But as a fragment of a wilder, woollier whole, they're even more interesting, their juxtaposition among the more experimental sonic flourishes like pieces to a puzzle finally fitting together – even if improperly. There are rhythmic disturbances, inconclusive oscillations – everything points toward deepening ambiguity, even when it totally shouldn't. This is what you do! Here is where you go! BBJr.'s having none of that – he's just trying to make sense of everything and get through to the other side, with as little scathing as possible upon his poor body and psyche. "You Deserve a Name" expresses all that quite nicely. Available in an edition of 50 from Bucko's own Personal Archives.
Dave & Buster's Officially Coming to Lafayette Ellen Published: January 6, 2023 It's official! Dave & Buster's is coming to Lafayette. KTDY reported that Dave & Buster's was in the negotiation phases of opening a new location in Lafayette back on June 6, 2022. According to land reports that were published today, Friday, January 6, 2023, Dave & Buster's has officially purchased the 5 acres of land at 201 Spring Farm Road. It is being reported that the 5 acres of land in the Ambassador Town Center cost the entertainment business $3,066,624. This will be the second location of Dave & Buster's in Louisiana, with the first location and only location on Poydras Street in New Orleans. Dave & Buster's is planning to open over 200 new units in North America and currently has 150 locations already opened. Dave & Buster's is the latest business that plans to open up next to Costco. A few other businesses that will be coming to the area are Jet Coffee, Jersey Mike's Subs, Kasai Steakhouse and Sushi, a new hotel, a high-end apartment complex, and a Discount Tire. There are also negotiations in progress to get TopGolf in the Costco Center as well.
Marvin Nathaniel Webster (April 13, 1952 – April 4, 2009) was an American professional basketball player. He played one season in the American Basketball Association (ABA) and nine in the National Basketball Association (NBA) with the Denver Nuggets (1975–77), Seattle SuperSonics (1977–78), New York Knickerbockers (1978–84) and Milwaukee Bucks (1986–87). His nickname was The Human Eraser because of his impressive shot blocking talent. College career Born in Baltimore, Maryland, the son of a Baltimore preacher, Webster attended Edmondson High School in the city. A four-year basketball letterman at Morgan State University, he earned the nickname "The Human Eraser" as a junior when he averaged eight blocked shots a game while helping the Bears capture the 1974 NCAA Division II Championship. He averaged 21 points and 22.4 rebounds and was named Division II player of the year. Webster still holds eight career school records: 1,990 points, 2,267 rebounds, 19.5 rebounds per game, 785 field goals made, 424 free throws made, 644 free throws attempted, 722 blocks and 110 games started. His 740 rebounds in 1974 and 2,267 career total are still second all-time in NCAA history in their respective categories. He was named to the NCAA Division II Men's Basketball 50th Anniversary All-Elite Eight Team in 2006. 
College statistics

Season | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
1971–72 | Morgan State | 26 | – | – | .453 | – | .687 | 16.1 | – | – | – | 13.2
1972–73 | Morgan State | 28 | – | – | .514 | – | .686 | 23.2 | – | – | – | 18.5
1973–74 | Morgan State | 33 | – | – | .545 | – | .697 | 22.4 | – | – | – | 21.4
1974–75 | Morgan State | 27 | – | – | .562 | – | .490 | 17.0 | – | – | – | 15.7
Career | | 114 | – | – | .524 | – | .658 | 19.9 | – | – | – | 17.5

Professional career

Webster was selected in the first round of both the NBA and ABA Drafts in 1975 (third overall by the Atlanta Hawks, first overall by the Denver Nuggets, respectively). After signing with the Nuggets, he was diagnosed with a form of hepatitis, and played only 38 games as a rookie in 1975–76. A 7' 1" center, Webster helped the Nuggets win the 1976–77 NBA Midwest Division and the SuperSonics the 1977–78 NBA Western Conference title. His finest season was his single year with Seattle, in which he averaged 14.0 points, 12.6 rebounds, and 2.0 blocks per game. He raised his performance in the SuperSonics' 22-game playoff run that year, averaging 16.1 points, 13.1 rebounds, and more than 2.6 blocks per game. Webster still holds the SuperSonics' record for rebounds in one half with 21. In 1978, the Knicks signed Webster as a free agent. As compensation, the NBA awarded the SuperSonics the playing rights to power forward Lonnie Shelton and the Knicks' 1979 first-round draft pick. In his first season with the Knicks, Webster averaged 11.3 points per game and 10.9 rebounds per game. Webster never again reached double figures in either category in the NBA after that. Webster missed the 1984–85 season and the start of the 1985–86 season with hepatitis before retiring from the Knicks.
Webster played briefly in the Continental Basketball Association, and later with the Milwaukee Bucks during the 1986–87 season. Webster was found dead in a Tulsa, Oklahoma hotel room on April 4, 2009. He was 56 years old. It is believed that he died of coronary artery disease.

Career statistics

ABA

Regular season

Season | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
1975–76 | Denver | 38 | – | 10.5 | .458 | .000 | .705 | 4.6 | 0.8 | 0.2 | 1.4 | 4.3
Career | | 38 | – | 10.5 | .458 | .000 | .705 | 4.6 | 0.8 | 0.2 | 1.4 | 4.3

Playoffs

Season | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
1975–76 | Denver | 13* | – | 11.9 | .420 | .000 | .536 | 5.5 | 0.7 | 0.1 | 1.1 | 4.4
Career | | 13 | – | 11.9 | .420 | .000 | .536 | 5.5 | 0.7 | 0.1 | 1.1 | 4.4

NBA

Regular season

Season | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
1976–77 | Denver | 80 | – | 16.0 | .495 | – | .650 | 6.1 | 0.8 | 0.3 | 1.5 | 6.7
1977–78 | Seattle | 82 | – | 35.5 | .502 | – | .629 | 12.6 | 2.5 | 0.6 | 2.0 | 14.0
1978–79 | New York | 60 | – | 33.8 | .473 | – | .573 | 10.9 | 2.9 | 0.4 | 1.9 | 11.3
1979–80 | New York | 20 | – | 14.9 | .481 | .000 | .750 | 4.0 | 0.5 | 0.2 | 0.6 | 4.4
1980–81 | New York | 82 | – | 20.8 | .466 | .250 | .638 | 5.7 | 0.9 | 0.3 | 1.2 | 5.2
1981–82 | New York | 82 | 32 | 23.0 | .491 | .000 | .635 | 6.0 | 1.2 | 0.3 | 1.1 | 6.2
1982–83 | New York | 82 | 0 | 18.0 | .508 | .000 | .589 | 5.4 | 0.6 | 0.4 | 1.6 | 5.4
1983–84 | New York | 76 | 5 | 17.0 | .469 | .000 | .564 | 4.8 | 0.7 | 0.4 | 1.3 | 3.8
1986–87 | Milwaukee | 15 | 0 | 6.8 | .526 | 1.000 | .750 | 1.7 | 0.2 | 0.2 | 0.5 | 1.8
Career | | 579 | 37 | 22.4 | .489 | .333 | .617 | 7.0 | 1.2 | 0.4 | 1.4 | 7.1

Playoffs

Season | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
1976–77 | Denver | 6 | – | 16.0 | .500 | – | .667 | 6.7 | 0.5 | 0.3 | 1.8 | 5.0
1977–78 | Seattle | 22* | – | 41.1 | .489 | – | .675 | 13.1 | 2.6 | 0.3 | 2.6 | 16.1
1980–81 | New York | 2 | – | 31.5 | .500 | .000 | .000 | 5.0 | 0.5 | 0.0 | 0.5 | 6.0
1982–83 | New York | 6 | – | 19.2 | .389 | .000 | .636 | 4.7 | 0.5 | 0.0 | 1.2 | 4.7
1983–84 | New York | 12 | – | 17.0 | .483 | .000 | .600 | 4.7 | 0.3 | 0.3 | 1.4 | 3.1
Career | | 48 | – | 28.8 | .485 | .000 | .647 | 8.8 | 1.4 | 0.3 | 2.0 | 9.6

* Led the league

Personal life

Webster was married to Mederia Webster. Webster's son, Marvin Webster Jr., was recruited to play basketball at Temple University, but died at age 19 from a heart attack prior to his sophomore season. Later in his life, Webster lived in Metuchen, New Jersey.

Popular culture references

Webster is one of five 1970s Seattle SuperSonics players whose names are featured on characters in "The Exterminator," the third episode of Season 1 of iZombie. The other four are Freddie Brown, Gus Williams, Wally Walker and Don Watts.
we, then, ought to receive such, that fellow-workers we may become to the truth. because of this, if I may come, I will cause him to remember his works that he doth, with evil words prating against us; and not content with these, neither doth he himself receive the brethren, and those intending he doth forbid, and out of the assembly he doth cast. to Demetrius testimony hath been given by all, and by the truth itself, and we also -- we do testify, and ye have known that our testimony is true. and I hope straightway to see thee, and mouth to mouth we shall speak. Peace to thee! salute thee do the friends; be saluting the friends by name.
Pär Fabian Lagerkvist (Växjö, 23 May 1891 – Stockholm, 11 July 1974) was a Swedish writer. He turned very early against the religiosity of his family and embraced a belief in the world of the future, publishing his first texts that were oppositional in both their ideas and their choice of subject. In his critical and theoretical writings he attacked degenerate naturalism, demanding of literature the simplicity of folk tradition; his models were the directness and concision of the Bible, the Quran, the Nordic sagas and ancient Egyptian lyric poetry. He was the first Swedish expressionist: in poems and plays he gave voice to the chaotic world of the First World War era, drawing on the late works of Strindberg. In the interwar period brighter tones broke through in his work, only to darken again in the 1930s, when Lagerkvist emerged as a sharp opponent of dictatorship and inhumanity. In 1933, the year Hitler came to power, Lagerkvist wrote "The Hangman", an unmistakable protest against violence. Constantly preoccupied with the problem of evil, in his most significant prose work, "The Dwarf", he presented the embodiment of all negative human traits, using Renaissance figures to symbolize modern conditions. In 1951 he received the Nobel Prize in Literature. Works: "Iron and Men", "The Eternal Smile", "Evil Tales", "Barabbas", "The Sibyl", "Pilgrim at Sea", "The Last Man", "The Man Without a Soul", "Victory in the Dark".
\section{Introduction\label{sec1}} A real-valued function defined on an interval $(a,b)$ is called \textit{matrix monotone of order $N$} if for any pair $A,B$ of $N\times N$ Hermitian matrices with spectrum in $(a,b)$ the implication $A\le B\Longrightarrow f(A)\le f(B)$ holds, i.e.\ $f$ preserves the positive semidefinite ordering. A function is called \textit{operator monotone} if it is matrix monotone of every order. One of the central objects in the theory of matrix monotone functions is the so-called L\"owner matrix. Given any integer $N>1$, and any set of $N$ finite, distinct real numbers $x_i$ in $(a,b)$, one constructs a L\"owner matrix of $f$ as the $N\times N$ matrix $L_f$ of divided differences $$ L_f := \left( \frac{f(x_i)-f(x_j)}{x_i-x_j} \right)_{i,j=1}^N. $$ For the diagonal elements, $i=j$, a limit has to be taken, so that the diagonal elements are given by the first derivatives $f'(x_i)$. (A necessary condition for $f$ being matrix monotone of order at least 2 is that it should be continuous, in fact even continuously differentiable (\cite{donoghue} p.\ 79), hence its first derivative should exist. For $N=1$, this is strictly speaking not needed, but matrix monotonicity then reduces to ordinary monotonicity anyway.) According to a celebrated result by L\"owner, $f$ is a matrix monotone function on $(a,b)$ of order $N$ if and only if any $N\times N$ L\"owner matrix $L_f$ is positive semidefinite, for any choice of $x_i$ in $(a,b)$. For a thorough introduction to matrix monotone functions we refer to the monograph \cite{donoghue}, and to \cite{bhatia} for a more concise introduction. In (\cite{bhatia2}, p.\ 195) R.~Bhatia raised the question whether there is a good characterisation of real-valued functions $g(x)$ defined on $(a,b)$, with $a\ge 0$, for which every matrix of the form $$ K_g:=\left(\frac{g(x_i)+g(x_j)}{x_i+x_j}\right)_{i,j=1}^N, $$ is positive semidefinite, with $x_i$ distinct real numbers in $(a,b)$. 
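Both constructions are easy to experiment with numerically. The sketch below (Python with NumPy assumed; the helper names `lowner` and `anti_lowner` are ours, not from the paper) builds $L_f$ for the operator monotone function $f(x)=\sqrt{x}$ and the matrix $K_g$ for $g(x)=\sqrt{x}$, and checks positive semidefiniteness through the smallest eigenvalue:

```python
import numpy as np

def lowner(f, df, xs):
    """Löwner matrix of divided differences of f at the points xs.

    Off-diagonal entries are (f(x_i) - f(x_j)) / (x_i - x_j);
    the diagonal carries the limiting values f'(x_i).
    """
    xs = np.asarray(xs, dtype=float)
    fx = f(xs)
    num = fx[:, None] - fx[None, :]
    den = xs[:, None] - xs[None, :]
    L = np.divide(num, den, out=np.zeros_like(den), where=den != 0)
    np.fill_diagonal(L, df(xs))
    return L

def anti_lowner(g, xs):
    """Bhatia's matrix K_g: plus signs in place of minus signs."""
    xs = np.asarray(xs, dtype=float)
    gx = g(xs)
    return (gx[:, None] + gx[None, :]) / (xs[:, None] + xs[None, :])

xs = [0.5, 1.3, 2.7, 4.1]                              # distinct points in (0, inf)
L = lowner(np.sqrt, lambda x: 0.5 / np.sqrt(x), xs)    # f(x) = sqrt(x), operator monotone
K = anti_lowner(np.sqrt, xs)                           # g(x) = sqrt(x)

# Both matrices should be positive semidefinite (smallest eigenvalue >= 0 up to rounding).
print(np.linalg.eigvalsh(L).min() > -1e-12, np.linalg.eigvalsh(K).min() > -1e-12)
```

For $f(x)=\sqrt{x}$ both checks print `True`, consistent with Löwner's criterion and with the known fact, recalled below, that non-negative operator monotone functions are anti-Löwner.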
That is, $K_g$ is akin to a L\"owner matrix, but has the minus signs replaced by plus signs. In this paper, we will call these matrices \textit{anti-L\"owner matrices}, and functions for which all $N\times N$ anti-L\"owner matrices are positive semidefinite will be called \textit{anti-L\"owner functions of order $N$}. Likewise, we call functions \textit{anti-L\"owner functions} if they satisfy this positivity criterion for all values of $N$. It goes without saying that to be anti-L\"owner $g$ must first of all be non-negative, as can be seen from the trivial case $N=1$. For $N=1$ this is already the complete answer; to avoid trivialities we will henceforth assume $N\ge 2$. It is also straightforward to show that for $N\ge2$, $g$ must be continuous, just as for matrix monotone functions of order $N\ge 2$; see Proposition \ref{prop:cont} below. It has already been known for some time that every non-negative operator monotone function on $(0,+\infty)$ is an anti-L\"owner function, and so is every non-negative operator monotone decreasing function \cite{Kwong}. This easily follows (see, e.g.\ Theorem 1 in \cite{Kwong}) from exploiting the well-known integral representation \cite{bhatia} \be f(x)=\alpha+\beta x+\int_0^\infty \frac{x}{t+x}d\mu(t)\label{eq:intmono} \ee for non-negative operator monotone functions on $(0,+\infty)$, with $\alpha,\beta\ge0$\footnote{Note that for negative $\alpha$, $f$ is also operator monotone, but positivity of $f$ requires $\alpha\ge0$.} and $\mu$ a positive Borel measure on $(0,+\infty)$ such that the given integral converges. The statement for non-negative operator monotone decreasing functions follows easily from this by noting that the function $f(x)$ is anti-L\"owner if and only if $1/f(x)$ is anti-L\"owner too. \section{Main results} In this paper, we obtain a complete answer to Bhatia's question: \begin{theorem}\label{th:1} Let $g$ be a real-valued function, defined and finite on $(a,b)$, with $0\le a<b$.
Let $N$ be any integer, at least 2. If $g$ is an anti-L\"owner function of order $2N$ on $(a,b)$ then $x\mapsto g(\sqrt{x})\sqrt{x}$ is a non-negative matrix monotone function of order $N$ on $(a^2,b^2)$, and $x\mapsto g(\sqrt{x})/\sqrt{x}$ is a non-negative matrix monotone decreasing function of order $N$ on $(a^2,b^2)$. \end{theorem} Theorem \ref{th:1} has applications in the study of Lyapunov-type equations and also answers a question by Kwong \cite{Kwong}, who studied conditions on the function $g$ such that the solution $X$ of equation $AX+XA=g(A)B+Bg(A)$ is positive definite for all positive definite $A$ and $B$. Kwong pointed out in \cite{Kwong2} that it suffices to consider matrices $A$ that are diagonal, with $B$ equal to the all-ones matrix ($B_{ij}=1$). In that case the solution of the equation reduces to $X$ being an anti-L\"owner matrix with the $x_i$ equal to the diagonal elements of $A$. Thus, our Theorem \ref{th:1} also yields the answer to Kwong's question. As a direct corollary of Theorem \ref{th:1}, we immediately get an integral representation for anti-L\"owner functions $g(x)$ (of all orders) on $(0,+\infty)$. From equation (\ref{eq:intmono}) we obtain $g(\sqrt{x})=\alpha/\sqrt{x}+\beta\sqrt{x}+\int_0^\infty \frac{\sqrt{x}}{t+x}d\mu(t)$, hence \be g(x) = \frac{\alpha}{x}+\beta x+\int_0^\infty \frac{x}{t+x^2}d\mu(t),\label{eq:intanti} \ee with $\alpha,\beta\ge0$ and $\mu$ a positive Borel measure such that the integral exists. Within this restricted setting, the sufficiency part of Theorem \ref{th:1} is easy to prove, as it suffices to check each of the terms in the integral representation (\ref{eq:intanti}). To wit, one only needs to prove that the functions $g(x)=x$ and $g(x)=x/(t+x^2)$ (for $t\ge0$) are anti-L\"owner, as all other functions concerned are positive linear combinations of these extremal functions. This is trivial for $g(x)=x$, because then $K_g=(1)_{i,j}$, which is clearly positive semidefinite (and rank 1). 
Secondly, for $g(x)=x/(t+x^2)$, we have \beas K_g &=& \left(\frac{x_i/(t+x_i^2)+x_j/(t+x_j^2)}{x_i+x_j}\right)_{i,j} \\ &=& \left(\frac{x_i(t+x_j^2) + x_j(t+x_i^2)}{(t+x_i^2)(x_i+x_j)(t+x_j^2)}\right)_{i,j} \\ &=& \left(\frac{t+x_i x_j}{(t+x_i^2)(t+x_j^2)}\right)_{i,j}, \eeas which is congruent to the matrix $(t+x_i x_j)_{i,j}$ and therefore positive semidefinite as well (and in general rank 2). The hard part is to prove necessity, i.e.\ that there are no other anti-L\"owner functions than those with the given integral representation (\ref{eq:intanti}). Furthermore, there seems to be no obvious approach even to the sufficiency part in the more general setting of fixed $N$ where no integral representation is known. An important observation that shows the way out, however, is hidden in the very statement of Theorem \ref{th:1}, as it hints at a one-to-one correspondence between anti-L\"owner functions and non-negative operator monotone functions. This is no coincidence, and our method of proof will exploit an even deeper correspondence between L\"owner matrices and anti-L\"owner matrices, which is made apparent in Theorem \ref{th:2} below. This is good news, as there will be no need to develop from scratch a completely new theory in parallel with L\"owner's. \begin{theorem}\label{th:2} Let $N$ be any integer and $x_1,\ldots,x_N$ a sequence of distinct positive real numbers contained in the interval $(a,b)$, $0\le a<b$. For any continuous real-valued function $g$ defined on $(a,b)$, let $L$ and $K$ be its L\"owner and anti-L\"owner matrix of order $N$ on the given points $x_1,\ldots,x_N$, respectively, and, for $i,j\in\{0,1\}$ and $\epsilon>0$ small enough that all points $x_k+i\epsilon$ lie in $(a,b)$ and are distinct, let $K_{ij}$ be the matrix $$ K_{ij}=\left[\frac{g(x_k+i\epsilon)+g(x_l+j\epsilon)}{(x_k+i\epsilon)+(x_l+j\epsilon)}\right]_{k,l=1}^N $$ and let $L_{ij}$ be the matrix $$ L_{ij}=\left[\frac{g(x_k+i\epsilon)-g(x_l+j\epsilon)}{(x_k+i\epsilon)-(x_l+j\epsilon)}\right]_{k,l=1}^N.
$$ Then the following are equivalent: \begin{enumerate} \item the $2\times 2$ block matrix $\twomat{K_{00}}{K_{01}}{K_{10}}{K_{11}}$ is positive semidefinite; \item the $2\times 2$ block matrix $\twomat{K_{00}}{L_{01}}{L_{10}}{K_{11}}$ is positive semidefinite. \end{enumerate} \end{theorem} In the remainder of this paper we present the proofs of these theorems. \section{Proofs} We start with a simple, but nevertheless essential proposition. \begin{proposition}\label{prop:cont} Let $g$ be a positive real-valued function on $(a,b)$, with $0\le a<b$. If $g$ is an anti-L\"owner function of order at least $2$, then $g$ is continuous. \end{proposition} \textit{Proof.} This follows from consideration of the $2\times 2$ anti-L\"owner matrices in the points $x_1=x,x_2=x+\epsilon$ ($0\le a<x<b$) and letting $\epsilon$ tend to 0. Positive semidefiniteness of the anti-L\"owner matrix requires non-negativity of its determinant: $$\frac{g(x)g(x+\epsilon)}{x(x+\epsilon)}-\frac{(g(x)+g(x+\epsilon))^2}{(2x+\epsilon)^2}\ge 0.$$ After some calculation, one finds that this requires $|(g(x+\epsilon)-g(x))/\epsilon|\le g(x)/x$ for all $\epsilon>0$, whence $g$ satisfies a Lipschitz condition on any closed bounded subinterval of $(a,b)$ and is therefore continuous there. \qed The main technical result on which our proof is based is the following proposition. \begin{proposition}\label{prop:1} Fix an integer $N$. Let $g=(g_1,\ldots,g_N)$ and $x=(x_1,\ldots,x_N)$ be positive vectors, where all $x_i$ are distinct, and $s=(s_1,\ldots,s_N)$ a real vector with $s_i=\pm 1$. Then the sign of $\det Z_N$, where $$ Z_N=\left(\frac{s_i g_i+s_j g_j}{s_i x_i+s_j x_j}\right)_{i,j=1}^N, $$ is independent of the signs of the $s_i$'s. \end{proposition} As an illustration of this proposition, we will prove the easiest non-trivial case $N=2$ (the case $N=1$ is trivial as $s_1$ cancels out entirely).
The given determinant is \beas \det Z_2 &=& \det\twomat{\frac{g_1}{x_1}}{\frac{s_1 g_1+s_2 g_2}{s_1 x_1+s_2 x_2}} {\frac{s_1 g_1+s_2 g_2}{s_1 x_1+s_2 x_2}}{\frac{g_2}{x_2}}\\ &=& \frac{g_1 g_2}{x_1 x_2}-\frac{(s_1 g_1+s_2 g_2)^2}{(s_1 x_1+s_2 x_2)^2} \\ &=& \frac{g_1 g_2(s_1^2 x_1^2+s_2^2 x_2^2) - x_1 x_2(s_1^2 g_1^2+s_2^2 g_2^2)}{x_1 x_2(s_1 x_1+s_2 x_2)^2}\\ &=& \frac{g_1 g_2(x_1^2+x_2^2) - x_1 x_2(g_1^2+g_2^2)}{x_1 x_2(s_1 x_1+s_2 x_2)^2}. \eeas One sees that the numerator is independent of the signs of the $s_i$'s, while the denominator is always positive. Hence, the sign of this determinant is independent of the signs of the $s_i$'s. For small values of $N$ one can easily verify that the determinant can always be written as a rational function where the numerator is a polynomial in which the $s_i$'s only appear to even powers, and where the denominator is always positive. This observation provided the inspiration for the following simple proof of Proposition \ref{prop:1} (for every value of $N$). \textit{Proof of Proposition \ref{prop:1}.}\\ Clearly, once we prove that the sign of $\det Z_N$ does not change under a single sign change of $s_i$, the general statement of the proposition follows, by changing the signs of the $s_i$'s one by one. W.l.o.g.\ we consider sign changes of $s_1$ only. The idea of the proof is to apply a \textit{partial} Gaussian elimination on $Z_N$, only bringing its first column in upper-triangular form. For each $i>1$ we subtract $\frac{x_1}{g_1}\,\,\frac{s_1 g_1+g_i}{s_1 x_1+x_i}$ times row 1 from row $i$. As is well-known, this operation does not change the determinant. 
The resulting matrix is of the form $$ Z_N' =\left( \begin{array}{cc} \frac{g_1}{x_1} & b \\ 0 & X \end{array} \right) $$ where $b$ is the first row of $Z_N$ (except its element $(1,1)$) and $X$ is an $(N-1)\times(N-1)$ matrix with elements ($i,j>1$) \bea X_{i,j} &=& \frac{g_i+g_j}{x_i+x_j}-\frac{x_1}{g_1}\,\,\frac{s_1 g_1+g_i}{s_1 x_1+x_i}\,\, \frac{s_1 g_1+g_j}{s_1 x_1+x_j} \nonumber\\ &=& \frac{g_1(g_i+g_j)(x_1^2+x_i x_j)-x_1(x_i+x_j)(g_1^2+g_i g_j)}{g_1(x_i+x_j)(s_1 x_1+x_i)(s_1 x_1+x_j)}. \label{eq:XIJ} \eea In the last line we have used the fact that $s_1^2=1$. From expression (\ref{eq:XIJ}) it is clear that $X$ can be written as a matrix product $X=DYD$, where $D$ is a diagonal matrix with diagonal elements $1/(s_1 x_1+x_i)$ ($i>1$), and where $Y$ is independent of $s_1$. It follows that the determinant of $Z_N$ is given by $$ \det Z_N = \frac{g_1}{x_1} \det(DYD) = \frac{g_1}{x_1} \det(Y) \det(D)^2. $$ As the only factor that depends on the sign of $s_1$ appears to even power, and is therefore non-negative, we have proven that the sign of $\det Z_N$ does not depend on the sign of $s_1$. In a similar way, we can show that the sign of $\det Z_N$ does not depend on any of the signs of the $s_i$'s. This ends the proof. \qed \textit{Proof of Theorem \ref{th:2}.} It is a simple corollary of Proposition \ref{prop:1} that, under the conditions stated, the positive semidefiniteness of $Z_N$ for a given choice of signs of the $s_i$'s implies positive semidefiniteness for any other choice. Indeed, according to Sylvester's criterion, a symmetric matrix is positive semidefinite if and only if all its principal minors are non-negative. In the case of $Z_N$, the principal $k\times k$ minors are determinants of the form $\det Z_k$ ($k=1,\ldots,N$) and according to Proposition \ref{prop:1}, the signs of these determinants are independent of the signs of the $s_i$'s appearing in them. 
Consider now, in particular, the case where $N$ is even, say $N=2n$, and the $x_i$ are given by $$(x_1,\ldots,x_n,x_{n+1},\ldots,x_{2n}) = (y_1,\ldots,y_n,y_1+\epsilon,\ldots,y_n+\epsilon)$$ for any positive $\epsilon$ small enough that no two of the $x_i$ coincide. Let also $g_i = g(x_i)$. We will consider two choices for the $s_i$. Firstly, we set all $s_i=+1$. We then get the matrix $$ K' = \twomat{K_{00}}{K_{01}}{K_{10}}{K_{11}}. $$ Secondly, with $s_i=+1$ for $i\le n$ and $s_i=-1$ for $i>n$, we instead get $$ K''=\twomat{K_{00}}{L_{01}}{L_{10}}{K_{11}}. $$ As, according to Proposition \ref{prop:1}, these matrices have the same signature (same signs of the corresponding principal minors) this yields the equivalence of Theorem \ref{th:2}. \qed It is now an easy matter to prove Theorem \ref{th:1}. \textit{Proof of Theorem \ref{th:1}.}\\ Let $N$ be a fixed integer, at least 2. By Proposition \ref{prop:cont}, if $g$ is an anti-L\"owner function of order at least 2, then $g$ is continuous. Conversely, if the function $x\mapsto g(\sqrt{x}) \sqrt{x}$ is a non-negative matrix monotone function of order $N$, then surely $g$ must be continuous too. Thus, in any case, Theorem \ref{th:2} applies to $g$. Let $g$ be an anti-L\"owner function of order $2N$. Thus $K'$ is positive semidefinite, and hence, by Theorem \ref{th:2}, so is $K''$. In the limit $\epsilon\to0$ we then find that $K_g+L_g$ and $K_g-L_g$ are positive semidefinite, since a block matrix of the form $\twomat{K}{L}{L}{K}$ is positive semidefinite if and only if both $K+L$ and $K-L$ are. A simple calculation shows that $K_g+L_g$ is equal to \beas K_g+L_g &=& \left(\frac{g(x_i)+g(x_j)}{x_i+x_j}+\frac{g(x_i)-g(x_j)}{x_i-x_j}\right)_{i,j=1}^N \\ &=& 2\left(\frac{x_i g(x_i)-x_j g(x_j)}{x_i^2-x_j^2}\right)_{i,j=1}^N, \eeas which is (up to an irrelevant factor of 2) the L\"owner matrix of the function $x\mapsto g(\sqrt{x}) \sqrt{x}$ in the points $x_i^2$. Hence, the function $x\mapsto g(\sqrt{x}) \sqrt{x}$ is a non-negative matrix monotone function of order $N$ on $(a^2,b^2)$.
In a similar way we find \beas K_g-L_g &=& -2\left(x_i \,\,\frac{g(x_i)/x_i-g(x_j)/x_j}{x_i^2-x_j^2}\,\,x_j\right)_{i,j=1}^N, \eeas which shows that the function $x\mapsto g(\sqrt{x})/ \sqrt{x}$ is a non-negative matrix monotone \textit{decreasing} function of order $N$ on $(a^2,b^2)$. \qed \section*{Acknowledgments} The author gratefully acknowledges the hospitality of the University of Ulm, Germany, and of the Institut Mittag-Leffler, Stockholm, where parts of this work have been done. Also many thanks to R.~Bhatia and F.~Hiai, for comments on an earlier version of the manuscript. \bibliographystyle{amsplain}
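As a closing sanity check, the two computations at the heart of the proofs (the congruence for $g(x)=x/(t+x^2)$ and the identity relating $K_g+L_g$ to a L\"owner matrix at the squared points) can be verified numerically. A sketch in Python, NumPy assumed and helper names ours, taking $t=1$ so that $g(\sqrt{y})\sqrt{y}=y/(1+y)$:

```python
import numpy as np

xs = np.array([0.5, 1.3, 2.7, 4.1])   # distinct positive points
t = 1.0
g  = lambda x: x / (t + x**2)
dg = lambda x: (t - x**2) / (t + x**2)**2   # g'(x), needed for diagonal entries

def lowner(f, df, pts):
    """Löwner matrix of f at pts; diagonal entries are f'(x_i)."""
    n = len(pts)
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = df(pts[i]) if i == j else (f(pts[i]) - f(pts[j])) / (pts[i] - pts[j])
    return M

# Anti-Löwner matrix K_g and Löwner matrix L_g at the same points.
K = (g(xs)[:, None] + g(xs)[None, :]) / (xs[:, None] + xs[None, :])
L = lowner(g, dg, xs)

# (1) The congruence: K_g equals D (t + x_i x_j) D with D = diag(1/(t + x_i^2)),
#     hence K_g is positive semidefinite of rank 2.
d = 1.0 / (t + xs**2)
M = d[:, None] * (t + np.outer(xs, xs)) * d[None, :]
print(np.allclose(K, M), np.linalg.matrix_rank(M))

# (2) K_g + L_g is twice the Löwner matrix of h(y) = g(sqrt(y)) sqrt(y) = y/(t + y)
#     evaluated at the squared points x_i^2.
h  = lambda y: y / (t + y)
dh = lambda y: t / (t + y)**2
print(np.allclose(K + L, 2 * lowner(h, dh, xs**2)))
```

Both `allclose` checks print `True` and the reported rank is 2, matching the displayed derivations; the derivative formulas for $g$ and $h$ are plain calculus and enter only through the diagonal entries.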
Kwaku Frimpong (born 6 November 2002) is a Welsh professional footballer who plays as a midfielder for Metropolitan Police, on loan from club AFC Wimbledon. He had played on loan at Leatherhead, Carshalton Athletic, Dartford and Potters Bar Town. Career Frimpong joined the Academy at AFC Wimbledon at the age of sixteen upon the recommendation of Ben Thatcher, after being released by Crystal Palace academy and signed his first professional contract with the club in July 2021. He enjoyed a loan spell at Leatherhead in the first half of the 2021–22 season, which Wimbledon loan manager Michael Hamilton described as "testing... including having to fight for a place in the team, adjust to different managers, and display good resilience". He joined another Isthmian League Premier Division club Carshalton Athletic on loan on 3 January 2022. He scored his first goal at a senior level on 22 January, in a 5–4 defeat at Cray Wanderers. However he tore his retinaculum in February, which saw him sidelined for two months, before he returned to fitness and enjoyed a "strong finish to the season". He made his first-team debut for AFC Wimbledon on 30 August 2022, when he came on as an 82nd-minute substitute for Paul Osew in a 2–1 defeat to Aston Villa U21. On 11 September, he joined National League South club Dartford on a one month loan. He played 34 minutes for the club in a 3–1 win at Dover Athletic two days later. He was sent off in an FA Cup second qualifying round defeat to Beckenham Town. He scored his first goal for AFC Wimbledon on 20 September, in a 3–2 home win over Crawley Town in the EFL Trophy; he started the game at right wing-back and played his first full ninety minutes for the club. He made his EFL League Two debut on 22 October, in a 2–1 win at Rochdale. He had been called into the matchday squad by manager Johnnie Jackson following an injury to George Marsh. 
Following a month-long loan spell with Potters Bar Town, Frimpong joined Metropolitan Police on loan until the end of the season in February 2023. Style of play The AFC Wimbledon website describes Frimpong as a "tough-tackling midfielder". Career statistics References 2002 births Living people Footballers from Cardiff Welsh footballers Association football midfielders AFC Wimbledon players Leatherhead F.C. players Carshalton Athletic F.C. players Dartford F.C. players Potters Bar Town F.C. players Metropolitan Police F.C. players Isthmian League players National League (English football) players English Football League players Welsh people of Ghanaian descent Black British sportspeople
What Not To Miss At The Edinburgh Fringe 2019 By Eve MacDonald - July 4, 2019 The Edinburgh Fringe Festival returns to Scotland's capital city for another year from the 2nd – 26th August. With show-stopping shows from some of the world's best performers in comedy, music, theatre and dance – you do not want to miss out! We've rounded up a selection of the best things to see and do at the Fringe this year so you can start planning your calendar now… Bible John This one is for you, true crime fanatics! An all-female cast go back in time to 1969 to explore the story of three women murdered at the Barrowlands Ballroom in Glasgow by a serial killer, nicknamed Bible John. This drama will take you on a journey through the murders in 1969 before fast-forwarding almost 50 years to present day where four women, obsessed with true crime, are desperate to solve one of Scotland's darkest cases. Where: Pleasance Courtyard, Pleasance Above When: 31st July, August 1-12, 14-26. 3:50pm Tickets priced from £11, available here. Edinburgh Gin Afternoon Tea For £25, take a break from the excitement of the Edinburgh Fringe and spend up to two hours enjoying savoury bites and homemade scones with views of Edinburgh's Princes Street Gardens. As an Edinburgh Gin Afternoon Tea guest, you are given priority seating on the sun terrace of Victor & Carina Contini whilst you wash down your delicious treats with an Edinburgh Gin and Tonic, or with the unlimited freshly brewed Scottish tea. Where: The Scottish Café & Restaurant, Victor & Carina Contini When: 2nd – 25th August, 3pm Tickets £25, available here. Museum Late: Fringe Fridays Have you ever wondered what a night at the museum is like? Here is your chance to explore the National Museum of Scotland in Edinburgh after hours!
Across three different late night parties at the museum, guests will enjoy three hours of exciting and unique entertainment from music, dance, comedy and theatre performances – alongside bars to grab yourself a drink and free entry to the new exhibition Wild and Majestic, what more could you want during the Fringe? Where: National Museum of Scotland When: August 9th, 16th and 23rd, 7:30pm The Dolly Parton Story The Dolly Parton Story returns to the Fringe after receiving standing ovations from audience members in previous years to tell the story of the Queen of Country. This story-telling musical is a must-see featuring renditions of the singer's most iconic songs including Jolene, 9 to 5 and Coat of Many Colours. Where: theSpace @ Symposium Hall, Amphitheatre When: 2nd-25th August, 12:30pm Tickets priced from £12.50, available here. Thrones! The Musical Parody The Game of Thrones musical parody at the Fringe has been a complete sell-out since it began in 2015 and with the end of the HBO series, this show is likely to sell out again. Updated with new material from the most recent and final season, Game of Thrones fans can enjoy over an hour of a hilarious take on the beloved drama series. Where: Assembly George Square, Gordon Aikman Theatre When: 31st July – 25th August, 10:30pm Baby Wants Candy: The Completely Improvised Full Band Musical Baby Wants Candy, an Edinburgh Fringe sell-out from 2015-2018, is back for another year. As the title hints, this musical is completely unscripted with the comedy ensemble taking audience suggestions to create brand new, hilarious shows every night. Some fan favourite titles include Kanye West Side Story, Nicola Sturgeon; Hypnotis Comes to Life and Tinder in the Animal Kingdom. This unpredictable and entertaining show is one that you should definitely add to your Fringe Calendar! 
Where: Assembly George Square Studios, One When: 31st July – 25th August, 8pm-9pm Ben Hart: The Nutshell Following his appearance on Britain's Got Talent 2019, the multi-award winning magician and West End star, Ben Hart returns to the Fringe by popular demand. With rave reviews from previous years and shows at this year's Fringe already selling out, this show is not one to miss with endearing and clever magic tricks that go beyond your imagination. Where: Gilded Balloon at the Museum, Auditorium When: August 18th, 24th-25th, 9pm Edinburgh's Greatest Hits – The Story of the Capital's Music For all the music lovers out there, this unique take on a walking tour is definitely one for you! Walking through the streets of Edinburgh, you will explore the stories of the city's musical history from those who have stayed, played and made music here. This is the perfect activity during the Fringe for those looking to explore Scotland's capital city whilst immersing themselves in the arts – not to mention that the tour finishes in one of Edinburgh's most-loved folk bars! Where: Outside Edinburgh's Festival Theatre (Meeting Point) When: August 2nd-3rd, 7th, 9th-10th, 14th, 16th-17th, 21st, 23rd-24th, 11am The Guilty Feminist: Live Podcast The Guilty Feminist, a podcast loved by females with over 60 million downloads since it launched in 2016, is returning to the Fringe this year for another live recording. Over three days, the comedian and podcast host, Deborah Frances-White, discusses modern feminism and our own personal downfalls as 'feminists' that undermine the typical ideals of being a feminist. Where: Pleasance Courtyard, The Grand When: 2nd-4th August, 4pm Circa: Humans Ten acrobats prove how capable humans can be and show us how our bodies, connections and aspirations all form who we are through a breath-taking performance with thrilling stunts. With tickets at £20, this five-star performance from the Australian circus ensemble is one not to miss!
Where: Underbelly's Circus hub on the Meadows – The Lafayette When: August 2nd-6th, 8th-11th, 13th-18th, 20th-24th, 7pm By Erin Gaffney
Adjacent to The Executive Park at Faber Place & Chamber of Commerce. Neighborhood Amenities Located within Charleston's largest office submarket. Adjacent to The Charleston Metro Chamber of Commerce. 3.5 miles to Centre Pointe amenities and Charleston International Airport. Faber Plaza is centrally located, providing convenient access from I-526 and I-26. Faber Plaza's beautifully landscaped grounds, abundant parking and generous undeveloped "green spaces" set the tone for an exceptional business environment. Exercise paths winding through moss-draped oak trees, manicured yards and ponds provide a relaxing and inspiring environment for employee breaks, walking, and jogging. Faber Plaza's central location allows for a variety of dining options including Bonefish Grill, Chili's, Panera Bread, and Starbucks. Local favorites include EVO Wood Fired Pizza, Home Team BBQ, and Red Orchid. A short drive downtown provides the opportunity for more upscale dining. Neighbors in The Executive Park at Faber Place include Boeing, SunTrust, Comcast, IBM, MetLife, and Mead Westvaco. The Park's location provides convenient access to both national and community banks. Faber Plaza is an opportune location to host out-of-town clients or regional meetings, with Charleston International Airport and area hotels just 5 minutes away. The park is also home to The Charleston Metro Chamber, Charleston Regional Development Alliance, and Charleston County Economic Development. Download our marketing flyer below.
# CEP 1 - CEP Purpose and Guidelines

CEP: 1
Title: CEP Purpose and Guidelines
Last-Modified: 2013-07-03
Author: Anthony Scopatz
Status: Active
Type: Process
Created: 2013-07-03

## What is a CEP?

CEP stands for Cyclus Enhancement Proposal. A CEP is a design document providing information to the Cyclus community, or describing a new feature or process for Cyclus and related projects in its ecosystem. The CEP should provide a concise technical specification of the feature and a rationale for the feature.

We intend CEPs to be the primary mechanisms for proposing major new features, for collecting community input on an issue, and for documenting the design decisions that have gone into the Cyclus ecosystem. The CEP author is responsible for building consensus within the community and documenting dissenting opinions.

Because the CEPs are maintained as text files in a versioned repository, their revision history is the historical record of the feature proposal [1].

## CEP Types

There are three kinds of CEP:

1. A Standards Track CEP describes a new feature or implementation for Cyclus. It may also describe an interoperability standard that will be supported outside of Cyclus core.
2. An Informational CEP describes a Cyclus design issue, or provides general guidelines or information to the Cyclus community, but does not propose a new feature. Informational CEPs do not necessarily represent a Cyclus community consensus or recommendation, so users and implementers are free to ignore Informational CEPs or follow their advice.
3. A Process CEP describes a process surrounding Cyclus, or proposes a change to (or an event in) a process. Process CEPs are like Standards Track CEPs but apply to areas other than the Cyclus code development.
They may propose an implementation, but not to Cyclus's codebase; they often require community consensus; unlike Informational CEPs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in Cyclus development. Any meta-CEP is also considered a Process CEP.

## CEP Workflow

### Cyclus's BDFP

There are several references in this CEP to the "BDFP". This acronym stands for "Benevolent Dictator for the Proposal." In most cases, it is fairly clear who this person is (Paul Wilson or Anthony Scopatz). It is this person's responsibility to consider the entire Cyclus ecosystem when deciding whether or not to accept a proposal. Weighted with this burden, their decision must be adhered to (dictator status), though they will try to do the right thing (benevolent).

### CEP Editors

The CEP editors are individuals responsible for managing the administrative and editorial aspects of the CEP workflow (e.g. assigning CEP numbers and changing their status). See CEP Editor Responsibilities & Workflow for details. The current editors are:

- Paul Wilson
- Anthony Scopatz
- Katy Huff

CEP editorship is by invitation of the current editors.

### Submitting a CEP

The CEP process begins with a new idea for Cyclus. It is highly recommended that a single CEP contain a single key proposal or new idea. Small enhancements or patches often don't need a CEP and can be injected into the Cyclus development workflow with a patch submission to the Cyclus issue tracker. The more focused the CEP, the more successful it tends to be. The CEP editors reserve the right to reject CEP proposals if they appear too unfocused or too broad.
If in doubt, split your CEP into several well-focused ones.

Each CEP must have a champion – someone who writes the CEP using the style and format described below, shepherds the discussions in the appropriate forums, and attempts to build community consensus around the idea. The CEP champion (a.k.a. Author) should first attempt to ascertain whether the idea is CEP-able. Posting to the cyclus-dev mailing list is the best way to go about this.

Vetting an idea publicly before going as far as writing a CEP is meant to save the potential author time. Many ideas have been brought forward for changing Cyclus that have been rejected for various reasons. Asking the Cyclus community first if an idea is original helps prevent too much time being spent on something that is guaranteed to be rejected based on prior discussions (searching the internet does not always do the trick). It also helps to make sure the idea is applicable to the entire community and not just the author. Just because an idea sounds good to the author does not mean it will work for most people in most areas where Cyclus is used.

Once the champion has asked the Cyclus community as to whether an idea has any chance of acceptance, a draft CEP should be presented to the mailing list. This gives the author a chance to flesh out the draft CEP to make it properly formatted, of high quality, and to address initial concerns about the proposal.

Following a discussion on the mailing list, the proposal should be sent as a draft CEP to one of the CEP editors. The draft must be written in CEP style as described below, else it will be sent back without further regard until proper formatting rules are followed (although minor errors will be corrected by the editors).

If the CEP editors approve, they will assign the CEP a number, label it as Standards Track, Informational, or Process, give it status "Draft", and create and check in the initial draft of the CEP.
The CEP editors will not unreasonably deny a CEP. Reasons for denying CEP status include duplication of effort, being technically unsound, not providing proper motivation or addressing backwards compatibility, or not being in keeping with the Cyclus philosophy. The BDFP can be consulted during the approval phase, and is the final arbiter of the draft's CEP-ability.

Developers with git push privileges for the CEP repository may claim CEP numbers directly by creating and committing a new CEP. When doing so, the developer must handle the tasks that would normally be taken care of by the CEP editors (see CEP Editor Responsibilities & Workflow). This includes ensuring the initial version meets the expected standards for submitting a CEP. Alternately, even developers may choose to submit CEPs through the CEP editors. When doing so, let the CEP editors know you have git push privileges and they can guide you through the process of updating the CEP repository directly.

As updates are necessary, the CEP author can check in new versions if they (or a collaborating developer) have git push privileges, or else they can email new CEP versions to the CEP editors for publication.

After a CEP number has been assigned, a draft CEP may be discussed further on the mailing list (getting a CEP number assigned early can be useful for ease of reference, especially when multiple draft CEPs are being considered at the same time).

Standards Track CEPs consist of two parts, a design document and a reference implementation. It is generally recommended that at least a prototype implementation be co-developed with the CEP, as ideas that sound good in principle sometimes turn out to be impractical when subjected to the test of implementation.

CEP authors are responsible for collecting community feedback on a CEP before submitting it for review.
CEP authors should use their discretion here.

### CEP Review & Resolution

Once the authors have completed a CEP, they may request a review for style and consistency from the CEP editors. However, the content and final acceptance of the CEP must be requested of the BDFP, usually via an email to the development mailing list. CEPs are reviewed by the BDFP and their chosen consultants, who may accept or reject a CEP or send it back to the author(s) for revision. For a CEP that is predetermined to be acceptable (e.g., it is an obvious win as-is and/or its implementation has already been checked in) the BDFP may also initiate a CEP review, first notifying the CEP author(s) and giving them a chance to make revisions.

The final authority for CEP approval is the BDFP. However, whenever a new CEP is put forward, any core developer who believes they are suitably experienced to make the final decision on that CEP may offer to serve as the BDFP’s delegate (or “CEP czar”) for that CEP. If their self-nomination is accepted by the other core developers and the BDFP, then they will have the authority to approve (or reject) that CEP. This process happens most frequently with CEPs where the BDFP has granted in-principle approval for something to be done, but there are details that need to be worked out before the CEP can be accepted.

If the final decision on a CEP is to be made by a delegate rather than directly by the normal BDFP, this will be recorded by including the “BDFP” header in the CEP.

For a CEP to be accepted it must meet certain minimum criteria. It must be a clear and complete description of the proposed enhancement. The enhancement must represent a net improvement. The proposed implementation, if applicable, must be solid and must not complicate the infrastructure unduly.
Finally, a proposed enhancement must follow Cyclus best practices in order to be accepted by the BDFP.

Once a CEP has been accepted, the reference implementation must be completed. When the reference implementation is complete and incorporated into the main source code repository, the status will be changed to “Final”.

A CEP can also be assigned status “Deferred”. The CEP author or an editor can assign the CEP this status when no progress is being made on the CEP. Once a CEP is deferred, a CEP editor can re-assign it to draft status.

A CEP can also be “Rejected”. Perhaps after all is said and done it was not a good idea. It is still important to have a record of this fact. The “Withdrawn” status is similar - it means that the CEP author themselves has decided that the CEP is actually a bad idea, or has accepted that a competing proposal is a better alternative.

When a CEP is Accepted, Rejected or Withdrawn, the CEP should be updated accordingly. In addition to updating the status field, at the very least the Resolution header should be added with a link to the relevant post in the cyclus-dev mailing list archives.

CEPs can also be superseded by a different CEP, rendering the original obsolete. This is intended for Informational CEPs, where version 2 of an API can replace version 1.

The possible paths of the status of CEPs are as follows:

Some Informational and Process CEPs may also have a status of “Active” if they are never meant to be completed, e.g. CEP 1 (this CEP).

Lazy Consensus: After 1 month of no objections to the wording of a CEP, it may be marked as “Accepted” by lazy consensus. The author, BDFP, and the Cyclus community manager are jointly responsible for sending out weekly reminders of an unapproved CEP without active discussion.

### CEP Maintenance

In general, Standards Track CEPs are no longer modified after they have reached the Final state.
Once a CEP has been completed, the Language and Standard Library References become the formal documentation of the expected behavior.

Informational and Process CEPs may be updated over time to reflect changes to development practices and other details. The precise process followed in these cases will depend on the nature and purpose of the CEP being updated.

## What belongs in a successful CEP?

Each CEP should have the following parts:

1. Preamble – headers containing meta-data about the CEP, including the CEP number, a short descriptive title, the names, and optionally the contact info for each author, etc.

2. Abstract – a short (~200 word) description of the technical issue being addressed.

3. Copyright/public domain – Each CEP must either be explicitly labeled as placed in the public domain (see this CEP as an example) or licensed under the Open Publication License.

4. Specification – The technical specification should describe the syntax and semantics of any new feature.

5. Motivation – The motivation is critical for CEPs that want to change the Cyclus ecosystem. It should clearly explain why the existing language specification is inadequate to address the problem that the CEP solves. CEP submissions without sufficient motivation may be rejected outright.

6. Rationale – The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g. how the feature is supported in other languages.

The rationale should provide evidence of consensus within the community and discuss important objections or concerns raised during discussion.

7. Backwards Compatibility – All CEPs that introduce major backwards incompatibilities must include a section describing these incompatibilities and their severity.
The CEP must explain how the author proposes to deal with these incompatibilities. CEP submissions without a sufficient backwards compatibility treatise may be rejected outright.

8. Reference Implementation – The reference implementation must be completed before any CEP is given status “Final”, but it need not be completed before the CEP is accepted. While there is merit to the approach of reaching consensus on the specification and rationale before writing code, the principle of “rough consensus and running code” is still useful when it comes to resolving many discussions of API details.

The final implementation must include test code and documentation appropriate for Cyclus.

## CEP Header Preamble

Each CEP must begin with a header preamble. The headers must appear in the following order. Headers marked with “*” are optional and are described below. All other headers are required.

    CEP: <cep number>
    Title: <cep title>
    Version: <version string>
    Last-Modified: <date string>
    Author: <list of authors' real names and optionally, email addrs>
    * BDFP: <CEP czar's real name>
    Status: <Draft | Active | Accepted | Deferred | Rejected |
             Withdrawn | Final | Superseded>
    Type: <Standards Track | Informational | Process>
    * Requires: <cep numbers>
    Created: <date created on, in yyyy-mm-dd format>
    * Cyclus-Version: <version number>
    * Replaces: <cep number>
    * Superseded-By: <cep number>
    * Resolution: <url>

The Author header lists the names, and optionally the email addresses, of all the authors/owners of the CEP. The format of the Author header value must be

    Random J. User <address@dom.ain>

if the email address is included, and just

    Random J. User

if the address is not given.

The BDFP field is used to record cases where the final decision to approve or reject a CEP rests with someone other than the normal BDFP.

The Type header specifies the type of CEP: Standards Track, Informational, or Process.

The Created header records the date that the CEP was assigned a number, while Post-History is used to record the dates when new versions of the CEP are posted to the Cyclus mailing list. Both headers should be in yyyy-mm-dd format, e.g. 2001-08-14.

Standards Track CEPs will typically have a Cyclus-Version header which indicates the version of Cyclus that the feature will be released with. Standards Track CEPs without a Cyclus-Version header indicate interoperability standards that will initially be supported through external libraries and tools, and then supplemented by a later CEP to add support to the standard library. Informational and Process CEPs do not need a Cyclus-Version header.

CEPs may have a Requires header, indicating the CEP numbers that this CEP depends on.

CEPs may also have a Superseded-By header indicating that a CEP has been rendered obsolete by a later document; the value is the number of the CEP that replaces the current document. The newer CEP must have a Replaces header containing the number of the CEP that it rendered obsolete.

## Auxiliary Files

CEPs may include auxiliary files such as diagrams. Such files must be named cep-XXXX-Y.ext, where “XXXX” is the CEP number, “Y” is a serial number (starting at 1), and “ext” is replaced by the actual file extension (e.g. “png”).

## Reporting CEP Bugs, or Submitting CEP Updates

How you report a bug or submit a CEP update depends on several factors, such as the maturity of the CEP, the preferences of the CEP author, and the nature of your comments. For the early draft stages of the CEP, it’s probably best to send your comments and changes directly to the CEP author.
For more mature or finished CEPs you may want to submit corrections to the Cyclus issue tracker so that your changes don’t get lost. If the CEP author is a Cyclus developer, assign the bug/patch to them; otherwise assign it to a CEP editor.

When in doubt about where to send your changes, please check first with the CEP author and/or a CEP editor.

CEP authors with git push privileges for the CEP repository can update the CEPs themselves by using “git push” to submit their changes.

## Transferring CEP Ownership

It occasionally becomes necessary to transfer ownership of CEPs to a new champion. In general, it is preferable to retain the original author as a co-author of the transferred CEP, but that’s really up to the original author. A good reason to transfer ownership is that the original author no longer has the time or interest in updating it or following through with the CEP process, or has fallen off the face of the earth (i.e. is unreachable or not responding to email). A bad reason to transfer ownership is that the author doesn’t agree with the direction of the CEP. One aim of the CEP process is to try to build consensus around a CEP, but if that’s not possible, an author can always submit a competing CEP.

If you are interested in assuming ownership of a CEP, send a message asking to take over, addressed to both the original author and the Cyclus mailing list. If the original author doesn’t respond to email in a timely manner, the CEP editors will make a unilateral decision (it’s not like such decisions can’t be reversed :).

## CEP Editor Responsibilities & Workflow

A CEP editor must subscribe to the Cyclus development mailing list. For each new CEP that comes in, an editor does the following:

- Read the CEP to check if it is ready: sound and complete.
  The ideas must make technical sense, even if they don’t seem likely to be accepted.
- The title should accurately describe the content.
- Edit the CEP for language (spelling, grammar, sentence structure, etc.).

If the CEP isn’t ready, an editor will send it back to the author for revision, with specific instructions.

Once the CEP is ready for the repository, a CEP editor will:

- Assign a CEP number (almost always just the next available number, but sometimes it’s a special/joke number, like 666 or 3141).
- Add the CEP to the CEP repository.
- Commit and push the new (or updated) CEP.
- Monitor cyclus.github.com to make sure the CEP gets added to the site properly.
- Send email back to the CEP author with next steps (post to the Cyclus development mailing list).

Many CEPs are written and maintained by developers with write access to the Cyclus codebase. The CEP editors monitor the various repositories for CEP changes, and correct any structure, grammar, spelling, or markup mistakes they see.

CEP editors don’t pass judgment on CEPs.
They merely do the administrative & editorial part (which is generally a low-volume task).

## Document History

This document was forked and modified from the Python Enhancement Proposals.

This document is released under the CC-BY 3.0 license.

## References and Footnotes

[1] This historical record is available by the normal git commands for retrieving older revisions, and can also be browsed via HTTP here: https://github.com/cyclus/cyclus.github.com/tree/source/source/cep
Q: Question about adding a node to a binary search tree

I'm coding a binary search tree with all its methods and I've run across the add method, but there's one line that's messing with me. Here's what I have for the method so far:

    // 6) Methods: add
    public boolean add(int newData) {
        if (!treeContains(newData))
            return false;
        else {
            add(root, newData);
            nodeCount++;
            return true;
        }
    }

    public Node add(Node node, int newData) {
        if (node == null) {
            node = new Node(newData, null, null); // QUESTION
        }
        if (newData > node.data) {
            add(node.rightChild, newData);
        } else { // else if newData is less or equal to node data
            add(node.leftChild, newData);
        }
        return node;
    }

Over where I wrote "// QUESTION", I understand that if we get there, we're basically already standing in the node.leftChild or node.rightChild of some node, so when we create a node (in // QUESTION) does it just automatically pop up there? It's just a bit confusing because it feels like I should be specifying that the new node goes there, using something like:

    node.leftChild = node; // (or node.rightChild)

If anyone has a good perspective on this I would really appreciate it. Thanks!

A: The node that you are passing to the add method IMHO should never be null. You should handle the null case before this method is called. I guess that you need something like this:

    public Node add(Node node, int newData) {
        if (newData == node.data) {
            // ??? Not sure what you need here, but it's there already.
            return node;
        }
        if (newData > node.data) {
            if (node.rightChild == null) {
                node.rightChild = new Node(newData, null, null);
                return node.rightChild;
            } else {
                return add(node.rightChild, newData);
            }
        } else {
            if (node.leftChild == null) {
                node.leftChild = new Node(newData, null, null);
                return node.leftChild;
            } else {
                return add(node.leftChild, newData);
            }
        }
    }

Btw. not sure if this line is correct:

    if (!treeContains(newData)) return false;

Shouldn't the add() method return false if the node is already present?
And add it if it's not present? Are you sure that you need the negation in that if?
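The confusion in the question comes down to Java's pass-by-value semantics for references: assigning `node = new Node(...)` inside the callee never updates the caller's `leftChild`/`rightChild` field, so the freshly created node is silently dropped. Besides the answer's approach (create the child in the parent's frame), a common alternative is to return the subtree root from each recursive call and assign it back on the way up. The sketch below is a minimal stand-alone illustration of that pattern, not the poster's actual class; the names `Bst`, `insert`, and `contains` are invented for this example.

```java
// Minimal BST sketch: re-link on return so the new node is actually attached.
public class Bst {
    static class Node {
        int data;
        Node leftChild, rightChild;
        Node(int data, Node left, Node right) {
            this.data = data;
            this.leftChild = left;
            this.rightChild = right;
        }
    }

    Node root;

    // Each call returns the (possibly new) root of the subtree it was given;
    // the caller stores that result back into the correct child field.
    Node insert(Node node, int newData) {
        if (node == null) {
            return new Node(newData, null, null); // attached by the caller's assignment
        }
        if (newData > node.data) {
            node.rightChild = insert(node.rightChild, newData);
        } else if (newData < node.data) {
            node.leftChild = insert(node.leftChild, newData);
        }
        return node; // unchanged subtree root; duplicates are ignored
    }

    void add(int newData) {
        root = insert(root, newData);
    }

    boolean contains(int key) {
        Node cur = root;
        while (cur != null) {
            if (key == cur.data) return true;
            cur = key > cur.data ? cur.rightChild : cur.leftChild;
        }
        return false;
    }
}
```

Either style works; the re-link-on-return version keeps the null check in one place, which is what the `// QUESTION` line was reaching for, at the cost of one redundant assignment per level.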
PYTHONDONTWRITEBYTECODE=" " \ python -m bottle --debug --reload mikan:app