Q: Sed script for find/replace inside .jsp files (from Struts to JSTL EL syntax)

I want a sed script that I can use for 1) finding instances, and 2) printing this string:

    <bean:write name='iframesrcUrl'/>
    <bean:write name="iframesrcUrl"/>
    <bean:write name="currentPage" property="title" filter="false"/>

or similar. The name and property values can differ; the property and filter attributes are optional. Both single quotes ' and double quotes " occur.

The sed command must be two-headed: first I want to run one command to see what it finds, then I want to run the next command to make the actual replacements. The strings should be replaced with:

    ${ iframesrcUrl }
    ${ currentPage.title }

A quick grep shows there are 68 occurrences in my project:

    grep '<bean:write name=' **/* | wc -l

What would be the easiest way to solve this?

A: Having learned from the other answers that partially covered my question, I ended up with the following. (I bet it can be made shorter, but it works.)

    # My try to find every occurrence of constructions like
    # <bean:write name='iframesrcUrl'/>
    # <bean:write name="iframesrcUrl"/>
    # <bean:write name="currentPage" property="title" filter="false"/>
    #
    # or similar. name and property values can differ; property and filter attributes are optional.
    # Both single quotes ' and double quotes " occur.
    #
    # cd jahia_virk/tomcat/webapps/ROOT/jsp/jahia/templates/virk/virk.dk

    # Printing occurrences:
    # =====================
    sed -nE \
        -e '/<bean:write name="([[:alpha:]]+)"( property="([[:alpha:]]+)")( filter="false")?\/>/p' \
        -e "/<bean:write name='([[:alpha:]]+)'( property='([[:alpha:]]+)')( filter='false')?\/>/p" \
        -e '/<bean:write name="([[:alpha:]]+)"\/>/p' \
        -e "/<bean:write name='([[:alpha:]]+)'\/>/p" \
        *.jsp **/*.jsp **/*.inc

    # Replacing occurrences:
    # ======================
    sed -E -i .bak \
        -e 's/<bean:write name="([[:alpha:]]+)"( property="([[:alpha:]]+)")( filter="false")?\/>/${ \1.\3 }/g' \
        -e "s/<bean:write name='([[:alpha:]]+)'( property='([[:alpha:]]+)')( filter='false')?\/>/\${ \1.\3 }/g" \
        -e 's/<bean:write name="([[:alpha:]]+)"\/>/${ \1 }/g' \
        -e "s/<bean:write name='([[:alpha:]]+)'\/>/\${ \1 }/g" \
        *.jsp **/*.jsp **/*.inc

A few lessons learned:

* $ is special on the command line, so I had to escape the $ sign in lines where the sed expression is within double quotes.
* \w did not work for matching any word character, so I had to substitute [[:alpha:]].
* Substituting in any file in any directory (*/* **/*) is a no-go because of hidden system files, binary files like images, etc. I had to focus on only the .jsp and .inc files in my project: *.jsp **/*.jsp **/*.inc

One more word of caution: I did this on a project to move it away from old-school Struts style. If you are in a similar situation, be careful to review all edits manually afterwards.

Script shortcomings: for various reasons, the following examples were not found by the script above:

    <bean:write name='scriptEditor-Url'/>
    <bean:write name='currentSite' property='homePage.url'/>
    <bean:write name="portlet" property="value" filter="false" />
    <bean:write name='<%= "optTextUrl" + id %>'/>

#1 failed because [[:alpha:]] does not match - (and there are also some with underscores). #2 is the same: [[:alpha:]] does not match a dot. #3 has a space before the closing />, which the patterns do not allow. #4 concatenates strings inside the name parameter.
I could write a script to find them, but there are only four occurrences in the project. The big question is what they should be replaced with. I suspect inline Java does not work, and I suspect I cannot just write ${ 'optTextUrl' + id }.

A: I'm going off of your grep regex here.

1) Print what it finds:

    sed '/<bean:write name=/!d'

2) Replace what it finds:

    sed '/<bean:write name=/s/^.*$/${ iframesrcUrl }\n${ currentPage.title }/'

Looking further at your question, I see you appear to have Bash 4 with globstar on (due to the **/* glob). If you want these sed scripts to run on each file recursively, I would suggest:

    #!/bin/bash
    for file in **/*; do
        <sed one-liner here> "$file"
    done

For the replacement sed script, just add -i to do an in-place edit. Note that this requires GNU sed to work. If you don't have GNU sed, you will have to redirect the output to a temp file.

A: It's not exactly clear what your output should be; this is just a guess until you provide clearer sample input and output:

    awk '/bean:write name/{ $0="${ iframesrcUrl }\n${ currentPage.title }" }{print}' file

A: Assuming that you have a file like

    <root>
    <bean:write name='iframesrcUrl'/>
    <bean:write name="iframesrcUrl"/>
    <bean:write name="currentPage" property="title" filter="false"/>
    <foo><bar/></foo>
    </root>

you can do replacements with this sed command (using GNU sed):

    sed "s/<bean:write name=[\'\"]\?iframesrcUrl[\'\"]\?\/>/\${ iframesrcUrl }/g; \
         s/<bean:write name=[\'\"]\?currentPage[\'\"]\?.*\/>/\${ currentPage.title }/g;" \
        input.xml

which produces:

    <root>
    ${ iframesrcUrl }
    ${ iframesrcUrl }
    ${ currentPage.title }
    <foo><bar/></foo>
    </root>

Is this what you need? Or do you want to replace attribute values? Or do you want to put your substitution text into these tags?

To find and edit all files in place (attention: this changes your files, so please test without -i before use, and put your file mask in place of '*.jsp'):

    find . -type f -name '*.jsp' -print0 | xargs -0 sed -i "..."
UPDATE

To replace attribute values, rather than whole lines of the file, I would strongly recommend using xmlstarlet instead of sed/awk. It is much more reliable and flexible. I cannot post a solution exactly for your case, because xmlstarlet needs a complete (valid) file to process, but this is the idea. Given a file:

    <a>
    <b>
    <c name="foo"/>
    <c name="bar"/>
    </b>
    </a>

Let's say we want to replace foo with SPAM and bar with EGGS. Then this command will do it (lines split for readability):

    $ printf '<a><b><c name="foo"/><c name="bar"/></b></a>' | \
      xmlstarlet ed --update "//c[@name='foo']/@name" -v SPAM \
                    --update "//c[@name='bar']/@name" -v EGGS
    <?xml version="1.0"?>
    <a>
      <b>
        <c name="SPAM"/>
        <c name="EGGS"/>
      </b>
    </a>

I used XPath syntax to select the element to replace (in the first case it is the name attribute that belongs to any c tag and is equal to foo). The ed subcommand of xmlstarlet allows various transformations; replacing (updating) an element is just one of them. In real-life examples you will also need to specify the bean namespace, i.e. add something like -N bean=urn:... to the list of xmlstarlet's options. You can find the correct URI in the first lines of your .jsp file (I don't have one to look at).
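The "script shortcomings" listed in the accepted answer (hyphens, dots/underscores in names, and a space before the closing tag) can be covered by widening the bracket expression. Below is a minimal sketch, not the answer's exact script, assuming GNU sed and a throwaway demo.jsp rather than the real project files:

```shell
# Widening [[:alpha:]] to [[:alnum:]_.-] also matches hyphens, underscores,
# and dots (shortcomings #1 and #2), and " ?" tolerates a space before "/>"
# (shortcoming #3). The \$ escapes are needed because the expressions sit in
# double quotes (one of the "lessons learned"). Note the ['\"] class is a
# simplification: it would also accept mismatched quote pairs.
printf "<bean:write name='scriptEditor-Url'/>\n" >  demo.jsp
printf "<bean:write name='currentSite' property='homePage.url'/>\n" >> demo.jsp

sed -E \
  -e "s/<bean:write name=['\"]([[:alnum:]_.-]+)['\"] property=['\"]([[:alnum:]_.-]+)['\"]( filter=['\"]false['\"])? ?\/>/\${ \1.\2 }/g" \
  -e "s/<bean:write name=['\"]([[:alnum:]_.-]+)['\"] ?\/>/\${ \1 }/g" \
  demo.jsp
# prints:
# ${ scriptEditor-Url }
# ${ currentSite.homePage.url }
```

The JSP-expression case (#4, a <%= ... %> scriptlet inside the attribute) is genuinely beyond a character-class tweak, so those occurrences still need manual review, as the answer suggests.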
DVEC Event Cap - Purple – with mesh back for high breathability.
DVEC Event Cap - Black – with mesh back for high breathability.
DVEC Event Cap - Red – with mesh back for high breathability.
Micromesh head sock designed to protect your neck and face from the harsh sun, wind and cold (cap not included).
DVEC Trucker Cap – 100% cotton, velcro adjustment strap, clip-on safety strap.
Keep your head warm this winter with Daiwa's new wool blend beanie.
A schoolteacher at the start of his working life, he was a star of the celebrated television programme Top of the Pops in the mid-1970s. One of the songs for which he is best known is his cover of the famous standard Streets of London. He is particularly popular on the German music market, a country where he has often performed during his many tours.

Biography

Whittaker is an artist known throughout the world and has sold over 55 million records. His music falls within the so-called easy listening genre. He is known for the characteristic timbre of his baritone voice and for his particular skill at making music by whistling. His parents, Edward and Viola, came from Staffordshire, England, where they ran a grocery store. After Roger's father suffered a motorcycle accident that left him in need of care and a mild climate, the family moved to a farm in Kenya. Roger's grandfather, also a singer, performed in various clubs, while the future singer's father played the violin. It was during this period that the boy began to learn the acoustic guitar. In 1954 Whittaker was conscripted into the Kenya Regiment, where he remained for two years. In 1956, on being discharged, he decided to study medicine and enrolled at the University of Cape Town, South Africa. He abandoned his studies a year and a half later, however, to become a teacher in his country's civil service.

Career

Intending to return to university, Whittaker moved to Great Britain in September 1959. Over the following three years he studied zoology, biochemistry and marine biology at the University of Wales, Bangor, earning a B.A. in science. At the same time he continued to sing in local clubs, and recorded some songs for the acetate records (flexi-discs) distributed with the university paper published by the students on campus.

He subsequently moved to Fontana Records, the label that in 1962 released his first professional single: The Charge of The Light Brigade. In the same summer of 1962, Whittaker took part in an international music competition at Portrush, Northern Ireland. Success came quickly, with a contract from Ulster Television for a show entitled This And That. His second single, and his first to climb into the UK Top 30 (NME), was a cover of Jimmy Dean's Steel Men, also released in June 1962.

Success

In 1968 Whittaker's career received a boost with a change of label, which led him to record Durham Town (The Leavin') for EMI Records in the autumn of 1969; it was the singer's first single to reach the British Top 20. The following spring, RCA Victor released New World In The Morning on the US market (here too the song entered the Top 20 of the easy listening charts). In 1975 Whittaker recorded his first globally successful single, The Last Farewell, which would go on to sell a total of eleven million copies worldwide. A foray into country music led him to record I Love You Because, which entered that genre's charts in late 1983. In 1979, Whittaker had written two songs for the Eurovision Song Contest: Call My Name, which won the final of the United Kingdom selections, and A Song For Europe, later sung by Eleanor Keenan, which placed third. Whittaker himself then recorded the song, which entered the singles charts of several European countries.

Private life

In the spring of 1964 Roger became engaged to Natalie O'Brien, and the two married on 15 August of the same year. Their marriage produced five children: Emily (born 28 May 1968), Lauren (4 June 1970), Jessica (14 February 1973), Guy (15 November 1974) and Alexander (7 April 1978). The couple have nine grandchildren.

Discography

Singles

Albums released in Europe

A partial list of the albums released in Europe by Roger Whittaker:

1971 New World in the Morning
1976 A little goodbye
1977 The best of. 3
1979 Mein deutsches Album (in German)
1981 Zum Weinen ist immer noch Zeit
1982 Typisch Roger Whittaker
1983 Voyager
1984 Ein Glück, daß es Dich gibt
1987 Heut bin ich arm - Heut bin ich reich
1991 Frohe Weinacht - Die schonsten Weihnachtslieder
1991 Mein Herz schlägt nur für Dich
1992 Stimme des Herzens
1993 Stille Nacht, heilige Nacht
1994 Leben mit Dir
1994 Sehnsucht nach Liebe
1994 Geschenk des Himmels
1995 Ein schöner Tag mit Dir
1996 Alles Roger!
1996 Einfach leben
1997 Zurück zur Liebe
1999 Alles Roger 2
1999 Awakening
2000 Wunderbar geborgen
2002 Mehr denn je
2003 Alles Roger 3
2003 Der weihnachtliche Liedermarkt
2004 Live in Berlin
2004 Mein schönster Traum
2005 Moments in My Life
2007 the Danish collection

Albums released in the USA
Neon Static is the eighth studio album by La Mafia, released on June 29, 1985. It peaked at number fourteen on the Billboard Regional Mexican chart.

Track listing

References
Q: Is it OK to pass a reference to a pointer as a function argument?

I had the understanding that in C++, & and * cancel each other out, i.e. *&p is essentially equal to p, as it is the value at the address of p. Given the above, is it valid to pass a reference to a pointer? Say I am trying to pass a reference to a pointer as a function argument, as below:

    void func(int* &p)

Won't the above result in the * cancelling the &, leaving just int p? How correct is it if I try to pass a reference to a pointer to a class object on similar terms?

    #include <iostream>
    using namespace std;

    int global_var = 42;

    // function to change the pointer through a reference
    void changeReferenceValue(int*& pp)
    {
        pp = &global_var;
    }

    int main()
    {
        int var = 23;
        int* ptr_to_var = &var;
        cout << "Passing a reference to a pointer to a function" << endl;
        cout << "Before :" << *ptr_to_var << endl;  // displays 23
        changeReferenceValue(ptr_to_var);
        cout << "After :" << *ptr_to_var << endl;   // displays 42
        return 0;
    }

A: You are correct that the & address-of operator and the * indirection operator cancel each other out when used inside an expression. However, when used inside a declaration, these operators have a very different meaning. Inside a declaration, * means "pointer" and & means "reference". Therefore, when used inside a declaration, they do not cancel each other out. An object of type int*& is simply a reference to a pointer to an int.
This digital edition first published in 2016

LOVE FOOD is an imprint of Parragon Books Ltd

Parragon
Queen Street House
4 Queen Street
Bath BA1 1HE, UK

Copyright © Parragon Books Ltd 2007

LOVE FOOD and the accompanying heart device is a registered trademark of Parragon Books Ltd in Australia, the UK, and the EU.

Photography by Gunter Beer
Home Economist Stevan Paul
Design by Talking Design
Introduction and additional recipes written by Beverly Le Blanc

ISBN: 978-1-4748-4013-2

10 9 8 7 6 5 4 3 2 1

Copyright © 2016 SIL International with Reserved Font Name Gentium Book Basic. Copyright © 2016 tyPoland Lukasz Dziedzic with Reserved Font Name Lato.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright holder.

This book uses imperial, metric, and US cup measurements. Follow the same units of measurement throughout; do not mix imperial and metric. All spoon measurements are level, unless otherwise stated: teaspoons are assumed to be 5 ml, and tablespoons are assumed to be 15 ml. Unless otherwise stated, milk is assumed to be whole, eggs and individual fruits such as bananas are medium, and pepper is freshly ground black pepper. Recipes using raw or very lightly cooked eggs should be avoided by infants, the elderly, pregnant women, convalescents, and anyone suffering from an illness. Pregnant and breast-feeding women are advised to avoid eating peanuts and peanut products.

# contents

introduction
sunshine--a collection of vegetable salads
hearty--a collection of meat & poultry salads
sparkling--a collection of fish & seafood salads
health-boosting--a collection of energizing salads
index

# The taste of summer all year round

Take a fresh look at salads and you'll see there are a lot of exciting developments in the salad bowl.
Banished forever is the image of salads as dull diet food consisting of little more than limp lettuce and soggy, flavorless tomatoes. Today, salads are one of the ultimate health foods. They can contain an exciting variety of colorful, delicious, and satisfying ingredients that provide the nutrients essential for healthy living.

You'll find plenty of inspiration in this book for a cornucopia of healthy salads for all occasions, from light lunches and family meals to stylish dinner parties. And, remember, salads aren't just for summer. We have plenty of ideas for salads that brighten and lighten winter mealtimes, too. In fact, once you get into the habit of thinking "salad" while menu planning, ideas will also come to you whatever the time of the year as you push your shopping cart along the supermarket aisles or stop at the cheese counter or produce department.

Salads are versatile enough to cater for vegetarians and meat-eaters alike. Meat, seafood, and poultry are ideal salad ingredients, along with the perhaps more commonly used lettuce and other leaves, vegetables, fruit, herbs, nuts, seeds, grains, legumes, and cheese. And with so many ingredients to choose from, salads can be as simple and light or complex and filling as you like. They also have the added bonus that they are versatile enough to fit easily into all your meal plans, from first courses through to desserts.

As you flip through these recipes you'll find classic favorites--Caesar salad, chef's salad, and salad niçoise, to name a few--as well as fresh, new ideas that capture the flavors of cuisines around the world. If, for example, the idea of chicken salad doesn't excite because you've been making the same recipe out of habit for as long as you can remember, try Thai-Style Chicken Salad. You'll never think of chicken salad as humdrum again!
# Bowls of Goodness

It wasn't long ago that when "salads" and "health" were linked it was in the context of weight-reducing diets that were restrictive and ultimately unsatisfying. Today, however, salads are a delicious component of a healthy diet, giving you endless variety at mealtimes without long hours in the kitchen. With fresh produce from all corners of the globe readily available in supermarkets and gourmet food stores, you can enjoy a variety of salads all year round but, remember, salads are at their most flavorsome and nutritious when made with seasonal produce in its prime.

We are all being urged to eat more fruit and vegetables every day, and a salad a day can go a long way to help you meet the minimum target of 2½ cups for adults. Enjoy a bowl of Traditional Greek Salad or Three-Color Salad, for example, and you'll be more than halfway to success. What could be easier--or more enjoyable?

Salads also make great accompaniments, served alongside a filling bowl of pasta or a plate of hot or cold roast meat. So, if you regularly fall back on the old favorite of just tossing a few green leaves with a simple oil-and-vinegar dressing, it's definitely time to think again. It's very easy to mix and match ingredients and the choice has never been greater. And don't make the mistake of thinking all salad ingredients have to be raw, either. Adding small amounts of cooked meat, poultry, and seafood to lettuce leaves and other vegetables gives you a satisfying meal. If you want a meal-in-a-bowl, try Roast Pork with Pumpkin, Smoked Chicken Salad with Avocado & Tarragon Dressing, and Shrimp & Rice Salad, for example. There are plenty of vegetarian main-course salads, too, such as the Middle Eastern favorite Tabbouleh and Buckwheat Noodle Salad with Smoked Tofu. Cooked vegetables also make good salad ingredients.
Grilled peppers, fried eggplants, blanched beans of all varieties, and peas are just some of the cooked vegetables you'll find adding flavor and an extra dimension to the salads in this book.

# Colorful Greens

Even with so many ingredients to choose from, salad greens still provide the backbone of many popular salads. Take a look around your supermarket and you'll see leaves in many colors and textures, ranging from pearly, pale white Belgian endive to bright red and white radicchio. They also have a variety of flavors, from robust and peppery to sweet, nutty, and mild. The greater the variety of leaves you include in your salad, the more interesting it will be, and the more nutrients it will contain.

When you select salad leaves, remember that the darker colored ones, such as spinach leaves, contain more beta-carotene, which helps fight some forms of cancer and other illnesses. Leafy green vegetables are also excellent sources of fiber.

It's become very convenient to grab a bag of mixed salad leaves at the supermarket, but it can be more satisfying to sample a selection of greens sold separately at farmers' markets. Asian and other ethnic food stores are also a good source of unusual greens. Try these new and familiar greens to add variety to your salad bowl:

• Arugula--known for its pronounced peppery flavor, these dark green leaves perk up many salads. Popular in Italian salads. Substitute watercress if you can't find arugula.

• Beet greens--distinctive with their ruby-red stems, these soft leaves are mildly flavored.

• Mache--also labelled as corn salad or lamb's lettuce, these tender leaves have a mild, slightly nutty flavor.

• Mesclun or mesclum--now sold in supermarkets, this French mix of leaves can include arugula, chervil, dandelion, and oak-leaf lettuce. Just add dressing and toss.

• Mizuna--from the Far East, this winter green has a full peppery flavor. Its pointy green leaves add visual interest to salads, too.
• Nasturtium--use both the colorful flowers and peppery leaves in salads.

• Radicchio--there is nothing like the bright red and white leaves of this member of the bitter endive family to liven up autumn and early winter salads. It has a crisp texture and a nutty, peppery flavor.

• Red chard--like beet greens, these fiber-rich leaves have bright red stems, and sometimes the leaves are tinged red as well.

• Romaine lettuce--Caesar salad simply wouldn't be Caesar salad without these leaves. It comes in a large, compact head with long, crisp leaves that have a sweet, nutty flavor.

# Keep It Fresh

Good salads are only made with good ingredients, and freshness is all-important when buying salad greens. Because of the leaves' high water content, they are very perishable, so buy them as close as possible to serving. Not only will they taste best, they contain the most nutrients when they are in peak condition. Let your eyes guide you when shopping for salad greens--fresh leaves look fresh. They won't have any leaves tinged with brown, nor will they be wilted or slimy.

When you get salad ingredients home, give them a rinse in cold water, then spin them dry or use a tea cloth to pat them dry. Never leave them to soak in a sink of cold water, because all the water-soluble vitamins and minerals will leach out. Use leafy ingredients as soon as possible, but most will keep for up to four days in a sealed container in the refrigerator. Once you open bags of prepared leaves, however, they should be used within 24 hours. You can prepare salad greens several hours in advance and store them in the refrigerator, but do not dress them until just before serving, because the acid in most dressings causes the leaves to wilt and become unappetizing.
# sunshine
a collection of vegetable salads

# caesar salad

serves 4

ingredients
⅔ cup olive oil
2 garlic cloves
5 slices white bread, crusts removed, cut into ½-inch/1-cm cubes
1 large egg
2 romaine lettuces or 3 Boston lettuces
2 tbsp lemon juice
salt and pepper
8 canned anchovy fillets, drained and coarsely chopped
¾ cup fresh Parmesan cheese shavings

Bring a small, heavy-bottom pan of water to a boil. Meanwhile, heat 4 tablespoons of the olive oil in a heavy-bottom skillet. Add the garlic and cubed bread and cook, stirring and tossing frequently, for 4-5 minutes, or until the bread is crispy and golden all over. Remove from the skillet with a slotted spoon and drain on paper towels.

Add the egg to the boiling water and cook for 1 minute, then remove from the pan and set aside.

Arrange the salad greens in a salad bowl. Mix the remaining olive oil and lemon juice together, then season to taste with salt and pepper. Crack the egg into the dressing and whisk to blend. Pour the dressing over the salad greens, toss well, then add the croutons and chopped anchovies and toss the salad again. Sprinkle with Parmesan cheese shavings and serve.

# traditional greek salad

serves 4

ingredients
7 oz/200 g Greek feta cheese
½ head of iceberg lettuce or 1 lettuce such as romaine or escarole, shredded or sliced
4 tomatoes, cut into fourths
½ cucumber, sliced
12 Greek black olives, pitted
2 tbsp chopped fresh herbs, such as oregano, flat-leaf parsley, mint, or basil

for the dressing
6 tbsp extra-virgin olive oil
2 tbsp fresh lemon juice
1 garlic clove, crushed
pinch of sugar
salt and pepper

Make the dressing by whisking together the oil, lemon juice, garlic, sugar, salt, and pepper in a small bowl. Set aside.

Cut the feta cheese into cubes about 1 inch/2.5 cm square. Put the lettuce, tomatoes, and cucumber in a salad bowl. Scatter over the cheese and toss together.

Just before serving, whisk the dressing, pour over the salad greens, and toss together.
Scatter over the olives and chopped herbs and serve.

# mozzarella salad with sun-dried tomatoes

serves 4

ingredients
3½ oz/100 g mixed salad greens, such as oak leaf lettuce, baby spinach, and arugula
1 lb 2 oz/500 g smoked mozzarella cheese, sliced

for the dressing
5 oz/140 g sun-dried tomatoes in olive oil (drained weight), reserving the oil from the jar
¼ cup coarsely shredded fresh basil
¼ cup coarsely chopped fresh flat-leaf parsley
1 tbsp capers, rinsed
1 tbsp balsamic vinegar
1 garlic clove, coarsely chopped
extra olive oil, if necessary
pepper

Put the sun-dried tomatoes, basil, parsley, capers, vinegar, and garlic in a food processor or blender. Measure the oil from the sun-dried tomatoes jar and make it up to ⅔ cup with more olive oil if necessary. Add it to the food processor or blender and process until smooth. Season to taste with pepper.

Divide the salad greens among 4 individual serving plates. Top with the slices of mozzarella and spoon the dressing over them. Serve immediately.

# red & green salad

serves 4

ingredients
1 lb 7 oz/650 g cooked beet
3 tbsp extra-virgin olive oil
juice of 1 orange
1 tsp superfine sugar
1 tsp fennel seeds
salt and pepper
4 oz/115 g fresh baby spinach leaves

Using a sharp knife, dice the cooked beet and set aside until required. Heat the olive oil in a small, heavy-bottom pan. Add the orange juice, sugar, and fennel seeds and season to taste with salt and pepper. Stir constantly until the sugar has dissolved. Add the reserved beet to the pan and stir gently to coat. Remove the pan from the heat.

Arrange the baby spinach leaves in a large salad bowl. Spoon the warmed beet on top and serve immediately.
# roasted garlic, sweet potato, broiled eggplant & bell pepper salad with mozzarella

serves 4

ingredients
2 sweet potatoes, peeled and cut into chunks
2 tbsp olive oil
pepper
2 garlic cloves, crushed
1 large eggplant, sliced
2 red bell peppers, seeded and sliced
7 oz/200 g mixed salad greens
2 x 5½ oz/150 g mozzarella cheeses, drained and sliced

for the dressing
1 tbsp balsamic vinegar
1 garlic clove, crushed
3 tbsp olive oil
1 small shallot, finely chopped
2 tbsp chopped mixed fresh herbs, such as tarragon, chervil, and basil
pepper

Preheat the oven to 375°F/190°C. Put the sweet potato chunks into a roasting pan with the oil, pepper to taste, and garlic and toss to combine. Roast in the preheated oven for 30 minutes, or until soft and slightly charred.

Meanwhile, preheat the broiler to high. Arrange the eggplant and bell pepper slices on the broiler pan and cook under the preheated broiler, turning occasionally, for 10 minutes, or until soft and slightly charred.

To make the dressing, whisk the vinegar, garlic, and oil together in a small bowl and stir in the shallot and herbs. Season to taste with pepper.

To serve, divide the salad greens among 4 serving plates and arrange the sweet potato, eggplant, bell peppers, and mozzarella on top. Drizzle with the dressing and serve.

# mixed mushroom salad

serves 4

ingredients
3 tbsp pine nuts
2 red onions, cut into chunks
4 tbsp olive oil
2 garlic cloves, crushed
3 slices whole wheat bread, cubed
7 oz/200 g mixed salad greens
9 oz/250 g cremini mushrooms, sliced
5½ oz/150 g shiitake mushrooms, sliced
5½ oz/150 g oyster mushrooms, torn

for the dressing
1 garlic clove, crushed
2 tbsp red wine vinegar
4 tbsp walnut oil
1 tbsp finely chopped fresh parsley
pepper

Preheat the oven to 350°F/180°C. Heat a nonstick skillet over medium heat, add the pine nuts, and cook, turning, until just browned. Tip into a bowl and set aside. Put the onions and 1 tablespoon of the olive oil into a roasting pan and toss to coat.
Roast in the preheated oven for 30 minutes. Meanwhile, heat 1 tablespoon of the remaining oil with the garlic in the nonstick skillet over high heat. Add the bread and cook, turning frequently, for 5 minutes, or until brown and crisp. Remove from the skillet and set aside.

Divide the salad greens among 4 serving plates and add the roasted onions. To make the dressing, whisk the garlic, vinegar, and oil together in a small bowl. Stir in the parsley and season to taste with pepper. Drizzle over the salad and onions.

Heat the remaining oil in a skillet, add the cremini and shiitake mushrooms, and cook for 2-3 minutes, stirring frequently. Add the oyster mushrooms and cook for an additional 2-3 minutes. Divide the hot mushroom mixture among the 4 plates. Sprinkle over the pine nuts and croutons and serve.

# warm red lentil salad with goat cheese

serves 4

ingredients
2 tbsp olive oil
2 tsp cumin seeds
2 garlic cloves, crushed
2 tsp grated fresh gingerroot
1½ cups split red lentils
3 cups vegetable stock
2 tbsp chopped fresh mint
2 tbsp chopped fresh cilantro
2 red onions, thinly sliced
4⅜ cups baby spinach leaves
1 tsp hazelnut oil
5½ oz/150 g soft goat cheese
4 tbsp strained plain yogurt
pepper

Heat half the olive oil in a large skillet over medium heat, add the cumin seeds, garlic, and gingerroot and cook for 2 minutes, stirring constantly. Stir in the lentils, then add the stock, a ladleful at a time, until it is all absorbed, stirring constantly--this will take about 20 minutes. Remove from the heat and stir in the herbs.

Meanwhile, heat the remaining olive oil in a skillet over medium heat, add the onions, and cook, stirring frequently, for 10 minutes, or until soft and lightly browned.

Toss the spinach in the hazelnut oil in a bowl, then divide among 4 serving plates. Mash the goat cheese with the yogurt in a small bowl and season to taste with pepper. Divide the lentils among the serving plates and top with the onions and goat cheese mixture.
# green bean & walnut salad

serves 2

ingredients
1 lb/450 g green beans
1 small onion, finely chopped
1 garlic clove, chopped
4 tbsp freshly grated Parmesan cheese
2 tbsp chopped walnuts or almonds, to garnish

for the dressing
6 tbsp olive oil
2 tbsp white wine vinegar
salt and pepper
2 tsp chopped fresh tarragon

Trim the beans, but leave them whole. Cook for 3-4 minutes in salted boiling water. Drain well, refresh under cold running water, and drain again. Put into a mixing bowl and add the onion, garlic, and cheese.

Place the dressing ingredients in a jar with a screw-top lid. Shake well. Pour the dressing over the salad and toss gently to coat. Cover with plastic wrap and chill for at least 30 minutes.

Remove the beans from the refrigerator 10 minutes before serving. Give them a quick stir and transfer to attractive serving dishes. Toast the nuts in a dry skillet over medium heat for 2 minutes, or until they begin to brown. Sprinkle the toasted nuts over the beans to garnish before serving.

# red onion, tomato & herb salad

serves 4

ingredients
2 lb/900 g tomatoes, sliced thinly
1 tbsp sugar (optional)
salt and pepper
1 red onion, sliced thinly
large handful coarsely chopped fresh herbs

for the dressing
2-4 tbsp vegetable oil
2 tbsp red wine vinegar or fruit vinegar

Arrange the tomato slices in a shallow bowl. Sprinkle with sugar (if using), salt, and pepper. Separate the onion slices into rings and sprinkle them over the tomatoes. Sprinkle the herbs over the top. Any fresh herbs that are in season can be used--for example, tarragon, sorrel, cilantro, or basil.

Place the dressing ingredients in a jar with a screw-top lid. Shake well. Pour the dressing over the salad and mix gently. Cover with plastic wrap and chill for 20 minutes. Remove the salad from the refrigerator 5 minutes before serving.
# nutty beet salad serves 4 ingredients 3 tbsp red wine vinegar or fruit vinegar 3 cooked beets, grated 2 sharp eating apples 2 tbsp lemon juice 4 large handfuls mixed salad greens, to serve 4 tbsp pecans, to garnish for the dressing ¼ cup plain yogurt ¼ cup mayonnaise 1 garlic clove, chopped 1 tbsp chopped fresh dill salt and pepper Sprinkle vinegar over the beets, cover with plastic wrap, and chill for at least 4 hours. Core and slice the apples, place the slices in a dish, and sprinkle with the lemon juice to prevent discoloration. Combine the dressing ingredients in a small bowl. Remove the beets from the refrigerator and dress. Add the apples to the beets and mix gently to coat with the salad dressing. To serve, arrange a handful of salad greens on each plate and top with a large spoonful of the apple and beet mixture. Toast the pecans in a heavy, dry skillet over medium heat for 2 minutes, or until they begin to brown. Sprinkle them over the beets and apple to garnish. # three-color salad serves 4 ingredients 10 oz/280 g buffalo mozzarella, drained and thinly sliced 8 large tomatoes, sliced salt and pepper 20 fresh basil leaves ½ cup extra-virgin olive oil Arrange the mozzarella and tomato slices on 4 individual serving plates and season to taste with salt. Set aside in a cool place for 30 minutes. Sprinkle the basil leaves over the salad and drizzle with the olive oil. Season with pepper and serve immediately. # broiled bell pepper salad serves 4-6 ingredients 6 large red, orange, or yellow bell peppers, each cut in half lengthwise, broiled, and skinned 4 hard-cooked eggs, shelled 12 anchovy fillets in oil, drained 12 large black olives, pitted extra-virgin olive oil or garlic-flavored olive oil, for drizzling sherry vinegar, to taste salt and pepper crusty bread, to serve Remove any cores and seeds from the broiled bell peppers and cut the flesh into thin strips. Arrange on a serving platter. 
Cut the eggs into wedges and arrange over the bell pepper strips, along with the anchovy fillets and olives. Drizzle oil over the top, then splash with sherry vinegar, adding both to taste. Sprinkle a little salt and pepper over the top and serve with crusty bread. # tomato salad with feta cheese serves 4 ingredients 12 plum tomatoes, sliced 1 very small red onion, thinly sliced ½ oz/15 g arugula leaves 20 Greek black olives, pitted 7 oz/200 g Greek feta cheese 1 egg 3 tbsp all-purpose flour 2 tbsp olive oil for the dressing 3 tbsp extra-virgin olive oil juice of ½ lemon 2 tsp chopped fresh oregano pinch of sugar pepper Make the dressing by whisking together the extra-virgin olive oil, the lemon juice, oregano, sugar, and black pepper in a pitcher or small bowl. Set aside. Prepare the salad by arranging the tomatoes, onion, arugula, and olives on 4 individual plates. Cut the feta cheese into cubes about 1 inch/2.5 cm square. Beat the egg in a dish and put the flour on a separate plate. Coat the cheese in the egg, shake off the excess, and then coat in the flour. Heat the olive oil in a large skillet, add the cheese, and cook over medium heat, turning over the cubes of cheese until they are golden on all sides. Scatter the fried feta over the salad. Whisk together the prepared dressing, spoon over the salad, and serve warm. # broiled bell pepper & goat cheese salad serves 4 ingredients 2 red bell peppers 2 green bell peppers 2 yellow or orange bell peppers ½ cup vinaigrette or herb vinaigrette 6 scallions, finely chopped 1 tbsp capers in brine, rinsed 7 oz/200 g soft goat cheese, any rind removed fresh flat-leaf parsley, chopped, to serve Preheat the broiler to high. Arrange the bell peppers on a broiler pan, position about 4 inches/10 cm from the heat, and broil for 8-10 minutes, turning them frequently, until the skins are charred all over. Transfer the bell peppers to a bowl, cover with a damp dish towel, and let stand until cool enough to handle. 
Using a small knife, skin each of the bell peppers. Working over a bowl to catch the juices from inside the bell peppers, cut each one in half and remove the cores and seeds, then cut the flesh into thin strips. Arrange the bell peppers on a serving platter and spoon over the reserved juices, then add the vinaigrette. Sprinkle over the scallions and capers, then crumble over the cheese. If not serving immediately, cover with plastic wrap and chill until required. Sprinkle with the parsley to serve. # fava bean salad serves 4 ingredients 3 lb/1.3 kg fresh young fava beans, or 1½ lb/675 g frozen baby fava beans 1½ cups crumbled Greek feta cheese 1 bunch of scallions, thinly sliced 2 tbsp chopped fresh dill or mint 2 hard-cooked eggs, cut into fourths crusty bread strained plain yogurt, to serve (optional) for the dressing 6 tbsp extra-virgin olive oil grated zest of 1 lemon and 2 tbsp lemon juice 1 small garlic clove, crushed pinch of sugar pepper Make the dressing by whisking together the oil, lemon zest and juice, garlic, sugar, and black pepper in a small bowl. Set aside. Shell the fresh fava beans, if using, and cook in boiling salted water for 5-10 minutes, or until tender. If using frozen fava beans, cook in boiling salted water for 4-5 minutes. Drain the cooked beans and put in a salad bowl. Whisk the dressing and pour over the beans while they are still warm. Sprinkle over the feta cheese, add the scallions, and toss together. Sprinkle over the chopped dill and arrange the egg fourths in the bowl. Serve warm with crusty bread and a bowl of yogurt to spoon on top, if wished. 
# mexican tomato salad serves 4 ingredients 1 lb 5 oz/600 g tomatoes, peeled, seeded, and coarsely chopped 1 onion, thinly sliced and pushed out into rings 14 oz/400 g canned kidney beans, drained and rinsed for the dressing 1 fresh green chile, seeded and diced 3 tbsp chopped fresh cilantro 3 tbsp olive oil 1 garlic clove, finely chopped 4 tbsp lime juice salt and pepper Place the chopped tomatoes and onion slices in a large serving bowl and mix well. Stir in the kidney beans. Mix the chile, cilantro, olive oil, garlic, and lime juice together in a measuring cup and season to taste with salt and pepper. Pour the dressing over the salad and toss thoroughly. Serve immediately or cover with plastic wrap and let chill in the refrigerator until required. # thai noodle salad serves 4 ingredients 1 oz/25 g dried wood ears 2 oz/55 g dried Chinese mushrooms 4 oz/115 g cellophane noodles ½ cup cooked lean ground pork 4 oz/115 g shelled raw shrimp 5 fresh red chiles, seeded and thinly sliced 1 tbsp chopped fresh cilantro 3 tbsp Thai fish sauce 3 tbsp lime juice 1 tbsp brown sugar Put the wood ears and Chinese mushrooms in separate bowls and pour over enough boiling water to cover. Let soak for 30 minutes. After 20 minutes, put the cellophane noodles in a separate bowl and pour over enough hot water to cover. Let the noodles soak for 10 minutes, or according to the package instructions. Drain the wood ears, rinse thoroughly and cut into small pieces. Drain the mushrooms, squeezing out as much liquid as possible. Cut off and discard the stalks and cut the caps in half. Pour just enough water into a pan to cover the bottom and bring to a boil. Add the pork, shrimp, wood ears, and mushrooms and let simmer, stirring, for 3 minutes, or until cooked through. Drain well. Drain the noodles and cut them into short lengths with scissors. Put the chiles, cilantro, fish sauce, lime juice, and brown sugar in a salad bowl and stir until the sugar has dissolved. 
Add the noodles and the shrimp and pork mixture, toss well, and serve. # sweet potato & bean salad serves 4 ingredients 1 sweet potato 4 baby carrots, halved 4 tomatoes 4 celery stalks, chopped 8 oz/225 g canned cranberry beans, drained and rinsed 4 oz/115 g mixed salad greens, such as frisee, arugula, radicchio and oak leaf lettuce 1 tbsp golden raisins 4 scallions, sliced diagonally for the dressing 2 tbsp lemon juice 1 garlic clove, crushed 5 fl oz/150 ml plain yogurt 2 tbsp olive oil salt and pepper Peel and dice the sweet potato. Bring a pan of water to a boil over medium heat. Add the sweet potato and cook for 10 minutes, until tender. Drain the potato, transfer to a bowl, and set aside. Cook the carrots in a separate pan of boiling water for 1 minute. Drain thoroughly and add to the sweet potato. Cut the tops off the tomatoes and scoop out the seeds. Chop the flesh and add to the bowl with the celery and beans. Mix well. Line a large serving bowl with the mixed salad greens. Spoon the sweet potato and bean mixture on top, then sprinkle with the golden raisins and scallions. Put all the dressing ingredients in a screw-top jar, with salt and pepper to taste, screw on the lid and shake until well blended. Pour over the salad and serve. # raspberry & feta salad with couscous serves 6 ingredients 12 oz/350 g couscous 2½ cups boiling chicken stock or vegetable stock 12 oz/350 g fresh raspberries small bunch of fresh basil 8 oz/225 g feta cheese, cubed or crumbled 2 zucchini, thinly sliced 4 scallions, trimmed and diagonally sliced ⅓ cup pine nuts, toasted grated rind of 1 lemon for the dressing 1 tbsp white wine vinegar 1 tbsp balsamic vinegar 4 tbsp extra-virgin olive oil juice of 1 lemon salt and pepper Put the couscous in a large heatproof bowl and pour over the stock. Stir well, then cover and let soak until all the stock has been absorbed. Pick over the raspberries, discarding any that are overripe. Shred the basil leaves. 
Transfer the couscous to a large serving bowl and stir well to break up any lumps. Add the cheese, zucchini, scallions, raspberries, and pine nuts. Stir in the basil and lemon rind and gently toss all the ingredients together. Put all the dressing ingredients in a screw-top jar, with salt and pepper to taste, then screw on the lid and shake until well blended. Pour over the salad and serve. # orecchiette salad with pears & bleu cheese serves 4 ingredients 9 oz/250 g dried orecchiette 1 head radicchio, torn into pieces 1 oak leaf lettuce, torn into pieces 2 pears 1 tbsp lemon juice 9 oz/250 g bleu cheese, diced scant ½ cup chopped walnuts 4 tomatoes, quartered 1 red onion, sliced 1 carrot, grated 8 fresh basil leaves 2 oz/55 g corn salad for the dressing 4 tbsp olive oil 2 tbsp lemon juice salt and pepper Bring a large heavy-bottom pan of lightly salted water to a boil. Add the pasta, return to a boil, and cook for 8-10 minutes, or until tender but still firm to the bite. Drain, refresh in a bowl of cold water and drain again. Place the radicchio and oak leaf lettuce leaves in a large bowl. Halve the pears, remove the cores, and dice the flesh. Toss the diced pear with 1 tablespoon of lemon juice in a small bowl to prevent discoloration. Top the salad with the bleu cheese, walnuts, pears, pasta, tomatoes, onion slices, and grated carrot. Add the basil and corn salad. For the dressing, mix the lemon juice and the olive oil together in a measuring cup, then season to taste with salt and pepper. Pour the dressing over the salad, toss, and serve. 
# salad with garlic dressing serves 4 ingredients 3 oz/85 g cucumber, cut into batons 6 scallions, halved 2 tomatoes, seeded and cut into 8 wedges 1 yellow bell pepper, seeded and cut into strips 2 celery stalks, cut into strips 4 radishes, quartered 3 oz/85 g arugula 1 tbsp chopped fresh mint, to garnish (optional) for the dressing 2 tbsp lemon juice 1 garlic clove, crushed ⅔ cup plain yogurt 2 tbsp olive oil salt and pepper To make the salad, gently mix the cucumber batons, scallions, tomato wedges, yellow bell pepper strips, celery strips, radishes, and arugula in a large serving bowl. To make the dressing, stir the lemon juice, garlic, plain yogurt, and olive oil together in a small bowl until thoroughly combined. Season with salt and pepper to taste. Spoon the dressing over the salad and toss to mix. Sprinkle the salad with chopped mint (if using) and serve. # warm pasta salad serves 4 ingredients 8 oz/225 g dried farfalle or other pasta shapes 6 pieces of sun-dried tomato in oil, drained and chopped 4 scallions, chopped 1¼ cups arugula, shredded ½ cucumber, seeded and diced salt and pepper for the dressing 4 tbsp olive oil 1 tbsp white wine vinegar ½ tsp superfine sugar 1 tsp Dijon mustard salt and pepper 4 fresh basil leaves, finely shredded To make the dressing, whisk the olive oil, vinegar, sugar, and mustard together in a bowl or pitcher. Season to taste with salt and pepper and stir in the basil. Bring a large heavy-bottom pan of lightly salted water to a boil. Add the pasta, return to a boil, and cook for 8-10 minutes, or until tender but still firm to the bite. Drain and transfer to a salad bowl. Add the dressing and toss well. Add the tomatoes, scallions, arugula, and cucumber, season to taste with salt and pepper, and toss. Serve warm. 
# italian salad serves 4 ingredients 8 oz/225 g dried conchiglie 1¾ oz/50 g pine nuts 12 oz/350 g cherry tomatoes, cut in half 1 red bell pepper, seeded and cut into bite-size chunks 1 red onion, chopped 7 oz/200 g buffalo mozzarella, cubed 12 black olives, pitted 1 oz/25 g fresh basil leaves shavings of fresh Parmesan cheese, to garnish crusty bread, to serve for the dressing 5 tbsp extra-virgin olive oil 2 tbsp balsamic vinegar 1 tbsp chopped fresh basil salt and pepper Bring a large pan of lightly salted water to a boil. Add the pasta and cook over medium heat for about 10 minutes, or according to the package instructions. When cooked, the pasta should be tender but still firm to the bite. Drain, rinse under cold running water, and drain again. Let cool. While the pasta is cooking, put the pine nuts in a dry skillet and cook over low heat for 1-2 minutes, until golden brown. Remove from the heat, transfer to a dish, and let cool. To make the dressing, put the oil, vinegar, and basil into a small bowl. Season with salt and pepper and stir together well. Cover with plastic wrap and set to one side. To assemble the salad, divide the pasta among serving bowls. Add the pine nuts, tomatoes, red bell pepper, onion, cheese, and olives. Scatter over the basil leaves, then drizzle over the dressing. Garnish with fresh Parmesan cheese shavings and serve with crusty bread. # potato salad serves 4 ingredients 1 lb 9 oz/700 g new potatoes 8 scallions 1 cup mayonnaise 1 tsp paprika salt and pepper 2 tbsp snipped fresh chives pinch of paprika, to garnish Bring a large pan of lightly salted water to a boil. Add the potatoes and cook for 10-15 minutes, or until just tender. Drain the potatoes and rinse them under cold running water until completely cold. Drain again. Transfer the potatoes to a bowl and reserve until required. Using a sharp knife, slice the scallions thinly on the diagonal. Mix the mayonnaise, paprika, and salt and pepper to taste together in a bowl. 
Pour the mixture over the potatoes. Add the scallions to the potatoes and toss together. Transfer the potato salad to a serving bowl and sprinkle with snipped chives and a pinch of paprika. Cover and let chill in the refrigerator until required. # capri salad serves 4 ingredients 2 beefsteak tomatoes 4½ oz/125 g mozzarella cheese 12 black olives 8 fresh basil leaves 1 tbsp balsamic vinegar 1 tbsp extra-virgin olive oil salt and pepper fresh basil leaves, to garnish Using a sharp knife, cut the tomatoes into thin slices. Drain the mozzarella, if necessary, and cut into slices. Pit the black olives and slice into rings. Layer the tomatoes, mozzarella slices, olives, and basil leaves in 4 stacks, finishing with a layer of cheese on top. Place each stack under a preheated hot broiler for 2-3 minutes or just long enough to melt the mozzarella. Drizzle over the balsamic vinegar and olive oil, and season to taste with a little salt and pepper. Transfer to individual serving plates and garnish with fresh basil leaves. Serve immediately. # hearty: a collection of meat & poultry salads # waldorf summer chicken salad serves 4 ingredients 1 lb 2 oz/500 g red dessert apples, diced 3 tbsp fresh lemon juice ⅔ cup light mayonnaise 1 head celery 4 shallots, sliced 1 garlic clove, finely chopped ¾ cup walnuts, chopped 1 lb 2 oz/500 g cooked chicken, cubed 1 romaine lettuce pepper chopped walnuts, to garnish Place the apples in a bowl with the lemon juice and 1 tablespoon of mayonnaise. Let stand for 40 minutes. Using a sharp knife, slice the celery very thinly. Add the celery, shallots, garlic, and walnuts to the apple and mix together. Stir in the mayonnaise and blend thoroughly. Add the cooked chicken to the bowl and mix well. Line a serving dish with the lettuce. Pile the chicken salad into a serving bowl, sprinkle with pepper and garnish with the chopped walnuts. 
# chef's salad serves 6 ingredients 1 iceberg lettuce, shredded 6 oz/175 g cooked lean ham, cut into thin strips 6 oz/175 g cooked tongue, cut into thin strips 12 oz/350 g cooked chicken, cut into thin strips 6 oz/175 g Gruyere cheese 4 tomatoes, quartered 3 hard-cooked eggs, shelled and quartered 1¾ cups Thousand Island dressing sliced French bread, to serve Arrange the lettuce on a large serving platter. Arrange the cold meats decoratively on top. Cut the Gruyere cheese into thin sticks. Arrange the cheese sticks over the salad, and the tomato and egg quarters around the edge of the platter. Serve the salad immediately with the Thousand Island dressing and sliced French bread. # prosciutto with melon & asparagus serves 4 ingredients 8 oz/225 g asparagus spears 1 small or ½ medium-size Galia or cantaloupe melon 2 oz/55 g prosciutto, thinly sliced 5½ oz/150 g bag mixed salad greens, such as herb salad with arugula ⅝ cup fresh raspberries 1 tbsp freshly shaved Parmesan cheese for the dressing 1 tbsp balsamic vinegar 2 tbsp raspberry vinegar 2 tbsp orange juice Trim the asparagus, cutting in half if very long. Cook in lightly salted boiling water over medium heat for 5 minutes, or until tender. Drain and plunge into cold water, then drain again and set aside. Cut the melon in half and scoop out the seeds. Cut into small wedges and cut away the rind. Separate the prosciutto slices, cut in half and wrap around the melon wedges. Arrange the salad greens on a large serving platter and place the melon wedges on top together with the asparagus spears. Scatter over the raspberries and Parmesan shavings. Place the vinegars and juice in a screw-top jar and shake until blended. Pour over the salad and serve. 
# warm beef niçoise serves 4 ingredients 4 tenderloin steaks, about 4 oz/115 g each, fat discarded 2 tbsp red wine vinegar 2 tbsp orange juice 2 tsp ready-made English mustard 2 eggs 6 oz/175 g new potatoes 4 oz/115 g green beans, trimmed 6 oz/175 g mixed salad greens, such as baby spinach, arugula, and mizuna 1 yellow bell pepper, seeded, peeled, and cut into strips 6 oz/175 g cherry tomatoes, halved black olives, pitted, to garnish (optional) 2 tsp extra-virgin olive oil Place the steaks in a shallow dish. Blend the vinegar with 1 tablespoon of orange juice and 1 teaspoon of mustard. Pour over the steaks, cover, then let stand in the refrigerator for at least 30 minutes. Turn over halfway through the marinating time. Place the eggs in a pan and cover with cold water. Bring to a boil, then reduce the heat to a simmer and cook for 10 minutes. Remove and plunge the eggs into cold water. Once cold, shell the eggs, cut into quarters, and set aside. Meanwhile, place the potatoes in a pan and cover with cold water. Bring to a boil, then cover and let simmer for 15 minutes, or until tender when pierced with a fork. Drain and set aside. Bring a saucepan of water to a boil, add the beans, and cook for 5 minutes, or until just tender. Drain, plunge into cold water, and drain again. Divide the salad greens among 4 serving plates. Arrange the potatoes and beans on top together with the egg quarters, bell pepper, cherry tomatoes, and olives, if using. Blend the remaining orange juice and mustard with the olive oil and set aside. Heat a stovetop grill pan or griddle until smoking. Drain the steaks and cook for 3-5 minutes on each side or according to personal preference. Slice the steaks and arrange on top of the salad, then pour over the dressing and serve. 
# cajun chicken salad serves 4 ingredients 4 skinless, boneless chicken breasts, about 5 oz/140 g each 4 tsp Cajun seasoning 2 tsp corn oil (optional) 1 ripe mango, peeled, seeded, and cut into thick slices 7 oz/200 g mixed salad greens 1 red onion, thinly sliced and cut in half 6 oz/175 g cooked beet, diced 3 oz/85 g radishes, sliced generous ⅜ cup walnut halves 2 tbsp sesame seeds, to garnish for the dressing 4 tbsp walnut oil 1-2 tsp Dijon mustard 1 tbsp lemon juice salt and pepper Make 3 diagonal slashes across each chicken breast. Put the chicken into a shallow dish and sprinkle all over with the Cajun seasoning. Cover and let chill for at least 30 minutes. When ready to cook, brush a stove-top grill pan with the corn oil, if using. Heat over high heat until very hot and a few drops of water sprinkled into the pan sizzle immediately. Add the chicken and cook for 7-8 minutes on each side, or until thoroughly cooked. If still slightly pink in the center, cook a little longer. Remove the chicken and set aside. Add the mango slices to the pan and cook for 2 minutes on each side. Remove and set aside. Meanwhile, arrange the salad greens in a salad bowl and sprinkle over the onion, beet, radishes, and walnut halves. Put the walnut oil, mustard, lemon juice, and salt and pepper to taste in a screw-top jar and shake until well blended. Pour over the salad. Arrange the mango and the salad on the serving plate, top with the chicken breast and sprinkle with sesame seeds. # roast beef salad serves 4 ingredients 1 lb 10 oz/750 g beef fillet, trimmed of any visible fat pepper 2 tsp Worcestershire sauce 3 tbsp olive oil 14 oz/400 g green beans 3½ oz/100 g small pasta, such as orecchiette 2 red onions, finely sliced 1 large head radicchio generous ¼ cup green olives, pitted scant ⅓ cup shelled hazelnuts, whole for the dressing 1 tsp Dijon mustard 2 tbsp white wine vinegar 5 tbsp olive oil Preheat the oven to 425°F/220°C. 
Rub the beef with pepper to taste and Worcestershire sauce. Heat 2 tablespoons of the oil in a small roasting pan over high heat, add the beef, and sear on all sides. Transfer the dish to the preheated oven and roast for 30 minutes. Remove and let cool. Bring a large pan of water to a boil, add the beans, and cook for 5 minutes, or until just tender. Remove with a slotted spoon and refresh the beans under cold running water. Drain and put into a large bowl. Return the bean cooking water to a boil, add the pasta, and cook for 11 minutes, or until tender. Drain, return to the pan, and toss with the remaining oil. Add the pasta to the beans with the onions, radicchio leaves, olives, and hazelnuts, mix gently and transfer to a serving bowl or dish. Arrange some thinly sliced beef on top. Whisk the dressing ingredients together in a separate bowl, then pour over the salad and serve at once with extra sliced beef. # walnut, pear & crispy bacon salad serves 4 ingredients 4 lean bacon slices generous ⅝ cup walnut halves 2 Red Bartlett pears, cored and sliced lengthwise 1 tbsp lemon juice 6 oz/175 g watercress, tough stalks removed for the dressing 3 tbsp extra-virgin olive oil 2 tbsp lemon juice ½ tsp honey salt and pepper Preheat the broiler to high. Arrange the bacon on a foil-lined broiler pan and cook under the preheated broiler until well browned and crisp. Let cool, then cut into ½-inch/1-cm pieces. Meanwhile, heat a dry skillet over medium heat and lightly toast the walnuts, shaking the skillet frequently, for 3 minutes, or until lightly browned. Let cool. Toss the pears in the lemon juice to prevent discoloration. Put the watercress, walnuts, pears, and bacon into a salad bowl. To make the dressing, whisk the oil, lemon juice, and honey together in a small bowl or pitcher. Season to taste with salt and pepper, then pour over the salad. Toss well to combine and serve. 
# warm chicken liver salad serves 4 ingredients salad greens 1 tbsp olive oil 1 small onion, chopped finely 1 lb/450 g frozen chicken livers, thawed 1 tsp chopped fresh tarragon 1 tsp whole-grain mustard 2 tbsp balsamic vinegar salt and pepper Arrange the salad greens on serving plates. Heat the oil in a nonstick skillet, add the onion, and cook for 5 minutes, or until softened. Add the chicken livers, tarragon, and mustard and cook for 3-5 minutes, stirring, until tender. Put on top of the salad greens. Add the vinegar, salt, and pepper to the skillet and heat, stirring constantly, until all the sediment has been lifted from the skillet. Pour the dressing over the chicken livers and serve warm. # artichoke & prosciutto salad serves 4 ingredients 9¾ oz/275 g canned artichoke hearts in oil, drained 4 small tomatoes 1 oz/25 g sun-dried tomatoes in oil, drained 1½ oz/40 g prosciutto 1 tbsp pitted black olives, halved handful of fresh basil sprigs crusty bread, to serve for the dressing 3 tbsp olive oil 1 tbsp white wine vinegar 1 garlic clove, crushed ½ tsp mild mustard 1 tsp honey salt and pepper, to taste Make sure the artichoke hearts are thoroughly drained, then cut them into quarters and place in a serving bowl. Cut each fresh tomato into wedges. Slice the sun-dried tomatoes into thin strips. Cut the prosciutto into thin strips and add to the bowl with the tomatoes and olive halves. Keeping a few basil sprigs whole for garnishing, tear the remainder of the leaves into small pieces and add to the bowl containing the other salad ingredients. To make the dressing, place all the ingredients in a screw-top jar and shake vigorously until they are well blended. Pour the dressing over the salad and toss together. Garnish the salad with a few basil sprigs and serve with crusty bread. 
# lima bean, onion & herb salad with spicy sausage serves 2 ingredients 1 tbsp corn oil 1 small onion, finely sliced 9 oz/250 g canned lima beans, drained and rinsed 1 tsp balsamic vinegar 2 chorizo sausages, sliced diagonally 1 small tomato, diced 2 tbsp harissa paste 3 oz/85 g mixed herb salad Heat the oil in a nonstick skillet over medium heat, add the onion, and cook, stirring frequently, until softened but not browned. Add the beans and cook for an additional 1 minute, then add the vinegar, stirring well. Keep warm. Meanwhile, heat a separate dry skillet over medium heat, add the chorizo slices, and cook, turning occasionally, until lightly browned. Remove with a slotted spoon and drain on paper towels. Mix the tomato and harissa paste together in a small bowl. Divide the herb salad between 2 plates, spoon over the bean mixture, and sprinkle over the warm chorizo slices. Top with a spoonful of the tomato and harissa mixture and serve at once. # turkey & rice salad serves 4 ingredients 4 cups chicken stock scant 1 cup mixed long-grain and wild rice 2 tbsp corn oil 8 oz/225 g skinless, boneless turkey breast, trimmed of all visible fat and cut into thin strips 2 cups snow peas 4 oz/115 g oyster mushrooms, torn into pieces ¼ cup shelled pistachio nuts, finely chopped 2 tbsp chopped fresh cilantro 1 tbsp snipped fresh garlic chives salt and pepper 1 tbsp balsamic vinegar fresh garlic chives, to garnish Set aside 3 tablespoons of the chicken stock and bring the remainder to a boil in a large pan. Add the rice and cook for 30 minutes, or until tender. Drain and let cool slightly. Meanwhile, heat 1 tablespoon of the oil in a preheated wok or skillet. Stir-fry the turkey over medium heat for 3-4 minutes, or until cooked through. Using a slotted spoon, transfer the turkey to a dish. Add the snow peas and mushrooms to the wok and stir-fry for 1 minute. Add the reserved stock, bring to a boil, then reduce the heat, cover, and let simmer for 3-4 minutes. 
Transfer the vegetables to the dish and let cool slightly. Thoroughly mix the rice, turkey, snow peas, mushrooms, nuts, cilantro, and garlic chives together, then season to taste with salt and pepper. Drizzle with the remaining corn oil and the vinegar and garnish with fresh garlic chives. Serve warm. # smoked chicken & cranberry salad serves 4 ingredients 1 smoked chicken, weighing 3 lb/1.3 kg scant 1 cup dried cranberries 2 tbsp apple juice or water 7 oz/200 g sugar snap peas 2 ripe avocados juice of ½ lemon 4 lettuce hearts 1 bunch watercress, trimmed 2 oz/55 g arugula for the dressing 2 tbsp olive oil 1 tbsp walnut oil 2 tbsp lemon juice 1 tbsp chopped fresh mixed herbs, such as parsley and lemon thyme salt and pepper Carve the chicken carefully, slicing the white meat. Divide the legs into thighs and drumsticks and trim the wings. Cover with plastic wrap and refrigerate. Put the cranberries in a bowl. Stir in the apple juice, then cover with plastic wrap and let soak for 30 minutes. Meanwhile, blanch the sugar snap peas, then refresh under cold running water and drain. Peel, pit and slice the avocados and toss in the lemon juice to prevent discoloration. Separate the lettuce hearts and arrange on a large serving platter with the avocados, sugar snap peas, watercress, arugula, and the chicken. Put all the dressing ingredients, with salt and pepper to taste, in a screw-top jar, then screw on the lid and shake until well blended. Drain the cranberries and mix them with the dressing, then pour over the salad. Serve immediately. 
# melon, chorizo & artichoke salad serves 8 ingredients 12 small globe artichokes juice of ½ lemon 2 tbsp Spanish olive oil 1 small orange-fleshed melon, such as cantaloupe 7 oz/200 g chorizo sausage, outer casing removed fresh tarragon or flat-leaf parsley sprigs, to garnish for the dressing 3 tbsp Spanish extra-virgin olive oil 1 tbsp red wine vinegar 1 tsp prepared mustard 1 tbsp chopped fresh tarragon salt and pepper Prepare the artichokes then brush the cut surfaces of the artichokes with lemon juice to prevent discoloration. Carefully remove the choke (the mass of silky hairs) by pulling it out with your fingers or by scooping it out with a spoon. It is very important to remove all the choke on older artichokes, as the little barbs, if eaten, can irritate the throat. Cut the artichokes into fourths and brush them again with lemon juice. Heat the olive oil in a large, heavy-bottom skillet. Add the prepared artichokes and cook, stirring frequently, for 5 minutes, or until the artichoke leaves are golden brown. Remove from the skillet, then transfer to a large serving bowl and let cool. To prepare the melon, cut in half and scoop out the seeds with a spoon. Cut the flesh into bite-size cubes. Add to the cooled artichokes. Cut the chorizo into bite-size chunks and add to the melon and artichokes. To make the dressing, place all the ingredients in a small bowl and whisk together. Just before serving, pour the dressing over the prepared salad ingredients and toss together. Serve the salad garnished with tarragon or parsley sprigs. 
# layered chicken salad serves 4 ingredients 1 lb 10 oz/750 g new potatoes, scrubbed 1 red bell pepper, halved and seeded 1 green bell pepper, halved and seeded 2 small zucchini, sliced 1 small onion, thinly sliced 3 tomatoes, sliced 12 oz/350 g cooked chicken, sliced chopped fresh chives, to garnish for the dressing ⅔ cup plain yogurt 3 tbsp mayonnaise 1 tbsp chopped fresh chives salt and pepper Put the potatoes into a large pan, add just enough cold water to cover, and bring to a boil. Lower the heat, cover, and simmer for 15-20 minutes until tender. Meanwhile, place the bell pepper halves, skin side up, under a preheated hot broiler and broil until the skins blacken and begin to char. Remove the bell peppers with tongs, place in a bowl, and cover with plastic wrap. Set aside until cool enough to handle, then peel off the skins and slice the flesh. Bring a small pan of lightly salted water to a boil. Add the zucchini, bring back to a boil, and simmer for 3 minutes. Drain, rinse under cold running water to prevent any further cooking, and drain again. Set aside. To make the dressing, whisk the yogurt, mayonnaise, and chopped chives together in a small bowl until well blended. Season to taste with salt and pepper. When the potatoes are tender, drain, cool, and slice them. Add them to the dressing and mix gently to coat evenly. Spoon the potatoes onto 4 serving plates, dividing them equally. Top each plate with one quarter of the bell pepper slices and zucchini. Layer one quarter of the onion and tomato slices, then the sliced chicken, on top of each serving. Garnish with chopped chives and serve immediately. 
# rare roast beef pasta salad serves 4 ingredients 1 lb/450 g round or sirloin steak in a single piece salt and pepper 4 cups dried fusilli 4 tbsp olive oil 2 tbsp lime juice 2 tbsp Thai fish sauce 2 tsp honey 4 scallions, sliced 1 cucumber, peeled and cut into 1-inch/2.5-cm chunks 3 tomatoes, cut into wedges 1 tbsp finely chopped fresh mint Season the steak with salt and pepper. Broil or pan-fry it for 4 minutes on each side. Let rest for 5 minutes, then slice thinly across the grain. Meanwhile, bring a large pan of lightly salted water to a boil. Add the pasta, bring back to a boil, and cook for 8-10 minutes or until tender, but still firm to the bite. Drain the fusilli, refresh in cold water, and drain again thoroughly. Toss the fusilli in the olive oil and set aside until required. Combine the lime juice, fish sauce, and honey in a small pan and cook over medium heat for 2 minutes. Add the scallions, cucumber, tomatoes, and mint to the pan, then add the steak and mix well. Season to taste with salt. Transfer the fusilli to a large, warm serving dish and top with the steak and salad mixture. Serve just warm or let cool completely. # roast duck salad serves 4 ingredients 2 duck breasts 2 Boston lettuces, shredded 1 cup bean sprouts 1 yellow bell pepper, seeded and cut into thin strips ½ cucumber, seeded and cut into short thin sticks 2 tsp shredded lime zest 2 tbsp shredded coconut, toasted for the dressing juice of 2 limes 3 tbsp fish sauce 1 tbsp soft brown sugar 2 tsp sweet chili sauce 1 inch/2.5 cm fresh gingerroot, grated finely 3 tbsp chopped fresh mint 3 tbsp chopped fresh basil Preheat the oven to 400°F/200°C. Place the duck breasts on a rack set over a roasting pan and roast in the oven for 20-30 minutes, or until cooked as desired and the skin is crisp. Remove from the oven and set aside to cool. In a large bowl, combine the lettuce, bean sprouts, bell pepper and cucumber. Cut the cooled duck into slices and add to the salad. Mix well. 
In a bowl, whisk together the lime juice, fish sauce, sugar, chili sauce, gingerroot, mint, and basil. Add the dressing to the salad and toss well. Turn the salad out onto a serving platter and garnish with the lime zest and shredded coconut before serving. # warm mushroom, spinach, & pancetta salad serves 4 ingredients generous 6 cups fresh baby spinach leaves 2 tbsp olive oil 5½ oz/150 g pancetta 10 oz/280 g mixed wild mushrooms, sliced for the dressing 5 tbsp olive oil 1 tbsp balsamic vinegar 1 tsp Dijon mustard pinch of sugar salt and pepper To make the dressing, place the olive oil, vinegar, mustard, sugar, salt, and pepper in a small bowl and whisk together. Rinse the baby spinach under cold running water, then drain and place in a large salad bowl. Heat the oil in a large skillet. Add the pancetta and cook for 3 minutes. Add the mushrooms and cook for 3-4 minutes, or until tender. Pour the dressing into the skillet and immediately turn the cooked mixture and dressing into the bowl with the spinach. Toss until coated with the dressing and serve at once. # crispy spinach & bacon serves 4 ingredients 4 tbsp olive oil 4 strips of lean bacon, diced 1 thick slice of white bread, crusts removed, cut into cubes 1 lb/450 g fresh spinach, torn or shredded Heat 2 tablespoons of the olive oil over high heat in a large skillet. Add the diced bacon to the skillet and cook for 3-4 minutes, or until crisp. Remove with a slotted spoon, draining carefully, and set aside. Toss the cubes of bread in the fat remaining in the skillet over high heat for about 4 minutes, or until crisp and golden. Remove the croutons with a slotted spoon, draining carefully, and set them aside. Add the remaining oil to the skillet and heat. Toss the spinach in the oil over high heat for about 3 minutes, or until it has just wilted. Turn into a serving bowl and sprinkle with the bacon and croutons. Serve immediately. 
# thai-style chicken salad serves 4 ingredients 14 oz/400 g small new potatoes, scrubbed and cut in half, lengthwise 7 oz/200 g baby corn cobs 1½ cups bean sprouts 3 scallions, trimmed and sliced 4 cooked, skinless chicken breasts, sliced 1 tbsp chopped lemongrass 2 tbsp chopped fresh cilantro salt and pepper wedges of lime, to garnish fresh cilantro leaves, to garnish for the dressing 6 tbsp chili oil or sesame oil 2 tbsp lime juice 1 tbsp light soy sauce 1 tbsp chopped fresh cilantro 1 small red chile, seeded and finely sliced Bring two pans of water to a boil. Put the potatoes into one pan and cook for 15 minutes until tender. Put the corn cobs into the other pan and cook for 5 minutes until tender. Drain the potatoes and corn cobs well and let cool. When the vegetables are cool, transfer them to a large serving dish. Add the bean sprouts, scallions, chicken, lemongrass, and cilantro and season with salt and pepper. To make the dressing, put all the ingredients into a screw-top jar and shake well. Alternatively, put them into a bowl and mix together well. Drizzle the dressing over the salad and garnish with lime wedges and cilantro leaves. Serve at once. # duck & radish salad serves 4 ingredients 12 oz/350 g boneless duck breasts 2 tbsp all-purpose flour salt and pepper 1 egg 2 tbsp water 2 tbsp sesame seeds 3 tbsp sesame oil ½ head Chinese cabbage, shredded 3 celery stalks, sliced finely 8 radishes, trimmed and halved fresh basil leaves, to garnish for the dressing finely grated peel of 1 lime 2 tbsp lime juice 2 tbsp olive oil 1 tbsp light soy sauce 1 tbsp chopped fresh basil salt and pepper Put each duck breast between sheets of baking parchment or plastic wrap. Use a meat mallet or rolling pin to beat them out and flatten them slightly. Sprinkle the flour onto a large plate and season with salt and pepper. Beat the egg and water together in a shallow bowl, then sprinkle the sesame seeds onto a separate plate. 
Dip the duck breasts first into the seasoned flour, then into the egg mixture and finally into the sesame seeds, to coat the duck evenly. Heat the sesame oil in a preheated wok or large skillet. Fry the duck breasts over a medium heat for about 8 minutes, turning once. To test whether they are cooked, insert a sharp knife into the thickest part--the juices should run clear. Lift them out and drain on paper towels. To make the dressing for the salad, whisk together the lime peel and juice, olive oil, soy sauce, and chopped basil. Season with a little salt and pepper. Arrange the Chinese cabbage, celery, and radish on a serving plate. Slice the duck breasts thinly and place on top of the salad. Drizzle with the dressing and garnish with fresh basil leaves. Serve at once. # chicken, cheese & arugula salad serves 4 ingredients 5½ oz/150 g arugula leaves 2 celery stalks, trimmed and sliced ½ cucumber, sliced 2 scallions, trimmed and sliced 2 tbsp chopped fresh parsley 1 oz/25 g walnut pieces 12 oz/350 g boneless roast chicken, sliced 4½ oz/125 g bleu cheese, cubed handful of seedless red grapes, cut in half (optional) salt and pepper for the dressing 2 tbsp olive oil 1 tbsp sherry vinegar 1 tsp Dijon mustard 1 tbsp chopped mixed herbs Wash the arugula leaves, pat dry with paper towels, and put them into a large salad bowl. Add the celery, cucumber, scallions, parsley, and walnuts and mix together well. Transfer onto a large serving platter. Arrange the chicken slices over the salad, then scatter over the cheese. Add the red grapes, if using. Season well with salt and pepper. To make the dressing, put all the ingredients into a screw-top jar and shake well. Alternatively, put them into a bowl and mix together well. Drizzle the dressing over the salad and serve. 
# broiled lamb with yogurt & herb dressing serves 4 ingredients 2 tbsp sunflower oil, plus extra for broiling the lamb 1 tbsp tomato paste ½ tbsp ground cumin 1 tsp lemon juice 1 garlic clove, crushed pinch of cayenne pepper salt and pepper 1 lb 2 oz/500 g lamb neck fillets, trimmed with excess fat removed toasted sesame seeds and chopped fresh parsley, to garnish for the dressing 2 tbsp fresh lemon juice 1 tsp honey 3 oz/75 g thick plain yogurt 2 tbsp finely shredded fresh mint 2 tbsp chopped fresh parsley 1 tbsp finely snipped fresh chives salt and pepper Mix the 2 tablespoons oil, tomato paste, cumin, lemon juice, garlic, cayenne and salt and pepper to taste together in a non-metallic bowl. Add the lamb and rub all over with the marinade. Cover the bowl and marinate in the refrigerator for at least 2 hours, but ideally overnight. Meanwhile, to make the dressing, whisk the lemon juice and honey together until the honey dissolves. Whisk in the yogurt until well blended. Stir in the herbs and add salt and pepper to taste. Cover and chill until required. Remove the lamb from the fridge 15 minutes before you are ready to cook. Heat the broiler to its highest setting and lightly brush the broiler rack with oil. Broil the lamb, turning it once, for 10 minutes for medium and 12 minutes for well done. Leave the lamb to cool completely, then cover and chill until required. Thinly slice the lamb, then divide among 4 plates. Adjust the seasoning in the dressing, if necessary, then spoon over the lamb slices. Sprinkle with toasted sesame seeds and parsley and serve. 
# smoked chicken salad with avocado & tarragon dressing serves 4-6 ingredients 2 large, juicy tomatoes, sliced 1 lb 5 oz/600 g smoked chicken, skinned and cut into slices 9 oz/250 g fresh watercress, any thick stems or yellow leaves removed, then rinsed and patted dry 3 oz/75 g fresh bean sprouts, soaked for 20 minutes in cold water, then drained well and patted dry leaves from several sprigs fresh flat-leaf parsley or cilantro for the dressing 1 ripe, soft avocado 2 tbsp lemon juice 1 tbsp tarragon vinegar 3 oz/75 g thick plain yogurt 1 small garlic clove, crushed 1 tbsp chopped fresh tarragon leaves salt and pepper To make the dressing, put the avocado, lemon juice, and vinegar in a blender or food processor and blend until smooth, scraping down the side with a rubber spatula. Add the yogurt, garlic, and tarragon leaves and process again. Season with salt and pepper to taste, then transfer to a bowl. Cover closely with plastic wrap and chill for 2 hours. To assemble the salad, divide the tomato slices among 4-6 individual plates. Toss the smoked chicken, watercress, bean sprouts and parsley or cilantro leaves together. Divide the salad ingredients among the plates. Adjust the seasoning in the dressing, if necessary. Spoon the dressing over each salad and serve. # roast pork & pumpkin salad serves 4-6 ingredients 1 small pumpkin, about 3½ lb/1.6 kg 2 red onions, cut into wedges olive oil 3½ oz/100 g green beans, topped and tailed and cut in half 1¼ lb/600 g roast pork, any skin or rind removed and cut into bite-size chunks large handful fresh arugula leaves 3½ oz/100 g feta cheese, drained and crumbled 2 tbsp toasted pine nuts 2 tbsp chopped fresh parsley salt and pepper for the vinaigrette 6 tbsp extra-virgin olive oil 3 tbsp balsamic vinegar ½ tsp sugar ½ tsp Dijon, prepared English or wholegrain mustard salt and pepper Preheat the oven to 400°F/200°C. 
Cut the pumpkin in half, scoop out the seeds and fibers and cut the flesh into wedges about 1½ inches/4 cm wide. Very lightly rub the pumpkin and onion wedges with the olive oil, place in a roasting pan and roast for 25-30 minutes until the pumpkin and onions are tender but holding their shape. Meanwhile, bring a small pan of salted water to a boil. Add the green beans and blanch for 5 minutes, or until tender. Drain well and cool under cold running water to stop the cooking. Drain well and pat dry. Remove the pumpkin and onion wedges from the oven as soon as they are tender-crisp and leave to cool completely. When the pumpkin is cool, peel and cut into bite-size pieces. To make the vinaigrette, put the oil, vinegar, sugar, mustard, and salt and pepper to taste into a screw-top jar and shake until blended. To assemble the salad, put the pumpkin, onions, beans, pork, arugula, feta, pine nuts, and parsley in a large bowl and gently toss together--be careful not to break up the pumpkin. Shake the dressing again, pour over the salad and gently toss. Divide among individual bowls and serve. # roast chicken with pesto cream salad serves 4-6 ingredients 1 lb 5 oz/600 g cooked boneless chicken, any skin removed and cut into bite-size chunks 3 celery stalks, chopped 2 large skinned red bell peppers from a jar, well drained and sliced salt and pepper iceberg lettuce leaves, to serve for the pesto cream 5 fl oz/150 ml sour cream about 4 tbsp bottled pesto sauce To make the pesto cream, put the sour cream into a large bowl, then beat in 4 tablespoons pesto sauce. Taste and add more pesto if you want a stronger flavor. Add the chicken, celery, and bell peppers to the bowl and gently toss together. Add salt and pepper to taste and toss again. Cover and chill until required. Remove the salad from the refrigerator 10 minutes before serving to return to room temperature. Give the salad ingredients a good stir, then divide among individual plates lined with lettuce leaves. 
# sparkling a collection of fish and seafood salads # salad niçoise serves 4 ingredients 2 tuna steaks, about ¾ inch/2 cm thick olive oil, for brushing salt and pepper 9 oz/250 g green beans, trimmed ½ cup vinaigrette or garlic vinaigrette dressing 2 hearts of lettuce, leaves separated 3 large hard-cooked eggs, cut into fourths 2 juicy vine-ripened tomatoes, cut into wedges 1¾ oz/50 g anchovy fillets in oil, drained 2 oz/55 g Niçoise olives, pitted Heat a ridged cast-iron grill pan over high heat until you can feel the heat rising from the surface. Brush the tuna steaks with oil, place oiled side down on the hot pan, and cook for 2 minutes. Lightly brush the top side of the tuna steaks with more oil. Use a pair of tongs to turn the tuna steaks over, then season to taste with salt and pepper. Continue chargrilling for another 2 minutes for rare or up to 4 minutes for well done. Let cool. Meanwhile, bring a pan of salted water to a boil. Add the beans to the pan and return to a boil, then boil for 3 minutes, or until tender-crisp. Drain the beans and immediately transfer them to a large bowl. Pour over the vinaigrette and stir together, then let the beans cool in the dressing. To serve, line a platter with lettuce leaves. Lift the beans out of the bowl, leaving the excess dressing behind, and pile them in the center of the platter. Break the tuna into large pieces and arrange it over the beans. Arrange the hard-cooked eggs and tomatoes around the side. Arrange the anchovy fillets over the salad, then scatter with the olives. Drizzle the remaining dressing in the bowl over everything and serve. 
# lentil & tuna salad serves 4 ingredients 2 ripe tomatoes 1 small red onion 14 oz/400 g can lentils, drained 6½ oz/185 g can tuna, drained 2 tbsp chopped fresh cilantro pepper for the dressing 3 tbsp virgin olive oil 1 tbsp lemon juice 1 tsp whole-grain mustard 1 garlic clove, crushed ½ tsp ground cumin ½ tsp ground coriander Using a sharp knife, seed the tomatoes and then chop them into fine dice. Finely chop the red onion. To make the dressing, whisk together the virgin olive oil, lemon juice, mustard, garlic, cumin, and ground coriander in a small bowl until thoroughly combined. Set aside until required. Mix together the chopped onion, diced tomatoes, and drained lentils in a large bowl. Flake the tuna with a fork and stir it into the onion, tomato, and lentil mixture. Stir in the chopped fresh cilantro and mix well. Pour the dressing over the lentil and tuna salad and season with pepper to taste. Serve immediately. # tuna & two-bean salad serves 4 ingredients 7 oz/200 g green beans 14 oz/400 g canned small white beans, such as cannellini, rinsed and drained 4 scallions, finely chopped 2 fresh tuna steaks, about 8 oz/225 g each and ¾ inch/2 cm thick olive oil, for brushing salt and pepper 9 oz/250 g cherry tomatoes, halved lettuce leaves fresh mint and parsley sprigs, to garnish for the dressing handful of fresh mint leaves, shredded handful of fresh parsley leaves, chopped 1 garlic clove, crushed 4 tbsp extra-virgin olive oil 1 tbsp red wine vinegar salt and pepper First, make the dressing. Put the mint leaves, parsley leaves, garlic, olive oil, and vinegar into a screw-top jar, add salt and pepper to taste, and shake until blended. Pour into a large bowl and set aside. Bring a pan of lightly salted water to a boil. Add the green beans and cook for 3 minutes. Add the white beans and cook for another 4 minutes until the green beans are tender-crisp and the white beans are heated through. Drain well and add to the bowl with the dressing and scallions. 
Toss together. To cook the tuna, heat a stovetop ridged grill pan over high heat. Lightly brush the tuna steaks with oil, then season to taste with salt and pepper. Cook the steaks for 2 minutes, then turn over and cook on the other side for an additional 2 minutes for rare or up to 4 minutes for well done. Remove the tuna from the grill pan and leave to rest for 2 minutes, or alternatively leave until completely cool. When ready to serve, add the tomatoes to the bean mixture and toss lightly. Line a serving platter with lettuce leaves and pile on the bean salad. Place the tuna over the top. Serve warm or at room temperature, garnished with the herbs. # tuna & fresh vegetable salad serves 4 ingredients 12 cherry tomatoes, halved 1½ cups whole green beans, cut into 1-inch/2.5-cm pieces 8 oz/225 g zucchini, sliced thinly 3¼ cups thinly sliced white mushrooms salad greens 12 oz/350 g canned tuna in brine, drained and flaked fresh parsley, to garnish for the dressing 4 tbsp mayonnaise 4 tbsp plain yogurt 2 tbsp white wine vinegar salt and pepper To make the dressing, put the mayonnaise, yogurt, vinegar, salt, and pepper in a screw-top jar and shake together until the ingredients are well blended. Put the tomatoes, beans, zucchini, and mushrooms in a bowl. Pour over the dressing and marinate for about 1 hour. Arrange the salad leaves on a serving dish. Add the vegetables and then the tuna, and garnish with parsley. # shrimp & mango salad serves 4 ingredients 2 mangoes 2 cups peeled, cooked shrimp salad greens, to serve 4 whole cooked shrimp, to garnish for the dressing juice from the mangoes 6 tbsp plain yogurt 2 tbsp mayonnaise 1 tbsp lemon juice salt and pepper Cutting close to the pit, cut a large slice from one side of each mango, then cut another slice from the opposite side. Without breaking the skin, cut the flesh in the slices into squares, then push the skin inside out to expose the cubes, and cut them away from the skin. 
Use a sharp knife to peel the remaining center section and cut the flesh away from the pit into cubes. Reserve any juice in a bowl and put the mango flesh in a separate bowl. Add the shrimp to the mango flesh. Add the yogurt, mayonnaise, lemon juice, salt, and pepper to the juice and blend together. Arrange the salad greens on a serving dish and add the mango flesh and shrimp. Pour the dressing over them and serve garnished with the whole shrimp. # salmon & avocado salad serves 4 ingredients 1 lb/450 g new potatoes 4 salmon steaks, about 4 oz/115 g each 1 avocado juice of ½ lemon 1¼ cups baby spinach leaves 4½ oz/125 g mixed small salad greens, including watercress 12 cherry tomatoes, halved scant ½ cup chopped walnuts for the dressing 3 tbsp unsweetened clear apple juice 1 tsp balsamic vinegar freshly ground black pepper Cut the new potatoes into bite-size pieces, put into a pan, and cover with cold water. Bring to a boil, then reduce the heat, cover, and let simmer for 10-15 minutes, or until just tender. Drain and keep warm. Meanwhile, preheat the broiler to medium. Cook the salmon steaks under the preheated broiler for 10-15 minutes, depending on the thickness of the steaks, turning halfway through cooking. Remove from the broiler and keep warm. While the potatoes and salmon are cooking, cut the avocado in half, remove and discard the pit, and peel the flesh. Cut the avocado flesh into slices and coat in the lemon juice to prevent it discoloring. Toss the spinach leaves and mixed salad greens together in a large serving bowl until combined. Arrange 6 cherry tomato halves on each plate of salad. Remove and discard the skin and any bones from the salmon. Flake the salmon and divide among the plates along with the potatoes. Sprinkle the walnuts over the salads. To make the dressing, mix the apple juice and vinegar together in a small bowl or pitcher and season well with pepper. Drizzle over the salads and serve at once. 
# coconut shrimp with cucumber salad serves 4 ingredients 1 cup brown basmati rice ½ tsp coriander seeds 2 egg whites, lightly beaten generous ¾ cup dry unsweetened coconut 24 raw jumbo shrimp, shelled ½ cucumber 4 scallions, thinly sliced lengthwise 1 tsp sesame oil 1 tbsp finely chopped fresh cilantro Bring a large pan of water to a boil, add the rice, and cook for 25 minutes, or until tender. Drain and keep in a strainer covered with a clean dish towel to absorb the steam. Meanwhile, soak 8 wooden skewers in cold water for 30 minutes, then drain. Crush the coriander seeds in a mortar with a pestle. Heat a nonstick skillet over medium heat, add the crushed coriander seeds, and cook, turning, until they start to color. Tip onto a plate and set aside. Put the egg whites into a shallow bowl and the coconut into a separate bowl. Roll each shrimp first in the egg whites, then in the coconut. Thread onto a skewer. Repeat so that each skewer is threaded with 3 coated shrimp. Preheat the broiler to high. Using a potato peeler, peel long strips from the cucumber to create ribbons, put into a strainer to drain, then toss with the scallions and oil in a bowl, and set aside. Cook the shrimp under the preheated broiler for 3-4 minutes on each side, or until slightly browned. Meanwhile, mix the rice with the toasted coriander seeds and fresh cilantro and divide this and the cucumber salad among bowls. Serve with the hot shrimp skewers. 
# tuna & avocado salad serves 4 ingredients 2 avocados, pitted, peeled, and cubed 9 oz/250 g cherry tomatoes, halved 2 red bell peppers, seeded and chopped 1 bunch fresh flat-leaf parsley, chopped 2 garlic cloves, crushed 1 fresh red chile, seeded and finely chopped juice of ½ lemon 6 tbsp olive oil pepper 3 tbsp sesame seeds 4 fresh tuna steaks, about 5½ oz/150 g each 8 cooked new potatoes, cubed arugula leaves and crusty bread, to serve Toss the avocados, tomatoes, red bell peppers, parsley, garlic, chile, lemon juice, and 2 tablespoons of the oil together in a large bowl. Season to taste with pepper, cover, and let chill in the refrigerator for 30 minutes. Lightly crush the sesame seeds in a mortar with a pestle. Tip the crushed seeds onto a plate and spread out. Press each tuna steak in turn into the crushed seeds to coat on both sides. Heat 2 tablespoons of the remaining oil in a skillet, add the potatoes, and cook, stirring frequently, for 5-8 minutes, or until crisp and brown. Remove from the skillet and drain on paper towels. Wipe out the skillet, add the remaining oil, and heat over high heat until very hot. Add the tuna steaks and cook for 3-4 minutes on each side. To serve, divide the avocado salad among 4 serving plates. Top each with a tuna steak, sprinkle over the potatoes and arugula leaves and serve with crusty bread. # tomato, salmon & shrimp salad serves 4 ingredients 4 oz/115 g cherry or baby plum tomatoes several lettuce leaves 4 ripe tomatoes, coarsely chopped 3½ oz/100 g smoked salmon 7 oz/200 g large cooked shrimp, thawed if frozen for the dressing 1 tbsp Dijon mustard 2 tsp superfine sugar 2 tsp red wine vinegar 2 tbsp medium olive oil few fresh dill sprigs, plus extra to garnish pepper Halve most of the cherry tomatoes. Place the lettuce leaves round the edge of a shallow bowl and add all the tomatoes and cherry tomatoes. Using scissors, snip the smoked salmon into strips and sprinkle over the tomatoes, then add the shrimp. 
Mix the mustard, sugar, vinegar, and oil together in a small bowl, then tear most of the dill sprigs into it. Mix well and pour over the salad. Toss well to coat the salad with the dressing. Snip the remaining dill over the top and season to taste with pepper. # lobster salad serves 2 ingredients 2 raw lobster tails radicchio leaves fresh dill sprigs, to garnish for the mayonnaise 1 large lemon 1 large egg yolk ½ tsp Dijon mustard ⅔ cup olive oil salt and pepper 1 tbsp chopped fresh dill To make the lemon-dill mayonnaise, finely grate half the lemon rind and squeeze the juice. Beat the egg yolk in a small bowl, then beat in the mustard and 1 teaspoon of the lemon juice. Using a balloon whisk or electric mixer, beat the oil into the egg yolk mixture, drop by drop, until a thick mayonnaise forms. Stir in the lemon rind and 1 tablespoon of the remaining lemon juice. Season the mayonnaise to taste with salt and pepper and add more lemon juice if desired. Stir in the dill, cover, and let chill in the refrigerator until required. Bring a large pan of lightly salted water to a boil. Add the lobster tails, return to a boil, and cook for 6 minutes, or until the flesh is opaque and the shells are red. Drain at once and set aside to cool. Remove the lobster flesh from the shells and cut into bite-size pieces. Arrange the radicchio leaves on individual plates and top with the lobster flesh. Place a spoonful of the lemon-dill mayonnaise on the side. Garnish with dill sprigs and serve. 
# russian salad serves 4 ingredients 4 oz/115 g new potatoes generous 1 cup frozen or shelled fresh fava beans 4 oz/115 g baby carrots 4 oz/115 g baby corn 4 oz/115 g baby turnips 4 oz/115 g white mushrooms, cut into thin sticks 12 oz/350 g cooked shelled shrimp, deveined ½ cup mayonnaise 1 tbsp lemon juice 2 tbsp bottled capers, drained and rinsed salt and pepper 2 tbsp extra-virgin olive oil 2 hard-cooked eggs, shelled and halved 4 canned anchovy fillets, drained and halved paprika, to garnish Cook the new potatoes, fava beans, carrots, corn, and turnips simultaneously. Cook the potatoes in a large, heavy-bottom pan of lightly salted boiling water for 20 minutes. Cook the fava beans in a small pan of lightly salted water for 3 minutes, then drain, refresh under cold running water and set aside until required. Cook the carrots, corn, and turnips in a large, heavy-bottom pan of lightly salted boiling water for 6 minutes. Mix the mushrooms and shrimp together in a bowl. Mix the mayonnaise and lemon juice together in a separate bowl, then fold half the mixture into the shrimp mixture. Fold in the capers and season to taste with salt and pepper. Drain the mixed vegetables, refresh under cold running water and tip into a bowl. When the potatoes are cooked, drain, refresh under cold running water and tip into the bowl. Pop the fava beans out of their skins by pinching them between your index finger and thumb and add to the bowl. Add the olive oil and toss to coat. Divide the potatoes and vegetables between serving plates and top with the shrimp mixture. Place a hard-cooked egg half in the center of each and garnish with the halved anchovies. Dust the eggs with paprika and serve with the remaining mayonnaise mixture. 
# seafood salad serves 4 ingredients 9 oz/250 g live mussels 12 oz/350 g live scallops, shucked and cleaned 9 oz/250 g prepared squid, cut into rings and tentacles 1 red onion, halved and finely sliced chopped parsley, to serve lemon wedges, to serve for the dressing 4 tbsp extra-virgin olive oil 2 tbsp white wine vinegar 1 tbsp lemon juice 1 garlic clove, finely chopped 1 tbsp chopped fresh flat-leaf parsley salt and pepper Clean the mussels by scrubbing or scraping the shells and pulling out any beards that are attached to them. Discard any with broken shells or any that refuse to close when tapped. Put the mussels in a colander and rinse well under cold running water. Put them in a large pan with a little water and cook, covered, over a high heat, shaking the pan occasionally, for 3-4 minutes, or until the mussels have opened. Discard any mussels that remain closed. Strain the mussels, reserving the cooking liquid. Refresh the mussels under cold running water, drain, and set aside. Return the reserved cooking liquid to the pan and bring to a boil, add the scallops and squid, and cook for 3 minutes. Remove from the heat and drain. Refresh under cold running water and drain again. Remove the mussels from their shells. Put them in a bowl with the scallops and squid and let cool. Cover with plastic wrap and let chill in the refrigerator for 45 minutes. Divide the seafood among 4 serving plates and top with the onion. Combine all the dressing ingredients in a small bowl, then drizzle over the salad. Garnish with chopped parsley and lemon wedges to serve. 
# cantaloupe & crab salad serves 4 ingredients 12 oz/350 g fresh crabmeat 5 tbsp mayonnaise 2 fl oz/50 ml plain yogurt 4 tsp extra-virgin olive oil 4 tsp lime juice 1 scallion, finely chopped 4 tsp finely chopped fresh parsley pinch of cayenne pepper 1 cantaloupe melon 2 radicchio heads, separated into leaves fresh parsley sprigs, to garnish crusty bread, to serve Place the crabmeat in a large bowl and pick over it very carefully to remove any remaining shell or cartilage, but try not to break up the meat. Put the mayonnaise, yogurt, olive oil, lime juice, scallion, chopped fresh parsley, and cayenne pepper into a separate bowl and mix until thoroughly blended. Fold in the crabmeat. Cut the melon in half and remove and discard the seeds. Slice into wedges, then cut off the rind with a sharp knife. Arrange the melon slices and radicchio leaves on 4 large serving plates, then arrange the crabmeat mixture on top. Garnish with a few sprigs of fresh parsley and serve with fresh crusty bread. # shrimp & rice salad serves 4 ingredients scant 1 cup mixed long-grain and wild rice salt and pepper 12 oz/350 g cooked shelled shrimp 1 mango, peeled, pitted, and diced 4 scallions, sliced ¼ cup slivered almonds 1 tbsp finely chopped fresh mint for the dressing 1 tbsp extra-virgin olive oil 2 tsp lime juice 1 garlic clove, crushed 1 tsp honey salt and pepper Cook the rice in a large pan of lightly salted boiling water for 35 minutes, or until tender. Drain and transfer to a large bowl, then add the shrimp. To make the dressing, mix all the ingredients together in a large measuring cup, seasoning to taste with salt and pepper, and whisk well until thoroughly blended. Pour the dressing over the rice and shrimp mixture and let cool. Add the mango, scallions, almonds, and mint to the salad and season to taste with pepper. Stir thoroughly, transfer to a large serving dish and serve. 
# anchovy & olive salad serves 4 ingredients large handful of mixed lettuce leaves 12 cherry tomatoes, halved 20 black olives, pitted and halved 6 canned anchovy fillets, drained and sliced 1 tbsp chopped fresh oregano wedges of lemon, to garnish crusty bread rolls, to serve for the dressing 4 tbsp extra-virgin olive oil 1 tbsp white wine vinegar 1 tbsp lemon juice 1 tbsp chopped fresh flat-leaf parsley salt and pepper Prepare all the salad ingredients as listed. To make the dressing, put all the ingredients into a small bowl, seasoning with salt and pepper to taste, and stir together well. To assemble the salad, arrange the lettuce leaves in a serving dish. Scatter the cherry tomatoes on top, followed by the olives, anchovies, and oregano. Drizzle over the dressing. Transfer to individual plates, garnish with lemon wedges and serve with crusty bread rolls. # smoked salmon & wild arugula salad serves 4 ingredients 1¾ oz/50 g wild arugula leaves 1 tbsp chopped fresh flat-leaf parsley 2 scallions, finely diced 2 large avocados 1 tbsp lemon juice 9 oz/250 g smoked salmon for the dressing ⅔ cup mayonnaise 2 tbsp lime juice finely grated rind of 1 lime 1 tbsp chopped fresh flat-leaf parsley, plus extra sprigs to garnish Shred the arugula and arrange in 4 individual glass bowls. Sprinkle over the chopped parsley and scallions. Halve, peel, and pit the avocados and cut into thin slices or small chunks. Brush with the lemon juice to prevent discoloration, then divide among the salad bowls. Mix together gently. Cut the smoked salmon into strips and sprinkle over the top. Put the mayonnaise in a bowl, then add the lime juice, lime rind, and chopped parsley. Mix together well. Spoon some of the mayonnaise dressing on top of each salad and garnish with parsley sprigs. 
# tuna & herbed fusilli salad serves 4 ingredients 7 oz/200 g dried fusilli 1 red bell pepper, seeded and quartered 1 red onion, sliced 4 tomatoes, sliced 7 oz/200 g canned tuna in brine, drained and flaked for the dressing 6 tbsp basil-flavored oil or extra-virgin olive oil 3 tbsp white wine vinegar 1 tbsp lime juice 1 tsp mustard 1 tsp honey 4 tbsp chopped fresh basil, plus extra sprigs to garnish Bring a large pan of lightly salted water to a boil. Add the pasta, return to a boil, and cook for 8-10 minutes until tender but still firm to the bite. Meanwhile, put the bell pepper quarters under a preheated hot broiler and cook for 10-12 minutes until the skins begin to blacken. Transfer to a plastic bag, seal, and set aside. Remove the pasta from the heat, drain, and set aside to cool. Remove the bell pepper quarters from the bag and peel off the skins. Slice the bell pepper into strips. To make the dressing, put all the dressing ingredients in a large bowl and stir together well. Add the pasta, bell pepper strips, onion, tomatoes, and tuna. Toss together gently, then divide among serving bowls. Garnish with basil sprigs and serve. # seafood & spinach salad serves 4 ingredients 1 lb 2 oz/500 g live mussels, soaked and cleaned 3½ oz/100 g shrimp, peeled and deveined 12 oz/350 g scallops 1 lb 2 oz/500 g baby spinach leaves 3 scallions, trimmed and sliced for the dressing 4 tbsp extra-virgin olive oil 2 tbsp white wine vinegar 1 tbsp lemon juice 1 tsp finely grated lemon zest 1 garlic clove, chopped 1 tbsp grated fresh gingerroot 1 small red chile, seeded and diced 1 tbsp chopped fresh cilantro salt and pepper Put the mussels into a large pan with a little water, bring to a boil, and cook over high heat for 4 minutes. Drain and reserve the liquid. Discard any mussels that remain closed. Return the reserved liquid to the pan and bring to a boil. Add the shrimp and scallops and cook for 3 minutes. Drain. Remove the mussels from their shells. 
Rinse the mussels, shrimp, and scallops in cold water, drain, and put them in a large bowl. Cool, cover with plastic wrap, and chill for 45 minutes. Meanwhile, rinse the baby spinach leaves and transfer them to a pan with 4 tablespoons of water. Cook over high heat for 1 minute, transfer to a strainer, refresh under cold running water, and drain. To make the dressing, put all the ingredients into a small bowl and mix. Arrange the spinach on serving dishes, then scatter over half of the scallions. Top with the mussels, shrimp and scallops, then scatter over the remaining scallions. Drizzle over the dressing and serve. # neapolitan seafood salad serves 4 ingredients 1 lb/450 g prepared squid, cut into strips 1 lb 10 oz/750 g cooked mussels 1 lb 10 oz/750 g cooked cockles in brine ⅝ cup white wine 1½ cups olive oil 2 cups dried campanelle or other small pasta shapes juice of 1 lemon 1 bunch chives, snipped 1 bunch fresh parsley, finely chopped salt and pepper mixed salad greens 4 large tomatoes, to garnish Put all of the seafood into a large bowl. Pour over the wine and half the olive oil, then set aside for 6 hours. Put the seafood mixture into a pan and simmer over a low heat for 10 minutes. Set aside to cool. Bring a large pan of lightly salted water to a boil. Add the pasta and 1 tbsp of the remaining olive oil and cook until tender, but still firm to the bite. Drain thoroughly and refresh in cold water. Strain off about half of the cooking liquid from the seafood and discard the rest. Mix in the lemon juice, chives, parsley, and the remaining olive oil. Season to taste with salt and pepper. Drain the pasta and add to the seafood. Shred the salad greens and arrange them at the base of a salad bowl. Cut the tomatoes into quarters. Spoon the seafood salad into the bowl, garnish with the tomatoes and serve. 
# mussel salad serves 4 ingredients 2 red bell peppers, halved and seeded 12 oz/350 g cooked, shucked mussels, thawed if frozen 1 head radicchio ¾ cup arugula 8 cooked green-lipped mussels in their shells for the dressing 1 tbsp olive oil 1 tbsp lemon juice 1 tsp finely grated lemon peel 2 tsp honey 1 tsp French mustard 1 tbsp snipped fresh chives salt and pepper Put the bell peppers, skin-side up, on a broiler rack and cook under a preheated broiler for 8-10 minutes, or until the skin is charred and blistered and the flesh is soft. Remove from the broiler with tongs, put into a bowl, and cover with plastic wrap. Set aside for 10 minutes, or until cool enough to handle, then peel off the skins. Slice the bell pepper flesh into thin strips and put into a bowl. Gently stir in the shucked mussels. To make the dressing, whisk the oil, lemon juice and peel, honey, mustard, and chives together until well blended. Season to taste with salt and pepper. Add the bell pepper and mussel mixture and toss until coated. Remove the central core of the radicchio and shred the leaves. Put into a serving bowl with the arugula and toss together. Pile the mussel mixture into the center of the leaves and arrange the green-lipped mussels in their shells around the edge of the bowl. # sweet & sour fish salad serves 4 ingredients 8 oz/225 g trout fillets 8 oz/225 g white fish fillets (such as haddock or cod) 1¼ cups water 1 stem lemongrass 2 lime leaves 1 large red chile 1 bunch scallions, trimmed and shredded 4 oz/115 g fresh pineapple flesh, diced 1 small red bell pepper, seeded and diced 1 bunch watercress, washed and trimmed fresh snipped chives, to garnish for the dressing 1 tbsp sunflower oil 1 tbsp rice wine vinegar pinch of chili powder 1 tsp clear honey salt and pepper Rinse the fish, place in a skillet and pour over the water. Bend the lemongrass in half to bruise it and add to the skillet with the lime leaves. Prick the chile with a fork and add to the pan. 
Bring to a boil and simmer for 7-8 minutes. Let cool. Drain the fish fillets thoroughly, then flake the flesh away from the skin and place it in a bowl. Gently stir in the scallions, pineapple and bell pepper. Arrange the washed watercress on 4 serving plates and spoon the cooked fish mixture on top. To make the dressing, mix all the ingredients together, seasoning well. Spoon it over the fish and serve the salad garnished with chives. # chilled shrimp with pineapple & papaya salsa serves 8 ingredients 4 tbsp sunflower oil 1 fresh red chile, seeded and chopped 1 garlic clove, crushed 48 shrimp chopped fresh parsley, to garnish for the pineapple & papaya salsa 1 large papaya, halved, seeded, peeled and cut into ¼-inch/5-mm dice 1 small pineapple, halved, cored, peeled and cut into ¼-inch/5-mm dice 2 scallions, very finely chopped 1 fresh red chile, or to taste, seeded and finely chopped 1 garlic clove, very finely chopped 2½ tsp lemon juice ½ tsp ground cumin ¼ tsp salt black pepper To make the salsa, put the papaya in a large bowl with the pineapple, scallions, chile, garlic, lemon juice, cumin, salt and pepper. Adjust the lemon juice, cumin, salt or pepper to taste, if necessary. Cover and chill until required, ideally at least 2 hours. Heat a wok over a high heat. Add the oil and swirl around, then add the chile and garlic and stir-fry for 20 seconds. Add the shrimp and stir-fry for 2-3 minutes until the shrimp are cooked through, become pink and curl. Tip the shrimp, garlic and any oil left in the wok into a heatproof bowl and leave the shrimp to cool and marinate in the chile oil. When the shrimp are completely cool, cover the bowl and chill for at least 2 hours. When ready to serve, give the salsa a stir and adjust the seasoning, if necessary. Arrange a mound of salsa on each of 8 plates. Remove the shrimp from the marinade and divide among plates. Sprinkle with parsley and serve. 
# seared swordfish with fresh tomato salsa serves 4 ingredients 4 boneless swordfish steaks, about 5 oz/140 g each salt and pepper stick of butter 1 tbsp olive oil slices of crusty bread, to serve for the fresh tomato & olive salsa 4 tbsp extra-virgin olive oil 1 tbsp red-wine vinegar 1 lb 5 oz/600 g ripe, juicy beefsteak tomatoes, cored, seeded and finely chopped 5 oz/140 g large black olives, pitted and cut in half 1 shallot, finely chopped or thinly sliced 1 tbsp capers in brine, rinsed and dried salt and pepper 3 tbsp finely shredded fresh basil leaves To make the fresh tomato & olive salsa, whisk the olive oil and vinegar together in a bowl large enough to hold all the ingredients. Gently stir in the tomatoes, olives, shallot, and capers with salt and pepper to taste. Cover and chill until required. Season the swordfish steaks on both sides with salt. Melt the butter with the oil in a skillet large enough to hold the swordfish steaks in a single layer. (If you don't have a large enough pan, cook the steaks in 2 batches.) Add the swordfish steaks to the pan in a single layer and fry for 5 minutes, or until golden brown, then carefully turn the fish over and continue frying for about 3 minutes longer until the fish is cooked through and flakes easily. Remove the fish from the pan and set aside to cool completely. Cover and chill for at least 2 hours. When ready to serve, remove the fish from the refrigerator at least 15 minutes in advance. Stir the basil into the salsa, then adjust the seasoning if necessary. Break the swordfish into large flakes and gently stir into the salsa, taking care not to break up the fish too much. Arrange the fish salad in 4 bowls, spooning over any of the leftover juices and serve with slices of crusty bread. 
# shrimp cocktail salad serves 4 ingredients 2 tsp salt ½ lemon, sliced 32 large shelled and deveined shrimp, defrosted if frozen 6 oz/175 g tomato ketchup 1½ tbsp grated horseradish 3 celery sticks, cut into ¼-inch/5-mm slices finely grated zest and juice of 1 lemon salt and pepper iceberg lettuce leaves, shredded, to serve lemon wedges, to garnish Bring a large pan of water to a rolling boil. Stir in the salt and lemon slices, then reduce the heat to low. Add the shrimp and leave to simmer for about 3 minutes until they are cooked through, turn pink and curl. Drain the shrimp into a large colander and immediately refresh under cold running water to stop the cooking and cool the shrimp; set aside. Put the ketchup, horseradish, celery and lemon zest in a bowl and stir together. Stir in 1 tablespoon lemon juice, then add more juice and salt and pepper to taste. Stir in the shrimp, then cover and chill for at least 2 hours. When ready to serve, stir the shrimp salad and adjust the seasoning, if necessary. Divide the shredded lettuce among 4 glass bowls and spoon the salad on top. Serve at once while the salad is still chilled, with lemon wedges for squeezing over. # celeriac remoulade with crab serves 4 ingredients 1 lb/450 g celery root, peeled and grated juice of 1 lemon 9 oz/250 g fresh white crabmeat, picked over chopped fresh dill or parsley, to garnish for the remoulade dressing 5 fl oz/150 ml mayonnaise 1 tbsp Dijon mustard 1½ tsp white wine vinegar 2 tbsp capers in brine, well rinsed salt and white pepper To make the dressing, put the mayonnaise in a bowl. Beat in the mustard, vinegar, and capers, with salt and white pepper to taste; the mixture should be piquant with a strong mustard flavor. Cover and chill until required. Bring a large pan of salted water to a full, rolling boil. Meanwhile, peel the celery root and cut it into quarters, then grate it either in a food processor or on the coarse side of a box grater. 
Add the grated celery root and lemon juice to the water and blanch for 1½-2 minutes until it is just slightly tender. Rinse the celery root well, then put it under cold running water to stop the cooking. Use your hands to squeeze out the excess moisture, then pat the celery root dry with paper towels or a clean kitchen towel. Stir the celery root into the dressing, along with the crabmeat. Taste and adjust the seasoning, if necessary. Cover and chill for at least 30 minutes. When ready to serve, spoon into bowls and sprinkle with dill or parsley. # health-boosting a collection of energizing salads # wild rice salad with cucumber & orange serves 4 ingredients 1⅓ cups wild rice 3½ cups water 1 each red, yellow, and orange bell peppers, skinned, seeded, and thinly sliced ½ cucumber, halved lengthwise and sliced 1 orange, peeled, pith removed, and cubed 3 ripe tomatoes, cut into chunks 1 red onion, very finely chopped generous handful of chopped flat-leaf parsley for the dressing 1 clove garlic, crushed 1 tbsp balsamic vinegar 2 tbsp extra-virgin olive oil salt and pepper Put the wild rice and water into a large pan and bring to a boil. Stir, then cover and simmer for 40 minutes, or until the rice is al dente (firm to the bite). Uncover the rice for the last few minutes of cooking to let any excess water evaporate. To make the dressing, put the crushed garlic, vinegar, olive oil, and seasoning into a screw-top jar and shake vigorously. Add extra vinegar, oil, or seasoning as required. Drain the rice and turn into a large bowl. Pour over the dressing and mix in. Then mix in the chopped bell peppers, cucumber, orange, tomatoes, red onion, and flat-leaf parsley and serve. # red bell pepper & radicchio salad serves 4 ingredients 2 red bell peppers 1 head radicchio, separated into leaves 4 cooked whole beets, cut into matchsticks 12 radishes, sliced 4 scallions, finely chopped 4 tbsp vinaigrette crusty bread, to serve Core and seed the bell peppers and cut into rounds. 
Arrange the radicchio leaves in a salad bowl. Add the bell pepper, beets, radishes, and scallions. Drizzle with the vinaigrette and serve with crusty bread. # spring clean salad serves 4 ingredients 2 dessert apples, cored and diced juice of 1 lemon large chunk of watermelon, seeded and cubed 1 head Belgian endive, sliced into rounds 4 sticks celery with leaves, coarsely chopped 1 tbsp walnut oil Core and dice the apples. Place in a bowl and pour over the lemon juice. Mix well to prevent discoloration. Add the rest of the fruit and vegetables to the bowl and mix gently. Pour in the walnut oil, mix again and serve. # chickpea & tomato salad serves 4 ingredients 6 oz/175 g dried chickpeas or 1½ cups canned, drained and rinsed 2-3 ripe tomatoes, coarsely chopped 1 red onion, thinly sliced handful of fresh basil leaves, torn 1 romaine lettuce, torn crusty bread, to serve for the dressing 1 green chile, seeded and finely chopped 1 garlic clove, crushed juice and zest of 2 lemons 2 tbsp olive oil 1 tbsp water black pepper If using dried chickpeas, soak overnight, then boil for 30 minutes, or until soft. Let cool. Put the chile, garlic, lemon juice, olive oil, water, and black pepper in a screw-top jar and shake vigorously. Taste and add more lemon juice or oil if necessary. Add the tomatoes, onion, and basil to the chickpeas and mix gently. Pour over the dressing and mix again. Arrange on a bed of lettuce and serve with crusty bread. # fennel & orange salad serves 4 ingredients 2 oranges, peeled and sliced 1 bulb Florence fennel, thinly sliced 1 red onion, peeled and sliced into thin rings for the dressing juice of 1 orange 2 tbsp balsamic vinegar Arrange the orange slices in the bottom of a shallow dish. Place a layer of fennel on top and then add a layer of onion. Mix the orange juice with the vinegar and drizzle over the salad. 
# warm new potato & lentil salad serves 4 ingredients ⅜ cup puy lentils 1 lb/450 g new potatoes 6 scallions 1 tbsp olive oil 2 tbsp balsamic vinegar salt and pepper Bring a large pan of water to a boil. Rinse the lentils, then cook for 20 minutes, or until tender. Drain and rinse, then put to one side. Meanwhile, steam or boil the potatoes until they are soft right through. Drain and halve. Trim the base from the scallions and cut them in long strips. Put the lentils, potatoes, and scallions into a serving dish and toss with the olive oil and vinegar. Season with plenty of black pepper and a little salt if required. # bean sprout, apricot, & almond salad serves 4 ingredients 1⅔ cups bean sprouts, washed and dried small bunch seedless black and green grapes, halved 12 unsulfured dried apricots, halved ¼ cup blanched almonds, halved black pepper for the dressing 1 tbsp walnut oil 1 tsp sesame oil 2 tsp balsamic vinegar Place the bean sprouts in the bottom of a large salad bowl and sprinkle the grapes and apricots on top. Place the oils and vinegar in a screw-top jar and shake vigorously to mix. Pour over the salad. Scatter over the almonds and season with freshly ground black pepper. # asparagus & tomato salad serves 4 ingredients 8 oz/225 g asparagus spears 1 lamb's lettuce, washed and torn 1 handful arugula or mizuna leaves 1 lb/450 g ripe tomatoes, sliced 12 black olives, pitted and chopped 1 tbsp toasted pine nuts for the dressing 1 tsp lemon oil 1 tbsp olive oil 1 tsp whole-grain mustard 2 tbsp balsamic vinegar salt and pepper Steam the asparagus spears for 8 minutes, or until tender. Rinse under cold running water to prevent them cooking any further, then cut into 2-inch/5-cm pieces. Arrange the lettuce and arugula or mizuna leaves around a salad platter to form the base of the salad. Place the sliced tomatoes in a circle on top and the asparagus in the center. Sprinkle the black olives and pine nuts over the top. 
Put the lemon oil, olive oil, mustard, and vinegar in a screw-top jar and season to taste with sea salt and black pepper. Shake vigorously and drizzle over the salad. # avocado salad serves 4 ingredients large handful of radicchio large handful of arugula 1 small galia melon 2 ripe avocados 1 tbsp lemon juice 7 oz/200 g fontina cheese, cut into bite-size pieces for the dressing 5 tbsp lemon-flavored or extra-virgin olive oil 1 tbsp white wine vinegar 1 tbsp lemon juice 1 tbsp chopped fresh parsley To make the dressing, mix together the oil, vinegar, lemon juice, and parsley in a small bowl. Arrange the radicchio and arugula on serving plates. Cut the melon in half, then seed it, and cut the flesh away from the skin. Discard the skin. Slice the melon flesh and arrange it over the salad greens. Cut the avocados in half and remove and discard the pits and skin. Slice the flesh and brush with lemon juice. Arrange the slices over the melon, then scatter over the cheese. Drizzle over the dressing and serve. # herby potato salad serves 4-6 ingredients 1 lb 2 oz/500 g new potatoes salt and pepper 16 vine-ripened cherry tomatoes, halved generous ⅜ cup black olives, pitted and coarsely chopped 4 scallions, finely sliced 2 tbsp chopped fresh mint 2 tbsp chopped fresh parsley 2 tbsp chopped fresh cilantro juice of 1 lemon 3 tbsp extra-virgin olive oil Cook the potatoes in a pan of lightly salted boiling water for 15 minutes, or until tender. Drain, then let cool slightly before peeling off the skins. Cut into halves or fourths, depending on the size of the potato. Then combine with the tomatoes, olives, scallions, and herbs in a salad bowl. Mix the lemon juice and oil together in a small bowl or pitcher and pour over the potato salad. Season to taste with salt and pepper before serving. 
# tabbouleh serves 4 ingredients generous 1 cup quinoa 2½ cups water 10 vine-ripened cherry tomatoes, seeded and halved 3-inch/7.5-cm piece cucumber, diced 3 scallions, finely chopped juice of ½ lemon 2 tbsp extra-virgin olive oil 4 tbsp chopped fresh mint 4 tbsp chopped fresh cilantro 4 tbsp chopped fresh parsley salt and pepper Put the quinoa into a medium-size pan and cover with the water. Bring to a boil, then reduce the heat, cover, and let simmer over low heat for 15 minutes. Drain if necessary. Let the quinoa cool slightly before combining with the remaining ingredients in a salad bowl. Adjust the seasoning, if necessary, before serving. # buckwheat noodle salad with smoked tofu serves 2 ingredients 7 oz/200 g buckwheat noodles 9 oz/250 g firm smoked tofu (drained weight) 7 oz/200 g white cabbage, finely shredded 9 oz/250 g carrots, finely shredded 3 scallions, diagonally sliced 1 fresh red chile, seeded and finely sliced into circles 2 tbsp sesame seeds, lightly toasted for the dressing 1 tsp grated fresh gingerroot 1 garlic clove, crushed 6 oz/175 g silken tofu (drained weight) 4 tsp tamari (wheat-free soy sauce) 2 tbsp sesame oil 4 tbsp hot water salt Cook the noodles in a large pan of lightly salted boiling water according to the package instructions. Drain and refresh under cold running water. To make the dressing, blend the gingerroot, garlic, silken tofu, tamari, oil, and water together in a small bowl until smooth and creamy. Season to taste with salt. Place the smoked tofu in a steamer. Steam for 5 minutes, then cut into thin slices. Meanwhile, put the cabbage, carrots, scallions, and chile into a bowl and toss to mix. To serve, arrange the noodles on serving plates and top with the carrot salad and slices of tofu. Spoon over the dressing and sprinkle with sesame seeds. 
# zucchini & mint salad serves 4 ingredients 2 zucchini, cut into thin sticks 3½ oz/100 g green beans, cut into thirds 1 green bell pepper, seeded and cut into strips 2 celery stalks, sliced 1 bunch of watercress for the dressing scant 1 cup plain yogurt 1 garlic clove, crushed 2 tbsp chopped fresh mint pepper Cook the thin zucchini sticks and beans in a pan of lightly salted water for 7-8 minutes. Drain, rinse under cold running water, and drain again. Let cool completely. Mix the zucchini and beans with the green bell pepper strips, celery, and watercress in a large serving bowl. To make the dressing, combine the yogurt, garlic, and mint in a small bowl. Season to taste with pepper. Spoon the dressing onto the salad and serve immediately. # tomato, mozzarella & avocado salad serves 4 ingredients 2 ripe beefsteak tomatoes 3½ oz/100 g mozzarella cheese 2 avocados few fresh basil leaves, torn into pieces 20 black olives fresh crusty bread, to serve for the dressing 1 tbsp olive oil 1½ tbsp white wine vinegar 1 tsp coarse grain mustard salt and pepper Using a sharp knife, cut the tomatoes into thick wedges and place in a large serving dish. Drain the mozzarella cheese and coarsely tear into pieces. Cut the avocados in half and remove the pits. Cut the flesh into slices, then arrange the mozzarella cheese and avocado with the tomatoes. Mix the oil, vinegar, and mustard together in a small bowl, add salt and pepper to taste, then drizzle over the salad. Sprinkle the basil and olives over the top and serve at once with fresh crusty bread. 
# roasted vegetable salad serves 4 ingredients 1 onion 1 eggplant, about 8 oz/225 g 1 red bell pepper, seeded 1 orange bell pepper, seeded 1 large zucchini, about 6 oz/175 g 2-4 garlic cloves 2-4 tbsp olive oil salt and pepper 1 tbsp shredded fresh basil freshly shaved Parmesan cheese, to serve fresh crusty bread, to serve for the dressing 1 tbsp balsamic vinegar 2 tbsp extra-virgin olive oil salt and pepper Preheat the oven to 400°F/200°C. Cut all the vegetables into even-size wedges, put into a roasting pan, and sprinkle over the garlic. Pour over 2 tablespoons of the olive oil and turn the vegetables in the oil until well coated. Add a little salt and pepper. Roast in the preheated oven for 40 minutes, or until tender, adding the extra olive oil if becoming too dry. Meanwhile, put the vinegar, extra-virgin olive oil, and salt and pepper to taste into a screw-top jar and shake until blended. Once the vegetables are cooked, remove from the oven, arrange on a serving dish, and pour over the dressing. Sprinkle with the basil and serve with shavings of Parmesan cheese. Serve warm or cold with fresh crusty bread. # three bean salad serves 4-6 ingredients 6 oz/175 g mixed salad greens, such as spinach, arugula, and frisee 1 red onion 3 oz/85 g radishes 6 oz/175 g cherry tomatoes 4 oz/115 g cooked beet 10 oz/280 g canned cannellini beans, drained and rinsed 7 oz/200 g canned red kidney beans, drained and rinsed 10½ oz/300 g canned flageolets, drained and rinsed scant ⅓ cup dried cranberries scant ½ cup roasted cashews 8 oz/225 g feta cheese (drained weight), crumbled for the dressing 4 tbsp extra-virgin olive oil 1 tsp Dijon mustard 2 tbsp lemon juice 1 tbsp chopped fresh cilantro salt and pepper Arrange the salad greens in a salad bowl and set aside. Thinly slice the onion, then cut in half to form half moons and put into a bowl. Thinly slice the radishes, cut the tomatoes in half, and peel the beet, if necessary, and dice. 
Add to the onion with the remaining ingredients, except the nuts and cheese. Put all the ingredients for the dressing into a screw-top jar and shake until blended. Pour over the bean mixture, toss lightly, then spoon on top of the salad greens. Sprinkle over the nuts and cheese and serve at once. # succotash salad serves 4-6 ingredients 1 tbsp apple-cider vinegar 1 tsp wholegrain mustard 1 tsp sugar 3 tbsp garlic-flavored olive oil 1 tbsp sunflower oil 14 oz/400 g canned corn kernels, rinsed and drained 14 oz/400 g string beans, finely chopped 2 peeled red bell peppers from a jar, drained and finely chopped 2 scallions, very finely chopped salt and pepper 2 tbsp chopped fresh parsley, to garnish Beat the vinegar, mustard and sugar together. Gradually whisk in the olive and sunflower oils to form an emulsion. Stir in the corn kernels, string beans, bell peppers and scallions. Add salt and pepper to taste and stir together again. Cover and chill for up to one day until required. When ready to serve, adjust the seasoning, if necessary, and stir in the parsley. # spinach salad with bleu cheese dressing serves 4-6 ingredients 10 oz/300 g bag baby spinach leaves, any thick stems or yellow leaves removed, then well rinsed and dried 4 scallions, chopped 3 oranges, segmented 2 oz/55 g sunflower seeds for the bleu cheese dressing 4 oz/125 g full-flavored bleu cheese, such as Roquefort, crumbled 7 oz/200 g thick plain yogurt 1 tbsp white wine vinegar ½ onion, grated ½ small bunch fresh chives, chopped salt and pepper To make the dressing, put the bleu cheese, yogurt, vinegar and onion in a blender or food processor and blend until smooth. Add the chives and give another quick blitz. Season with salt and pepper to taste. Cover and chill until required. When you are ready to assemble the salad, place the spinach leaves and scallions in a salad bowl and toss with half the dressing. Transfer to a serving bowl and top with the orange segments and a sprinkling of sunflower seeds. 
Pass the remaining dressing separately for spooning over individual portions, if desired. # pear & roquefort salad serves 4 ingredients few leaves of lollo rosso few leaves of radicchio few leaves of mache 2 ripe pears pepper whole fresh chives, to garnish for the dressing 2 oz/55 g Roquefort cheese ⅔ cup plain yogurt 2 tbsp chopped fresh chives pepper Place the cheese in a bowl and mash with a fork. Gradually blend the yogurt into the cheese to make a smooth dressing. Add the chives and season with pepper to taste. Tear the lollo rosso, radicchio, and mache leaves into manageable pieces. Arrange the salad greens on a large serving platter or divide them among individual serving plates. Cut the pears into fourths and remove the cores. Cut the quarters into slices. Arrange the pear slices over the salad leaves. Drizzle the dressing over the pears and garnish with a few whole chives. # green fruit salad serves 4 ingredients 1 honeydew melon 2 green apples 2 kiwi fruit 4 oz/115 g seedless white grapes fresh mint sprigs, to decorate for the syrup dressing 1 orange ⅔ cup white wine ⅔ cup water 4 tbsp honey fresh mint sprigs To make the syrup, pare the rind from the orange using a potato peeler. Put the orange rind in a pan with the white wine, water, and honey. Bring to a boil, then simmer gently for 10 minutes. Remove the syrup from the heat. Add the mint sprigs and set aside to cool. To prepare the fruit, first cut the melon in half and scoop out the seeds. Use a melon baller or a teaspoon to make melon balls. Core and chop the apples. Peel and slice the kiwi fruit. Strain the cooled syrup into a serving bowl, removing and reserving the orange rind, and discarding the mint sprigs. Add the apple, grapes, kiwi fruit, and melon to the serving bowl. Stir through gently to mix. Serve the fruit salad, decorated with sprigs of fresh mint and some of the reserved orange rind. 
# tropical fruit salad serves 4 ingredients 1 papaya 1 mango 1 pineapple 4 oranges, peeled and cut into segments 4½ oz/125 g strawberries, hulled and quartered light or heavy cream, to serve (optional) for the syrup dressing 6 tbsp superfine sugar 1¾ cups water ½ tsp ground allspice grated rind of ½ lemon Put the sugar, water, allspice, and lemon rind into a pan. Bring to a boil, stirring continuously, then continue to boil for 1 minute. Remove from the heat and let cool to room temperature. Transfer to a pitcher or bowl, cover with plastic wrap, and chill in the refrigerator for at least 1 hour. Peel and halve the papaya and remove the seeds. Cut the flesh into small chunks or slices, and put into a large bowl. Cut the mango twice lengthwise, close to the stone. Remove and discard the stone. Peel and cut the flesh into small chunks or slices, and add to the bowl. Cut off the top and bottom of the pineapple and remove the hard skin. Cut the pineapple in half lengthwise, then into quarters, and remove the tough core. Cut the remaining flesh into small pieces and add to the bowl. Add the orange segments and strawberries. Pour over the chilled syrup, cover with plastic wrap, and chill until required. Serve with light or heavy cream, if using. # fig & watermelon salad serves 4 ingredients 1 watermelon, weighing about 3 lb 5 oz/1.5 kg ¾ cup seeded black grapes 4 figs for the syrup dressing 1 lime grated rind and juice of 1 orange 1 tbsp maple syrup 2 tbsp honey Cut the watermelon into quarters and scoop out and discard the seeds. Cut the flesh away from the rind, then chop the flesh into 1-inch/2.5-cm cubes. Place the watermelon cubes in a bowl with the grapes. Cut each fig lengthwise into 8 wedges and add to the bowl. Grate the lime rind and mix it with the orange rind and juice, maple syrup, and honey in a small pan. Bring to a boil over low heat. Pour the mixture over the fruit and stir. Let cool. 
Stir again, cover, and let chill in the refrigerator for at least 1 hour, stirring occasionally. Divide the fruit salad equally among 4 bowls, and serve. # melon & mango salad serves 4 ingredients 1 cantaloupe melon 2 oz/55 g black grapes, halved and seeded 2 oz/55 g green grapes 1 large mango 1 bunch watercress, trimmed iceberg lettuce leaves, shredded 1 passion fruit for the melon dressing ⅔ cup plain yogurt 1 tbsp honey 1 tsp grated fresh gingerroot for the salad greens dressing 2 tbsp olive oil 1 tbsp apple vinegar salt and pepper To make the dressing for the melon, whisk together the yogurt, honey, and gingerroot in a small bowl. Halve the melon, scoop out the seeds with a spoon, and discard. Slice, peel, and dice the flesh. Place in a bowl with the grapes. Slice the mango on each side of its large flat pit. On each mango half, slash the flesh into a criss-cross pattern, down to but not through the skin. Push the skin from underneath to turn the mango halves inside out. Now remove the flesh and add to the melon mixture. Arrange the watercress and lettuce leaves on 4 serving plates. Make the dressing for the salad greens by whisking together the olive oil and vinegar with a little salt and pepper. Drizzle over the salad greens. Divide the melon mixture among the 4 plates and spoon the yogurt dressing over it. Scoop the seeds out of the passion fruit and sprinkle them over the salads. Serve immediately or chill in the refrigerator until required. # papaya salad serves 4 ingredients 1 crisp lettuce ¼ small white cabbage 2 papayas 2 tomatoes 1 oz/25 g roasted peanuts, chopped roughly 4 scallions, trimmed and sliced thinly basil leaves, to garnish for the dressing 4 tbsp olive oil 1 tbsp fish sauce or light soy sauce 2 tbsp lime or lemon juice 1 tbsp dark brown sugar 1 tsp finely chopped fresh red or green chile To make the dressing, whisk together the oil, fish sauce or soy sauce, lime or lemon juice, sugar and chile. 
Set aside, stirring occasionally to dissolve the sugar. Shred the lettuce and white cabbage, then toss together and arrange on a large serving plate. Peel the papayas and slice them in half. Scoop out the seeds, then slice the flesh thinly. Arrange on top of the lettuce and cabbage. Soak the tomatoes in a bowl of boiling water for 1 minute, then lift out and peel. Remove the seeds and slice the flesh. Arrange on the salad greens. Scatter the peanuts and scallions over the top. Whisk the dressing and pour over the salad. Garnish with basil leaves and serve at once. # exotic fruit cocktail serves 4 ingredients 2 oranges 2 large passion fruit 1 pineapple 1 pomegranate 1 banana Cut 1 orange in half and squeeze the juice into a bowl, discarding any pips. Using a sharp knife, cut away all the peel and pith from the second orange. Working over the bowl to catch the juice, carefully cut the orange segments between the membranes to obtain skinless segments of fruit. Discard any pips. Cut the passion fruit in half, scoop the flesh into a nylon strainer and, using a spoon, push the pulp and juice into the bowl of orange segments. Discard the pips. Using a sharp knife, cut away all the skin from the pineapple and cut the flesh lengthwise into fourths. Cut away the central hard core. Cut the flesh into chunks and add to the orange and passion fruit mixture. Cover and, if you are not serving at once, let the fruit chill. Cut the pomegranate into fourths and, using your fingers or a teaspoon, remove the red seeds from the membrane. Cover and let chill until ready to serve--do not add too early to the fruit cocktail because the seeds discolor the other fruit. Just before serving, peel and slice the banana, add to the fruit cocktail with the pomegranate seeds, and mix thoroughly. Serve at once. 
# melon & strawberry salad serves 4 ingredients ½ iceberg lettuce, shredded 1 small honeydew melon 2 cups strawberries, sliced 2-inch/5-cm piece cucumber, thinly sliced fresh mint sprigs, to garnish for the dressing scant 1 cup plain yogurt 2 inch/5 cm piece cucumber, peeled a few fresh mint leaves ½ tsp finely grated lime or lemon rind pinch of superfine sugar 3-4 ice cubes Arrange the shredded lettuce on 4 serving plates. Cut the melon lengthwise into fourths. Scoop out the seeds and cut through the flesh down to the skin at 1-inch/2.5-cm intervals. Cut the melon close to the skin and detach the flesh. Place the chunks of melon on the beds of lettuce with the strawberry and cucumber slices. To make the dressing, put the yogurt, cucumber, mint leaves, lime or lemon rind, superfine sugar, and ice cubes into a blender or food processor. Blend together for about 15 seconds until smooth. Alternatively, chop the cucumber and mint finely, crush the ice cubes, and combine with the other ingredients. Serve the salad with a little dressing poured over it. Garnish with sprigs of fresh mint. 
# index almonds: beansprout, apricot & almond salad anchovies anchovy & olive salad broiled bell pepper salad Caesar salad Russian salad salad Niçoise apples green fruit salad nutty beet salad salmon & avocado salad spring clean salad Waldorf summer chicken salad apricots: bean sprout, apricot, & almond salad artichokes artichoke & prosciutto salad melon, chorizo & artichoke salad arugula asparagus & tomato salad avocado salad chicken, cheese, & arugula salad mussel salad roast pork & pumpkin salad salad with garlic dressing smoked chicken & cranberry salad smoked salmon & wild arugula salad tomato salad with feta cheese tuna & avocado salad warm pasta salad asparagus asparagus & tomato salad prosciutto with melon & asparagus avocados avocado salad salmon & avocado salad smoked chicken & cranberry salad smoked chicken salad with avocado & tarragon dressing smoked salmon & wild arugula salad tomato, mozzarella & avocado salad tuna & avocado salad bacon & ham artichoke & prosciutto salad chef's salad crispy spinach & bacon prosciutto with melon & asparagus walnut, pear & crispy bacon salad warm mushroom, spinach, & pancetta salad bananas: exotic fruit cocktail beans & pulses chickpea & tomato salad lima bean, onion & herb salad, with spicy sweet potato & bean salad three bean salad tuna & two-bean salad _see also_ fava beans; green beans; kidney beans; lentils bean sprouts bean sprout, apricot, & almond salad roast duck salad smoked chicken salad with avocado & tarragon dressing Thai-style chicken salad beef rare roast beef pasta salad roast beef salad warm beef Niçoise beet Cajun chicken salad nutty beet salad red bell pepper & radicchio salad red & green salad three bean salad beet greens Belgian endive: spring clean salad bell peppers broiled bell pepper & goat cheese salad broiled bell pepper salad Italian salad layered chicken salad mussel salad red bell pepper & radicchio salad roast chicken with pesto cream salad roast duck salad roasted garlic, sweet potato, 
broiled eggplant, & bell pepper salad with mozzarella roasted vegetable salad salad with garlic dressing succotash salad sweet & sour fish salad tuna & avocado salad tuna & herbed fusilli salad warm beef Niçoise wild rice salad with cucumber & orange zucchini & mint salad broiled bell pepper & goat cheese salad broiled bell pepper salad broiled lamb with yogurt & herb dressing buckwheat noodle salad with smoked tofu cabbage buckwheat noodle salad with smoked tofu papaya salad Caesar salad Cajun chicken salad cantaloupe & crab salad capers broiled bell pepper & goat cheese salad celeriac remoulade with crab mozzarella salad with sun-dried tomatoes Russian salad seared swordfish with fresh tomato salsa Capri salad carrots buckwheat noodle salad with smoked tofu orecchiette salad with pears & bleu cheese Russian salad sweet potato & bean salad celeriac remoulade with crab celery chicken, cheese, & arugula salad duck & radish salad roast chicken with pesto cream salad salad with garlic dressing shrimp cocktail salad spring clean salad sweet potato & bean salad Waldorf summer chicken salad zucchini & mint salad cheese avocado salad Caesar salad chef's salad chicken, cheese & arugula salad green bean & walnut salad orecchiette salad with pears & bleu cheese pear & Roquefort salad prosciutto with melon & asparagus spinach salad with bleu cheese dressing _see also_ feta cheese; goat cheese; mozzarella chef's salad chicken Cajun chicken salad chef's salad chicken, cheese & arugula salad layered chicken salad roast chicken with pesto cream salad smoked chicken & cranberry salad smoked chicken salad with avocado & tarragon dressing Thai-style chicken salad Waldorf summer chicken salad warm chicken liver salad chickpea & tomato salad chiles buckwheat noodle salad with smoked tofu chickpea & tomato salad chilled shrimp with pineapple & papaya salsa lima bean, onion, & herb salad, with spicy sausage Mexican tomato salad papaya salad roast duck salad seafood & spinach salad sweet
& sour fish salad Thai noodle salad Thai-style chicken salad tuna & avocado salad chives broiled lamb with yogurt & herb dressing layered chicken salad mussel salad Neapolitan seafood salad pear & Roquefort salad potato salad spinach salad with bleu cheese dressing turkey & rice salad chorizo lima bean, onion, & herb salad, with spicy sausage melon, chorizo & artichoke salad cockles: Neapolitan seafood salad coconut coconut shrimp with cucumber salad roast duck salad couscous: raspberry & feta salad with couscous crab cantaloupe & crab salad celeriac remoulade with crab cranberries smoked chicken & cranberry salad three bean salad crispy spinach & bacon cucumber chicken, cheese, & arugula salad coconut shrimp with cucumber salad melon & strawberry salad rare roast beef salad pasta roast duck salad salad with garlic dressing tabbouleh traditional Greek salad warm pasta salad wild rice salad with cucumber & orange duck duck & radish salad roast duck salad eggplants roasted garlic, sweet potato, broiled eggplant, & bell pepper salad with mozzarella roasted vegetable salad eggs broiled bell pepper salad Caesar salad chef's salad fava bean salad Russian salad salad Niçoise warm beef Niçoise exotic fruit cocktail fava beans fava bean salad Russian salad fennel & orange salad feta cheese fava bean salad raspberry & feta salad with couscous roast pork & pumpkin salad three bean salad tomato salad with feta cheese traditional Greek salad fig & watermelon salad fish sweet & sour fish salad _see also_ individual fish & seafood fruit salad exotic fruit cocktail green fruit salad tropical fruit salad garlic roasted garlic, sweet potato, grilled eggplant, & bell pepper salad with mozzarella salad with garlic dressing goat cheese broiled bell pepper & goat cheese salad warm red lentil salad with goat cheese grapes bean sprout, apricot, & almond salad chicken, cheese, & arugula salad fig & watermelon salad green fruit salad melon & mango salad green beans green bean & walnut salad 
roast beef salad roast pork & pumpkin salad salad Niçoise tuna & fresh vegetable salad tuna & two-bean salad warm beef Niçoise zucchini & mint salad green fruit salad herbs broiled lamb with yogurt & herb dressing herby potato salad lima bean, onion, & herb salad with spicy sausage red onion, tomato, & herb salad smoked chicken salad with avocado & tarragon dressing tuna & herbed fusilli salad zucchini & mint salad herby potato salad honey artichoke & prosciutto salad broiled lamb with yogurt & herb dressing fig & watermelon salad green fruit salad melon & mango salad mussel salad rare roast beef pasta salad shrimp & rice salad sweet & sour fish salad tuna & herbed fusilli salad walnut, pear, & crispy bacon salad Italian salad kidney beans Mexican tomato salad three bean salad kiwi fruit: green fruit salad lamb: broiled lamb with yogurt & herb dressing layered chicken salad lentils lentil & tuna salad warm new potato & lentil salad warm red lentil salad with goat cheese lima bean, onion, & herb salad, with spicy sausage limes cantaloupe & crab salad duck & radish salad fig & watermelon salad Mexican tomato salad papaya salad rare roast beef pasta salad roast duck salad shrimp & rice salad smoked salmon & wild arugula salad Thai noodle salad Thai-style chicken salad tuna & herbed fusilli salad lobster salad mache mangoes Cajun chicken salad melon & mango salad shrimp & mango salad shrimp & rice salad tropical fruit salad melon avocado salad cantaloupe & crab salad fig & watermelon salad green fruit salad melon, chorizo & artichoke salad melon & mango salad melon & strawberry salad prosciutto with melon & asparagus spring clean salad mesclun Mexican tomato salad mixed mushroom salad mizuna asparagus & tomato salad mozzarella Capri salad Italian salad mozzarella salad with sun-dried tomatoes roasted garlic, sweet potato, broiled eggplant, & bell pepper salad with mozzarella three-color salad tomato, mozzarella, & avocado salad mushrooms mixed mushroom salad Russian 
salad Thai noodle salad tuna & fresh vegetable salad turkey & rice salad warm mushroom, spinach, & pancetta salad mussels mussel salad Neapolitan seafood salad seafood salad seafood & spinach salad mustard artichoke & prosciutto salad asparagus & tomato salad Cajun chicken salad celeriac remoulade with crab chicken, cheese, & arugula salad lentil & tuna salad lobster salad melon, chorizo & artichoke salad mussel salad roast beef salad roast pork & pumpkin salad succotash salad three bean salad tomato, mozzarella & avocado salad tomato, salmon & shrimp salad tuna & herbed fusilli salad warm beef Niçoise warm chicken liver salad warm mushroom, spinach, & pancetta salad warm pasta salad nasturtium Neapolitan seafood salad noodles buckwheat noodle salad with smoked tofu Thai noodle salad nuts bean sprout, apricot, & almond salad nutty beet salad papaya salad roast beef salad shrimp & rice salad three bean salad turkey & rice salad _see also_ pine kernels; walnuts nutty beet salad olives anchovy & olive salad artichoke & prosciutto salad asparagus & tomato salad broiled bell pepper salad Capri salad herby potato salad Italian salad roast beef salad salad Niçoise seared swordfish with fresh tomato salsa tomato, mozzarella, & avocado salad tomato salad with feta cheese traditional Greek salad warm beef Niçoise onion lima bean, onion, & herb salad with spicy sausage red onion, tomato, & herb salad oranges exotic fruit cocktail fennel & orange salad fig & watermelon salad green fruit salad red & green salad spinach salad with bleu cheese dressing tropical fruit salad wild rice salad with cucumber & orange orecchiette salad with pears & bleu cheese papayas chilled shrimp with pineapple & papaya salsa papaya salad tropical fruit salad passion fruit exotic fruit cocktail melon & mango salad pasta Italian salad Neapolitan seafood salad orecchiette salad with pears & bleu cheese rare roast beef pasta salad roast beef salad tuna & herbed fusilli salad warm pasta salad pears
orecchiette salad with pears & bleu cheese pear & Roquefort salad walnut, pear & crispy bacon salad pesto: roast chicken with pesto cream salad pine nuts asparagus & tomato salad Italian salad mixed mushroom salad raspberry & feta salad with couscous roast pork & pumpkin salad pineapple chilled shrimp with pineapple & papaya salsa exotic fruit cocktail sweet & sour fish salad tropical fruit salad pomegranates: exotic fruit cocktail pork roast pork & pumpkin salad Thai noodle salad potatoes herby potato salad layered chicken salad potato salad Russian salad salmon & avocado salad Thai-style chicken salad tuna & avocado salad warm beef Niçoise warm new potato & lentil salad prosciutto artichoke & prosciutto salad prosciutto with melon & asparagus pumpkin: roast pork & pumpkin salad quinoa: tabbouleh radicchio avocado salad cantaloupe & crab salad lobster salad mussel salad orecchiette salad with pears & bleu cheese pear & Roquefort salad red bell pepper & radicchio salad roast beef salad radishes Cajun chicken salad duck & radish salad red bell pepper & radicchio salad salad with garlic dressing three bean salad rare roast beef pasta salad raspberries prosciutto with melon & asparagus raspberry & feta salad with couscous red & green salad red bell pepper & radicchio salad red chard red onion, tomato, & herb salad rice coconut shrimp with cucumber salad shrimp & rice salad turkey & rice salad wild rice salad with cucumber & orange roast beef salad roast chicken with pesto cream salad roast duck salad roast pork & pumpkin salad roasted garlic, sweet potato, broiled eggplant, & bell pepper salad with mozzarella roasted vegetable salad romaine lettuce Caesar salad chickpea & tomato salad Russian salad salad greens anchovy & olive salad asparagus & tomato salad Caesar salad Cajun chicken salad chef's salad duck & radish salad lima bean, onion, & herb salad with spicy sausage melon & mango salad melon & strawberry salad mixed mushroom salad mozzarella salad with sun-dried
tomatoes Neapolitan seafood salad nutty beet salad orecchiette salad with pears & bleu cheese papaya salad pear & Roquefort salad prosciutto with melon & asparagus roast chicken with pesto cream salad roast duck salad roasted garlic, sweet potato, broiled eggplant, & bell pepper salad with mozzarella salad Niçoise salmon & avocado salad shrimp cocktail salad shrimp & mango salad smoked chicken & cranberry salad sweet potato & bean salad three bean salad tomato, salmon, & shrimp salad traditional Greek salad tuna & two-bean salad Waldorf summer chicken salad warm beef Niçoise warm chicken liver salad _see also_ arugula; chicory; mizuna; radicchio; romaine lettuce; spinach; watercress salad leaf preparation salad leaf storage salad Niçoise salad with garlic dressing salmon salmon & avocado salad smoked salmon & wild arugula salad tomato, salmon, & shrimp salad sausage: see chorizo scallops seafood & spinach salad seafood salad seafood & spinach salad seafood salad seared swordfish with fresh tomato salsa sesame seeds broiled lamb with yogurt & herb dressing buckwheat noodle salad with smoked tofu Cajun chicken salad duck & radish salad tuna & avocado salad shrimp chilled shrimp with pineapple & papaya salsa coconut shrimp with cucumber salad Russian salad seafood & spinach salad shrimp cocktail salad shrimp & mango salad shrimp & rice salad Thai noodle salad tomato, salmon, & shrimp salad smoked chicken & cranberry salad smoked chicken salad with avocado & tarragon dressing smoked salmon & wild arugula salad snow peas: turkey & rice salad spinach crispy spinach & bacon red & green salad salmon & avocado salad seafood & spinach salad spinach salad with bleu cheese dressing warm mushroom, spinach, & pancetta salad warm red lentil salad with goat cheese spring clean salad squid Neapolitan seafood salad seafood salad strawberries melon & strawberry salad tropical fruit salad string beans: succotash salad succotash salad sugar snap peas: smoked chicken & cranberry salad
sunflower seeds: spinach salad with bleu cheese dressing sweet & sour fish salad sweet potatoes roasted garlic, sweet potato, broiled eggplant, & bell pepper salad with mozzarella sweet potato & bean salad sweetcorn Russian salad succotash salad Thai-style chicken salad swordfish: seared swordfish with fresh tomato salsa tabbouleh Thai noodle salad Thai-style chicken salad three bean salad three-color salad tofu: buckwheat noodle salad with smoked tofu tomatoes anchovy & olive salad artichoke & prosciutto salad asparagus & tomato salad Capri salad chef's salad chickpea & tomato salad herby potato salad Italian salad layered chicken salad lentil & tuna salad lima bean, onion, & herb salad with spicy sausage Mexican tomato salad mozzarella salad with sun-dried tomatoes Neapolitan seafood salad orecchiette salad with pears & bleu cheese papaya salad rare roast beef pasta salad red onion, tomato, & herb salad salad with garlic dressing salad Niçoise salmon & avocado salad seared swordfish with fresh tomato salsa smoked chicken salad with avocado & tarragon dressing sweet potato & bean salad tabbouleh three bean salad three-color salad tomato, mozzarella, & avocado salad tomato salad with feta cheese tomato, salmon, & shrimp salad traditional Greek salad tuna & avocado salad tuna & fresh vegetable salad tuna & herbed fusilli salad tuna & two-bean salad warm beef Niçoise warm pasta salad wild rice salad with cucumber & orange tongue: chef's salad traditional Greek salad tropical fruit salad trout: sweet & sour fish salad tuna lentil & tuna salad salad Niçoise tuna & avocado salad tuna & fresh vegetable salad tuna & herbed fusilli salad tuna & two-bean salad turkey & rice salad turnips: Russian salad Waldorf summer chicken salad walnuts Cajun chicken salad chicken, cheese & arugula salad green bean & walnut salad orecchiette salad with pears & bleu cheese salmon & avocado salad Waldorf summer chicken salad walnut, pear & crispy bacon salad warm beef Niçoise warm chicken
liver salad warm mushroom, spinach, & pancetta salad warm new potato & lentil salad warm pasta salad warm red lentil salad with goat cheese watercress melon & mango salad smoked chicken & cranberry salad smoked chicken salad with avocado & tarragon dressing sweet & sour fish salad walnut, pear & crispy bacon salad zucchini & mint salad wild rice salad with cucumber & orange yogurt: grilled lamb with yogurt & herb dressing zucchini layered chicken salad raspberry & feta salad with couscous roasted vegetable salad tuna & fresh vegetable salad zucchini & mint salad
MAT244--2018F > MAT244--Lectures & Home Assignments

sec 9.2 question 18

youjianz:
I'm not sure how to solve this question. Can someone give the answer for it?

Qingyang Wei:
For part a): We can write $$\frac{dy/dt}{dx/dt} = \frac{dy}{dx} = \frac{-8x}{2y}$$ This is a separable equation, so we can write it as $$2y\,dy = -8x\,dx$$ Integrate both sides to get $$y^2 + c_1 = -4x^2 + c_2$$ Rearrange and let $C = c_2 - c_1$; we get $$y^2 + 4x^2 = C$$ This is the expression $H(x,y)=C$ that all trajectories of the system satisfy.

Qingyang Wei:
Sorry, that previous post was for question 18 of 9.2 in the 10th edition of the book. If you are looking at the 11th edition, that's not the answer. Sorry if it causes any confusion.

Just to clarify, is the question you are asking this one?
$$\frac{dx}{dt} = 2x^2y - 3x^2 - 4y, \qquad \frac{dy}{dt} = -2xy^2 + 6xy$$
a) Find an equation of the form $H(x,y)=c$ and b) plot several level curves of the function $H$.

Qingyang Wei:
For question 18 in the 11th edition, we can write down $$\frac{dy}{dx} = \frac{-2xy^2 + 6xy}{2x^2y - 3x^2 - 4y}$$
and we can rearrange the equation as
$$(2x^2y - 3x^2 - 4y)\frac{dy}{dx} + (2xy^2 - 6xy) = 0$$

Now, does this equation look like a type of equation we have encountered before? Can you solve it with the methods we learned previously?

Victor Ivrii:
Maybe it is exact? Or you can find an integrating factor.
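Following the hint, one can verify that the 11th-edition equation is indeed exact (this completion is not part of the original thread). Writing it as $M\,dx + N\,dy = 0$ with $M = 2xy^2 - 6xy$ and $N = 2x^2y - 3x^2 - 4y$, we have
$$\frac{\partial M}{\partial y} = 4xy - 6x = \frac{\partial N}{\partial x},$$
so a function $H$ with $H_x = M$ and $H_y = N$ exists. Integrating $M$ with respect to $x$ gives $H = x^2y^2 - 3x^2y + g(y)$, and matching $H_y = 2x^2y - 3x^2 + g'(y)$ against $N$ forces $g'(y) = -4y$, i.e. $g(y) = -2y^2$. Hence the trajectories satisfy
$$H(x,y) = x^2y^2 - 3x^2y - 2y^2 = c.$$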
# Is there a theoretical upper bound on the mass any new particles can have?

One possible outcome of the collision experiments at the LHC is the discovery of new elementary particles with large mass. Is there a theoretical way to derive an upper bound on the mass of elementary particles? For example, we have the electron, the muon, and the tau particle. Can we be sure that there are no heavier elementary fermions with charge $e$? (Maybe there are some symmetry arguments?)

- If the Higgs counts as a "new particle", then there is. Also, there are lots of new particles at the scale of grand unification. And there may be an infinite tower of massive Kaluza-Klein particles if extra dimensions exist. – felix Apr 23 '11 at 4:57
- Well, this would be a bound on the mass of a particular particle. I'd be interested in whether there's a bound for the mass of any particle. – Lagerbaer Apr 23 '11 at 4:58

This needs an input from a theoretician. From my experimentalist's viewpoint, the answer is "probably not".

We do have the standard model; it is symmetry based, and masses are limited within this symmetry. Nevertheless, there are tantalizing indications of physics beyond the standard model, from deviations between measurements and theoretical calculations of several quantities.

Higher symmetries are envisioned, such as supersymmetry, and there would indeed be an upper limit on the masses of all the particles there, except that theoreticians will embed it into strings, which are another ball park, where even a black hole can be considered an excitation on a string. So no calculable upper bound.

This question, then, will be answered by the Theory Of Everything, strings, according to string theorists. Or we will keep opening Russian dolls to higher and higher symmetries and particles, as fans of the unpopular composite theories contend, until an alternate TOE tells us whether we have reached a top mass.

- Coming from a phenomenologist (in training :-p), I'm inclined to agree - at least, I can't think of anything we know to be true that would provide an upper mass limit. But I'm hesitant to definitively say "no" in case there's something I just don't know about or am forgetting. – David Z Apr 23 '11 at 5:22
THE United States Congress is angry about a scandal. It is alleged that from 1996 to 2003 the United Nations oil-for-food programme enabled Saddam Hussein to misappropriate hundreds of millions of dollars. Certain senior UN officials, particularly the man in charge of the programme, Benon Sevan, are said to have pocketed large kickbacks, and it is claimed that foreign, especially French, politicians took similar advantage of the system. These are serious accusations that warrant detailed investigation. But it must be immediately pointed out that there is a wealth of public documentation on the operation of the programme since 1996. It contains all the relevant information, including lists of all items supplied to Iraq in each six-month period. Those lists, like all details of Iraqi transactions, were drawn up meticulously by the UN Security Council's sanctions committee, made up of council representatives and operated by consensus. No decision could be taken without endorsement by the US which, together with Britain, vetoed contracts worth hundreds of millions of dollars on the grounds that certain products might be used to manufacture weapons of mass destruction, which we now know were a figment of the imagination of US strategists. The programme was subject to strict monitoring: if there were breaches, the US bears at least as much responsibility for them as the UN (1). Nor should we forget the tens of millions of dollars misappropriated by the international community via the UN compensation committee (UNCC) in Geneva (2), which was largely manipulated by the US. On the pretext of compensating those who suffered because of the Iraqi invasion of Kuwait, the UNCC took up to 30% of Iraq's oil revenue to "reimburse" such impoverished victims as the Kuwaiti Oil Company. A payment of $200m was made as late as April this year, two years after the fall of Saddam Hussein, at a time when the Iraqi government was begging for loans. 
But no committee of inquiry has been set up to investigate the worst scandal. The imposition of sanctions on Iraq in August 1990, and their maintenance after the liberation of Kuwait in 1991, had devastating consequences that will burden Iraq for a long time. While the media often drew attention to Iraq's difficulties in obtaining food and medical supplies, even after the start of the oil-for-food programme in 1996, it neglected the destructive effect of sanctions on Iraqi society. Despite the inventiveness of Iraqi engineers, the infrastructure gradually crumbled and fell apart. Basic services, ministries, power stations and drinking water suffered. Corruption spread through all levels of society. Crime exploded. The inhabitants of Baghdad, who had never bothered to lock their front doors or their cars, barricaded their homes. When the US invaded, it needed only a little push for the worm-eaten state to collapse. Sanctions also affected the structure of the population. Middle-class emigration, which had begun before 1991 as people fled the brutal dictatorship, accelerated. Iraq was emptied of its managers and administrators. The education system, which had catered for all the young, was abandoned by its pupils. Children left school to work and help meet their families' needs, resulting in a generation of lowered literacy standards. Academic links with other countries were severed: the sanctions committee even banned the import of scientific journals. Iraq fell 15 years behind and is not about to catch up. And for what? Everyone realises sanctions did not penalise the regime's leaders, who continued to enjoy considerable resources. Nor did they weaken its grip on the population: the introduction of rationing enabled the Ba'ath party to keep tabs on everybody, and the regime could have survived for years. But sanctions do explain the problems in rebuilding Iraq. 
These are due not only to a rise in armed resistance, but also to the dilapidated state of the infrastructure. Another factor, which should not be underestimated, is the determination of the US to monopolise the reconstruction contracts. Getting the electric power supply working again would have required involving Siemens and ABB, the German and Swedish firms that built Iraq's modern electric power grid. With the telephone system, help was needed from Alcatel in France, which had installed the network and was familiar with the terrain. But the US was out to punish the governments of Old Europe and secure juicy contracts for the companies that fund the Republican party. Sanctions caused the deaths of hundreds of thousands of civilians. What is more, they destabilised a key state in the region and initiated its fragmentation. Who will be tried for these crimes? What committee will report on these errors, for which the whole Middle East is paying so dearly? And who will guarantee that the US and the UN do not again choose to impose sanctions on a country and punish an entire people for the crimes of its leaders? (1) See Joy Gordon, "The real sanctions scandal", Le Monde diplomatique, English language edition, February 2005.
<HTML>
<BODY>
<PRE>
<STRONG>NAME</STRONG>
     <STRONG>glLoadIdentity</STRONG> - replace the current matrix with the identity matrix

<STRONG>C SPECIFICATION</STRONG>
     void <STRONG>glLoadIdentity</STRONG>( void )

<STRONG>DESCRIPTION</STRONG>
     <STRONG>glLoadIdentity</STRONG> replaces the current matrix with the identity matrix.
     It is semantically equivalent to calling <STRONG>glLoadMatrix</STRONG> with the
     identity matrix

          ( 1 0 0 0 )
          | 0 1 0 0 |
          | 0 0 1 0 |
          ( 0 0 0 1 )

     but in some cases it is more efficient.

<STRONG>ERRORS</STRONG>
     <STRONG>GL_INVALID_OPERATION</STRONG> is generated if <STRONG>glLoadIdentity</STRONG> is executed
     between the execution of <STRONG>glBegin</STRONG> and the corresponding execution
     of <STRONG>glEnd</STRONG>.

<STRONG>ASSOCIATED GETS</STRONG>
     <STRONG>glGet</STRONG> with argument <STRONG>GL_MATRIX_MODE</STRONG>
     <STRONG>glGet</STRONG> with argument <STRONG>GL_MODELVIEW_MATRIX</STRONG>
     <STRONG>glGet</STRONG> with argument <STRONG>GL_PROJECTION_MATRIX</STRONG>
     <STRONG>glGet</STRONG> with argument <STRONG>GL_TEXTURE_MATRIX</STRONG>

<STRONG>SEE ALSO</STRONG>
     <STRONG>glLoadMatrix</STRONG>, <STRONG>glMatrixMode</STRONG>, <STRONG>glMultMatrix</STRONG>, <STRONG>glPushMatrix</STRONG>
</PRE>
</BODY>
</HTML>
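An aside, not part of the man page above: glLoadIdentity "resets" the current matrix because the identity matrix leaves any matrix unchanged under multiplication. A minimal Python sketch of this, using the column-major 16-element layout that OpenGL uses for 4x4 matrices (the translation matrix M is an invented example):

```python
# OpenGL keeps 4x4 matrices as 16 numbers in column-major order.
IDENTITY = [1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 1]

# An arbitrary "current" matrix: a translation by (2, 3, 4).
M = [1, 0, 0, 0,
     0, 1, 0, 0,
     0, 0, 1, 0,
     2, 3, 4, 1]

def multiply(a, b):
    """Compose two column-major 4x4 matrices, r = a * b."""
    r = [0] * 16
    for col in range(4):
        for row in range(4):
            r[col * 4 + row] = sum(a[k * 4 + row] * b[col * 4 + k]
                                   for k in range(4))
    return r

# Multiplying by the identity changes nothing: M * I == M and I * M == M,
# which is why loading the identity and then calling glMultMatrix(M)
# yields exactly M as the current matrix.
assert multiply(M, IDENTITY) == M
assert multiply(IDENTITY, M) == M
```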
Q: simplexml_load_file : if main tag contains something

I'm not so sure about the title, will try to explain in the next lines. I have an xml file like this:

<CAR park="3" id="1" bay="0">
  <SITE_ID>0</SITE_ID>
  <SITE_NAME>Car Seller 1</SITE_NAME>
  . . .
</CAR>

I am successfully iterating through my xml to get all the data. But I want to be able to filter by bays. I want to do something like

$xml = simplexml_load_file('myfile.xml');
$x = 1;
foreach($xml as $car) {
    if($car->bay == '0'){
        echo $car->SITE_ID;
        $x++;
    }
}

A: You can use XPath to fetch only the bay 0 cars...

$bay0 = $xml->xpath('//CAR[@bay="0"]');
foreach ( $bay0 as $car ) {
    echo $car->SITE_ID.PHP_EOL;
}

The XPath statement is simply - any CAR element that has an attribute bay with the value 0 in it. In case you need to access attributes in other cases, with SimpleXML you access them as though they are array elements, so it would be $car['bay'] in the code you had above.
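The same attribute predicate works outside PHP as well. A small Python sketch using the standard library's ElementTree (with a hypothetical two-car document wrapped in a CARS root element, since a well-formed file needs a single root) applies the equivalent filter:

```python
import xml.etree.ElementTree as ET

# Hypothetical document: two CAR records wrapped in a CARS root.
doc = """
<CARS>
  <CAR park="3" id="1" bay="0"><SITE_ID>0</SITE_ID></CAR>
  <CAR park="3" id="2" bay="1"><SITE_ID>1</SITE_ID></CAR>
</CARS>
"""

root = ET.fromstring(doc)

# ElementTree supports the same attribute predicate: CAR[@bay="0"].
bay0 = root.findall('.//CAR[@bay="0"]')
site_ids = [car.findtext('SITE_ID') for car in bay0]
print(site_ids)  # SITE_ID values of the bay-0 cars only
```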
The goal of this project is to document common best practices for Oracle Forms development. It is meant to give you something to think about, with links to get more in-depth information about each subject. It is NOT meant to teach you Forms, Reports or PL/SQL.

Subfolders are:

/articles - Articles and documents from the community can be stored here
/demos - Demos for the demo server (forms12c.de)
/scripts - Community scripts
Great Basin: Caves, Trains & Wide Open Spaces
Discoveries America National Parks
Format: DVD - $24.95 | Blu-ray - $24.95 | Download - $9.95

The Great Basin Geosphere is a unique 200,000-square-mile piece of real estate. High desert and mountains cover 20% of the lower 48 United States, including most of Nevada, southern Oregon, Idaho, western Utah and a portion of eastern California. This vast area is much more than "just a desert."

Featuring the Great Basin National Heritage Area.

Great Basin National Park, Nevada
The park begins over a mile above sea level and reaches over 13,000 feet at the pinnacle of Wheeler Peak. Explore mysterious subterranean passages on a ranger-guided tour of Lehman Cave. Hike through the bristlecone pine trees, which can live to be 5,000 years old. Bask in the darkest of night skies.

Nevada Northern Railway National Historic Landmark
Experience a working museum where you can wander through old railroad buildings. Watch 21st-century trainmen and women repair 19th-century equipment. Ride on a living, breathing, moving and steaming museum! Tour the East Ely Yard & Shop Complex while engineers prepare locomotives for travel. Learn from folks passionate about preserving this survivor from a grand era of railroading in the Silver State. The Nevada Northern Railway is one of the best-preserved examples of a standard-gauge short line left in North America and one of the most complete rail facilities still in existence, consisting of the original railway locomotives, rolling stock, track, passenger station, and buildings that served the historic copper mining region of central Nevada for over a century.

Available on DVD, Blu-ray and Download.

NP30 REVIEW: Video Librarian
Subtitled "Caves, Trains & Wide Open Spaces," the latest entry in filmmakers Jim and Kelly Watt's high-def filmed Discoveries…America National Parks series takes viewers to the Great Basin—a vast, 200,000-plus square mile area of the Western United States' high desert region. With its boundaries stretching over most of Nevada, and parts of Oregon, Idaho, Utah, and a slice of eastern California, the Great Basin covers more than 20 percent of U.S. land. What makes it a basin? Instead of waterways draining into the ocean, all of the water in the Great Basin drains into itself—with moisture either working back into the soil, or evaporating under the hot sun. The Great Basin National Heritage Area is featured here, which preserves what is considered the classic Western landscape, and includes what is termed America's loneliest road—U.S. Route 50, which has a 350-mile stretch with only six towns (meaning 60-100 miles between gas stations and munchie stops). Within the Heritage Area lies Great Basin National Park, a microcosm of the Great Basin, with groves of indigenous bristlecone pines (some dating back more than 4,000 years) and the spectacular Lehman Caves (formed from marble and limestone 550 million years ago), among other notable features. A majestic travelogue sure to appeal to armchair travelers, this is recommended. Aud: P. (C. Block)

NP30 REVIEW: Booklist Online
Great Basin National Park, Nevada, is located within the Great Basin Geosphere, consisting of 200,000 square miles in portions of Nevada, Oregon, Idaho, Utah, and California. This program, a recent addition to the ongoing Discoveries . . . America National Parks series, introduces Great Basin National Park, Nevada, which boasts Lehman Cave, Nevada Northern Railway National Historic Landmark, and Nevada Northern Railway amid stunning landscapes. First we visit the cave, where a park ranger identifies stalactites and stalagmites that fill the underground structure. Lighting and handrails help visitors navigate the widened paths and appreciate the cave's breathtaking natural beauty. The executive director of the Nevada Northern Railway talks about the historic rail facility and fills us in on details about the railroad, which once linked copper mines with mills and smelters. As a working museum, the facility offers opportunities for travelers to ride the trains and visit historic buildings. Potential visitors and armchair travelers will enjoy the journey. — Candace Smith

NP30 REVIEW: The Midwest Book Review
Part of the "Discoveries... America National Parks" series of beautiful, high-definition video essays showcasing America's natural splendors, Great Basin: Caves, Trains & Wide Open Spaces is a virtual tour of this spectacular area of high desert and mountains. Spotlights include the Great Basin National Park of Nevada; the Nevada Northern Railway National Historic Landmark, a museum that invites visitors to take a walking tour through old railroad buildings, and observe living, breathing locomotives from the 19th century; and the Nevada Northern Railway, one of the most complete rail facilities still in existence. A treasure for admirers of nature's breathtaking splendor and railroading fans alike, Great Basin is enthusiastically recommended for public library collections, gift-giving, or simply savoring the next best thing to a personal cross-country vacation! 56 min., available in both DVD and Blu-ray format.
Skellington may refer to:

- Skellington Productions, a film production company
- Jack Skellington, a character from the 1993 film The Nightmare Before Christmas
- Skellington (band)
- Skellington (album), a 1989 album by Julian Cope
- Skellington 3, a 2018 album by Julian Cope
- Pirate Skellington ("A Skellington?" - Mr Centipede), a skeleton similar to Jack who appears in the film James and the Giant Peach

See also:

- "Skelington", a song by the British jazz-rock band Colosseum, originally from their 1971 album Colosseum Live
- Skillington, a village and civil parish in Lincolnshire, England
Stepanos Chmshkatsagetsi () was a 15th-century Armenian poet and scribe.

Life

He was born in the first half of the 15th century in the village of Chmshkatsag in the historical region of Tsopk. Biographical details are known from his colophons. His father was an Assyrian named Korel; his mother, Akhmelik, was Armenian. He had a brother named Karapet. By his own account he had been sickly and frail since childhood. By his father's decision he received an Armenian education, studying under a well-known teacher and philosopher. After completing his studies he copied manuscripts at Avag Vank. By 1463 he was already a hieromonk, and no later than 1486 he received the title of vardapet. He is thought to have died toward the end of the century.

Legacy

Of the manuscripts he copied there survive a collection of orations by Gregory the Theologian (1463), the Areopagitica (in part, 1464), two miscellanies (one from 1464, the other undated), a Tagaran (a collection of musical-poetic works, 1476), and Nerses Shnorhali's Commentary on the Gospel of Matthew (1486). To these manuscripts Stepanos added colophons (memorial notes) in verse. In them he describes the socio-political situation of his time and addresses readers with admonitions: for example, he advises them to care for manuscripts, to penetrate the meaning of books, to enrich mind and feeling, and so on. These colophons are notable for their emotional force, clarity of language and correct versification. Some of them are not only of great artistic value but also rich in historical information. The most valuable is the colophon to the Areopagitica, a panegyric in honor of Hovhannes Hamshentsi and the other monks of Avag Vank. The elegiac colophon of the Tagaran also stands out artistically; in it the author paints in dark colors the grievous condition of the region's Christian population during the rule of Uzun Hasan. It also contains interesting information about the capture of Tbilisi by Uzun Hasan's troops and the flight of King Bagrat, about the particulars of taxation in the Aq Qoyunlu state, and so on.

Notes

Categories: Poets of Armenia, Armenian poets, Scribes of Armenia
Jang Kyung-Jin is a South Korean footballer born in August 1983 in South Jeolla Province. He has played for Incheon United FC, the Jeonnam Dragons and Oita Trinita.
package placetracking.api.endpoint.action;

import com.googlecode.objectify.Key;
import com.googlecode.objectify.ObjectifyService;

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import placetracking.WebsiteRequest;
import placetracking.api.ApiResponse;
import placetracking.api.endpoint.Endpoint;
import placetracking.api.endpoint.EndpointManager;
import placetracking.api.endpoint.relation.AddRelationEndpoint;
import placetracking.datastore.model.Action;

public class AddActionEndpoint extends Endpoint {

    @Override
    public String getEndpointPath() {
        return EndpointManager.ENDPOINT_ACTIONS;
    }

    @Override
    public Set<String> getEndpointMethods() {
        return Endpoint.METHOD_POST;
    }

    @Override
    public List<String> getRequiredParameters(WebsiteRequest request) {
        List<String> params = super.getRequiredParameters(request);
        params.add("name");
        params.add("userId");
        params.add("topicId");
        return params;
    }

    @Override
    public ApiResponse generateRequestResponse(WebsiteRequest request) throws Exception {
        ApiResponse response = new ApiResponse();
        List<Action> results = getRequestResponseEntries(request);
        response.setContent(results);
        return response;
    }

    public static List<Action> getRequestResponseEntries(WebsiteRequest request) throws Exception {
        String name = request.getParameter("name");
        long userId = request.getParameterAsLong("userId", -1);
        long topicId = request.getParameterAsLong("topicId", -1);

        // Add a relation for the user to the topic, if not already set
        AddRelationEndpoint.addRelationIfNotYetSet(userId, topicId);

        // Add the new action
        Action action = new Action(name)
                .byUser(userId)
                .onTopic(topicId);

        // Save synchronously so the action has an id before logging
        Key<Action> key = ObjectifyService.ofy()
                .save()
                .entity(action)
                .now();

        List<Action> results = new ArrayList<Action>();
        results.add(action);

        log.info("Added a new action with name: " + name + " and id: " + action.getId());
        return results;
    }
}
"Russia in Collapse" ("Rossiya v obvale") is a historical-publicistic essay by Aleksandr Solzhenitsyn, written in May 1998, containing the author's reflections on the changes that took place in Russia in the 1990s, on the situation that had emerged, on problems of state, society, nation and morality, on possible paths to the country's revival, and on the fate of the Russian people. In this work Solzhenitsyn sharply and unambiguously condemned the reforms carried out by the Yeltsin-Gaidar-Chubais government (privatization in particular) and the actions of the Russian authorities in Chechnya.

"Russia in Collapse" continues a series of earlier publicistic works on the condition of Russia and on projects of social transformation: "Letter to the Soviet Leaders" (1973), "Rebuilding Russia" (1990), and "The Russian Question at the End of the Twentieth Century" (1994).

In the preface to the work the author writes:

"For all the already twelve-year duration of Russia's new, deep crisis of state and of life as a whole, in releasing the present work — my last on all these themes — I do not hope that my considerations can in the near term help us out of the painful erosion of our life. I write this book only as one of the witnesses to, and sufferers of, Russia's endlessly cruel century — to record what we have seen, see and are living through."

According to Lyudmila Saraskina, from 1994 to 1998 Solzhenitsyn's works were hardly published at all, and journalists greeted the appearance of this one negatively: "the country is not in collapse but on the rise!" Three months later, however, came the default, which changed the mood to "could the old man have known?" — and the popularity of Solzhenitsyn's works grew considerably.

Notes

External links
Aleksandr Solzhenitsyn, "Russia in Collapse"

Categories: Essays by Aleksandr Solzhenitsyn, 1998 essays, The dissolution of the USSR in culture and art
Zammara, Taxa Only, Exact Match: 30 Treatments

- Zammara smaragdula — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Ecuador including the description of five new species, a new subtribe, four new synonymies, and fifteen new records, Zootaxa 4880 (1), pp. 1-80 : 14-15
- Zammara hertha — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Peru including the description of twenty-four new species, three new synonymies, and thirty-seven new records, Zootaxa 4785 (1), pp. 1-129 : 10
- Zammara smaragdula — Sanborn, Allen F., 2018, The cicadas (Hemiptera: Cicadidae) of Panama including the description of six new species, three new combinations, one new synonymy, and nine new records, Zootaxa 4493 (1), pp. 1-69 : 11
- Zammara smaragdina — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Ecuador …, Zootaxa 4880 (1), pp. 1-80 : 14
- Zammara intricata — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Ecuador …, Zootaxa 4880 (1), pp. 1-80 : 13-14
- Zammara tympanum — Sanborn, Allen F., 2010, The cicadas of Colombia including new records and the description of a new species (Hemiptera: Cicadidae), Journal of Natural History 44 (25-26), pp. 1577-1607 : 1580
- Zammara brevis — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Ecuador …, Zootaxa 4880 (1), pp. 1-80 : 13
- Zammara — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Ecuador …, Zootaxa 4880 (1), pp. 1-80 : 12-13
- Zammara tympanum — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Peru …, Zootaxa 4785 (1), pp. 1-129 : 119
- Zammara olivacea — Sanborn, Allen F., 2018, The cicadas (Hemiptera: Cicadidae) of Panama …, Zootaxa 4493 (1), pp. 1-69 : 9-10
- Zammara brevis — Sanborn, Allen F., 2010, The cicadas of Colombia …, Journal of Natural History 44 (25-26), pp. 1577-1607 : 1579
- Zammara calochroma — Sanborn, Allen F., 2010, The cicadas of Colombia …, Journal of Natural History 44 (25-26), pp. 1577-1607 : 1579
- Zammara — Sanborn, Allen F., 2010, The cicadas of Colombia …, Journal of Natural History 44 (25-26), pp. 1577-1607 : 1579
- Zammara guyanensis n. sp. — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Suriname including the description of two new species, five new combinations, and three new records, Zootaxa 4881 (3), pp. 453-481 : 455-458
- Zammara tympanum — Sanborn, Allen F. & Heath, Maxine S., 2014, The cicadas of Argentina with new records, a new genus and fifteen new species (Hemiptera: Cicadoidea: Cicadidae), Zootaxa 3883 (1), pp. 1-94 : 84
- Zammara — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Suriname …, Zootaxa 4881 (3), pp. 453-481 : 455
- Zammara calochroma — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Ecuador …, Zootaxa 4880 (1), pp. 1-80 : 69
- Zammara hertha — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Ecuador …, Zootaxa 4880 (1), pp. 1-80 : 13
- Zammara olivacea — Sanborn, Allen F., 2010, The cicadas of Colombia …, Journal of Natural History 44 (25-26), pp. 1577-1607 : 1579
- Zammara smaragdina — Sanborn, Allen F., 2010, The cicadas of Colombia …, Journal of Natural History 44 (25-26), pp. 1577-1607 : 1580
- Zammara smaragdula — Sanborn, Allen F., 2010, The cicadas of Colombia …, Journal of Natural History 44 (25-26), pp. 1577-1607 : 1580
- Zammara — Sanborn, Allen F. & Heath, Maxine S., 2014, The cicadas of Argentina …, Zootaxa 3883 (1), pp. 1-94 : 51
- Zammara strepens — Sanborn, Allen F. & Heath, Maxine S., 2014, The cicadas of Argentina …, Zootaxa 3883 (1), pp. 1-94 : 51
- Zammara nigriplaga — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Ecuador …, Zootaxa 4880 (1), pp. 1-80 : 14
- Zammara calochroma — Sanborn, Allen F., 2018, The cicadas (Hemiptera: Cicadidae) of Panama …, Zootaxa 4493 (1), pp. 1-69 : 8
- Zammara boulardi — Sanborn, Allen F., 2018, The cicadas (Hemiptera: Cicadidae) of Panama …, Zootaxa 4493 (1), pp. 1-69 : 7
- Zammara — Sanborn, Allen F., 2020, The cicadas (Hemiptera: Cicadidae) of Peru …, Zootaxa 4785 (1), pp. 1-129 : 9-10
- Zammara nigriplaga — Sanborn, Allen F., 2018, The cicadas (Hemiptera: Cicadidae) of Panama …, Zootaxa 4493 (1), pp. 1-69 : 9
- Zammara smaragdina — Sanborn, Allen F., 2018, The cicadas (Hemiptera: Cicadidae) of Panama …, Zootaxa 4493 (1), pp. 1-69 : 10-11
- Zammara — Sanborn, Allen F., 2018, The cicadas (Hemiptera: Cicadidae) of Panama …, Zootaxa 4493 (1), pp. 1-69 : 7
Juicing is a hot trend among nutrition advocates. It's often looked on as an easy way to get more fruit, vegetables and fiber into your diet — breaking them down into an easy-to-consume drink. Whether you've turned to juicing to get finicky children to eat vegetables flavored with fruit — a popular ploy for parents — as a body cleanse, a way to lose weight, or simply for better nutrition, it helps to pick the right juicer.

Today's juicers can liquefy even the knottiest vegetables, transforming them into a smooth, drinkable liquid. The most common juicers fall into two main categories: fast and slow. Each type offers distinct advantages, so it's helpful to know what each has to offer.

One of the biggest distinctions among juicers is the type of juice they produce. Centrifugal, or fast, juicers liquefy fruits and vegetables, separating the juice from the pulp. These juices are the thinnest. One drawback of a fast juice extractor is the noise, which can be significant. Fast juicers also don't handle greens, such as kale, wheatgrass and spinach, as well as the slow, masticating style of juicer.

But if fiber and pulp are what you want, a slow juicer is preferred for its fiber benefits, such as fullness and satiety. This type of juicer has a slow, grinding action that completely pulverizes a fruit or vegetable, including the peel, stems and seeds. The masticating type is preferred by health advocate and dedicated juicer Cheryl Castillo, of San Antonio, Texas (bodybyfengshui.com), who credits juicing with helping her lose weight. "Juicing definitely helps with dieting," Castillo says. "I find I have more energy during the day and it helps curb my appetite so I'm not as likely to overeat at night." Castillo also recommends juices using vegetables and low-sugar fruits, such as Granny Smith apples and grapefruit.
"It's easy to fall into the diet trap that fruit is healthy, but if juices contain too much fruit, it's still a lot of sugar which isn't necessarily healthy or helpful for weight loss," Castillo says. She adds that using a recipe with balanced flavors is the best way to start because some mixtures are not always palatable to the uninitiated. Which type of juicer does Castillo prefer? She began with an inexpensive fast juicer, and when she found she liked juicing, she moved up to a slow juicer. "Fast juicers are great when you're in a hurry and want juice quickly, but you need to drink it then," she notes. Slow juicers, she says, handle greens well, use more of the vegetables and fruits, make less pulp, and the juice can be stored longer, up to 72 hours, which is helpful if you want to save juice to drink later in the day. Your need for speed may prompt your decision. Price may also influence your choice. Popular juicer brands don't come cheap, priced from $50 to $700 or more. A good quality juicer will be an investment, with average prices in the $200-$400 range.
# Ideal gas

**songoku** (Aug 28, 2009):

1. The problem statement, all variables and given/known data
A can filled with nitrogen has pressure 10 Pa, volume 10 cm3, and temperature 300 K.
(i) Find the volume at STP
(ii) Find the mass of the nitrogen
(iii) Find the change of pressure over temperature at 300 K
(iv) Find the energy of each molecule of nitrogen

2. Relevant equations
PV = nRT

3. The attempt at a solution
(i)
$$\frac{P_1 V_1}{P_2 V_2}=\frac{n_1 R T_1}{n_2 R T_2}$$
Assuming n1 = n2: ------> Is this right?
$$\frac{10 \times 10}{10^5 \times V_2}=\frac{300}{273}$$
$$V_2=2.73 \times 10^{-3}\; m^3$$

(ii)
P1V1 = nRT1
10 × 10⁻⁵ = n × 8.31 × 300
n = 4.011 × 10⁻⁸ mole
m = n × Mr = 4.011 × 10⁻⁸ × 28 = 1.12308 × 10⁻⁶

(iii) Is the question asking the change at 300 K compared to STP?
$$\frac{\Delta P}{\Delta T}=\frac{P_2-P_1}{V_2-V_1}=\frac{10-10^5}{10^{-5}-2.73 \times 10^{-3}}\approx 3.68 \times 10^7 \;\frac{Pa}{m^3}\; ??$$

(iv)
$$E=\frac{3}{2}nRT$$
Not sure about using T = 300 K or T = 273 K (STP)

Thanks

**kuruman** (Aug 28, 2009):

(i) Incorrect input value for volume - should be in m3. Also, the value "2.73" is suspicious. The ratio 300/273 is close to one and the other side is all powers of 10.
(ii) Mass calculation looks OK, but no units are given.
(iii) Method OK but calculation needs to be redone because it depends on answer in (i).
(iv) 300 K is not much different from 273 K. Expression is incorrect. You are asked to find the energy per molecule, not the total energy.

**songoku** (Aug 28, 2009):

Hi kuruman

I've revised my answer for (i), (ii), and (iii) according to your correction :)

For the last one:
$$\text{Energy per molecule}=\frac{3}{2}\frac{nRT}{N_A} \;??$$

Thanks

**kuruman** (Aug 29, 2009):

For the total energy of an ideal gas, I prefer the form
$$E = \frac{3}{2}N k T$$
where N is the number of molecules and k the Boltzmann constant. Then the energy per molecule is simply
$$\epsilon = \frac{3}{2}k T$$

**songoku** (Sep 6, 2009):

Hi kuruman

Ok I get it now.

Thanks a lot for your help
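Following kuruman's corrections (volume in m³, energy per molecule ε = 3/2 kT), the numbers can be checked with a short Python sketch; the only inputs beyond the problem statement are the molar mass of N₂ (28 g/mol) and standard constants:

```python
R = 8.31          # J/(mol K), the value used in the thread
k = 1.38e-23      # J/K, Boltzmann constant
P1, V1, T1 = 10.0, 10e-6, 300.0   # Pa, m^3 (10 cm^3 = 10e-6 m^3), K
P2, T2 = 1.0e5, 273.0             # STP pressure and temperature

# (i) Volume at STP from P1 V1 / T1 = P2 V2 / T2 (fixed amount of gas)
V2 = P1 * V1 * T2 / (T1 * P2)

# (ii) Moles from PV = nRT, then mass in grams (N2: 28 g/mol)
n = P1 * V1 / (R * T1)
m_grams = n * 28.0

# (iv) Energy per molecule at 300 K
eps = 1.5 * k * T1

print(V2, n, m_grams, eps)
```

Note that V2 comes out far smaller than the poster's 2.73 × 10⁻³ m³, which is what kuruman's first comment points at: the volume has to be entered in m³, not cm³.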
\section{Introduction} Besides the conventional meson and baryon states, QCD (Quantum Chromodynamics) also predicts a rich spectrum of so-called QCD exotics, among them glueballs ($gg$), hybrids ($q\bar{q}g$), and four-quark states ($qq\bar{q}\bar{q}$) in the 1.0 to 2.5 $~\mbox{GeV}/c^2$ mass region. Therefore, the search for evidence of these exotic states plays an important role in testing QCD. Radiative decays of quarkonium are expected to be a good place to look for glueballs, and there have been many studies performed in $J/\psi$ radiative decays~\cite{Jdecay, QWG}, while such studies have been limited in $\psi(2S)$ radiative decays due to low statistics in previous experiments~\cite{PDG, QWG}. Radiative decays of $\psi(2S)$ to light hadrons are expected at about 1\% of its total decay width~\cite{PRD-wangp}. However, the previously measured channels only sum up to 0.05\%~\cite{PDG}. In this paper we present measurements of $\psi(2S)$ decays into $\gammap\overline{p}$, $\gamma 2(\pi^+\pi^-)$, $\gamma K^0_S K^+ \pi^- + c.c.$, $\gamma K^+K^-\pi^+\pi^-$, $\gamma\kstarzK^-\pi^++c.c.$, $\gamma K^{\ast0} \overline{K}^{\ast0}$, $\gamma\pi^+\pi^-p\overline{p}$, $\gamma2(K^+K^-)$, $\gamma3(\pi^+\pi^-)$ and $\gamma2(\pi^+\pi^-)K^+K^-$ with the invariant mass of the hadrons ($m_{hs}$) in each final state less than 2.9$~\mbox{GeV}/c^2$. Measurements of $\psi(2S)$ decays into $\gammap\overline{p}\pi^0$, $p\overline{p}\pi^0\piz$, $2(\pi^+\pi^-)\pi^0$, $\omega\pi^+\pi^-$, $\omega f_2(1270)$, $b_1^\pm\pi^\mp$ , $\pi^0K^0_S K^+ \pi^- + c.c.$, $K^\pm\rho^\mpK^0_S$, $\piz2(\pi^+\pi^-)K^+K^-$ and $\gamma\piz2(\pi^+\pi^-)K^+K^-$ are also presented and are used for the background analysis. Many of the above measurements have been published previously~\cite{prlrad}; this paper provides more detailed information. \section{BES detector} BES is a conventional solenoidal magnetic detector that is described in detail in Refs.~\cite{bes,bes2}. 
A 12-layer vertex chamber (VTC) surrounding the beam pipe provides trigger and track information. A forty-layer main drift chamber (MDC), located radially outside the VTC, provides trajectory and energy loss ($dE/dx$) information for charged tracks over $85\%$ of the total solid angle. The momentum resolution is $\sigma _p/p = 1.78\% \sqrt{1+p^2}$ ($p$ in $~\mbox{GeV}/c$), and the $dE/dx$ resolution for hadron tracks is $\sim 8\%$. An array of 48 scintillation counters surrounding the MDC measures the time-of-flight (TOF) of charged tracks with a resolution of $\sim 200$ ps for hadrons. Radially outside the TOF system is a 12 radiation length, lead-gas barrel shower counter (BSC). This measures the energies of electrons and photons over $\sim 80\%$ of the total solid angle with an energy resolution of $\sigma_E/E=21\%/\sqrt{E}$ ($E$ in GeV). Outside of the solenoidal coil, which provides a 0.4~Tesla magnetic field over the tracking volume, is an iron flux return that is instrumented with three double layers of counters that identify muons of momentum greater than 0.5$~\mbox{GeV}/c$. The data sample used in this analysis was taken with the BESII detector at the BEPC storage ring at an energy of $\sqrt{s}= 3.686~\mbox{GeV}$. The number of $\psi(2S)$ events is ($14.0\pm 0.6)\times 10^6$, determined from inclusive hadronic events~\cite{pspscan}, and corresponds to a luminosity of $\mathcal{L}_{3.686} = (19.72\pm0.86)~\hbox{pb}^{-1}$~\cite{lum}, measured with large angle Bhabha events. Continuum data, used for background studies, were taken at $\sqrt{s} = 3.650~\mbox{GeV}$ with a luminosity of $\mathcal{L}_{3.650}=(6.42\pm0.24)~\hbox{pb}^{-1}$~\cite{lum}. The ratio of the two luminosities is $\mathcal{L}_{3.686}/\mathcal{L}_{3.650} = 3.07\pm0.09$. Monte Carlo (MC) simulations are used for the determination of mass resolutions and detection efficiencies, as well as background studies. 
The simulation of the BESII detector is GEANT3 based, where the interactions of particles with the detector material are simulated. Reasonable agreement between data and Monte Carlo simulation is observed~\cite{simbes} in various channels such as $e^+e^- \rightarrow (\gamma)e^+e^-$, $e^+e^-\rightarrow (\gamma)\mu^+\mu^-$, $J/\psi \rightarrow p\overline{p}$ and $\psi(2S) \rightarrow \pi^+\pi^- J/\psi$, $J/\psi \rightarrow \ell^+\ell^-$ $(\ell=e,\mu)$. An inclusive $\psi(2S)$ decay MC sample of the same size as the $\psi(2S)$ sample is generated by LUNDCHARM~\cite{chenjc} and used to estimate backgrounds.

\section{Event selection \label{sel}}

A neutral cluster is taken as a photon candidate when the following conditions are satisfied: the energy deposited in the BSC is greater than 50$~\mbox{MeV}$; the first hit is in the beginning six radiation lengths; the angle between the cluster and the nearest charged track is greater than $15^{\circ}$; and the difference between the angle of the cluster development direction in the BSC and the photon emission direction is less than $37^{\circ}$. Each charged track is required to be well fitted to a three-dimensional helix, be in the polar angle region $|\cos\theta|<0.8$ in the MDC, and have a transverse momentum greater than 70$~\mbox{MeV}/c$. The particle identification chi-squared, $\chi^2_{PID}(i)$, is calculated based on the $dE/dx$ and TOF measurements with the following definition $$\chi^2_{PID}(i) = \chi^2_{dE/dx}(i) + \chi^2_{TOF}(i).$$ For all analyzed decay channels, the final states of the candidate events must have the correct number of charged tracks with net charge zero. If there is more than one photon candidate in an event, the candidate with the largest energy deposit in the BSC is taken as the radiative photon in the event, and a four-constraint kinematic fit (4C-fit) is performed.
The combined confidence level, $prob(\chi^2_{comb}, ndf)=\frac{1}{2^{ndf/2}\Gamma({ndf}/2)}\int_{\chi^2_{comb}}^{\infty}e^{-\frac{t}{2}}t^{\frac{{ndf}}{2}-1}dt$, is required to be greater than 1\%, where $ndf$ is the number of degrees of freedom and $\chi^2_{comb}$ is defined as the sum of the $\chi^2$ of the kinematic fit ($\chi^2_{4C}$) and the particle identification terms: $\chi^2_{comb}=\chi_{4C}^2 + \sum\limits_{i} \chi_{PID}^2(i)$, where $i$ runs over all charged tracks. For each decay mode, $m_{hs}$ is required to be less than 2.9$~\mbox{GeV}/c^2$ to exclude $\psi(2S)$ radiative transitions into other charmonium states, such as $\chi_{cJ}$, $J/\psi$, and $\eta_{c}$. To remove background from charged particle misidentification, the value of $\chi^2_{comb}$ for $\psi(2S) \to \gamma + hs$ is required to be less than those for $\psi(2S)$ decays into background channels $\gamma + hs'$, where $hs'$ has the same number of charged tracks as $hs$, but different particle types for the charged tracks. Potential background from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi$ is largely suppressed by requiring $|m^{\pi^+\pi^-}_{recoil}-m_{J/\psi}|>0.05~\mbox{GeV}/c^2$, where $m^{\pi^+\pi^-}_{recoil}$ is the mass recoiling from each possible $\pi^+\pi^-$ pair. Possible background from $\psi(2S) \rightarrow K^0_S+X$ is removed by requiring that the invariant mass of $\pi^+\pi^-$ is outside the $K^0_S$ mass region ($|m_{\pi\pi}-m_{K^0_S}|>0.04~\mbox{GeV}/c^2$). \section{Backgrounds and fitting procedure \label{bkgs}} In our analyses, the backgrounds for each $\psi(2S) \to \gamma + hs$ decay mode fall into three classes: (1) continuum background, estimated using the continuum data; (2) multi-photon backgrounds, e.g.
$\psi(2S)\to \pi^0+hadrons$, $3\gamma+hadrons$, etc., where $hadrons$ denotes the same charged tracks as the signal final state, estimated with MC simulation and normalized according to their branching fractions; and (3) other backgrounds, estimated using an inclusive $\psi(2S)$ MC sample of 14 million events~\cite{chenjc}. Multi-photon backgrounds are dominant; the continuum background and the other backgrounds, including contamination between the channels studied, are smaller. The observed $\chi^2_{4C}$ distributions include both signal events and all of these backgrounds. The number of signal events for most radiative decay channels is extracted by fitting the observed $\chi^2_{4C}$ distributions with those of the signal and background channels~\cite{fitchi2}, {\it i.e.} $\chi^2_{obs}=w_s\chi^2_{sig}+\sum_{i}w_{b_i}\chi^2_{bg_i}$, where $w_s$ and $w_{b_i}$ are the weights of the signal and the background decays, respectively. As an example, Fig.~\ref{chisqfit} shows the observed $\chi^2$ distribution for $\psi(2S) \rightarrow \gamma 2(\pi^+\pi^-)$, together with the $\chi^2$ distributions for the signal, multi-photon, continuum, and other background channels, as well as the final fit. In the fit, the weights of the multi-photon backgrounds and the continuum background are fixed to their normalization factors, while the weight of the signal ($w_s$) and the weights of the other backgrounds are free. With this method, the number of signal events is extracted for each radiative decay mode for $m_{hs}<2.9~\mbox{GeV}/c^2$. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig1.eps} \caption{ \label{chisqfit} The fitted $\chi^2_{4C}$ distribution for $\psi(2S) \rightarrow \gamma 2(\pi^+\pi^-)$ candidate events. The dots with error bars are data.
The solid line is the fitted result, which is the sum of the four components: signal events (dashed line), MC simulated multi-photon backgrounds (dotted line), continuum (hatched histogram), and other backgrounds (dot-dashed line).} \end{figure} \section{Event analysis} \subsection{\boldmath $\psi(2S) \rightarrow \gammap\overline{p}$} The major backgrounds to $\psi(2S) \rightarrow \gammap\overline{p}$ come from the channels $\psi(2S) \rightarrow \pi^0\pizp\overline{p}$, $\gamma\pi^0p\overline{p}$, and $\pi^0p\overline{p}$. In order to estimate these backgrounds, we first measure their branching fractions. \subsubsection{Background estimation} For $\psi(2S) \rightarrow \pi^0\pizp\overline{p}$, candidate events must have two charged tracks and four good photons. To reject background from $\psi(2S) \rightarrow \pi^0\pizJ/\psi, \jpsitop\overline{p}$ events, the $p\overline{p}$ invariant mass is required to satisfy $|m_{p\overline{p}}-m_{J/\psi}|>0.1~\mbox{GeV}/c^2$. Figure~\ref{ppb2piz} (a) shows the scatter plot of $\gamma_1\gamma_2$ versus $\gamma_3\gamma_4$ invariant mass for events after selection, where $\gamma_1\gamma_2$ and $\gamma_3\gamma_4$ are formed from all possible combinations of the four photon candidates. The cluster of events shows a clear $\pi^0\piz$ pair signal. Figure~\ref{ppb2piz} (b) shows the ${\gamma_1\gamma_2}$ invariant mass after requiring the other two photons be consistent with being a $\pi^0$ ($|m_{\gamma_3\gamma_4}-m_{\pi^o}|<0.03~\mbox{GeV}/c^2$). A fit to the peak is performed with a double Gaussian function, where the parameters are determined from MC simulation, plus a second order polynomial to describe the smooth background. The number of events in the peak determined from the fit is $254\pm24$, and the background contamination to the peak is estimated to be $51\pm11$ from the $m_{\gamma_3\gamma_4}$ sidebands: (0.0, 0.06) and (0.2, 0.26) $~\mbox{GeV}/c^2$. 
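The branching fractions in this section follow from simple counting, ${\cal B} = (N_{fit}-N_{bkg})/(\epsilon \cdot N_{\psi(2S)})$. As an arithmetic illustration with the $\pi^0\pi^0 p\overline{p}$ numbers above and the phase-space efficiency of 8.3\% quoted in the text:

```python
# Counting-based branching fraction, using the pi0 pi0 p pbar numbers
# quoted in the text (illustrative arithmetic only).
n_fit = 254.0     # events in the fitted pi0 peak
n_bkg = 51.0      # sideband-estimated contamination of the peak
eff = 0.083       # phase-space detection efficiency (from the text)
n_psi2s = 14.0e6  # number of psi(2S) events

br = (n_fit - n_bkg) / (eff * n_psi2s)
print(f"B(psi(2S) -> pi0 pi0 p pbar) = {br:.2e}")  # 1.75e-04
```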
No significant resonance is observed in the mass distributions of any possible combination of particles in this final state, indicating no significant intermediate processes in the observed $\psi(2S) \rightarrow \pi^0\pizp\overline{p}$ candidate events. Therefore the detection efficiency for $\psi(2S) \rightarrow \pi^0\pizp\overline{p}$ is determined to be 8.3\% using a phase space generator, and the branching fraction is determined to be \begin{equation} {\cal B}(\psi(2S) \rightarrow \pi^0\pizp\overline{p}) =(1.75\pm0.21)\times10^{-4}, \nonumber \end{equation} where the error is statistical. \begin{figure} \begin{center} \includegraphics[width=0.52\textwidth]{fig2.eps} \caption{\label{ppb2piz} (a) Scatter plot of $\gamma_1\gamma_2$ versus $\gamma_3\gamma_4$ invariant mass for $\psi(2S) \rightarrow \pi^0\pizp\overline{p}$ candidate events. (b) Invariant mass distribution of $\gamma_1\gamma_2$ after requiring $|m_{\gamma_3\gamma_4}-m_{\pi^o}|<0.03~\mbox{GeV}/c^2$. The data are fitted with a double Gaussian function and a second order background polynomial. } \end{center} \end{figure} For $\psi(2S) \rightarrow \gamma\pi^0p\overline{p}$, candidate events must have two charged tracks and three good photons. To reject background from $\psi(2S) \rightarrow \pi^0\pizJ/\psi, \jpsitop\overline{p}$ events, the $p\overline{p}$ invariant mass is required to satisfy $|m_{p\overline{p}}-m_{J/\psi}|>0.1~\mbox{GeV}/c^2$. Figure~\ref{gpizppb} (a) shows the $\gamma\gamma$ invariant mass distribution for $\psi(2S) \rightarrow \gamma\gamma\gammap\overline{p}$ candidate events, where $\gamma\gamma$ is any possible combination among the three photon candidates. There is a clear $\pi^0$ signal. The distribution is fitted with a double Gaussian function with parameters determined from MC simulation plus a second order polynomial for the background. The number of events determined from the fit is $345\pm33$. 
Background studies indicate that the main contamination to the $\pi^0$ signal comes from $\psi(2S) \rightarrow \pi^0\pizp\overline{p}$ and $\pi^0p\overline{p}$; other backgrounds only contribute a smooth background. All possible backgrounds, including continuum, known simulated backgrounds ($\psi(2S) \rightarrow \pi^0\pizp\overline{p}$ and $\pi^0p\overline{p}$), and other unknown backgrounds estimated from the $\psi(2S)$ inclusive MC sample, are combined in Fig.~\ref{gpizppb} (b). Fitting this distribution in the same way as in Fig.~\ref{gpizppb} (a), the number of peaking background events is estimated to be $219\pm18$. Just as for $\psi(2S) \rightarrow \pi^0\pizp\overline{p}$, no significant intermediate process is observed in $\psi(2S) \rightarrow \gamma\pi^0p\overline{p}$. The efficiency is determined to be 8.94\% with a phase space generator, and the branching fraction is calculated to be \begin{equation} {\cal B}(\psi(2S) \rightarrow \gamma\pi^0p\overline{p}) = (1.0\pm0.3)\times 10^{-4}, \nonumber \end{equation} where the error is statistical. \begin{figure} \begin{center} \includegraphics[width=0.52\textwidth]{fig3.eps} \caption{\label{gpizppb} Distributions of $\gamma\gamma$ invariant mass for $\psi(2S) \rightarrow \gamma\gamma\gammap\overline{p}$ candidate events fitted with a double Gaussian function and a second order background polynomial. (a) signal and (b) backgrounds including continuum, known simulated backgrounds ($\psi(2S) \rightarrow \pi^0\pizp\overline{p}$ and $\pi^0p\overline{p}$), and other unknown backgrounds estimated from the $\psi(2S)$ inclusive MC sample.} \end{center} \end{figure} For $\psi(2S) \rightarrow \pi^0p\overline{p}$, there is an earlier measurement from BESII~\cite{ppbpi0-bes2}. We reanalyze this channel in the same way, and then extract the $m_{p\overline{p}}$ distribution and estimate its contamination to $\psi(2S) \rightarrow \gammap\overline{p}$. Candidate events are required to have two charged tracks and two good photons. 
The probability of the 4C-fit must be greater than 1\%, and the probability of the 4C-fit for the $\psi(2S) \rightarrow \gamma \gamma p\overline{p}$ hypothesis must be greater than that for $\psi(2S) \rightarrow \gamma \gamma K^+K^-$. To reject background from $\psi(2S) \rightarrow \pi^0\pizJ/\psi, \jpsitop\overline{p}$ events, the invariant mass of $p\overline{p}$ is required to be: $|m_{p\overline{p}}-m_{J/\psi}|>0.02~\mbox{GeV}/c^2$. A fit to the $m_{\gamma\gamma}$ distribution is performed with a double Gaussian function with parameters determined from MC simulation plus a second order polynomial for the background for $\psi(2S) \rightarrow \gamma\gammap\overline{p}$ candidate events. The number of events determined from the fit is $266\pm20$, and the detection efficiency is 14.8\%. The branching fraction is determined to be: \begin{equation} {\cal B}(\psi(2S) \rightarrow \pi^0p\overline{p}) = (13.0\pm1.0)\times10^{-6}, \nonumber \end{equation} where the error is statistical. This measurement agrees well with the previous BESII result of ($13.2\pm1.0\pm1.5)\times10^{-6}$~\cite{ppbpi0-bes2}. \subsubsection{Signal Analysis} For $\psi(2S) \rightarrow \gammap\overline{p}$, 329 events are observed after the event selection described in Section~\ref{sel}; the $p\overline{p}$ invariant mass distribution is shown in Fig.~\ref{mppb0}. After subtracting the normalized major backgrounds, $\psi(2S) \to \pi^0 \pi^0 p \bar{p}$, $\gamma \pi^0p\bar{p}$, and $\pi^0p\bar{p}$, the number of signal events is $142\pm18$. The detection efficiency determined from MC simulation is 35.3\%, and the branching fraction for this process is determined to be: \begin{equation} {\cal B}(\psi(2S) \rightarrow \gamma p\overline{p}) = (2.9\pm0.4)\times 10^{-5}, \nonumber \end{equation} where the error is statistical. 
There is an excess of events between $p\overline{p}$ threshold and $2.5~\mbox{GeV}/c^2$, but no significant narrow structure due to the $X(1859)$, that was observed in $J/\psi \rightarrow \gammap\overline{p}$~\cite{jpsi-gppb}. A fit to the mass spectrum (see Fig.~\ref{mppb}) with an acceptance-weighted $S$-wave Breit-Wigner for the $X$ resonance (with mass and width fixed to $1859~\mbox{MeV}/c^2$ and $30~\mbox{MeV}/c^2$, respectively), together with the normalized MC histograms for the above measured background channels ($\psi(2S) \to \pi^0 \pi^0 p \bar{p}$, $\gamma \pi^0p\bar{p}$, and $\pi^0p\bar{p}$) and the histogram from $\psi(2S) \to \gamma p\bar{p}$ phase space~\cite{massres}, yields $11.7\pm 6.7$ events with a statistical significance of 2.0 $\sigma$. The upper limit on the branching fraction is determined to be \begin{equation} {\cal B}(\psi(2S) \rightarrow \gamma X(1859)\rightarrow \gammap\overline{p})<5.4\times 10^{-6} \nonumber \end{equation} at the 90\% C.L. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig4.eps} \caption{\label{mppb0}The $p\overline{p}$ invariant mass distribution for $\psi(2S) \rightarrow \gammap\overline{p}$ candidate events (dots with error bars). The shaded histogram is the sum of all backgrounds, including continuum, known simulated backgrounds ($\psi(2S) \to \pi^0 \pi^0 p \bar{p}$, $\gamma \pi^0p\bar{p}$, and $\pi^0p\bar{p}$), and other unknown backgrounds estimated from the $\psi(2S)$ inclusive MC sample.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{fig5.eps} \caption{\label{mppb} The fit to the $m_{p\overline{p}}- 2m_p$ distribution of $\psi(2S) \rightarrow \gammap\overline{p}$ candidate events. 
The solid histogram is the fit result, the lower dashed line is the $X$ resonance shape, the dash-dotted histogram is the shape for $\psi(2S) \to \gamma p\bar{p}$ phase-space, the dotted histogram is the measured background channels ($\psi(2S) \to \pi^0 \pi^0 p \bar{p}$, $\gamma \pi^0p\bar{p}$, and $\pi^0p\bar{p}$), and the top dashed line is the efficiency curve.} \end{figure} \subsection{\boldmath $\psi(2S) \rightarrow \gamma2(\pi^+\pi^-)$} For $\psi(2S) \rightarrow \gamma2(\pi^+\pi^-)$, the main background comes from $\psi(2S) \rightarrow 2(\pi^+\pi^-)\pi^0$, so we first measure $\psipto2(\pi^+\pi^-)\pi^0$ in order to be able to estimate its contamination to $\psi(2S) \rightarrow \gamma2(\pi^+\pi^-)$. \subsubsection{Background analysis} For $\psi(2S) \rightarrow \piz2(\pi^+\pi^-)$, candidate events are required to have four charged tracks and two good photons. The probability of the 4C-fit must be greater than 1\%, the $\chi^2_{comb}$ probability for the $\psi(2S) \rightarrow \piz2(\pi^+\pi^-)$ hypothesis must be greater than those of $\psi(2S) \rightarrow \pi^0K^+K^-\pi^+\pi^-$ and $\psi(2S) \rightarrow \pi^0\pi^+\pi^-p\overline{p}$, and the sum of the momentum of any $\pi^+$ and $\pi^-$ pairs must be greater than $550~\mbox{MeV}/c$ to reject contamination from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi$ events. After the above selection, a clear $\pi^0$ signal can be seen in the $\gamma\gamma$ invariant mass distribution of $\psi(2S) \rightarrow 2(\pi^+\pi^-)\gamma\gamma$ candidates. After subtracting backgrounds, such as $\psi(2S) \to \gamma \chi_{c0}, \chi_{c0} \to 2(\pi^+\pi^-)$, $\psi(2S) \to \pi^0\pi^0 J/\psi, J/\psi \to 2(\pi^+\pi^-)$, etc., in the $\gamma\gamma$ invariant mass spectrum, the distribution is fitted with a $\pi^0$ signal shape determined with MC simulation plus a second order polynomial for the other remaining backgrounds, and the number of $\pi^0$ signal events is $2173\pm53$. 
The detection efficiency is determined to be 6.32\% taking into consideration the significant intermediate states such as $\omega\pi^+\pi^-$, $\omega f_2(1270)$, and $b_1^\pm\pi^\mp$ described below. Figure~\ref{m5piall} (b) shows the $\pi^+\pi^-\pi^0$ invariant mass distribution for events satisfying $|m_{\gamma\gamma}-m_{\pi^o}|<0.03~\mbox{GeV}/c^2$. The fit is performed with an $\omega$ signal shape plus a second order polynomial for the background, and the number of $\omega$ signal events obtained is $386\pm23$. The efficiency determined from MC simulation is 3.74\% correcting for intermediate states, such as $\omega f_2(1270)$ and $b_1^\pm\pi^\mp$, described below. Figure~\ref{m5piall} (c) shows the distribution of $\pi^+\pi^-$ invariant mass recoiling against the $\omega$, selected with the requirements $|m_{\pi^+\pi^-\pi^0}-m_{\omega}|<0.05~\mbox{GeV}/c^2$ and $|m_{\omega\pi}-m_{b_1}|>0.2~\mbox{GeV}/c^2$ to reject $b_1^\pm\pi^\mp$ events. The invariant mass spectrum is fitted with a $\sigma$, a $f_2(1270)$ shape determined from MC simulation, and a second order polynomial to describe other backgrounds. The number of $f_2(1270)$ events obtained is $57\pm13$, and the detection efficiency determined from MC simulation is 3.65\%. Figure~\ref{m5piall} (d) shows the $\omega\pi^\pm$ invariant mass spectrum with the requirement $|m_{\pi^+\pi^-\pi^0}-m_{\omega}|<0.05~\mbox{GeV}/c^2$. A clear $b_1^\pm$ signal is seen. Fitting with a $b_1$ signal shape with the mass and width fixed to PDG values plus a background polynomial, the number of $b_1^\pm$ signal events is $202\pm21$, and the detection efficiency determined from MC simulation is 3.24\%. 
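The first two branching fractions quoted next can be reproduced from the fitted yields and efficiencies above once the unobserved-decay branching fractions are divided out; that the quoted efficiencies exclude ${\cal B}(\pi^0\to\gamma\gamma)$ and ${\cal B}(\omega\to\pi^+\pi^-\pi^0)$ is an assumption of this sketch, not a statement from the text:

```python
# Yields and efficiencies from the text; the PDG secondary branching
# fractions used here are an assumption of this illustration.
n_psi2s = 14.0e6
br_pi0_gg = 0.988  # B(pi0 -> gamma gamma), PDG
br_w_3pi = 0.892   # B(omega -> pi+ pi- pi0), PDG

br_pi0_4pi = 2173.0 / (0.0632 * n_psi2s * br_pi0_gg)
br_w_pipi = 386.0 / (0.0374 * n_psi2s * br_w_3pi * br_pi0_gg)
# br_pi0_4pi ~ 24.9e-4 and br_w_pipi ~ 8.4e-4, matching the quoted values
```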
The branching fractions of these processes are determined to be: \begin{eqnarray} {\cal B}(\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)) = (24.9\pm0.7)\times 10^{-4},\nonumber \\ {\cal B}(\psi(2S) \rightarrow \omega\pi^+\pi^-) = (8.4\pm0.5)\times 10^{-4},\nonumber \\ {\cal B}(\psi(2S) \rightarrow \omega f_2(1270)) = (2.3\pm0.5)\times 10^{-4},\nonumber \\ {\cal B}(\psi(2S) \rightarrow b_1^\pm\pi^\mp) = (5.1\pm0.6)\times 10^{-4}, \nonumber \end{eqnarray} where the errors are statistical. \begin{figure}[htbp]\centering \includegraphics[width=0.5\textwidth]{fig6.eps} \caption{\label{m5piall} Invariant mass distributions with fits for $\psi(2S) \rightarrow \piz2(\pi^+\pi^-)$ candidates, where dots with error bars are data, and the solid histograms and curves are the fit results. (a) $\gamma\gamma$; (b) $\pi^+\pi^-\pi^0$ with $|m_{\gamma\gamma}-m_{\pi^o}|<0.03~\mbox{GeV}/c^2$; (c) $\pi^+\pi^-$ with $|m_{\pi^+\pi^-\pi^0}-m_{\omega}|<0.05~\mbox{GeV}/c^2$ and $b_1^\pm\pi^\mp$ events rejected; and (d) $\omega\pi^\pm$ for the $\psi(2S) \rightarrow \piz2(\pi^+\pi^-)$ candidate events. Resonance parameters are fixed to their world averaged values~\cite{PDG}.} \end{figure} \subsubsection{Signal analysis} For $\psi(2S) \rightarrow \gamma2(\pi^+\pi^-)$, candidate events require four charged tracks, and each track must be identified as a pion. The background from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi$ is rejected by requiring $|m_{recoil}^{\pi^+\pi^-} - m_{J/\psi}| > 0.05$ GeV/$c^2$. After selection, 1697 candidates remain, and the $2(\pi^+\pi^-)$ invariant mass distribution for the candidate events is shown in Fig.~\ref{m4pi}. 
The backgrounds include contributions from the continuum (estimated from the data sample at $\sqrt{s}=3.65~\mbox{GeV}$), $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)$, backgrounds remaining from $\pi^+\pi^- J/\psi, J/\psi\rightarrow\rho\pi$ and $\pi^0K^0_S K^+ \pi^- + c.c.$, and the other unknown backgrounds, which are assumed to have the same shape as that obtained from the inclusive $\psi(2S)$ MC sample. Using the $\chi^2$ fitting method of Section~\ref{bkgs}, the number of signal events is $583\pm41$. The detection efficiency determined from MC simulation is 10.4\%, and the branching fraction for this process is determined to be: \begin{equation} {\cal B}(\psi(2S) \rightarrow \gamma 2(\pi^+\pi^-)) = (39.6\pm2.8)\times 10^{-5}, \nonumber \end{equation} where the error is statistical. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig7.eps} \caption{\label{m4pi}The $2(\pi^+\pi^-)$ invariant mass distribution for $\psi(2S) \rightarrow \gamma2(\pi^+\pi^-)$ candidate events (dots with error bars). The shaded histogram includes contributions from the continuum (estimated from the data sample at $\sqrt{s}=3.65~\mbox{GeV}$), $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)$, backgrounds remaining from $\pi^+\pi^- J/\psi, J/\psi\rightarrow\rho\pi$ and $\pi^0K^0_S K^+ \pi^- + c.c.$, and other unknown backgrounds estimated from the inclusive $\psi(2S)$ MC sample.} \end{figure} \subsection{\boldmath $\psi(2S) \rightarrow \gamma K^0_S K^+ \pi^- + c.c.$} For $\psi(2S) \rightarrow \gamma K^0_S K^\pm\pi^\mp$, the main background comes from $\psi(2S) \rightarrow K^0_S K^\pm\pi^\mp\pi^0$, so we first measure $\psi(2S) \rightarrow K^0_S K^\pm\pi^\mp\pi^0$ in order to estimate its contamination to $\psi(2S) \rightarrow \gamma K^0_S K^\pm\pi^\mp$. \subsubsection{Background estimation} For $\psi(2S) \rightarrow \pi^0K^0_S K^+ \pi^- + c.c.$, candidate events require four charged tracks and two good photons.
Figure~\ref{mks-lxy} shows the scatter plot of $\pi^+\pi^-$ invariant mass versus the decay length in the transverse plane ($L_{xy}$) of $K^0_S$ candidates, where a clear $K^0_S$ signal is observed. Candidate events are required to have only one $K^0_S$ candidate satisfying the requirements $|m_{\pi^+\pi^-}-m_{K_S^0}|<0.015~\mbox{GeV}/c^2$ and $L_{xy}>0.5$ cm. After $K^0_S$ selection, the remaining two tracks are identified using their $\chi^2_{K\pi}$ values, \textit{i.e.}, if $\chi^2_{K^+\pi^-}<\chi^2_{\pi^+K^-}$, the final state is considered to be $\gamma\gamma\ksK^+\pi^-$; if $\chi^2_{K^-\pi^+}<\chi^2_{\pi^-K^+}$, the final state is considered to be $\gamma \ksK^-\pi^+$, where $\chi^2_{K\pi}=\chi^2_{PID}(K)+\chi^2_{PID}(\pi)$. The confidence level of the 4C-fit must be greater than 1\%, and the sum of the momentum of any $\pi^+$ and $\pi^-$ pair greater than $550~\mbox{MeV}/c$ to reject contamination from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi$ events. \begin{figure}[htbp]\centering \includegraphics[width=0.45\textwidth]{fig8.eps} \caption{\label{mks-lxy}The scatter plot of $\pi^+\pi^-$ invariant mass versus the $K^0_S$ decay length for $\psi(2S) \rightarrow \gamma\gammaK^0_S K^+ \pi^- + c.c.$ candidate events.} \end{figure} After requiring $|m_{\pi^+\pi^-}-m_{K_S^0}|<0.015~\mbox{GeV}/c^2$, the $\gamma\gamma$ invariant mass is shown in Fig.~\ref{ggkskp} (a), and a clear $\pi^0$ signal is seen. After requiring $|m_{\gamma\gamma}-m_{\pi^o}|<0.03 ~\mbox{GeV}/c^2$, the $\pi^\pm\pi^0$ invariant mass is shown in Fig.~\ref{ggkskp} (b), where there is a clear $\rho^\pm$ signal. \begin{figure}[htbp]\centering \includegraphics[width=0.52\textwidth]{fig9.eps} \caption{\label{ggkskp}Invariant mass spectra of (a) $\gamma\gamma$ and (b) $\pi^\pm\pi^0$ for $\psi(2S) \rightarrow \gamma\gammaK^0_S K^\pm\pi^\mp$ candidate events. 
Dots with error bars are data, the histograms are the fits using signal shapes determined from Monte Carlo simulation and second order polynomials for background, and the dashed curves are the background shapes from the fit.} \end{figure} The $\gamma\gamma$ invariant mass distribution is fitted with a $\pi^0$ signal shape determined with MC simulation plus a second order polynomial for the background, and the result is shown in Fig.~\ref{ggkskp} (a). The number of $\pi^0$ signal events fitted is $361\pm25$, and the efficiency determined from MC simulation is 4.40\%, including the effect of the intermediate $K^\pm\rho^\mpK^0_S$ state. The $\pi^\pm\pi^0$ invariant mass distribution is fitted with a $\rho^\pm$ signal shape determined with MC simulation plus a second order polynomial for the background, and the fit result is shown in Fig.~\ref{ggkskp} (b). The number of $\rho^\pm$ signal events is $100\pm20$, and the detection efficiency is 3.80\% determined from MC simulation. The branching fractions of these two processes are determined to be \begin{eqnarray} {\cal B}(\psi(2S) \rightarrow \pi^0K^0_S K^\pm\pi^\mp) = (8.9\pm0.6)\times10^{-4},\nonumber \\ {\cal B}(\psi(2S) \rightarrow K^\pm\rho^\mpK^0_S) = (2.9\pm0.6)\times10^{-4}, \nonumber \end{eqnarray} where the errors are statistical. \subsubsection{Signal analysis} The event selection is similar to $\psi(2S) \rightarrow \pi^0K^0_S K^+ \pi^- + c.c.$, but only one photon is required. After event selection, the $\pi^+\pi^-$ invariant mass distribution is shown in Fig.~\ref{mksfit-gkskp}. A fit is performed with a histogram describing the $K^0_S$ shape obtained from MC simulation, the normalized histogram for $\psi(2S) \rightarrow \pi^0K^0_S K^+ \pi^- + c.c.$ background, and a Legendre polynomial for the other smooth backgrounds. The fit yields $115\pm16$ events. 
The detection efficiency is 4.83\%, and the branching fraction is determined to be: $${\cal B}(\psi(2S) \rightarrow \gammaK^0_S K^+ \pi^- + c.c.)= (25.6\pm3.6)\times 10^{-5},$$ where the error is statistical. Figure~\ref{mkskp} shows the $K^0_S K^\pm\pi^\mp$ invariant mass distribution after event selection. \begin{figure}[htbp]\centering \includegraphics[width=0.45\textwidth]{fig10.eps} \caption{The $\pi^+\pi^-$ invariant mass distribution for $\psi(2S) \rightarrow \gammaK^0_S K^+ \pi^- + c.c.$ candidate events. Dots with error bars are data. The histogram is the fit with a histogram describing the $K^0_S$ shape obtained from MC simulation, the normalized histogram for $\psi(2S) \rightarrow \pi^0K^0_S K^+ \pi^- + c.c.$ background (dashed histogram), and a Legendre polynomial for the other smooth backgrounds (dotted histogram).} \label{mksfit-gkskp} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig11.eps} \caption{\label{mkskp} The $K^0_S K^\pm\pi^\mp$ invariant mass distribution for $\psi(2S) \rightarrow \gammaK^0_S K^+ \pi^- + c.c.$ candidates (dots with error bars). The shaded histogram is the sum of backgrounds including $\psi(2S) \rightarrow \pi^0K^0_S K^+ \pi^- + c.c.$ and continuum background.} \end{figure} \subsection{\boldmath $\psi(2S) \rightarrow \gammaK^+K^-\pi^+\pi^-$} For $\psi(2S) \rightarrow \gammaK^+K^-\pi^+\pi^-$, candidate events require four charged tracks, among which two tracks must be identified as pions and the other two tracks identified as kaons. The background from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi$ is rejected by requiring $|m_{recoil}^{\pi^+\pi^-} - m_{J/\psi}| > 0.05$ GeV/$c^2$, the background from $\psi(2S) \to \gamma2(\pi^+\pi^-)$ is rejected by requiring $\chi^2_{\gamma K^+ K^-\pi^+\pi^-} < \chi^2_{\gamma 2(\pi^+\pi^-)}$, and the background from $\psi(2S) \to \gammaK^0_S K^+ \pi^- + c.c.$ is rejected by requiring $|m_{\pi^+\pi^-} - m_{K_S^0}| > 0.04$ GeV/$c^2$. 
Figure~\ref{m2k2pi} shows the $K^+K^-\pi^+\pi^-$ invariant mass distribution after event selection, where 361 events are observed. The backgrounds mainly come from $\psi(2S) \rightarrow \pi^0K^+K^-\pi^+\pi^-$ final states including intermediate states. Using the $\chi^2$ fitting method of Section~\ref{bkgs}, the number of signal events is $132\pm19$. The detection efficiency determined from MC simulation is 4.94\%, and the branching fraction for this process is determined to be: \begin{equation} {\cal B}(\psi(2S) \rightarrow \gamma K^+ K^- \pi^+\pi^- ) = (19.1\pm2.7)\times 10^{-5}, \nonumber \end{equation} where the error is statistical. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig12.eps} \caption{\label{m2k2pi}The $K^+K^-\pi^+\pi^-$ invariant mass distribution for $\psi(2S) \rightarrow \gammaK^+K^-\pi^+\pi^-$ candidates (dots with error bars). The shaded histogram is background which mainly comes from $\psi(2S) \rightarrow \pi^0K^+K^-\pi^+\pi^-$.} \end{figure} \subsection{\boldmath $\psi(2S) \rightarrow \gamma\kstarzK^+\pi^-+c.c.$ and $\psi(2S) \rightarrow \gammaK^{\ast0}\bar{K^{\ast0}}+c.c.$ } We apply the same event selection criteria as for $\psi(2S) \rightarrow \gammaK^+K^-\pi^+\pi^-$. For this decay channel, the main background channels are from $\psi(2S) \rightarrow \kstarzK^-\pi^+\pi^0 +c.c.$ (phase space), $\psi(2S) \rightarrow \kstarzK^-\rho^+ +c.c.$, and $\psi(2S) \rightarrow K^{\ast0} \overline{K}^{\ast0} \pi^0$. 
Using a Breit-Wigner function to describe the signal along with the sum of normalized histograms from the $\psi(2S) \rightarrow \kstarzK^-\pi^+\pi^0 +c.c.$ (phase space), $\psi(2S) \rightarrow \kstarzK^-\rho^+ +c.c.$, and $\psi(2S) \rightarrow K^{\ast0} \overline{K}^{\ast0} \pi^0$ background channels, and a second order Legendre polynomial to describe other remaining backgrounds to fit the $K^{\pm} \pi^{\mp}$ invariant mass spectrum, $237\pm39$ $\psi(2S) \rightarrow \gamma\kstarzK^-\pi^+ +c.c.$ candidate events are obtained, as shown in Fig.~\ref{kstar-fit}. Events from the intermediate state $\psi(2S) \rightarrow \gammaK^{\ast0}\overline{K}^{\ast0}$ are counted twice, so an efficiency correction must be made for this. \begin{figure}[hbtp]\centering \includegraphics[width=0.45\textwidth]{fig13.eps} \caption{\label{kstar-fit} The $K^\pm\pi^\mp$ invariant mass distribution for $\psi(2S) \rightarrow \gammaK^+K^-\pi^+\pi^-$ candidates. Dots with error bars are data. The blank histogram is the result of a fit using a Breit-Wigner function to describe the signal along with the sum of normalized histograms from the $\psi(2S) \rightarrow \kstarzK^-\pi^+\pi^0 +c.c.$ (phase space), $\psi(2S) \rightarrow \kstarzK^-\rho^+ +c.c.$, and $\psi(2S) \rightarrow K^{\ast0} \overline{K}^{\ast0} \pi^0$ background channels, and a second order Legendre polynomial to describe other remaining backgrounds. The dashed histogram is the fitted sum of all backgrounds.} \end{figure} The scatter plot of $K^+\pi^-$ versus $K^-\pi^+$ invariant mass is shown in Fig.~\ref{sandian}, where clear $K^{\ast0}$ and $\overline{K}^{\ast0}$ signals are seen. The numbers of $\psi(2S) \rightarrow \gammaK^{\ast0}\overline{K}^{\ast0}$ events and background events are estimated from the scatter plot. The signal region is shown as a square box at (0.896, 0.896)$~\mbox{GeV}/c^2$ with a width of $60~\mbox{MeV}/c^2$. Backgrounds are estimated from sideband boxes, which are taken 60 MeV$/c^2$ away from the signal box. 
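The sideband bookkeeping described next can be sketched as follows (toy counts, invented for illustration; the corner boxes estimate the flat component that the band sidebands count twice):

```python
def sideband_subtract(n_sig_box, n_horizontal, n_vertical, n_corners):
    """2D sideband subtraction: each K* band background is estimated
    from the horizontal/vertical sideband boxes, and the flat phase-space
    component (counted twice there) is removed via the four corner boxes."""
    background = 0.5 * (n_horizontal + n_vertical) - 0.25 * n_corners
    return n_sig_box - background

# Toy numbers (invented, not the counts in the figure):
signal = sideband_subtract(60, 20, 18, 8)
# background = 0.5*(20+18) - 0.25*8 = 17, so signal = 43
```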
Background in the horizontal or vertical sideband boxes is twice that in the signal region. If we subtract half the number of events in the horizontal and vertical sideband boxes, we double count the phase space background. Therefore the background is one-half the number of events in the horizontal and vertical boxes minus one-fourth the number of events in the four corner boxes. After subtraction, $41\pm8$ $\psi(2S) \rightarrow \gamma K^{\ast0}\overline{K}^{\ast0}$ candidates are obtained, and the efficiency is $(2.75\pm0.06)\%$. In simulating signal channels containing $K^{\ast}$, the shape of the $K^{\ast}$ is described by a P-wave relativistic Breit-Wigner with a mass-dependent width $$\Gamma=\Gamma_0 \frac{m_0}{m} \frac{1+r^{2}p_0^2}{1+r^{2} p^2}\Big[\frac{p}{p_0}\Big]^3,$$ where $m$ is the mass of the $K\pi$ system, $p$ is the momentum of the kaon in the $K\pi$ system, $\Gamma_0$ is the width of the resonance, $m_0$ is the mass of the resonance, $p_0$ is the momentum evaluated at the resonance mass, $r$ is the interaction radius, and $\frac{1+r^{2}p_0^2}{1+r^{2}p^2}$ represents the contribution of the barrier factor. The value $r=(3.4\pm0.6\pm0.3)~(\mbox{GeV}/c)^{-1}$, measured in $K^-\pi^+$ scattering~\cite{r-aston}, is used as an estimate of the interaction radius $r$. \begin{figure}[hbtp]\centering \includegraphics[width=0.45\textwidth]{fig14.eps} \caption{\label{sandian} The scatter plot of $K^+\pi^-$ versus $K^-\pi^+$ invariant mass of $\psi(2S) \rightarrow \gamma K^+K^-\pi^+\pi^-$ candidate events.
The center box indicates the signal region for $\psi(2S) \rightarrow \gamma K^{\ast0}\overline{K}^{\ast0}$ events, and the other boxes are used for background determination.} \end{figure} Taking into consideration the effect of the intermediate channel $\psi(2S) \to \gamma K^{\ast0}\overline{K}^{\ast0}$, the efficiency for $\psi(2S) \rightarrow \gamma K^{\ast0} K^-\pi^+ +c.c.$ is 6.86\%, and we obtain the branching fractions: \begin{eqnarray} {\cal B}(\psi(2S) \rightarrow \gamma K^{\ast0} K^-\pi^+ +c.c.) = (37.0\pm6.1)\times 10^{-5},\nonumber \\ {\cal B}(\psi(2S) \rightarrow \gamma K^{\ast0}\overline{K}^{\ast0}) = (24.0\pm4.5)\times 10^{-5},\nonumber \end{eqnarray} where the errors are statistical. \subsection{\boldmath $\psi(2S) \rightarrow \gamma2(K^+K^-)$} For $\psi(2S) \rightarrow \gamma2(K^+K^-)$, the candidate events must have four charged tracks, and every track must be identified as a kaon. The backgrounds from $\psi(2S) \rightarrow \gamma2(\pi^+\pi^-), \gamma\pi^+\pi^-K^+K^-$ are rejected by requiring the $\chi^2_{4C}$ for the signal channel to be less than those for the background channels. There are 15 events observed after event selection, and the $2(K^+K^-)$ invariant mass distribution is shown in Fig.~\ref{m4k}. The detection efficiency for this channel is 2.93\%. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig15.eps} \caption{\label{m4k}Invariant mass distribution of $2(K^+K^-)$ for $\psi(2S) \rightarrow \gamma2(K^+K^-)$ candidates (dots with error bars). The shaded histogram is background, mainly from $\psi(2S) \to \pi^0 2(K^+K^-)$.} \end{figure} The dominant background comes from $\psi(2S) \rightarrow \pi^0 2(K^+K^-)$. Using the branching fraction measured by CLEO~\cite{chic23hs}, the estimated number of background events remaining is $8\pm2$. To measure the continuum contribution in this channel, the continuum data at $E_{cm}=3.65~\mbox{GeV}$ are analyzed using the same criteria as for the $\psi(2S)$ data, and no events survive.
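With 15 observed events and an expected background of $8\pm2$, the quoted upper limit can be approximated by a classical Poisson construction (an illustration; the exact prescription used in the paper, including systematic uncertainties, is not reproduced here):

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    term = math.exp(-mu)
    total = term
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def upper_limit_90(n_obs, n_bkg, step=0.01):
    """Largest signal s with P(N <= n_obs | s + n_bkg) >= 0.10."""
    s = 0.0
    while poisson_cdf(n_obs, s + n_bkg) > 0.10:
        s += step
    return s

# 15 observed events over an expected background of 8 (from the text):
s_ul = upper_limit_90(15, 8.0)  # about 13 events, near the quoted limit
```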
It is also found that no events survive from the simulated 14 million inclusive $\psi(2S)$ decay MC sample. The upper limit on the number of $\psi(2S) \rightarrow \gamma 2(K^+K^-)$ events is 14 at the 90\% C.L., and the corresponding upper limit on the branching fraction after considering systematic uncertainties is \begin{equation} {\cal B}(\psi(2S) \rightarrow \gamma 2(K^+K^-))<4.0\times 10^{-5}. \nonumber \end{equation} \subsection{\boldmath $\psi(2S) \rightarrow \gamma\pi^+\pi^-p\overline{p}$} For $\psi(2S) \rightarrow \gamma\pi^+\pi^-p\overline{p}$, there must be four good charged tracks, and two of them must be identified as a proton and an antiproton. The backgrounds from $\gamma 2(\pi^+\pi^-)$ and $\gamma\pi^+\pi^-K^+K^-$ are rejected by requiring $\chi^2_{4C}$ for the signal channel to be less than for the background channels. To eliminate possible contamination from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi, J/\psi \to \gamma p\overline{p}$, we require $|m^{\pi^+\pi^-}_{recoil}-m_{J/\psi}|>0.02~\mbox{GeV}/c^2$. Figure~\ref{m2pippb} shows the $\pi^+\pi^-p\overline{p}$ invariant mass distribution with 55 events after event selection. The detection efficiency for this channel is 4.47\%. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig16.eps} \caption{\label{m2pippb}Invariant mass distribution of $\pi^+\pi^-p\overline{p}$ for $\psi(2S) \rightarrow \gamma\pi^+\pi^-p\overline{p}$ candidates (dots with error bars). The shaded histogram is background mainly from $\psi(2S) \rightarrow \pi^0\pi^+\pi^-p\overline{p}$ and $\psi(2S) \rightarrow \pi^+\pi^- J/\psi, J/\psi \rightarrow \gamma p\overline{p}$.} \end{figure} The dominant backgrounds are $\psi(2S) \rightarrow \pi^0\pi^+\pi^-p\overline{p}$ and background remaining from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi, J/\psi \rightarrow \gamma p\overline{p}$. The detection efficiencies for these two background channels are determined by MC simulation to be 0.35\% and 0.18\%, respectively. 
For the first background channel, using ${\cal B}(\psi(2S) \rightarrow \pi^0\pi^+\pi^-p\overline{p})$ measured by CLEO~\cite{chic23hs}, the number of background events remaining is estimated to be $35.8\pm 3.5$. Similarly, the estimated number of background events from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi, J/\psi \rightarrow \gamma p\overline{p}$ is $1.7\pm0.4$. Subtracting backgrounds, the number of $\psi(2S) \rightarrow \gamma\pi^+\pi^-p\overline{p}$ events is $17\pm 7$, and the corresponding branching fraction is \begin{equation} {\cal B}(\psi(2S) \rightarrow \gamma\pi^+\pi^-p\overline{p}) = (2.8\pm 1.2)\times10^{-5}, \nonumber \end{equation} where the error is statistical. \subsection{\boldmath $\psi(2S) \rightarrow \gamma 3(\pi^+\pi^-)$} For $\psi(2S) \rightarrow \gamma 3(\pi^+\pi^-)$, six charged tracks are required. The backgrounds from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi$ and $\psi(2S) \to K^0_S K^0_S \pi^+\pi^-$ are removed by eliminating events having the recoil mass of any pion pair satisfying $|m^{\pi^+\pi^-}_{recoil}-m_{J/\psi}|<0.05~\mbox{GeV}/c^2$ or having a pion pair in the $K_S^0$ mass region from 0.47 to 0.53 GeV/$c^2$. The remaining backgrounds mainly come from processes with multi-photon final states, such as $\psi(2S) \rightarrow \pi^0 3(\pi^+\pi^-), \gamma\pi^0 3(\pi^+\pi^-),$ and $\pi^0\pi^0 3(\pi^+\pi^-)$. Their contamination is estimated using MC simulation. Figure~\ref{m6pi} shows the $3(\pi^+\pi^-)$ invariant mass distribution after event selection with 118 events observed. The detection efficiency for this channel is 1.97\%. Using the $\chi^2$ fitting method described in Section~\ref{bkgs}, the upper limit on the number of signal events is 45 at the 90\% C.L., and the upper limit on the branching fraction after considering systematic uncertainties is $B(\psi(2S) \to \gamma 3(\pi^+\pi^-)) < 17 \times 10^{-5}$. 
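As a rough cross-check of the counting arithmetic above (a sketch, not the analysis code), the quoted $\gamma\pi^+\pi^-p\overline{p}$ branching fraction can be reproduced from the quoted event counts, assuming a total $\psi(2S)$ sample of about $14\times10^{6}$ events (the size of the inclusive MC sample mentioned earlier; the true normalization used by the analysis may differ slightly):

```python
# Assumed sample size and numbers quoted in the text for gamma pi+ pi- p pbar
N_PSI2S = 14e6          # assumed total psi(2S) sample size (illustration only)
n_obs = 55              # candidates after selection
n_bkg = 35.8 + 1.7      # pi0 pi+pi- p pbar plus pi+pi- J/psi backgrounds
eff = 0.0447            # detection efficiency (4.47%)

n_sig = n_obs - n_bkg                 # ~17 signal events after subtraction
branching = n_sig / (eff * N_PSI2S)   # B = N_sig / (eps * N_psi(2S))
stat_err = 7 / (eff * N_PSI2S)        # propagate the quoted +-7 event error
```

This lands at about $2.8\times10^{-5}$ with a statistical error near $1.1\times10^{-5}$, consistent with the quoted $(2.8\pm1.2)\times10^{-5}$ to rounding.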
\begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig17.eps} \caption{\label{m6pi}The $3(\pi^+\pi^-)$ invariant mass distribution for $\psi(2S) \rightarrow \gamma 3(\pi^+\pi^-)$ candidates (dots with error bars). The shaded histogram is background mainly from processes with multi-photon final states, such as $\psi(2S) \rightarrow \pi^0 3(\pi^+\pi^-), \gamma\pi^0 3(\pi^+\pi^-),$ and $\pi^0\pi^0 3(\pi^+\pi^-)$.} \end{figure} \subsection{\boldmath $\psi(2S) \rightarrow \gamma 2(\pi^+\pi^-)K^+K^-$} \subsubsection{Background estimation} First, background from $\psi(2S) \rightarrow \eta J/\psi$, $\eta \to \gamma \pi^+\pi^-$, $J/\psi \rightarrow \pi^+ \pi^- K^+ K^-$ is rejected by requiring $m_{2(\pi^+\pi^-)K^+K^-}<2.9$ GeV$/c^2$. The dominant backgrounds remaining are $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)K^+K^-$ and $\gamma\pi^0 2(\pi^+\pi^-)K^+K^-$. Branching fractions for these are not currently available, so they are measured using our $\psi(2S)$ data sample. For $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)K^+K^-$, the number of good photons is required to be $N_\gamma=2$ or $3$. A kinematic fit is performed under the $\psi(2S)\to\gamma\gamma 2(\pi^+\pi^-)K^+K^-$ hypothesis running over all selected photons, and the combination with the smallest $\chi^2$ is retained. Background from $\psi(2S) \rightarrow \pi^+ \pi^- J/\psi$, $J/\psi \rightarrow \gamma \gamma 2(\pi^+ \pi^-) K^+K^-$ is rejected by requiring $|m^{\pi^+\pi^-}_{recoil}-m_{J/\psi}|>0.05$ $\hbox{GeV}/c^2$. The possible backgrounds from $\psi(2S)\to\gamma 2(\pi^+\pi^-)K^+K^-$ and $3\gamma 2(\pi^+\pi^-)K^+K^-$ are rejected by requiring the $\chi^2$ value for the signal to be less than those for the backgrounds. To remove backgrounds from $\psi(2S)\to\gamma\chi_{cJ}$ decays, we require $m_{2(\pi^+\pi^-)K^+K^-}<3.38$ GeV$/c^2$. 
Eight main peaking background channels, in which $\psi(2S) \rightarrow \eta J/\psi$, $\pi^0 J/\psi$, and $\gamma \chi_{cJ}$ decay into the same final states, are simulated and fitted using the same procedure and selection criteria, and $9.8\pm4.5$ background events are obtained. Using the 14 million inclusive MC sample, $14.7\pm5.7$ background events are found, which is consistent with the simulation result within the statistical error. Figure~\ref{5pi2k} shows the $\gamma\gamma$ invariant mass distribution for $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)K^+K^-$ candidate events. A fit is performed with a $\pi^0$ signal shape determined from MC simulation plus a third order polynomial for the background, and the number of $\pi^0$ signal events is determined to be $57.4\pm9.8$. After subtracting peaking backgrounds, the number of signal events is $47.6\pm10.8$. The detection efficiency determined from MC simulation using a phase space generator is $0.41\%$ below $m_{2(\pi^+\pi^-)K^+K^-}=3.38 \textrm{ GeV}/c^2$, and the branching fraction is determined to be: \begin{equation} {\cal B}(\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)K^+K^-) = (8.39\pm1.91)\times 10^{-4} \nonumber \end{equation} with the requirement $m_{2(\pi^+\pi^-)K^+K^-}<3.38 \textrm{ GeV}/c^2$, where the error is statistical. The effect of possible intermediate resonances is not considered. Assuming phase space production, the branching fraction extrapolated to the full $m_{2(\pi^+\pi^-)K^+K^-}$ energy region is determined to be ${\cal B}(\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)K^+K^-) = (11.5\pm2.6)\times 10^{-4}$. \begin{figure}[htpb]\centering \includegraphics[width=0.45\textwidth]{fig18.eps} \caption{\label{5pi2k} The $\gamma\gamma$ invariant mass distribution for $\psi(2S) \rightarrow \gamma\gamma 2(\pi^+\pi^-)K^+K^-$ candidate events. Dots with error bars are data, and the blank histogram is the fit with a $\pi^0$ signal shape determined from MC simulation plus a third order polynomial for the background. 
The curve is the fitted background.} \end{figure} For $\psi(2S) \rightarrow \gamma\pi^0 2(\pi^+\pi^-)K^+K^-$, the number of good photons is required to be $N_\gamma=3$ or 4. A kinematic fit is performed under the $\psi(2S)\to3\gamma 2(\pi^+\pi^-)K^+K^-$ hypothesis running over the selected photons; the combination with the smallest $\chi^2$ is retained. Possible backgrounds from $\psi(2S) \rightarrow (n\gamma)2(\pi^+\pi^-)K^+K^-$ with $n=1$, $2$, and $4$ and from $\psi(2S)\to3\gamma 3(\pi^+\pi^-),~3\gamma\pi^+\pi^-2(K^+K^-)$, and $3\gamma K^{\pm}\pi^{\mp}2(\pi^+\pi^-)$ are rejected by requiring that the $\chi^2$ of the signal is less than those of the backgrounds. Background from $\psi(2S)\to\pi^+\pi^- J/\psi,~J/\psi\to3\gamma \pi^+\pi^- K^+K^-$ is rejected by requiring $|m^{\pi^-\pi^+}_{recoil}-m_{J/\psi}|>0.05$ GeV/$c^2$, and backgrounds from $\psi(2S) \rightarrow \pi^0 J/\psi,~\eta J/\psi,~\pi^0\pi^0 J/\psi,~J/\psi\to2(\pi^+\pi^-)K^+K^-$ are rejected with the requirement $|m_{2(\pi^+\pi^-)K^+K^-}-m_{J/\psi}|>0.05$ GeV/$c^2$. We select the $\pi^0$ from the $\gamma\gamma$ combinations as the one with invariant mass $m_{\gamma\gamma}$ closest to $m_{\pi^0}$. To remove backgrounds from $\psi(2S)\to\gamma\chi_{cJ}$ decays, $m_{\pi^0 2(\pi^+\pi^-)K^+K^-}<3.38$ GeV$/c^2$ is required. After event selection, no significant $\pi^0$ candidates are observed. A fit with a $\pi^0$ shape determined from MC simulation plus a second order Legendre polynomial for the background yields $27.1\pm8.5$ events. Fitting in the same way a histogram of 20 MC simulated background modes, $21.2\pm10.6$ background events are obtained. The detection efficiency determined from MC simulation is $5.8\times 10^{-4}$. 
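For orientation (a sketch only; the actual limit quoted next is computed with POLE), an event-level upper limit scales into a branching-fraction limit simply through the efficiency and the sample size. The event limit of 24.9 is taken from Table~\ref{Tot-nev}, and a total $\psi(2S)$ sample of about $14\times10^{6}$ is assumed here for illustration:

```python
# Naive scaling of an event-level 90% C.L. limit into a branching-fraction limit
N_PSI2S = 14e6   # assumed total psi(2S) sample size (illustration only)
n_upper = 24.9   # 90% C.L. upper limit on signal events (Table Tot-nev)
eff = 5.8e-4     # detection efficiency for gamma pi0 2(pi+pi-) K+K-

b_upper = n_upper / (eff * N_PSI2S)  # about 3.1e-3
```

The naive scaling already lands near the POLE-based number, which folds in the systematic uncertainties properly.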
The upper limit on the branching fraction at the $90\%$ C.L., determined using {\bf POLE} \cite{pole} and including systematic uncertainties, is $$B(\psi(2S) \rightarrow \gamma\pi^0 2(\pi^+\pi^-)K^+K^-)<3.1\times 10^{-3}.$$ \subsubsection{Signal analysis} For $\psi(2S) \rightarrow \gamma 2(\pi^+\pi^-)K^+K^-$, six charged tracks are required, and two of them must be identified as kaons. The background from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi, J/\psi \to \gamma\,+$ four charged particles is removed by requiring $|m^{\pi^+\pi^-}_{recoil} - m_{J/\psi}| > 0.05$ GeV$/c^2$, and the backgrounds from $\psi(2S) \to \gamma 3(\pi^+\pi^-), \gamma K^\pm\pi^\mp 2(\pi^+\pi^-)$ and $\gamma 2(K^+K^-) \pi^+\pi^-$ are rejected by requiring the $\chi^2$ values for the signal to be smaller than for the backgrounds. The background from $\psi(2S)\to\eta J/\psi\to \gamma 2(\pi^+\pi^-)K^+K^-$ is rejected by requiring $m_{2(\pi^+\pi^-)K^+K^-}<2.9~\textrm{GeV}/c^2$. Figure~\ref{m2k4pi} shows the $2(\pi^+\pi^-)K^+K^-$ invariant mass distribution, where the shaded histogram is background mainly from $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)K^+K^-$ and $\gamma\pi^0 2(\pi^+\pi^-)K^+K^-$. For the $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)K^+K^-$ background channel, the branching fraction of $(8.39\pm1.91)\times 10^{-4}$ is used since the MC sample is produced with $m_{2(\pi^+\pi^-)K^+K^-}<3.38$ $\hbox{GeV}/c^2$. After subtracting all backgrounds, 17 events are obtained. The detection efficiency for this channel is 0.69\%. The upper limit on the number of signal events is 15.5 at the 90\% C.L., and the upper limit on the branching fraction after considering systematic uncertainties is $B(\psi(2S) \to \gamma 2(\pi^+ \pi^-)K^+K^-) < 22 \times 10^{-5}$. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{fig19.eps} \caption{\label{m2k4pi}The $2(\pi^+\pi^-)K^+K^-$ invariant mass distribution for $\psi(2S) \rightarrow \gamma 2(\pi^+\pi^-)K^+K^-$ candidates (dots with error bars). 
The shaded histogram is background mainly from $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)K^+K^-$ and $\gamma\pi^0 2(\pi^+\pi^-)K^+K^-$.} \end{figure} \begin{table*} \begin{center} \caption{\label{syserr}Summary of systematic errors (\%), where WR, $\varepsilon_\gamma$, $K^0_S$ rec., and MC denote the wire resolution, photon efficiency, the error for $K^0_S$ reconstruction, and MC statistics, respectively. The sixth column gives the uncertainties due to the $\chi^2$ fits or the $K^{\ast}$ fit.} \begin{tabular}{lcccccccccr} \hline \hline Mode & WR & $\varepsilon_\gamma$ & PID & $K^0_S$ rec. & fit & Branching Fractions &Background&$N_{\psi(2S)}$&MC&Total\\\hline $\gamma p\overline{p}$ &6.3&2.0&4.0&---&---&---&9.4&4.0&0.5&12.8 \\ $\gamma p\overline{p}\pi^0$ &11.6 & 6.0 & 4.0 & ---& ---& ---&14.3 & 4.0 &3.0 & 20.4\\ $\gamma 2(\pi^+\pi^-)$ &5.0&2.0&8.0&---&3.0 ($\chi^2$ fit)&---&6.4&4.0&1.0&12.7\\ $\gamma K^0_S K^+\pi^- +c.c.$ &5.0&2.0&---&3.4&---&---&11.8&4.0&1.0&14.1\\ $\gamma K^+K^-\pi^+\pi^-$ &10.7 &2.0 &8.0&--- & 3.0 ($\chi^2$ fit)&--- & 17.2 &4.0&2.2&22.6\\ $\gamma K^{\ast0} K^+\pi^- +c.c.$&10.7 &2.0& 8.0&--- & 8.8 ($K^{\ast}$ fit)&--- & 10.0&4.0&1.2&19.5\\ $\gamma K^{\ast0}\overline{K}^{\ast0}$&10.7 &2.0& 8.0&--- & --- &---& 10.0&4.0&1.1&17.4\\ $\gamma\pi^+\pi^-p\overline{p}$ &10.4&2.0&8.0&---&---&--- &20.6&4.0&1.1&24.9\\ $\gamma 2(K^+K^-)$ &11.1&2.0&8.0 &---&---&---&14.2&4.0&1.6&20.3\\ $\gamma 3(\pi^+\pi^-)$&5.0&2.0&---&---&---&---&---&4.0&2.2&7.0\\ $\gamma 2(\pi^+\pi^-)K^+K^-$&8.7&2.0&4.0&---&3.0 ($\chi^2$ fit)&--- &25.0&4.0&3.0&27.5 \\ $\gamma\pi^0 2(\pi^+\pi^-)K^+K^-$& 9.1&6.0&4.0&---&---&---&6.0&4.0&3.1&14.0\\ $p\overline{p}\pi^0\pi^0$ &11.7 & 8.0 & 4.0 & ---& ---& ---& 4.3 & 4.0 & 1.7 & 15.9\\ $\pi^0 2(\pi^+\pi^-)$ &10.0&4.0&8.0&---&---&---&1.8&4.0&1.0&14.2\\ $\omega\pi^+\pi^-$ &10.0&4.0&8.0&---&---& 0.8 ($\omega$ Br)&2.0&4.0&1.0&14.2\\ $\omega f_2(1270)$ &10.0&4.0&8.0&---&---&3.1 ($f_2$ Br)&10.0&4.0&1.0&17.5\\ $b_1^\pm\pi^\mp$ &10.0&4.0&8.0&---&---&0.8 ($b_1$ 
Br)&1.0&4.0&1.0&14.1\\ $\pi^0 K^0_S K^\pm\pi^\mp$&10.0&4.0&---&3.4 &---&---&3.0&4.0&1.5&12.5\\ $K^\pm\rho^\mp K^0_S$&10.0&4.0&---&3.4&---&---&8.0&4.0&1.5&14.5\\ $\pi^0 2(\pi^+\pi^-)K^+K^-$&13.5&4.0&4.0&---&---&---&13.3&4.0&1.1&20.2\\ \hline \hline \end{tabular} \end{center} \end{table*} \section{Systematic errors} Systematic errors on the branching fractions, listed in Table~\ref{syserr}, mainly originate from the MC statistics, the track error matrix, the kinematic fit, particle identification, the photon efficiency, the $\chi^2$ fit method, the uncertainty of the branching fractions of intermediate states (taken from the PDG~\cite{PDG}), the uncertainty of the background estimation, and the total number of $\psi(2S)$ events. \begin{enumerate} \item The systematic error caused by the MDC tracking and the kinematic fit is estimated by using simulations with different MDC wire resolutions~\cite{simbes}. The systematic error ranges from 5\% to 13.5\% depending on the number of charged tracks in the different channels. \item The photon detection efficiency was studied with $J/\psi \to \pi^+\pi^-\pi^0$ events~\cite{simbes}, and the difference between data and MC simulation is about $2\%$ for each photon. \item Pure $\pi$ and $K$ samples were selected, and the particle identification efficiency was measured as a function of track momentum. On average, a $1.3\%$ efficiency difference per $\pi$ track and a $1.0\%$ difference per $K$ track are observed between data and MC simulation. We take $2.0\%$ for each charged particle identification as a conservative estimate of the systematic error. \item In order to estimate the systematic error caused by the differences of the $\chi^2$ distributions between data and MC simulation, we use selected samples of $\psi(2S) \to \gamma \chi_{c0}$, $\chi_{c0}\to K^+ K^- \pi^+ \pi^-$ and $\pi^+ \pi^-\pi^+ \pi^-$ to compare the $\chi^2$ shapes of data and MC, because these two samples have similar final states and sufficient statistics. 
The difference is about 3\%, which is taken as the systematic error of the $\chi^2$ fit method. We also performed an input-output study of the $\chi^2$ fit, and found the difference between input and output values is very small ($<0.5\%$) and is neglected. \item The background uncertainties are estimated by changing the order of the polynomial or the fitting range used. The errors on the branching fractions of the main backgrounds ($\psi(2S) \to \pi^0 + hs$) have also been considered and included. The uncertainty of the background estimation varies from 1\% to 25\% depending on the channel and background level. \item The uncertainty of the total number of $\psi(2S)$ events is 4\%~\cite{pspscan}. \end{enumerate} Adding up all these sources in quadrature, the total systematic errors range from 7\% to 28\% depending on the channel. \section{Results and conclusions} Figure~\ref{diffbr} shows the differential branching fractions for $\psi(2S)$ decays into $\gamma p\overline{p}$, $\gamma 2(\pi^+\pi^-)$, $\gamma K^+K^-\pi^+\pi^-$, and $\gamma K^0_S K^+ \pi^- + c.c.$, and the numbers of events extracted for each decay mode with $m_{hs}<2.9~\mbox{GeV}/c^2$ are listed in Table~\ref{Tot-nev}. Broad peaks, which are similar to those observed in $J/\psi$ decays into the same final states~\cite{jpsi-gppb,g4pi}, appear in the $m_{p\overline{p}}$ and $m_{4\pi}$ distributions at masses between 1.9 and 2.5~$\mbox{GeV}/c^2$ and between 1.4 and 2.2~$\mbox{GeV}/c^2$, respectively. Possible structure within these broad peaks cannot be resolved with the current statistics. No obvious structure is observed in other final states. The branching fractions for $m_{hs} < 2.9~\mbox{GeV}/c^2$ in this paper sum up to 0.26\%~\cite{note1} of the total $\psi(2S)$ decay width, which is about a quarter of the total expected radiative $\psi(2S)$ decays. This indicates that a larger data sample is needed to search for more decay modes and to resolve the substructure of $\psi(2S)$ radiative decays. 
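The quadrature combination described at the end of the systematic errors section can be checked against any row of Table~\ref{syserr}; as a sketch, for the $\gamma p\overline{p}$ mode:

```python
import math

# Per-source systematic errors (%) for the gamma p pbar row of Table syserr:
# wire resolution, photon efficiency, PID, background, N_psi(2S), MC statistics
sources = [6.3, 2.0, 4.0, 9.4, 4.0, 0.5]

total = math.sqrt(sum(s**2 for s in sources))  # add independent sources in quadrature
```

The same one-liner reproduces the "Total" column for the other rows within rounding.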
\begin{figure}\centering \includegraphics[width=0.45\textwidth]{fig20.eps} \caption{ \label{diffbr} Differential branching fractions for $\psi(2S)$ decays into $\gamma p\overline{p}$, $\gamma 2(\pi^+\pi^-)$, $\gamma K^+K^-\pi^+\pi^-$, and $\gamma K^0_S K^+ \pi^- + c.c.$ Here $m_{hs}$ is the invariant mass of the hadrons in each final state. For each point, the smaller vertical error is the statistical error, while the larger one is the sum of statistical and systematic errors. } \end{figure} Table~\ref{Br-pi0bg} lists the results of $\psi(2S)$ decays into $\pi^0$ + hadrons together with the world averaged values~\cite{PDG}, and values of $Q_h~[={\cal B}(\psi(2S) \rightarrow h)/{\cal B}(J/\psi \rightarrow h)]$. For $\psi(2S) \rightarrow \pi^0 2(\pi^+\pi^-)$ decay, intermediate resonances including $\sigma~[f_0(600)]$, $f_2(1270)$, $\omega$, and $b_1(1235)$ are observed, and the measurement of ${\cal B}[\psi(2S) \rightarrow \omega f_2(1270)]$ agrees with the previous measurement using the same data sample~\cite{bes2VP}. The $\rho^\pm$ resonance is observed in the $\psi(2S) \rightarrow \pi^0 K^0_S K^+ \pi^- + c.c.$ decay mode. \begin{table*} \begin{center} \caption{\label{Tot-nev} Results for $\psi(2S) \rightarrow \gamma + hadrons$. For each final state, the following quantities are given: the number of events in $\psi(2S)$ data, $N^{Tot}$; the number of background events from $\psi(2S)$ decays and continuum, $N^{Bg}$; the number of signal events, $N^{Sig}$; the weighted average efficiency, $\varepsilon$; and the branching fraction with statistical and systematic errors or the upper limit on the branching fraction at the 90\% C.L. For all the radiative channels, except the $\gamma p\overline{p}\pi^0$ and $\gamma\pi^0 2(\pi^+\pi^-)K^+K^-$ modes, we require $m_{hs}<2.9~\mbox{GeV}/c^2$. The branching fraction for $\gamma\pi^0 2(\pi^+\pi^-)K^+K^-$ is measured with the requirement $m_{\pi^0 2(\pi^+\pi^-)K^+K^-}<3.38$ GeV/$c^2$. 
Possible interference effects for the modes with intermediate states are ignored. } \begin{tabular}{cccccc} \hline \hline Mode & $N^{Tot}$ & $N^{Bg}$ & $N^{Sig}$ & $\varepsilon$(\%) & ${\cal B}(\times 10^{-5})$\\\hline $\gamma p\overline{p}$ & $329$ & $187$ & $142\pm18$ & 35.3 & 2.9$\pm$0.4$\pm$0.4 \\ $\gamma p\overline{p}\pi^0$ & $345$ & $219$ & $126 \pm 38$ & 8.94 & $10.1\pm3.1\pm2.1$\\ $\gamma 2(\pi^+\pi^-)$ & $1697$ & $1114$ & $583\pm41$ & 10.4 & 39.6$\pm$2.8$\pm$5.0\\ $\gamma K^0_S K^+\pi^- +c.c.$ & $-$ & $-$ & $115\pm16$ & 4.83 & 25.6$\pm$3.6$\pm$3.6 \\ $\gamma K^+ K^-\pi^+\pi^-$ &$361$ &$229$ &$132\pm19$ & 4.94 & 19.1$\pm$2.7$\pm$4.3 \\ $\gamma K^{*0} K^+\pi^-+c.c.$&$-$ &$-$ & $237\pm39$ & 6.86 & 37.0$\pm$6.1$\pm$7.2\\ $\gamma K^{\ast0}\overline{K}^{\ast0}$&$58$&$17$&$41\pm8$&2.75& 24.0$\pm$4.5$\pm$5.0\\ $\gamma \pi^+\pi^-p\overline{p}$& $55$ & $38$ & $17\pm7$ &4.47 & 2.8$\pm$1.2$\pm$0.7 \\ $\gamma 2(K^+K^-)$ & $15$ & $8$ & $<14$ & 2.93& $<4.0$\\ $\gamma 3(\pi^+\pi^-)$& $118$ & $95$ & $<45$& 1.97 & $<17$\\ $\gamma 2(\pi^+\pi^-)K^+K^-$&$17$ & $13$ & $<15.5$ & 0.69 & $<22$ \\ $\gamma\pi^0 2(\pi^+\pi^-)K^+K^-$ & $27$ & $21$ & $<24.9$ & $0.058$ & $<310$\\ \hline \hline \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{\label{Br-pi0bg} Results of $\psi(2S) \rightarrow \pi^0 + hadrons$. Here $N^{Sig}$ is the number of signal events, $\varepsilon$ is the detection efficiency, ${\cal B}$ is the measured branching fraction, ${\cal B}^{\textrm{PDG}}$ is the world averaged value~\cite{PDG}, and $Q_h ={\cal B}(\psi(2S) \rightarrow h)/{\cal B}(J/\psi \rightarrow h)$. 
The branching fraction for $\pi^0 2(\pi^+\pi^-)K^+K^-$ is measured with the requirement $m_{2(\pi^+\pi^-)K^+K^-}<3.38$ GeV/$c^2$.} \begin{tabular}{cccccccc} \hline \hline Mode: $h$ & $N^{Sig}$ & $\varepsilon$(\%) & ${\cal B}(\times 10^{-4})$ & ${\cal B}^{\textrm{PDG}} (\times 10^{-4})$& $Q_h$(\%)\\ \hline $\pi^0 2(\pi^+\pi^-)$ & $2173\pm53$ & $6.32$ & $24.9\pm0.7\pm3.6$&$23.7\pm 2.6$&$10.5\pm2.0$\\ $\omega\pi^+\pi^-$ & $386\pm23$ & $3.74$ & $8.4\pm0.5\pm1.2$&$6.6\pm1.7$&$11.7\pm2.4$\\ $\omega f_2(1270)$ & $57\pm13$& $3.65$ & $2.3\pm0.5\pm0.4$ &$2.0\pm0.6$&$5.4\pm0.6$\\ $b_1^\pm\pi^\mp$& $202\pm21$ & $3.24$ & $5.1\pm0.6\pm0.8$ &$3.6\pm0.6$&$17.0\pm4.2$ \\ $p\overline{p}\pi^0\pi^0$ & $203 \pm 27$ & 8.30 & $1.75\pm0.21\pm0.28$ &---&---\\ $\pi^0 K^0_S K^\pm\pi^\mp$&$361\pm25$&4.40&$8.9\pm0.6\pm1.1$&---&---\\ $K^\pm\rho^\mp K^0_S$&$100\pm20$&3.80&$2.9\pm0.6\pm0.4$&---&---\\ $\pi^0 2(\pi^+\pi^-)K^+K^-$ & $48\pm11$ & $0.41$ & $8.4\pm1.9\pm1.7$ &---&---\\ \hline \hline \end{tabular} \end{center} \end{table*} In summary, we report measurements of the branching fractions of $\psi(2S)$ decays into $\gamma p\overline{p}$, $\gamma 2(\pi^+\pi^-)$, $\gamma K^0_S K^+ \pi^- + c.c.$, $\gamma K^+K^-\pi^+\pi^-$, $\gamma K^{\ast0} K^-\pi^++c.c.$, $\gamma K^{\ast0} \overline{K}^{\ast0}$, $\gamma\pi^+\pi^-p\overline{p}$, $\gamma 2(K^+K^-)$, $\gamma 3(\pi^+\pi^-)$, $\gamma 2(\pi^+\pi^-)K^+K^-$ and the differential branching fractions for $\psi(2S)$ decays into $\gamma p\overline{p}$, $\gamma 2(\pi^+\pi^-)$, $\gamma K^+K^-\pi^+\pi^-$, and $\gamma K^0_S K^+ \pi^- + c.c.$ with hadron invariant mass less than 2.9~$\mbox{GeV}/c^2$. We also report branching fractions of $\psi(2S)$ decays into $\gamma p\overline{p}\pi^0$, $p\overline{p}\pi^0\pi^0$, $\pi^0 K^0_S K^+ \pi^- + c.c.$, $K^\pm\rho^\mp K^0_S$, $\pi^0 2(\pi^+\pi^-)K^+K^-$ and $\gamma\pi^0 2(\pi^+\pi^-)K^+K^-$. 
The measurements of $\psi(2S)$ decays into $\pi^0 2(\pi^+\pi^-)$, $\omega\pi^+\pi^-$, $\omega f_2(1270)$, and $b_1^\pm\pi^\mp$ are consistent with previous measurements~\cite{PDG} and the recent measurements by the CLEO collaboration~\cite{chic23hs}. \acknowledgments The BES collaboration thanks the staff of BEPC and the computing center for their hard efforts. This work is supported in part by the National Natural Science Foundation of China under contracts Nos. 10491300, 10225524, 10225525, 10425523, 10625524, 10521003, 10775142, the Chinese Academy of Sciences under contract No. KJ 95T-03, the 100 Talents Program of CAS under Contract Nos. U-11, U-24, U-25, and the Knowledge Innovation Project of CAS under Contract Nos. U-602, U-34 (IHEP), the National Natural Science Foundation of China under Contract No. 10225522 (Tsinghua University), and the Department of Energy under Contract No. DE-FG02-04ER41291 (U. Hawaii).
- IZI Passive speaker, to connect to the active speaker in order to get stereo sound. Marine Digital Media Receiver CMS5. Marine audio systems. Simply enjoy the great sound! Loudspeakers and remote controls. Linak lifting kit for a flat screen.
Q: process files and their names in a directory I am new to the terminal world and would like to process some images in a directory. Some examples of images are as follows (they are from the foggy cityscapes dataset):
frankfurt_000000_000294_leftImg8bit_foggy_beta_0.01.png
frankfurt_000000_000294_leftImg8bit_foggy_beta_0.02.png
frankfurt_000000_000294_leftImg8bit_foggy_beta_0.005.png
munster_000000_000019_leftImg8bit_foggy_beta_0.01.png
munster_000000_000019_leftImg8bit_foggy_beta_0.02.png
munster_000000_000019_leftImg8bit_foggy_beta_0.005.png
Note that the _leftImg8bit_foggy_beta_ part of the names is common to all the images, and the part before it identifies the different images. I would like to first separate these images into three separate sub-directories with respect to the beta suffix of 0.01, 0.02, or 0.005. After separating the files, I would like to remove the part of each file name after _leftImg8bit, for all file names in a subdirectory, while retaining the .png extension. Could someone help with the Linux (CentOS to be specific) terminal commands, as I am not so familiar with them. Thanks in advance.

A: With zsh:
autoload -Uz zmv # best in ~/.zshrc
mkmv() { mkdir -p -- $2:h && mv -- "$@"; }
zmv -n -P mkmv '(*)_leftImg8bit_foggy_beta_(*)(.png)' '$2/$1$3'
(remove the -n (dry-run) if happy).
Or with any POSIX-like shell (though without the safeguards of zmv):
for file in *_leftImg8bit_foggy_beta_*.png; do
  dir=${file#*_leftImg8bit_foggy_beta_}
  dir=${dir%.*}
  mkdir -p -- "$dir" &&
    mv -- "$file" "$dir/${file%%_leftImg8bit_foggy_beta_*}.png"
done
(note the "$dir/" in the mv destination, so the renamed file actually lands in the sub-directory).
The JDHR adopts the rights-based approach, defining it as a tool to reduce vulnerabilities and promote the fundamental rights of the people, ensuring peace, well-being and social justice for current and future generations. The JDHR produces knowledge with a view to building the capacity of media/journalists and, through them, the people and civil society on issues of public interest. The research program at JDHR is based on policy advice, advocacy and training. Promotion of a rights-based approach to reduce vulnerabilities. Creation of awareness, and active support for democracy and human rights in government, private sector, NGOs, academia and the general public. High-quality training for journalists, NGO activists, organizations and individuals to strengthen institutions and build capacity for promotion of the rights-based approach.
Nizar Baraka Details how "Advanced Regionalization" is Advancing Democracy in Morocco – Jean R. AbiNader Updated February 12, 2016 09:38 AM EST Plan for the Sahara only the Beginning for Empowering All Moroccans Tamara Wittes, director of the Center for Middle East Policy, with Nizar Baraka at the roundtable. Jean R. AbiNader, Exec. Dir., Moroccan American Trade and Investment Center (MATIC) At a recent roundtable discussion in Washington, DC, The Honorable Nizar Baraka, former Minister of Finance and Economy, who serves as president of the Economic, Social, and Environmental Council (CESE) in Morocco, provided his analysis of the regionalization program being rolled out in Morocco, and how this is already changing the political space in the country. Mr. Baraka began by reviewing the CESE process for developing the first study of "the South" (the Saharan provinces), which included public hearings with testimony from some 1500 people as well as dozens of studies prepared by experts, which resulted in recommendations for extensive restructuring of local government and a robust economic development strategy. He explained that what is being done in the South is the beginning of "advanced regionalization" for all of Morocco. He believes this is part of the implementation of shared decision-making and devolution of power promised in the 2011 Constitution. Mr. Baraka emphasized that the credibility of regionalization will only become real when citizens participate in local decision-making that affects their daily lives. For example, the Parliament (Chamber of Deputies) is currently debating bills that give Civil Society the capacity to submit proposals and petitions directly to Parliament. There is great economic disparity among the regions in Morocco, he explained. For example, 52% of Morocco's GDP is produced in four regions, while 53% of its doctors practice in two regions. Similarly, the rate of joblessness in the South is twice the national average. 
Baraka insists that the direct election of the region's presidents (the highest locally elected officials), and the five-fold increase in budgets for regional development are strong incentives for citizens to be more involved in local affairs. So the CESE efforts have focused on how the government can create an environment for greater political responsiveness, and part of this campaign is a new economic development model for the region based on public-private partnerships. This includes large-scale investments in diversifying the economy, a new university focused on local needs, particular attention to conservation, and positioning the Sahara as a gateway to sub-Saharan Africa. Economic Diversity to Drive Economic Growth The Sahara is well poised for economic growth. Its GDP is 60% higher than the national average, but some 30% of that is generated by government programs. So the strategy going forward is to deeply engage the private sector to increase investments and jobs. One critical target is to diversify the local economy while protecting the environment. The focus is on empowering individuals to more fully participate in the economy; for example, raising the rate of women in the workforce from a woeful 14% to at least the national average of 25%, and doubling the number of employed youth. Sectors slated for diversification include fishing, aquaculture, value-added farming, renewable energies, downstream phosphate industries, and eco-tourism. Plans have been finalized for a local university focusing on the needs of the region, including professional development of medical personnel, educators, managers, and lawyers; tourism and hospitality; and research and development supporting local industries. 
Given that the South's literacy rate is already 20% higher than the national average, targeted efforts to build on their capabilities through focused programs of higher education should reap short and long term benefits, in terms of jobs and meeting future employer needs. Conserving the environment is also a prime consideration, especially well water, which is overused. Desalination, reuse of gray water, greater efficiency of energy utilization, treatment regulations for well water, a new dam, and a comprehensive campaign to preserve the eco-system in the Bay of Dakhla are the headline items in this effort. Looking at both the supply side, which pushes the growth of the local economy, and the demand side, which is the pull of market needs, Africa is the obvious market. Building a new expressway from Agadir to Dakhla onwards to Mauritania and Senegal, high speed digital connectivity, expanded port facilities, and the export of solar power along an interconnected grid are all in the plans for the next 10 years. It is anticipated that 75% of the targeted $10 billion of investment will come from national government public-private sector partnerships, while the regional governments will contribute the remaining 25%. The goal of these efforts is to create 120,000 jobs and cut unemployment in half. Mr. Baraka discussed other plans underway, which he believes will create a seismic shift in how citizens see their roles in relation to the government. Empowering proactive, engaged, and contributing citizens is the core mission of advanced regionalization, which will require a different mix of incentives in Morocco's different regions. The most important impact, according to him, is that the political space in Morocco has changed forever. This is clear in viewing the evolving role of the media and civil society, debates in Parliament over legislative initiatives, and the pressure on political parties to restructure their governance to reflect issues and priorities. 
More importantly, advanced regionalization will continue this process and move Morocco towards its goal of a new social compact based on engagement and respect.
Comotini is a city in northeastern Greece. It is the capital of the region of Eastern Macedonia and Thrace and of the regional unit of Rhodope. Latitude: 41° 7' 22" North. Longitude: 25° 23' 47" East. Altitude: 19 metres. Demographics: population of the city according to the most recent censuses: 1981: 34,051; 1991: 39,927; 2001: 46,586. External links: Komotini city portal; Komotini commercial portal; Komotini news; Map of Komotini; Aerial photos; Archaeological Museum of Komotini; Byzantine Museum of Komotini; e-city.gr; Tourist guide; Democritus University of Thrace; Panthrakikos Football Club; Shopping and amusement park
\section{Introduction} A Question aware Open Information Extraction (Question aware Open IE) system takes a question and a passage as input and extracts from the passage a semi-structured answer, in tuple format, that can answer the question. Question aware Open IE is both an Open IE task and a question answering task. From the Open IE view, an Open IE system extracts all possible tuples. For example, in Table \ref{table_openIE}, the Open IE system aims to extract four answer tuples from the passage, independent of any question; a question aware Open IE system extracts only the one answer tuple that can answer the question. From the question answering view, the answer of a search engine is a passage; the answer for Machine Reading Comprehension tasks like SQuAD~\cite{rajpurkar2016squad}, TriviaQA ~\cite{joshi2017triviaqa} and NewsQA ~\cite{trischler2016newsqa} is a span of the passage; the answer of MS MARCO ~\cite{nguyen2016msmacro} is a generated sentence. In contrast, the answer for question aware Open IE is a semi-structured tuple, shorter than the passage and longer than a span. Each part of the tuple has a semantic role, which makes it easier for downstream tasks to understand. \\ \begin{table}[t!] \begin{center} \begin{tabular}{p{0.7in}|p{2.3in}} \hline \textbf{Question} & how many albums has the doors sold \\ \hline \textbf{Passage} & although the doors' active career ended in 1973 , their popularity has persisted. according to the riaa, they have sold over 100 million records worldwide, making them one of the best-selling bands of all time.
\\ \hline \textbf{Open IE Result} & \begin{minipage}{0.7\columnwidth} (the doors active career; ended; in 1973) \\ (their popularity; has persisted; although the doors active career ended in 1973) \\ (they; making; them one of the best-selling bands of all time) \\ (they; have sold; 100 million records worldwide) \\ \end{minipage}\\ \hline \textbf{Answer (Question aware Open IE)}& (they; have sold; 100 million records worldwide) \\ \hline \end{tabular} \caption{Example of Open IE and Question aware Open IE. The Open IE Result and the Answer have the same format, (subject; predicate; arguments); there can be more than one argument. The column "Open IE Result" contains tuples extracted by Open IE tools from the passage, independent of the question. The "Answer" is extracted from the passage and can answer the question.} \label{table_openIE} \end{center} \end{table} Current solutions for question aware Open IE follow two approaches, the extractive method and the generative method. The extractive method first extracts all possible answer tuples as candidates from the passage, independent of the question, using Open IE models. It then ranks all the candidates with a matching model between candidate and question. The coverage of the extraction step is crucial for the final performance, because this is a two-step method and the extraction step is independent of the question. Since the first step is extraction, most of the words in the result come from the passage. The generative method concatenates the question and passage as input, and then generates the answer tuple as a concatenated sequence or generates each field one by one. The generative method uses the question and passage at the same stage and does not rely on an extraction model, so it has better interaction between question and passage. However, by removing the extraction step, it no longer exploits the fact that most of the answer words come from the passage.
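To make the extractive pipeline concrete, here is a toy sketch (not the authors' system): the candidate tuples are assumed to come from an upstream Open IE tool, and the learned matching model is replaced by a naive token-overlap score.

```python
# Toy extractive pipeline: rank pre-extracted candidate tuples against the
# question by token overlap. A real system learns this matching function.
def overlap_score(question, candidate):
    q = set(question.lower().split())
    c = set(" ".join(candidate).lower().split())
    return len(q & c) / max(len(c), 1)

def rank_candidates(question, candidates):
    # Return the candidate tuple with the highest overlap with the question.
    return max(candidates, key=lambda t: overlap_score(question, t))

question = "how many albums has the doors sold"
candidates = [
    ("the doors active career", "ended", "in 1973"),
    ("they", "have sold", "100 million records worldwide"),
]
best = rank_candidates(question, candidates)
```

Note that on the Table \ref{table_openIE} example this naive scorer actually prefers the first tuple (it shares "the doors" with the question), which illustrates why the matching model must be learned rather than surface-based.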
To make better use of passage information during generation, we propose a two-stage decoder model for this task, which enables more interaction between question and passage by combining a tagging decoder and a correction decoder. At the first stage, the tagging decoder tags the words which may be useful for answer generation; the output of this step forms a coarse answer. At the second stage, the correction decoder generates a new answer with a step-by-step decoder. The correction decoder can reorder words and add new words to output a fluent answer. We jointly train the two decoders at the same time. We evaluated our model on the WebAssertions dataset ~\cite{yan2018assertion}. Our model achieves a 59.32 BLEU score, which is better than previous generative methods. \begin{table*}[t!] \centering \begin{tabular}{p{1.1in}|p{5.5in}} \hline \bf Question & where is smallville filmed \\ \hline \bf Passage & smallville was primarily filmed in and around vancouver , british columbia , with local businesses and buildings substituting for smallville locations . \\ \hline \bf Answer & smallville; was filmed; in british columbia; with local businesses \\ \hline \bf Tagging Label & $smallville_{S-B}\; was_{P-B}\; primarily_{O}\; filmed_{P-B}\; in_{A_0-B}\; and_{O}\; around_{O}\; vancouver_{O}\; ,_{O}\;$ $british_{A_0-B}\; columbia_{A_0-I}\; ,_{O}\; with_{A_1-B}\; local_{A_1-I}\; businesses_{A_1-I}\; and_{O}\; buildings_{O}\;$ $substituting_{O}\; for_{O}\; smallville_{O}\; locations_{O}\; ._{O}$\\ \hline \bf Tagging Result &smallville, was primarily filmed, vancouver, british columbia, with local businesses \\ \hline \bf Correction Result & smallville, was filmed, in vancouver, british columbia with local businesses \\ \hline \end{tabular} \caption{Example of the tagging label and the model output of the tagging decoder and correction decoder.
The tagging label is created from the answer, since the original dataset only provides answers in tuple format.} \label{table_tagging} \end{table*} \section{Related Work} \textbf{Open Information Extraction (Open IE)} ~\cite{banko2007open,etzioni2008open} aims to extract all (subject, predicate, arguments) tuples from a sentence. To solve this challenge, TextRunner ~\cite{banko2007open} and WOE ~\cite{wu2010WOE} use a self-supervised approach. Many later methods use a rule-based approach, such as ReVerb ~\cite{fader2011ReVerb}, OLLIE ~\cite{schmitz2012OLLIE}, KrakeN ~\cite{akbik2012kraken}, ClausIE ~\cite{del2013clausie} and PropS ~\cite{stanovsky2016props}. Open IE4\footnote{https://github.com/dair-iitd/OpenIE-standalone} extracts tuples from Semantic Role Labeling structures. Stanford Open Information Extraction ~\cite{angeli2015leveraging} uses natural logic inference to extract shorter arguments. Recently, Stanovsky et al. ~\cite{stanovsky2018supOpenIE} proposed a supervised method for Open IE by formulating it as a sequence labeling task. Compared to Open IE, our task has an additional question, so our tagging decoder needs to contain an interaction layer between question and passage. Our tagging decoder is similar to ~\cite{stanovsky2018supOpenIE}, since they have the same output and are trained by supervised learning. However, we have an additional correction decoder to improve answer quality, and we can handle answer fields that are not spans. Current \textbf{Machine Reading Comprehension (MRC)} datasets like SQuAD ~\cite{rajpurkar2016squad}, TriviaQA ~\cite{joshi2017triviaqa} and NewsQA ~\cite{trischler2016newsqa} focus on selecting a span from the passage as the answer. Most MRC models ~\cite{wang2016pointerNetWork,wang2017rnet,yu2018qanet} generate answers by predicting the start and end points of a span. The MS MARCO ~\cite{nguyen2016msmacro} dataset requires generating a sequence which is not a span of the passage. Tan et al.
~\cite{tan2017snet} solve it by first selecting a span from the passage, then generating an answer based on the question, passage and selected span. Similar to Tan et al. ~\cite{tan2017snet}, we also use the idea of coarse-to-fine generation, but the answer of our task is not a span or a sentence. Our answer has structure, each field has a semantic role, and the arguments have dynamic length. Each field does not have to be a span, although most of its words come from the passage. Because of this, we use a sequence labeling method to tag each word in the passage instead of predicting the start and end points of a span. The two stages of our model can be jointly trained. For \textbf{Question aware Open IE}, Yan et al. ~\cite{yan2018assertion} propose two methods, an extractive method and a generative method. The extractive method first extracts all answer tuples from the passage, and ranks them with a matching model between answer candidates and questions. The generative model takes the concatenation of question and passage as input, first generates the representation of each answer field, and then generates each field based on its representation. \section{Our Approach} In this section, we formulate the Question aware Open IE problem and briefly introduce our model. Then we separately introduce each part of our model, including the encoder, tagging decoder, and correction decoder. \subsection{Problem Formulation} Question aware Open IE is a task which, given a question containing $n$ words $Q=\{q_1,q_2,...,q_n\}$ and a passage containing $m$ words $P=\{p_1,p_2,...,p_m\}$, outputs a semi-structured answer that can answer the question based on the passage. The answer consists of a subject, a predicate, and one or more arguments. We represent the answer as $(subject, predicate, argument_1, ..., argument_k)$, $k\geq1$. Each answer field is a natural language word sequence.
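The formulation above can be captured by a minimal data structure; field names here are illustrative, not from the authors' code.

```python
# One question aware Open IE instance: a question, a passage, and a
# semi-structured answer tuple with k >= 1 arguments.
from dataclasses import dataclass
from typing import List

@dataclass
class AnswerTuple:
    subject: str
    predicate: str
    arguments: List[str]  # argument_1 ... argument_k

@dataclass
class Instance:
    question: str  # q_1 ... q_n
    passage: str   # p_1 ... p_m
    answer: AnswerTuple

ex = Instance(
    question="how many albums has the doors sold",
    passage="... they have sold over 100 million records worldwide ...",
    answer=AnswerTuple("they", "have sold", ["100 million records worldwide"]),
)
```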
\subsection{Model Overview} Our model consists of three parts: an encoder, a tagging decoder and a correction decoder. We show our model in Figure \ref{figure_model}. We use two encoders with the same structure to encode the question and passage separately. The tagging decoder then interacts between the encoded question and passage and tags each word in the passage with its semantic role in the answer; it tags all passage words at the same time. The correction decoder then generates an answer based on the tagging result, producing answer words one by one. Our intuition is to use the tagging decoder to highlight the words in the passage and to use the correction decoder to generate a fluent answer based on the tagging result. We show not only that our approach is effective on this dataset, but also that the correction decoder can recover missing words which were not tagged in the first stage. We use an example in Table \ref{table_tagging} to show our idea. The $argument_0$ is not a span of the passage: ''in'' is far from ''british columbia'', but most of the words in the answer are from the passage. The first stage of our model is to tag keywords from the passage. In this case, our tagging decoder tags all location words as arguments, such as ''vancouver'' and ''british columbia''. But the tagging result misses ''in''. The second stage is to generate a fluent answer based on the tagging result. In this case, our correction decoder adds ''in'' compared to the tagging result. We can also notice that the model is able to remove the adverb ''primarily''. Based on our case study, the correction decoder is good at word reordering and small post-editing guided by its language model. \subsection{Encoder} The encoder of our model contains a question encoder and a passage encoder, which are used to encode the question and passage separately. These two encoders have the same structure but different weights in implementation.
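The overall two-stage flow can be sketched as follows; the function bodies are placeholders standing in for the neural components, and the names are ours, not the authors'.

```python
# Two-stage flow: encode question and passage, tag passage words (stage 1),
# then generate the corrected answer from the tagging output (stage 2).
def two_stage_answer(question, passage, encoders, tagger, corrector):
    encode_q, encode_p = encoders
    h_q = encode_q(question)       # question encoder
    h_p = encode_p(passage)        # passage encoder
    h_t, tags = tagger(h_q, h_p)   # stage 1: per-word tag distributions T
    return corrector(h_t, tags)    # stage 2: word-by-word answer generation
```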
The encoder is composed of two basic building blocks, the Multi-Head Attention Block and the Feed-Forward Network Block~\cite{transformer}. We introduce these two building blocks and how to build an encoder with them. \subsubsection{Multi-Head Attention Block (MHBlock)} The core layer of the Multi-Head Attention Block is the Multi-Head Attention Layer ~\cite{transformer}. The input of the Multi-Head Attention Layer contains query($Q$), key($K$) and value($V$). All inputs are matrices: $Q\in\mathbb{R}^{n_q\times d_k}$, $K\in\mathbb{R}^{n_k\times d_k}$, $V\in\mathbb{R}^{n_k\times d_v}$. The output $O$ of the Multi-Head Attention Layer is a matrix too, $O\in\mathbb{R}^{n_q\times d_v}$. We represent this layer as a function $MultiHeadAttention(Q, K, V)$. Intuitively, this layer is a soft dictionary lookup in vector space, where all operations are on vectors. A dictionary in computer science is a set of key-value pairs; a lookup finds the key which equals the query and returns the corresponding value. In Multi-Head Attention, there are $n_k$ key-value pairs; each key is a vector with dimension $d_k$ and each value is a vector with dimension $d_v$. The $n_q$ queries have $n_q$ corresponding outputs. For each query, we calculate an attention score for each key, and use the attention scores as weights to compute a weighted sum of the values; the weighted sum is the output. More details are in Vaswani et al. (2017). The Multi-Head Attention Block has the same input as the Multi-Head Attention Layer, but it requires $d_k=d_v$. The inputs go through a Multi-Head Attention Layer wrapped with a residual connection, and the output then passes through a layer norm layer to get the final output.\\ \begin{equation} \begin{split} MHBlock(Q,K,V)=& \\ LayerNorm(Q+&MultiHeadAttention(Q,K,V)) \end{split} \nonumber \end{equation} \subsubsection{Feed-Forward Network Block (FFNBlock)} The core layer of the Feed-Forward Network Block is the Feed-Forward Network ~\cite{transformer}.
The Feed-Forward Network is a two-layer projection applied to each row of the matrix. \begin{equation} FFN(x)=max(0, xW_1+b_1)W_2+b_2 \nonumber \end{equation} The Feed-Forward Network Block has the same input and output as the Feed-Forward Network. We add the input and the output of the Feed-Forward Network, then pass the sum through a layer norm layer to get the final output. \begin{equation} \begin{split} FFNBlock(x)=LayerNorm(x+FFN(x)) \end{split} \nonumber \end{equation} \subsubsection{Encoder Structure} An encoder is used to map a sequence of words into a sequence of hidden states. The question encoder and the passage encoder have the same structure. The input of the encoder is the word embedding of each word plus the position embedding. We use sine and cosine position embeddings ~\cite{transformer}. The encoder is composed of a stack of $N_e$ identical layers. The outputs of the question encoder and the passage encoder are $h_q$ and $h_p$ respectively. For the question encoder: \begin{equation} \begin{split} h_{q,0}&= Embedding(Q)+W_{pos}\\ h_{q,i}^m&=MHBlock(h_{q,i-1},h_{q,i-1},h_{q,i-1})\quad\forall i\in[1,N_e]\\ h_{q,i}&=FFNBlock(h_{q,i}^m)\quad\quad\quad\quad\quad\quad\quad\;\; \forall i\in[1,N_e]\\ h_q&=h_{q,N_e} \nonumber \end{split} \end{equation} $Embedding(x)$ is an embedding lookup function: it takes word ids and outputs the corresponding word embedding vectors. $W_{pos}$ is the position embedding. $h_{q,i}^m$ is an intermediate result. The passage encoder has the same structure, so we do not formulate it again. \begin{figure*}[t!] \centering \includegraphics[width=0.8\linewidth]{model.png} \caption{Overview of the two-stage model. The Multi-Head Attention Block has three inputs, query(Q), key(K) and value(V). In this figure, we draw them in the order K, V and Q for clarity. For the answer embedding, we use the entire ground truth answer in training. In decoding, the correction decoder generates answer words one by one.
So we only use the already generated answer words as input to generate the next word.} \label{figure_model} \end{figure*} \subsection{Tagging Decoder} The tagging decoder is used to generate the tagging probability distribution for each word in the passage given the question encoding result $h_q$ and the passage encoding result $h_p$. In this subsection, we introduce the tag format, Semantic BIO Tags, and the tagging decoder structure. The output of the tagging decoder is a distribution over tags $T$. Formally, for each word $p_i$ in the passage, the tagging decoder outputs a distribution $p(t_i|P,Q)$, where $t_i$ is the tag for the i-th passage word. We denote the result as $T=\{p(t_1|P,Q),p(t_2|P,Q),...,p(t_m|P,Q)\}$. In our model, we keep it as a continuous probability distribution so as to back-propagate the loss. If we want to give each word an explicit tag, we output the tag with maximum probability. \subsubsection{Semantic BIO Tags} We use semantic BIO tags like Stanovsky et al. (2018) to tag passage words. Each tag combines two parts, a semantic tag for the semantic role in the answer and a BIO tag for the position in a field. The semantic role tag contains subject(S), predicate(P) and arguments(A). Since there can be more than one argument, arguments are also distinguished by position, $A_i$ for the i-th argument. The BIO tag contains Begin(B), Inside(I) and Outside(O). For each continuous subsequence belonging to the same semantic role, we tag the first word as B and the rest of the words as I. After tagging all continuous subsequences, we tag all other words as O. Then we add the semantic role to the BIO tags; if the semantic role is predicate, the tags are extended to P-B, P-I. We show an example in Table \ref{table_tagging}. The predicate has two words, ''was filmed'', but they are not consecutive. For each sub-span, we tag the first word as P-B, so the tags of both ''was'' and ''filmed'' are P-B. Thus there may be more than one word tagged as P-B although there is only one predicate in the answer.
For the same reason, there are two A$_0$-B tags, for ''in'' and ''british'', in the example. \subsubsection{Tagging Decoder Ground Truth} We need to create the ground truth for tagging decoder training ourselves, because the answer in the dataset is in tuple format. Intuitively, when the answer tuple was created, some words were selected from the passage and copied to the answer. Ideally, we want to tag these words and let our model generate the answer based on them. Formally, we need to select some continuous subsequences based on the answer and tag them by the previous tagging rules. Each subsequence must belong to the same semantic role, but each semantic role may correspond to several subsequences. The key challenge is that one word may have multiple occurrences in the passage. We propose a rule-based solution for this problem. Intuitively, for adjacent words in the answer, we prefer to match adjacent words in the passage, and for each field of the answer, we prefer to keep all matches as close as possible. In detail, we match the fields in the order of arguments, subject and predicate, because the arguments are longest and the predicate is shortest. For each field, we first try to match all bi-grams in the passage, keeping multiple occurrences if they exist. Then we match as many single words as possible which are not covered by matched bi-grams; in this step, we minimize the distance between the rightmost and leftmost matched words. In the Open IE task, the predicate is shortest and often a unigram, so we match it last, and we prefer a predicate located between the subject and the arguments. \subsubsection{Tagging Decoder Structure} Compared to the question encoder, the tagging decoder also needs to encode the passage. The difference is that it needs to interact with the question. We achieve this requirement by adding an additional attention layer from passage to question. The tagging decoder is composed of a stack of $N_t$ identical layers.
Each layer consists of three sub-layers: a self-attention layer, a passage-to-question encoding layer and a feed-forward layer. The self-attention layer is a Multi-Head Attention Block used to encode the passage; its query, key and value are identical and are the output of the previous layer. The passage-to-question layer is a Multi-Head Attention Block which is used to interact with the question. Its query is the output of the previous self-attention layer; the key and value are identical and are the output of the question encoder $h_q$. We also tried an interaction layer like BiDAF (Seo et al. 2016), but there was no improvement compared to our model. Formally: \begin{equation} \begin{split} h_{t,0}&= h_{p}\\ h_{t,i}^{t_0}&=MHBlock(h_{t,i-1},h_{t,i-1},h_{t,i-1})\quad \forall i\in[1,N_t]\\ h_{t,i}^{t_1}&=MHBlock(h_{t,i}^{t_0},h_{q},h_{q})\quad\quad\quad\quad\quad \forall i\in[1,N_t]\\ h_{t,i}&=FFNBlock(h_{t,i}^{t_1})\quad\quad\quad\quad\quad\quad\quad\;\: \forall i\in[1,N_t]\\ h_{t}&=h_{t,N_t}\\ \nonumber \end{split} \end{equation} $h_{t,i}$ is the output of the i-th layer, and $h_{t,i}^{t_0}$ and $h_{t,i}^{t_1}$ are two intermediate results in the same layer. $h_{t}$ is the final output of these $N_t$ layers. We then use a linear projection and softmax on each item of $h_t$ to calculate the tag probability distribution $p(t_i|P,Q)$ of each passage word $p_i$. \begin{equation} p(t_i|P,Q)=softmax(h_{t,i}*W_t) \nonumber \end{equation} $h_{t,i}$ is the i-th row of $h_t$. $W_t$ is a linear projection matrix.
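A greatly simplified sketch of producing the semantic BIO ground truth from an answer tuple is shown below; the paper's bi-gram matching and distance-minimizing preferences are omitted, and the function and role names are illustrative.

```python
# Greedy per-field matching: each answer field's words are matched against
# passage tokens; the first word of each matched run gets the B tag, the
# following matched words get I, everything else stays O.
def bio_tags(passage_tokens, fields):
    # fields: list of (role, field_words), e.g. ("S", ["smallville"])
    tags = ["O"] * len(passage_tokens)
    for role, words in fields:
        prev_matched = False
        for i, tok in enumerate(passage_tokens):
            if tok in words and tags[i] == "O":
                tags[i] = f"{role}-I" if prev_matched else f"{role}-B"
                prev_matched = True
            else:
                prev_matched = False
    return tags

passage = ["smallville", "was", "primarily", "filmed", "in", "vancouver"]
fields = [("S", ["smallville"]), ("P", ["was", "filmed"]), ("A0", ["in", "vancouver"])]
```

Note how the non-consecutive predicate words ''was'' and ''filmed'' both receive P-B here, mirroring the behaviour described for Table \ref{table_tagging}.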
\subsection{Correction Decoder} The correction decoder takes the output of the tagging decoder, $h_{t}$ and $T$, as input and generates a new answer. The correction decoder generates answer words one by one, like machine translation. We concatenate the answer tuple into one string as the output of the correction decoder. Formally, we concatenate the tuple into a sequence of $l$ words $A=\{a_1,a_2,...,a_l\}$ formatted as "$subject$ \textless split\textgreater\ $predicate$ \textless split\textgreater\ $argument_1$ \textless split\textgreater\ ...\textless split\textgreater\ $argument_k$". The "\textless split\textgreater" is an additional format word used to identify the semantic roles: it separates the fields so that the string can be converted back to a structured answer. We can output either the structured version or the string version; the only difference is whether the \textless split\textgreater\ tag is present. The structure of the correction decoder is very similar to the tagging decoder. It is also composed of a stack of $N_c$ identical layers, and each layer consists of the same three sub-layers, except that the first sub-layer is a masked self-attention layer. The input is again the sum of the answer word embedding and the position embedding. Different from the tagging decoder, the first sub-layer, the masked self-attention layer, contains an additional memory mask. This memory mask only allows the hidden vector at position i to attend to hidden vectors before position i. This is because the correction decoder is a step-by-step decoder: we only have the hidden vectors before position i when we generate the i-th word. The answer-to-passage encoding layer is also a Multi-Head Attention Block. Its query is the output of the masked self-attention layer; its key and value are identical and are the concatenation of two parts, the tagging decoder hidden state $h_t$ and the tagging result $T$.
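The memory mask described above can be sketched in a few lines of numpy; this is a generic causal-mask illustration, not the authors' code.

```python
import numpy as np

# Causal (memory) mask: position i may only attend to positions <= i.
# Implemented as an additive -inf mask on the attention scores before softmax.
def causal_mask(n):
    return np.triu(np.full((n, n), -np.inf), k=1)

def masked_softmax(scores):
    scores = scores + causal_mask(scores.shape[-1])
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With uniform (zero) scores, row i is uniform over the first i+1 positions
# and exactly zero afterwards.
attn = masked_softmax(np.zeros((4, 4)))
```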
In training, we can train the answer word decoding in parallel using the masked attention trick. The only structural difference between the tagging decoder and the correction decoder is that a word in the correction decoder can only attend to previous words, because in decoding we must generate answer words one by one. Suppose we have generated the first j-1 words: \begin{equation} \begin{split} h_{c,0}&=Embedding(concat(BOS,a_{<j}))+W_{pos} \\ h_m&=concat(h_t,T)\\ h_{c,i}^{c_0}&=MHBlock(h_{c,i-1},h_{c,i-1},h_{c,i-1})\quad \forall i\in[1,N_c]\\ h_{c,i}^{c_1}&=MHBlock(h_{c,i}^{c_0},h_m,h_m)\quad\quad\quad\quad\;\: \forall i\in[1,N_c]\\ h_{c,i}&=FFNBlock(h_{c,i}^{c_1})\quad\quad\quad\quad\quad\quad\quad\;\: \forall i\in[1,N_c]\\ h_{c}&=h_{c,N_c} \nonumber \end{split} \end{equation} $BOS$ is a special word marking the beginning of a sentence. $h_{c,i}$ is the output of the i-th layer, and $h_{c,i}^{c_0}$ and $h_{c,i}^{c_1}$ are two intermediate results in the same layer. Then we generate the j-th word by: \begin{equation} \begin{split} p(a_j|a_{<j},P,Q)&=softmax(h_{c,j}*W_c)\\ a_j&=argmax(p(a_j|a_{<j},P,Q)) \end{split} \nonumber \end{equation} $h_{c,j}$ is the j-th row of $h_c$. $W_c$ is a linear projection matrix. \subsection{Training} In training, we create the ground truth for the tagging decoder and the correction decoder according to the previous method. The two decoders have separate losses but we train them jointly. The loss of the tagging decoder is the negative log-likelihood of the ground truth tags. \begin{equation}L_{tag}=-\frac{1}{N}\sum_{<P,Q>}\sum_{i=1}^m\log p(t_i|P,Q)\nonumber \end{equation} $t_i$ is the ground truth tag of the i-th word in the passage. $N$ is the number of samples.\\ For generation, the loss is the conditional negative log-probability of the ground truth answer. \begin{equation}L_{correct}=-\frac{1}{N}\sum_{<P,Q,A>}\sum_{i=1}^{l}\log p(a_i|a_{<i},P,Q) \nonumber \end{equation} $a_i$ is the i-th word of the answer. $N$ is the number of samples.
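The two losses above can be sketched as follows; this toy version averages over all positions rather than using the paper's exact per-sample normalization, and the probabilities are toy inputs, not model outputs.

```python
import numpy as np

# Joint objective: lambda-weighted tagging NLL plus correction NLL.
# Inputs are the probabilities assigned to the ground-truth tags/words.
def nll(probs):
    return -np.mean(np.log(probs))

def joint_loss(tag_probs, word_probs, lam=3.0):
    # lam plays the role of the paper's lambda, tuned on the validation set.
    return lam * nll(tag_probs) + nll(word_probs)

loss = joint_loss(np.array([0.9, 0.8]), np.array([0.7]))
```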
The loss of our model is the weighted sum of the tagging decoder loss and the correction decoder loss. \begin{equation}L=\lambda*L_{tag}+L_{correct}\nonumber \end{equation} $\lambda$ is the weight of $L_{tag}$ and is tuned on the validation set. \section{Experiments} \begin{table*}[t!] \small \begin{center} \begin{tabular}{l|c|c|c|c} \hline \bf Model & \bf Answer (BLEU-4) & \bf Subject (BLEU1) & \bf Predicate (BLEU1) & \bf Arguments (BLEU1) \\ \hline Seq2Seq + Attention ~\cite{yan2018assertion} & 31.85 & - & - & - \\ Seq2Ast ~\cite{yan2018assertion} & 35.76 & - & - & - \\ Tagging & 55.60 & 51.61 & 57.19 & 46.62 \\ Tagging + Correction & 59.32 & 63.40 & 67.50 & 61.01 \\ \hline w/o question & 56.71 & 63.02 & 64.03 & 56.89 \\ w/o semantic tag (Only BIO tag) & 58.78 & 62.61 & 66.36 & 59.72 \\ \hline \end{tabular} \end{center} \caption{\label{main-result} Test results on WebAssertions.} \end{table*} \subsection{Dataset and Evaluation} \subsubsection{Dataset} We use the WebAssertions dataset (Yan et al. 2018) to evaluate our model. WebAssertions is a Question aware Open IE dataset. To construct this dataset, Yan et al. (2018) collected queries from search engine logs as questions, retrieving and filtering, via a search engine, related passages which can directly answer the question. They then extracted answer tuples from the passages with the Open IE model ClausIE. Labelers judged whether each answer tuple has complete meaning and can answer the question; the answer tuples with a positive label are the final answers. About 40\% of answers contain a field which is not a span of the passage; for example, sometimes the answer deletes words that occur in the passage. Some words in the correction ground truth do not appear in the passage, and thus cannot appear in any tagged span. This dataset contains 358,427 (question, passage, answer) triples. We randomly split the WebAssertions dataset into training, validation, and test sets with an 8:1:1 split.
We use the validation set to tune the model and report the results on the test set. \subsubsection{Evaluation} We evaluate the quality of the entire answer and of each semantic role. For the entire answer, we concatenate the answer tuple into a string and split the different roles with the special word "\textless split\textgreater". Since there is only one subject and one predicate in an answer, we can evaluate them directly. As there may be more than one argument, we concatenate the arguments into one string with "\textless split\textgreater", just like the entire answer. We use the BLEU-4 score ~\cite{papineni2002bleu} as the evaluation metric for the entire answer, so as to be comparable with previous work. The subject and predicate are relatively short, with average lengths of 3.3 and 1.4; therefore, we use BLEU-1 to evaluate each semantic role. \subsection{Implementation Details} \subsubsection{Data Processing} We post-process the output to get the subject, predicate, and arguments separately. For tagging results, we collect all the words with the same semantic tag to produce the corresponding answer field. The selected words are concatenated according to their order in the passage. If no word is tagged with a given semantic role, the result is an empty string. For the generated answer, we split it into a list of phrases by the special split word. The first phrase is the subject, the second phrase is the predicate, and all other phrases are arguments. We use byte-pair encoding (BPE)~\cite{philigage1994bpe} to handle the out-of-vocabulary problem in the correction decoder. BPE splits each word into several subwords; we control the number of distinct subwords in the corpus. In the ground truth creation of semantic BIO tags, we match the continuous subsequences at the word level, then map the semantic tags to the subword level and label the BIO tags at the subword level. After the model outputs the tagging result, we collect the subwords belonging to the same semantic role and undo BPE.
We ignore the possibility of incomplete words and just let the model learn. For the generation, we split the output into a list of phrases first and undo BPE on each phrase. \subsubsection{Setting} We tune our hyper-parameters on the validation dataset. The hidden size of our model is 512. We use a shared vocabulary between question, passage, and answer. The BPE vocabulary size is 37000. We share the embedding weights for the question encoder, passage encoder, correction decoder, and the pre-softmax linear transformation of the correction decoder. We use 8 heads for Multi-Head Attention. The question and passage encoder layer number $N_e$ is 2, the tagging decoder layer number $N_t$ is 4, and the correction decoder layer number $N_c$ is 6. The weight of the loss $\lambda$ is set to 3. We use the ADAM optimizer ~\cite{kingma2014adam} to update model parameters and set the learning rate to 0.001. \subsection{Baseline} Our proposed model is called \textbf{Tagging + Correction}. We compare it with three baselines. \begin{itemize} \item {\bf Seq2Seq + Attention ~\cite{yan2018assertion}} This model formulates the task as a sequence-to-sequence problem. It concatenates question and passage into a string as input, and concatenates the tuple into a string as output, inserting the special tag "\textless EOQ\textgreater" between question and passage and the special tag "," between the fields of the tuple. This model uses a bidirectional GRU as the encoder, a GRU as the decoder, and an attention mechanism. \item {\bf Seq2Ast ~\cite{yan2018assertion}} This sequence-to-assertion model (Seq2Ast) has the same input processing and encoder as Seq2Seq + Attention. The difference is that this model uses a hierarchical decoder which first generates a representation for each field with a tuple-level decoder, then generates the words of each field with a word-level decoder. \item {\bf Tagging} We remove the correction decoder and only train the tagging decoder. \end{itemize} Yan et al.
~\cite{yan2018assertion} also propose an extractive method, but it is not directly comparable with generative methods. That method first extracts all possible answer tuples from the passage and then uses a ranking model to select the best answer as output. Since this dataset was itself constructed by extracting tuples, and their extraction model uses the same extractor, the right answer is always in the ranking list. The key challenge for extractive methods is how to design the matching model, and they are evaluated by ranking metrics such as MAP and MRR. If we evaluate the extractive method with BLEU, it reaches 72.27. This result is higher than ours, but it is also expected, because the extractive method leverages the way the dataset was constructed. Our method does not rely on any Open IE model, so it cannot exploit a dataset constructed in this way. \subsection{Experiment Result} The results are in Table~\ref{main-result}. Both the results for the entire answer and for each semantic role show the same trend. We see the following: (i) the Tagging + Correction model achieves the best results, which proves the effectiveness of our model; (ii) the Tagging + Correction model is better than the Seq2Seq model, which means that tagging the keywords first improves generation quality; we think this is because the tagging decoder provides a guide for the correction decoder; (iii) the Tagging + Correction model is also better than the tagging decoder alone, which means the second-step correction is necessary. For the subject, predicate, and arguments columns, we find that the results for the predicate are better than for the subject, and the subject results are better than those for the arguments. This may be because of the different properties of the different semantic roles. The subject is often a noun phrase. The predicate is a verb and has an average length of 1.4. The arguments are modifying phrases, which are the longest and most complicated. Intuitively, the properties of one word are enough to determine whether it is a predicate.
The property of two adjacent words is enough to determine the boundary of a noun phrase. But we may need more sophisticated sentence-level information, such as a syntax tree, to extract arguments. We leave this improvement of our model as future work. We also remove the question, so the task becomes an Open IE problem; we denote this setting as \textbf{w/o question}. The entire answer result on BLEU decreases by 2.6 compared with Tagging + Correction, and all the semantic role results decrease too. This proves that Question aware Open IE cannot simply be solved as an Open IE task. We also try to remove the semantic roles from the tags and only keep the BIO tags. The results for \textbf{w/o semantic tag} show that the BLEU of the entire answer decreases by 0.56, and the BLEU of each semantic role decreases by more than 1. This shows that the semantic tags benefit the correction decoder. \subsection{Case Study} We also conduct a case study to analyze our results. We randomly sample 50 examples from the test dataset and predict with the Tagging + Correction model. The summarization of the results is in Table~\ref{table-case-study}. \begin{table}[h] \centering \begin{tabular}{l|c} \hline \bf Label & \bf Ratio \\ \hline correct / exactly match & 30\% \\ correct / comparable & 10\% \\ correct / better & 18\% \\ correct / incomplete label & 18\% \\ \hline wrong / wrong focus & 12\% \\ wrong / grammar problem & 6\% \\ wrong / lost key words & 6\% \\ \hline \end{tabular} \caption{Case study of Tagging + Correction.} \label{table-case-study} \end{table} We find that about 76\% of cases are correct. \textbf{Comparable} means the model output is comparable with the ground truth and it is hard to tell which one is better. About 18\% of cases are \textbf{better} than the ground truth. This is because the generated answer is shorter and clearer than the ground truth answer, especially on the arguments.
Another 18\% of cases have an \textbf{incomplete label}, which means there is more than one valid answer in the passage for the question. Based on these results, we see that it is hard to evaluate Question aware Open IE because of the open-ended definition of the information extraction problem. There may be more than one answer in the passage, and each answer may have multiple paraphrases. A better dataset could help to solve the ``better'' and ``incomplete label'' problems. For the wrong outputs, about 12\% of cases are due to a \textbf{wrong focus}, which means the answer is not related to the question. 6\% of cases are due to a \textbf{grammar problem}, which means the answer is not fluent. This is because the language model of the correction decoder is still not good enough. Another 6\% of cases are due to \textbf{lost key words}. In the future, we may try to improve the interaction between question and passage to address the wrong focus and lost key words problems. We could also try transfer learning to improve the language model. \section{Conclusion} In this paper, we introduce a two-stage decoder model to solve the question aware Open IE task. Because most of the answer words come from the passage, we use a tagging decoder to tag the key words in the passage first, and then generate a refined answer with a correction decoder based on the output of the tagging decoder. The experiments on WebAssertions show that our method outperforms pure generation or tagging models. Our model does not rely on any Open IE tools, which gives it good generalization ability. In the future, we will try more methods to improve our results, such as incorporating syntax information or richer interaction methods. We will also consider creating a better dataset to accelerate research in this area.
# Chebyshev filter

Chebyshev filters, named in honor of Pafnuty Chebyshev, are analog or digital filters related to Butterworth filters, but with a steeper roll-off at the cost of more passband ripple. They have the property that they minimise the error between the idealised filter characteristic and the actual one over the range of the filter, but as noted, there can be ripples in the passband. For this reason, filters with a smoother response in the passband but a more irregular response in the rejection bands are preferred for some applications.

The name Chebyshev filter comes from the fact that its mathematical characteristics are derived using Chebyshev polynomials. The frequency (amplitude) characteristic of a Chebyshev filter of the $n$th order can be described mathematically by:

$|H(\omega)| = \frac{1}{\sqrt{1+\epsilon^2 T_n^2\left(\frac{\omega}{\omega_0}\right)}}$

where $\epsilon$ is the ripple factor, $|H(\omega_0)| = \frac{1}{\sqrt{1+\epsilon^2}}$ is the amplification at the cutoff frequency $\omega_0$ (note: the common definition of the cutoff frequency as the −3 dB point does not hold for Chebyshev filters!), and $T_n\left(\frac{\omega}{\omega_0}\right)$ is a Chebyshev polynomial of the $n$th order, e.g.:

$T_n\left(\frac{\omega}{\omega_0}\right) = \cos\left(n\cdot\arccos\frac{\omega}{\omega_0}\right)$ for $0 \le \omega \le \omega_0$

$T_n\left(\frac{\omega}{\omega_0}\right) = \cosh\left(n\cdot \operatorname{arccosh}\frac{\omega}{\omega_0}\right)$ for $\omega > \omega_0$

alternatively:

$T_n\left(\frac{\omega}{\omega_0}\right) = a_0 + a_1\frac{\omega}{\omega_0} + a_2\left(\frac{\omega}{\omega_0}\right)^2 +\, \cdots\, + a_n\left(\frac{\omega}{\omega_0}\right)^n$ for $0 \le \omega \le \omega_0$

$T_n\left(\frac{\omega}{\omega_0}\right) = \frac{\left(\frac{\omega}{\omega_0}+\sqrt{\left(\frac{\omega}{\omega_0}\right)^2 - 1}\right)^n + \left(\frac{\omega}{\omega_0}+\sqrt{\left(\frac{\omega}{\omega_0}\right)^2 - 1}\right)^{-n}}{2}$ for $\omega > \omega_0$

The order of a Chebyshev filter is equal to the number of reactive components (for example, inductors) needed to realize the filter using analog electronics.

The ripple is often given in dB:

Ripple in dB = $20 \log_{10} \sqrt{1+\epsilon^2}$

A ripple of 3 dB thus corresponds to a value of $\epsilon = 1$.

An even steeper roll-off can be obtained if we allow for ripple in the stop band, by allowing zeroes on the $j\omega$-axis in the complex plane. This will, however, result in less suppression far into the stop band. The result is called an elliptic filter, also known as a Cauer filter.
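As an illustrative sketch (not part of the original article), the $n$th-order amplitude response and the ripple formula above can be evaluated numerically. The function names here are invented for this example:

```python
import math

def chebyshev_T(n, x):
    """Chebyshev polynomial of the first kind T_n(x), for x >= 0."""
    if x <= 1.0:
        return math.cos(n * math.acos(x))
    return math.cosh(n * math.acosh(x))  # hyperbolic form for x > 1

def cheby_gain(omega, omega0, n, eps):
    """Amplitude |H(omega)| of an nth-order type-I Chebyshev low-pass filter."""
    t = chebyshev_T(n, omega / omega0)
    return 1.0 / math.sqrt(1.0 + (eps * t) ** 2)

def ripple_db(eps):
    """Passband ripple in dB: 20*log10(sqrt(1 + eps**2))."""
    return 20.0 * math.log10(math.sqrt(1.0 + eps ** 2))
```

For example, `ripple_db(1.0)` gives about 3.01 dB, matching the $\epsilon = 1$ statement above, and `cheby_gain(omega0, omega0, n, eps)` reproduces $1/\sqrt{1+\epsilon^2}$ at the cutoff since $T_n(1)=1$.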
Le Caillou de Gargantua is the name of the following menhirs in France: Le Caillou de Gargantua, another name for the Menhir of Bissin near Guerande, Département Loire-Atlantique; Le Caillou de Gargantua (Eure) in Port Mort, Département Eure, Normandy.
Low carb, high-fat diets are good for weight loss. But are they good for cyclists? Do regimes like the Atkins diet really work, and can cyclists maintain the power they need on the bike while losing a lot of weight? Should cyclists follow low carb, high fat diets to lose weight? International rider turned coach and nutritionist, Beth McCluskey of Peak Endurance Coaching offers some insights into 'low-carbohydrate, high-fat' (LCHF) diets. Good for weight loss, but are they good for cyclists? Low carb high fat diets are nothing new and have been around for over 100 years. In recent times they've become very popular, especially with athletes. The best known version of the LCHF diet is probably the Atkins diet. It has been suggested by some as the answer to the West's obesity problem, type II diabetes crisis and cardiovascular disease, as well as an aid to improved physical performance for athletes. LCHF diets can indeed offer a short term solution for an individual with obesity and diabetes. But they are not the only solution, or indeed the long term answer. It is important to distinguish between the metabolic biochemistry of an athlete and that of an obese person. There is an inherent danger in being so focused on macronutrients and ratios that we lose sight of food and the complex nutrient interactions that occur within food and within the body. The LCHF lobby will argue that carbohydrates are the main cause of obesity, diabetes and cardiovascular disease and that the low fat recommendations of the last century appear to have contributed to the health crisis we now face in the 21st century. Carbs are seen as the enemy in some quarters, but the truth is that balance is healthiest. Some extremist LCHF lobbyists will claim that nutrition scientists who don't endorse their view won't have read or understood the right research and that their views are out of date or biased. The reality is that nutrition science is far more complicated than that.
In the low fat era of the past, correlation between fat intake and cardiovascular disease resulted in nutritional guidelines recommending a reduction in fat intake and an increase in carbohydrate intake. While the experts meant us to eat less red meat and eat more vegetables and wholegrains, the food industry had other ideas. It responded by producing a vast array of tasty and cheap low fat versions of foods that were high in sugar, salt and processed oils. The focus on fat as bad and carbohydrates as good was interpreted as a licence to eat anything as long as the macronutrient content was correct. The long term damage this has done to human health is now becoming apparent and we urgently need a solution. LCHF proponents will argue that since low fat high carbohydrate diets have caused the crisis, the opposite will cure it. Riders like Chris Froome and Bradley Wiggins have been transformed as riders after losing a lot of weight; but a narrow focus on weight loss is unhealthy. However, when macronutrients are taken out of context, we invent mutually exclusive diet choices that are restrictive, nutritionally inadequate and unhealthy. By only focusing on the low carbohydrate or high fat content of a diet, we are demonising all low carb foods and endorsing all high fat foods as if these were the only options. A high fat meal of highly processed ingredients including salt and preservatives, such as sausages and bacon, is completely different from a nutrient-rich high fat meal such as mackerel, nuts and avocados. Equally, a low fat high carb meal of highly processed sugary cereals is different from a nutrient-rich low fat high carb meal; for example, sweet potato and vegetables with quinoa. There are people all over the world who live long and healthy lives on both high fat and high carbohydrate diets. Health problems only occur in these populations when they adopt a 'western' style diet.
Focusing on just fat or carbohydrate rather than food just gives us another excuse to eat badly. It is this very concept that led to the crisis we now face. But replacing one fallacy with another is not the answer. We need to focus on real food and not macronutrients if we are to find a long term solution to the health crisis. Cycling requires a mix of characteristics, including speed and power, and weight loss is not a magic bullet for every rider. Any strategy needs to consider the current and future health of the athlete, and whether the strategy is going to improve overall performance. We do not know the long term effects of a high fat diet on healthy individuals. There is limited research on the topic and in my experience it doesn't actually improve performance, except perhaps in extreme ultra runners and adventure racers. Cycling is a unique sport requiring endurance, speed, power and strength. Lots of different types of athletes have adopted the LCHF strategy as a lifestyle choice. And there seems to be a certain obsession with blood glucose, insulin and lipoprotein profiles, as if these were somehow the only indicators of optimum health and performance. They are useful indicators, but they don't tell us the effects of lipids on other aspects of the body. Lipids are made up of fatty acids and glycerol. Individual fatty acids play an important biological role in the functioning of all cells. And until we can determine the exact impact of a high fat diet on the immune system, liver and kidney function, the gut microbiota, bone health, the brain, the blood and the vascular system, it would be premature to recommend this as a safe long term strategy for athletes. However, in the short term, manipulation of carbohydrate, protein and fat intake can induce favourable training adaptations; training with low muscle glycogen and training fasted at certain times of the training phase can help improve endurance adaptations.
Likewise, other training adaptations for speed, strength and power can be improved by training with adequate or high carbohydrate stores. Optimising the timing and quality of protein intake can enhance strength adaptations. Athletes who require more than pure endurance need to be metabolically flexible to get the most out of their training. This needs to be done while meeting all the other nutritional needs of the athlete's health and well-being. There is no one size fits all. Every individual is unique and some people respond differently to others. So adopting an LCHF diet as a lifestyle choice may not be the route to making you a faster, stronger, fitter cyclist or a healthier individual. The question we really need to address is not whether we endorse low carb or high carb regimes, but how we get the best out of the individual athlete. I would urge athletes to stop focusing on carbohydrate and fat and eat a balanced diet that is based on natural whole food.
\section{Introduction and Motivation} \label{S1} Reaction-diffusion equations are used to model a host of natural processes such as combustion, chemical reactions, or population dynamics. The baseline model, which already captures a lot of the properties of the dynamics involved, is the parabolic PDE \begin{equation} \label{1.1} u_t = \Delta u + f(x,u) \end{equation} for $u:(t_0,\infty)\times{\mathbb{R}}^d \to {\mathbb{R}}$, where $t_0\in[-\infty,\infty)$ and $d\ge 1$. If $t_0>-\infty$, then we also let \begin{equation} \label{1.2} u(t_0,x)=u_0(x) \qquad\text{for $x\in{\mathbb{R}}^d$.} \end{equation} The Lipschitz {\it reaction function} $f$ is such that there exist two ordered {\it equilibria} (time-independent solutions) $u^-<u^+$ for \eqref{1.1}, and one is usually interested in studying the transition of general solutions of \eqref{1.1} from one to the other as $t\to\infty$. A prototypical situation is when $u^-\equiv 0$ and $u^+\equiv 1$, with $f\ge 0$ vanishing at $u=0,1$. Here $u\in[0,1]$ is the (normalized) temperature of fuel, concentration of a reactant, or population density. Depending on the application, $f$ may be either an {\it ignition reaction} (vanishing near $u=0$) in combustion models; or a {\it monostable reaction} (positive for $u\in (0,1)$) such as {\it Zeldovich} and {\it Arrhenius reactions} with $f_u(x,0)\equiv0$ in models of chemical reactions and {\it KPP reaction} with $f(x,u)\le f_u(x,0)u$ for all $u\ge 0$ in population dynamics models. For the sake of clarity of presentation, we will first study this scenario, where our main results are Theorems \ref{T.1.2} and \ref{T.1.3}. 
Later we will extend these to more general situations: with general $u^-<u^+$, different types of reactions, including mixtures of ignition, monostable, and {\it bistable reactions} (the latter have $[f(x, u)-f(x,u^\pm(x))][u-u^\pm(x)]<0$ for $u$ near $u^\pm(x)$), and for solutions not necessarily satisfying $u^-\le u\le u^+$ (see Theorems \ref{T.1.11} and \ref{T.1.12}). However, in order to minimize technicalities, our first result will be stated in the even simpler setting of ignition reactions with a constant ignition temperature (see Theorem \ref{T.1.0} below). The study of transitions between equilibria of reaction-diffusion equations has seen a lot of activity since the seminal papers of Kolmogorov, Petrovskii, Piskunov \cite{KPP} and Fisher \cite{Fisher}. Of central interest has been long time propagation of solutions with ``typical'' initial data, and the related questions about traveling fronts. The first type of such initial data are {\it spark-like data} --- compactly supported, such as in \eqref{1.5a} below. The second are {\it front-like data} --- vanishing on a half-space $\{x\cdot e\ge R\}$ for some unit vector $e\in{\mathbb{R}}^d$ and with $\liminf_{x\cdot e\to-\infty} u_0(x)$ close enough to 1, such as in \eqref{1.6a}. For ignition reaction one can also allow rapid decay to 0 as $|x|\to\infty$ or $x\cdot e\to\infty$, such as in \eqref{1.5} and \eqref{1.6}. We will for now discuss these data (and also call the corresponding solutions front-like and spark-like), but later we will turn to more general ones (see, e.g., Theorem \ref{T.1.3}). In both cases it was proved, first for homogeneous ($x$-independent) reactions in several dimensions by Aronson, Weinberger \cite{AW} and then for $x$-periodic ones by Freidlin, G\" artner \cite{Freidlin, GF} and Weinberger \cite{Weinberger}, that for typical solutions, the state $u=1$ invades $u=0$ with a speed that is asymptotically constant (in each direction for spark-like data) as $t\to\infty$. 
Specifically, that for each unit $e\in{\mathbb{R}}^d$ there is a ({\it front speed}) $c_e>0$ such that for any $\delta>0$, \begin{equation} \label{0.1} \lim_{t\to\infty} \inf_{x\cdot e\le (c_e-\delta)t} u(x,t) = 1 \qquad\text{and}\qquad \lim_{t\to\infty} \sup_{x\cdot e\ge (c_e+\delta)t} u(x,t) = 0 \end{equation} for front-like initial data; and there is a ({\it spreading speed}) $s_e\in(0,c_e]$ such that for any $\delta>0$, \begin{equation} \label{0.2} \lim_{t\to\infty} \inf_{x\in (1-\delta)tS} u(x,t) = 1 \qquad\text{and}\qquad \lim_{t\to\infty} \sup_{x\notin (1+\delta)tS} u(x,t) = 0 \end{equation} for spark-like initial data, where $S:=\{ se \,\big|\, \|e\|=1 \text{ and } 0\le s\le s_e \}$ is the {\it Wulff shape} for $f$. Of course, for homogeneous reactions there is $c>0$ such that $s_e=c_e=c$ for all unit $e\in{\mathbb{R}}^d$. Closely related to this is the study of {\it traveling fronts} for $x$-independent $f$ and {\it pulsating fronts} for $x$-periodic $f$. Traveling fronts are front-like {\it entire} (with $t_0=-\infty$) solutions of \eqref{1.1} moving with a constant speed $c$ in a unit direction $e\in{\mathbb{R}}^d$, of the form $u(t,x)=U(x\cdot e-ct)$ with $\lim_{s\to-\infty}U(s)=1$ and $\lim_{s\to\infty}U(s)=0$. Pulsating fronts, first introduced by Shigesada, Kawasaki, Teramoto \cite{SKT} and proved to exist for general periodic $f$ as above by Xin \cite{Xin3} and Berestycki, Hamel \cite{BH}, are similar but $u(t,x)=U(x\cdot e-ct, x)$ and $U$ is periodic in the second argument. The minimal of the speeds for which such a front exists for a given unit $e\in{\mathbb{R}}^d$ is then precisely $c_e$, and we also have $s_e=\inf_{e'\cdot e>0} [c_{e'}/(e'\cdot e)]$. The above results hold for fairly general $f\ge 0$, and there is a vast literature on these and many other aspects of reaction-diffusion equations in homogeneous and periodic media. 
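As a hypothetical numerical illustration of the relation $s_e=\inf_{e'\cdot e>0} [c_{e'}/(e'\cdot e)]$ in two dimensions (the discretization and the speed profiles below are invented for this sketch and are not from the paper):

```python
import math

def spreading_speed(c_of_angle, e_angle, n=20001):
    """Numerically evaluate s_e = inf over {e' : e'.e > 0} of c_{e'} / (e'.e)
    for unit vectors in 2D, parameterizing e' by its angle.
    (Illustrative discretization only.)"""
    best = math.inf
    for k in range(n):
        # angles phi of e' with e'.e > 0, i.e. within (-pi/2, pi/2) of e_angle
        phi = e_angle - math.pi / 2 + math.pi * (k + 0.5) / n
        dot = math.cos(phi - e_angle)  # e'.e for unit vectors
        best = min(best, c_of_angle(phi) / dot)
    return best
```

For an isotropic front speed $c_{e'}\equiv c$, the infimum is attained at $e'=e$ and the sketch returns $s_e=c$; an anisotropic profile with a slow sector in the direction of $e$ pulls $s_e$ down to the slow speed.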
Instead of a more comprehensive discussion, we refer to \cite{BH, Weinberger} and the excellent reviews by Berestycki \cite{Berrev} and Xin \cite{Xin2}. Unsurprisingly, the picture becomes less satisfactory for non-periodic reactions, particularly in the several spatial dimensions case $d\ge 2$. The above results and the comparison principle show that if $c_0$ and $c_1$ are the ($e$-independent) speeds associated with homogeneous reactions $f_0$ and $f_1$ such that $f_0\le f\le f_1$, then \eqref{0.1}, \eqref{0.2} hold with $c_e$ and $S$ replaced by $c_0$ and $B_{c_0}(0)$ in the first statements and by $c_1$ and $B_{c_1}(0)$ in the second ones. That is, transition between $u\sim 0$ and $u\sim 1$ occurs inside a spatial strip or annulus whose width grows linearly in time with speed $c_1-c_0$ (while for homogeneous and $x$-periodic media it grows sub-linearly, by taking $\delta\to 0$ in \eqref{0.1}, \eqref{0.2}). In the general inhomogeneous case, these estimates cannot be improved, unless one includes further restrictive hypotheses on $f$ or is willing to tolerate complicated formulas involving $f$. For stationary ergodic reactions, the results should hold as originally stated, but also with $|x-x\cdot e|\le Ct$ for any $C<\infty$ in \eqref{0.1}. For $d\ge 2$ this {\it homogenization} result was proved only in the KPP case, by Lions, Souganidis \cite{LioSou}. (This case has an important advantage of a close relationship of the dynamics for \eqref{1.1} and for its linearization at $u=0$. Other authors also exploited this link in the study of spreading for KPP reactions, e.g., Berestycki, Hamel, Nadin \cite{BHNad}. However, results aiming to more precisely locate the transition region for non-stationary-ergodic reactions are somewhat restricted by the necessity of more complicated hypotheses involving the reaction.) Results from the present paper can be used to approach this problem for ignition and non-KPP monostable reactions. This will be done elsewhere. 
The above results hold also in the case $d=1$, with the stationary ergodic KPP reaction result proved earlier in \cite{GF}. However, some recent developments have gone further, particularly for ignition reactions. Mellet, Nolen, Roquejoffre, Ryzhik, Sire \cite{NolRyz, MRS, MNRR} proved for reactions of the form $f(x,u)=a(x)f_0(u)$ (with $f_0$ vanishing on $[0,\theta_0]\cup\{1\}$ and positive on $(\theta_0,1)$, and $a\ge 1$ bounded above), and Zlato\v s \cite{ZlaGenfronts} for more general ignition reactions the following. There is a unique right-moving (and a unique left-moving) transition front solution and as $t\to\infty$, each front-like solution with $e=1$ ($e=-1$) converges in $L^\infty_x$ to its time-translate. A similar result holds for spark-like solutions, when restricted to ${\mathbb{R}}^+$ (${\mathbb{R}}^-$). Moreover, if $f$ is stationary ergodic, then \eqref{0.1}, \eqref{0.2} hold with some $c_e=s_e$ for $e=\pm 1$. The {\it transition fronts} appearing here are a generalization of the concepts of traveling and pulsating fronts to disordered (non-periodic) media. In the one-dimensional setting they are entire solutions of \eqref{1.1} satisfying \begin{equation} \label{0.3} \lim_{x\to\mp\infty} u(t,x)=1 \qquad\text{and}\qquad \lim_{x\to\pm\infty} u(t,x)=0 \end{equation} for each $t\in{\mathbb{R}}$ (with upper sign for right-moving fronts and lower sign for left-moving fronts), as well as $\sup_{t\in{\mathbb{R}}} L_{u,\varepsilon}(t)<\infty$ for each $\varepsilon\in(0,\tfrac 12)$, where $L_{u,\varepsilon}(t)$ is the length of the shortest interval containing all $x\in{\mathbb{R}}$ with $u(t,x)\in[\varepsilon,1-\varepsilon]$. This last property is called {\it bounded width} in \cite{ZlaGenfronts}. The definition of transition fronts was first given in some specialized cases by Shen \cite{Shen} and Matano \cite{Matano}, and then in a very general setting (including several dimensions) by Berestycki, Hamel in their fundamental papers \cite{BH2,BH3}.
Existence of transition fronts in one-dimensional disordered media (but no long term asymptotics of general solutions) was also proved for bistable reactions which are small perturbations of homogeneous ones by Vakulenko, Volpert \cite{VakVol}, for KPP reactions which are (spatially) decaying perturbations of homogeneous ones by Nolen, Roquejoffre, Ryzhik, Zlato\v s \cite{NRRZ}, for general KPP reactions by Zlato\v s \cite{ZlaInhomog}, and for general monostable reactions which are close to KPP reactions by Tao, Zhu, Zlato\v s \cite{TZZ}. We also mention results proving existence of a critical front, once some transition front exists, by Shen \cite{Shen} and Nadin \cite{Nadin}. While it is again not possible to improve the estimates on the length of the interval on which the transition between $u\sim 0$ and $u\sim 1$ is {\it guaranteed to happen} (which again grows as $(c_1-c_0)t$ in time if $f_0\le f\le f_1$), bounded width of transition fronts and the convergence-to-fronts results in \cite{MNRR,ZlaGenfronts} show that for ignition reactions and typical solutions, the transition does occur within intervals whose lengths are uniformly bounded in time. Moreover, this bound depends on some bounds on the reaction but neither on the reaction itself, nor on the initial condition. In particular, this shows that after a uniform-in-$(f,u,t)$ scaling in space, each such solution becomes, in some sense, close to the {\it characteristic function of a time-dependent spatial interval.} Moreover, the convergence-to-fronts results can be used to show that this interval grows in (equally scaled) time with speed within $[c_0,c_1]$. Experience from observation of natural processes modeled by \eqref{1.1} suggests that this picture should be also valid for media in several spatial dimensions. 
For instance, aerial footage of forest fires spreading through (spatially inhomogeneous) regions demonstrates variously curved but usually relatively narrow lines of fire separating burned ($u\sim 1$) and unburned ($u\sim 0$) areas. However, results demonstrating such phenomena for typical solutions of \eqref{1.1} with {\it general inhomogeneous} reactions have not been previously obtained in dimensions $d\ge 2$. It turns out that the multi-dimensional case is much more involved in this respect. The first issue is that it is not completely obvious how to extend the definition of bounded width of solutions of \eqref{1.1}, \eqref{1.2} to the multi-dimensional setting, and some first instincts may lead to unsatisfactory results for general non-periodic media (see the discussion below). The extension we introduce here is motivated by the Berestycki-Hamel definition of transition fronts (which are entire solutions of \eqref{1.1}) in several dimensions \cite{BH2,BH3}. However, there are a couple of differences, and we discuss the relationship of the two concepts after stating our main results, at the end of the next section. For solutions $u\in[0,1]$ of the Cauchy problem for \eqref{1.1}, our extension is as follows. We let $\Omega_{u,\varepsilon}(t):=\{x\in{\mathbb{R}}^d\,|\, u(t,x)\ge \varepsilon\}$ be the $\varepsilon$-super-level set of $u$ at time $t$ and define the {\it width of the transition zone} of $u$ from $\varepsilon$ to $1-\varepsilon$ (or to be more precise, from $[\varepsilon,1-\varepsilon)$ to $1-\varepsilon$) to be \begin{equation} \label{1.3xx} L_{u,\varepsilon}(t) := \inf \left\{L>0 \,\big|\, \Omega_{u,\varepsilon}(t)\subseteq B_L \left(\Omega_{u,1-\varepsilon}(t)\right) \right\} \end{equation} for $\varepsilon\in(0,\tfrac 12)$, with $B_r(A):=\bigcup_{x\in A} B_r(x)$ and $\inf\emptyset=\infty$. Notice that this is precisely the {\it Hausdorff distance} of the sets $\Omega_{u,\varepsilon}(t)$ and $\Omega_{u,1-\varepsilon}(t)$. 
We now say that $u$ has {\it bounded width} if \begin{equation} \label{1.4xx} \lim_{t\to\infty} L_{u,\varepsilon}(t) < \infty \end{equation} for each $\varepsilon\in(0,\tfrac 12)$. The limit is necessary in \eqref{1.4xx} because $\sup_{x} u(t,x)<1$ may hold for each $t$ (e.g., for spark-like solutions); it will be replaced by $\sup_{t\in{\mathbb{R}}}$ for entire solutions (see Definition~\ref{D.1.1}). So by \eqref{1.4xx}, $u\in[0,1]$ has bounded width if and only if for any $0<\varepsilon<\varepsilon'<1$, super-level sets $\Omega_{u,\varepsilon'}(t)\subseteq \Omega_{u,\varepsilon}(t)$ have uniformly (in large time) bounded Hausdorff distance. In particular, each of them is uniformly in time close to $\Omega_{u,1/2}(t)$. One may wonder why do we not treat the equilibria 0 and 1 in a symmetric fashion and include a similar definition involving the sub-level sets of $u$ as well. (We do so in \eqref{1.3a} and the related definition of {\it doubly-bounded width}, which is for $u\in[0,1]$ equivalent to uniformly bounded Hausdorff distance of the {\it boundaries} $\partial\Omega_{u,\varepsilon}(t)$ and $\partial\Omega_{u,\varepsilon'}(t)$.) While adding this requirement works well in one dimension \cite{MRS,MNRR,NolRyz,ZlaGenfronts}, it turns out to be too restrictive for the treatment of sufficiently general (not necessarily periodic) reactions and solutions of the Cauchy problem \eqref{1.1}, \eqref{1.2} in dimensions $d\ge 2$. This is due to $u\equiv 1$ being the invading equilibrium and $u\equiv 0$ the invaded one, coupled with the possibility of arbitrary variations in the medium on arbitrarily large scales in two or more unbounded dimensions. We discuss the issues involved after stating Theorem \ref{T.1.0} for reactions with a constant ignition temperature $\theta_0$, which is a special case of Theorem \ref{T.1.2} below. 
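As a discrete illustration (ours, not code from the paper), the width $L_{u,\varepsilon}(t)$ of \eqref{1.3xx} can be evaluated for a one-dimensional profile sampled on a grid; the helper below is hypothetical:

```python
def transition_width(xs, us, eps):
    """Width L_{u,eps} on a sampled 1D profile: the smallest L such that
    {u >= eps} lies within distance L of {u >= 1-eps} (a discrete version
    of the Hausdorff-type definition in the text)."""
    lo = [x for x, u in zip(xs, us) if u >= eps]        # Omega_eps
    hi = [x for x, u in zip(xs, us) if u >= 1.0 - eps]  # Omega_{1-eps}
    if not lo:
        return 0.0
    if not hi:
        return float("inf")
    return max(min(abs(x - y) for y in hi) for x in lo)
```

For a monotone front profile the width shrinks as $\varepsilon$ moves toward $\tfrac 12$, and it is infinite when the profile never reaches the level $1-\varepsilon$, mirroring the role of the limit in \eqref{1.4xx}.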
Even with a suitable definition at hand, our results proving bounded widths of typical solutions, under quite general and physically natural {\it qualitative} hypotheses on the reaction, only hold in dimensions $d\le 3$. Surprisingly, such results are in fact false for $d\ge 4$ without the addition of further {\it quantitative} hypotheses (e.g., $f$ being sufficiently close to a homogeneous reaction; see Remark 1 after Definition \ref{D.1.10} below). The reason is that in $d\ge 4$, even in the constant ignition temperature case, intermediate values of $u$ may spread faster than values close to 1 (see the discussion after Definition \ref{D.1.1a} and Remark 1 after Theorem \ref{T.1.2} for the general ignition case). This turns out to be related to the possibility of existence of non-constant stationary solutions $p\in(0,1)$ of \eqref{1.1} in ${\mathbb{R}}^{d-1}$ (see Section \ref{S8}). \begin{theorem} \label{T.1.0} Let $f$ be Lipschitz (with constant $K$) and non-increasing in $u$ on $[1-\theta,1]$ for each $x\in{\mathbb{R}}^d$ (where $\theta>0$). Assume that $f_0(u)\le f(x,u)\le f_1(u)$ for all $(x,u)\in {\mathbb{R}}^d\times[0,1]$, with $f_0,f_1:[0,1]\to[0,\infty)$ vanishing on $[0,\theta_0]\cup\{1\}$ and positive on $(\theta_0,1)$ (where $\theta_0>0$). Let $c_0$ and $c_1$ be the spreading speeds of $f_0$ and $f_1$. (i) If $d\le 3$, then the solution of \eqref{1.1}, \eqref{1.2} with any spark-like \eqref{1.5} or front-like \eqref{1.6} initial data $u_0\in[0,1]$ has bounded width \eqref{1.4xx}. In fact, for any $\varepsilon\in(0,\tfrac 12)$ there are $\ell_\varepsilon, T_\varepsilon$ such that $\sup_{t\ge T_\varepsilon} L_{u,\varepsilon}(t)\le \ell_\varepsilon$, with $\ell_\varepsilon$ depending only on $\varepsilon,f_0,K$ ($T_\varepsilon$ also depends on $u_0$). 
Finally, $u$ propagates with global mean speed in $[c_0,c_1]$ in the sense of Definition \ref{D.1.1b} below, with $\tau_{\varepsilon,\delta}$ in that definition depending only on $\varepsilon, f_0,K,\delta,f_1$. (ii) If $d\ge 4$, then there are $f,f_0,f_1$ as above such that no solution of \eqref{1.1}, \eqref{1.2} with compactly supported $u_0\in[0,1]$ and satisfying $\limsup_{t\to\infty}\|u(t,\cdot)\|_\infty>0$ has bounded width. \end{theorem} {\it Remarks.} 1. Hence in dimensions $d\le 3$, each typical solution $u$ eventually becomes uniformly close (in the sense of Hausdorff distance of $\varepsilon$-super-level sets for each $\varepsilon\in(0,1)$) to the characteristic function of $\Omega_{u,1/2}(t)$, and the latter grows with speed (averaged over long enough time intervals) essentially in $[c_0,c_1]$. So after a uniform-in-$(f,u,t)$ space-time scaling, typical solutions look like Figure~1, with the shaded area expanding at speeds within $[c_0,c_1]$. This also shows that an observer at any point $x\in{\mathbb{R}}^d$ at which $u(t,x)=\varepsilon$ (for a large enough $t$) will see a transition to the value $1-\varepsilon$ within a uniformly bounded time interval, which means that the {\it reaction zone} (where $u\sim \tfrac 12$) is uniformly bounded in both space and time. \smallskip 2. One can use \cite[Theorem 1.11]{BH3} to prove (i) for homogeneous ignition reactions (see \cite{Jones, Roussier} for bistable ones), and also for $x$-periodic ignition reactions and front-like solutions. However, besides disordered media, (i) is new for $x$-periodic media and spark-like solutions as well. In fact, some of our results are new even for homogeneous media (e.g., Theorem \ref{T.1.3}). \smallskip 3. We will generalize Theorem \ref{T.1.0} in several ways. 
This will include a proof that solutions eventually increase in time on each interval of values $[\varepsilon,1-\varepsilon]$, extensions to other types of reactions (ignition with non-constant ignition temperature, monostable, bistable, and their mixtures) and to transitions between general equilibria $u^-<u^+$, as well as a treatment of more general types of solutions (trapped between time shifts of general time-increasing solutions, and not necessarily satisfying $u^-\le u\le u^+$). \smallskip The reasons for the new complications for $d\ge 2$, described above, are not just technical but stem from ``real world'' considerations in the case of {\it two or more unbounded dimensions.} (Note that the result in \cite{ZlaGenfronts} extends to the quasi-one-dimensional case of infinite cylinders in ${\mathbb{R}}^d$. The results described in Remark 2 after Theorem \ref{T.1.0} also have a (quasi-)one-dimensional nature due to either radial symmetry or periodicity.) First, one might think that the reaction zone will always coincide with a bounded neighborhood of some time-dependent hypersurface. This turns out not to be the case in general in dimensions $d\ge 2$, since without some order in the medium (such as periodicity) a fire will not always spread at roughly the same speed everywhere, so the initial spherical or hyperplanar shape of the reaction zone can become very distorted. In fact, areas of slowly burning material in the medium may cause the fire to propagate {\it around them faster than through them,} resulting in pockets of temporarily unburned material behind the leading edge of the fire. See Figure \ref{fig} for an illustration of this phenomenon, and the proof of Theorem \ref{T.1.6}(ii) for an extreme example of it. 
While these pockets will eventually burn up, variations in the medium can create arbitrarily many or even infinitely many of them (the latter for front-like solutions, although not for spark-like ones) at a given (large) time, and they can be arbitrarily large and occur arbitrarily far behind the leading edge as $t\to\infty$. As a result of this potentially complicated geometry of the reaction zones of general solutions of \eqref{1.1}, our definition of bounded width includes no requirements on the shape of the sets $\Omega_{u,\varepsilon}(t)$ or their boundaries. \begin{figure}[ht] \centering \scalebox{0.18}{\includegraphics{inhomog_figure.pdf}} \caption{On the shaded region $u\sim 1$, and on the white region $u\sim 0$.} \label{fig} \end{figure} It is worth noting that while one might think that this issue can only arise if the medium has large variations in combustivity, this is not the case either. In fact, as long as $f_0,f_1$ satisfy $c_0<c_1$, it is always possible to construct $f$ such that $f_0\le f\le f_1$ and the above situation (arbitrarily many unburned pockets which can be arbitrarily large and arbitrarily far behind the leading edge) does indeed occur for typical solutions $u$. In particular, it happens almost surely for stationary ergodic media with short-range correlations. Another critical issue, related to this, arises from the consideration of what happens to such an unburned pocket far behind the leading edge of the fire. It ``burns in'' from its perimeter and at the time when it is just about to be burned up (say when the minimum of $u$ on it is $\tfrac 3{4}$), the nearest point where $u$ is close to 0 (say $\le \tfrac 1{4}$) may be very far from the pocket. 
This shows that for general inhomogeneous media in dimensions $d\ge 2$, one may have {\it unbounded-in-time width of the transition zone from $u\sim 1$ to $u\sim 0$.} On the other hand, in the situation studied here when the invaded state $u= 0$ is either stable or relatively weakly unstable (the invading state $u=1$ clearly must be stable), pockets of burned material cannot form arbitrarily far ``ahead'' of the leading edge, unlike pockets of yet-unburned material ``behind'' the leading edge. (This is very different from the KPP case where $u=0$ is strongly unstable; see \cite{NRRZ} for examples of such media in one dimension, and the discussion after Definition \ref{D.1.1} for what may be done in that case.) This means that typical solutions will be {\it pushed} (as opposed to {\it pulled}), their propagation being driven by intermediate (rather than small) values of $u$. Thus one can still hope to see a {\it uniformly-in-time bounded width of the transition zone from $u\sim 0$ to $u\sim 1$}. This lack of symmetry between the spatial transitions $1\to 0$ and $0\to 1$ is why our definition of bounded width involves the Hausdorff distance of the super-level sets of $u$ but not of the sub-level sets (or of their boundaries). Let us conclude this introduction with a discussion of the convergence of typical solutions of the Cauchy problem to entire solutions (such as transition fronts) of \eqref{1.1} in several dimensions. In contrast to one dimension, it is unlikely that any general enough such results can be obtained for disordered media. Firstly, the disorder may result in reaction zones of solutions neither moving in a particular direction nor attaining a particular geometric shape. 
Secondly, in Theorem \ref{T.1.6}(ii) we construct media where any entire solution with uniformly-in-time bounded width of the transition zone from $u\sim 0$ to $u\sim 1$ satisfies $\lim_{t\to\infty} \inf_{x} u(t,x)=1$ (while typical solutions of the Cauchy problem have $\inf_{x} u(t,x)=0$ for each $t\in{\mathbb{R}}$). And thirdly, if such a result existed, one should also expect the following Liouville-type claim to hold: If a solution $u$ is initially between two time translates of a front-like (or spark-like) solution $v$ (and so by the comparison principle, $v(\cdot,\cdot)\le u (T+\cdot,\cdot)\le v(2T+\cdot,\cdot)$ for some $T\ge 0$), then for any $\varepsilon>0$ there is $T_\varepsilon>0$ such that for any $(t,x)\in[T_\varepsilon,\infty)\times {\mathbb{R}}^d$, \begin{equation} \label{1.00} \|u(t,\cdot)-v(t+\tau_{t,x},\cdot)\|_{L^\infty({B_{1/\varepsilon}(x)})}<\varepsilon \end{equation} for some $|\tau_{t,x}|\le T$. That is, $u$ should locally look more and more like a (possibly $(t,x)$-dependent) time translate of $v$ as time progresses. Somewhat surprisingly, this claim is false in general in dimensions $d\ge 2$, even if $v$ is required to be an entire solution. This is for non-pathological reasons and we discuss a counter-example in Section \ref{S7}. Nevertheless, despite the likely lack of sufficiently general results on convergence to transition fronts or other entire solutions in general disordered media, these solutions will still play an important role in our analysis. This is because one can use parabolic regularity to build entire solutions from those of the Cauchy problem sampled near any sequence of points $(t_n,x_n)$ with $t_n\to\infty$, so results for the former can be used in the analysis of the latter. Finally, let us mention that our results can be extended to some more general PDEs, with $x$-dependent second order terms as well as first order terms with divergence-free coefficients. This will be done elsewhere. 
\section{The Definition of Bounded Width and the Main Results} \label{S1a} Let us now turn to our main results. We will first assume that $u\in[0,1]$ and $f\ge 0$ is Lipschitz and bounded below by some homogeneous {\it pure ignition} reaction $f_0$. (Later we will consider more general situations.) We will thus assume the following. \medskip {\it Hypothesis (H): $f$ is Lipschitz with constant $K\ge 1$ and \begin{equation}\label{0.ac} f(x,0)=f(x,1)=0 \qquad \text{for $x\in{\mathbb{R}}^d$.} \end{equation} There are also $\theta_0\in (0,1)$ and a Lipschitz function $f_0:[0,1]\to[0,\infty)$ with $f_0(u)=0$ for $u\in[0,\theta_0]\cup\{1\}$ and $f_0(u)>0$ for $u\in(\theta_0,1)$ such that \[ f(x,u)\ge f_0(u) \qquad \text{for $(x,u)\in {\mathbb{R}}^d\times [0,1]$}. \] Finally, there is $\theta\in[0,\tfrac 13]$ such that $f(x,u)=0$ for $(x,u)\in {\mathbb{R}}^d\times[0,\theta]$ and $f$ is non-increasing in $u$ on $[1-\theta,1]$ for each $x\in{\mathbb{R}}^d$. If such $\theta>0$ exists, then $f$ is an {\it ignition reaction}, otherwise (in which case the last hypothesis is vacuous) $f$ is a {\it monostable reaction}.} \medskip {\it Remarks.} 1. The definition of ignition reactions sometimes also includes existence of $\tilde\theta(x)\in[\theta,\theta_0]$ such that $f(x,u)>0$ if and only if $u\in(\tilde\theta(x),1)$, which we call the {\it pure ignition} case. We will not need this stronger hypothesis here. \smallskip 2. While the requirement of $f$ being non-increasing in $u$ on $[1-\theta,1]$ is not always included in the definition of ignition reactions, many results for them need to assume it. This includes our main results, although the hypothesis is not needed for their slightly weaker versions (specifically, not including those statements which use Theorem \ref{T.1.5}(ii) below). 
Notice also that we can assume without loss that $f_0$ is non-increasing on $[1-\delta,1]$ for some $\delta>0$ because this can be achieved after replacing $f_0(u)$ by $\min_{v\in[1-\delta,u]} f_0(v)$. Thus $f_0$ is itself an ignition reaction according to the above definition. \smallskip For a set $A\subseteq{\mathbb{R}}^d$ and $r>0$, we let $B_r(A):=\bigcup_{x\in A} B_r(x)$ (for $r\le 0$ we define $B_r(A):=\emptyset$). If $u:(t_0,\infty)\times{\mathbb{R}}^d\to[0,1]$ and $\varepsilon\in[0,1]$, we let $\Omega_{u,\varepsilon}(t):=\{x\in{\mathbb{R}}^d\,|\, u(t,x)\ge \varepsilon\}$ for $t>t_0$. For $\varepsilon\in(0,\tfrac 12)$, the {\it width of the transition zone} of $u$ from $\varepsilon$ to $1-\varepsilon$ at time $t>t_0$ is \begin{equation} \label{1.3} L_{u,\varepsilon}(t) := \inf \left\{L>0 \,\big|\, \Omega_{u,\varepsilon}(t)\subseteq B_L \left(\Omega_{u,1-\varepsilon}(t)\right) \right\}, \end{equation} with the usual convention $\inf\emptyset=\infty$. For $\varepsilon\in(\tfrac 12,1)$ the corresponding width is \begin{equation} \label{1.3a} L_{u,\varepsilon}(t) := \inf \left\{L>0 \,\big|\, {\mathbb{R}}^d\setminus \Omega_{u,\varepsilon}(t) \subseteq B_L \left({\mathbb{R}}^d\setminus \Omega_{u,1-\varepsilon}(t) \right) \right\}. \end{equation} Finally, for $\varepsilon\in(0,\tfrac 12)$ we also define the minimal length of transition from $(\varepsilon,1-\varepsilon)$ to either $\varepsilon$ or $1-\varepsilon$ to be \begin{equation} \label{1.3aa} J_{u,\varepsilon}(t) := \inf \left\{L>0 \,\big|\, {\mathbb{R}}^d = B_L \left( \Omega_{u,1-\varepsilon}(t) \cup \left[{\mathbb{R}}^d\setminus \Omega_{u,\varepsilon}(t) \right] \right) \right\} . \end{equation} For the above to be perfectly symmetric, we could replace ${\mathbb{R}}^d\setminus\Omega_{u,\varepsilon}(t)$ by ${\mathbb{R}}^d\setminus\bigcup_{\varepsilon'>1-\varepsilon}\Omega_{u,\varepsilon'}(t)$, but as we mentioned in the introduction, \eqref{1.3a} and \eqref{1.3aa} will not play a major role here. 
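To illustrate \eqref{1.3}, consider the simplest case of a traveling front in $d=1$: $u(t,x)=U(x-ct)$ with $U:{\mathbb{R}}\to(0,1)$ decreasing, $\lim_{s\to-\infty}U(s)=1$, and $\lim_{s\to\infty}U(s)=0$. Then $\Omega_{u,\varepsilon}(t)=(-\infty,\,ct+U^{-1}(\varepsilon)]$ for each $\varepsilon\in(0,1)$, so
\[
L_{u,\varepsilon}(t) = U^{-1}(\varepsilon)-U^{-1}(1-\varepsilon) < \infty \qquad \text{for each $t$ and $\varepsilon\in\left(0,\tfrac 12\right)$,}
\]
and this width is independent of $t$. The same computation applies to \eqref{1.3a} and \eqref{1.3aa}, so such fronts have doubly-bounded (and hence also bounded and semi-bounded) widths.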
\begin{definition} \label{D.1.1} Let $u:(t_0,\infty)\times{\mathbb{R}}^d\to[0,1]$ be a solution of \eqref{1.1} with $t_0\in[-\infty,\infty)$. We say that $u$ has a {\it bounded width (with respect to 0 and 1)} if for any $\varepsilon\in(0,\tfrac 12)$ we have \begin{equation} \label{1.4} L^{u,\varepsilon} := \lim_{T\to\infty} \sup_{t>t_0+T} L_{u,\varepsilon}(t) < \infty. \end{equation} We say that $u$ has a {\it doubly-bounded width} if \eqref{1.4} holds for any $\varepsilon\in(0,\tfrac 12)\cup (\tfrac 12,1)$. And we say that $u$ has a {\it semi-bounded width} if for any $\varepsilon\in(0,\tfrac 12)$ we have \begin{equation} \label{1.4f} J^{u,\varepsilon} := \lim_{T\to\infty} \sup_{t>t_0+T} J_{u,\varepsilon}(t) < \infty. \end{equation} \end{definition} {\it Remarks.} 1. Notice that if $t_0=-\infty$, then $L^{u,\varepsilon} = \sup_{t\in{\mathbb{R}}} L_{u,\varepsilon}(t)$ and $J^{u,\varepsilon} = \sup_{t\in{\mathbb{R}}} J_{u,\varepsilon}(t)$. For $t_0>-\infty$, however, these quantities are defined only asymptotically. One reason for this is that if $\sup_{x\in{\mathbb{R}}^d} u_0(x)<1$, then $\sup_{x\in{\mathbb{R}}^d} u(t,x)<1$ for any $t>t_0$. Thus for any $\varepsilon\in(0,\tfrac 12)$, $L_{u,\varepsilon}(t)$ will equal $\infty$ up to some time $t_\varepsilon$ ($\to\infty$ as $\varepsilon\to 0$). \smallskip 2. We trivially have that $L^{u,\varepsilon}$ is non-increasing in $\varepsilon\in(0,\tfrac 12)$ (as is $J^{u,\varepsilon}$) and non-decreasing in $\varepsilon\in(\tfrac 12,1)$, so in fact the definition only needs to involve $\varepsilon$ close to 0 and 1. \smallskip While the definition of bounded width involves $\varepsilon\in(0,\tfrac 12)$, we do not make one involving only $\varepsilon\in(\tfrac 12,1)$. This lack of symmetry was explained in the introduction, and is due to the possibility of existence of unburned pockets with $u\sim 0$ behind the leading edge of the reaction zone. 
Hence, typical solutions $u$ in general disordered media may have $L^{u,\varepsilon}=\infty$ for $\varepsilon\in(\tfrac 12,1)$. In particular, they would not have doubly-bounded widths, but may still have bounded widths, at least when $u\equiv 0$ is a stable equilibrium. If the equilibrium $u\equiv 0$ is strongly unstable (such as for KPP $f$), bounded width is also too much to hope for in some situations, even when $d=1$. Indeed, an easy extension of the construction from \cite{NRRZ} yields media where burned pockets with $u\sim 1$ can form arbitrarily far ahead of the leading edge of the reaction zone. While we do not study this case here, we introduce the concept of semi-bounded width in Definition \ref{D.1.1} because it is likely to be relevant in such situations. We next define the propagation speed of (the reaction zone of) $u$ (cf. \cite{BH3}). \begin{definition} \label{D.1.1b} Let $u:(t_0,\infty)\times{\mathbb{R}}^d\to[0,1]$ be a solution of \eqref{1.1} with $t_0\in[-\infty,\infty)$, and let $0<c\le c'\le\infty$. We say that {\it $u$ propagates with global mean speed in $[c,c']$} if for any $\varepsilon\in(0,\tfrac 12)$ and $\delta>0$ there are $T_{\varepsilon,\delta},\tau_{\varepsilon,\delta}<\infty$ such that \begin{equation}\label{1.3g} B_{(c-\delta)\tau} \left(\Omega_{u,\varepsilon}(t) \right) \subseteq \Omega_{u,1-\varepsilon}(t+\tau) \qquad\text{and}\qquad \Omega_{u,\varepsilon}(t+\tau) \subseteq B_{(c'+\delta)\tau} \left(\Omega_{u,1-\varepsilon}(t) \right) \end{equation} whenever $t> t_0+T_{\varepsilon,\delta}$ and $\tau\ge \tau_{\varepsilon,\delta}$. If any such $0<c\le c'\le\infty$ exist, we also say that {\it $u$ propagates with a positive global mean speed.} \end{definition} {\it Remarks.} 1. If $t_0=-\infty$, then obviously $t\in{\mathbb{R}}$ above is arbitrary. \smallskip 2. Notice that the definition would be unchanged if we took $T_{\varepsilon,\delta}=\tau_{\varepsilon,\delta}$. 
However, this formulation will be more convenient for us because we will show that under certain conditions, $\tau_{\varepsilon,\delta}$ (but not necessarily $T_{\varepsilon,\delta}$) will be independent of $f,u$. \smallskip We now let $c_0$ be the front/spreading speed associated with the homogeneous reaction $f_0$. That is, $c_0$ is the {\it unique} value such that \eqref{1.1} for $d=1$ and with $f_0(u)$ in place of $f(x,u)$ has a traveling front solution $u(t,x)=U(x-c_0t)$ with $\lim_{s\to -\infty}U(s)=1$ and $\lim_{s\to \infty}U(s)=0$. We also let $f_1:[0,1]\to[0,\infty)$ be any Lipschitz function with constant $K$ such that \begin{equation} \label{1.4e} f(x,u)\le f_1(u) \qquad \text{for $(x,u)\in {\mathbb{R}}^d\times [0,1]$}, \end{equation} which is also pure ignition if $\theta>0$ in (H) and pure monostable (i.e., $f_1(0)=f_1(1)=0$ and $f_1(u)>0$ for $u\in(0,1)$) otherwise. For instance, we could pick $f_1(u):=\sup_{x\in{\mathbb{R}}^d} f(x,u)$, if this function is pure ignition/monostable. We also let $c_1$ be the front/spreading speed associated with $f_1$ (which is again the unique traveling front speed if $f_1$ is ignition, and it is the {\it minimal} traveling front speed if $f_1$ is monostable). The existence of $c_0, c_1$ is well known, as is the fact that $f_0(u)\le f_1(u)\le Ku$ implies $c_0\le c_1\le 2\sqrt K$. Our main results below say that under appropriate (quite general and physically relevant) qualitative hypotheses on the reaction, typical solutions of \eqref{1.1} have bounded widths and (their reaction zones) propagate with global mean speeds in the interval $[c_0,c_1]$. They also eventually grow in time on any closed interval of values of $u$ contained in $(0,1)$. Specifically, we will prove the following conclusion for typical solutions $u$. 
\medskip {\it Conclusion (C): For any $\varepsilon\in(0,\tfrac 12)$, there are $\ell_\varepsilon,m_\varepsilon, T_\varepsilon\in(0,\infty)$ such that \begin{equation} \label{1.7} \sup_{t> t_0+T_\varepsilon} L_{u,\varepsilon}(t)\le \ell_\varepsilon \qquad\text{and}\qquad \inf_{\substack{(t,x)\in(t_0+T_\varepsilon,\infty) \times {\mathbb{R}}^d \\ u(t,x)\in[\varepsilon,1-\varepsilon]}} u_t(t,x) \ge m_\varepsilon. \end{equation} In particular, $L^{u,\varepsilon}\le \ell_\varepsilon$, so $u$ has a bounded width. Moreover, if a pure ignition $f_1$ satisfies \eqref{1.4e}, then $u$ propagates with global mean speed in $[c_0,c_1]$.} \medskip Moreover, $\ell_\varepsilon,m_\varepsilon$ as well as $\tau_{\varepsilon,\delta}$ from Definition \ref{D.1.1b} will depend on some uniform bounds on the reaction, but neither on the reaction itself nor on the solution. That is, the spatial scale on which the transition from $u\sim 0$ to $u\sim 1$ happens as well as the temporal scale on which the global mean speed of (the reaction zone of) $u$ is observed to be in $[c_0,c_1]$, will become independent of $f,u$ after an initial time interval. Note that the expanding super-level sets $\Omega_{u,\varepsilon}(t)$ may also be weak solutions of appropriate Hamilton-Jacobi equations. The connection between these two types of PDEs is well established in homogenization theory for various types of media (e.g., periodic or stationary ergodic); see for instance \cite{Freidlin, LioSou}. It will be explored, via our results, for general disordered media elsewhere. \bigskip \noindent {\bf Solutions of the Cauchy Problem with Bounded Widths} \smallskip \smallskip We will first show that for ignition reactions, (C) holds in dimensions $d\le 3$, but not in dimensions $d\ge 4$ (under the same qualitative hypotheses). A crucial additional (and necessary) hypothesis, which is automatically satisfied in the case of constant ignition temperature $\theta_0$, relates to the following definition (see Remarks 1 and 2 below). 
It says that if, for any $x\in{\mathbb{R}}^d$, we increase $u$ from 0 to 1, then once $f(x,u)$ becomes large enough, it cannot become arbitrarily small until $u\sim 1$, as illustrated in Figure \ref{fig2}. \begin{figure}[ht] \centering \scalebox{0.29}{\includegraphics{reaction_inhomog_2.pdf}} \caption{Example of a reaction from Definition \ref{D.1.1a} (at some fixed $x\in{\mathbb{R}}^d$).} \label{fig2} \end{figure} \begin{definition} \label{D.1.1a} Let $f_0,K,\theta$ be as in (H) and let $\zeta, \eta>0$. If $f$ satisfies (H), define \begin{equation} \label{1.4a} \alpha_f(x) = \alpha_f(x;\zeta):= \inf \{ u\ge 0 \,|\, f(x,u)> \zeta u \}, \end{equation} (with $\inf\emptyset=\infty$) and let $F(f_0,K,\theta,\zeta,\eta)$ be the set of all $f$ satisfying (H) such that \begin{equation} \label{1.4b} \inf_{\substack{x\in{\mathbb{R}}^d \\ u\in[\alpha_f(x),\theta_0]}} f(x,u) \ge \eta. \end{equation} \end{definition} {\it Remarks.} 1. We will require that $f\in F(f_0,K,\theta,\zeta,\eta)$ for some {\it not too large} $\zeta>0$ and some $\eta>0$. This assumption is physically relevant and encompasses a large class of functions. A natural example is the pure ignition reaction from Remark 1 after (H), when also \begin{equation} \label{1.4c} \tilde\eta:=\inf_{\substack{x\in{\mathbb{R}}^d \\ u\in[\tilde\theta(x)+\delta,\theta_0]}} f(x,u) >0 \end{equation} for some $\delta>0$ (in that case $f\in F(f_0,K,\theta,\zeta,\eta)$ for any $\zeta\ge \tfrac K\theta \delta$ and $\eta\in(0,\tilde\eta]$). \smallskip 2. Notice that this definition is not necessary when $f$ has a constant ignition temperature: if $f(x,u)= 0$ for $(x,u)\in {\mathbb{R}}^d\times[0,\theta_0]$, then $f$ from (H) is in $F(f_0,K,\theta,\zeta,\eta)$ for any $\zeta,\eta>0$. This is the case in Theorem \ref{T.1.0} (the special case $f(x,u)=a(x)f_0(u)$ was also considered in \cite{MRS,NolRyz,MNRR} in one spatial dimension). \smallskip 3. 
Note that $F(f_0,K,\theta,\zeta,\eta)$ is spatially translation invariant and closed under locally uniform convergence of functions. It is also decreasing in its odd arguments and increasing in the even ones. In particular, $F(f_0,K,\theta,\zeta,\eta)\subseteq F(f_0,K,0,\zeta,\eta)$. These facts will be useful later, as well as the obvious $\alpha_f(x)\ge \tfrac \eta K$ for $f\in F(f_0,K,\theta,\zeta,\eta)$. \smallskip Without (some version of) the assumption from Remark 1, solutions of \eqref{1.1} need not have bounded widths even when $d=1$ and $f$ is a homogeneous ignition reaction! Indeed, assume that $f:[0,1]\to[0,\infty)$ is such that $f(u)=0$ for $u\in[0,\tfrac 14]$, $f(u)>0$ for $u\in(\tfrac 14,\tfrac 12)$, and $f(u)= 2f(u-\tfrac 12)$ for $u\in [\tfrac 12,1]$. Such $f$ vanishes on $[\tfrac 12,\tfrac 34]$ and so belongs to $F(f_0,K,\theta,\zeta,\eta)$ {\it only for large $\zeta$} (specifically $\zeta\ge\|f(u)/u\|_\infty$). For such $f$, there obviously is a traveling front solution $u(t,x)=U(x-ct)$ of \eqref{1.1} connecting 0 and $\tfrac 12$ (i.e., such that $\lim_{s\to-\infty}U(s)=\tfrac 12$ and $\lim_{s\to\infty}U(s)=0$) and another $u(t,x)=\tfrac 12 + U(\sqrt 2(x-\sqrt 2ct))$ connecting $\tfrac 12$ and 1. Their speeds are $0<c<\sqrt 2c$ and a simple comparison principle argument shows that all spark-like and front-like solutions have a linearly in time growing {\it propagating terrace}: \begin{equation} \label{1.4d} \lim_{t\to\infty} \sup_{x\in [(c+\delta)t, (\sqrt 2c-\delta)t]} \left| u(t,x)-\frac 12 \right| = 0 \end{equation} for any $\delta>0$ (see \cite{DGM} for further results of this nature). In particular, they do not have bounded widths. Of course, for such solutions one can separately study the transition from 0 to $\tfrac 12$ and that from $\tfrac 12$ to 1, using our results. Hence, the latter can also be applied in some situations when \eqref{1.4b} is not satisfied for any $\eta>0$ (and some not too large $\zeta>0$). 
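For completeness, let us verify the scaling used for the second front above. The traveling front equation for $u(t,x)=U(x-ct)$ is $U''+cU'+f(U)=0$, and then $w(t,x):=\tfrac 12+U(s)$ with $s:=\sqrt 2\,(x-\sqrt 2\,ct)$ satisfies
\[
w_t - w_{xx} \,=\, -2c\,U'(s) - 2\,U''(s) \,=\, 2f(U(s)) \,=\, f\left(\tfrac 12 + U(s)\right) \,=\, f(w)
\]
by the chain rule, the equation for $U$, and $f(u)=2f(u-\tfrac 12)$ on $[\tfrac 12,1]$. Hence $w$ is indeed a traveling front connecting $\tfrac 12$ and 1, with speed $\sqrt 2c$.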
We are now ready to state our first main result, which applies to general {\it spark-like} and {\it front-like} initial data $u_0\in[0,1]$. Specifically, we will assume that either there are $x_0\in{\mathbb{R}}^d$, $R_2\ge R_1>0$, and $\varepsilon_1,\varepsilon_2>0$ such that \begin{equation} \label{1.5} (\theta_0+\varepsilon_1) \chi_{B_{R_1}(x_0)}(x) \le u_0(x)\le e^{-\varepsilon_2(|x-x_0|-R_2)}, \end{equation} or there are $e\in{\mathbb{S}}^{d-1}$, $R_2\ge R_1$, and $\varepsilon_1,\varepsilon_2>0$ such that \begin{equation} \label{1.6} (\theta_0+\varepsilon_1) \chi_{\{x\,|\,x\cdot e<R_1\}}(x) \le u_0(x)\le e^{-\varepsilon_2(x\cdot e-R_2)}. \end{equation} In \eqref{1.5} we also assume that $R_1$ is large enough (depending on $\varepsilon_1$) to guarantee {\it spreading} (i.e., $\lim_{t\to\infty}u(t,x)= 1$ locally uniformly in ${\mathbb{R}}^d$), because otherwise one might have {\it quenching} (i.e., $\lim_{t\to\infty} \|u(t,\cdot)\|_\infty =0$) for ignition reactions. \begin{theorem} \label{T.1.2} (i) Let $f_0,K,$ and $\theta>0$ be as in (H) and let $\eta>0$, $\zeta\in(0,c_0^2/4)$, and \hbox{$f\in F(f_0,K,\theta,\zeta,\eta)$.} Let $u$ solve \eqref{1.1}, \eqref{1.2} with spark-like or front-like $u_0\in[0,1]$ as above. If $d\le 3$, then (C) holds with $\ell_\varepsilon,m_\varepsilon$ depending only on $\varepsilon,f_0,K,\zeta,\eta$, and $\tau_{\varepsilon,\delta}$ in Definition~\ref{D.1.1b} also depending on $\delta,f_1$. (ii) If $d\ge 4$, then there is $f$ as in (H) with $\theta>0$ and $f(x,u)= 0$ for $(x,u)\in {\mathbb{R}}^d\times[0,\theta_0]$ (so that $f\in F(f_0,K,\theta,\zeta,\eta)$ for any $\zeta,\eta>0$) such that all claims in (C) are false for any $u_0\in[0,1]$ supported in the left half-space for which $\limsup_{t\to\infty}\|u(t,\cdot)\|_\infty>0$. \end{theorem} {\it Remarks.} 1. As noted before, the hypothesis $\zeta<c_0^2/4$ is crucial in (i). 
It guarantees that the reaction at small $u$ (where $f(x,u)\le \zeta u$) is not strong enough to cause spreading at speeds $\ge c_0$. This is because spreading speeds for homogeneous reactions bounded above by $\zeta u$ are no more than $2\sqrt\zeta<c_0$. Since $f\ge f_0$ has spreading speed no less than $c_0$, one should then expect spreading to be driven by ``intermediate'' values of $u$ (above $\alpha_f(x)$ and not too close to 1, where $f$ is small). Thus $u$ would be a ``pushed'' solution, and one can hope for it to have a bounded width, provided one can also show that values of $u$ close to 1 do not ``trail'' far behind the intermediate ones. We will prove the latter for $d\le 3$ but also show in (ii) that it fails in general for $d\ge 4$. \smallskip 2. Note that the second claim in \eqref{1.7} and parabolic regularity show that $\Omega_{u,\varepsilon}(t)$ grows with {\it instantaneous speed} greater than some positive constant at all times $t\ge t_0+T_\varepsilon$ in (i). An upper bound on the instantaneous speed of growth does not exist in general, however, because for $\varepsilon\in(0,\tfrac 12)$, $\Omega_{u,\varepsilon}(t)$ may acquire new connected components (which then soon merge with the ``main'' component) as time progresses. \smallskip 3. As the proof of (i) shows, $T_\varepsilon$ in (C) depends on $\varepsilon,f_0,K,\zeta,\eta,\theta,R_2-R_1,\varepsilon_1,\varepsilon_2$, and $T_{\varepsilon,\delta}$ in Definition \ref{D.1.1b} also depends on $\delta, f_1$. \smallskip 4. The result extends to monostable reactions in a weaker form. 
(ii) holds without change (the counter-example we construct is easily modified) but in (i) we need to assume that either there are $R_1,R_2,\varepsilon_1>0$ ($R_1$ sufficiently large, depending on $\varepsilon_1$) and $x_0\in{\mathbb{R}}^d$ such that \begin{equation} \label{1.5a} (\theta_0+\varepsilon_1) \chi_{B_{R_1}(x_0)}(x) \le u_0(x)\le \chi_{B_{R_2}(x_0)}(x), \end{equation} or there are $R_1,R_2\in{\mathbb{R}}$, $\varepsilon_1,\varepsilon_2>0$, and $e\in{\mathbb{S}}^{d-1}$ such that \begin{equation} \label{1.6a} (\theta_0+\varepsilon_1) \chi_{\{x\,|\,x\cdot e<R_1\}}(x) \le u_0(x)\le (1-\varepsilon_2) \chi_{\{x\,|\,x\cdot e<R_2\}}(x). \end{equation} Then for any $\varepsilon\in(0,\tfrac 12)$ there are $\ell_\varepsilon,T_\varepsilon\in(0,\infty)$, depending on $\varepsilon,f_0,K,\zeta,\eta,\varepsilon_1$, and either on $R_2$ (for \eqref{1.5a}) or on $R_2-R_1,\varepsilon_2$ (for \eqref{1.6a}), such that $L_{u,\varepsilon}(t)\le \ell_\varepsilon$ for $t> t_0+T_\varepsilon$. \smallskip The first step in the proof of Theorem \ref{T.1.2}(i) will be to consider general solutions with $u_t\ge 0$. That is, such that on ${\mathbb{R}}^d$, \begin{equation} \label{3.6a} \Delta u_0(\cdot) + f(\cdot,u_0(\cdot))\ge 0, \end{equation} which then guarantees $u_t\ge 0$ because $v:=u_t$ solves $v_t=\Delta v + f_u(x,u(t,x))v$ with $v(0,x)\ge 0$. For $d\le 3$ we will show that if the width of the reaction zone of such $u$ is controlled at the initial time $t_0$ (see \eqref{1.6b} below), then the conclusions of Theorem \ref{T.1.2}(i) continue to hold. This step is related to our proof of existence of transition fronts in \cite{ZlaGenfronts}, but will be considerably more involved, particularly for $d=3$. This latter result applies to any such solution $u$ (as well as solutions trapped between time-shifts of such $u$), not just the spark-like or front-like ones, and is stated next. 
We let \begin{equation} \label{1.6b} L_{u,\varepsilon,\varepsilon'}(t) := \inf \left\{L>0 \,\big|\, \Omega_{u,\varepsilon}(t)\subseteq B_L \left(\Omega_{u,\varepsilon'}(t)\right) \right\} \end{equation} be the width of the transition zone from $\varepsilon$ to $\varepsilon'$. We will assume that $L_{u,\varepsilon,\varepsilon'}(t_0)<\infty$ for each $\varepsilon>0$ and some fixed $\varepsilon'>\theta_0$. Here $\varepsilon'$ can be arbitrary when $d\le 2$, and equals $1-\varepsilon_0$ when $d=3$ (with $\varepsilon_0=\varepsilon_0(f_0,K)>0$ from Lemma \ref{L.2.1} below). This choice of $\varepsilon'$ will guarantee spreading for any solution satisfying \eqref{3.6a} and $u(t_0,x)\ge \varepsilon'$ for some $x\in{\mathbb{R}}^{d}$. \begin{theorem} \label{T.1.3} Let $d\le 3$, let $f_0,K,$ and $\theta>0$ be as in (H), and let $\eta>0$, $\zeta\in(0,c_0^2/4)$, and $f\in F(f_0,K,\theta,\zeta,\eta)$. Let $u$ solve \eqref{1.1}, \eqref{1.2} with $u_0\in[0,1]$ satisfying \eqref{3.6a}. (i) If $\varepsilon'$ is as above and $L_{u,\varepsilon,\varepsilon'}(t_0)<\infty$ for each $\varepsilon>0$, then (C) holds with $\ell_\varepsilon,m_\varepsilon$ depending only on $\varepsilon,f_0,K,\zeta,\eta$, and $\tau_{\varepsilon,\delta}$ in Definition \ref{D.1.1b} also depending on $\delta,f_1$. (ii) If $u$ is as in (i), and a solution $v$ of \eqref{1.1} satisfies \[ u(t_0,\cdot)\le v(t_0+\tau,\cdot)\le u(t_0+2\tau,\cdot) \] for some $\tau>0$, then (C) holds for $v$ with $\ell_\varepsilon,m_\varepsilon,\tau_{\varepsilon,\delta}$ as in (i) (so independent of $\tau$). \end{theorem} {\it Remarks.} 1. In (i), $T_\varepsilon$ in (C) depends on $\varepsilon,f_0,K,\zeta,\eta,\theta, u_0$, the dependence on $u_0$ being only via the number $L_{u,h,\varepsilon'}(t_0)$ with $h:=\min\left\{ \theta(c_0^2-4\zeta)(c_0^2+4\zeta)^{-1},\frac\eta{4K},1-\varepsilon',\tfrac \varepsilon 2 \right\}$ (see the proof); $T_{\varepsilon,\delta}$ in Definition \ref{D.1.1b} also depends on $\delta, f_1$. 
In (ii) they also depend on $\tau$. \smallskip 2. (i) extends to monostable $f$ if we also assume $\sup_{\varepsilon\in(0,1)} \varepsilon e^{\sqrt\zeta L_{u,\varepsilon,\varepsilon'}(t_0)}< \infty$, but with global mean speed in $[c_0,c_Y']$, where $c_Y'$ is from \eqref{3.0a} below. \smallskip 3. (ii) also extends to monostable $f$ if we assume $\sup_{\varepsilon\in(0,1)} \varepsilon e^{\sqrt\zeta L_{u,\varepsilon,\varepsilon'}(t_0)}< \infty$, but with $\tau$-dependent $\ell_\varepsilon,\tau_{\varepsilon,\delta}$, without the second claim in \eqref{1.7}, and with global mean speed in $[c_0,c_Y']$.\smallskip Notice that in (ii), the bounds in (C) are independent of the time shift $\tau$. To prove this, we will first need to show such solution-independent bounds for entire solutions with bounded widths when $d\le 3$. In particular, as long as such a solution has a bounded width, the bound on $\sup_{t\in{\mathbb{R}}} L_{u,\varepsilon}(t)$ (for $\varepsilon\in(0,\tfrac 12)$) will in fact {\it only depend on $\varepsilon, f_0,K,\zeta,\eta$.} It will then suffice to show, using parabolic regularity, that the solutions from (ii) asymptotically look like entire solutions with bounded widths, where the bounds involved will be allowed to depend on $\tau$. A crucial ingredient in this will be the proof that entire solutions with bounded widths satisfy $u_t\ge 0$ (in all dimensions). Such a result was previously proved in \cite{BH3} for transition fronts in a closely related setting. This and the uniform bounds for entire solutions with bounded widths are stated in Theorem \ref{T.1.5} below. Theorem \ref{T.1.2}(i) is proved similarly to Theorem \ref{T.1.3}(ii), but the solution will be sandwiched between time-shifts of a time-increasing solution, perturbed by certain exponentially (in space) decreasing functions. We will therefore also need to prove stability of spark-like and front-like time-increasing solutions with respect to such perturbations. 
This could be extended to other situations where time-increasing solutions with some specific profiles are stable with respect to appropriate (exponentially decreasing) perturbations. For instance, one could handle in this way {\it cone-like} solutions, with initial data exponentially decreasing inside a $d$-dimensional cone and converging to 1 outside it. We will not pursue this direction here. We also note that these results cannot be extended to arbitrary spreading solutions, even for homogeneous pure ignition reactions $f(x,u)=f_0(u)$ and $d=1$. Indeed, the author showed \cite{ZlaSharp} that then there exists a unique $M>0$ such that the solution of \eqref{1.1}, \eqref{1.2} with $u_0:=\chi_{[-M,M]}$ converges locally uniformly to $\theta_0$ as $t\to\infty$. If we now let $R\gg 1$ and $u_0:=\chi_{[-R,R]}+\sum_{n=1}^\infty \chi_{[a_n-M,a_n+M]}$ with sufficiently rapidly growing $a_n$, the solution $u$ will have increasingly long plateaus as $t\to\infty$. Specifically, there will be $t_n,b_n\to\infty$ such that \[ \lim_{n\to\infty} \sup_{x\in[a_n-b_n,a_n+b_n]} |u(t_n,x)-\theta_0|=0. \] Such $u$ therefore does not even have a {\it semi-bounded} width! Finally, most of the argument for $d\le 3$ also applies if $d\ge 4$, the one exception being Lemma~\ref{L.3.2} below. The reason it fails for $d\ge 4$ lies in Lemma \ref{L.2.2}, which only excludes existence of equilibrium solutions to \eqref{1.1} which are independent of one coordinate when $d\le 3$. Such solutions will be the basis of the counter-example proving Theorem \ref{T.1.2}(ii). 
\bigskip \noindent {\bf Extensions to More General Reactions, Equilibria, and Solutions} \smallskip \smallskip Let us now discuss the more general case when typical solutions transition from some equilibrium $u^-$ to another equilibrium $u^+$ (instead of from 0 to 1), with $u^-<u^+$ and \begin{equation}\label{1.19} 0<\inf_{x\in{\mathbb{R}}^d} [u^+(x)-u^-(x)] \le \sup_{x\in{\mathbb{R}}^d} [u^+(x)-u^-(x)]<\infty \end{equation} (the case $u^->u^+$ is identical, as one can consider the equation for $-u$ instead). Our goal is to extend the positive results in Theorems \ref{T.1.2}(i) and \ref{T.1.3} to such situations. We will assume $u^-\equiv 0$ without loss, because the general case is immediately reduced to this by taking $v:=u-u^-$, which solves \eqref{1.1} with $f$ replaced by \[ g(x,v):=f(x,v+u^-(x))-f(x,u^-(x)). \] Obviously, we can also assume $u^+\le 1$, by \eqref{1.19} and after scaling in $u$. Thus we will now assume the following generalization of (H). \medskip {\it Hypothesis (H'): $f$ is Lipschitz with constant $K\ge 1$ and \[ f(x,0)=0 \qquad \text{for $x\in{\mathbb{R}}^d$.} \] There are also $0<\theta_0< \theta_1\le 1$ and Lipschitz $f_0:[0,\theta_1]\to{\mathbb{R}}$ with $f_0(0)=f_0(\theta_0)=f_0(\theta_1)=0$, $f_0(u)< 0$ for $u\in(0,\theta_0)$, and $f_0(u)>0$ for $u\in(\theta_0,\theta_1)$, such that $\int_0^{\theta_1} f_0(u)du>0$ and \[ f(x,u)\ge f_0(u) \qquad \text{for $(x,u)\in {\mathbb{R}}^d\times [0,\theta_1]$}. 
\] Furthermore, we assume that there is an equilibrium solution $u^+$ of \eqref{1.1} with \begin{equation} \label{1.20} \theta_0<\inf_{x\in{\mathbb{R}}^d} u^+(x)\le\sup_{x\in{\mathbb{R}}^d} u^+(x)\le 1, \end{equation} and we have \begin{equation}\label{1.24} \text{$f(x,u)\ge 0$ when $u< 0$} \qquad \text{and} \qquad \text{$f(x,u)\le f(x,u^+(x))$ when $u> u^+(x)$.} \end{equation} Finally, there is $\theta\in[0,\tfrac {\theta_0}3]$ such that $f$ is non-increasing in $u$ on $[0,\theta]$ and on $[u^+(x)-\theta,u^+(x)]$ for each $x\in{\mathbb{R}}^d$ ($\theta=0$ obviously always works but we will obtain stronger results when $\theta>0$).} \medskip That is, $f_0$ is now a {\it pure bistable reaction} (while $f_1$ in \eqref{1.4e} will still be pure ignition or pure monostable), so $f$ could be any mix of different reaction types. The hypothesis $\int_0^{\theta_1} f_0(u)du>0$ is necessary for solutions of \eqref{1.1}, \eqref{1.2}, with reaction $f_0$ and large enough $u_0\in[0,\theta_1]$, to spread (i.e., $\lim_{t\to\infty} u(t,x)= \theta_1$ locally uniformly). In fact, it guarantees that the front/spreading speed $c_0$ for this $f_0$ (which corresponds to the traveling front for $f_0$ connecting $0$ and $\theta_1$, and is unique just as for ignition reactions) is positive. Thus, typical non-negative solutions of \eqref{1.1} transition {\it away from} $u=0$. Transition {\it to} $u^+$ is, however, not guaranteed by (H') only. Finally, \eqref{1.24} will be needed in Theorem \ref{T.1.12} to extend our results to solutions which are not necessarily between 0 and $u^+$. We next need to generalize Definitions \ref{D.1.1}--\ref{D.1.1a} to the case at hand. We will first consider solutions $0\le u \le u^+$ (henceforth denoted $u\in[0,u^+]$), when \eqref{1.24} is of no consequence. 
Definitions \ref{D.1.1} and \ref{D.1.1b} are unchanged for such $u$, but use (for $\varepsilon\in (0,\tfrac{1}2)$) \begin{align} \Omega_{u,\varepsilon}(t) & :=\{x\in{\mathbb{R}}^d\,|\, u(t,x) \ge \varepsilon \}, \label{1.20a} \\ \Omega_{u,1-\varepsilon}(t) & :=\{x\in{\mathbb{R}}^d\,|\, u(t,x) \ge u^+(x)-\varepsilon \}. \label{1.20b} \end{align} Definition \ref{D.1.1a}, on the other hand, needs to be changed because $f(x,u^+(x))\not\ge 0$ in general. The motivation for this new form comes from the proofs of Theorems \ref{T.1.2}(i) and \ref{T.1.3}, specifically from the use of Lemma \ref{L.2.2} below in the proof of the $d=3$ case of Lemma \ref{L.3.2}. \begin{definition} \label{D.1.10} Let $f_0,K,\theta$ be as in (H') and $\zeta, \eta>0$. If $f$ satisfies (H'), define $\alpha_f(x;\zeta)$ as in \eqref{1.4a}. Finally, let $F'(f_0,K,\theta,\zeta,\eta)$ be the set of all $(f,u^+)$ satisfying (H') such that $\alpha_f(x;\zeta)\ge \eta$ for all $x\in{\mathbb{R}}^d$ and any equilibrium solution $p$ of \eqref{1.1} with $0<p<u^+$ satisfies \begin{equation} \label{1.21} \sup_{x_0\in{\mathbb{R}}^d} \sum_{n\ge 1} \frac1{1+d(x_0,{\mathcal C}_n)} \le \frac 1\eta. \end{equation} Here $d(\cdot,\cdot)$ is the distance in ${\mathbb{R}}^d$ and ${\mathcal C}_1,{\mathcal C}_2,\dots$ are all (distinct) unit cubes in ${\mathbb{R}}^d$, whose corners have integer coordinates, such that $p(x)> \alpha_f(x;\zeta)$ for some $x\in{\mathcal C}_n$. \end{definition} {\it Remarks.} 1. The advantage of \eqref{1.4b}, relative to \eqref{1.21}, is that the former is a local condition while the latter is not. Thus \eqref{1.21} is more difficult to check. An obvious sufficient condition is when $p(\cdot)\le\alpha_f(\cdot ;\zeta)$ for each equilibrium $0<p<u^+$ (with $\zeta<c_0^2/4$, so that our results apply), which may be proved under some {\it quantitative} local hypotheses on $f$. 
A simple such example is when $d=1=\theta_1$ and $f$ is sufficiently close to a homogeneous reaction $f_0$ as in (H') with $\int_0^\beta f_0(u) du>0$, where $\beta\in(\theta_0,1)$ is the smallest number such that $f_0(\beta)=c_0^2\beta/4$. \smallskip 2. Lemma \ref{L.2.2} shows that in the setting of (H), \eqref{1.4b} implies \eqref{1.21} when $d\le 3$ (but not when $d\ge 4$), although with a different $\eta>0$. \smallskip 3. \eqref{1.21} will cause typical solutions between 0 and $u^+$ to transition to $u^+$ (instead of to some other equilibrium $p<u^+$), and also to have a bounded width. The latter need not be true without a condition like \eqref{1.21}, as is demonstrated by the example in the proof of Theorem \ref{T.1.2}(ii), for which the sum in \eqref{1.21} diverges, albeit slowly (as $\log n$). \smallskip Note that unlike $F(f_0,K,\theta,\zeta,\eta)$, the set $F'(f_0,K,\theta,\zeta,\eta)$ may be neither spatially translation invariant (although it would be if the ${\mathcal C}_n$ were integer translations of any fixed unit cube ${\mathcal C}$, and the sup in \eqref{1.21} were also taken over all such ${\mathcal C}$) nor closed with respect to locally uniform convergence (i.e., locally uniform convergence for $q(x,u):=(f(x,u),u^+(x))$ on ${\mathbb{R}}^d\times{\mathbb{R}}$). Since these properties will be essential in our analysis, in the following generalization of Theorems \ref{T.1.2}(i) and \ref{T.1.3} we will work with subsets ${\mathcal F}\subseteq F'(f_0,K,0,\zeta,\eta)$ which possess them both (an example is the closure of all translations of a given $(f,u^+)$ with respect to locally uniform convergence). We will denote ${\mathcal F}_\theta:={\mathcal F}\cap F'(f_0,K,\theta,\zeta,\eta)$ for $\theta\ge0$, which then also has the same properties. 
\begin{theorem} \label{T.1.11} Let $f_0,K,$ and $\theta>0$ be as in (H') and let $\eta>0$, $\zeta\in(0,c_0^2/4)$, and ${\mathcal F} \subseteq F'(f_0,K,0,\zeta,\eta)$ be spatially translation invariant and closed with respect to locally uniform convergence. Let $(f,u^+)\in{\mathcal F}_\theta$ and let $u$ solve \eqref{1.1}, \eqref{1.2} with $u_0\in[0,u^+]$. (i) If $d\ge 1$ and $u_0$ satisfies \eqref{1.5} or \eqref{1.6}, then (C) holds with $1-\varepsilon$ replaced by $u^+(x)-\varepsilon$ in \eqref{1.7}, with $\ell_\varepsilon,m_\varepsilon$ depending only on $\varepsilon,{\mathcal F}$, and $\tau_{\varepsilon,\delta}$ in Definition \ref{D.1.1b} also depending on $\delta,f_1$. (ii) If $d\ge 1$, $u_0$ satisfies \eqref{3.6a}, and $L_{u,\varepsilon,1-\varepsilon_0}(t_0)<\infty$ for $\varepsilon_0>0$ from Lemma \ref{L.11.1} and each $\varepsilon>0$, then (C) holds for $u$ and for $v$ as in Theorem \ref{T.1.3}(ii), with $1-\varepsilon$ replaced by $u^+(x)-\varepsilon$ in \eqref{1.7}, with $\ell_\varepsilon,m_\varepsilon$ depending only on $\varepsilon,{\mathcal F}$, and $\tau_{\varepsilon,\delta}$ in Definition \ref{D.1.1b} also depending on $\delta,f_1$. \end{theorem} {\it Remarks.} 1. Here $T_\varepsilon$ in (C) and $T_{\varepsilon,\delta}$ in Definition \ref{D.1.1b} depend on the same parameters as in Theorems \ref{T.1.2} (in (i)) and \ref{T.1.3} (in (ii)), but with $f_0,K,\zeta,\eta$ replaced by ${\mathcal F}$. This is also the case in Theorem~\ref{T.1.12} below, but there $T_\varepsilon$ and $T_{\varepsilon,\delta}$ depend also on $\|u_0\|_\infty$. \smallskip 2. These results again extend to the case $\theta=0$ in the slightly weaker form from Remark 4 after Theorem \ref{T.1.2} and Remarks 2,3 after Theorem \ref{T.1.3}. \smallskip Next, we consider extensions of our results to solutions that are not necessarily between the equilibria which they connect. 
We first need to extend Definitions \ref{D.1.1} and \ref{D.1.1b} in a physically relevant manner to such solutions (we will do so for general $u^\pm$). Namely, we will consider $u$ to be $\varepsilon$-close to $u^\pm$ at $(t,x)$ if $|u(t,y)-u^\pm(y)|<\varepsilon$ for all $y$ in a ball centered at $x$, whose size grows to $\infty$ as $\varepsilon\to 0$. It will therefore be useful to define for $A\subseteq{\mathbb{R}}^d$, \[ \text{$r$-int}\, A:=\{x\in A \,|\, B_r(x)\subseteq A \}. \] \begin{definition} \label{D.1.4} Let $u^\pm$ be equilibrium solutions of \eqref{1.1} with bounded Lipschitz $f$, satisfying \eqref{1.19}. For a solution $u$ of \eqref{1.1} on $(t_0,\infty)\times{\mathbb{R}}^d$, define (for $\varepsilon\in(0,\tfrac12)$) \[ \Omega_{u,\varepsilon}(t):=\left\{x\in{\mathbb{R}}^d\,\big|\, |u(t,x)-u^-(x)|\ge \varepsilon \right\}, \] \[ \Omega_{u,1-\varepsilon}(t):= \left\{x\in{\mathbb{R}}^d\,\big|\, |u(t,x)-u^+(x)|\le \varepsilon \right\}, \] \begin{equation} \label{1.3b} L_{u,\varepsilon}(t) := \inf \left\{L>0 \,\big|\, \Omega_{u,\varepsilon}(t)\subseteq B_L \left(\text{$\tfrac 1\varepsilon$-int}\, \Omega_{u,1-\varepsilon}(t) \right) \right\} \end{equation} \[ L_{u,1-\varepsilon}(t) := \inf \left\{L>0 \,\big|\, {\mathbb{R}}^d\setminus \Omega_{u,1-\varepsilon}(t) \subseteq B_L \left( \text{$\tfrac 1{\varepsilon}$-int}\,\left[ {\mathbb{R}}^d\setminus \Omega_{u,\varepsilon}(t) \right] \right) \right\}, \] \[ J_{u,\varepsilon}(t) := \inf \left\{L>0 \,\big|\, {\mathbb{R}}^d= B_L \left(\text{$\tfrac 1\varepsilon$-int}\, \Omega_{u,1-\varepsilon}(t) \cup \text{$\tfrac 1{\varepsilon}$-int}\,\left[ {\mathbb{R}}^d\setminus \Omega_{u,\varepsilon}(t) \right] \right) \right\}. 
\] We say that $u$ has a {\it bounded width (with respect to $u^\pm$)} if \eqref{1.4} holds for any $\varepsilon\in(0,\tfrac 12)$, a {\it doubly-bounded width} if \eqref{1.4} holds for any $\varepsilon\in(0,\tfrac 12)\cup (\tfrac 12,1)$, and a {\it semi-bounded width} if \eqref{1.4f} holds for any $\varepsilon\in(0,\tfrac 12)$. Definition \ref{D.1.1b} remains the same but with these new $\Omega_{u,\varepsilon}(t)$. \end{definition} Parabolic regularity and strong maximum principle show that if $u^-\le u\le u^+$, then this new definition of bounded/doubly-bounded/semi-bounded width is equivalent to the one using \eqref{1.3}--\eqref{1.3aa} and these new $\Omega_{u,\varepsilon}(t)$ (which are those from \eqref{1.20a}, \eqref{1.20b} if also $u^-=0$). In fact, while the new $L_{u,\varepsilon}(t)$ is larger than the original one for such $u$, it is finite for all $\varepsilon\in(0,\tfrac 12)$ resp.~all $\varepsilon\in(0,\tfrac 12)\cup(\tfrac 12,1)$ as long as the same is true for the original $L_{u,\varepsilon}(t)$. Finally, let us extend the definition of {\it spark-like} and {\it front-like} initial data as follows. We will assume that either there are $x_0\in{\mathbb{R}}^d$, $R_2\ge R_1>0$, and $\varepsilon_1,\varepsilon_2>0$ such that \begin{equation} \label{1.22} (\theta_0+\varepsilon_1) \chi_{B_{R_1}(x_0)}(x) - e^{-\varepsilon_2(|x-x_0|-R_2)} \chi_{{\mathbb{R}}^d\setminus B_{R_1}(x_0)}(x)\le u_0(x)\le e^{-\varepsilon_2(|x-x_0|-R_2)} \end{equation} (with $R_1$ sufficiently large, depending on $\varepsilon_1,\varepsilon_2,R_2-R_1$, to guarantee spreading), or there are $e\in{\mathbb{S}}^{d-1}$, $R_2\ge R_1$, and $\varepsilon_1,\varepsilon_2>0$ such that \begin{equation} \label{1.23} (\theta_0+\varepsilon_1) \chi_{\{x\,|\,x\cdot e<R_1\}}(x) -e^{-\varepsilon_2(x\cdot e-R_2)} \chi_{\{x\,|\,x\cdot e\ge R_1\}}(x) \le u_0(x)\le e^{-\varepsilon_2(x\cdot e-R_2)}. 
\end{equation} \begin{theorem} \label{T.1.12} Consider the setting of Theorem \ref{T.1.11} but with $u_0$ only bounded. (i) Theorem \ref{T.1.11}(i) holds with \eqref{1.5}/\eqref{1.6} replaced by \eqref{1.22}/\eqref{1.23}, provided that in the case of \eqref{1.23}, ``$\le$'' is replaced by ``$<$'' in \eqref{1.24} for all $(f,u^+)\in{\mathcal F}$. (ii) Theorem \ref{T.1.11}(ii) holds, provided that ``$\le$'' and ``$\ge$'' are replaced by ``$<$'' and ``$>$'' in \eqref{1.24} for all $(f,u^+)\in{\mathcal F}$. \end{theorem} {\it Remarks.} 1. The extra condition in (i) guarantees $\limsup_{t\to \infty} \sup_{x\in{\mathbb{R}}^d} [u(t,x)-u^+(x)]\le 0$ for any bounded $u_0$, uniformly in ${\mathcal F}$. This as well as (i) also hold for $u_0$ from \eqref{1.23} if instead we assume $\limsup_{x\cdot e\to -\infty} [u_0(x)-u^+(x)]\le 0$, but then $T_\varepsilon,T_{\varepsilon,\delta}$ in (i) depend on $u_0$ also via the rate of this decay (cf. Remark 1 after Theorem \ref{T.1.11}). \smallskip 2. The extra condition in (ii) guarantees $\limsup_{t\to \infty} \sup_{x\in{\mathbb{R}}^d} [u(t,x)-u^+(x)]\le 0$ and $\liminf_{t\to \infty} \inf_{x\in{\mathbb{R}}^d} u(t,x)\ge 0$ for any bounded $u_0$, uniformly in ${\mathcal F}$. \smallskip 3. Theorems \ref{T.1.2}(i) and \ref{T.1.3} extend similarly to solutions $u$ not necessarily in $[0,1]$. \smallskip \bigskip \noindent {\bf Entire Solutions with Bounded Widths} \smallskip \smallskip Finally, let us turn to the discussion of the above-mentioned entire solutions of \eqref{1.1}. \begin{definition} \label{D.1.4a} Let $u^\pm$ be equilibrium solutions of \eqref{1.1} satisfying \eqref{1.19}. A {\it transition solution (connecting $u^-$ to $u^+$)} for \eqref{1.1} is a bounded entire solution $u$ of \eqref{1.1} which satisfies \begin{equation} \label{1.8} \lim_{t\to\pm\infty} u(t,x) = u^\pm(x) \end{equation} locally uniformly on ${\mathbb{R}}^d$. \end{definition} As above, we will assume $u^-\equiv 0$ without loss in the following. 
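A simple example of a transition solution, included here only for illustration, is furnished by homogeneous reactions: if $f(x,u)=f_0(u)$ with $f_0$ as in (H') and $u^+\equiv\theta_1$, then any traveling front \[ u(t,x)=U(x\cdot e-c_0t), \qquad U(-\infty)=\theta_1, \quad U(\infty)=0, \] with $e\in{\mathbb{S}}^{d-1}$ and $c_0>0$ the front speed for $f_0$, satisfies \eqref{1.8} locally uniformly, since for each fixed $x$ we have $x\cdot e-c_0t\to\mp\infty$ as $t\to\pm\infty$.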
\begin{theorem} \label{T.1.5} Let $u^-\equiv 0$ and $u^+$ satisfy \eqref{1.19} and be equilibrium solutions of \eqref{1.1} with some Lipschitz $f$, satisfying \eqref{1.24} (but not necessarily (H')). Let $u\not\equiv 0,u^+$ be a bounded entire solution of \eqref{1.1} which has a bounded width with respect to $0,u^+$. (i) We have $0<u<u^+$. (ii) If $u$ propagates with a positive global mean speed, then $u$ is a transition solution. If, in addition, there is $\theta>0$ such that $f$ is non-increasing in $u$ on $[0,\theta]$ and on $[u^+(x)-\theta,u^+(x)]$ for each $x\in{\mathbb{R}}^d$, then $u_t>0$. (iii) Assume $f_0,K,$ and $\theta>0$ are as in (H') and $\eta>0$, $\zeta\in(0,c_0^2/4)$, ${\mathcal F} \subseteq F'(f_0,K,0,\zeta,\eta)$ is spatially translation invariant and closed with respect to locally uniform convergence. If $(f,u^+)\in{\mathcal F}_\theta$, then (C) holds for $u$, with $t_0+T_\varepsilon$ replaced by $-\infty$ and $1-\varepsilon$ by $u^+(x)-\varepsilon$ in \eqref{1.7}, with $\ell_\varepsilon,m_\varepsilon$ depending only on $\varepsilon,{\mathcal F}$, and $\tau_{\varepsilon,\delta}$ in Definition \ref{D.1.1b} also depending on $\delta,f_1$. \end{theorem} {\it Remarks.} 1. (i,ii) were proved in \cite[Theorem 1.11]{BH3}, in a more general setting and for a smaller class of entire solutions called {\it invasions}. The latter have doubly-bounded widths and their reaction zones satisfy an additional geometric requirement (see the discussion below). Our proof proceeds along similar lines, using a version of the sliding method. \smallskip 2. (ii) will play a crucial role in the proofs of Theorems \ref{T.1.2}(i), \ref{T.1.3}, \ref{T.1.11}, and \ref{T.1.12}. \smallskip 3. Notice that as long as $u$ has a bounded width in (iii), we actually have the $u$-independent bound $\sup_{t\in{\mathbb{R}}} L_{u,\varepsilon}(t)\le \ell_\varepsilon$. \smallskip The hypothesis \eqref{1.24} is necessary in Theorem \ref{T.1.5}, even for homogeneous $f$ and $d=1$. 
It is well known that, for instance, if $0\le f(u)\le f'(0)u$ for $u\in[0,1]$ (i.e., $f$ is a {\it KPP reaction} with $f'(0)>0$) and $f(u)=f'(0)u$ for $u<0$, then for any $c\in(0,2\sqrt{f'(0)})$ there is a traveling front solution $u(t,x)=U_c(x-ct)$ of \eqref{1.1} on ${\mathbb{R}}\times{\mathbb{R}}$ with $\lim_{s\to-\infty}U_c(s)=1$, $\lim_{s\to\infty}U_c(s)=0$, and $\inf_{s\in{\mathbb{R}}} U_c(s)<0$. This solution satisfies neither (i) nor (ii). Counter-examples with ignition $f$ also exist. \smallskip Theorem \ref{T.1.5} suggests a couple of interesting questions. \smallskip {\it Open problems.} 1. Does $u_t>0$ hold in Theorem \ref{T.1.5}(ii) when $\theta=0$? \smallskip 2. Does Theorem \ref{T.1.5}(ii) and/or Theorem \ref{T.1.5}(iii) hold if we drop the hypotheses of bounded width and positive global mean speed and instead only assume that $u\in[0,u^+]$ is a transition solution? Of course, bounded width and positive global mean speed would then follow from the claim of Theorem \ref{T.1.5}(iii). \smallskip A natural question is whether solutions considered in Theorem \ref{T.1.5} must always exist. The following result answers this in the affirmative under the hypotheses of Theorem \ref{T.1.11}, even when $\theta=0$ in (H'). It also shows that transition solutions with {\it doubly-bounded width} need not exist for $d\ge 2$ even for ignition reactions, as was discussed in the introduction. \begin{theorem} \label{T.1.6} (i) If $(f,u^+)\in{\mathcal F}$, with ${\mathcal F}$ as in Theorem \ref{T.1.11} (so $\theta=0$), then there exists a transition solution $u\in(0,u^+)$ for \eqref{1.1} with $u_t>0$ and a bounded width. (ii) If $d\ge 2$, then there exists $f$ as in Theorem \ref{T.1.2} such that any bounded entire solution $u\not\equiv 0,1$ for \eqref{1.1} with bounded width is a transition solution $u\in(0,1)$ satisfying $u_t>0$ and $\lim_{t\to\infty} \inf_{x\in{\mathbb{R}}^d} u(t,x)=1$. 
In particular, there exists no transition solution with a doubly-bounded width for \eqref{1.1} (and hence also no transition front --- see the discussion below). \end{theorem} {\it Remarks.} 1. The hypothesis $\zeta<c_0^2/4$ is at least qualitatively necessary in (i), as counterexamples with $\zeta >c_0^2/2$ exist even for $d=1$ \cite{NRRZ}. \smallskip 2. Note that for $d=1$, transition fronts always exist under the hypotheses in (ii) \cite{ZlaGenfronts}. The first example of non-existence of fronts was given in \cite{NRRZ} for {\it KPP reactions} (and $d=1$). It is based on the construction of $f$ for which the equilibrium $u\equiv 0$ is {\it strongly unstable} in some region of space, so that arbitrarily small amounts of heat diffusing far ahead of the reaction zone quickly ignite on their own inside this region. (ii) is the first non-existence result for ignition reactions (so it does not rely on this strong instability property of KPP reactions). \smallskip Before proving the above results, let us note that while the concepts of bounded and semi-bounded width of solutions to \eqref{1.1} are new for $d\ge 2$, the concept of doubly-bounded width is closely related to the Berestycki-Hamel definition of transition fronts from \cite{BH2,BH3}, which motivated this work. The latter definition is more geometric in nature and its scope is slightly different from ours. It involves entire solutions rather than solutions of the Cauchy problem, and is also stated for wider classes of PDEs and spatial domains, and vector-valued solutions with possibly time-dependent coefficients and $u^\pm$. This is beyond the scope of the present paper (although the corresponding generalizations are rather straightforward), so we will only discuss the case at hand: \eqref{1.1} on ${\mathbb{R}}\times{\mathbb{R}}^d$ with bounded Lipschitz $f$ and time-independent $u^\pm$ satisfying \eqref{1.19}. 
In this setting, the definition in \cite{BH3} says that a {\it transition front} connecting $u^-$ and $u^+$ is an entire solution $u$ such that for each $t\in{\mathbb{R}}$ there are open non-empty sets $\Omega_t^\pm\subseteq{\mathbb{R}}^d$ satisfying \begin{equation} \label{1.11} \Omega_t^-\cap\Omega_t^+=\emptyset, \quad \partial\Omega_t^-=\partial\Omega_t^+=:\Gamma_t, \quad \Omega_t^-\cup\Gamma_t\cup\Omega_t^+={\mathbb{R}}^d, \end{equation} \begin{equation} \label{1.13} \sup \left\{ d(y,\Gamma_t)\,\big|\, y\in \Omega_t^\pm\cap \partial B_r(x) \right\} \to \infty \text{ as $r\to\infty$, uniformly in $t\in{\mathbb{R}} $ and $x\in\Gamma_t$,} \end{equation} \begin{equation} \label{1.14} u(t,x)-u^\pm(x) \to 0 \text{ as $d(x,\Gamma_t)\to\infty$ and $x\in\Omega_t^\pm$, uniformly in $t\in{\mathbb{R}}$,} \end{equation} and there is $n\ge 1$ such that for each $t\in{\mathbb{R}}$, \begin{equation} \label{1.15} \text{$\Gamma_t$ is a subset of $n$ (rotated in ${\mathbb{R}}^d$) graphs of functions from ${\mathbb{R}}^{d-1}$ to ${\mathbb{R}}$.} \end{equation} While we have to forgo geometric conditions, such as \eqref{1.15}, in our definitions (as was explained earlier), it is not difficult to see that \eqref{1.11}--\eqref{1.14} for an entire solution $u\not\equiv u^\pm$ are in fact {\it equivalent} to $u$ having a doubly-bounded width! Indeed, if $u\not\equiv u^\pm$ has a doubly-bounded width (in the sense of Definition \ref{D.1.4} if $u\notin[u^-,u^+]$), one only needs to take \begin{equation} \label{1.16} \Omega_t^+:= {\rm int}\left\{ x\in{\mathbb{R}}^d \,\Bigg|\, u(t,x)\ge \frac {u^+(x)+u^-(x)}2 \right\}, \end{equation} $\Gamma_t:=\partial\Omega_t^+$, and $\Omega_t^-:={\mathbb{R}}^d\setminus\bar\Omega_t^+$. (Of course, the sets $\Omega_t^\pm,\Gamma_t$ from \eqref{1.11} are not unique!) 
On the other hand, when \eqref{1.11}--\eqref{1.14} holds, it is easy to see that $\Gamma_t$ and the boundary of the set from \eqref{1.16} are within a (uniformly in $t$) bounded distance of each other. So transition fronts are precisely those entire solutions with doubly-bounded widths which also satisfy \eqref{1.15}. In particular, Theorems \ref{T.1.5} and \ref{T.1.6}(ii) apply to them, the latter also showing that transition fronts need not always exist in dimensions $d\ge 2$. We note that the condition \eqref{1.8} for our transition solutions also has a counterpart in \cite{BH3}. There an {\it invasion} of $u^-$ by $u^+$ is defined to be a transition front connecting $u^\pm$ for which \begin{equation} \label{1.17} \text{$\Omega_s^+\subseteq \Omega_t^+$ when $s\le t$} \qquad\text{and}\qquad \lim_{r\to\infty} \inf_{|t-s|=r} d(\Gamma_t,\Gamma_s)= \infty. \end{equation} This condition, together with \eqref{1.11}--\eqref{1.14}, implies \eqref{1.8} but is stronger than our definition of transition solutions with doubly-bounded widths. Nevertheless, if we relax \eqref{1.17} to the existence of $T$ such that \begin{equation} \label{1.18} \text{$\Omega_s^+\subseteq \Omega_t^+$ when $s+T\le t$} \qquad\text{and}\qquad \lim_{r\to\infty} \inf_{|t-s|=r} d(\Gamma_t,\Gamma_s)= \infty, \end{equation} then \eqref{1.11}--\eqref{1.14}, \eqref{1.18} are in fact {\it equivalent} to our definition of transition solutions with doubly-bounded widths which also propagate with a positive global mean speed. Indeed, notice that \eqref{1.18} implies that $\inf_{|t-s|=r} d(\Gamma_t,\Gamma_s)$ grows at least linearly as $r\to\infty$, so we can again use \eqref{1.16} to define $\Omega_t^+$. \vbox{ \bigskip \noindent {\bf Organization of the Paper and Acknowledgements} \smallskip \smallskip In Section \ref{S2} we prove some preliminary results. 
Section \ref{S3} is the heart of the argument proving bounded widths of solutions for $d\le 3$ (the proof of Lemma~\ref{L.3.2} is considerably more complicated for $d=3$, so it is postponed until Section \ref{S5}). Theorem \ref{T.1.3}(i) will then be obtained in the short Section \ref{S4}, and a more involved argument (along with Theorem \ref{T.1.5}(ii)) will be needed to prove Theorem~\ref{T.1.3}(ii) and Theorem \ref{T.1.2}(i) in Section \ref{S4a}. All these arguments are extended in Section \ref{S11} to obtain proofs of Theorems \ref{T.1.11} and \ref{T.1.12}, and in Section \ref{S8} we prove Theorem \ref{T.1.5} (the proof of its parts (i,ii) only uses Lemma \ref{L.8.1} below and, in particular, not the results proved in Section \ref{S4a}). In Section \ref{S6} we prove Theorem \ref{T.1.2}(ii) by means of a counter-example (the proof is also independent of the rest of the paper) and Theorem~\ref{T.1.6} is proved in Section \ref{S7}. } The author thanks \' Arp\' ad Baricz, Henri Berestycki, Fran\c cois Hamel, and Hiroshi Matano for helpful discussions and comments. He also acknowledges partial support by NSF grants DMS-1056327, DMS-1113017, and DMS-1159133. \section{Preliminaries (case $u^+\equiv 1$)} \label{S2} In Sections \ref{S2}--\ref{S5} we consider the setting of Theorems \ref{T.1.2} and \ref{T.1.3}, with $f_0,K,\theta$ as in (H), $u^+\equiv 1$, and $u\in[0,1]$. We will extend the results below to the setting of (H') in Section \ref{S11}. Let us start with some useful preliminary lemmas. \begin{lemma} \label{L.2.1} There is $\varepsilon_0=\varepsilon_0(f_0,K)> 0$ such that for each $c<c_0$ and $\varepsilon>0$ there is $\tau=\tau(f_0,K,c,\varepsilon)\ge 0$ such that the following holds. If $u\in[0,1]$ solves \eqref{1.1}, \eqref{1.2} with $f$ from (H), and $u(t_1,x)\ge 1-\varepsilon_0$ for some $(t_1,x)\in [t_0+1,\infty)\times{\mathbb{R}}^d$, then for each $t\ge t_1+\tau$, \begin{equation} \label{2.1} \inf_{|y-x|\le c(t-t_1)} u(t,y) \ge 1-\varepsilon. 
\end{equation} The same result holds if the hypothesis $u(t_1,x)\ge 1-\varepsilon_0$ is replaced by \begin{equation}\label{2.1a} u(t_1,\cdot) \ge \frac {1+\theta_0}2 \chi_{B_R(x)}(\cdot) \end{equation} for some $(t_1,x)\in [t_0,\infty)\times{\mathbb{R}}^d$ and a large enough $R=R(f_0)>0$. \end{lemma} \begin{proof} The second claim is proved in \cite{AW} when $f(y,\cdot)=f_0(\cdot)$ for all $y\in{\mathbb{R}}^d$ and follows for general $f$ by the comparison principle. The first claim holds because \eqref{2.1a} follows from $u(t_1,x)\ge 1-\varepsilon_0$, provided $\varepsilon_0>0$ is sufficiently small (depending on $f_0,K$). Indeed, assume that for each $n\in{\mathbb{N}}$ there were $f_n$ satisfying (H) and $u_n$ solving \eqref{1.1} on $(-1,\infty)\times{\mathbb{R}}^d$ with $f=f_n$, such that $u_n(0,0)\ge 1-\tfrac 1n$ and $\inf_{y\in B_R(0)} u_n(0,y)<\tfrac 12(1+\theta_0)$ (note that we can shift $(t_1,x)$ to $(0,0)$ without loss, and then $t_0\le -1$). By parabolic regularity, there is a subsequence $\{{n_j}\}_{j\ge 1}$ with $u_{n_j}$ and $f_{n_j}$ locally uniformly converging to $u\in[0,1]$ and $f$ such that $f$ satisfies (H) and $u$ solves \eqref{1.1} on $(-1,\infty)\times{\mathbb{R}}^d$, with $u(0,0)=1$ and $\inf_{y\in B_R(0)} u(0,y)<1$. But this contradicts the strong maximum principle, and we are done. \end{proof} The first claim of this result immediately shows that solutions with bounded widths propagate with global mean speed in $[c_0,\infty]$. It turns out that bounded width also makes the global mean speed not exceed $c_1$, at least in the ignition case. This can be proved by a separate argument and we state both these results in the following lemma. \begin{lemma} \label{L.2.1a} Let $f_0,K$ be as in (H) and $f_1$ be pure ignition. For each $\varepsilon\in(0,\tfrac 12)$ and $\delta>0$ there are $\varepsilon'>0$ and $\tau<\infty$ such that the following holds. 
If $u\in[0,1]$ solves \eqref{1.1} on $(t_0,\infty)\times{\mathbb{R}}^d$ with ignition $f$ from (H) satisfying \eqref{1.4e}, and $\sup_{t\in[t_0+1,t_3]}L_{u,\varepsilon'}(t)\le L$, then \[ B_{(c_0-\delta)(t_2-t_1)-L} \left(\Omega_{u,\varepsilon}(t_1) \right) \subseteq \Omega_{u,1-\varepsilon}(t_2) \qquad\text{and}\qquad \Omega_{u,\varepsilon}(t_2) \subseteq B_{(c_1+\delta)(t_2-t_1)+L} \left(\Omega_{u,1-\varepsilon}(t_1) \right) \] whenever $t_1\ge t_0+1$ and $t_2\in[t_1+\tau,t_3]$. \end{lemma} \begin{proof} The first inclusion is immediate for any $\varepsilon'\in(0,\min\{\varepsilon,\varepsilon_0\}]$, with $\tau$ from Lemma \ref{L.2.1} with $\varepsilon$ and $c:=c_0-\delta$. Indeed, if $x\in\Omega_{u,\varepsilon}(t_1)$, then $\bar B_L(x)\cap \Omega_{u,1-\varepsilon_0}(t_1)\neq \emptyset$, so Lemma \ref{L.2.1} yields the result (even for monostable $f$). Let us now consider the second inclusion. Extend $f_1$ by 0 to ${\mathbb{R}}\setminus[0,1]$. It is well known that for any $\delta>0$ there is $\varepsilon'\in(0,\tfrac \varepsilon 2)$ and a traveling front for some $f_2\ge f_1(\ge 0)$ with $f_2\equiv 0$ on $[0,2\varepsilon']\cup\{1+\varepsilon'\}$, which has speed $c_2\in[c_1, c_1+\tfrac \delta 3]$ and connects $\varepsilon'$ and $1+\varepsilon'$. That is, there is a solution of $U''+c_2U'+f_2(U)=0$ on ${\mathbb{R}}$ with $U'<0$, $U(-\infty)=1+\varepsilon'$ and $U(\infty)=\varepsilon'$ (and we can also assume $U(0)=2\varepsilon'$ after translation). Indeed, one only needs to take $\varepsilon'$ small enough and $f_2$ close enough to $f_1$. Let $z_1:=\tfrac{6d}\delta$, $z_2:=\tfrac{6d+7}\delta$ and let $h:[0,\infty)\to[0,\infty)$ be any $C^2$ function with $h\equiv 0$ on $[0,z_1]$, $h'\equiv 1$ on $[z_2,\infty)$, and $h'\le 1$ and $h''\in[0,\tfrac \delta 6]$ on $[z_1,z_2]$. 
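Such a function $h$ clearly exists; one admissible choice (after smoothing $h''$ near $z_1$ and near $z_1+\tfrac 6\delta$, which preserves all the listed properties) is \[ h(z):=\int_{z_1}^{\max\{z,z_1\}} \min\left\{ \tfrac \delta 6 (s-z_1),\,1 \right\} ds, \] for which $h\equiv 0$ on $[0,z_1]$, $h''=\tfrac\delta 6$ on $(z_1,z_1+\tfrac 6\delta)$, and $h'\equiv 1$ on $[z_1+\tfrac 6\delta,\infty)$; note that $z_1+\tfrac 6\delta\le z_2$ because $z_2-z_1=\tfrac 7\delta$.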
We now claim that \begin{equation}\label{1.3hh} v(t,x):=U \left( z_2-h(|x|)- \left(c_2+\frac\delta 3 \right)t \right) \end{equation} satisfies \begin{equation} \label{1.3jj} v_t\ge \Delta v+f_2(v) \qquad\text{on $(-\infty,0)\times{\mathbb{R}}^d$.} \end{equation} Indeed, for $|x|\le z_1$ the argument of $U$ is positive (so $f_2(U)=0$) and we have \[ v_t-\Delta v - f_2(v) = -\left(c_2+\frac\delta 3 \right)U' \ge 0. \] For $|x|\ge z_1$ we get \[ -v_t+\Delta v + f_2(v) = \left[ \left(c_2+\frac\delta 3 \right) -h''(|x|)-\frac{d-1}{|x|}h'(|x|) \right] U'+ \left( h'(|x|) \right)^2 U'' +f_2(U) =(*). \] If $|x|\ge z_2$, then \[ (*)= \left[ \left(c_2+\frac\delta 3 \right) -\frac{d-1}{|x|} \right] U'+ U'' +f_2(U) = \left[ \frac\delta 3 -\frac{d-1}{|x|} \right] U' \le 0. \] If $|x|\in[z_1,z_2]$, then again the argument of $U$ is positive (so $f_2(U)=0$) and we have \begin{align*} (*) & = \left[ \left(c_2+\frac\delta 3 \right) -h''(|x|)-\frac{d-1}{|x|}h'(|x|) - c_2 \left( h'(|x|) \right)^2 \right] U' \\ & = \left[ c_2 \left( 1-\left( h'(|x|) \right)^2 \right) + \left( \frac\delta 6-h''(|x|) \right) + \left( \frac\delta 6 -\frac{d-1}{|x|}h'(|x|) \right) \right] U'. \end{align*} Each of the three terms in the last square bracket is non-negative, so again $(*)\le 0$ and \eqref{1.3jj} holds. We now let $\tau:=\tfrac 3\delta(2z_2-U^{-1}(1))$ and consider arbitrary $y\notin B_{(c_1+\delta)(t_2-t_1)+L} \left(\Omega_{u,1-\varepsilon}(t_1) \right)$. By the hypothesis and $\varepsilon>\varepsilon'$, we have $u(t,x)<\varepsilon'$ for all $x\in B_{(c_1+\delta)(t_2-t_1)}(y)$. 
The function \[ w(t,x):=v(t-t_2,x-y)\quad(\ge \varepsilon') \] is obviously a super-solution of \eqref{1.1} on $(t_1,t_2)\times{\mathbb{R}}^d$, and for $x\notin B_{(c_1+\delta)(t_2-t_1)}(y)$ we have \[ w(t_1,x) \ge U \left( 2z_2-|x-y|- \left(c_1+\frac{2\delta} 3 \right)(t_1-t_2) \right) \ge U \left( 2z_2-\frac{\delta} 3 (t_2-t_1) \right) \ge U \left( 2z_2-\frac{\delta} 3\tau \right)=1 \] by $U'<0$, $h(z)\ge z-z_2$, $c_2\le c_1+\tfrac\delta 3$, and $t_2-t_1\ge \tau$. Hence $w(t_1,\cdot)\ge u(t_1,\cdot)$ and so \[ u(t_2,y)\le w(t_2,y)=v(0,0)=U(z_2)<2\varepsilon'<\varepsilon. \] Thus $y\notin\Omega_{u,\varepsilon}(t_2)$ and we are done. \end{proof} During the proofs of our main results, we will sometimes need to pass to limits along subsequences of $\{(f_n,u_n)\}$, where $f_n$ satisfy (H) and $u_n\in[0,1]$ have uniform-in-$n$ bounds on their widths. The following will be useful. For $\varepsilon\in(0,\tfrac 12)$, $\ell>0$, and $t_0\in[-\infty,\infty)$, let $S_{t_0,\varepsilon,\ell}=S_{t_0,\varepsilon,\ell}(f_0,K,\theta)$ be the set of all pairs $(f,u)$ such that $f$ satisfies (H) with the given $f_0,K,\theta$ and $u\in[0,1]$ solves \eqref{1.1} on $(t_0,\infty)\times{\mathbb{R}}^d$ and satisfies $L_{u,\varepsilon'}(t)\le \ell$ for all $\varepsilon'\in(\varepsilon,\tfrac 12)$ and all $t> t_0$. For non-increasing and left-continuous $L:(0,\tfrac 12)\to(0,\infty)$, let \[ S_{t_0,L}=S_{t_0,L}(f_0,K,\theta):= \bigcap_{\varepsilon\in(0,1/2)} S_{t_0,\varepsilon,L(\varepsilon)} (f_0,K,\theta). \] (so $(f,u)\in S_{t_0,L}$ implies $L_{u,\varepsilon}(t)\le L(\varepsilon)$ for $\varepsilon\in(0,\tfrac 12)$ and $t>t_0$, by left-continuity of $L$) and \[ S_L=S_L(f_0,K,\theta):= \{ (f,u)\,|\, (f,u)\in S_{-\infty,L}(f_0,K,\theta) \text{ and } u\not\equiv 0,1 \}. \] Thus any entire solution $u\in[0,1]$ of \eqref{1.1} with bounded width, except $u\equiv 0,1$, appears in some $S_L$. Of course, strong maximum principle gives $u\in(0,1)$ if $(f,u)\in S_L$. 
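{\it Example.} As a simple illustration of these definitions (under the additional assumption, not needed in what follows, that a traveling front exists for $f$), suppose $f=f(u)$ satisfies (H) with, say, $f_0:=f$, and $u(t,x):=U(x_1-ct)$ is a traveling front solution of \eqref{1.1} with $U'<0$, $U(-\infty)=1$, and $U(\infty)=0$. Then $\Omega_{u,\varepsilon}(t)=\{x\,|\, x_1\le ct+U^{-1}(\varepsilon)\}$ for each $\varepsilon\in(0,1)$, so
\[
L_{u,\varepsilon}(t)= U^{-1}(\varepsilon)-U^{-1}(1-\varepsilon)
\]
for all $t\in{\mathbb{R}}$ and $\varepsilon\in(0,\tfrac 12)$. Hence $(f,u)\in S_L(f_0,K,\theta)$ with the continuous decreasing function $L(\varepsilon):= U^{-1}(\varepsilon)-U^{-1}(1-\varepsilon)$.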
\begin{lemma} \label{L.8.1} Fix $f_0,K,\theta$ and $L$ as above and let $t_0\in[-\infty,\infty)$. (i) If for $\varepsilon\in(0,\tfrac 12)$ and $\ell>0$ we have $(f_n,u_n)\in S_{t_n,\varepsilon,\ell}(f_0,K,\theta)$ and $\limsup_{n\to\infty}t_n\le t_0$, then there is $n_j\to\infty$ (as $j\to\infty$) and $(f,u)\in S_{t_0,\varepsilon,\ell}(f_0,K,\theta)$ such that $f_{n_j}\to f$ locally uniformly on ${\mathbb{R}}^{d}\times[0,1]$ and $u_{n_j}\to u$ locally uniformly on $(t_0,\infty)\times{\mathbb{R}}^d$. (ii) If for each $\varepsilon\in(0,\tfrac 12)$ we have $(f_n,u_n)\in S_{t_n(\varepsilon),\varepsilon,L(\varepsilon)}(f_0,K,\theta)$ and $\limsup_{n\to\infty}t_n(\varepsilon)\le t_0$, then there is $n_j\to\infty$ (as $j\to\infty$) and $(f,u)\in S_{t_0,L}(f_0,K,\theta)$ such that $f_{n_j}\to f$ locally uniformly on ${\mathbb{R}}^{d}\times[0,1]$ and $u_{n_j}\to u$ locally uniformly on $(t_0,\infty)\times{\mathbb{R}}^d$. (iii) If $\varepsilon\in(0,2\varepsilon_0]$ and $\ell>0$, then \begin{equation} \label{1.7b} \inf \left\{ u_t(t,x) \,\Big|\, (f,u)\in S_{0,\varepsilon/2,\ell}(f_0,K,0),\, u_t\ge 0 \text{ on $(0,\infty)\times{\mathbb{R}}^d$, } t\ge 1, u(t,x)\in[\varepsilon,1-\varepsilon] \right\} >0 \end{equation} \end{lemma} \begin{proof} (i) The properties of $f_n$, uniform boundedness of $u_n$, and standard parabolic regularity for $u_n$ prove existence of locally uniform limits $f,u$ along a subsequence $\{n_j\}_{j\ge 1}$, as well as that $f$ satisfies (H) (with the same $f_0,K,\theta$) and $u$ solves \eqref{1.1}. Locally uniform convergence $u_{n_j}\to u$ then yields $L_{u,\varepsilon'}(t)\le \ell$ for all $\varepsilon'\in(\varepsilon,\tfrac 12)$ and all $t> t_0$ (just pick any $\varepsilon''\in(\varepsilon,\varepsilon')$ and then a large enough $j$). Thus $(f,u)\in S_{t_0,\varepsilon,\ell}$. (ii) The proof is identical to (i). (iii) Assume that the $\inf$ in \eqref{1.7b} is 0. 
Then there are $(f_n,u_n)\in S_{0,\varepsilon/2,\ell}$ with $(u_n)_t\ge 0$ and $(t_n,x_n)\in[1,\infty)\times{\mathbb{R}}^d$ such that $u_n(t_n,x_n)\in[\varepsilon,1-\varepsilon]$ and $(u_n)_t(t_n,x_n)\in[0,\tfrac 1n]$. After shifting $(t_n,x_n)$ to $(1,0)$ and applying (i), we obtain $(f,u)\in S_{0,\varepsilon/2,\ell}$ with $u(1,0)\in[\varepsilon,1-\varepsilon]$ and $u_t\ge 0 =u_t(1,0)$. The strong maximum principle for the linear PDE $v_t=\Delta v+f_u(x,u(t,x))v$, satisfied by $u_t$, then implies $u_t\equiv 0$. This however contradicts Lemma \ref{L.2.1}, which yields $\lim_{t\to\infty} u(t,0)= 1\,(>u(1,0))$ because $\sup_{x\in B_\ell(0)} u(1,x)\ge 1-\tfrac\varepsilon 2 \,(\ge 1-\varepsilon_0)$. \end{proof} An important role in the proof of Theorems \ref{T.1.2} and \ref{T.1.3} will be played by equilibrium solutions of \eqref{1.1}. \begin{lemma} \label{L.2.2} Let $f\ge 0$ be Lipschitz and $v\in[0,1]$ satisfy \begin{equation} \label{2.2} \Delta v + f(x,v)=0 \end{equation} on ${\mathbb{R}}^d$. If $d\le 2$, then $v$ is constant and $f(x,v(x))\equiv 0$. If $d\ge 3$, then \begin{equation} \label{2.3} \int_{{\mathbb{R}}^d} |x|^{2-d} f(x,v(x)) dx\le (d-2)|\partial B_1(0)|. \end{equation} \end{lemma} \begin{proof} Integrating \eqref{2.2} over $B_r:=B_r(0)$ and using the divergence theorem yields \[ \int_{B_r} f(x,v(x)) dx = -\int_{\partial B_r} \nabla v(x) \cdot n(x) \,d\sigma_r(x) = -r^{d-1} \int_{\partial B_1} \tilde v_\rho(r,y) \,d\sigma_1(y) \] where $n$ is the unit outer normal and $\sigma_r$ the surface measure for $\partial B_r$, and $\tilde v(\rho,y)=v(\rho y)$ for $(\rho,y)\in(0,\infty)\times \partial B_1$. Multiplying by $r^{1-d}$ and integrating in $r\in[0,r_0]$ gives \[ \int_{B_{r_0}} [l(r_0)-l(|x|)] f(x,v(x)) dx = \int_{0}^{r_0} r^{1-d} \int_{B_r} f(x,v(x)) dx dr = \int_{\partial B_1} [\tilde v(0,y) - \tilde v(r_0,y)] \,d\sigma_1(y), \] where $l(r)=\ln r$ if $d=2$ and $l(r)=r^{2-d}/(2-d)$ otherwise. 
Taking $r_0\to \infty$ finally yields \[ \int_{{\mathbb{R}}^d} [l(\infty)-l(|x|)] f(x,v(x)) dx = |\partial B_1(0)| v(0) - \lim_{r\to \infty} r^{1-d}\int_{\partial B_r} v(x) d\sigma_r(x). \] Since $v\in[0,1]$, either $f(x,v(x))\equiv 0$ (and then $v$ is constant) or $d\ge 3$ and \eqref{2.3} holds. \end{proof} \begin{lemma} \label{L.2.3} For $\zeta>0$, let $\Psi(x)=\psi(|x|)$ be the radially symmetric solution of \begin{equation} \label{2.4} \Delta \Psi = \zeta \Psi \end{equation} on ${\mathbb{R}}^d$ with $\Psi(0)=1$. Then $\psi,\psi'>0$ on $(0,\infty)$ and \begin{equation} \label{2.5} \lim_{r\to\infty} \left( \sqrt\zeta r \right)^{(d-1)/2} e^{-\sqrt\zeta r} \psi^{(k)}(r)= \zeta^{k/2} l_{d} \end{equation} for some $l_{d}\in(0,\infty)$ and $k=0,1$. In particular, \begin{equation} \label{2.6} \lim_{r\to\infty} \psi'(r)\psi(r)^{-1}=\sqrt\zeta \end{equation} \end{lemma} {\it Remark.} We only need $k=0,1$ here but \eqref{2.5} holds for any $k\ge 0$. \begin{proof} Here $\psi$ is the unique solution of $\psi''+\tfrac{d-1}r\psi'=\zeta\psi$ on $(0,\infty)$, with $\psi(0)=1$ and $\psi'(0)=0$, which is obviously positive along with $\psi'$. If $d=1$, one easily checks that \begin{equation} \label{2.7} \psi(r)=\frac {e^{\sqrt\zeta r} + e^{-\sqrt\zeta r}} 2, \end{equation} so \eqref{2.5} holds with $l_1=\tfrac 12$. If $d\ge 2$, then $\phi(r):=r^{(d-2)/2} \psi(\zeta^{-1/2}r)$ satisfies \[ \phi''+\frac 1r\phi'- \left[1+ \frac{(d-2)^2}{4r^2} \right] \phi=0 \] on $(0,\infty)$, with $\lim_{r\to 0} r^{(2-d)/2} \phi(r)=1$ and $\lim_{r\to 0} \tfrac d{dr}[r^{(2-d)/2} \phi(r)]=0$. Thus by \cite[p.375]{AS}, $\phi=c_dI_{(d-2)/2}$ for $I_\nu$ ($\nu\in{\mathbb{C}}$) the modified Bessel function of the first kind and some $c_d>0$ (in fact, $c_d=2^{(d-2)/2}\Gamma(\tfrac d2)$). But now \eqref{2.5} follows from $\lim_{r\to\infty} \sqrt{r} e^{-r}I_\nu^{(k)}(r)=(2\pi)^{-1/2}$ for $k=0,1$ \cite[pp.~377 and 378]{AS}, with $l_d:=(2\pi)^{-1/2}c_d$. 
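We also note that, as a consistency check in the simplest case $d=1$, the limits in \eqref{2.5} can be read off directly from \eqref{2.7}: as $r\to\infty$ we have
\[
e^{-\sqrt\zeta r} \psi(r)=\frac {1 + e^{-2\sqrt\zeta r}} 2 \to \frac 12
\qquad\text{and}\qquad
e^{-\sqrt\zeta r} \psi'(r)=\sqrt\zeta\, \frac {1 - e^{-2\sqrt\zeta r}} 2 \to \frac {\sqrt\zeta} 2,
\]
in agreement with $l_1=\tfrac 12$ (recall that $(\sqrt\zeta r)^{(d-1)/2}=1$ when $d=1$).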
\end{proof} \section{Bounded Widths for Solutions $u\in[0,1]$ with $u_t\ge 0$ (case $u^+\equiv 1$)} \label{S3} Again we consider $f_0,K,\theta$ as in (H), $u^+\equiv 1$, $u\in[0,1]$, and also $\eta>0$ and $\zeta\in(0,c_0^2/4)$. All constants in this section will depend on $f_0,K,\zeta,\eta$ (but not on $\theta$, unless explicitly noted!). We define $\zeta':=\tfrac {c_0^2}8+\tfrac\zeta 2 \in(\zeta, c_0^2/4)$ and choose any \begin{equation}\label{3.00} h\in \left[0,\min\left\{ \theta\frac{c_0^2-4\zeta}{c_0^2+4\zeta},\frac\eta{4K} \right\} \right] \end{equation} (obviously $h=0$ when $\theta=0$). This yields $\zeta'(\theta-h)\ge\zeta\theta$, which guarantees $\zeta'(u-h)\ge \zeta u$ for all $u\ge\theta$. Hence, any $f\in F(f_0,K,\theta,\zeta,\eta)$ satisfies \begin{equation} \label{3.0b} f(x,u)\le \zeta'(u-h) \qquad\text{for $x\in{\mathbb{R}}^d$ and $u\in[h, \alpha_f(x)]$}. \end{equation} Here, and always, $\alpha_f(x)=\alpha_f(x;\zeta)$ (not $\alpha_f(x;\zeta')$). Let us also take $\varepsilon_0$ from Lemma \ref{L.2.1} and $\psi$ from Lemma \ref{L.2.3} corresponding to $\zeta'$. Below, $\psi^{-1}(\cdot)$ is the inverse function to $\psi(\cdot)$ on $[0,\infty)$ while $\psi(\cdot)^{-1}=1/\psi(\cdot)$. In the following we will assume that $f\in F(f_0,K,\theta,\zeta,\eta)\, (\subseteq F(f_0,K,0,\zeta,\eta))$ and $u\in[0,1]$ solves \eqref{1.1}, \eqref{1.2}. For any $(t,y)\in[t_0,\infty)\times{\mathbb{R}}^d$ we define \begin{align} Z_y(t):= & \inf_{u(t,x)\ge 1-\varepsilon_0} |x-y| \qquad(\in [0,\infty]), \label{3.1} \\ Y^{h}_y(t):= & \sup \left\{ \rho \,\big|\, u(t,\cdot)\le h+\psi(\rho)^{-1} \psi(|\cdot-y|) \right\} \qquad(\in [0,\infty]), \label{3.2} \end{align} and $\gamma^{h}_y(t):=\psi(Y^{h}_y(t))^{-1}$. 
That is, $Z_y(t)$ is the distance from $y$ to the nearest point with value of $u$ sufficiently close to 1, while $Y^{h}_y(t)$ is the distance from $y$ to the points where the best upper bound of the form $h+\gamma\psi(|\cdot-y|)$ on $u$ takes the value $1+h$ (both at time $t$), and $\gamma^{h}_y(t)$ is the $\gamma$ from that bound. The latter is clearly non-increasing in $h$, hence $Y^{h}_y(t)$ is non-decreasing in $h$. Note that \eqref{2.6} immediately shows \begin{equation} \label{3.2b} Y^{h}_y(t)\le Z_y(t) + M \end{equation} for some ($\theta,h$-independent) $M\ge 0$. Let us also fix any $c_Y,c_Z$ such that \begin{equation} \label{3.0} 2\sqrt{\zeta'} < c_Y < c_Z < c_0, \end{equation} for instance, $c_Y:=\tfrac 14 c_0 + \tfrac 32\sqrt{\zeta'}$ and $c_Z:=\tfrac 34 c_0 + \tfrac 12 \sqrt{\zeta'}$. Let $\tau_Z\ge 0$ correspond to $c=c_Z$ and $\varepsilon=\varepsilon_0$ in Lemma \ref{L.2.1} and let $r_Y\ge 0$ be such that \[ \frac{\psi'(r)}{\psi(r)}\ge \frac {4\zeta'} {c_Y +2\sqrt{\zeta'}} \qquad \left( > \frac {2\zeta'} {c_Y} \right) \] for $r\ge r_Y$ (which exists by \eqref{2.6}, with $\zeta'$ in place of $\zeta$, and $c_Y>2\sqrt{\zeta'}$). Finally, let \begin{equation} \label{3.0a} c_Y':= \frac{(K +\zeta')c_Y}{2\zeta' } \qquad \left(> \frac{(K +\zeta')}{\sqrt{\zeta'}}\ge 2\sqrt{K } \ge c_1 \right). \end{equation} The choice of $Y^{h}_y$ is motivated by the following result. \begin{lemma} \label{L.3.1} Let $(t_1,y)\in [t_0,\infty)\times{\mathbb{R}}^d$. (i) If $t\ge t_1$ is such that $Y^{h}_y(t_1)-c_Y'(t-t_1)\ge r_Y$, then \begin{equation} \label{3.3} Y^{h}_y(t)\ge Y^{h}_y(t_1) - c_Y'(t-t_1). \end{equation} (ii) If $t_2\ge t_1$ is such that $Y^{h}_y(t_1)-c_Y(t_2-t_1)\ge r_Y$ and $u(t,x)\le \alpha_f(x)$ on the set $A:=\{(t,x) \,|\, t\in [t_1,t_2] \text{ and }|x-y|\le Y^{h}_y(t_1)-c_Y(t-t_1)\}$, then \begin{equation} \label{3.4} Y^{h}_y(t)\ge Y^{h}_y(t_1) - c_Y(t-t_1) \end{equation} for any $t\in [t_1,t_2]$. 
(iii) If $t_1\ge t_0+1$ and $t\ge t_1+\tau_Z$, then \begin{equation} \label{3.5} Z_y(t)\le \left[ Z_y(t_1) - c_Z(t-t_1) \right]_+ . \end{equation} \end{lemma} {\it Remark.} The point here is that (ii) and (iii), together with $c_Y<c_Z$, will keep $Z_y(t)-Y^{h}_y(t)$ uniformly bounded above. This is done in Lemma \ref{L.3.2} below. It turns out, however, that the hypothesis of (ii) is too strong to make this idea directly applicable for $d\ge 3$. Lemma \ref{L.3.2} nevertheless still holds for $d=3$, albeit with a considerably more involved proof (see Section \ref{S5} below). For $d\ge 4$ the lemma is false in general. \begin{proof} (i) Since $w(t,x):=h+e^{(K +\zeta')(t-t_1) } \gamma^{h}_y(t_1)\psi(|x-y|)$ is a super-solution of \eqref{1.1} due to $f(x,u)\le K(u-\theta)\le K(u-h)$, the comparison principle gives \begin{equation} \label{3.6} \gamma^{h}_y(t)\le e^{(K +\zeta')(t-t_1) } \gamma^{h}_y(t_1) \end{equation} for any $t\ge t_1$. From this and \eqref{3.2} we obtain \[ \ln \psi(Y^{h}_y(t)) \ge \ln \psi(Y^{h}_y(t_1)) -(K +\zeta')(t-t_1). \] Since $\tfrac d{dr}[\ln\psi(r)]\ge 2\zeta'/c_Y$ for $r\ge r_Y$, it follows that \[ Y^{h}_y(t)\ge Y^{h}_y(t_1) - \frac{(K +\zeta')c_Y}{2\zeta'}(t-t_1) = Y^{h}_y(t_1)-c_Y'(t-t_1) \] for all $t\in[t_1,t_2]$, where $t_2\ge t_1$ is the first time such that $Y^{h}_y(t_2)= r_Y$. Thus $r_Y\ge Y^{h}_y(t_1)-c_Y'(t_2-t_1)$, so $t\le t_2$ due to $Y^{h}_y(t_1)-c_Y'(t-t_1)\ge r_Y$, and we are done. (ii) Let $\beta(t)$ be such that $w(t,x):=h+e^{\beta(t) } \gamma^{h}_y(t_1)\psi(|x-y|)$ equals $1+h$ when $t\in[t_1,t_2]$ and $|x-y|= Y^{h}_y(t_1)-c_Y(t-t_1)$. Then $\beta(t_1)=0$ and from $\tfrac d{dr}[\ln\psi(r)]\ge 2\zeta'/c_Y$ for $r\ge r_Y$ we obtain $\beta'(t)\ge 2\zeta'$ on $[t_1,t_2]$. Thus we have \[ w_t\ge \Delta w + \zeta' (w-h). \] From $w\ge h$, \eqref{3.0b}, and the hypothesis it follows that $w$ is a super-solution of \eqref{1.1} on $A$. 
Since $u(t,x)\le 1\le w(t,x)$ when $t\in[t_1,t_2]$ and $|x-y|\ge Y^{h}_y(t_1)-c_Y(t-t_1)$, we obtain $w\ge u$ for $t\in[t_1,t_2]$ because $u(t_1,\cdot)\le w(t_1,\cdot)$. Therefore $\gamma^{h}_y(t)\le e^{\beta(t) } \gamma^{h}_y(t_1)$ for $t\in[t_1,t_2]$ and the result follows. (iii) This is immediate from Lemma \ref{L.2.1}. \end{proof} The following crucial lemma, which requires $u_t\ge 0$, will enable us to prove the claim in the remark after Lemma \ref{L.3.1}. It essentially shows that $Y^{h}_y$ cannot decrease faster than at speed $c_Y$ ($<c_Z$) whenever $Z_y$ is much larger than $Y^{h}_y$. \begin{lemma} \label{L.3.2} Let $d\le 3$. There are ($\theta, h,f,u_0$-independent) $T_Y > 0$ and $\tau_Y\ge T_Y+1$ such that we have the following whenever \eqref{3.6a} holds on ${\mathbb{R}}^d$. If \begin{equation} \label{3.7} Z_y(t_1+\tau_Y)\ge Y^{h}_y(t_1) \end{equation} for some $(t_1,y)\in[t_0,\infty)\times {\mathbb{R}}^d$ and $Y^{h}_y(t_1)-c_YT_Y\ge r_Y$, then \begin{equation} \label{3.8} Y^{h}_y(t_1+T_Y)\ge Y^{h}_y(t_1)-c_YT_Y. \end{equation} \end{lemma} {\it Remarks.} 1. For $d\le 2$ we can take {\it any} $T_Y>0$. For $d=3$ any large enough $T_Y$ works. \smallskip 2. When $d\ge 4$, this result fails in general! The same is true for $d\ge 1$ if $f$ satisfies (H) but we do not require $f\in F(f_0,K,\theta,\zeta,\eta)$. \smallskip \begin{proof} We split the proof in two cases, $d\le 2$ and $d=3$, due to Lemma \ref{L.2.2}. {\bf Case $d\le 2$:} We first claim that there is $\tau\ge 1$ such that if a solution $u\in[0,1]$ of \eqref{1.1} on $(0,\infty)\times{\mathbb{R}}^d$ satisfies $u_t\ge 0$ and $u(0,0)>\alpha_f(0)$, then $u(\tau,0)>1-\varepsilon_0$. Assume that for each $\tau=1,2,\dots$ there is some couple $f_\tau\in F(f_0,K,0,\zeta,\eta)$ and $u_\tau$ contradicting this statement with $(f,u)=(f_\tau,u_\tau)$. 
Then parabolic regularity shows that there is a sequence $\tau_j\to\infty$ such that $f_{\tau_j}$ and $u_{\tau_j}$ converge locally uniformly on ${\mathbb{R}}^d\times[0,1]$ and on $(0,\infty)\times{\mathbb{R}}^d$ to some $f\in F(f_0,K,0,\zeta,\eta)$ and some solution $u\in[0,1]$ of \eqref{1.1} such that $u_t\ge 0$ and $\lim_{t\to\infty}u(t,0)\le 1-\varepsilon_0$. Moreover, $u_{\tau_j}(0,0)\ge \alpha_{f_{\tau_j}}(0)$ and $f_{\tau_j}\in F(f_0,K,0,\zeta,\eta)$ guarantee that $f(0,\cdot)\ge f_0(\cdot)+\eta\chi_{[u(0,0),\theta_0]}(\cdot)$. But then $v(x):=\lim_{t\to\infty} u(t,x)$ satisfies \eqref{2.2} on ${\mathbb{R}}^d$ (so it is constant by Lemma \ref{L.2.2}) with $f(0,v(0))>0$, a contradiction. We now pick any $T_Y>0$ and apply this claim with the point $(0,0)$ shifted to $(t_1+T_Y,x)$, for any $x\in B_{Y^{h}_y(t_1)}(y)$. If we let $\tau_Y:=T_Y+\tau$, it follows from \eqref{3.7} that $u(t_1+T_Y,x)\le \alpha_f(x)$, and thus $u(t,x)\le \alpha_f(x)$ for all $(t,x)\in[t_1,t_1+T_Y]\times B_{Y^{h}_y(t_1)}(y)$. Lemma \ref{L.3.1}(ii) now yields \eqref{3.8}. {\bf Case $d=3$:} This case is considerably more involved, due to the limitation in Lemma \ref{L.2.2}. We postpone its proof until Section \ref{S5} in order to not interrupt the flow of the presentation. \end{proof} Note that in the case $d\le 2$, this result holds even if \eqref{1.4b} is replaced by \[ \inf_{\substack{x\in{\mathbb{R}}^d \\ u\in[\alpha_f(x),\theta_0]}} \sup_{y\in B_R(x)} f(y,u) \ge \eta \] for some $R<\infty$, because we still obtain $f(x,v(x))>0$ for the constant function $v$ and some $|x|\le R$. Theorems \ref{T.1.2}(i) and \ref{T.1.3} also extend accordingly. The following result is at the heart of the proofs of our main results. \begin{theorem} \label{T.3.3} Let $d\le 3$, let $f_0,K$ be as in (H), and let $\eta>0$ and $\zeta\in(0,c_0^2/4)$. 
(i) There is $M>0$ such that if $\theta\ge 0$, $h$ satisfies \eqref{3.00}, $f\in F(f_0,K,\theta,\zeta,\eta)$, $u_0\in[0,1]$ satisfies \eqref{3.6a}, and $u$ solves \eqref{1.1}, \eqref{1.2} on $(t_0,\infty)\times{\mathbb{R}}^d$, then for any $(t,y)\in (t_0,\infty)\times {\mathbb{R}}^d$ we have \begin{equation} \label{3.9} Z_y(t)-Y^{h}_y(t) \le M + \left [Z_y(t_0)-Y^{h}_y(t_0) - \left(\frac{c_0}2 - \sqrt{\zeta'} \right) (t-t_0) \right]_+ . \end{equation} Moreover, for any $\varepsilon\in(h,\tfrac 12)$ there is ($\theta,h,f,u_0$-independent) $\tau_\varepsilon>0$, continuous and non-increasing in $\varepsilon>0$, such that \begin{equation} \label{3.9a} L_{u,\varepsilon}(t) \le M_{\varepsilon-h} + \left [ \sup_{y\in{\mathbb{R}}^d} \left( Z_y(t_0)-Y^{h}_y(t_0) \right) - \left(\frac{c_0}2 - \sqrt{\zeta'} \right) (t-t_0) \right]_+ \end{equation} for any $t\ge t_0+\tau_\varepsilon$, with $M_{\delta}:=M+ c_Y' \tau_\delta+ \psi^{-1}(\delta^{-1})$. (ii) If $\theta,h,M,M_{\delta},\tau_\varepsilon,f,u$ are from (i) and $v\in[0,1]$ satisfies \begin{equation}\label{3.9b} u(t-T,\cdot)-\frac \varepsilon 2 \le v(t,\cdot) \le u(t+T,\cdot) + \frac \varepsilon 2 \end{equation} for some $\varepsilon\in(2h,\tfrac 12)$, $T\ge 0$, and $t\ge t_0+T+\tau_{\varepsilon/2}$, then for such $t$, \begin{equation} \label{3.9c} L_{v,\varepsilon}(t) \le M_{\varepsilon/2-h} + 3c_Y'T+ \left [ \sup_{y\in{\mathbb{R}}^d} \left( Z_y(t_0)-Y^{h}_y(t_0) \right) - \left(\frac{c_0}2 - \sqrt{\zeta'} \right) (t-t_0) \right]_+ \end{equation} \end{theorem} {\it Remarks.} 1. Recall also the bound from below in \eqref{3.2b}. \smallskip 2. $\tfrac 12 c_0-\sqrt{\zeta'}$ can be replaced by any $c<c_0-2\sqrt{\zeta'}$, and then $M,M_{\delta}$ also depend on $c$. \smallskip 3. Obviously $M_{\delta}$ is continuous and decreasing in $\delta>0$. \smallskip \begin{proof} (i) Let us start with \eqref{3.9}. Assume, without loss, that $y=0$ and $t_0=0$, and denote $Y^{h}_0=Y$ and $Z_0=Z$. 
Recall that $c_Z=\tfrac 34 c_0+\tfrac 12 \sqrt{\zeta'}$ and $c_Y=\tfrac 14 c_0+\tfrac 32 \sqrt{\zeta'}$, so that $c_Z-c_Y= \tfrac 12 {c_0} - \sqrt{\zeta'}$, and then pick $c_Y',\tau_Z,r_Y,T_Y,\tau_Y$ as above (all these constants are independent of $\theta,h$). We can assume $Z(t)>0$ because otherwise the claim is obvious. It is also sufficient to prove the claim for $t$ such that $Y(t)\ge c_Y'(\tau_Y+\tau_Z) + r_Y$ because then the result follows for all $t> 0$ after increasing $M$ by $c_Y'(\tau_Y+\tau_Z)+r_Y$. This is because $Z$ (and also $Y$) is non-increasing due to \eqref{3.6a}. We also note that $Y$ is then continuous by Lemma \ref{L.3.1}(i), while $Z$ is right-continuous and lower-semi-continuous by continuity of $u$ on $(0,\infty)\times{\mathbb{R}}^d$. Finally, we can assume that $t>\tau_Y+\tau_Z$, because for $t\in(0,\tau_Y+\tau_Z]$ the estimate follows for any $M\ge (c_Y'+\tfrac 12 c_0-\sqrt{\zeta'})(\tau_Y+\tau_Z)$ due to Lemma \ref{L.3.1}(i), $Z$ and $Y$ being non-increasing, and the assumption $Y(t)\ge c_Y'(\tau_Y+\tau_Z) + r_Y$. We will now prove \eqref{3.9} assuming $t>\tau_Y+\tau_Z$, $Z(t)>0$ and $Y(t)\ge c_Y'(\tau_Y+\tau_Z) + r_Y$, with \[ M:=c_Z\tau_Y+ c_Y'(\tau_Y+\tau_Z). \] Let $t_2$ be the smallest number in $[0,t-\tau_Y]$ such that $Z(t_1+\tau_Y) \ge Y(t_1)$ for all $t_1\in(t_2,t-\tau_Y)$. Lower-semi-continuity of $Z(\cdot+\tau_Y)-Y(\cdot)$ now shows the following. If $t_2=0$, then $Z(\tau_Y) \ge Y(0)$; if $t_2\in(0,t-\tau_Y)$, then $Z(t_2+\tau_Y) = Y(t_2)$; and if $t_2=t-\tau_Y$, then $Z(t) \le Y(t-\tau_Y)$. If $t_2=0$, let $N:=\lfloor t/T_Y \rfloor$. Applying Lemma \ref{L.3.1}(i) once and then Lemma \ref{L.3.2} $N$ times, we obtain using $Y(t)- c_Y'T_Y \ge r_Y$ (recall that $\tau_Y>T_Y$), \[ Y(t)\ge Y(NT_Y) - c_Y'T_Y \ge Y(0) - Nc_YT_Y - c_Y'T_Y \ge Y(0) - c_Y t - c_Y'T_Y. 
\] On the other hand, Lemma \ref{L.3.1}(iii) and $t\ge \tau_Y+\tau_Z$ yield \[ Z(t) \le Z(\tau_Y) - c_Z(t-\tau_Y) \le Z(0) -c_Z t + c_Z \tau_Y \] (notice that $Z(\tau_Y)-c_Z(t-\tau_Y)>0$ because otherwise $Z(t)=0$ by Lemma~\ref{L.3.1}(iii)). Thus \[ Z(t)-Y(t) \le c_Z\tau_Y+ c_Y'T_Y + Z(0)-Y(0) - (c_Z-c_Y) t \le M + Z(0)-Y(0) - (c_Z-c_Y) t. \] If $t_2\in(0,t-\tau_Y-\tau_Z)$, then let $N:=\lfloor (t-t_2)/T_Y \rfloor$. An identical argument now yields \[ Y(t)\ge Y(t_2) - c_Y(t-t_2)-c_Y'T_Y \] and \[ Z(t) \le Z(t_2+\tau_Y) - c_Z(t-t_2-\tau_Y). \] Thus $Z(t_2+\tau_Y) = Y(t_2)$ yields \[ Z(t)-Y(t) \le c_Z\tau_Y+ c_Y'T_Y + Z(t_2+\tau_Y)-Y(t_2) - (c_Z-c_Y) (t-t_2) \le c_Z\tau_Y+ c_Y'T_Y \le M. \] If $t_2\in[t-\tau_Y-\tau_Z,t-\tau_Y]$, then $Z(t_2+\tau_Y) \le Y(t_2)$, so that \[ Z(t)-Y(t)\le Z(t_2+\tau_Y) -Y(t) \le Y(t_2) - Y(t) \le c_Y'(\tau_Y+\tau_Z) \le M \] by Lemma \ref{L.3.1}(i). The proof of \eqref{3.9} is finished. Let us now turn to \eqref{3.9a} and again assume $t_0=0$. Let $c:=\tfrac 12 c_0$, and for any $\varepsilon\in(h,\tfrac 12)$ let $\tau_\varepsilon:=\tau+1$, with $\tau$ from Lemma~\ref{L.2.1} (this can obviously be chosen continuous and non-increasing in $\varepsilon>0$). For $t\ge \tau_\varepsilon$, let $x\in\Omega_{u,\varepsilon}(t)$. Then $Y^{h}_x(t)\le \psi^{-1}((\varepsilon-h)^{-1})$, so \[ Y^{h}_x(t-\tau)\le \psi^{-1}((\varepsilon-h)^{-1})+c_Y'\tau \le \psi^{-1}((\varepsilon-h)^{-1})+c_Y'\tau_\varepsilon \] by Lemma \ref{L.3.1}(i), and \eqref{3.9} gives \[ Z_x(t-\tau) \le \psi^{-1}((\varepsilon-h)^{-1})+c_Y'\tau_\varepsilon + M + \left [\sup_{y\in{\mathbb{R}}^d} (Z_y(0)-Y^{h}_y(0)) - \left(\frac{c_0}2 - \sqrt{\zeta'} \right) (t-\tau) \right]_+ . 
\] Lemma \ref{L.2.1} with $t_1:=t-\tau$ now shows that there is $y$ with $u(t,y)\ge 1-\varepsilon$ and \[ |y-x| \le \psi^{-1}((\varepsilon-h)^{-1})+c_Y'\tau_\varepsilon + M + \left [\sup_{y\in{\mathbb{R}}^d} (Z_y(0)-Y^{h}_y(0)) - \left(\frac{c_0}2 - \sqrt{\zeta'} \right) (t-\tau) \right]_+ - \frac {c_0}2\tau. \] This yields \eqref{3.9a} for $t\ge\tau_\varepsilon$ (recall that $t_0=0$ and $\tau\ge 0$). (ii) Again assume $t_0=0$. An identical argument to the last one shows that if $t\ge T+\tau_{\varepsilon/2}$ and $x\in\Omega_{u,\varepsilon/2}(t+T)$ (the latter holds when $v(t,x)\ge \varepsilon$), then \[ Y^{h}_x(t-T-\tau)\le \psi^{-1} \left( \left( \frac\varepsilon 2-h \right)^{-1} \right)+c_Y'(2T+\tau_{\varepsilon/2}), \] and ultimately that there is $y$ with $u(t-T,y)\ge 1-\tfrac \varepsilon 2$ (which yields $v(t,y)\ge 1-\varepsilon$) and \[ |y-x| \le \psi^{-1} \left( \left( \frac\varepsilon 2-h \right)^{-1} \right)+c_Y'(2T+\tau_{\varepsilon/2}) + M + \left [\sup_{y\in{\mathbb{R}}^d} (Z_y(0)-Y^{h}_y(0)) - \left(\frac{c_0}2 - \sqrt{\zeta'} \right) (t-T-\tau) \right]_+ - \frac {c_0}2\tau. \] This proves \eqref{3.9c} because $\tfrac 12 c_0-\sqrt{\zeta'}\le c_Y'$. \end{proof} Before moving on to the proofs of our main results, we state an important corollary of Theorem \ref{T.3.3}. For a solution $u$ of \eqref{1.1} on $(t_0,\infty)\times{\mathbb{R}}^d$ and for $t\ge t_0$, we let \begin{equation}\label{3.9d} \Lambda^h_u(t) := \sup_{y\in{\mathbb{R}}^d} \left(Z_y(t)-Y^{h}_y(t) \right) \qquad(\le\infty) \end{equation} for $h$ from \eqref{3.00}. Then $\Lambda^h_u$ is non-increasing and right-continuous in $h$, by definition of $Y^h_y$. Notice that if $\Lambda^h_u$ is finite, then it controls $L_{u,\varepsilon}$ for any $\varepsilon\in(h,\tfrac 12)$. 
Indeed, the argument proving \eqref{3.9a} from \eqref{3.9} applies to any $u$ (even if $u_t\not\ge 0$) and, with the notation from Theorem \ref{T.3.3}, yields for $t\ge t_0+\tau_\varepsilon$, \begin{equation} \label{3.9e} L_{u,\varepsilon}(t) \le M_{\varepsilon-h} - M + \Lambda^h_{u}(t-\tau_\varepsilon+1). \end{equation} For ignition $f$ we also have the opposite direction. For any $(t,y)\in(t_0,\infty)\times{\mathbb{R}}^d$ and $h\in\left(0,\min\left\{ \theta(c_0^2-4\zeta)(c_0^2+4\zeta)^{-1},\frac\eta{4K},\varepsilon_0 \right\} \right]$, we have \[ \sup_{|x-y|<Y^{h}_y(t)} u(t,x)> h \] because otherwise we would have $u(t,\cdot)\le h+\gamma\psi(|\cdot-y|)$ for some $\gamma<\psi(Y^{h}_y(t))^{-1}$, contradicting the definition of $Y^{h}_y(t)$. But then $Z_y(t)\le Y^{h}_y(t)+L_{u,h}(t)$ by $h\le\varepsilon_0$, so \begin{equation} \label{3.9f} \Lambda^h_u(t)\le L_{u,h}(t). \end{equation} We now have the following result for entire solutions with $u_t\ge 0$. \begin{corollary} \label{C.3.3a} Let $d\le 3$, let $f_0,K,$ and $\theta\ge 0$ be as in (H), and let $\eta>0$, $\zeta\in(0,c_0^2/4)$, and $f\in F(f_0,K,\theta,\zeta,\eta)$. Assume that $u\in[0,1]$ solves \eqref{1.1} and satisfies $u_t\ge 0$ on ${\mathbb{R}}\times{\mathbb{R}}^d$. (i) If $h$ is as in \eqref{3.00} and $\limsup_{t\to-\infty} \Lambda^h_u(t)<\infty$, then in fact $\sup_{t\in{\mathbb{R}}} \Lambda^h_u (t) \le M$ and $\sup_{t\in{\mathbb{R}}} L_{u,\varepsilon}(t) \le M_{\varepsilon-h}$ for any $\varepsilon\in(h,\tfrac 12)$ (here $M,M_{\delta}$ are from Theorem \ref{T.3.3}). We also have \begin{equation} \label{3.9g} \inf_{ u(t,x)\in[\varepsilon,1-\varepsilon]} u_t(t,x) \ge \mu_{\varepsilon,M_{\varepsilon/2-h}} \end{equation} for any $\varepsilon\in(2h,2\varepsilon_0]$, where $\mu_{\varepsilon,\ell}>0$ is the $\inf$ in \eqref{1.7b}. (ii) If $\theta>0$ and $u$ has bounded width, then $\sup_{t\in{\mathbb{R}}} \Lambda^0_u (t) \le M$ (and so (i) holds with $h=0$ and $\varepsilon\in(0,\tfrac 12)$). 
Moreover, if a pure ignition $f_1$ satisfies \eqref{1.4e}, then $u$ propagates with global mean speed in $[c_0,c_1]$, with $\tau_{\varepsilon,\delta}$ in Definition \ref{D.1.1b} depending only on $\delta,f_1,\varepsilon,f_0,K,\zeta,\eta$. \end{corollary} {\it Remark.} Recall that $M,M_{\varepsilon},\mu_\varepsilon$ depend on $f_0,K,\zeta,\eta$ (and $\varepsilon$) {\it but not on $\theta,h,f,u$}. This and Theorem \ref{T.1.5}(ii) will be the key to the independence of the bounds in Theorem \ref{T.1.2}(i) on $u_0$. \begin{proof} (i) The first claim is immediate from \eqref{3.9} after letting $u_0(x):=u(t_0,x)$ and then sending $t_0\to-\infty$. The second then follows from \eqref{3.9e}, and the third from Lemma \ref{L.8.1}(iii) with $\theta=0$, applied to $u$ shifted in time by $1-t$. (ii) For any $h\in\left(0,\min\left\{ \theta(c_0^2-4\zeta)(c_0^2+4\zeta)^{-1},\frac\eta{4K},\varepsilon_0 \right\} \right]$, \eqref{3.9f} and (i) show $\sup_{t\in{\mathbb{R}}} \Lambda^h_u (t) \le M$, and right-continuity of $\Lambda^h_u$ in $h$ then yields $\sup_{t\in{\mathbb{R}}} \Lambda^0_u (t) \le M$. The second claim now follows from Lemma \ref{L.2.1a} with $\delta/2$ instead of $\delta$, using the bound $L_{u,\varepsilon'}(t) \le M_{\varepsilon'}$ for $\varepsilon'$ from that lemma (which holds by \eqref{3.9e} with $h=0$). Indeed, we only need to take $\tau_{\varepsilon,\delta}\ge \max\{2\delta^{-1}M_{\varepsilon'},\tau\}$ in Definition \ref{D.1.1b}, with $\tau$ from Lemma \ref{L.2.1a}. \end{proof} {\it Open problem.} It is an interesting question whether there is a transition solution $u$ satisfying all the hypotheses of Corollary \ref{C.3.3a}(ii), except for the hypothesis of bounded width, such that $\Lambda^0_u$ is unbounded (cf.~Open problem 2 after Theorem \ref{T.1.5}). It is obvious from \eqref{3.9f} and \eqref{3.9a} that in that case one would have $\liminf_{t\to-\infty} |t|^{-1} \Lambda^h_u(t)>0$. 
\section{Proof of Theorem \ref{T.1.3}(i)} \label{S4} We can assume $u_0\not\equiv 0,1$ because otherwise the result holds trivially. As in Section \ref{S3}, all constants will depend on $f_0,K,\zeta,\eta$ (but not on $\theta$ from (H), unless explicitly noted). The second claim in \eqref{1.7} follows immediately from the first. Indeed: it is sufficient to prove it for $\varepsilon\in(0,2\varepsilon_0]$; if $\mu_{\varepsilon,\ell}>0$ is the $\inf$ in \eqref{1.7b} for such $\varepsilon$ and $\ell_\varepsilon,T_\varepsilon$ are from the first claim in \eqref{1.7}, then the second claim follows with $m_\varepsilon:= \mu_{\varepsilon,\ell_{\varepsilon/2}}$ and $T_\varepsilon$ replaced by $T_{\varepsilon/2}+1$, after applying Lemma \ref{L.8.1}(iii) to $u$ shifted in time by $-(t_0+T_{\varepsilon/2})$. Similarly, the claim about global mean speed also follows from the first claim in \eqref{1.7}. Indeed, if $\varepsilon',\tau$ are from Lemma \ref{L.2.1a} with $\delta/2$ instead of $\delta$, then that lemma shows that we only need $T_{\varepsilon,\delta}\ge T_{\varepsilon'}+1$ and $\tau_{\varepsilon,\delta}\ge\max\{2\delta^{-1}\ell_{\varepsilon'},\tau\}$ in Definition \ref{D.1.1b}. We are left with proving the first claim in \eqref{1.7} (which also proves that $u$ has a bounded width). We will do so with $\ell_\varepsilon:=M_{\varepsilon/2}$ from Theorem \ref{T.3.3}. We define $Z_y,Y^{h}_y$ as in Section \ref{S3} and split the proof into two cases. {\bf Case $d=3$:} Let $\varepsilon':=1-\varepsilon_0$ (which depends on $f_0,K$) and given any $\varepsilon\in(0,\tfrac 12)$, let $h:=\min\left\{ \theta(c_0^2-4\zeta)(c_0^2+4\zeta)^{-1},\frac\eta{4K},\varepsilon_0,\tfrac \varepsilon 2 \right\}$. The argument which proves \eqref{3.9f} now shows $Z_y(t_0)\le Y^{h}_y(t_0)+L_{u,h,\varepsilon'}(t_0)$ for each $y\in{\mathbb{R}}^3$. Hence the right-hand side of \eqref{3.9a} equals $M_{\varepsilon-h}\,(\le M_{\varepsilon/2})$ for all large enough $t$ and we are done. 
{\bf Case $d\le 2$:} First, there is $\tau\ge 1$ such that if a solution $u\in[0,1]$ of \eqref{1.1} on $(t_0,\infty)\times{\mathbb{R}}^d$ with $f$ as in the theorem satisfies $u_t\ge 0$ and $u(t_0,x)\ge\varepsilon'$, then $u(t_0+\tau,x)>1-\varepsilon_0$. This is proved just as a similar claim in the proof of Lemma \ref{L.3.2}. Define now $Z_y'$ as $Z_y$ but with $\varepsilon'$ in place of $1-\varepsilon_0$, and given any $\varepsilon\in(0,\tfrac 12)$, let $h:=\min\left\{ \theta(c_0^2-4\zeta)(c_0^2+4\zeta)^{-1},\frac\eta{4K},1-\varepsilon',\tfrac \varepsilon 2 \right\}$. The argument which proves \eqref{3.9f} now shows $Z_y'(t_0)\le Y^{h}_y(t_0)+L_{u,h,\varepsilon'}(t_0)$, and then the previous paragraph and Lemma \ref{L.3.1}(i) yield \[ Z_y(t_0+\tau)\le Y^{h}_y(t_0)+L_{u,h,\varepsilon'}(t_0)\le Y^{h}_y(t_0+\tau)+c_Y'\tau+L_{u,h,\varepsilon'}(t_0). \] This holds for all $y\in{\mathbb{R}}^d$, so we conclude as in the first case. The proof of Theorem \ref{T.1.3}(i) is finished. \smallskip \begin{proof}[Proof of Remark 2 after Theorem \ref{T.1.3}] The second claim in \eqref{1.7} follows from the first as above. To prove the first claim in \eqref{1.7}, again consider two cases. {\bf Case $d=3$:} Let $\varepsilon':=1-\varepsilon_0$. The hypothesis and \eqref{2.5} for $k=0$ and $\zeta'$ in place of $\zeta$ imply \[ C:=\sup_{\varepsilon\in(0,1), r\ge 0} \varepsilon \psi(r)^{-1}\psi(r+L_{u,\varepsilon,\varepsilon'}(t_0))<\infty. \] Assume that $u_0\not\equiv 0$ because otherwise the result holds trivially. By the definition of $Y^{h}_y$, for any $y\in{\mathbb{R}}^3$, there is $x\in{\mathbb{R}}^3$ such that \[ u(t_0,x)=\psi(Y^{0}_y(t_0))^{-1}\psi(|x-y|) \qquad (=:\varepsilon>0). \] Then there is $x'\in B_{L_{u,\varepsilon,\varepsilon'}(t_0)}(x)$ with $u(t_0,x')\ge \varepsilon'$ ($=1-\varepsilon_0$), and we have \[ \psi(|x'-y|)\le \psi(|x-y|+L_{u,\varepsilon,\varepsilon'}(t_0)) \le C\varepsilon^{-1}\psi(|x-y|) = C\psi(Y^{0}_y(t_0)). 
\] Since $C$ is independent of $y$, this means $\sup_{y\in{\mathbb{R}}^3} (Z_y(t_0)-Y^{0}_y(t_0))<\infty$. We conclude as in the ignition case, using \eqref{3.9a}. {\bf Case $d\le 2$:} The argument from the first case shows $\sup_{y\in{\mathbb{R}}^d} (Z_y'(t_0)-Y^{0}_y(t_0))<\infty$, with $Z_y'$ from the ignition case $d\le 2$. As in the ignition case $d\le 2$, and with the same $\tau$, we obtain $\sup_{y\in{\mathbb{R}}^d} (Z_y(t_0+\tau)-Y^{0}_y(t_0+\tau))<\infty$ and the result follows as before. Finally, the first inclusion of the claim about global mean speed follows from the first claim in \eqref{1.7} as in the ignition case because the first inclusion in Lemma \ref{L.2.1a} holds also for monostable $f$. The second inclusion in Definition \ref{D.1.1b}, with $c':=c_Y'$, follows from $\Lambda^0_u(t)\le M$ (which holds for all large enough $t$ by \eqref{3.9}) and \eqref{3.3}. \end{proof} \section{Proofs of Theorems \ref{T.1.2}(i) and \ref{T.1.3}(ii)} \label{S4a} As in Section \ref{S3}, all constants will depend on $f_0,K,\zeta,\eta$ (but not on $\theta$ from (H), unless explicitly noted). Note that the claim about global mean speed follows in both cases from the first claim in \eqref{1.7} as in the proof of Theorem \ref{T.1.3}(i). Since bounded width also follows from the first claim in \eqref{1.7}, we are therefore left with proving \eqref{1.7} in both cases. Let us start with the (easier to prove) analogous results for monostable reactions from Remark 4 after Theorem \ref{T.1.2} and Remark 3 after Theorem \ref{T.1.3}. \begin{proof}[Proof of Remark 4 after Theorem \ref{T.1.2}] We can assume without loss that $t_0=0$. The idea is to construct $w_0$ such that \begin{equation} \label{4.2} \Delta w_0(\cdot) + f(\cdot,w_0(\cdot))\ge 0 \end{equation} and the solution $w$ to \eqref{1.1} with $w(0,x)=w_0(x)$ satisfies $w(\tau,\cdot)\ge u_0(\cdot)$ and $u(\tau ,\cdot)\ge w_0(\cdot)$ for some $\tau >0$. 
Then $u$ will satisfy \begin{equation} \label{4.2a} w(t-\tau ,\cdot) \le u(t,\cdot) \le w(t+\tau,\cdot) \end{equation} for $t\ge \tau$. Since $w_t\ge 0$, Theorem \ref{T.3.3}(ii) for $w,u$ in place of $u,v$ will now do the trick. Let us first consider \eqref{1.6a}, and assume without loss $e=(1,0,\dots,0)$. Let $s_0>0$ be such that there is an even, $2s_0$-periodic $C^2$ function $U:{\mathbb{R}}\to[0,\tfrac 12(1+\theta_0)]$ satisfying $U'' + f_0(U)>0$ on ${\mathbb{R}}$, $U(0)=\tfrac 12(1+\theta_0)$, $U(s_0)=0$, and $U'<0$ on $(0,s_0)$ (then obviously $U'(0)=U'(s_0)=0$). Such $U$ is obtained by perturbing the solution of $\tilde U'' + f_0(\tilde U)=0$ with $\tilde U(0)=\tfrac 12(1+\theta_0)$ and $\tilde U'(0)=0$. The latter satisfies $\tilde U'<0$ on some interval $(0,\tilde s_0]$ with $\tilde U(\tilde s_0)=0$ because multiplying the ODE by $\tilde U'$ and integrating yields \[ \tilde U'(s)^2 = \tilde U'(0)^2 + 2\int_{\tilde U(s)}^{\tilde U(0)} f_0(u)du = 2\int_{\tilde U(s)}^{(1+\theta_0)/2} f_0(u)du >0 \] as long as $\tilde U(s)>0$ (notice that $\tilde U''(0)<0$). Thus we can perturb $\tilde U$ to obtain the desired $U$, with $s_0$ near $\tilde s_0$. Then $U,s_0$ depend only on $f_0$, and we define \begin{equation}\label{4.1} W(s):= \begin{cases} \tfrac 12(1+\theta_0) & s\le R_2, \\ U(s-R_2) & s\in(R_2,R_2+s_0), \\ 0 & s\ge R_2+s_0. \end{cases} \end{equation} Note that \[ \inf_{s<R_2+s_0} [W''(s)+f_0(W(s))] = \inf_{s\in{\mathbb{R}}} [U''(s)+f_0(U(s))] >0. \] Then $w_0(x):=W(x_1)$ satisfies \eqref{4.2}, and $w(\tau,\cdot)\ge u_0(\cdot)$ follows for some $(f_0,\varepsilon_2)$-dependent $\tau$, from $\varepsilon_2>0$ and the second claim in Lemma \ref{L.2.1}. Similarly, $u(\tau,\cdot)\ge w_0(\cdot)$ follows for some $(f_0,R_2-R_1,\varepsilon_1)$-dependent $\tau$ from the second claim in Lemma \ref{L.2.1} (which holds with $\tfrac 12(1+\theta_0)$ replaced by $\theta_0+\varepsilon_1$ when $\varepsilon_1>0$, and with $R=R(f_0,\varepsilon_1)$ \cite{AW}).
Thus $w$ satisfies \eqref{4.2a} for all $t\ge \tau $. Let us increase $\tau$ so that \[ w(\tau,\cdot)\ge (1-\varepsilon_0) \chi_{\{x\,|\,x\cdot e<R_2\}}(\cdot). \] This makes $\tau$ also depend on $K$, and then Lemma \ref{L.3.1}(i) applied to $w$ yields \begin{equation}\label{4.1cc} \Lambda_w^0(\tau)\le s_0+c_Y'\tau \end{equation} because $Y_y^0(0)\ge y_1-(R_2+s_0)$ if $Y_y^0$ is defined with respect to $w$. With $M_\varepsilon,\tau_\varepsilon$ from Theorem~\ref{T.3.3}, let $\ell_\varepsilon:=M_{\varepsilon/2}+3c_Y'\tau $ and \begin{equation}\label{4.1b} T_\varepsilon:= 2\tau + \tau_{\varepsilon/2} + \left(\frac{c_0}2 - \sqrt\zeta \right)^{-1} (s_0+c_Y'\tau) . \end{equation} Then Theorem \ref{T.3.3}(ii) with $w,u$ in place of $u,v$ gives $L_{u,\varepsilon}(t)\le\ell_\varepsilon$ for $t\ge T_\varepsilon$. The proof in the case \eqref{1.6a} is finished. Let us now assume \eqref{1.5a} as well as $x_0=0$ without loss, and first also assume that $\sup u_0<1$. We can also assume without loss that (with $U$ as above) \begin{equation}\label{4.1a} R_2\ge \frac{(d-1)\|U'\|_\infty}{\inf_{s\in{\mathbb{R}}} [U''(s)+f_0(U(s))]}. \end{equation} The result now holds for any $R_1\ge R(f_0,\varepsilon_1)$, where the latter is from the argument above so that the conclusion of Lemma \ref{L.2.1} still holds. Indeed, this time we let $w_0(x):=W(|x|)$, which also satisfies \eqref{4.2} due to \eqref{4.1a}. As above, we obtain \eqref{4.2a} for some $\tau>0$, and then again $L_{u,\varepsilon}(t)\le\ell_\varepsilon$ for $t\ge T_\varepsilon$. Finally, assume \eqref{1.5a} with $x_0=0$, and $\sup u_0= 1$. We again assume \eqref{4.1a} and let $w_0(x):=W(|x|)$, but now $w(\tau,\cdot)\ge u_0(\cdot)$ may fail for all $\tau\ge 0$. We thus solve \eqref{1.1}, \eqref{1.2} and replace $u_0$ by $u(1,\cdot)$. It is obviously sufficient to prove the claim for the new $u_0$.
This now satisfies \[ u_0(x) \le \min \left\{ 1-\varepsilon_2,\frac{ |B_1(0)| R_2^d}{(4\pi)^{d/2}} e^{K } e^{ -\max\{|x|-R_2,0\}^2/4} \right\}, \] by the comparison principle, for some $(K,R_2)$-dependent $\varepsilon_2>0$. Since $w(\tau,\cdot)$ converges locally uniformly to $1$ as $\tau\to\infty$, $w_t\ge 0$, and $w(\tau,x)\sim e^{-|x|^2/4\tau}$ as $|x|\to\infty$ by the heat equation asymptotics, we again obtain $w(\tau,\cdot)\ge u_0(\cdot)$ for some $\tau$. The rest of the argument is as before. \end{proof} \begin{proof}[Proof of Remark 3 after Theorem \ref{T.1.3}] The first claim in \eqref{1.7} is proved as above, now with $u$ playing the role of $w$ because $u_t\ge 0$. Indeed, assume $t_0=0$ and let $\tau<\infty$ be such that $s:=\sup_{y\in{\mathbb{R}}^d}(Z_y(\tau)-Y^0_y(\tau))<\infty$ (which exists by the proof of Remark 2 after Theorem \ref{T.1.3}). With $M_\varepsilon,\tau_\varepsilon,\ell_\varepsilon$ from the previous proof, let $T_\varepsilon$ be from \eqref{4.1b} but with $s_0+c_Y'\tau$ replaced by $s$. Then again $L_{v,\varepsilon}(t)\le\ell_\varepsilon$ for $t\ge T_\varepsilon$ by Theorem \ref{T.3.3}(ii), as above. The global mean speed claim is proved as in the proof of Remark 2 after Theorem \ref{T.1.3}, this time using $\Lambda^0_v(t)\le \Lambda^0_u(t-\tau)+2c_Y'\tau \le M+ 2c_Y'\tau$ (which again holds for all large $t$). \end{proof} The proof of \eqref{1.7} in Theorem \ref{T.1.2}(i) resp.~Theorem \ref{T.1.3}(ii) is similar but a little more involved. To show that $\ell_\varepsilon$ is independent of $R_1,R_2,\varepsilon_1,\varepsilon_2$ resp.~$\tau$, as well as to obtain the second claim in \eqref{1.7}, we will need to use Theorem \ref{T.1.5}. In addition, the exponential tails of the initial data in Theorem \ref{T.1.2}(i) will be handled by constructing appropriate super-solutions and obtaining inequalities as in \eqref{3.9b} instead of \eqref{4.2a}. 
We will start by proving the result for general solutions $u$ which (essentially) lie between two time-translates of a solution $w$ with initial datum satisfying \eqref{4.2}. The bounds in this result will, in fact, be independent of $u,w$ for large $t$ as long as the number $\Lambda^h_{w}(0)$, defined in \eqref{3.9d}, is finite for each small enough $h>0$. \begin{theorem} \label{T.5.1} Let $d\le 3$, let $f_0,K$ be as in (H), and let $\eta>0$ and $\zeta\in(0,c_0^2/4)$. For any $\varepsilon'\in(0,\tfrac 12)$, there are $\ell_{\varepsilon'},m_{\varepsilon'}\in(0,\infty)$ such that if $\theta>0$, $\lambda:(0,\tfrac 12)\to(0,\infty)$ is left-continuous and non-increasing, $\tau <\infty$, and $\nu:(0,\infty)\to[0,\infty)$ satisfies $\lim_{t\to\infty}\nu(t)=0$, then there is $T_{\varepsilon',\theta,\lambda,\tau ,\nu}<\infty$ such that the following holds. If $f\in F(f_0,K,\theta,\zeta,\eta)$ and $u,w\in[0,1]$ are solutions of \eqref{1.1} on $(0,\infty)\times{\mathbb{R}}^d$ with $w_0(\cdot):=w(0,\cdot)$ satisfying \eqref{4.2}, with $\Lambda^h_{w}(0)\le \lambda(h)$ for all $h\in \left( 0,\min\left\{ \theta(c_0^2-4\zeta)(c_0^2+4\zeta)^{-1},\frac\eta{4K},\varepsilon_0 \right\} \right]$, and with \begin{equation}\label{5.2} w(t-\tau ,\cdot)-\nu(t) \le u(t,\cdot) \le w(t+\tau ,\cdot)+\nu(t) \end{equation} for each $t>\tau $, then \begin{equation}\label{5.3} \sup_{t\ge T_{\varepsilon',\theta,\lambda,\tau ,\nu}} L_{u,\varepsilon'}(t)\le \ell_{\varepsilon'} \qquad\text{and}\qquad \inf_{\substack{t\ge T_{\varepsilon',\theta,\lambda,\tau ,\nu} \\ u(t,x)\in[\varepsilon',1-\varepsilon']}} u_t(t,x) \ge m_{\varepsilon'}. \end{equation} \end{theorem} {\it Remark.} We stress that $\ell_{\varepsilon'},m_{\varepsilon'}$ are independent of $f,u$ as well as of $\theta,\lambda,\tau,\nu$. \begin{proof} Let $h_0:= \min\left\{ \theta(c_0^2-4\zeta)(c_0^2+4\zeta)^{-1},\frac\eta{4K},\varepsilon_0 \right\}>0$.
For $\varepsilon\in(0,4h_0]$, and with $M_\varepsilon,\tau_\varepsilon$ from Theorem \ref{T.3.3}, let $T(\varepsilon)\ge \tau +\tau_{\varepsilon/2}$ be such that $\sup_{t\ge T(\varepsilon)} \nu(t)\le \tfrac \varepsilon 2$, and define $L(\varepsilon):= M_{\varepsilon/4} + 3c_Y'\tau +\lambda(\tfrac \varepsilon 4)$. For $\varepsilon\in(4h_0,\tfrac 12)$ let $T(\varepsilon):=T(4h_0)$ and $L(\varepsilon):=L(4h_0)$. Then for any $\varepsilon\in(0,4h_0]$, by Theorem \ref{T.3.3}(ii) with $h:=\tfrac\varepsilon 4$, \begin{equation}\label{5.4a} L_{u,\varepsilon}(t)\le L(\varepsilon) \qquad\text{for $t\ge T(\varepsilon)$} \end{equation} (and this also holds for $\varepsilon\in(4h_0,\tfrac 12)$ because $L_{u,\varepsilon}(t)$ is non-increasing in $\varepsilon$). Let us now prove the first claim in \eqref{5.3}, with $\ell_{\varepsilon'} :=M_{\varepsilon'/2}+1$. If there is no such $T_{\varepsilon',\theta,\lambda,\tau ,\nu}$, then there is a sequence $(f_n,u_n, w_n,t_n,x_n)$ with $f_n,u_n,w_n$ satisfying the hypotheses of the theorem, $\lim_{n\to\infty} t_n=\infty$, $u_n(t_n,x_n)\in [\varepsilon',1-\varepsilon']$ and \begin{equation}\label{5.5} \inf_{u_n(t_n,y)\ge 1-\varepsilon'} |y-x_n| > \ell_{\varepsilon'}. \end{equation} After shifting $f_n$ by $(-x_n,0)$ and $u_n$ by $(-t_n,-x_n)$, and then applying Lemma \ref{L.8.1}(ii) (with $-t_n+T(\varepsilon)$ in place of $t_n(\varepsilon)$, using \eqref{5.4a}), we obtain new $(f,u)\in S_{-\infty,L}(f_0,K,\theta)$ such that $u(0,0)\in [\varepsilon',1-\varepsilon']$ and $L_{u,\varepsilon'/2}(0)\ge \ell_{\varepsilon'}>M_{\varepsilon'/2}$. We also have $f\in F(f_0,K,\theta,\zeta,\eta)$ because that set is closed under locally uniform limits. Thus $(f,u)\in S_L(f_0,K,\theta)$ since $u\not\equiv 0,1$. Then Theorem \ref{T.1.5}(ii) shows $u_t\ge 0$, because bounded width of $u$ and Lemma \ref{L.2.1} immediately show that $u$ propagates with a positive global mean speed.
But then $L_{u,\varepsilon'/2}(0)>M_{\varepsilon'/2}$ yields a contradiction with Corollary \ref{C.3.3a}(ii,i) (with $h=0$). The first claim in \eqref{5.3} is proved. The second claim is proved similarly with $m_{\varepsilon'}:=\tfrac 12\mu_{\varepsilon',M_{\varepsilon'/2}}$ for $\varepsilon'\in(0,2\varepsilon_0]$, where $\mu_{\varepsilon,\ell}>0$ is the $\inf$ in \eqref{1.7b} (then it also holds with $m_{\varepsilon'}:=m_{2\varepsilon_0}$ for $\varepsilon'\in(2\varepsilon_0,\tfrac 12)$). Non-existence of $T_{\varepsilon',\theta,\lambda,\tau ,\nu}$ again yields a sequence $(f_n,u_n,w_n,t_n,x_n)$ with $f_n,u_n,w_n$ satisfying the hypotheses of the theorem, $\lim_{n\to\infty} t_n=\infty$, $u_n(t_n,x_n)\in [\varepsilon',1-\varepsilon']$ and $(u_n)_t(t_n,x_n)<m_{\varepsilon'}$. We again obtain new $(f,u)\in S_L(f_0,K,\theta)$ such that $f\in F(f_0,K,\theta,\zeta,\eta)$, $u_t\ge 0$, as well as $u(0,0)\in [\varepsilon',1-\varepsilon']$, and $u_t(0,0)\le m_{\varepsilon'}<\mu_{\varepsilon',M_{\varepsilon'/2}}$. This contradicts Corollary \ref{C.3.3a}(ii,i) (with $h=0$), and the second claim in \eqref{5.3} is also proved. \end{proof} Recall that in the proof of Theorem \ref{T.1.3}(i) we obtained $\Lambda^h_u(t_0+T)\le c_Y'T+L_{u,h,\varepsilon'}(t_0)$ for all $h\in \left( 0,\min\left\{ \theta(c_0^2-4\zeta)(c_0^2+4\zeta)^{-1},\frac\eta{4K},\varepsilon_0 \right\} \right]$, with $T=0$ if $d=3$ and some $T>0$ if $d\le 2$. If we thus let $\lambda(h):=c_Y'T+\inf_{h'\in(0,h)}L_{u,h',\varepsilon'}(t_0)$ (which is left-continuous) and $\nu\equiv 0$, then \eqref{1.7} in Theorem \ref{T.1.3}(ii) immediately follows from Theorem \ref{T.5.1} with $u,v$ in place of $w,u$ and time shifted by $-(t_0+T)$. Hence we are left with proving \eqref{1.7} in Theorem \ref{T.1.2}(i). As in the proof of Remark 4 after Theorem \ref{T.1.2}, we will start by assuming \eqref{1.6}, and also without loss that $t_0=0$, $e=(1,0,\dots,0)$, as well as $\varepsilon_2\le c_0/4$ (recall that $u\in[0, 1]$).
We again let $w$ solve \eqref{1.1} with $w(0,x)=W(x_1)$, where $W$ is from \eqref{4.1}. As before, $w_t\ge 0$ and we have $u(\tau,\cdot) \ge w(0 ,\cdot)$ provided $\tau $ is large enough (depending on $f_0,R_2-R_1,\varepsilon_1$). This yields the first inequality in \eqref{5.2}, with $\nu\equiv 0$. To obtain the second inequality in \eqref{5.2}, we define $\beta(t):= \tau -e^{-\varepsilon_2^2 t}$ and \begin{equation} \label{5.9b} v(t,x):=w(t+\beta(t),x)+e^{\varepsilon_2^2 t - \varepsilon_2(x_1-R_2)} \end{equation} for some large $\tau $ to be determined later. We then have for $t>0$, \begin{equation} \label{5.6} v_t-\Delta v-f(x,v) = f(x,w(t+\beta(t),x))-f(x,v) + \varepsilon_2^2 e^{-\varepsilon_2^2 t} w_t(t+\beta(t),x), \end{equation} where we extend $f$ so that $f(x,u)\le 0$ for $u\ge 1$ (cf.~\eqref{1.24}). We want to show that $v$ is a super-solution of \eqref{1.1}, that is, the right hand side of \eqref{5.6} is $\ge 0$ for $t>0$ and $x\in{\mathbb{R}}^d$. When $w(t+\beta(t),x)\ge 1-\theta$, then $f(x,w(t+\beta(t),x))\ge f(x,v(t,x))$ by the hypotheses on $f$ and $w\le 1$, so this is indeed the case. Let $\ell_{\theta/4},m_{\theta/2}$ be from Theorem \ref{T.5.1} (i.e., with $\varepsilon':=\tfrac\theta 4$ and $\varepsilon':=\tfrac\theta 2$). We now let $\tau $ be large enough so that $w(t+\tau -1,x)\ge 1-\theta$ whenever $t\ge 0$ and \begin{equation} \label{5.7} x_1\le \frac {c_0}2 t + \frac 1{\varepsilon_2} \log \max \left\{ \frac K {\varepsilon_2^2 m_{\theta/2}}, \frac 2\theta \right\} +R_2, \end{equation} and also that \begin{equation}\label{5.7a} \sup_{t\ge 0} L_{w,\theta/4}(t+\tau-1)\le\ell_{\theta/4} \qquad\text{and}\qquad \inf_{\substack{t\ge 0 \\ w(t+\tau -1,x)\in[\theta/2,1-\theta/2]}} w_t(t+\tau -1,x) \ge m_{\theta/2}. \end{equation} The former holds for all large $\tau $ due to the second claim in Lemma \ref{L.2.1}. 
The latter holds for all large $\tau $ due to Theorem \ref{T.5.1} applied to $u=w$, $\nu\equiv 0$, and $\tau =0$, but starting from some positive time for which $\Lambda^0_w$ ($\ge \Lambda^h_w$ for all $h> 0$) is finite (see \eqref{4.1cc}), instead of from time 0. This $\tau$ then only depends on $f_0,K,\zeta,\eta,\varepsilon_2,\theta$. When $w(t+\beta(t),x)< 1-\theta$, then $w(t+\tau-1,x)< 1-\theta$ by $w_t\ge 0$, so \[ e^{\varepsilon_2^2 t - \varepsilon_2(x_1-R_2)} \le \min \left\{ \frac {\varepsilon_2^2 m_{\theta/2}}K, \frac \theta 2 \right\} e^{(\varepsilon_2^2 -c_0\varepsilon_2/2) t} \le \min \left\{ \frac {\varepsilon_2^2 m_{\theta/2}}K e^{(\varepsilon_2^2 -c_0\varepsilon_2/2) t}, \frac\theta 2 \right\} \] by the opposite inequality to \eqref{5.7}. So either $w(t+\beta(t),x)\le \tfrac\theta 2$, in which case $v(t,x)\le\theta$ and we have $f(x,w(t+\beta(t),x))= f(x,v(t,x))=0$; or $w(t+\beta(t),x)\in (\tfrac\theta 2,1-\theta)$, in which case the right hand side of \eqref{5.6} can be bounded below by \[ - Ke^{\varepsilon_2^2 t - \varepsilon_2(x_1-R_2)} + \varepsilon_2^2 e^{-\varepsilon_2^2 t} m_{\theta/2} \ge - \varepsilon_2^2 m_{\theta/2} e^{(\varepsilon_2^2 -c_0\varepsilon_2/2) t} + \varepsilon_2^2 m_{\theta/2} e^{-\varepsilon_2^2 t} \ge 0 \] (using $\varepsilon_2\le c_0/4$ in the last inequality). It follows that $v$ is a super-solution of \eqref{1.1}, with $v(0,\cdot)\ge u(0,\cdot)$ due to \eqref{1.6}. Hence $v\ge u$, and the second inequality in \eqref{5.2} holds with \[ \nu(t):= \max \left\{ \sup_{x_1\le R_2+c_0t/2} [1-w(t+\tau -1, x)],\, e^{(\varepsilon_2^2 -c_0\varepsilon_2/2) t} \right\} \] because $u\le 1$ and $w_t\ge 0$ (notice that $\nu$ depends only on $\tau,f_0,\varepsilon_2$). Since $\lim_{t\to\infty}\nu(t)=0$ due to Lemma \ref{L.2.1} and $0<\varepsilon_2<c_0/2$, Theorem \ref{T.1.2}(i) for \eqref{1.6} follows from Theorem \ref{T.5.1}.
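{\it Remark.} For the reader's convenience: the hypothesis $\varepsilon_2\le c_0/4$ entered the last displayed inequality above via the elementary estimate
\[
\varepsilon_2^2-\frac{c_0\varepsilon_2}2 \le \varepsilon_2^2-2\varepsilon_2^2 = -\varepsilon_2^2,
\]
which gives $e^{(\varepsilon_2^2-c_0\varepsilon_2/2)t}\le e^{-\varepsilon_2^2 t}$ for all $t\ge 0$.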
The proof of \eqref{1.7} in the \eqref{1.5} case of Theorem \ref{T.1.2}(i) is similar, with $x_1$ replaced by $|x-x_0|$ in the whole argument, and $\varepsilon_2(d-1)|x-x_0|^{-1}e^{\varepsilon_2^2 t - \varepsilon_2(|x-x_0|-R_2)}$ added to the right hand side of \eqref{5.6}. \smallskip {\it Remark.} For later reference, in the proof of Theorem \ref{T.1.12} below, we also construct a sub-solution of \eqref{1.1} with the same flavor. Let $w$ be as in the above proof, solving \eqref{1.1} with $w(0,x):=W(x_1)$. We have $\inf_{w(0,x)\ge \theta/4} w_t(0,x)>0$ by the construction of $W$ (because $U''+f_0(U)>0$), uniformly in all $f\ge f_0$. It follows from this and parabolic regularity that on some short time interval $[0,\tilde t]$, $w_t$ is bounded away from zero uniformly in all $(t,x)$ with $w(t,x)\ge\tfrac\theta 2$ and in all $f\ge f_0$ with $f(x,0)\equiv 0$ and Lipschitz constant $K$. This, $w_t\ge 0$, and the first claim in \eqref{5.7a} now yield \begin{equation}\label{5.8a} m:=\inf \left\{ w_t(t,x) \,\bigg|\, f\in F(f_0,K,\theta,\zeta,\eta), \, t\ge 0, \text{ and } w(t,x)\in \left[ \frac\theta 2, 1- \frac\theta 2 \right] \right\}>0, \end{equation} provided we also assume (without loss) that $\theta\le 4\varepsilon_0$. This is because an argument as in Lemma \ref{L.8.1}(iii) shows that otherwise there would be some $f\in F(f_0,K,\theta,\zeta,\eta)$ and a solution $u$ of \eqref{1.1} on $(-\tilde t,\infty)\times{\mathbb{R}}^d$ with $\sup_{t\ge \tau} L_{u,\theta/4}(t)\le\ell_{\theta/4}$ for $\tau$ from \eqref{5.7a}, $u(0,0)\in[\tfrac\theta 2, 1-\tfrac\theta 2]$, and $u_t\equiv 0$. Then Lemma \ref{L.2.1} and $\tfrac\theta 4\le\varepsilon_0$ show $\lim_{t\to\infty}u(t,0)= 1$, contradicting $u_t\equiv 0$. 
Next, pick any $r\in{\mathbb{R}}$ and define \begin{equation} \label{5.9a} v(t,x):=w(t-1+e^{-\varepsilon_2^2 t},x)-e^{\varepsilon_2^2 t - \varepsilon_2(x_1-r)}, \end{equation} so that for $t>0$, \begin{equation} \label{5.9} v_t-\Delta v-f(x,v) = f(x,w(t-1+e^{-\varepsilon_2^2 t},x))-f(x,v) - \varepsilon_2^2 e^{-\varepsilon_2^2 t} w_t(t-1+e^{-\varepsilon_2^2 t},x). \end{equation} Here we extend $f$ so that $f(x,u)\ge 0$ for $u\le 0$ (cf.~\eqref{1.24}). We would like to show that the right hand side of \eqref{5.9} is $\le 0$. This is obviously true when $v(t,x)\ge 1-\theta$ because then the hypotheses on $f$ and $w\le 1$ show $f(x,w(t-1+e^{-\varepsilon_2^2 t},x))\le f(x,v(t,x))$. Now consider $(t,x)\in(0,\infty)\times{\mathbb{R}}^d$ with \begin{equation} \label{5.8} x_1\ge \frac {c_0}2 t + \frac 1{\varepsilon_2} \log \max \left\{ \frac K {\varepsilon_2^2 m}, \frac 2\theta \right\}+r. \end{equation} Then \[ e^{\varepsilon_2^2 t - \varepsilon_2(x_1-r)} \le \min \left\{ \frac {\varepsilon_2^2 m}K, \frac \theta 2 \right\} e^{(\varepsilon_2^2 -c_0\varepsilon_2/2) t} \le \min \left\{ \frac {\varepsilon_2^2 m}K e^{(\varepsilon_2^2 -c_0\varepsilon_2/2) t}, \frac\theta 2 \right\}. \] This, $w\in[0,1]$, and the hypotheses on $f$ show that if $w(t-1+e^{-\varepsilon_2^2 t},x)\notin (\theta,1-\tfrac\theta 2)$, then we have $f(x,w(t-1+e^{-\varepsilon_2^2 t},x))\le f(x,v(t,x))$, so the right hand side of \eqref{5.9} is indeed $\le 0$. If instead $w(t-1+e^{-\varepsilon_2^2 t},x)\in (\theta,1-\tfrac\theta 2)$, then we conclude the same because the right hand side can be bounded above by \[ Ke^{\varepsilon_2^2 t - \varepsilon_2(x_1-r)} -\varepsilon_2^2 e^{-\varepsilon_2^2 t} m \le \varepsilon_2^2 m e^{(\varepsilon_2^2 -c_0\varepsilon_2/2) t} -\varepsilon_2^2 m e^{-\varepsilon_2^2 t} \le 0. \] We cannot, however, conclude this when the opposite of \eqref{5.8} holds and $v(t,x)< 1-\theta$.
Thus we have obtained that $v$ is a sub-solution of \eqref{1.1} on the set of $(t,x)\in(0,\infty)\times{\mathbb{R}}^d$ such that either \eqref{5.8} holds or $v(t,x)\ge 1-\theta$. This will turn out to be sufficient for our purposes because typical solutions $u$ spread with speed $>c_0/2$. Hence for appropriate $u$ we will have $u(t,x)\ge 1-\theta$ when the opposite of \eqref{5.8} holds, and we will still be able to conclude $u\ge v$. \section{Proof of Lemma \ref{L.3.2} in the case $d=3$ (case $u^+\equiv 1$)} \label{S5} Recall the setup from the beginning of Section \ref{S3}. In particular, all constants depend on $f_0,K,\zeta,\eta$ (but not on $\theta,h$ unless explicitly noted). Let us also assume, without loss, that $t_1=0$ and $y=0$, and denote $Y^{h}_0=Y$, $Z_0=Z$. Thus \eqref{3.7} becomes $Z(\tau)\le Y(0)$. Finally, recall that $\alpha_f(x)=\alpha_f(x;\zeta)$ and $\psi$ corresponds to $\zeta'$ in Lemma \ref{L.2.3}. Let $\kappa\in(0,\tfrac12)$ be such that if $u(t,\tilde x)\in [\alpha_f(\tilde x),1-\varepsilon_0]$ for some $(t,\tilde x)\in [\tfrac 12,\infty)\times{\mathbb{R}}^3$, then \begin{equation} \label{3.10} u(t,x)\ge \frac\eta{2K} \qquad \text{and}\qquad f(x,u(t,x)) \ge \kappa \qquad \text{for any $x\in B_{\sqrt 3 \kappa}(\tilde x)$.} \end{equation} Note that $\kappa$ exists and is independent of $f,u$ due to $\inf_{x\in{\mathbb{R}}^3}\alpha_f(x)\ge \eta/K$, parabolic regularity, and $f\in F(f_0,K,0,\zeta,\eta)$. Let also $Q\ge K $ be such that if ${\mathcal C}:=[0,\kappa)^3$ and $\tilde w\ge 0$ solves \[ \tilde w_t = \Delta \tilde w + \left[\zeta' + Q \chi_{[1/2,1]}(t) \chi_{{\mathcal C}}(x) \right] \tilde w \] on $(\tfrac 12,\infty)\times{\mathbb{R}}^3$ with $\tilde w(\tfrac 12, \cdot)\ge \tfrac\eta{4K}\chi_{{\mathcal C}}(\cdot)$, then $\tilde w(t,\cdot)\ge \chi_{{\mathcal C}}(\cdot)$ for any $t\ge 1$ (which exists because $\zeta'>0$). 
Assume now that $v\in[0,1]$ solves \eqref{2.2}, and that ${\mathcal C}_1',{\mathcal C}_2',\dots$ are all (finitely or infinitely many) disjoint cubes such that ${\mathcal C}_n'$ is a $\kappa{\mathbb{Z}}^d$-translation of ${\mathcal C}$ (i.e., by an integer multiple of $\kappa$ in each coordinate) and $v(x_n')\in( \alpha_f(x_n'), 1-\varepsilon_0]$ for some $x_n'\in \bar {\mathcal C}_n'$. Since \eqref{3.10} applies to $v$ in place of $u(t,\cdot)$, its second claim and \eqref{2.3} show for each $x_0\in{\mathbb{R}}^d$, \begin{equation} \label{3.13b} \sum_{n\ge 1} (1+|x_n'-x_0|)^{-1} \le \kappa^{-4}. \end{equation} Let $T=T_Y>0$, $R\ge T$, and $\tau=\tau_Y\ge T+1$, all to be chosen later (but independent of $\theta,h,f,u$). Also let ${\mathcal C}_1,\dots,{\mathcal C}_N$ be as above but such that $u(T,x)> \alpha_f(x;\zeta')$ for some $x\in {\mathcal C}_n\cap B_{Y(0)}(0)$. Let $t_n\in[0,T)$ be the last time such that $u(t,x)\le \alpha_f(x;\zeta')$ for all $(t,x)\in [0,t_n]\times [{\mathcal C}_n\cap B_{Y(0)}(0)]$, let $I_n:=[t_n,t_n+1]$, and let $x_n\in {\mathcal C}_n\cap B_{Y(0)}(0)$ be any point such that $u(T,x_n)\ge \alpha_f(x_n;\zeta')$ ($(t_n,x_n)$ will be fixed from now on). Then $u_t\ge 0$ and $Z(\tau)\le Y(0)$ show that \begin{equation} \label{3.11a} u(t,x_n)\in [\alpha_f(x_n;\zeta'), 1-\varepsilon_0] \qquad\text{for $n=1,\dots,N$ and $t\in[T,\tau]$.} \end{equation} We now claim that if $\tau$ is large enough (depending only on $T,R$ in addition to $f_0,K,\zeta,\eta$), then we must have \begin{equation} \label{3.13} \sum_{|x_n-x_0|\le 2R+2} (1+|x_n-x_0|)^{-1} < 2 \kappa^{-4} \end{equation} for each $x_0\in{\mathbb{R}}^3$. This holds due to the same argument as in the case $d\le 2$ of this lemma. 
Indeed, if such $\tau$ did not exist, we take a sequence of counter-examples $(f_\tau,u_\tau,x_0^\tau)$ to \eqref{3.13} for $\tau=T+1,T+2,\dots$ and shift each in space by (the negative of) the vector each of whose coordinates is the largest multiple of $\kappa$ smaller than the corresponding coordinate of $x_0^\tau$. Parabolic regularity then shows that there is a subsequence along which these shifted solutions converge locally uniformly to a solution of \eqref{1.1} (with some $f\in F(f_0,K,0,\zeta,\eta)$), whose $t\to\infty$ limit $v\in[0,1]$ satisfies \eqref{2.2}. Moreover, by taking a further subsequence (along which those shifted ${\mathcal C}_1^\tau,\dots,{\mathcal C}_{N_\tau}^\tau$ for which $|x_n^\tau-x_0^\tau|\le 2R+2$ are all the same, and the corresponding shifted $x_n^\tau$ as well as the shifted $x_0^\tau$ converge), one obtains existence of $x_0'\in\bar{\mathcal C}$ and of $\kappa{\mathbb{Z}}^d$-translations ${\mathcal C}_1',\dots,{\mathcal C}_{N'}'\subseteq B_{2R+4}(0)$ of ${\mathcal C}$ and $x_n'\in \bar {\mathcal C}_n'\cap B_{2R+3}(0)$, such that $v(x_n')\in (\alpha_f(x_n'), 1-\varepsilon_0]$ (because \eqref{3.11a} holds for each $(u_\tau,f_\tau,x_n^\tau)$, and $f_\tau(x_n^\tau,\alpha_{f_\tau}(x_n^\tau; \zeta'))=\zeta'\alpha_{f_\tau}(x_n^\tau; \zeta') \ge \zeta \alpha_{f_\tau}(x_n^\tau; \zeta')+(\zeta'-\zeta)\tfrac \eta K$) and \[ \sum_{n=1}^{N'} (1+|x_n'-x_0'|)^{-1} \ge 2\kappa^{-4}. \] This obviously contradicts \eqref{3.13b}, so there must be $\tau\ge T+1$ such that \eqref{3.13} holds. We now reorder the $({\mathcal C}_n,t_n,x_n)$ so that $t_1\le \dots\le t_N$. Define \[ A(t,x) := Q \sum_{n=1}^N \chi_{I_n}(t) \chi_{{\mathcal C}_n}(x) \] and let $w$ solve \begin{equation}\label{3.13a} w_t=\Delta w + [\zeta' + A(t,x)](w-h) \end{equation} on $(0,\infty)\times{\mathbb{R}}^3$, with $w(0,x)= h+\psi(Y(0))^{-1}\psi(|x|)$ (so that $w(0,\cdot)\ge u(0,\cdot)$ by the definition of $Y(0)$). We will now show that $w\ge u$ on $[0,T]\times{\mathbb{R}}^3$.
Since the time-independent function $h+\psi(Y(0))^{-1}\psi(|x|)$ is a sub-solution of \eqref{3.13a}, we have $w(t,x)\ge 1 \ge u(t,x)$ for $(t,x)\in [0,T]\times ({\mathbb{R}}^3\setminus B_{Y(0)}(0))$. Also, $w\ge h$ and \eqref{3.0b} show \[ [\zeta'+A(t,x)](w(t,x)-h) \ge f(x,w(t,x)) \] for $(t,x)\in [0,T]\times (B_{Y(0)}(0)\setminus \bigcup_{n=1}^N {\mathcal C}_n)$, as well as for any $n$ and any $(t,x)\in [0,t_n]\times {\mathcal C}_n$. The same is true for $(t,x)\in I_n\times {\mathcal C}_n$ because $f\in F(f_0,K,\theta,\zeta,\eta)$, $h\le\theta$, and $Q\ge K$. Since $w(0,\cdot)\ge u(0,\cdot)$ and $u\le 1$ solves \eqref{1.1}, the comparison principle yields $w\ge u$ on $[0,t_1+1]\times{\mathbb{R}}^3$. From $Z(\tau)\le Y(0)$, the definition of $t_1\in [0,\tau-1]$, and the first claim in \eqref{3.10} we have $u(t_1+\tfrac 12,\cdot)\ge \frac\eta{2K}\chi_{{\mathcal C}_1}(\cdot)$. Since $w(t_1+\tfrac 12,\cdot)\ge u(t_1+\tfrac 12,\cdot)$ and $h\le\tfrac\eta{4K}$, the function $\tilde w(t,x):=w(t-t_1,x)-h$ satisfies $\tilde w(\tfrac 12,\cdot)\ge \frac\eta{4K}\chi_{{\mathcal C}_1}(\cdot)$. Our choice of $Q$ then shows $w(t,x)\ge 1$ ($\ge u(t,x)$) for $(t,x)\in [t_1+1,T]\times {\mathcal C}_1$, so the comparison principle now yields $w\ge u$ on $[0,t_2+1]\times{\mathbb{R}}^3$. Using the argument from the previous paragraph $N-1$ more times (with $t_2,\dots,t_N$ in place of $t_1$) ultimately indeed gives $w\ge u$ on $[0,T]\times{\mathbb{R}}^3$. It therefore suffices to show \begin{equation} \label{3.14} w(T,\cdot) -h\le \psi(Y(0)-c_YT)^{-1}\psi(|\cdot|) \end{equation} to conclude the proof. This will be achieved by using \eqref{3.13}, for appropriately chosen $T,R$. Let \[ a(t,x):= e^{-2\zeta' t}\psi(Y(0))\psi(|x|)^{-1} (w(t,x)-h), \] so that we have $a(0,x)\equiv 1$ and \[ a_t = \Delta a + \frac {2 x\psi'(|x|)}{|x|\psi(|x|)} \cdot \nabla a + A(t,x) a.
\] Thus \eqref{3.14} will follow if we prove \begin{equation} \label{3.15} \|a(T,\cdot)\|_{L^\infty({\mathbb{R}}^3)} \le \frac{e^{-2\zeta' T}\psi(Y(0))} {\psi(Y(0)-c_YT)}. \end{equation} Let \[ \delta:= 2\zeta'\frac {c_Y-2\sqrt{\zeta'}}{c_Y+2\sqrt{\zeta'}}>0. \] Since $\tfrac d{dr}[\ln\psi(r)]\ge 4\zeta'(c_Y+2\sqrt{\zeta'})^{-1}=(2\zeta'+\delta)c_Y^{-1}$ for $r\ge r_Y$ and we assume $Y(0)-c_YT\ge r_Y$, it follows that $\psi(Y(0))\psi(Y(0)-c_YT)^{-1} \ge e^{(2\zeta'+\delta)T}$. Hence it suffices to prove \begin{equation} \label{3.16} \|a(T,\cdot)\|_{L^\infty({\mathbb{R}}^3)} \le e^{\delta T}. \end{equation} We now choose $R\ge T$ to be such that if $B_t$ is the standard Brownian motion in ${\mathbb{R}}^3$ (defined on some probability space $(\Omega,{\mathcal F},{\mathbb{P}})$), then \begin{equation} \label{3.17} {\mathbb{P}} \left( \sup_{t\in[0,T]} |B_t| \ge R-2\|\psi'\psi^{-1}\|_\infty T \right) \le \frac 13 e^{-QT} \end{equation} (so $R$ depends on $T$ in addition to $f_0,K,\zeta,\eta$). For any $|x|\le Y(0)$, we let $X^x_t$ be the stochastic process given by $X^x_0=x$ and \[ dX^x_t = b(X^x_t)dt + dB_t := \frac {2 X^x_t\psi'(|X^x_t|)}{|X^x_t|\psi(|X^x_t|)} dt + dB_t. \] Then the well-known Feynman-Kac formula gives \begin{equation} \label{3.18} a(T,x)= {\mathbb{E}} \left( e^{\int_0^T A(T-t,X^x_t) dt} \right). \end{equation} We will now show that this is $\le e^{\delta T}$ for $x=0$ (the general case is identical). Denote $X^0_t=X_t$ and note that $|b|\le 2\|\psi'\psi^{-1}\|_\infty$ yields \begin{equation} \label{3.19} {\mathbb{P}} \left( \sup_{t\in[0,T]} |X_t| \ge R \right) \le {\mathbb{P}} \left( \sup_{t\in[0,T]} |B_t| \ge R-2\|\psi'\psi^{-1}\|_\infty T \right) \le \frac 13 e^{-QT}. \end{equation} Since $A\le Q$, this means that the contribution to \eqref{3.18} from those paths which leave $B_{R}(x=0)$ before time $T$ is at most $\tfrac 13 e^{-QT} e^{QT}=\tfrac 13$.
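{\it Remark.} The identity $4\zeta'(c_Y+2\sqrt{\zeta'})^{-1}=(2\zeta'+\delta)c_Y^{-1}$ used above is a direct computation with the definition of $\delta$:
\[
\frac{2\zeta'+\delta}{c_Y} = \frac{2\zeta'}{c_Y}\left(1+\frac {c_Y-2\sqrt{\zeta'}}{c_Y+2\sqrt{\zeta'}}\right) = \frac{2\zeta'}{c_Y}\cdot\frac{2c_Y}{c_Y+2\sqrt{\zeta'}} = \frac{4\zeta'}{c_Y+2\sqrt{\zeta'}}.
\]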
Next reorder again the $({\mathcal C}_n,t_n,x_n)$, now so that ${\mathcal C}_n\cap B_R(0)\neq \emptyset$ precisely when $n\le N'$ (for some $N'$). Since \eqref{3.13} holds for any $x_0\in {\mathbb{R}}^3$, we have \begin{equation} \label{3.20} \sum_{n=1}^{N'} (1+|x_n-x_0|)^{-1} < 2 \kappa^{-4} \end{equation} for any $x_0\in B_{R+1}(0)$. Then $x_n\in B_{R+1}(0)$ implies that \eqref{3.20} holds for any $x_0\in{\mathbb{R}}^3$. Consider now the paths which stay in $B_{R}(0)$ until time $T$. These have non-zero $A(T-t,X_t)$ only at those times $t\in[0,T]$ for which $X_t\in {\mathcal C}_n$ for some $n\le N'$, and also $T-t\in I_n$. Since $|I_n|=1$, the contribution to \eqref{3.18} from those of these paths which hit fewer than $\delta (2Q)^{-1}T$ of the cubes ${\mathcal C}_1,\dots,{\mathcal C}_{N'}$ before time $T$ is at most $\exp(Q\delta (2Q)^{-1} T)\le \tfrac 13 e^{\delta T}$, provided we choose $T\ge \delta^{-1}\ln 9$. Finally, the contribution to \eqref{3.18} from those paths which stay in $B_{R}(0)$ until time $T$ and hit at least $\delta (2Q)^{-1}T$ of the cubes ${\mathcal C}_1,\dots,{\mathcal C}_{N'}$ before time $T$ is at most $e^{-2QT}e^{QT}\le \tfrac 13$ by $A\le Q$ and Lemma \ref{L.3.4} below, provided we let $p:=\delta (2Q)^{-1}$, $P:=2 \max \{\kappa^{-4},Q, \|\psi'/\psi\|_\infty\}$, and choose $T$ large enough (which then depends on $f_0,K,\zeta,\eta$). Thus $a(T,x=0)\le \tfrac 23 + \tfrac 13 e^{\delta T}\le e^{\delta T}$, and the general case $|x|\le Y(0)$ is identical. Hence \eqref{3.16} follows for any such $T$ and the proof will be finished once we prove the following lemma. \begin{lemma} \label{L.3.4} If $p,P>0$, then for any large enough $T>0$ (depending only on $d,p,P$) the following holds.
If $N\le\infty$, the points $x_n\in{\mathbb{R}}^d$ satisfy \begin{equation} \label{3.21} \sum_{n=1}^{N} (1+|x_n-x|)^{-1} \le P \end{equation} for any $x\in{\mathbb{R}}^d$, and the process $X_t$ satisfies $X_0=0$ and $dX_t=b(X_t)dt+dB_t$ with $\|b\|_\infty\le P$ and $B_t$ the standard Brownian motion in ${\mathbb{R}}^d$, then \begin{equation} \label{3.22} {\mathbb{P}} \left( X_t \text{ hits at least $pT$ of the balls $B_1(x_n)$ before time $T$} \right) \le e^{-PT}. \end{equation} \end{lemma} {\it Remark.} The point here is that if $X_t$ hits at least $pT$ of these balls, the sum of the $\lceil pT \rceil$ displacements it undergoes in-between hits will be bounded below by a quantity super-linear in $T$ because of \eqref{3.21}. The same will then hold for $B_t$ because $b$ is bounded, but the probability of this decreases super-exponentially in $T$ due to the nature of the Gaussian. \begin{proof} Define the stopping times $t_0:=0$, \[ t_j' := \inf \{ t\ge 0 \,|\, X_s \text{ hits at least $j$ of the balls $B_1(x_n)$ before time $t$} \}, \] and $t_j:=\min\{t_j',T\}$ for $j= 1,\dots,\lceil pT \rceil$. Let $h_j:= \sum_{k=1}^j | X_{t_k}-X_{t_{k-1}} |$ and let \[ j_t:=\max \left\{ j\le \lceil pT \rceil \,\big|\, t_j<t \right\} \] be the smaller of $\lceil pT \rceil$ and the number of the balls hit by $X_s$ before time $t\in (0, T]$ (if $t>T$, then $j_t=\lceil pT \rceil$). Of course, these are all measurable functions of $\omega\in\Omega$. Finally, let $\Omega':=\{\omega\in\Omega \,|\, j_T(\omega)=\lceil pT \rceil \}$ be the set of those $\omega$ for which at least $\lceil pT \rceil$ balls are hit by $X_s(\omega)$ before time $T$. Thus we need to show ${\mathbb{P}}(\Omega') \le e^{-PT}$. 
We now claim that there is $\gamma(T)\to\infty$ as $T\to\infty$ (also depending on $p,P$ but nothing else) such that (cf.~the Remark above) \begin{equation} \label{3.24} h_{\lceil pT \rceil} \ge \gamma(T)T \qquad \text{for any $\omega\in\Omega'$.} \end{equation} Indeed, let $\omega\in\Omega'$ and $H=H(\omega):=h_{\lceil pT \rceil}(\omega)$. For any $x\in{\mathbb{R}}^d$ we have by \eqref{3.21}, \[ \sum_{j=1}^{ \lceil pT \rceil} \left( 2+\left| X_{t_j}-x \right| \right)^{-1} \le P. \] If we take $x:=rX_{t_k}+(1-r)X_{t_{k-1}}$ for some $k=1,\dots,\lceil pT \rceil$ and $r\in[0,1)$, then this gives \begin{equation} \label{3.23} \sum_{j=1}^{\lceil pT \rceil} \left( 2+ |rh_k+(1-r)h_{k-1}-h_j| \right)^{-1} \le P. \end{equation} For each $q\in[0,H)$ we let $(r_q,k_q)\in[0,1)\times\{1,\dots,\lceil pT \rceil\}$ be the unique couple such that $q=r_qh_{k_q}+(1-r_q)h_{k_q-1}$. Integrating \eqref{3.23} over $q$ with $(r,k)=(r_q,k_q)$ yields \[ \sum_{j=1}^{\lceil pT \rceil} \int_0^{H} (2+|q-h_j|)^{-1} dq \le P H. \] Since $h_j\in[0,H]$, we obtain \[ \lceil pT \rceil \ln \frac{2+H}2 \le PH, \] yielding \eqref{3.24} with $\gamma(T):=\tfrac pP\ln T$ for $T\ge e^{2P/p}$. Then $|b|\le P$ implies also \begin{equation} \label{3.25} \sum_{k=1}^{\lceil pT \rceil} | B_{t_k}-B_{t_{k-1}} |\ge (\gamma(T)-P)T \qquad \text{for any $\omega\in\Omega'$.} \end{equation} Let now $\{e^{(1)},\dots,e^{(d)}\}$ be the standard basis in ${\mathbb{R}}^d$ and let \[ E:=\{ e\in{\mathbb{R}}^d \,|\, e\cdot e^{(l)}\in\{1,-1\} \text{ for each } l=1,\dots,d \} \] be the set of the $2^d$ reflections of $(1,\dots,1)$ across subspaces generated by all $2^d$ subsets of the standard basis. Notice that $E$ is a group if endowed with coordinate-wise multiplication. 
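For the reader's convenience, the integral step above can be made fully explicit; the following short computation (a verification sketch, using only $h_j\in[0,H]$) is behind the passage to the logarithmic inequality.

```latex
% Splitting each integral at $q=h_j$ gives
\[
\int_0^{H} (2+|q-h_j|)^{-1}\, dq
 = \ln\frac{2+h_j}{2} + \ln\frac{2+H-h_j}{2}
 = \ln\frac{(2+h_j)(2+H-h_j)}{4}\,,
\]
% and the product $(2+h_j)(2+H-h_j)$ is concave in $h_j$, hence minimized over
% $[0,H]$ at the endpoints $h_j\in\{0,H\}$, where it equals $2(2+H)$. Therefore
\[
\int_0^{H} (2+|q-h_j|)^{-1}\, dq \ge \ln\frac{2+H}{2}\,,
\]
% and summing over $j=1,\dots,\lceil pT\rceil$ yields
% $\lceil pT \rceil \ln\tfrac{2+H}2 \le PH$ above.
```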
For any $\hat e=(e_0,\dots,e_{\lceil pT \rceil})\in E^{\lceil pT \rceil +1}$, define \[ B_t^{\hat e} := \sum_{j=1}^{j_t} \left( B_{t_j} - B_{t_{j-1}} \right) \prod_{k=0}^{j-1} e_k + \left( B_{t} - B_{t_{j_t}} \right) \prod_{k=0}^{j_t} e_k, \] with all multiplications coordinate-wise. That is, $B_t^{\hat e}$ is obtained from $B_t$ after $\lceil pT \rceil +1$ reflections corresponding to $e_0,\dots,e_{\lceil pT \rceil}$ at stopping times $t_0=0,t_1,\dots,t_{\lceil pT \rceil}$. (Note that what gets reflected according to $e_j$ is the displacement $B_t-B_{t_j}$ for any $t>t_j$. So in particular, $B_t-B_{t_{j_t}}$ gets reflected $j_t+1$ times --- according to $e_0,e_1,\dots,e_{j_t}$.) Since $t_j$ are stopping times, each $B_t^{\hat e}$ is also a standard Brownian motion. For any $\omega\in\Omega$, there is $\hat e\in E^{\lceil pT \rceil +1}$ such that for $j=1,\dots,\lceil pT \rceil$ (and with $\cdot$ the usual dot product in ${\mathbb{R}}^d$), \[ \left[ ( B_{t_j} - B_{t_{j-1}})\prod_{k=0}^{j-1} e_k \right] \cdot (1,\dots,1)\ge |B_{t_j} - B_{t_{j-1}}|. \] Indeed, one only needs to choose $e_j$ successively so that $( B_{t_j} - B_{t_{j-1}})\prod_{k=0}^{j-1} e_k$ has all $d$ coordinates non-negative. So by \eqref{3.25}, for each $\omega\in\Omega'$, there is $\hat e\in E^{\lceil pT \rceil +1}$ such that \[ B_{t_{\lceil pT \rceil}}^{\hat e} \cdot (1,\dots,1) \ge (\gamma(T)-P)T. \] Since $t_{\lceil pT \rceil}\le T$, we obtain \[ {\mathbb{P}}(\Omega') \le 2^{d(\lceil pT \rceil+1)} {\mathbb{P}} \left( B_t\cdot (1,\dots,1) \ge (\gamma(T)-P)T \text{ for some } t\le T \right). \] Given any $C>0$, the last probability is less than $e^{-CT}$ for all large enough $T$ because $\lim_{T\to\infty} \gamma(T) =\infty$. Taking $C:=2dp\ln 2+P$ yields ${\mathbb{P}}(\Omega') \le e^{-PT}$ for all large enough $T$, finishing the proof of Lemma \ref{L.3.4}.
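The bookkeeping behind the last two sentences can be checked directly; the following sketch uses the standard reflection principle and Gaussian tail bound for Brownian motion.

```latex
% Since $B_t\cdot(1,\dots,1)=\sqrt d\, W_t$ for a one-dimensional standard
% Brownian motion $W_t$, the reflection principle and the Gaussian tail bound give,
% with $a:=\gamma(T)-P$,
\[
{\mathbb{P}} \left( B_t\cdot (1,\dots,1) \ge aT \text{ for some } t\le T \right)
 = 2\,{\mathbb{P}} \left( W_T \ge \frac{aT}{\sqrt d} \right)
 \le 2\, e^{-a^2T/2d},
\]
% which is below $e^{-CT}$ once $a^2\ge 2dC+1$ and $T\ge 2d\ln 2$; this holds for
% large $T$ because $a=\gamma(T)-P\to\infty$. Moreover, with $C=2dp\ln 2+P$ and
% $\lceil pT\rceil\le pT+1$,
\[
2^{d(\lceil pT \rceil+1)}\, e^{-CT}
 \le 2^{2d}\, e^{(dp\ln 2)T}\, e^{-(2dp\ln 2+P)T}
 = 2^{2d}\, e^{-(dp\ln 2+P)T}
 \le e^{-PT}
\]
% for $T\ge 2/p$, as claimed.
```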
\end{proof} \section{Proofs of Theorems \ref{T.1.11} and \ref{T.1.12}} \label{S11} These proofs follow the same lines as those of Theorems \ref{T.1.2}(i) and \ref{T.1.3}. The only differences in the proof of Theorem \ref{T.1.11} will be in the proofs of Lemma \ref{L.2.1} and Lemma \ref{L.3.2}, where \eqref{1.21} will play a central role. In what follows, let us consider the setting of Theorem \ref{T.1.11} (in particular, ${\mathcal F}$ is fixed) but for now assume only $(f,u^+)\in{\mathcal F}$ and $\theta\ge 0$. All constants will now depend on ${\mathcal F}$ instead of $f_0,K,\zeta,\eta$ (but not on $\theta$ from (H) unless explicitly noted). Before we start, let us note that since $\inf_{x\in{\mathbb{R}}^d} u^+(x) >\theta_0$ and $f(x,u)\ge f_0(u)>0$ for $u\in(\theta_0,\theta_1)$, it follows from elliptic regularity and the maximum principle that in fact $\inf_{x\in{\mathbb{R}}^d} u^+(x) \ge \theta_1$. \begin{lemma} \label{L.11.1} There is $\varepsilon_0=\varepsilon_0({\mathcal F})> 0$ such that for each $c<c_0$ and $\varepsilon>0$ there is $\tau=\tau({\mathcal F},c,\varepsilon)$ such that the following holds. If $u\in[0,u^+]$ solves \eqref{1.1}, \eqref{1.2} with $(f,u^+)\in{\mathcal F}$, and $u(t_1,x)\ge u^+(x)-\varepsilon_0$ for some $(t_1,x)\in [t_0+1,\infty)\times{\mathbb{R}}^d$, then for each $t\ge t_1+\tau$, \begin{equation} \label{11.1} \sup_{|y-x|\le c(t-t_1)} \left[ u^+(y)-u(t,y) \right] \le \varepsilon. \end{equation} The same result holds if the hypothesis $u(t_1,x)\ge u^+(x)-\varepsilon_0$ is replaced by \begin{equation}\label{11.1a} u(t_1,\cdot) \ge \frac {\theta_1+\theta_0}2 \chi_{B_R(x)}(\cdot) \end{equation} for some $(t_1,x)\in [t_0,\infty)\times{\mathbb{R}}^d$ and a large enough $R=R(f_0)>0$. \end{lemma} \begin{proof} Without loss we can assume $t_0=0$ and $x=0$.
As in the proof of Lemma \ref{L.2.1}, one shows that $u(t_1,x)\ge u^+(x)-\varepsilon_0$ (with $t_1\ge 1$ and $u^+\ge\theta_1$) yields \eqref{11.1a}, provided $\varepsilon_0>0$ is sufficiently small. So we only need to prove the second claim. Let us therefore assume \eqref{11.1a}. The result from \cite{AW} used in Lemma \ref{L.2.1} also applies to bistable $f_0$, and together with the comparison principle gives for any $c'<c_0$ and $\varepsilon'>0$ existence of $\tau'$ such that \begin{equation} \label{11.2} \inf_{|y|\le c't} u(t,y) \ge \theta_1-\varepsilon' \end{equation} when $t\ge \tau'$. To upgrade this to \eqref{11.1}, we will use \eqref{1.21} along with $\sup_{x\in{\mathbb{R}}^d}\alpha_f(x)<\theta_1$. The latter holds because otherwise $f_0(u)\le\zeta u$ for all $u\in[0,\theta_1]$, which contradicts $c_0> 2\sqrt\zeta$. If \eqref{11.1} does not hold for some $c<c_0$ and $\varepsilon>0$, we let $u_n$ be a counterexample with $t_n$ ($\to\infty$ as $n\to\infty$) and $|y_n|\le ct_n$, corresponding to some $(f_n,u^+_n)\in{\mathcal F}$. We can assume $(u_n)_t\ge 0$, because \eqref{11.2} with $c'\in(c,c_0)$ and a small enough $\varepsilon'\in(0,\varepsilon)$ lets us find a time-increasing solution $w_n$ of \eqref{1.1} between 0 and $u_n$, defined for $t\ge t'$ with some $t'\ge \tau'$, which still spreads with speed $\ge c'$ in the sense of \eqref{11.2}. Indeed, similarly to \eqref{4.1}, we let $w_n(t',x):=W(|x|)$, where \begin{equation}\label{11.3} W(s):= \begin{cases} \theta_1-\varepsilon' & s\le R', \\ U(s-R') & s\in(R',R'+s_0], \\ 0 & s>R'+s_0 \end{cases} \end{equation} and $U,s_0$ are obtained as for \eqref{4.1} but with the current $f_0$ and $U(0)=\theta_1-\varepsilon'$. Here we need $\varepsilon'>0$ to be small enough (such that $\int_0^{\theta_1-\varepsilon'} f_0(u) du>0$) and $R'$ larger than the right-hand side of \eqref{4.1a} (so that $W(|x|)$ satisfies \eqref{4.2}).
So each $w_n$ is time-increasing, and also satisfies \eqref{11.2} for large $t$ if $\varepsilon'\le\tfrac 12(\theta_1-\theta_0)$ and $R'\ge R$, with $R$ from \eqref{11.1a}. Then we only need $t'\ge \max\{\tau',(R'+s_0)/c'\}$ to get $w_n\le u_n$ for all $n$ (note that $\varepsilon',c',\tau',U,s_0,R',t'$ are independent of $n$). Let therefore $(u_n)_t\ge 0$ be such counterexamples, with $t_n\to\infty$ and $|y_n|\le ct_n$ such that $u_n(t_n,y_n)\le u^+_n(y_n)-\varepsilon$. After shifting $u_n$ by $(-\tfrac{c'+c}{2c'}t_n,-y_n)$ (and $f_n,u^+_n$ by $y_n$) and passing to a subsequence, we recover an entire time-increasing solution $u$ of \eqref{1.1} with new $(f,u^+)\in{\mathcal F}$, such that $u\in[\theta_1,u^+]$ (due to \eqref{11.2} for each $u_n$ and all $\varepsilon'>0$) and $\lim_{t\to\infty} u(t,0)\le u^+(0)-\varepsilon$. As before, the function $p(x):=\lim_{t\to\infty} u(t,x)$ is an equilibrium solution of \eqref{1.1} with $p(0)\le u^+(0)-\varepsilon$. Since $p\in[\theta_1,u^+]$, the strong maximum principle yields $p <u^+$, and we also have $p\ge\theta_1 >\alpha_f$ on ${\mathbb{R}}^d$. So the sum in \eqref{1.21} equals $\infty$, contradicting $(f,u^+)\in{\mathcal F}$. \end{proof} \begin{proof}[Proof of Theorem \ref{T.1.11}] This is essentially identical to the proofs of Theorem \ref{T.1.2}(i) and Theorem~\ref{T.1.3} for $d=3$, but with $u\in[0,u^+]$ instead of $u\in[0,1]$ and using Lemma \ref{L.11.1} instead of Lemma \ref{L.2.1}. We will again only assume $(f,u^+)\in{\mathcal F}$ and $\theta\ge 0$ in most of the proof. Lemmas \ref{L.2.1a} and \ref{L.8.1} are unchanged, with the sets $S_{t_0,\varepsilon,\ell}, S_{t_0,L}, S_L$ containing triples $(f,u^+,u)$ and restricted to $(f,u^+)\in{\mathcal F}$ and $u\in[0,u^+]$, and with $1-\varepsilon$ replaced by $u^+(x)-\varepsilon$ in \eqref{1.7b}. In Section \ref{S3} we take \[ Z_y(t):= \inf_{u(t,x)\ge u^+(x)-\varepsilon_0} |x-y| \] and keep $Y^{h}_y(t)$ as before because $u^+\le 1$. 
Lemma \ref{L.3.1} is unchanged and Lemma \ref{L.3.2} is proved as in the case $d=3$. We cannot use Lemma \ref{L.2.2} here but \eqref{1.21} will do the job. Indeed, we let $\kappa\in(0,d^{-1/2})$ be such that if $u(t,\tilde x)\ge \eta$ for some $(t,\tilde x)\in [\tfrac 12,\infty)\times{\mathbb{R}}^d$, then \begin{equation} \label{11.4} u(t,x)\ge \frac\eta{2K} \qquad \text{for any $x\in B_{\sqrt d \kappa}(\tilde x)$,} \end{equation} which replaces \eqref{3.10}. We then still conclude \eqref{3.13b} using \eqref{1.21}, although with the right-hand side being $\lceil \kappa^{-1} \rceil^d\eta^{-1}$ instead of $\kappa^{-4}$. The rest of the proof is unchanged, as is Theorem \ref{T.3.3} and Corollary \ref{C.3.3a}, except for $1-\varepsilon$ being replaced by $u^+(x)-\varepsilon$ in \eqref{3.9g}. Section \ref{S4} is also unchanged, using only the arguments in Case $d=3$. This proves Theorem \ref{T.1.11}(ii) for $u$. The proofs of the remarks at the beginning of Section \ref{S4a} remain the same, with $W$ from \eqref{11.3} instead of \eqref{4.1} and $R':=R_2$. Theorem \ref{T.5.1} is also unchanged (note that here we need $\theta>0$ because we employ Theorem \ref{T.1.5}(ii)) and so is the proof of Theorem \ref{T.1.3}(ii). This proves Theorem \ref{T.1.11}(ii) for $v$. Finally, since we have \eqref{1.24}, the argument after Theorem \ref{T.5.1} which proves Theorem \ref{T.1.2}(i) also remains the same, with each ``$1-$'' replaced by ``$u^+(x)-$''.
(i) To prove this we will need to construct an appropriate (non-positive) sub-solution, in addition to the super-solution constructed previously. We will use here the remark at the end of Section \ref{S4a}. Let us first assume \eqref{1.23}, and let $\gamma:=\inf_{x\in{\mathbb{R}}^d} u_0(x)\le 0$ (then $\gamma=\inf_{(t,x)\in[t_0,\infty)\times{\mathbb{R}}^d} u(t,x)$ by \eqref{1.24}). Without loss, we also assume $t_0=0$, $e=e_1$, and $\varepsilon_2\le c_\gamma/4$. The latter can be done because \eqref{1.23} continues to hold if we replace $\varepsilon_2$ by $\min\{\varepsilon_2,c_\gamma/4\}$ and $R_2$ by $R_2+4c_\gamma^{-1}\ln_+\|u_0\|_\infty$. First, we claim that \begin{equation} \label{11.6} \limsup_{t\to \infty} \sup_{x\in{\mathbb{R}}^d} [u(t,x)-u^+(x)]\le 0\qquad\text{and}\qquad \liminf_{t\to \infty} \inf_{x\in{\mathbb{R}}^d} u(t,x)\ge 0, \end{equation} where the rate of these decays depends on the same parameters as $T_\varepsilon$ in (C) does, except for $\varepsilon$ (by ``rate'' we mean a function $\tilde T:(0,\infty)\to (0,\infty)$ such that $\sup_{x\in{\mathbb{R}}^d} [u(t,x)-u^+(x)]\le\delta$ and $\inf_{x\in{\mathbb{R}}^d} u(t,x)\ge-\delta$ for all $t\ge \tilde T(\delta)$). For the first claim in \eqref{11.6}, let $v(t,x):= u(t,x)-u^+(x)$. Then $v_t\le \Delta v+g(v)$, with \[ g(s):=\sup_{(f,u^+,x)\in{\mathcal F}\times{\mathbb{R}}^d} [f(x,u^+(x)+s)-f(x,u^+(x))]. \] We have $g(s)<0$ for all $s>0$ because otherwise translation invariance of ${\mathcal F}$ and its closure under locally uniform convergence would yield $(f,u^+)\in{\mathcal F}$ with $f(x,u^+(x)+s)=f(x,u^+(x))$ for some $s>0$, contradicting the extra hypothesis in the case of \eqref{1.23}. Obviously $\sup_{x\in{\mathbb{R}}^d} v(t,x)\le \kappa(t)$, where $\kappa(0):=\sup_{x\in{\mathbb{R}}^d} v(0,x)<\infty$ and $\kappa'=g(\kappa)$. Thus $\lim_{t\to\infty} \kappa(t)=0$, and the first claim in \eqref{11.6} follows.
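The last step is a standard ODE comparison; a minimal sketch (using that $g$ is a supremum of $K$-Lipschitz functions, hence $K$-Lipschitz, with $g(0)=0$ and $g<0$ on $(0,\infty)$):

```latex
% If $\kappa(0)\le 0$, then $\kappa\le 0$ for all times and the claim is
% immediate. If $\kappa(0)>0$ and $\kappa(t)\ge\delta>0$ for all $t\ge 0$, then
\[
\kappa'(t)=g(\kappa(t))\le \max_{s\in[\delta,\kappa(0)]} g(s) < 0
\qquad \text{for all } t\ge 0,
\]
% so $\kappa$ would drop below $\delta$ in finite time --- a contradiction.
% Since $\delta>0$ was arbitrary, $\lim_{t\to\infty}\kappa(t)=0$.
```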
(If we only have $g\le 0$ but assume $\limsup_{x_1\to-\infty} v(0,x)\le 0$, the claim is immediate from this and \eqref{1.23}.) We now turn to the second claim in \eqref{11.6}. The result from \cite{AW} (see the proof of Lemma~\ref{L.11.1}) for \eqref{1.1} with $u_0\ge (\theta_0+\varepsilon_1+|\gamma|)\chi_{\{x\,|\,x_1<R_1\}}-|\gamma|$ shows \begin{equation} \label{11.7} \inf_{x_1\le R_1+c't} u(t,x)\ge \theta_1-\varepsilon' \end{equation} for any $c'<c_\gamma$, $\varepsilon'>0$, and $t\ge \tau'$ (with $\tau'$ depending only on $f_0,\gamma,\varepsilon_1,c',\varepsilon'$). The comparison principle and \eqref{1.24} yield $u(t,x)\ge -e^{-\varepsilon_2(x_1-\varepsilon_2t-R_2)}$ because the latter solves the heat equation. But this, \eqref{11.7} with $c':=c_\gamma/2$ and $\varepsilon':= \theta_1$, and $\varepsilon_2\le c_\gamma/4$ show $\inf_{x\in{\mathbb{R}}^d} u(t,x)\ge -e^{-\varepsilon_2(c_\gamma t/4+R_1-R_2)}$ for $t\ge \tau'$. The second claim in \eqref{11.6} follows. \eqref{11.6} shows that for any $\varepsilon>0$ and large enough $t$, the sets $\Omega_{u,\varepsilon}(t)$ and $\Omega_{u,1-\varepsilon}(t)$ from \eqref{1.20a} and \eqref{1.20b} are the same as those in Definition \ref{D.1.4}. Hence we will use \eqref{1.20a} and \eqref{1.20b}. We next claim that because of \eqref{11.6} and the parabolic Harnack inequality, it suffices to prove the result with $L_{u,\varepsilon}(t)$ from \eqref{1.3} instead of \eqref{1.3b}. First, there is $(K,\|u\|_\infty)$-dependent $M\ge 1$ such that $\sup_{t\ge t_0+1} \|u_t\|_\infty\le M$ for any solution of \eqref{1.1} on $(t_0,\infty)\times{\mathbb{R}}^d$ with $f$ Lipschitz with constant $K$ and satisfying $f(\cdot,0)\equiv 0$. Given any $\varepsilon\in(0,\tfrac 12)$, consider some small $\varepsilon'>0$ and let $v:=\varepsilon'+u^+-u$ and $T'<\infty$ be such that $\inf_{x\in{\mathbb{R}}^d} v(t,x)\ge 0$ for all $t\ge T'$ (which exists by \eqref{11.6}). 
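The claim above that the comparison function $-e^{-\varepsilon_2(x_1-\varepsilon_2t-R_2)}$ solves the heat equation is a one-line computation, recorded here for completeness.

```latex
% With $\phi(t,x):=-e^{-\varepsilon_2(x_1-\varepsilon_2 t-R_2)}$ one has
\[
\phi_t = -\varepsilon_2^2\, e^{-\varepsilon_2(x_1-\varepsilon_2 t-R_2)}
\qquad\text{and}\qquad
\Delta\phi = \phi_{x_1x_1}
 = -\varepsilon_2^2\, e^{-\varepsilon_2(x_1-\varepsilon_2 t-R_2)},
\]
% so $\phi_t=\Delta\phi$, and the comparison principle together with
% \eqref{1.24} gives $u\ge\phi$, as stated above.
```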
Then we have $v_t- \Delta v - \lambda(t,x) v\ge 0$ for $t\ge T'$, with \[ \lambda(t,x):= v(t,x)^{-1}\min\{f(x,u^+(x))-f(x,u(t,x)),0\} \] which satisfies $\lambda(t,x)\in [-K,0]$ due to \eqref{1.24}. So by the Harnack inequality, there is $C\ge 1$ (depending on $\varepsilon, K$) such that if $v(t,x)\le 2\varepsilon'$ for $(t,x)\in[T'+1,\infty)\times{\mathbb{R}}^d$, then $\sup_{|y-x|\le 1/\varepsilon}v(t-\tfrac\varepsilon{2M},y)\le C\varepsilon'$. If we now let $\varepsilon':=\tfrac \varepsilon{2C}$, then this means $u(t,y)\ge u^+(y)- \varepsilon$ for all $y\in B_{1/\varepsilon}(x)$ whenever $u(t,x)\ge u^+(x)-\varepsilon'$ and $t\ge T'+1$. Therefore $\ell_{\varepsilon'}$ for $L_{u,\varepsilon'}(t)$ from \eqref{1.3} works as $\ell_\varepsilon$ for $L_{u,\varepsilon}(t)$ from \eqref{1.3b}, provided we can obtain (C) for the former. So we can consider $L_{u,\varepsilon}(t)$ from \eqref{1.3}, with $\Omega_{u,\varepsilon}(t)$ and $\Omega_{u,1-\varepsilon}(t)$ from \eqref{1.20a} and \eqref{1.20b}, which is what was done in the proofs of Theorems \ref{T.1.2}(i) and \ref{T.1.11}(i). Next, notice that \eqref{11.7} and $u\ge \gamma$ can be upgraded to \begin{equation} \label{11.8} \sup_{x_1\le R_1+ct} [u^+(x)-u(t,x)]\le \varepsilon \end{equation} for any $c<c_\gamma$, $\varepsilon>0$, and $t\ge \tilde\tau$ (with $\tilde\tau$ depending only on ${\mathcal F},c,\varepsilon,\gamma,\varepsilon_1$). Indeed, this is done using \eqref{11.7} in the same way \eqref{11.2} is used to prove \eqref{11.1}, but with $U(s_0)=-\gamma$ in the definition of $W$ (we still have $\int_{-\gamma}^{\theta_1-\varepsilon'} f_0(u) du>0$). In fact, \eqref{11.7} and then \eqref{11.8} hold for any $c<c_0$ because of the second claim in \eqref{11.6} and $\lim_{\gamma\to 0^-} c_\gamma=c_0$. Let us assume, without loss, that $\theta>0$ is small enough so that $\theta\le \tfrac12(\theta_1-\theta_0)$ and $\int_{0}^{\theta_1-\theta} f_0(u) du>0$. 
We now use \eqref{11.8} with $c:=c_\gamma/2$, $\varepsilon:=\theta$, and the corresponding $\tilde\tau$, together with $u(t,x)\ge -e^{-\varepsilon_2(x_1-\varepsilon_2t-R_2)}$ (shown above) and $\varepsilon_2\le c_\gamma/4$, to obtain for $t\ge\tilde\tau$, \[ u(t,x)\ge (u^+(x)-\theta)\chi_{\{x\,|\, x_1\le R_1+c_\gamma t/2\}}(x) -e^{-\varepsilon_2(x_1-R_2-c_\gamma t/4)} \chi_{\{x\,|\, x_1> R_1+c_\gamma t/2\}}(x) . \] Of course, \eqref{1.23} and $f(u)\le Ku$ for $u\ge 0$ also give \[ u(t,x)\le e^{(\varepsilon_2^2+K) t-\varepsilon_2(x_1-R_2)}. \] Now consider $W$ from \eqref{11.3}, with $\varepsilon':=\theta$ and $R':=R_2$. Consider also $w$ solving \eqref{1.1} with $w(0,x):=W(x_1)$. As in the remark at the end of Section \ref{S4a}, we obtain \begin{equation}\label{5.8b} m:=\inf \left\{ w_t(t,x) \,\bigg|\, (f,u^+)\in {\mathcal F}_\theta, \, t\ge 0, \text{ and } w(t,x)\in \left[ \frac\theta 2, u^+(x)- \frac\theta 2 \right] \right\}>0. \end{equation} With $s_0$ from \eqref{11.3}, we let \[ r:=R_2+s_0- \frac 1{\varepsilon_2} \log \max \left\{ \frac K {\varepsilon_2^2 m} , \frac 2\theta \right\}, \] and shift $u$ by $(-T,-R)$, and $f,u^+$ by $-R$ in space, where \[ T:=\max \left\{ \tilde\tau, 4{c_\gamma}^{-1} \left( 2R_2-R_1+s_0-r \right) \right\}, \] \[ R:= \frac {c_\gamma T}4 + R_2-r. \] Since $\tfrac 12 c_\gamma T\ge R_2-R_1+s_0+R $, the above estimates on the original $u(t,x)$ for $t\ge T$ ($\ge \tilde\tau$) now give for the shifted $u,u^+$ and $t\ge 0$, \begin{equation}\label{11.10} u(t,x)\ge (u^+(x)-\theta)\chi_{\{x\,|\, x_1\le R_2+s_0+c_\gamma t/2\}}(x) -e^{-\varepsilon_2(x_1-r-c_\gamma t/4)} \chi_{\{x\,|\, x_1> R_2+s_0+c_\gamma t/2\}}(x) , \end{equation} \begin{equation}\label{11.11} u(0,x)\le e^{-\varepsilon_2(x_1-R_2+R-\varepsilon_2T-KT/\varepsilon_2)}. 
\end{equation} The crucial point here is that $u(t,x)\ge u^+(x)-\theta$ when \begin{equation}\label{11.9} x_1\le \frac{c_\gamma t}2 + \frac 1{\varepsilon_2} \log \max \left\{ \frac K {\varepsilon_2^2 m} , \frac 2\theta \right\} +r . \end{equation} Hence $v$ from \eqref{5.9a}, which by the argument from the remark at the end of Section \ref{S4a} is a subsolution of \eqref{1.1} on the set of $(t,x)\in(0,\infty)\times{\mathbb{R}}^d$ such that either the opposite of \eqref{11.9} holds or $v(t,x)\ge u^+(x)-\theta$, will stay below $u$ as long as $v(0,\cdot)\le u(0,\cdot)$. But this holds due to \eqref{11.10} for $t=0$ because $w(0,\cdot)\le \theta_1-\theta\le u^+(\cdot)-\theta$ and $w(0,\cdot)$ vanishes for $x_1\ge R_2+s_0$. Thus \eqref{5.9a} shows that the first inequality in \eqref{5.2} holds with $\tau=1$ and \[ \nu(t):= \max \left\{ \sup_{x_1\le r+c_\gamma t/2} [u^+(x)-u(t, x)],\, e^{(\varepsilon_2^2 -c_\gamma \varepsilon_2/2) t} \right\} \] because $w\le u^+$ and $w_t\ge 0$. Moreover, $\lim_{t\to\infty}\nu(t)=0$ due to $0<\varepsilon_2<c_\gamma/2$ and \eqref{11.8} for some $c>c_\gamma/2$ and any $\varepsilon>0$. On the other hand, as in Section \ref{S4a}, we also have a super-solution of \eqref{1.1} on $(0,\infty)\times{\mathbb{R}}^d$ from \eqref{5.9b}, with some large $\tau$ and $R_2$ replaced by $R_2-R+\varepsilon_2T+KT/\varepsilon_2$. This then stays above $u$ due to \eqref{11.11}. As in Section \ref{S4a}, we obtain the second inequality in \eqref{5.2} with some $\nu(t)\to 0$ as $t\to\infty$. The (H') version of Theorem \ref{T.5.1} now finishes the proof. The proof in the case \eqref{1.22} is similar, with $x_1$ replaced by $|x-x_0|$ and sufficiently large $R_1$ to guarantee \eqref{11.7} with $x_1$ replaced by $|x-x_0|$. Notice that the first claim in \eqref{11.6} follows from \eqref{1.22}, even though now we only have $g\le 0$. (ii) Similarly to (i), the extra hypotheses in (ii) imply both claims in \eqref{11.6}. 
So again we can consider $L_{u,\varepsilon}(t)$ from \eqref{1.3}, with $\Omega_{u,\varepsilon}(t)$ and $\Omega_{u,1-\varepsilon}(t)$ from \eqref{1.20a} and \eqref{1.20b}. Moreover, \eqref{3.6a} shows $u_t\ge 0$, so we must have $u\le u^+$. Assume again $t_0=0$ without loss and notice that the hypotheses continue to hold if we replace $u_0$ by $u(t,\cdot)$ for any $t\ge 0$. Indeed, for all small enough $\varepsilon>0$ and all $y\in{\mathbb{R}}^d$ we have $Z_y(0)\le Y^{\varepsilon/2}_y(0)+L_{u,\varepsilon/2,1-\varepsilon_0}(0)$ as in the case $d=3$ of the proof of Theorem \ref{T.1.3}(i). Since $u_t\ge 0$, the (H') version of Lemma \ref{L.3.1}(i) now gives \[ Z_y(t)\le Z_y(0)\le Y^{\varepsilon/2}_y(0)+L_{u,\varepsilon/2,1-\varepsilon_0}(0)\le Y^{\varepsilon/2}_y(t)+c_Y't+r_Y+L_{u,\varepsilon/2,1-\varepsilon_0}(0). \] If $u(t,y)\ge\varepsilon$, then $Y^{\varepsilon/2}_y(t)\le \psi^{-1}(\tfrac\varepsilon 2)$, so this yields \[ L_{u,\varepsilon,1-\varepsilon_0}(t)\le \psi^{-1}(\tfrac\varepsilon 2)+c_Y't+r_Y+L_{u,\varepsilon/2,1-\varepsilon_0}(0)<\infty. \] This and \eqref{11.6} mean that we can assume without loss that $\gamma:= \min\{\inf_{x\in{\mathbb{R}}^d} u_0(x),0\}=\min\{\inf_{(t,x)\in[0,\infty)\times{\mathbb{R}}^d} u(t,x),0\}$ is such that $c_\gamma>c_Z$ from \eqref{3.0}. But then Lemma \ref{L.11.1} shows that the (H') version of Lemma \ref{L.3.1}(iii) continues to hold because now $u\in[\gamma,u^+]$. The rest of the proof of Theorem~\ref{T.1.3} (or rather Theorem \ref{T.1.11}(ii)) then carries over to the case $u\in[\gamma,u^+]$, with $c_0$ replaced by $c_\gamma$ and the obvious (minimal) changes (notice also that the second claim in \eqref{11.6}, which holds for any $(f,u^+)\in{\mathcal F}$ and bounded $u_0$ satisfying \eqref{3.6a}, precludes existence of equilibrium solutions $p$ of \eqref{1.1} with $\gamma<p<u^+$ and $\inf_{x\in{\mathbb{R}}^d} p(x)<0$).
Since we can take $\gamma$ arbitrarily close to 0, by replacing $u_0$ with $u(t,\cdot)$ for a large enough $t$, we finally also obtain global mean speed in $[c_0,c_1]$. The proof is thus finished. \end{proof} \section{Proof of Theorem \ref{T.1.5}} \label{S8} Let $K\ge 1$ be a Lipschitz constant for $f$ and pick a non-increasing and continuous function $L:(0,\tfrac 12)\to(0,\infty)$ such that $(\sup_{t\in{\mathbb{R}}} L_{u,\varepsilon}(t)=)\,L^{u,\varepsilon}\le L(\varepsilon)$ for all $\varepsilon\in(0,\tfrac 12)$. We can do so because $L^{u,\varepsilon}$ is finite and non-increasing in $\varepsilon$. (i) Let $m_0:=\inf_{(t,x)\in {\mathbb{R}}^{d+1}} u(t,x)$ and $m_1:=\sup_{(t,x)\in {\mathbb{R}}^{d+1}} [u(t,x)-u^+(x)]$. It is easy to see that Lemma \ref{L.8.1}(i,ii) extends to the case when $S_{t_0,\varepsilon,\ell}= S_{t_0,\varepsilon,\ell}(K,m_0,m_1)$ is defined to be the set of all triples $(f,u^+,u)$ such that $f$ is Lipschitz with constant $K$, the functions $u^-\equiv 0$ and $u^+$ satisfy \eqref{1.19} and are equilibrium solutions of \eqref{1.1}, and $u$ with $m_0\le u\le u^+ +m_1$ solves \eqref{1.1} on $(t_0,\infty)\times{\mathbb{R}}^d$ and satisfies $L_{u,\varepsilon'}(t)\le \ell$ for all $\varepsilon'\in(\varepsilon,\tfrac 12)$ and all $t> t_0$ (with $L_{u,\varepsilon}(t)$ from Definition \ref{D.1.4}). Assume first that $m_1>0$. Take $(t_n,x_n)$ such that $u(t_n,x_n)-u^+(x_n)\to m_1$ and define $f_n(x,u):=f(x-x_n,u)$, $u^+_n(x):=u^+(x-x_n)$, and $u_n(t,x):=u(t-t_n,x-x_n)$. Since $(f_n,u^+_n,u_n)$ belongs to the corresponding $S_{-\infty,L}=S_{-\infty,L}(K,m_0,m_1)$, we obtain as in the proof of Lemma \ref{L.8.1} some new $(f,u^+,u)\in S_{-\infty,L}$ (and thus also $u\not\equiv u^+ +m_1$) such that $u\le u^+ + m_1$ and $u(0,0)=u^+(0)+m_1$. The function $u^+ +m_1$ is a super-solution of \eqref{1.1} due to \eqref{1.24}, so the strong maximum principle yields a contradiction with $u\not\equiv u^++m_1$. 
Thus $m_1\le 0$ and the strong maximum principle also shows $u<u^+$. The case $m_0<0$ is identical, this time using that the constant $m_0$ is a sub-solution of \eqref{1.1} due to \eqref{1.24}. We obtain $m_0\ge 0$ and then also $u>0$. (ii) By the discussion following Definition \ref{D.1.4} (see also the proof of Theorem \ref{T.1.12}(i) above), (i) shows that it is equivalent to use \eqref{1.3} instead of \eqref{1.3b} in what follows. We will do so in (ii) and (iii), including in the definition of $S_{t_0,\varepsilon,\ell}=S_{t_0,\varepsilon,\ell}(K,0,0)$ from (i) (and thus also in $S_{t_0,L},S_L$, with the condition $u\not\equiv 0,u^+$ for $S_L$). In addition, (i) and $u^-\equiv0$ show that we have \eqref{1.20a} and \eqref{1.20b}. Since $u$ propagates with a positive global mean speed, \eqref{1.3g} shows that $u$ is a transition solution connecting $u^-\equiv 0$ and $u^+$. Indeed, $u\not\equiv u^+$ gives $\Omega_{u,1-\varepsilon}(0)\neq{\mathbb{R}}^d$ for all small $\varepsilon>0$. Thus the first inclusion in \eqref{1.3g}, with $t\to-\infty$ and $\tau:=-t$, shows the $t\to-\infty$ limit in \eqref{1.8}. Also, $u\not\equiv 0$ gives $\Omega_{u,\varepsilon}(0)\neq\emptyset$ for all small $\varepsilon>0$. Thus the first inclusion in \eqref{1.3g}, with $t=0$ and $\tau\to\infty$, shows the $t\to\infty$ limit in \eqref{1.8}. Assume now that $\theta>0$ is as in (ii), take $\varepsilon_0:= \theta/ 2>0$, and let \[ u^s(t,x):=u(t+s,x) \] be a time shift of $u$. It is then sufficient to show that $u^s\ge u$ for any $s\ge 0$. Indeed, we then obtain $u_t\ge 0$, and the strict inequality follows from the strong maximum principle for $u_t$ (which satisfies a linear equation and is not identically 0 because $u$ is a transition solution). \begin{lemma} \label{L.8.2} There is $s_0$ such that $u^s\ge u$ whenever $s\ge s_0$. 
\end{lemma} \begin{proof} Since $u$ propagates with a positive global mean speed (with some $c>0$ and $\tau_{\varepsilon,\delta}<\infty$ in Definition \ref{D.1.1b}), we have $\Omega_{u,\varepsilon_0}(t)\subseteq \Omega_{u,1-\varepsilon_0}(t+s)$ for all $t\in{\mathbb{R}}$ and $s\ge s_0:=\tau_{\varepsilon_0,c/2}$. Thus for $s\ge s_0$ we have $u^s+\varepsilon_0\ge u$ as well as \begin{equation} \label{8.2} u^s(t,x)\ge u(t,x) \text{ whenever } u(t,x)\in [\varepsilon_0,u^+(x)-\varepsilon_0]. \end{equation} Next take any $s\ge s_0$ and let $\varepsilon\ge 0$ be the smallest number such that $u^s+\varepsilon\ge u$. Obviously, $\varepsilon$ exists and $\varepsilon\le \varepsilon_0$. We need to show that $\varepsilon=0$, so assume that $\varepsilon>0$ and let $(t_n,x_n)$ satisfy \[ \lim_{n\to\infty} [u^s(t_n,x_n)+\varepsilon-u(t_n,x_n)] = 0. \] Then \eqref{8.2} shows that $u(t_n,x_n)\notin[\varepsilon_0,u^+(x_n)-\varepsilon_0]$ for large enough $n$, so either $u(t_n,x_n)\in [\varepsilon,\varepsilon_0]$ or $u(t_n,x_n)\in [u^+(x_n)-\varepsilon_0,u^+(x_n)]$. Apply the $u^+\not\equiv 1$ version of Lemma \ref{L.8.1}(ii) with $t_n(\varepsilon)=-\infty$, $f_n(x,u):=f(x-x_n,u)$, $u^+_n(x):=u^+(x-x_n)$, and $u_n(t,x):=u(t-t_n,x-x_n)$. We obtain $(\tilde f,\tilde u^+,\tilde u)\in S_{-\infty,L}(K,0,0)$ such that $\tilde u\in[0,\tilde u^+]$, $\tilde u^s+\varepsilon\ge \tilde u$, \begin{equation}\label{8.2a} \tilde u^s(t,x)\ge \tilde u(t,x) \text{ whenever } \tilde u(t,x)\in [\varepsilon_0,\tilde u^+(x)-\varepsilon_0] \end{equation} by \eqref{8.2}, and \[ \tilde u^s(0,0)+\varepsilon=\tilde u(0,0) \qquad( \in[\varepsilon,\varepsilon_0]\cup[\tilde u^+(0)-\varepsilon_0,\tilde u^+(0)] ). \] Moreover, $\tilde f$ is non-increasing in $u$ on $[0,\theta]$ and on $[\tilde u^+(x)-\theta,\tilde u^+(x)]$ because $f,u^+$ have the same property. Let now $v:= \tilde u^s+\varepsilon-\tilde u\ge 0$, so that $v_t = \Delta v + \tilde f(x,\tilde u^s)-\tilde f(x,\tilde u)$. 
We then have \[ v_t \ge \Delta v-Kv. \] Indeed, this obviously holds when $\tilde u^s(t,x)\ge \tilde u(t,x)$. When $\tilde u^s(t,x)< \tilde u(t,x)$, then \eqref{8.2a} and $\varepsilon\le\varepsilon_0$ show $\tilde u^s(t,x),\tilde u(t,x)\in [0,\varepsilon_0]\cup[\tilde u^+(x)-2\varepsilon_0,\tilde u^+(x)]$, so $\tilde f(x,\tilde u^s(t,x))\ge \tilde f(x,\tilde u(t,x))$ by $2\varepsilon_0=\theta$. Now we obtain $v\equiv 0$ on $(-\infty,0]\times{\mathbb{R}}^d$ by $v\ge 0$, $v(0,0)=0$, and the strong maximum principle. But then $\tilde u^{-sn}\equiv \tilde u+n\varepsilon$ on $(-\infty,0]\times{\mathbb{R}}^d$ for $n\in{\mathbb{N}}$, a contradiction with boundedness of $\tilde u$. Thus $\varepsilon=0$ for any $s\ge s_0$ and the proof is finished. \end{proof} \begin{lemma} \label{L.8.3} We have $u^s\ge u$ for any $s\ge 0$. \end{lemma} \begin{proof} Let $s_1\ge 0$ be the smallest number such that $u^s\ge u$ for any $s\ge s_1$ (which obviously exists), and assume that $s_1>0$. We first claim that \begin{equation}\label{8.1} m:= \min \left\{ \varepsilon_0, \inf_{u(t,x)\in[\varepsilon_0,u^+(x)-\varepsilon_0]} [u^{s_1}(t,x)-u(t,x)] \right\} >0. \end{equation} Indeed, if $m=0$, then let $(t_n,x_n)$ be such that $u(t_n,x_n)\in[\varepsilon_0,u^+(x_n)-\varepsilon_0]$ and \[ \lim_{n\to\infty} [u^{s_1}(t_n,x_n)-u(t_n,x_n)] = 0. \] The $u^+\not\equiv 1$ version of Lemma \ref{L.8.1}(ii) with $t_n(\varepsilon):=-\infty$, $f_n(x,u):=f(x-x_n,u)$, $u^+_n(x):=u^+(x-x_n)$, and $u_n(t,x):=u(t-t_n,x-x_n)$, again yields $(\tilde f,\tilde u^+,\tilde u)\in S_{-\infty,L}(K,0,0)$ such that \[ \tilde u^{s_1} \ge \tilde u \qquad\text{and}\qquad \tilde u^{s_1}(0,0)=\tilde u(0,0) \in[\varepsilon_0,\tilde u^+(0)-\varepsilon_0]. \] This contradicts the strong maximum principle for $v:=\tilde u^{s_1}-\tilde u\ge 0$, which satisfies a linear equation $v_t=\Delta v + \lambda(t,x) v$ with $\|\lambda\|_\infty\le K$, because $v(0,0)=0$ and $v\not \equiv 0$.
The latter holds because otherwise $\tilde u$ would be time-periodic, contradicting \eqref{1.8} for $\tilde u$ (which propagates with a positive global mean speed because the same is true for $u_n$, with $n$-independent constants in Definition \ref{D.1.1b}, so \eqref{1.8} holds for $\tilde u$ by the first claim in (ii)). So $m>0$ and since $u_t$ is uniformly bounded by parabolic regularity, there is $s_2\in(0,s_1)$ such that $u^s \ge u^{s_1} -m$ for any $s\in[s_2,s_1]$. Thus \eqref{8.2} as well as $u^s+\varepsilon_0\ge u$ hold for any $s\in[s_2,s_1]$. Fix any such $s$ and let $\varepsilon\in [0,\varepsilon_0]$ be the smallest number such that $u^s+\varepsilon\ge u$. The argument in the proof of Lemma \ref{L.8.2} now shows that $\varepsilon=0$. But since $s\in [s_2,s_1]$ was arbitrary, this means that we can decrease $s_1$ to $s_2$, a contradiction. It follows that $s_1=0$. \end{proof} This finishes the proof of (ii). (iii) From bounded width of $u$ and Lemma \ref{L.11.1} it follows that $u$ propagates with a positive global mean speed. Thus (ii) yields $u_t>0$, and then the (H') version of Corollary \ref{C.3.3a}(ii) from the proof of Theorem \ref{T.1.11} gives the result. Note that we did not use Theorem \ref{T.1.5} in the proof of Corollary \ref{C.3.3a}. \section{Proof of Theorem \ref{T.1.2}(ii) } \label{S6} We will prove this by constructing an example of an ignition $f$ which prevents most solutions from having a bounded width (an almost identical construction can be made with a monostable $f$). The idea is to find $f$ such that there is an equilibrium solution $p\in(0,1)$ of \eqref{1.1}, independent of $x_1$, with the transition $0\to p$ propagating faster in the direction $e_1=(1,0,\dots,0)$ than the transition $p\to 1$. Then as $t\to\infty$, the solution $u$ will be close to $p$ on a slab $I_t\times{\mathbb{R}}^{d-1}$ (to the left of which $u\sim 1$ and to the right of which $u\sim 0$), with $I_t$ an interval of linearly growing length. 
Let $\tilde p:{\mathbb{R}}^{d-1}\to(0,1)$ be $C^\infty$, radially symmetric, radially decreasing, with \[ \tilde p (\tilde x)=3^{d-4}|\tilde x|^{3-d} \text{ for $|\tilde x|\ge 3$,} \qquad \Delta \tilde p <0 \text{ on $B_3(0)$,} \qquad \text{and} \qquad \tilde p(B_1(0))\subseteq(\tfrac 23,\tfrac 34). \] Let $f_0:[0,1]\to[0,\infty)$ be a $C^\infty$ ignition reaction with $f_0(\tilde p(\tilde x))=-\Delta\tilde p(\tilde x)$ for $\tilde x\in{\mathbb{R}}^{d-1}$ (so ignition temperature is $\theta_0:=\tfrac 13$) and decreasing on $[\tfrac 34,1]$. Then $p(x):=p(x_2,\dots,x_d)\in(0,\tfrac 34)$ satisfies on ${\mathbb{R}}^d$, \[ \Delta p + f_0(p) = 0. \] Consider $f$ that satisfies (H) with this $f_0$, some $K\ge 1$ and $\theta:=\tfrac 14$, as well as $f(x,u)=f_0(u)$ for all $x\in{\mathbb{R}}^d$ and $u\in[0,\tfrac 12]\cup [p(x),1]$, and $f(x,u)\ge f_0(u)$ for all $x\in{\mathbb{R}}^d$ and $u\in(\tfrac 12,p(x))$ (provided this interval is non-empty). Then obviously $f(x,u)= 0$ for $(x,u)\in {\mathbb{R}}^d\times[0,\theta_0]$. \begin{lemma} \label{L.9.1} Let $f$ be as above and $c:=\max\{2\sqrt{\|f_0'\|_\infty},1\}>0$. If $u$ solves \eqref{1.1}, \eqref{1.2} with $t_0=0$ and $u_0(x)\le p(x)+e^{-c(x_1-z)/2}$ for all $x\in{\mathbb{R}}^d$ and some $z\in{\mathbb{R}}$, then \begin{equation} \label{9.1} u(t,x)\le p(x)+e^{-c(x_1-z-ct)/2}. \end{equation} \end{lemma} \begin{proof} Let $v$ be the right-hand side of \eqref{9.1}. Then \[ v_t-\Delta v -f(x,v) =f_0(p(x))-f_0(v)+\frac{c^2}4 e^{-c(x_1-z-ct)/2} \ge 0 \] by $c^2/4\ge \|f_0'\|_\infty$. So $v$ is a super-solution for \eqref{1.1} with $v(0,\cdot)\ge u_0$, and we are done. \end{proof} That is, transition $p\to 1$ is propagating in the direction $e_1$ with speed at most $c$, which is independent of $K,f$. We now make $f$ sufficiently large for $u\in(\tfrac 12,p(x))$ so that the transition $0\to p$ will be propagating faster than speed $c$. 
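For the reader's convenience, the one-line computation in the proof of Lemma \ref{L.9.1} can be unpacked as follows (writing $E:=e^{-c(x_1-z-ct)/2}$ as a shorthand, and using $\Delta p=-f_0(p)$ together with $f(x,v)=f_0(v)$, which holds because $v=p+E> p(x)$):
\[
v_t=\frac{c^2}4\cdot 2\,E=\frac{c^2}2 E \qquad\text{and}\qquad \Delta v=\Delta p+\frac{c^2}4 E=-f_0(p)+\frac{c^2}4 E,
\]
hence
\[
v_t-\Delta v-f(x,v)=f_0(p)-f_0(v)+\frac{c^2}4 E\ge \left(\frac{c^2}4-\|f_0'\|_\infty\right)E\ge 0,
\]
because $f_0(v)-f_0(p)\le \|f_0'\|_\infty(v-p)=\|f_0'\|_\infty E$ and $c\ge 2\sqrt{\|f_0'\|_\infty}$.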
Let $f^M$ be as $f$ above, with $f^M(x,u):=f_0(u)+M(u-\tfrac 12)(p(x)-u)$ for $u\in(\tfrac 12,p(x))$. Let $t_0:=0$ and fix a radially symmetric and radially non-increasing $v_0$ such that \begin{equation} \label{9.2} \frac 23 \chi_{B_{1/2}(0)} \le v_0 \le \frac 23 \chi_{B_{1}(0)} \qquad(\le p) \end{equation} and $\Delta v_0(x) + f^{M_0}(x,v_0(x))\ge 0$ for some $M_0\gg 1$. Then the solution $v^M$ of \eqref{1.1}, with $f^M$ instead of $f$ and $v^M(0,\cdot)=v_0(\cdot)$, is non-decreasing in both $t>0$ and $M\ge M_0$. Therefore \[ w(t,x):=\lim_{M\to\infty} v^M(t,x) \qquad(\le p(x)) \] satisfies $w_t\ge 0$. We claim that $w(t,x)\notin (\tfrac 12,p(x))$ for any $(t,x)\in(0,\infty)\times{\mathbb{R}}^d$. Otherwise there is $M_1\ge M_0$ such that $v^{M_1}(t,x)\in (\tfrac 12,p(x))$, and then there is $\varepsilon>0$ such that $v^{M_1}(t-\varepsilon,y)\in (\tfrac 12,p(y))$ for all $y\in B_\varepsilon(x)$. Since $v^M(t-\varepsilon,\cdot)\ge v^{M_1}(t-\varepsilon,\cdot)$ for all $M\ge M_1$, it follows from the definition of $f^M$ that $w(s,y)\ge p(y)$ for all $s>t-\varepsilon$ and $y\in B_\varepsilon(x)$. This is a contradiction with the hypothesis $w(t,x)\in(\tfrac 12,p(x))$. This, \eqref{9.2}, and $v^M_t\ge 0$ thus show \begin{equation} \label{9.3} w(t,x)=p(x) \qquad \text{for $(t,x)\in(0,\infty)\times B_{1/2}(0)$.} \end{equation} Pick some $0<\tau\ll 1$. It is easy to show using the properties of the Gaussian that if $\tau$ is sufficiently small, then any super-solution of the heat equation on $D:={\mathbb{R}}^d\setminus B_{1/2-\tau^{2/3}}(0)$ with initial condition $u(\tau,x)\ge 0$ for $x\in D$ and boundary condition $u(t,x)\ge \tfrac 35$ on $(\tau,\infty)\times \partial D$ satisfies $u(2\tau,x)>\tfrac 12$ for all $x\in B_{1/2+\tau^{2/3}}(0)$. \eqref{9.3} shows that there is $M$ such that $v^M$ satisfies these conditions, and it follows that \begin{equation} \label{9.4} w(t,x)=p(x) \qquad \text{for $(t,x)\in (2\tau,\infty)\times B_{1/2+\tau^{2/3}}(0)$}. 
\end{equation} We can repeat this argument with \eqref{9.4} as a starting point instead of \eqref{9.3} and eventually obtain for all integers $n\ge \tau^{-2/3}$, \begin{equation} \label{9.5} w(t,x)=p(x) \qquad \text{for $(t,x)\in (2n\tau,\infty)\times A_{n\tau^{2/3}-1/2}$}, \end{equation} where $A_{a}:=\bigcup_{b\in[-a,a]} B_1(b,0,\dots,0)\subseteq {\mathbb{R}}^{d}$ (we need $A_{n\tau^{2/3}-1/2}$ instead of $B_{n\tau^{2/3}+1/2}(0)$ because $p(x)>\tfrac 12$ holds only when $|(x_2,\dots,x_d)|<C$, for some $C>1$). One can in fact show that $w(t,\cdot)=p(\cdot)$ for all $t>0$ but we will not need this. If we now choose $\tau>0$ so that there exists an integer $n\in((2c+1)\tau^{-2/3},(2\tau)^{-1})$, then \eqref{9.5} yields \[ v^M(1,\cdot)\ge \frac 23 \chi_{A_{2c}}(\cdot), \] for some $M\ge M_0$. Iterating this, we obtain for $m\in{\mathbb{N}}$, \begin{equation} \label{9.6} v^M(m,\cdot)\ge \frac 23 \chi_{A_{2cm}}(\cdot). \end{equation} So let us take $f:=f^M$ and any $u_0$ as in Theorem \ref{T.1.2}(ii) (without loss of generality let $t_0=0$). It follows from $c\ge 1$, $p\le\tfrac 34$, and Lemma \ref{L.9.1} with $z=0$ that \begin{equation} \label{9.7} \sup_{t>0, \, x_1>ct+4} u(t,x) \le \frac 9{10}. \end{equation} If $u\to 1$ locally uniformly on ${\mathbb{R}}^d$ as $t\to\infty$, then $u(t_1,\cdot)\ge \tfrac 23 \chi_{B_{1/2}(0)}(\cdot)$ for some $t_1>0$, and so $u(t_1+m,(2cm,0,\dots,0))\ge \frac 23$ for $m\in{\mathbb{N}}$ by \eqref{9.6}. It follows from this and \eqref{9.7} that all claims in (C) are false for $u$. This also holds when $u\not\to 1$ locally uniformly on ${\mathbb{R}}^d$ as $t\to\infty$, because then Lemma \ref{L.2.1} shows $\sup_{(t,x)\in (1,\infty)\times{\mathbb{R}}^d} u(t,x)<1$ (and $u\not\to 0$ uniformly by the hypothesis). \section{Proof of Theorem \ref{T.1.6}} \label{S7} (i) Having Remark 2 after Theorem \ref{T.1.11}, this is now rather standard. 
Let $U,s_0,$ and small $\varepsilon'>0$ be as in \eqref{11.3}, with $R'$ large enough so that $w_0(x):=W(|x|)$ satisfies \eqref{4.2} and the solution $w$ of \eqref{1.1} with $f_0$ in place of $f$ and $w(0,x):=w_0(x)$ spreads in the sense $w\to \theta_1$ locally uniformly as $t\to\infty$ \cite{AW}. If now $u_n\in[0,u^+]$ is the solution of \eqref{1.1} on $(0,\infty)\times{\mathbb{R}}^d$ with $u_n(0,x)=w_0(x-ne_1)$, then $(u_n)_t\ge 0$; and the proof of Lemma \ref{L.11.1}, along with $f\ge f_0$ and the comparison principle, shows that $u_n\to u^+$ locally uniformly as $t\to\infty$. If $t_n$ is the first time such that $u_n(t_n,0)=\theta_0$, shift $u_n$ in time by $-t_n$ so that now it solves \eqref{1.1} on $(-t_n,\infty)\times{\mathbb{R}}^d$ and $u_n(0,0)=\theta_0$. Obviously $\lim_{n\to\infty} t_n=\infty$ because $u\le u^+\le 1$, $f(x,u)\le Ku$, and the comparison principle yield on $(-t_n,\infty)\times{\mathbb{R}}^d$, \[ u_n(t,x)\le e^{-\sqrt K(x_1+n-s_0-R'-2\sqrt K (t+t_n))}. \] So by parabolic regularity, there is a sub-sequence along which $u_n$ and their spatio-temporal first and spatial second derivatives converge locally uniformly to some solution $u$ of \eqref{1.1} on ${\mathbb{R}}\times{\mathbb{R}}^d$. Obviously $0\le u\le u^+$ and $u_t\ge 0$ (then all three inequalities are strict due to the strong maximum principle), and since all the $u_n$ satisfy the Remark after Theorem \ref{T.1.11} with the same $\ell_\varepsilon,T_\varepsilon$ (and $-t_n$ in place of $t_0$), $u$ has a bounded width. Theorem \ref{T.1.5}(ii) now shows that $u$ is a transition solution because bounded width and Lemma \ref{L.11.1} yield a positive global mean speed of $u$, finishing the proof. (ii) We will only {\it sketch} the proof, since the mechanics of the workings of the counter-example which we construct are more important than the detailed proof. The latter would only add tedious technical details, obscuring the main ideas. 
Let us also only consider the case $d=2$ because the general case is identical, with annuli below replaced by shells. To find $f$ such that there is no transition solution with doubly-bounded width for \eqref{1.1} (and thus also no transition front), it is sufficient to take some ignition $f_0$ and let $f$ be equal to $\beta f_0(u)$ outside the union of the discs $B_n:=B_n(n^3e_1)$ (for some $\beta\gg1$), and $f(x,u)=f_0(u)$ inside each $B_{n-1}(n^3e_1)$ (with a smooth transition between the two on $B_{n}(n^3e_1)\setminus B_{n-1}(n^3e_1)$). If $u$ is a transition solution for \eqref{1.1} with a bounded width, let $t_n$ be the first time when $\sup_{x\in B_n}u(t_n,x)=\tfrac 1{10}$ (i.e., when the reaction zone of $u$ ``reaches'' $B_n$). Since $\beta\gg1$, the reaction will spread all over $A_n:=B_{2n}(n^3e_1)\setminus B_n(n^3e_1)$ before it spreads to $B_{n/2}(n^3e_1)$, as described in the introduction (see below for more details). So at the (later) time $s_n$ when $\inf_{x\in B_{n}} u(s_n,x)=\tfrac 12$, we will also have $\inf_{x\in A_n} u(s_n,x)\ge \tfrac 12$. It follows that $L^{u,\varepsilon}\ge n$ for all $n$ and $\varepsilon\in(\tfrac 12,1)$. Hence $u$ does not have a doubly-bounded width. We will need to use a more involved construction to obtain $\inf_{x\in{\mathbb{R}}^2} u(t,x)>0$ for any $t\in{\mathbb{R}}$ and any $u$ from (ii). Let $f_0(u)=(2u-1)(1-u)\chi_{[1/2,1]}(u)$ and let $R$ be such that if $u_t=\Delta u + f_0(u)$ on $(0,\infty)\times{\mathbb{R}}^2$ and $u(0,\cdot)\ge \tfrac 34 \chi_{B_R(0)}(\cdot)$, then $u\to 1$ locally uniformly as $t\to\infty$. By Lemma \ref{L.2.1}, such $u$ also satisfies \begin{equation} \label{10.2} \lim_{t\to\infty} \inf_{|x|\le ct} u(t,x) = 1 \end{equation} for any $c<c_0$, with $c_0>0$ the spreading speed for $f_0$ (and we have $c_0\le 2\sqrt{\|f_0(u)/u\|_\infty}\le 2$). 
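The parenthetical bound on $c_0$ can be verified directly: on $[\tfrac12,1]$ we have
\[
\frac{f_0(u)}u=\frac{(2u-1)(1-u)}u=-2u+3-\frac 1u,
\]
which is maximized at $u=2^{-1/2}$ (where its derivative $-2+u^{-2}$ vanishes), so
\[
\left\|\frac{f_0(u)}u\right\|_\infty=3-2\sqrt 2<1 \qquad\text{and hence}\qquad c_0\le 2\sqrt{3-2\sqrt 2}<2.
\]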
Let $\beta>1$ be such that if $u_t=\Delta u + \beta f_0(u)$ on $(0,\infty)\times{\mathbb{R}}\times[-2,2]$ with Dirichlet boundary conditions and $u(0,\cdot)\ge \tfrac 34 \chi_{B_1(0)}(\cdot)$, then \begin{equation} \label{10.1} \lim_{t\to\infty} \inf_{x\in [-100t,100t]\times[-1,1]} u(t,x) \ge \frac 45. \end{equation} That is, the reaction with strength $\beta$ spreads along a strip with a cold boundary at speed at least 100. It is not difficult to show that this holds for large enough $\beta$. Next let $f(x,u)=a(|x|)f_0(u)$, where $a:[0,\infty)\to [1,\beta]$ is smooth, Lipschitz with constant $\beta$, with $a(r)=\beta$ if $|r-2^{n}|\le 3$ for some $n\ge 3$, and with $a(r)=1$ if $|r-2^{n}|\ge 4$ for each $n\ge 3$. That is, the reaction is large on a sequence of annuli with uniformly bounded widths and exponentially growing radii, and small elsewhere. We obviously have $f\in F(f_0,\beta,\tfrac 14,\zeta,\eta)$ for any $\zeta,\eta>0$ because $\alpha_f(\cdot;\zeta)>\theta_0$ for any $\zeta>0$. Then pick $\varepsilon_0\in(0,\tfrac 12)$ such that if $u\in[0,1]$ solves \eqref{1.1}, \eqref{1.2}, then $\inf_{y\in B_R(x)} u(t,y)\ge \tfrac 34$ whenever $t\ge t_0+1$ and $u(t,x)\ge 1-\varepsilon_0$ (which exists by parabolic regularity) and also such that the unique traveling front for $u_t = u_{xx}+f_0(u)$ connecting $\varepsilon_0$ and 1 has speed $c_{\varepsilon_0}< 1.1c_0$ (which is possible because $\lim_{\varepsilon\to 0}c_\varepsilon=c_0$). Assume now that $u\not\equiv 0,1$ is a bounded entire solution for \eqref{1.1} with bounded width. By Theorem \ref{T.1.5}(i) we have $u\in(0,1)$, so Lemma \ref{L.2.1} yields a positive global mean speed of $u$. Then Theorem \ref{T.1.5}(ii) shows that $u$ is a transition solution with $u_t>0$. Let $t_0$ be the first time such that $u(t_0,0)=\tfrac 12$ and for any large $n$, let $t_n$ be the first time such that $\sup_{x\in B_{2^n}(0)} u(t_n,x) = \varepsilon_0$. 
Then the maximum principle shows that there is $x_n\in\partial B_{2^n}(0)$ with $u(t_n,x_n)=\varepsilon_0$. Since $L^{u,\varepsilon_0}<\infty$, our choice of $\varepsilon_0$ and $R$ shows that there is $T>0$ such that $\inf_{x\in B_1(x_n)}u(t_n+T,x)\ge \tfrac 34$ for each $n$. It then follows from \eqref{10.1} and $100>20\pi$ that \begin{equation} \label{10.4} \inf_{x\in B_{2^n+1}(0)\setminus B_{2^n-1}(0)} u \left( t_n+T + \frac {2^n}{20},x \right)\ge \frac 34 \end{equation} for all large $n$. From this and \eqref{10.2} it follows that for all large $n$, \begin{equation} \label{10.3} \inf_{x\in B_{2^n}(0)\setminus B_{2^{n-1}}(0)} u \left( t_n+T + \frac {2^n}{20} + \frac {2^{n-1}}{0.9c_0},x \right)\ge \frac 34. \end{equation} At the same time, $\sup_{x\in B_{2^n}(0)} u(t_n,x) = \varepsilon_0$ and $c_{\varepsilon_0}< 1.1c_0$ show that $u(t, 0)< \tfrac 12$ for $t\le t_n + 2^n(1.1c_0)^{-1}$ if $n$ is large, because the reaction can propagate radially no faster than at speed $c_{\varepsilon_0}$ on any wide annulus where $a(|x|)=1$, provided $u\le\varepsilon_0$ initially (this is similar to the upper bound on the propagation speed in Lemma \ref{L.2.1a}, and also uses the fact that the annuli on which $a(|x|)> 1$ have widths $\le 4$, so they shorten the time to reach the origin only by an amount proportional to $n$). So $t_n + 2^n(1.1c_0)^{-1}\le t_0$ for all large $n$, and if we let \[ s_n:= t_n+T + \frac{2^n}{20} + \frac{2^{n-1}}{0.9c_0} \quad \left(\le t_n + \frac{2^n}{1.1c_0} \text{ if $n$ is large because $c_0\le 2$}\right), \] then we obtain $s_n\le t_0$ for all large $n$. But then \eqref{10.3} and $u_t>0$ show for all large $n$, \[ \inf_{x\in B_{2^n}(0)\setminus B_{2^{n-1}}(0) }u(t_0,x)\ge \frac 34. \] The result now follows from Lemma \ref{L.2.1}. \smallskip {\it Remark.} It is an interesting question whether for the reaction in (ii), all entire solutions $u\in(0,1)$ satisfy $\lim_{t\to\infty} \inf_{x\in{\mathbb{R}}^2} u(t,x)=1$. 
\begin{proof}[A rough sketch of the proof that the claim involving \eqref{1.00} is false] We construct here an example involving front-like solutions in ${\mathbb{R}}^2$ (essentially the same idea works for spark-like solutions as well as for all dimensions $d\ge 2$). A full proof that it works as claimed would be quite technical, but the following clearly illustrates the main idea. For some rapidly growing $b_n\to\infty$, define \[ A:=\bigcup_{n\ge 1} A_n := \bigcup_{n\ge 1} \big[ \{x\,|\, 0\le x_1 \le b_n \text{ and } |x_2|=b_n \} \cup \{x\,|\, x_1=b_n \text{ and } |x_2|\le b_n \} \big] \subseteq{\mathbb{R}}^2. \] We then let $f(x,u)=a(d(x,A))f_0(u)$, where $f_0$ is the ignition reaction from part (ii) of the above proof and $a:[0,\infty)\to[1,\beta]$ is smooth, with $a(s)=\beta$ for $s\le 1$ and $a(s)=1$ for $s\ge 2$. Here $\beta\gg 1$ will be chosen later. We also let $s_0\gg 1$ and $w:{\mathbb{R}}\to[0,1]$ be smooth and such that $w(s)=0$ for $s\ge 1$ and $w(s)=1$ for $s\le 0$. We then define $v_0(x):=w(x_1)$ and let $u_0(x)\in[w(x_1),w(x_1-2s_0)]$ be smooth and such that $u_0(x)=w(x_1)$ for $x_2\le -1$ and $u_0(x)=w(x_1-2s_0)$ for $x_2\ge 1$. Finally, let $u,v$ solve \eqref{1.1} on $(0,\infty)\times{\mathbb{R}}^2$ with $u(0,\cdot)=u_0$ and $v(0,\cdot)=v_0$, and let $t_n$ be the first time such that $u(t_n,b_n,0)=\tfrac 12$. It is obvious that $u,v$ satisfy the hypothesis of the claim involving \eqref{1.00} because $u_0\ge v_0$ and also $u_0\le v(T,\cdot)$ for some $T$. So if \eqref{1.00} holds, we must have \begin{equation}\label{10.5} \lim_{n\to\infty} [u(t_n,b_n,r)-u(t_n,b_n,-r)]=0 \qquad \text{for any $r\in{\mathbb{R}}$} \end{equation} because $v$ is obviously even in $x_2$. However, if we take $\beta\gg 1$ and sufficiently rapidly growing $b_n$, then the reaction zone of $u$ spreads towards $(b_n,0)$ along the two ``arms'' of $A_n$ much faster than through anywhere else, and that propagation is virtually unaffected by the other ``arms''. 
This and the definition of $u_0$ mean that the reaction zone moving towards $(b_n,0)$ along the upper arm of $A_n$ is distance $\sim 2s_0$ ahead of the one arriving along the lower arm. This means that if $s_0$ is chosen sufficiently large, depending on $\beta$ (but not on $b_n$), then $\liminf_{n\to\infty} u(t_n,b_n,s_0)\ge \tfrac 34$ and $\limsup_{n\to\infty} u(t_n,b_n,-s_0)\le \tfrac 14$. But this means $\liminf_{n\to\infty} [u(t_n,b_n,s_0)-u(t_n,b_n,-s_0)]>0$, a contradiction with \eqref{10.5}. \end{proof} {\it Remarks.} 1. This example can easily be adjusted to $v$ being a transition solution with a bounded width such that $v(t,x)=V(x_1-c_0t)$ for $t\ll -1$, where $c_0$ is the front/spreading speed and $V$ the traveling front profile for $f_0$. \smallskip 2. If $u,v$ are not required to be front-like (or spark-like), counter-examples to \eqref{1.00} can be constructed even for homogeneous reactions and dimensions $d\ge 2$.
nForum discussion on the entry FQFT (https://nforum.ncatlab.org/discussion/6324/):

Comment 1 (Urs, Nov 16th 2014):

I am beginning to give the entry FQFT a comprehensive Exposition and Introduction section.

So far I have filled some genuine content into the first subsection Quantum mechanics in Schrödinger picture.

But I have to quit now. This isn't even proof-read yet. So don't look at it unless you feel more in editing-mood than in pure-reading-mood.

Comment 2 (Urs, Nov 18th 2014, edited Nov 18th 2014):

I have been further editing, but I will run out of time. Today I am giving a talk, on request, to introduce the description of TQFT via n-functors on n-dimensional cobordisms. This is to an audience of mathematical physicists but with no real background in category theory or homotopy theory, so to get anywhere in one hour, I need some strategy.

My strategy is to explain in 1/2 of the talk the 1-dimensional case in detail and to explain in the next 1/4th of the talk the main results of the 2-dimensional case, so that in the last remaining 1/4th of the talk the statement of the full cobordism hypothesis falling from the sky will at least make some intuitive sense.

motivating all the monoidal 1-category structure from textbook quantum mechanics

then discuss

with the aim to indicate elements of a concrete construction and at the same time introduce some basic ideas of homotopy theory that need to be alluded to later on.

From that the Atiyah-axioms for 2d TQFT are evident and so I'll show some of the usual 2d Yoga along a string physics storyline in

That will leave just a handful of minutes at the end to say something about the actual topic,

(and that section hardly exists yet as actual typed notes at the moment, will have to see how far I get).

Comment 3 (Urs, May 18th 2019):

- Stephan Stolz, Topology and Field Theories, Contemporary Mathematics 613, American Mathematical Society 2014 (ams:conm-613)

Comment 4 (Dmitri Pavlov, Oct 17th 2019):

Corrected xxx.lanl.gov to arXiv.

Comment 5 (Dmitri Pavlov, Jan 13th 2021)

Comment 6 (Dmitri Pavlov, Jan 13th 2021):

Should this be renamed to "functorial field theory"? The formalism is not inherently quantum, and the title should probably not be abbreviated.

Comment 7 (Urs, Jan 13th 2021)

Comment 8 (Dmitri Pavlov, Jan 13th 2021)

Comment 9 (Urs, Jan 31st 2021):

(I see that the list of references in this entry leaves much room for improvement, concerning all of: completeness, organization, commentary and formatting.)

Comment 10 (Urs, Nov 2nd 2021)

Comment 11 (Dmitri Pavlov, Jul 6th 2022):

Terminology: The term functorial quantum field theory appears to originate around June 2008 in

At some point later, the adjective "quantum" was dropped because the formalism also encodes classical and prequantum field theories.

Comment 12 (Guest, Jul 7th 2022):

the article still refers to this concept as "functorial quantum field theory" even though the title is just "functorial field theory".
\section*{1.~Introduction} \begin{table}[t] \centering \footnotesize \setlength{\tabcolsep}{1.0pt} \renewcommand\arraystretch{1.2} \begin{minipage}{1\linewidth} \begin{subtable}{1\linewidth} \centering \begin{tabular}{l|c|c|c|c|c|c} \hline \multirow{2}{*}{\tabincell{c}{Vision \\Transformer}}&\multicolumn{2}{c|}{DeiT-T~\cite{DeiT_ICML}}& \multicolumn{2}{c|}{T2T-7~\cite{T2T_ICCV21}}& \multicolumn{2}{c}{T2T-14 ~\cite{T2T_ICCV21}}\\ \cline{2-7} & IN & IN-A & IN & IN-A & IN & IN-A \\ \hline ClassT& 72.2\;\;\;\;\;\; & 7.3\;\;\;\;\;\;\;\;\; & 71.7\;\;\;\;\;\; & 6.1\;\;\;\;\;\; & 81.5\;\;\;\;\;\; & 23.9\;\;\;\;\;\;\;\\ WordT&77.9$_{5.7\uparrow}$ & 15.5$_{8.2\uparrow}$\;& 73.7$_{2.0\uparrow}$ & 7.8$_{1.7\uparrow}$ &82.1$_{0.6\uparrow}$& 26.6 $_{2.7\uparrow}$\\ ClassT+WordT& 78.6$_{6.4\uparrow}$ & 17.5$_{10.2\uparrow}$& 74.5$_{2.8\uparrow}$ & 8.1$_{2.0\uparrow}$ & 82.6$_{1.1\uparrow}$ & 27.1$_{3.2\uparrow}$ \\ \hline \hline \multirow{2}{*}{\tabincell{c}{Language \\Transformer}}&\multicolumn{2}{c|}{GPT~\cite{GPT-1}}& \multicolumn{2}{c|}{BERT-base~\cite{DBLP:conf/naacl/DevlinCLT19}}& \multicolumn{2}{c}{BERT-large~\cite{DBLP:conf/naacl/DevlinCLT19}}\\ \cline{2-7} & CoLA & \tabincell{l}{\centering RTE} & CoLA & RTE & CoLA & RTE \\ \hline ClassT& 54.3\;\;\;\;\;\;& 63.2\;\;\;\;\;\;& 54.8\;\;\;\;\;\;&67.2\;\;\;\;\;\;&60.6\;\;\;\;\;\;&73.7\;\;\;\;\;\;\\ WordT& 56.1$_{1.8\uparrow}$&65.0$_{1.8\uparrow}$ & 56.4$_{1.6\uparrow}$ &69.0$_{1.8\uparrow}$ &61.4$_{0.8\uparrow}$ & 74.4$_{0.7\uparrow}$ \\ ClassT+WordT& 57.3$_{3.0\uparrow}$& 65.4$_{2.2\uparrow}$& 58.0$_{3.2\uparrow}$ &69.3$_{2.1\uparrow}$ &61.8$_{1.2\uparrow}$ &75.1$_{1.4\uparrow}$ \\ \hline \end{tabular}% \end{subtable} \end{minipage} \caption{Accuracies (\%) of transformer models which use single classification token (ClassT), single word tokens (WordT) and their combination (ClassT+WordT). 
We showcase performance of vision transformers (i.e., DeiT and T2T) on ImageNet (IN)~\cite{imagenet_cvpr09} and ImageNet-A (IN-A)~\cite{ImageNet-A}, and that of language transformers (i.e., GPT and BERT) on CoLA~\cite{warstadt2018neural} and RTE~\cite{bentivogli2009fifth}. The results indicate that word tokens per se are highly competent for classification and, moreover, are complementary to the classification token. See Sec.~\hyperref[section:experiments]{\textcolor{red}{4}} for details and for diverse results. } \label{table: ClassT or WordT and fusion} \vspace{-6pt} \end{table} In the past few years, transformer models have been very successful in the field of natural language processing (NLP)~\cite{NIPS2017_3f5ee243,DBLP:conf/naacl/DevlinCLT19,GPT-1}. The transformer architecture, solely based on attention mechanisms, can naturally model long-range dependencies between tokens and learn contextual knowledge. In contrast, convolutional neural networks (CNNs)~\cite{LeNet1989,nips2012cnn} lack such a capability, as convolutions are fundamentally local operations. The success of transformer models has attracted great interest from computer vision (CV) researchers~\cite{wang2018non,LambdaNetworks-ICLR2020,DBLP:journals/corr/abs-2012-12556,DBLP:journals/corr/abs-2101-01169}. Recently, a vision transformer model called ViT~\cite{ViT} was proposed, built entirely on a stack of transformer blocks; it has matched or outperformed state-of-the-art CNNs when pre-trained on the ultra large-scale datasets ImageNet-21K~\cite{imagenet_cvpr09} or JFT-300M~\cite{JFT-300M}. Thereafter, a number of pure transformer models, e.g.,~\cite{T2T_ICCV21,PSViT_ICCV21,PVT_ICCV21}, have been proposed to improve the transformer architecture, reporting impressive performance gains when trained from scratch on ImageNet with 1K classes~\cite{imagenet_cvpr09}. 
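As background for the discussion below, the token mixing performed by a single self-attention head (the mechanism through which the classification token interacts with all word tokens) can be sketched in a few lines of numpy. This is a minimal illustration with random placeholder weights; it is not the implementation of any of the cited models, and it omits multi-head splitting, masking, and residual connections:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention: every token
    attends to every other token, which is how a transformer models
    long-range dependencies in a single layer."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (n, n) token-to-token
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)            # row-wise softmax
    return w @ V                                     # (n, d) mixed tokens

rng = np.random.default_rng(0)
n, d = 5, 8                                          # 5 tokens of width 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

Note that the operation is permutation-equivariant in the tokens, which is why position embeddings are needed to retain word/patch order.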
Despite this great advance, pure transformer models have for years invariably built the final classifier exclusively on the classification token, without explicitly harnessing the high-level word tokens. Although the classification token interacts with all word tokens$\,$\footnote{For brevity, we use the notation of ``word'' token for both NLP and CV tasks; for the latter, it indicates an image patch.} through the attention mechanisms across the network backbone, we conjecture that the high-level word tokens by themselves contain rich information that the classification token fails to accommodate. Therefore, exploiting only the classification token while excluding the word tokens from the classifier limits transformer models. Indeed, we empirically find that the rich information inherent in word tokens per se supports a highly competent classifier and, moreover, is complementary to the classification token. As shown in Tab.~\ref{table: ClassT or WordT and fusion}, the experimental results on both CV and NLP tasks show that a classification head solely based on word tokens (i.e., WordT) is often better than one based on the single classification token (i.e., ClassT), and the combination of WordT and ClassT further boosts the classification accuracy. Based on the empirical observations above, we rethink the classification head for transformers, and propose a novel transformer model, namely the second-order transformer (SoT), to exploit simultaneously the classification token and word tokens for the final classifier. To this end, two key issues must be tackled: (1) how to effectively aggregate the word tokens to fully explore their rich information; (2) how to explicitly combine the word tokens with the classification token for building the final classification head. For effective token aggregation, we propose a multi-headed global cross-covariance pooling (MGCrP) method, which learns a group of second-order, cross-covariance representations. 
Previous works~\cite{LiXWZ17,PAMI_2020_MPN-COV} have shown that structured normalization plays an important role for second-order representations. Unfortunately, existing structured normalization methods are not applicable to our MGCrP, as it produces asymmetric matrices. Therefore, we present a singular value power normalization (svPN) method in light of the overall statistical analysis of the data; moreover, an approximate, fast svPN algorithm is developed to guarantee efficiency. MGCrP shares a similar philosophy with the transformer block and so is more consistent with it, performing clearly better than the global average pooling (GAP)~\cite{iclr2014_NIN,Szegedy_2015_CVPR,He_2016_CVPR} and global covariance pooling (GCP)~\cite{pami/LinRM18,PAMI_2020_MPN-COV} widely used in CNNs. To build the classification head by combining the word tokens with the classification token, we propose three early fusion schemes, which combine the two kinds of token representations through operations such as concatenation and sum, along with one late fusion scheme which integrates the individual classification scores. These schemes are illustrated in Fig.~\ref{fig:fusion schemes} and their comparisons are given in Tab.~\ref{tab:comparison of fusion method}; among them, the sum scheme performs best. Note that the proposed classification head is architecture-agnostic and can be seamlessly integrated into a wide variety of vision transformers and language transformers. To verify the effectiveness of our SoT, we conduct experiments on both CV and NLP tasks. Our contributions are summarized as follows. \begin{itemize} \item We dig into the transformer's classification head, finding that word tokens per se are highly competent for classification and, moreover, are complementary to the classification token. Based on this, we propose a second-order transformer (SoT) model. 
As far as we know, our SoT makes the first attempt to exploit simultaneously the classification token and word tokens for classification, which can span and benefit both CV and NLP tasks. \item We propose multi-headed global cross-covariance pooling with structured normalization for mining word tokens, while systematically studying several schemes to combine word tokens with the classification token. As a result, we achieve a novel classification head, which is very effective and suitable for a variety of transformer architectures. \item We perform a thorough ablation study to validate our SoT. Extensive experiments show our SoT can significantly improve vision transformers on the challenging ImageNet and ImageNet-A. Meanwhile, our SoT is very helpful to language transformers such as BERT and GPT, performing better than its conventional counterpart on General Language Understanding tasks. \end{itemize} \section*{2.~Related Works}\label{section-related work} \vspace{4pt}\noindent \textbf{Transformer architectures.} Solely based on attention mechanisms without any recurrence and convolutions, transformer models~\cite{NIPS2017_3f5ee243} can naturally model long-range dependencies and global context, outperforming LSTM~\cite{hochreiter1997long} and CNN~\cite{10.5555/3305381.3305510} on NLP tasks. Transformer models pre-trained on large-scale unlabeled corpora (e.g., GPT~\cite{GPT-1,GPT-2} and BERT~\cite{DBLP:conf/naacl/DevlinCLT19}) are able to learn powerful language representations, and, after fine-tuning, achieve state-of-the-art results for a wide range of downstream language tasks~\cite{GPT-1,GPT-2,DBLP:conf/naacl/DevlinCLT19,roberta}. For CV tasks, Dosovitskiy et al.~\cite{ViT} present a pure transformer architecture (i.e., ViT) which reports very promising performance when pre-trained on ultra large-scale datasets. The works that follow greatly improve over ViT. DeiT~\cite{DeiT_ICML} proposes an extra distillation token to transfer knowledge from teacher models. 
T2T-ViT~\cite{T2T_ICCV21} and PS-ViT~\cite{PSViT_ICCV21} focus on better tokenization of vision patches. Swin Transformer~\cite{Swin_ICCV21} and PVT-T~\cite{PVT_ICCV21} introduce hierarchical structures into the transformer. Conformer~\cite{Conformer_ICCV21} is a dual architecture which combines a CNN with a transformer. These pure transformer architectures, in either the NLP or the CV field, only use the classification token for the classifier, limiting the performance of the models. As such, we propose a novel classification head, which exploits simultaneously the classification token and word tokens. \vspace{4pt}\noindent \textbf{Second-order pooling.} GCP, also known as bilinear pooling, often produces symmetric positive definite (SPD) matrices as global representations~\cite{Ionescu_2015_ICCV,lin2015bilinear}. It can capture second-order relations, performing much better than GAP, which only estimates first-order statistics~\cite{pami/LinRM18,PAMI_2020_MPN-COV}. Research has shown that normalization techniques are very helpful to second-order representations~\cite{lin2015bilinear,RANK-1-2020,LiXWZ17,Ionescu_2015_ICCV}. Element-wise normalization improves GCP but fails to consider the holistic structure of the data~\cite{lin2015bilinear}. Structured normalization such as matrix power normalization (MPN)~\cite{LiXWZ17,PAMI_2020_MPN-COV,lin2017improved} exploits the overall statistical and geometric structure of covariance matrices, and has greatly benefited GCP. As MPN depends on GPU-unfriendly eigen-decomposition, iSQRT~\cite{LiXWG18} proposes a fast method for computing the matrix square root, suitable for parallel implementation on GPU. A recent study~\cite{WQL_What_CVPR20} has shown that GCP with MPN improves the Lipschitzness of the loss function, leading to fast convergence and robustness to distorted images. 
Different from GCP, we propose multi-headed global cross-covariance pooling, which shares a similar philosophy and is consistent with transformers; furthermore, we propose a new structured method to normalize cross-covariance matrices. \section*{3.~Second-order Transformer (SoT)}\label{section:So-ViT model} In this section, we first describe the SoT network, followed by the proposed multi-headed global cross-covariance pooling method. Finally, we introduce the normalization method for cross-covariance matrices. \subsection*{3.1.~SoT Network}\label{subsection:SoT network} We illustrate our SoT network in Fig.~\ref{fig:overview}. Similar to~\cite{PSViT_ICCV21}, we develop a small, hierarchical module for token embedding, which, based on off-the-shelf convolutions, gradually reduces the spatial size of the feature maps. In designing the token embedding module, we evaluate various types of convolution blocks, including the ResNet block~\cite{He_2016_CVPR}, Inception block~\cite{Szegedy_2015_CVPR}, DenseNet block~\cite{Huang_2017_CVPR} and Non-local block~\cite{wang2018non}. This token embedding method has the capability to model local image properties, which ViT~\cite{ViT} lacks due to its naive embedding method. The features produced by the token embedding module are reshaped to a sequence of vectors as word tokens. As in~\cite{ViT}, we prepend a learnable classification token to the word token sequence, and then position embeddings are added. The modified token sequence is then fed to the backbone network, which consists of a stack of standard transformer blocks, each containing a multi-head self-attention module and a multi-layer perceptron. For the classification head, unlike the conventional transformers, we explicitly combine the word tokens with the classification token. We investigate different methods to fuse the two kinds of tokens, finding that the sum scheme performs best; it is therefore adopted in our SoT.
We design a family of transformers of varying depths, including a 12-layer SoT-Tiny, a 14-layer SoT-Small and a 24-layer SoT-Base; in addition, we design a 7-layer SoT for the sake of ablation study. The details on these models as well as on the token embedding module are provided in the supplement~\ref{suppsection:SoT network}. \begin{figure}[t] \begin{center} \includegraphics[width=0.8\linewidth]{images/overview.pdf} \end{center} \caption{Diagram of our SoT network. Given an input image, the token embedding module produces a sequence of word tokens, to which a learnable classification token is prepended. After position embeddings are added, the sequence of tokens passes through the backbone, which consists of a stack of standard transformer blocks. Finally, the classification token and the word tokens output by the backbone are fed to the proposed classification head.} \label{fig:overview} \vspace{-6pt} \end{figure} \subsection*{3.2.~Proposed Classification Head}\label{subsection:cross-cov our paradgim} The conventional pure transformer models only use the classification token for the final classifier. As high-level word tokens contain rich information, neglecting them leads to information loss. We therefore propose to combine the word tokens with the classification token for the final classifier. We introduce three early fusion schemes (i.e., $\mathrm{sum}$, $\mathrm{concat}$ and $\mathrm{aggr\_all}$) and a late fusion scheme (i.e., $\mathrm{late}$). Fig.~\ref{fig:fusion schemes} illustrates these schemes, where a $\mathrm{Linear}$ transformation is equivalent to (and thus denoted by) a fully-connected (FC) layer. For the $\mathrm{sum}$ scheme, the classification token and the aggregated word tokens are separately connected to a FC layer and are then summed, before being fed to the softmax classifier. In the $\mathrm{concat}$ scheme, the representations of the classification token and the aggregated word tokens are concatenated, followed by a FC layer and then a softmax classifier.
For the $\mathrm{aggr\_all}$ scheme, we directly aggregate all tokens including both the classification token and the word tokens, and then connect the resulting representation to a FC layer succeeded by a softmax classifier. In the late fusion scheme, the classification token and the word tokens are independently attached to a FC layer and a softmax classifier, and finally the two classification scores are added. \begin{figure}[t!] \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[height=1.4in]{images/fusion_sum.pdf} \caption{Sum } \label{subfigure:fusion_sum} \end{subfigure}% ~\hspace{3pt} \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[height=1.4in]{images/fusion_concat.pdf} \caption{Concat} \label{subfigure:fusion_concat} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[height=1.4in]{images/fusion_all.pdf} \caption{Aggr\_all} \label{subfigure:fusion_aggr_all} \end{subfigure}% ~\hspace{3pt} \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[height=1.4in]{images/fusion_late.pdf} \caption{Late} \label{subfigure:fusion_late} \end{subfigure} \caption{Fusion schemes designed for our classification head.} \label{fig:fusion schemes} \end{figure} Let $\{\mathbf{z}_{i}\in \mathbb{R}^{p}, i=0,1,\ldots,q\}$ denote the features of all tokens, where $\mathbf{z}_{0}$ indicates the classification token while the remaining ones indicate word tokens. We concatenate the features of word tokens to form a matrix $\mathbf{Z}=[\mathbf{z}_{1},\mathbf{z}_{2},\cdots,\mathbf{z}_{q}] \in \mathbb{R}^{p\times q}$. 
The four fusion schemes can be formulated as \begin{align} \mathrm{sum}:\;\;&\mathrm{softmax}\big(\mathrm{FC}(\mathbf{z}_{0})+\mathrm{FC}(\mathrm{pool}(\mathbf{Z}))\big)\\ \mathrm{concat}:\;\;&\mathrm{softmax}\left(\mathrm{FC}\left(\big[\mathbf{z}_{0},\mathrm{pool}(\mathbf{Z})\big]\right)\right)\nonumber\\ \mathrm{aggr\_all}:\;\;&\mathrm{softmax}\big(\mathrm{FC}(\mathrm{pool}([\mathbf{z}_{0},\mathbf{Z}]))\big)\nonumber\\ \mathrm{late}:\;\;&\mathrm{softmax}\big(\mathrm{FC}(\mathbf{z}_{0})\big)+\mathrm{softmax}\big(\mathrm{FC}(\mathrm{pool}(\mathbf{Z}))\big)\nonumber \end{align} Here $\mathrm{pool}(\cdot)$ denotes a pooling function. The commonly used pooling functions are global average pooling (GAP) and global covariance pooling (GCP), which, however, were developed for CNN architectures and may be sub-optimal for the transformer. As described in~\cite[Sec. 3.2.2]{NIPS2017_3f5ee243}, the multi-head structure in the transformer block facilitates modeling information from different representation subspaces. Inspired by this, we propose multi-headed global cross-covariance pooling (MGCrP) with structured normalization, as illustrated in Fig.~\ref{fig:overview} (top-right).
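To make the four fusion schemes concrete, here is a minimal NumPy sketch. The dimensions, randomly initialized FC weights, and the use of GAP as the $\mathrm{pool}(\cdot)$ stand-in are illustrative assumptions for this sketch only, not the paper's trained components:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def pool(Z):
    # GAP as a stand-in for pool(.); the paper's MGCrP would go here instead.
    return Z.mean(axis=1)

rng = np.random.default_rng(0)
p, q, c = 8, 16, 5                       # toy sizes: token dim, #word tokens, #classes
z0 = rng.standard_normal(p)              # classification token z_0
Z = rng.standard_normal((p, q))          # word tokens Z, one per column

# Randomly initialized FC layers (biases omitted), for illustration only.
FC1 = rng.standard_normal((c, p))
FC2 = rng.standard_normal((c, p))
FC_cat = rng.standard_normal((c, 2 * p))

scores_sum    = softmax(FC1 @ z0 + FC2 @ pool(Z))                 # sum scheme
scores_concat = softmax(FC_cat @ np.concatenate([z0, pool(Z)]))   # concat scheme
scores_aggr   = softmax(FC1 @ pool(np.column_stack([z0, Z])))     # aggr_all scheme
scores_late   = softmax(FC1 @ z0) + softmax(FC2 @ pool(Z))        # late scheme
```

Note that the three early schemes each produce one probability vector, whereas the late scheme adds two softmax outputs, so its scores sum to 2 rather than 1.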
\begin{table}[t] \centering \footnotesize \setlength{\tabcolsep}{2pt} \renewcommand\arraystretch{1.2} \begin{minipage}{1\linewidth} \begin{tabular}{l|c|l|c|c} \hline \tabincell{l}{Pooling\\method } & Formula & \tabincell{l}{Matrix\\property} & \tabincell{l}{Structured\\Norm} & \tabincell{c}{Multi-\\head} \\ \hline GAP& $\frac{1}{q}\mathbf{Z}\mathbf{1}$ & \tabincell{l}{Vector} & N/A & \xmark \\ GCP& $\mathrm{MPN}\left(\frac{1}{q}\mathbf{Z}\mathbf{Z}^{T}\right)$ & \tabincell{l}{SPD} & MPN & \xmark \\ \tabincell{c}{MGCrP\\(ours)} & \tabincell{c}{$\left[\mathrm{svPN}\left(\frac{1}{q}\mathbf{W}_{1}\mathbf{Z}\mathbf{Z}^{T}\mathbf{R}_{1}^{T}\right),\ldots,\right.$ \\ $ \left.\mathrm{svPN}\left(\frac{1}{q}\mathbf{W}_{h}\mathbf{Z}\mathbf{Z}^{T}\mathbf{R}_{h}^{T}\right)\right]\vspace{2pt}$ }& \tabincell{l}{Asym} & $\mathrm{svPN}$ &\cmark\\ \hline \end{tabular}% \end{minipage} \caption{Differences between our MGCrP and the classical pooling methods. GAP produces first-order, vectorial representations; GCP produces symmetric positive definite (SPD) matrices to which matrix power normalization (MPN) is applicable; MGCrP yields asymmetric matrices, normalized by the proposed singular value power normalization ($\mathrm{svPN}$). }\label{tab:cross-cov vs cov and attention}% \end{table} In the following, we first introduce cross-covariance pooling with a single head. Given an input matrix $\mathbf{Z}$, we perform two separate linear transformations, obtaining $\mathbf{X}=\mathbf{W}\mathbf{Z}$ and $\mathbf{Y}=\mathbf{R}\mathbf{Z}$, where $\mathbf{W}\in \mathbb{R}^{m\times p}$ and $\mathbf{R}\in \mathbb{R}^{n\times p}$ are learnable weight matrices. Then we compute the cross-covariance matrix $\mathbf{Q}=\frac{1}{q}\mathbf{X}\mathbf{Y}^{T}$. Previous works have shown that structured normalization plays an important role for GCP. However, as $\mathbf{Q}$ is asymmetric (square or non-square), the existing normalization methods for SPD matrices~\cite{PAMI_2020_MPN-COV} are not applicable.
Thus we propose a new structured normalization method, called singular value power normalization (i.e., $\mathrm{svPN}$), details of which are deferred to the next section. Now we can define our global cross-covariance pooling: \begin{align} \mathrm{GCrP}(\mathbf{Z})=\mathrm{svPN}\Big(\frac{1}{q}\mathbf{W}\mathbf{Z}\mathbf{Z}^{T}\mathbf{R}^{T}\Big) \end{align} We further equip our cross-covariance pooling with a multi-head structure: \begin{align} \mathrm{MGCrP}(\mathbf{Z})&=\Big[\mathrm{GCrP}_{1}(\mathbf{Z}),\cdots,\mathrm{GCrP}_{h}(\mathbf{Z})\Big]\nonumber\\ \mathrm{where}\ \mathrm{GCrP}_{i}(\mathbf{Z})&=\mathrm{svPN}\Big(\frac{1}{q}\mathbf{W}_{i}\mathbf{Z}\mathbf{Z}^{T}\mathbf{R}_{i}^{T}\Big) \end{align} Here $[\cdot]$ denotes the concatenation operation; $\mathrm{GCrP}_{i}(\mathbf{Z})$ denotes the cross-covariance matrix of the $i$-th head, and $\mathbf{W}_{i}$ and $\mathbf{R}_{i}$ are two learnable linear projections. Differences among GAP, GCP and MGCrP are summarized in Tab.~\ref{tab:cross-cov vs cov and attention}, where $\mathbf{1}$ in the formula of GAP denotes a $q$-dimensional vector each component of which is 1, and we use state-of-the-art MPN~\cite{LiXWZ17} for GCP. Performance comparison among them is presented in Tab.~\ref{tab:comparison of fusion method}. Ablation analysis of MGCrP is given in Sec.~\hyperref[subsection: ablation of 2nd-order pooling]{\textcolor{red}{4.2.2}}. \subsection*{3.3.~Normalization of the Cross-covariance Matrix}\label{section:svPN} We propose singular value power normalization (svPN) for cross-covariance matrices. As svPN is computationally expensive, we further develop a fast approximate algorithm.
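Putting the GCrP/MGCrP definitions above into code, here is a minimal NumPy sketch. The function names, the toy sizes, and the identity stand-in for the normalization (the paper's svPN, defined in Sec. 3.3, would be plugged in instead) are all illustrative assumptions:

```python
import numpy as np

def gcrp(Z, W, R, normalize=lambda Q: Q):
    # Single-head global cross-covariance pooling: normalize((1/q) W Z Z^T R^T).
    # `normalize` defaults to identity; the paper uses svPN here.
    q = Z.shape[1]
    return normalize((W @ Z) @ (R @ Z).T / q)

def mgcrp(Z, Ws, Rs, normalize=lambda Q: Q):
    # Multi-head variant: concatenate the flattened per-head matrices.
    return np.concatenate([gcrp(Z, W, R, normalize).ravel()
                           for W, R in zip(Ws, Rs)])

rng = np.random.default_rng(0)
# Toy sizes; h=6 heads with m=n=14 match the 1176-dim representation
# reported for MGCrP in the paper's tables.
p, q, h, m, n = 32, 49, 6, 14, 14
Z = rng.standard_normal((p, q))                      # word tokens
Ws = [rng.standard_normal((m, p)) for _ in range(h)] # learnable W_i (random here)
Rs = [rng.standard_normal((n, p)) for _ in range(h)] # learnable R_i (random here)
rep = mgcrp(Z, Ws, Rs)                               # h*m*n = 1176-dim representation
```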
\subsubsection*{3.3.1~~~~Singular Value Power Normalization} \label{subsection:svPN} Our method is motivated by MPN~\cite{PAMI_2020_MPN-COV}, an effective method for normalizing any covariance matrix $\mathbf{P}=\mathbf{Z}\mathbf{Z}^{T}$~\footnote{Without loss of generality, in Sec.~\hyperref[section:svPN]{\textcolor{red}{3.3}}, we omit the constant $1/q$ in representing a covariance matrix or cross-covariance matrix for simplicity. } that is SPD. This normalization consists in computing powers of the eigenvalues, aligned with the eigenvectors of $\mathbf{P}$. In terms of principal component analysis~\cite[Chap. 12]{Book-PRML}, the eigenvalues of $\mathbf{P}$, in decreasing order, correspond to the variances of $\mathbf{Z}$ from maximum to minimum, while the corresponding eigenvectors of $\mathbf{P}$ characterize the principal components. Therefore, MPN can be interpreted as \textit{shrinking these variances aligned with the principal components}. Let us consider $\mathbf{u}^{T}\mathbf{X}\mathbf{Y}^{T}\mathbf{v}$ where $\mathbf{u}\in \mathbb{R}^{m}, \mathbf{v}\in \mathbb{R}^{n}$, which indicates the cross-covariance between the projections of $\mathbf{X}$ on $\mathbf{u}$ and those of $\mathbf{Y}$ on $\mathbf{v}$. For simplicity, we denote $\mathbf{Q}=\mathbf{X}\mathbf{Y}^{T}$. We have the following proposition about such cross-covariances.
\begin{proposition}\label{proposition1-main} \textit{ Given $\mathbf{u}_{i}$ and $\mathbf{v}_{i}$ where $i=1,\ldots, k-1$, let us consider the objective \begin{align}\label{equ:kth-cross-cov-main} \max_{\|\mathbf{u}\|=\|\mathbf{v}\|=1} \;&\mathbf{u}^{T}\mathbf{X}\mathbf{Y}^{T}\mathbf{v}\\ \mathrm{s.t.}\;\;&\mathbf{u}_{i}^{T}\mathbf{u}=0, \;\mathbf{v}_{i}^{T}\mathbf{v}=0, \;i<k.\nonumber \end{align} By inductively optimizing~(\ref{equ:kth-cross-cov-main}) for $k=1,\ldots, \min(m,n)$, we can obtain, in decreasing order, the $k$-th largest cross-covariance $\mathbf{u}_{k}^{T}\mathbf{X}\mathbf{Y}^{T}\mathbf{v}_{k}$, which is equal to the $k$-th singular value $\lambda_{k}$ of $\mathbf{Q}$ while $\mathbf{u}_{k}$ and $\mathbf{v}_{k}$ are the corresponding left and right singular vectors, respectively. } \end{proposition} \vspace{-2pt}In light of this proposition, we can define our normalization by \textit{shrinking the cross-covariances between $\mathbf{X}$ and $\mathbf{Y}$ aligned with the left and right singular vectors of $\mathbf{Q}$}: \begin{align}\label{equ:svPN_alpha_SVD} \mathrm{svPN}(\mathbf{Q})=\sum_{i=1}^{\min(m,n)}\lambda_{i}^{\alpha}\mathbf{u}_{i}\mathbf{v}_{i}^{T}, \end{align} where $0\!\!<\!\!\alpha\!\!<\!\!1$. Our $\mathrm{svPN}$ can be performed accurately via SVD, which, however, is computationally expensive, as the SVD algorithm is GPU-unfriendly~\cite{LiXWG18}. We give a proof of Proposition~\ref{proposition1-main} and the backpropagation of svPN via SVD in the supplement~\ref{suppsection:Proof of Proposition} and~\ref{suppsection:Backpropagation}, respectively. \subsubsection*{3.3.2~~~~Fast Approximate Normalization Algorithm}\label{subsection:approximate svPN} Based on low-rank assumption widely used in machine learning~\cite{low-rank-workshop}, we can efficiently implement approximate normalization by only estimating few largest singular values. We use an iterative method to consecutively estimate the singular values. 
Given an initial vector $\mathbf{v}^{(0)}$, the iterative procedure takes the following form~\cite{SVD_TPAMI_1982}: \begin{align}\label{equ:iteration_u_and_v} \mathbf{u}^{(j+1)}=\dfrac{\mathbf{Q}\mathbf{v}^{(j)}}{\|\mathbf{Q}\mathbf{v}^{(j)}\|},\; \mathbf{v}^{(j+1)}=\dfrac{\mathbf{Q}^{T}\mathbf{u}^{(j+1)}}{\|\mathbf{Q}^{T}\mathbf{u}^{(j+1)}\|} \end{align} where the superscript $j$ denotes the $j$-th iteration. After a few iterations, we obtain approximately the largest singular value $\hat{\lambda}_{1}=\|\mathbf{Q}^{T}\mathbf{u}^{(j+1)}\|$ and the corresponding left singular vector $\hat{\mathbf{u}}_{1}=\mathbf{u}^{(j+1)}$ and right one $\hat{\mathbf{v}}_{1}=\mathbf{v}^{(j+1)}$. Having obtained the $k$ ($k\geq 1$) largest singular values, we deflate the matrix $\mathbf{Q}$ to obtain \begin{align}\label{equ:deflation} \mathbf{Q}'=\mathbf{Q}-\sum_{i=1}^{k}\hat{\lambda}_{i}\hat{\mathbf{u}}_{i}\hat{\mathbf{v}}_{i}^{T} \end{align} For $\mathbf{Q}'$, we can perform the iteration in Eq.~(\ref{equ:iteration_u_and_v}) to obtain approximately the $(k+1)$-th largest singular value $\hat{\lambda}_{k+1}$ and the corresponding singular vectors $\hat{\mathbf{u}}_{k+1}$ and $\hat{\mathbf{v}}_{k+1}$. The deflation~(\ref{equ:deflation}) and the iteration~(\ref{equ:iteration_u_and_v}) can be repeated. Given the $r$ largest singular values, we define the approximate normalization as \begin{align}\label{equ:approximated svPN_alpha_SVD} \widehat{\mathrm{sv}}\mathrm{PN}(\mathbf{Q})\!=\!\sum_{i=1}^{r-1}\hat{\lambda}_{i}^{\alpha}\hat{\mathbf{u}}_{i}\hat{\mathbf{v}}_{i}^{T} \!+\!\dfrac{1}{\hat{\lambda}_{r}^{1-\alpha}}\big(\mathbf{Q}\!-\!\sum_{i=1}^{r-1}\hat{\lambda}_{i}\hat{\mathbf{u}}_{i}\hat{\mathbf{v}}_{i}^{T}\big) \end{align} It shrinks the 1st to the $(r-1)$-th singular values aligned with the corresponding singular vectors, while shrinking the remaining ones by the $r$-th largest singular value.
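The exact normalization and its approximation can be sketched in NumPy as follows. The random initialization, iteration counts, and the diagonal test matrix are illustrative choices for this sketch, not the paper's implementation:

```python
import numpy as np

def svpn(Q, alpha=0.5):
    # Exact svPN via SVD: shrink every singular value to lambda^alpha.
    U, s, Vt = np.linalg.svd(Q, full_matrices=False)
    return (U * s**alpha) @ Vt

def approx_svpn(Q, alpha=0.5, r=1, iters=1, seed=0):
    # Approximate svPN: estimate the r largest singular triplets by
    # power iteration plus deflation, then shrink the residual part
    # by the r-th singular value.
    rng = np.random.default_rng(seed)
    Qd = Q.copy()
    lams, us, vs = [], [], []
    for _ in range(r):
        v = rng.standard_normal(Q.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(iters):                  # power iteration
            u = Qd @ v
            u /= np.linalg.norm(u)
            v = Qd.T @ u
            lam = np.linalg.norm(v)
            v /= lam
        lams.append(lam); us.append(u); vs.append(v)
        Qd = Qd - lam * np.outer(u, v)          # deflation
    head = sum((l**alpha) * np.outer(u, v)
               for l, u, v in zip(lams[:-1], us[:-1], vs[:-1]))
    resid = Q - sum(l * np.outer(u, v)
                    for l, u, v in zip(lams[:-1], us[:-1], vs[:-1]))
    return head + resid / lams[-1]**(1 - alpha)

# On a matrix with known spectrum, exact svPN powers the singular values,
# and the r=1 approximation reduces to Q / lambda_1^(1 - alpha).
Q = np.diag([4.0, 2.0, 1.0])
exact = svpn(Q, alpha=0.5)                          # diag(2, sqrt(2), 1)
approx = approx_svpn(Q, alpha=0.5, r=1, iters=30)   # diag(2, 1, 0.5)
```

With `r > 1` the same function reproduces the deflation scheme of the equations above; extra iterations mainly help when the leading singular values are close together.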
Note that $\widehat{\mathrm{sv}}\mathrm{PN}(\mathbf{Q})$ reduces to $\mathbf{Q}/\hat{\lambda}_{1}^{1-\alpha}$ if we only use the largest singular value. \section*{4.~Experiments}\label{section:experiments} We first introduce the experimental setting in Sec.~\hyperref[subsection:setting]{\textcolor{red}{4.1}}. Then we evaluate the proposed methods on computer vision (CV) tasks and natural language processing (NLP) tasks in Sec.~\hyperref[subsection: image classification]{\textcolor{red}{4.2}} and Sec.~\hyperref[subsection:text classification]{\textcolor{red}{4.3}}, respectively. We train models with 8 NVIDIA 2080Ti GPUs based on the PyTorch framework. Our code will be open-sourced after acceptance. \subsection*{4.1.~Experimental Setting}\label{subsection:setting} Here we briefly describe the benchmarks and training strategy for both CV and NLP tasks. Details on benchmark statistics, task descriptions and hyper-parameter settings are given in the supplement~\ref{suppsection:detailed}. \vspace{4pt}\noindent\textbf{Datasets} For CV tasks, we evaluate on the ILSVRC ImageNet benchmark~\cite{ILSVRC15,imagenet_cvpr09}, which has 1.28M images for training and 50K images for testing. Furthermore, we adopt a more challenging dataset (i.e., ImageNet-A~\cite{ImageNet-A}) for evaluation, which consists of real-world adversarial images involving heterogeneous and varied distribution shifts. For NLP tasks, following the common practice~\cite{GPT-1,DBLP:conf/naacl/DevlinCLT19}, we fine-tune transformer models, pre-trained in an unsupervised manner on a large-scale corpus, on four downstream tasks, i.e., the Corpus of Linguistic Acceptability (CoLA)~\cite{warstadt2018neural}, Recognizing Textual Entailment (RTE)~\cite{bentivogli2009fifth}, Multi-Genre Natural Language Inference (MNLI)~\cite{williams2018broad} and Stanford Question Answering (QNLI)~\cite{rajpurkar2016squad}. \vspace{4pt}\noindent\textbf{Training strategy} In image classification, we train our SoT models on ImageNet from scratch.
Besides scale, color and flip augmentations~\cite{Simonyan15, He_2016_CVPR}, following~\cite{T2T_ICCV21}, we also adopt mixup~\cite{zhang2018mixup}, RandAugment~\cite{cubuk2020randaugment}, cutmix~\cite{yun2019cutmix}, and label smoothing~\cite{Szegedy_2015_CVPR}. We use the AdamW~\cite{loshchilov2018decoupled} algorithm with warmup for network optimization and a cosine annealing schedule for the learning rate. Detailed training strategies, including optimizers and hyper-parameters, are given in the supplement~\ref{suppsection:detailed}. Also, we present in detail the hyper-parameter settings for natural language processing tasks in the supplement~\ref{suppsection:detailed}. \subsection*{4.2.~Image Classification for CV Tasks}\label{subsection: image classification} To make our extensive evaluations computationally feasible on ImageNet, in Sec.~\hyperref[subsection: experiments-how novel]{\textcolor{red}{4.2.1}} and Sec.~\hyperref[subsection: ablation of 2nd-order pooling]{\textcolor{red}{4.2.2}}, we use the 7-layer SoT model; besides, we re-scale the images such that their short sides are 128 and 112$\times$112 patches are cropped as network inputs. In Sec.~\hyperref[subsection:experiment-comparion with state-of-the-art]{\textcolor{red}{4.2.3}}, we compare with state-of-the-art methods using the standard protocol on ImageNet. \vspace{-6pt}\subsubsection*{4.2.1.~~~How Does Our Classification Head Perform?} \label{subsection: experiments-how novel} We first establish a strong baseline model that only uses the classification token. Based on this strong baseline, we compare different fusion schemes and pooling methods. \vspace{6pt}\noindent\textbf{Baseline based on conventional classification head} Several works~\cite{T2T_ICCV21,PSViT_ICCV21} improve the simple embedding method of ViT~\cite{ViT} (i.e., a naive linear projection of fixed patches).
Specifically, T2T~\cite{T2T_ICCV21} introduces soft-split operations called Tokens-to-token, and PS-ViT~\cite{PSViT_ICCV21} proposes a progressive sampling strategy built upon a convolution stem to learn the local structure of images. In contrast, we design a small, hierarchical module based on off-the-shelf convolutions for token embedding, and evaluate various types of convolution blocks. Tab.~\ref{tab:our baseline model} compares these baseline models, where only the classification token is used for the classifier. It can be seen that both T2T and PS-ViT clearly improve over ViT, while all of our models perform much better than both variants with comparable parameters and FLOPs. Our embedding module with the DenseNet block obtains the best result (73.13\%), establishing a strong baseline. This token embedding module will be used across our family of transformer models. \begin{table}[t] \centering \footnotesize \setlength{\tabcolsep}{2pt} \renewcommand\arraystretch{1.1} \begin{minipage}{1\linewidth} \begin{subtable}{1\linewidth} \centering \begin{tabular}{c|l|c|c|c} \hline Model &\tabincell{c}{Token embedding} & \tabincell{c}{Top-1\\ (\%)} & \tabincell{c}{Params\\ (M)} & \tabincell{c}{FLOPs\\ (G)} \\ \hline \multirow{7}{*}{\tabincell{l}{Baseline\\(classification\\ token only)}}&Naive linear proj~\cite{ViT} & 66.25 & 3.98 & 0.73 \\ &Tokens-to-token~\cite{T2T_ICCV21} & 68.30 & 4.25 & 0.81 \\ &Progressive sampling~\cite{PSViT_ICCV21} & 70.48 & 4.13 & 0.90 \\ \cline{2-5} &ResNet block (ours)& 72.51 &4.28 & 0.88 \\ &Inception block (ours) & 71.61 &4.09& 0.81 \\ &DenseNet block (ours)& \textbf{73.13} & 4.23 & 1.06 \\ &Non-local block (ours)& 71.45 & 4.17 &0.89\\ \hline \end{tabular}% \caption{Results of baselines using conventional classification head.}\label{tab:our baseline model}% \end{subtable} \end{minipage} \vspace{6pt} \begin{minipage}{1\linewidth} \begin{subtable}{1\linewidth} \centering \begin{tabular}{c|c|c|c|c} \hline \tabincell{c}{Fusion scheme} & Pool
method & Repr. size & Top-1 (\%) & Params (M) \\ \hline \multicolumn{2}{c|}{Baseline} & 256 & 73.13 & 4.23\\ \multicolumn{2}{c|}{(classification token only)} & 1280 & 73.43 & 5.57\\ \hline \multirow{3}{*}{$\mathrm{aggr\_all}$} & GAP & 256 &73.56 & \textbf{4.22} \\ & GCP & 1176 & 74.12 & 5.15 \\ & MGCrP & 1176 & 74.74 & 5.20 \\ \hline \multirow{3}{*}{$\mathrm{concat}$} & GAP & 512 &73.84&4.73 \\ & GCP & 1432 & 75.37 & 5.66 \\ & MGCrP & 1432 & 75.85 & 5.71 \\ \hline \multirow{3}{*}{$\mathrm{sum}$} & GAP & 256 & 73.85 &4.47\\ & GCP & 1176 & 75.23 & 5.40\\ & MGCrP & 1176 &\textbf{75.97} &5.44\\ \hline \multirow{3}{*}{$\mathrm{late}$}& GAP & 256 & 73.27 & 4.47\\ & GCP & 1176 &73.46&5.40\\ & MGCrP & 1176 &73.98&5.46\\ \hline \end{tabular}% \caption{Results using our classification head. }\label{tab:comparison of fusion method}% \end{subtable} \end{minipage} \caption{Evaluation of proposed classification head. (a) We build a strong baseline which only uses the classification token. (b) We compare different fusion schemes along with pooling methods, based on the strong baseline. } \label{table: effect of fusion} \end{table} \begin{table*} \centering \begin{subtable}[t]{0.38\linewidth} \centering \setlength{\tabcolsep}{2pt} \renewcommand\arraystretch{1.3} \footnotesize \vspace{-43pt} \begin{tabular}{c|c|c|c|c|c} \hline $h$ & 1 & 2 & 4 & 6 & 8 \\ \hline ($m$,$n$) & (32,32) & (24,24) & (16,16) & (14,14) & (12,12) \\ \hline Top-1 (\%) & 75.14 & 75.36 & 75.24 & \textbf{75.97} & 75.37\\ \hline \end{tabular}% \caption{Effect of head number $h$ given fixed Repr. size ($\sim$$\mathrm{1K}$). }\label{tab:comparison of_head} \end{subtable} \begin{subtable}[t]{0.38\linewidth} \centering \vspace{5pt} \begin{tabular}{c|c|c|c|c|c} \hline Repr. size & 0.5K & 1K & 2K & 3K & 6K \\ \hline ($m$,$n$) & (9,9) & (14,14) & (18, 18) & (24,24) & (32,32) \\ \hline Top-1 (\%) & 74.38& 75.97 & 76.24 & 76.73&\textbf{77.01}\\ \hline \end{tabular}% \caption{Effect of Repr. size given fixed head number ($h=6$).
}\label{tab:comparison of_dimension} \end{subtable} \begin{subtable}[t]{0.33\linewidth} \centering \setlength{\tabcolsep}{2pt} \footnotesize \vspace{-15mm} \begin{tabular}{c|c|c|c|c} \hline Method & \multicolumn{2}{c|}{Setting} & \parbox{8mm}{\centering Top-1\\ (\%)} & \tabincell{c}{Speed\\ (Hz)}\\ \hline \multirow{3}{*}{$\mathrm{svPN}$} & \multirow{3}{*}{$\alpha$} & 0.3 & 75.74 &\multirow{3}{*}{110} \\ \cline{3-4} & & 0.5 & \textbf{76.13} & \\ \cline{3-4} & & 0.7 & 75.45 & \\ \hline \multirow{5}{*}{$\widehat{\mathrm{sv}}\mathrm{PN}$} & \multirow{5}{*}{\tabincell{c}{(\#sv,\#iter)\\ $\alpha$=0.5}} & $(1,1)$ & 75.97 & \textbf{2226}\\ \cline{3-5} & & $(1,3)$ & 75.82 & 2207\\ \cline{3-5} & & $(1,5)$ & 74.11 &2188\\ \cline{3-5} & & $(2,1)$ & 73.51 &2207 \\ \cline{3-5} & & $(3,1)$ & 74.89 & 2188\\ \hline \end{tabular}% \caption{Exact normalization versus approximate one.}\label{tab: eact vs approximate svPN}% \end{subtable} \begin{subtable}[t]{0.25\linewidth} \centering \setlength{\tabcolsep}{4pt} \renewcommand\arraystretch{1.2} \footnotesize \begin{tabular}{r|l|c} \hline Method & Top-1 (\%) & Speed (Hz)\\ \hline -- &74.82 & \textbf{2248}\\ $1/\tau$ &75.13$_{\,\text{0.31}\uparrow}$ & 2226\\ EPN~\cite{lin2015bilinear} &73.29$_{\,\text{1.53}\downarrow}$ &2206\\ LN~\cite{Layer_Norm_arXiv_2016} &75.25$_{\,\text{0.43}\uparrow}$ & 2245\\ \hline $\widehat{\mathrm{sv}}\mathrm{PN}$ & 75.97$_{\,\text{1.15}\uparrow}$ &2226\\ $\mathrm{sv}\mathrm{PN}$ & \textbf{76.13}$_{\,\textbf{\text{1.31}}\uparrow}$ &110\\ \hline \end{tabular}% \caption{Comparison with different normalization methods.}\label{tab:comparison of normalization} \end{subtable} \caption{Ablation analysis of MGCrP and normalization.} \end{table*} \vspace{6pt}\noindent\textbf{Effect of our classification head} \label{subsection: experiment-fusion} Tab.~\ref{tab:comparison of fusion method} evaluates the proposed classification head on the basis of the strong baseline.
We adopt iterative square-root normalization (iSQRT)~\cite{LiXWG18} for GCP, which is a fast version of matrix power normalization (MPN)~\cite{LiXWZ17}. For our MGCrP, we use $\widehat{\mathrm{sv}}\mathrm{PN}$ with the single largest singular value. According to the results in Tab.~\ref{tab:comparison of fusion method}, we have two observations. (1) Fundamentally, all fusion schemes improve the baseline regardless of the pooling function, which indicates that explicit combination of the word tokens indeed benefits the transformer models. Among the fusion methods, the $\mathrm{aggr\_all}$ scheme is superior to the $\mathrm{late}$ scheme, which only slightly improves the baseline, while both the $\mathrm{sum}$ and $\mathrm{concat}$ schemes perform much better than the other two fusion methods. (2) The second-order pooling outperforms the first-order pooling by a large margin regardless of fusion method, which is consistent with previous conclusions drawn under CNN architectures~\cite{pami/LinRM18,PAMI_2020_MPN-COV}. For any fusion method, our MGCrP performs better than GCP by about 0.5\%, suggesting that our multi-headed cross-covariance pooling has more powerful representation capability. We note that the $\mathrm{sum}$ scheme with MGCrP obtains the highest accuracy of $75.97\%$. The second-order pooling enlarges the representation size (Repr. size), leading to more parameters than the baseline. For a fair comparison, we add a linear projection layer after the classification token of the baseline model, increasing its dimension to 1280; as a result, its accuracy increases, but only slightly (0.3\%, 2nd row in Tab.~\ref{tab:comparison of fusion method}), which is still much lower than the fusion methods. This suggests that the performance increase of the fusion methods is mainly attributed to more powerful representation capability rather than capacity growth. Note that all our fusion methods bring a negligible increase in FLOPs, compared to the baseline.
\subsubsection*{4.2.2.~~~Ablation Study of MGCrP and Normalization}\label{subsection: ablation of 2nd-order pooling} For the MGCrP module, we first evaluate the number of heads and the representation size. After that, we assess the exact normalization (i.e., $\mathrm{svPN}$) against the approximate one (i.e., $\widehat{\mathrm{sv}}\mathrm{PN}$). Finally, we compare with other normalization methods for cross-covariance matrices. \vspace{3pt}\noindent\textbf{Number of heads and representation size} The representation size (Repr. size) of MGCrP is equal to $h\times m \times n$, where $h$, $m$ and $n$ are the number of heads and the dimensions of the two linear projections, respectively. Exhaustive grid search over these hyper-parameters of MGCrP is computationally prohibitive. For simplification, we set $m=n$, and search for the optimal $h$ with the Repr. size fixed, followed by evaluation of the Repr. size with the number of heads $h$ fixed to the value just determined. Tab.~\ref{tab:comparison of_head} shows accuracy versus $h$ when the Repr. size is fixed to about 1K. It can be seen that $h=6$ achieves the best result. Setting $h$ to 6, Tab.~\ref{tab:comparison of_dimension} shows the effect of the representation size on performance. We can see that the accuracy consistently increases as the Repr. size grows up to 3K; however, further doubling the Repr. size to 6K brings only a minor increase in accuracy, which suggests that the performance tends to saturate. We use six heads for MGCrP throughout the paper, unless otherwise specified. \vspace{3pt}\noindent\textbf{Exact normalization against approximate one} Tab.~\ref{tab: eact vs approximate svPN} (upper panel) shows the effect of the exponent $\alpha$ (Eq.~\ref{equ:svPN_alpha_SVD}) for the exact normalization $\mathrm{svPN}$, where $\alpha=0.5$ achieves the highest accuracy. However, svPN via SVD is computationally very expensive, running at only 110 Hz.
By setting $\alpha$ to 0.5, we demonstrate, in Tab.~\ref{tab: eact vs approximate svPN} (lower panel), the effect of the number of singular values ($\#\mathrm{sv}$) and the number of iterations ($\#\mathrm{iter}$) on $\widehat{\mathrm{sv}}\mathrm{PN}$ (Eq.~\ref{equ:approximated svPN_alpha_SVD}). We note that the approximate normalization is slightly inferior to, but runs 20 times faster than, its exact counterpart. With only the largest singular value, increasing the number of iterations brings no gains; if we use the two or three largest singular values, we observe a performance decline. We conjecture the reason is that more iterations accumulate numerical errors, leading to the performance decline. Notably, $\widehat{\mathrm{sv}}\mathrm{PN}$ with the single largest singular value and one iteration achieves a very competitive accuracy of 75.97\% at the fastest speed of 2226 Hz, and this setting is used throughout, unless otherwise specified. \vspace{3pt}\noindent\textbf{Different normalization methods} Besides the proposed normalization method, there are several other options to normalize the cross-covariance matrices, including layer normalization (LN)~\cite{Layer_Norm_arXiv_2016}, EPN~\cite{lin2015bilinear} and an adaptive scaling. EPN applies the signed square root to each element, followed by $\ell_{2}$ normalization. In contrast to $\widehat{\mathrm{sv}}\mathrm{PN}$, which scales the cross-covariance matrix by $1/\lambda_{1}^{1-\alpha}$, the adaptive scaling learns a scalar $1/\tau>0$ to calibrate the cross-covariance matrix. The comparison results are given in Tab.~\ref{tab:comparison of normalization}. We can see that all normalization methods except EPN improve over the baseline that has no normalization. In particular, our normalization methods perform much better than the competitors, and improve the baseline by~$\sim$~1.2\%, suggesting the superiority of our normalization method for cross-covariance matrices. \begin{table}[htb!]
\centering \setlength{\tabcolsep}{2pt} \footnotesize \renewcommand\arraystretch{1.0} \begin{tabular}{c} \begin{minipage}{1\linewidth} \begin{subtable}{1\linewidth} \centering \footnotesize \setlength{\tabcolsep}{3pt} \begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Model} & Params & FLOPs & ImageNet & ImageNet-A \\ & (M) & (G) & Top-1 (\%) & Top-1 (\%) \\ \hline DeiT-T~\cite{DeiT_ICML} & 5.7 & 1.3 & 72.2 & 7.3 \\ T2T-ViT-7~\cite{T2T_ICCV21} & 4.3 & 1.2 & 71.7 & 6.1 \\ T2T-ViT-12~\cite{T2T_ICCV21} & 6.9 & 2.2 & 76.5 & 12.2 \\ PS-ViT-Ti/14~\cite{PSViT_ICCV21} & 4.8 & 1.6 & 75.6 & -- \\ PVT-T~\cite{PVT_ICCV21} & 13.2 & 1.9 & 75.1 & 7.9 \\ PiT-Ti~\cite{PiT_ICCV21} & 4.9 & 0.7 & 73.0 & 6.2 \\ iRPE-K-T~\cite{iRPE_ICCV21} & 6.0 & 1.3 & 73.7 & 8.8 \\ AutoFormer-tiny~\cite{AutoFormer_ICCV21} & 5.7 & 1.3 & 74.7 & 10.3 \\ SoT-Tiny (ours) & 7.7 & 2.5 & \textbf{80.3} & \textbf{21.5} \\ \hline DeiT-T+ours & 7.0 & 2.3 & 78.6$_{6.4\uparrow}$ & 17.5$_{10.2\uparrow}$ \\ iRPE-K-T+ours & 7.0 &2.3 & 79.0$_{5.3\uparrow}$ & 18.2$_{\;\;9.4\uparrow}$ \\ T2T-ViT-12+ours & 6.9 & 2.3 & 79.4$_{2.9\uparrow}$ & 15.4$_{\;\;3.2\uparrow}$ \\ \hline \end{tabular}% \setlength{\abovecaptionskip}{1.5pt} \setlength{\belowcaptionskip}{4pt} \caption{Comparison with light-weight models.}\label{subtab:small} \end{subtable} \end{minipage} \\ \begin{minipage}{1\linewidth} \begin{subtable}{1\linewidth} \centering \footnotesize \setlength{\tabcolsep}{3pt} \begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Model} & Params & FLOPs & ImageNet & ImageNet-A \\ & (M) & (G) & Top-1 (\%) & Top-1 (\%) \\ \hline DeiT-S~\cite{DeiT_ICML} & 22.1 & 4.6 & 79.8 & 18.9 \\ T2T-ViT-14~\cite{T2T_ICCV21} & 21.5 & 5.2 & 81.5 & 23.9 \\ PVT-S~\cite{PVT_ICCV21} & 24.5 & 3.8 & 79.8 & 18.0 \\ PS-ViT-B/10~\cite{PSViT_ICCV21} & 21.3 & 3.1 & 80.6 & -- \\ PS-ViT-B/14~\cite{PSViT_ICCV21} & 21.3 & 5.4 & 81.7 & 27.3 \\ PS-ViT-B/18~\cite{PSViT_ICCV21} & 21.3 & 8.8 & 82.3 & 31.7 \\ iRPE-QKV-S~\cite{iRPE_ICCV21} & 22.0 & 4.9 & 81.4 & 25.0 \\
PiT-S~\cite{PiT_ICCV21} & 23.5 & 2.9 & 80.9 & 21.7 \\ AutoFormer-small~\cite{AutoFormer_ICCV21} & 22.9 & 5.1 & 81.7 & 25.7 \\ Conformer-Ti~\cite{Conformer_ICCV21} & 23.5 & 5.2 & 81.3 & 27.2 \\ Swin-T~\cite{Swin_ICCV21} & 28.3 & 4.5 & 81.3 & 21.6 \\ SoT-Small (ours) & 26.9 & 5.8 & \textbf{82.7} & \textbf{31.8} \\ \hline DeiT-S+ours & 25.6 & 5.5 & 82.7$_{2.9\uparrow}$ & 32.2$_{13.3\uparrow}$\\ T2T-ViT-14+ours & 24.4 & 5.4 & 82.6$_{1.1\uparrow}$ & 27.1$_{\;\;3.2\uparrow}$ \\ Swin-T+ours & 31.6 & 6.0 & 83.0$_{1.7\uparrow}$ & 33.5$_{11.9\uparrow}$ \\ Conformer-Ti+ours & 30.6 & 6.3 & 83.0$_{1.7\uparrow}$ & 36.4$_{\;\;9.2\uparrow}$ \\ \hline \end{tabular}% \setlength{\abovecaptionskip}{1.5pt} \setlength{\belowcaptionskip}{4pt} \caption{Comparison with middle-sized models.}\label{subtab:middle} \end{subtable} \end{minipage} \\ \begin{minipage}{1\linewidth} \begin{subtable}{1\linewidth} \centering \footnotesize \setlength{\tabcolsep}{3pt} \renewcommand\arraystretch{1.2} \begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Model} & Params & FLOPs & ImageNet & ImageNet-A \\ & (M) & (G) & Top-1 (\%) & Top-1 (\%) \\ \hline DeiT-B~\cite{DeiT_ICML} & 86.6 & 17.6 & 81.8 & 27.4 \\ T2T-ViT-24~\cite{T2T_ICCV21} & 64.1 & 15.0 & 82.3 & 28.9 \\ PVT-L~\cite{PVT_ICCV21} & 61.4 & 9.8 & 81.7 & 26.6 \\ iRPE-K-B~\cite{iRPE_ICCV21} & 87.0 & 17.7 & 82.4 & 31.8 \\ PiT-B~\cite{PiT_ICCV21} & 73.8 & 12.5 & 82.0 & 33.9 \\ AutoFormer-base~\cite{AutoFormer_ICCV21} & 54.0 & 11.0 & 82.4 & 28.8 \\ Swin-B~\cite{Swin_ICCV21} & 87.8 & 15.4 & \textbf{83.5} & \textbf{35.8} \\ SoT-Base (ours) & 76.8 & 14.5 & \textbf{83.5} & 34.6 \\ \hline DeiT-B+ours & 94.9 & 18.2 & 82.9$_{1.1\uparrow}$ & 29.1$_{1.7\uparrow}$ \\ T2T-ViT-24+ours & 72.1 & 15.5 & 83.3$_{1.0\uparrow}$ & 30.1$_{1.2\uparrow}$ \\ Swin-B+ours & 95.9 & 16.9 & 84.0$_{0.5\uparrow}$ & 42.9$_{7.1\uparrow}$ \\ \hline \end{tabular}% \setlength{\abovecaptionskip}{1.5pt} \caption{Comparison with heavyweight models.
}\label{subtab:large} \end{subtable}% \end{minipage}\\ \end{tabular} \caption{Comparison with state-of-the-art vision transformer models on image classification tasks.} \label{tab:sota} \end{table} \subsubsection*{4.2.3.~~~Comparison with State-of-the-art Methods}\label{subsection:experiment-comparion with state-of-the-art} We present comparisons with a series of vision transformer models. Tabs.~\ref{subtab:small}, \ref{subtab:middle} and \ref{subtab:large} show comparison results with light-weight models, middle-sized models and heavyweight models, respectively. In light of these results, we can draw the following three conclusions. (1) As regards our family of transformer models, on ImageNet, SoT-Tiny significantly outperforms the competing light-weight transformers by 3.8\%. When the network deepens, the performance gaps between our models and the competitors become smaller. This is natural: as the network gets deeper, further performance increases become more difficult~\cite{He_2016_CVPR}. (2) When we equip state-of-the-art architectures with our method, on ImageNet, we invariably observe consistent benefits while introducing a small additional cost. For light-weight models, the gains are substantial, i.e., 6.4\%, 5.3\% and 2.9\% for DeiT-T, iRPE-K-T and T2T-ViT-12, respectively. For middle-sized models, the improvements are 1.1\%$\sim$2.9\%. Even for very strong heavyweight models, we can still achieve gains of 0.5\%$\sim$1.0\%. Note that Swin-Transformer models have no classification token, so we use our MGCrP in place of the original GAP. These comparison results demonstrate that our method generalizes well to different vision transformer architectures. (3) On ImageNet-A, our SoT-Tiny is superior across the light-weight models, while our SoT-Small and SoT-Base are very competitive compared to state-of-the-art models.
Notably, equipped with our method, the compared state-of-the-art methods achieve impressive improvements, i.e., 3.2\%$\sim$10.2\% for light-weight models, 3.2\%$\sim$13.3\% for middle-sized models and 1.2\%$\sim$7.1\% for heavyweight models. These results indicate that our classification head can substantially enhance the robustness of different architectures. \subsection*{4.3~~Text Classification for NLP Tasks}\label{subsection:text classification} Finally, we evaluate our classification head on natural language processing tasks. Note that our purpose here is not to achieve state-of-the-art performance; instead, we intend to show how our classification head performs against the conventional classification head. Two kinds of pre-trained transformer models are used, namely GPT~\cite{GPT-1} as well as BERT~\cite{DBLP:conf/naacl/DevlinCLT19} and its stronger variants (i.e., SpanBERT~\cite{joshi2020spanbert} and RoBERTa~\cite{roberta}). According to Tab.~\ref{tab:text_classification}, on CoLA and RTE, our method with GPT models improves over the conventional one by 2.18\% or more, while with BERT models and their variants, we achieve 1.19\%$\sim$6.29\% gains in accuracy. As opposed to CoLA and RTE, the improvement on MNLI and QNLI is not as large: we achieve gains of 0.31\% for GPT and 0.27\%$\sim$0.73\% for BERT and its variants. Note that MNLI and QNLI are much bigger than CoLA and RTE, and both contain sentences similar to those in the pre-training datasets; consequently, the performance of individual models may tend to saturate and further improvement becomes difficult. Notably, the performance boost brought by our method persists for stronger models, e.g., the gains over the stronger RoBERTa are comparable to those over BERT. It is well known that real-world tasks often have limited labeled data since human annotations are expensive and laborious.
For such tasks, our method is preferable as it provides a nontrivial performance increase over the conventional method. \begin{table}[t] \centering \footnotesize \setlength{\tabcolsep}{2pt} \renewcommand\arraystretch{1.0} \begin{tabular}{l|l|l|l|l} \hline Model & CoLA & RTE & MNLI & QNLI\\ \hline GPT~\cite{GPT-1} & 54.32 & 63.17 & 82.10 & 86.36\\ GPT+ours & 57.25$_{2.93\uparrow}$ & 65.35$_{2.18\uparrow}$ & 82.41$_{0.31\uparrow}$ & 87.13$_{0.77\uparrow}$\\ \hline \hline BERT-base~\cite{DBLP:conf/naacl/DevlinCLT19} & 54.82 & 67.15 & 83.47 & 90.11 \\ BERT-base+ours & 58.03$_{3.21\uparrow}$ & 69.31$_{2.16\uparrow}$ & 84.20$_{0.73\uparrow}$ & 90.78$_{0.67\uparrow}$\\ BERT-large~\cite{DBLP:conf/naacl/DevlinCLT19} & 60.63 & 73.65 & 85.90 & 91.82 \\ BERT-large+ours & 61.82$_{1.19\uparrow}$ & 75.09$_{1.44\uparrow}$ & 86.46$_{0.56\uparrow}$ & 92.37$_{0.55\uparrow}$\\ \hline SpanBERT-base~\cite{joshi2020spanbert} & 57.48 & 73.65 & 85.53 & 92.71\\ SpanBERT-base+ours & 63.77$_{6.29\uparrow}$ & 77.26$_{3.61\uparrow}$ & 86.13$_{0.60\uparrow}$ & 93.31$_{0.60\uparrow}$ \\ SpanBERT-large~\cite{joshi2020spanbert} & 64.32 & 78.34 & 87.89 & 94.22 \\ SpanBERT-large+ours & 65.94$_{1.62\uparrow}$ & 79.79$_{1.45\uparrow}$ & 88.16$_{0.27\uparrow}$ & 94.49$_{0.27\uparrow}$\\ \hline RoBERTa-base~\cite{roberta} & 61.58 & 77.60 & 87.50 & 92.70\\ RoBERTa-base+ours & 65.28$_{3.70\uparrow}$ & 80.50$_{2.90\uparrow}$ & 87.90$_{0.40\uparrow}$ & 93.10$_{0.40\uparrow}$ \\ RoBERTa-large~\cite{roberta} & 67.98 & 86.60 & 90.20 & 94.70 \\ RoBERTa-large+ours & 70.90$_{2.92\uparrow}$ & 88.10$_{1.50\uparrow}$ & 90.50$_{0.30\uparrow}$ & 95.00$_{0.30\uparrow}$ \\ \hline \end{tabular}% \caption{Performance improvement over language transformer models on text classification tasks.}\label{tab:text_classification}% \end{table} \section*{5.~Conclusion} In this paper, we propose a novel second-order transformer (SoT) model.
The key to our SoT is a novel classification head which simultaneously exploits the word tokens and the classification token. As far as we know, it goes beyond for the first time the prevalent classification paradigm of transformers, which exclusively uses the classification token. We perform extensive ablation analysis on ImageNet, validating the effectiveness and superiority of our method. The proposed classification head is flexible and fits a variety of vision transformer architectures, significantly improving them on challenging image classification tasks. Moreover, the proposed classification head generalizes to language transformer architectures, performing much better than the conventional classification head on general language understanding tasks. \section{Architectures of Our Family of SoT}\label{suppsection:SoT network} Following~[\hyperref[reference:ViT-supp]{\textcolor{green}{S-7}}], our transformer architecture consists of a token embedding module, a backbone and a classification head, as shown in Tab.~\ref{table:architecture}. For token embedding, the original ViT only uses a single linear projection of fixed image patches, failing to model local image information. To address this limitation, we design a small, hierarchical module based on off-the-shelf convolutions. Note that our simple embedding module has proven very competitive compared to the other variants, as shown in the main paper. \paragraph{Token embedding} Our embedding module consists of a stem and a stack of three convolution blocks, gradually decreasing the image resolution. The stem is a $3\times 3$ convolution followed by a max pooling of stride 2 (S2).
The design of the blocks is flexible, and we choose the ResNet block~[\hyperref[reference:He_2016_CVPR-supp]{\textcolor{green}{S-9}}], Inception block~[\hyperref[reference:szegedy2016rethinking-supp]{\textcolor{green}{S-24}}], DenseNet block~[\hyperref[reference:huang2017densely-supp]{\textcolor{green}{S-11}}] and Non-local block~[\hyperref[reference:wang2018non-supp]{\textcolor{green}{S-29}}], whose configurations are shown in Fig.~\ref{fig:blocks}. We halve the spatial size of the feature maps in each block, and after the last block, we use a 1$\times$1 convolution to change the feature dimension so that it is consistent with that of the backbone. For an input image of $224\times 224$, our token embedding outputs $14\times 14$ spatial features, reshaped to a sequence of 196 vectors as word tokens. \paragraph{Backbone of transformer} We build the backbone by stacking standard transformer blocks as in~[\hyperref[reference:ViT-supp]{\textcolor{green}{S-7}}], where each transformer block contains a multi-head self-attention (MSA) and a multi-layer perceptron (MLP). Throughout the backbone, the dimension of tokens (token size) remains unchanged. In MSA, we specify the number of heads; the dimensions of queries, keys and values are determined accordingly. Each MLP contains two fully-connected (FC) layers, where the dimension of the hidden layer is increased (called the MLP size). \paragraph{Classification head} The proposed classification head combines the classification token and word tokens, where the word tokens are aggregated by multi-headed global cross-covariance pooling (MGCrP) with the structured normalization svPN. We keep the number of heads unchanged and vary the dimensions ($m$ and $n$) of the two linear projections for SoT of different depths. We develop a family of SoT models, namely a light-weight 12-layer SoT-Tiny, a middle-sized 14-layer SoT-Small and a heavyweight 24-layer SoT-Base, whose configurations are shown in Tab.~\ref{table:architecture}.
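As a quick sanity check on the shapes above, the following sketch (pure Python; the helper name is ours, not the paper's) traces the spatial resolution through the stem and the three convolution blocks, each of which halves the resolution:

```python
def embedding_grid(img_size=224):
    """Trace the spatial resolution through the token embedding module.

    The stem (3x3 convolution + stride-2 max pooling) halves the
    resolution once; each of the three convolution blocks halves it again.
    Returns (grid side length, number of word tokens).
    """
    side = img_size
    side //= 2            # stem: 3x3 conv + max pooling of stride 2
    for _ in range(3):    # three convolution blocks, each halving resolution
        side //= 2
    return side, side * side

# A 224x224 input yields a 14x14 grid, i.e. a sequence of 196 word tokens.
print(embedding_grid(224))  # (14, 196)
```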
In addition, to make computationally feasible the ablation analysis where extensive evaluations on ImageNet are involved, we also design a 7-layer SoT. It shares the same configuration with SoT-Tiny, but only has 7 layers and moreover, the downsampling of the last block in the token embedding module is removed. \begin{table}[t] \begin{minipage}{1.0\linewidth} \centering \footnotesize \setlength{\tabcolsep}{4pt} \renewcommand\arraystretch{1.3} \begin{tabular}{l|l|ccc} \hline & & SoT-Tiny & SoT-Small & SoT-Base \\ \hline \parbox{0.5in}{\vspace{1mm}Token \\embedding}& Token size & 240 & 384 & 528 \\ \hline \multirow{3}{*}{Backbone} & Layers & 12 & 14 & 24 \\ & MSA heads & 4 & 6 & 8 \\ & MLP size & 600 & 1344 & 1584 \\ \hline Classification &MGCrP heads & 6 & 6 & 6 \\ head& MGCrP $(m,n)$ & (14,14) & (24,24) & (38,38) \\ \hline \multicolumn{2}{c|}{Parameters (M)} & 7.7 & 26.9 & 76.8 \\ \multicolumn{2}{c|}{FLOPs (G)} & 2.5 & 5.8 & 14.5\\ \hline \end{tabular}% \caption{Architectures of the proposed SoT networks.}\label{table:architecture}% \end{minipage}\hfill \vspace{12pt} \begin{minipage}{1.0\linewidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[height=1.75in]{images/DenseNet.pdf} \caption{DenseNet block} \label{subfigure:DenseNet_block} \end{subfigure}% ~\hspace{3pt} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[height=1.4in]{images/Non-local.pdf} \caption{Non-local block} \label{subfigure:Non-local_block} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[height=1.35in]{images/ResNet.pdf} \caption{ResNet block} \label{subfigure:ResNet_block} \end{subfigure}% ~\hspace{3pt} \begin{subfigure}[b]{0.60\textwidth} \centering \includegraphics[height=1.4in]{images/Inception.pdf} \caption{Inception block} \label{subfigure:Inception_block} \end{subfigure} \captionof{figure}{Illustration of different type of convolution blocks in our token embedding module. 
S1/2: Stride 1/2; Avg: average pooling; Max: max pooling.} \label{fig:blocks} \end{minipage} \end{table} \section{Proof of Proposition 1 }\label{suppsection:Proof of Proposition} \begin{proposition}\label{proposition1} \textit{ Given $\mathbf{u}_{i}$ and $\mathbf{v}_{i}$ where $i=1,\ldots, k-1$, let us consider the objective \begin{align}\label{equ:kth-cross-cov} \max_{\|\mathbf{u}\|=\|\mathbf{v}\|=1} \;&\mathbf{u}^{T}\mathbf{X}\mathbf{Y}^{T}\mathbf{v}\\ \mathrm{s.t.}\;\;&\mathbf{u}_{i}^{T}\mathbf{u}=0, \;\mathbf{v}_{i}^{T}\mathbf{v}=0, \;i<k.\nonumber \end{align} By inductively optimizing~(\ref{equ:kth-cross-cov}) for $k=1,\ldots, \min(m,n)$, we can obtain, in decreasing order, the $k$-th largest cross-covariance $\mathbf{u}_{k}^{T}\mathbf{X}\mathbf{Y}^{T}\mathbf{v}_{k}$, which is equal to the $k$-th singular value $\lambda_{k}$ of $\mathbf{Q}$ while $\mathbf{u}_{k}$ and $\mathbf{v}_{k}$ are the corresponding left and right singular vectors, respectively. } \end{proposition} \begin{proof} We prove this proposition using mathematical induction. Note that $\mathbf{Q}=\mathbf{X}\mathbf{Y}^{T}$. For convenience, we let $R(\mathbf{Q},\mathbf{u},\mathbf{v})=\mathbf{u}^{T}\mathbf{X}\mathbf{Y}^{T}\mathbf{v}$. \vspace{6pt}\noindent\textbf{Initial case } First, let us consider the initial case of objective~(\ref{equ:kth-cross-cov}) for which $k=1$, i.e., $\max_{\|\mathbf{u}\|=\|\mathbf{v}\|=1}R(\mathbf{Q},\mathbf{u},\mathbf{v})$. Note that $\|\mathbf{u}\|=1$ is equivalent to $\|\mathbf{u}\|^2=\mathbf{u}^{T}\mathbf{u}=1$. The Lagrange function associated with the objective~(\ref{equ:kth-cross-cov}) is \begin{align}\label{equ:lagrange_1st} \mathcal{L}(\mathbf{u},\mathbf{v},\gamma,\beta)=R(\mathbf{Q}, \mathbf{u}, \mathbf{v})-\dfrac{\gamma}{2}(\mathbf{u}^{T}\mathbf{u}-1)-\dfrac{\beta}{2} (\mathbf{v}^{T}\mathbf{v}-1). 
\end{align} We calculate the gradient $\triangledown\mathcal{L}=\Big[\frac{\partial \mathcal{L}}{\partial \mathbf{u}},\frac{\partial \mathcal{L}}{\partial \mathbf{v}},\frac{\partial \mathcal{L}}{\partial \gamma},\frac{\partial \mathcal{L}}{\partial \beta}\Big]$, where $\frac{\partial \mathcal{L}}{\partial \mathbf{u}}$ denotes the partial derivative of $\mathcal{L}$ with respect to $\mathbf{u}$, and set it to zero. After some manipulations, we have \begin{align}\label{equ: a set of_first} \mathbf{Q}\mathbf{v}-\gamma \mathbf{u}=0,\\ \mathbf{Q}^{T}\mathbf{u}-\beta \mathbf{v}=0,\nonumber\\ \mathbf{u}^{T}\mathbf{u}-1=0,\nonumber\\ \mathbf{v}^{T}\mathbf{v}-1=0.\nonumber \end{align} The last two equations simply reproduce the constraints, i.e., $\|\mathbf{u}\|=1$ and $\|\mathbf{v}\|=1$. We left multiply the first and second equations by $\mathbf{u}^{T}$ and $\mathbf{v}^{T}$, respectively. Then we can obtain \begin{align*} \gamma=\beta=R(\mathbf{Q},\mathbf{u},\mathbf{v}), \end{align*} as $\mathbf{u}^{T}\mathbf{Q}\mathbf{v}=\mathbf{v}^{T}\mathbf{Q}^{T}\mathbf{u}$. Therefore, we have \begin{align}\label{equ: a pair of left and right SV} \mathbf{Q}\mathbf{v}=\gamma \mathbf{u},\\ \mathbf{Q}^{T}\mathbf{u}=\gamma \mathbf{v}. \nonumber \end{align} According to the property of SVD~[\hyperref[reference:Golub_4th]{\textcolor{green}{S-8}\textcolor{black}{, Chap.2.4}}], we know that $\mathbf{u}$ and $\mathbf{v}$ which satisfy~(\ref{equ: a pair of left and right SV}) are respectively left and right singular vectors of $\mathbf{Q}$, with $\gamma$ being the singular value. Obviously, $R(\mathbf{Q},\mathbf{u},\mathbf{v})$ achieves its maximum when it is equal to the largest singular value. Therefore, by optimizing the objective~(\ref{equ:kth-cross-cov}) with $k=1$, we obtain vectors $\mathbf{u}_{1}$ and $\mathbf{v}_{1}$ which are respectively the left and right singular vectors corresponding to the largest singular value $\lambda_{1}=R(\mathbf{Q},\mathbf{u}_{1},\mathbf{v}_{1})$.
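The initial case can also be checked numerically. The sketch below (NumPy, with hypothetical matrix sizes of our choosing) verifies that the cross-covariance attained by the leading singular vectors equals $\lambda_{1}$, that it upper-bounds $R(\mathbf{Q},\mathbf{u},\mathbf{v})$ for random unit directions, and that a simple power iteration recovers $\lambda_{1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 50))    # hypothetical token matrices X, Y
Y = rng.standard_normal((4, 50))
Q = X @ Y.T                          # cross-covariance matrix

U, s, Vt = np.linalg.svd(Q, full_matrices=False)
u1, v1 = U[:, 0], Vt[0]

# u1^T Q v1 equals the largest singular value lambda_1 ...
assert np.isclose(u1 @ Q @ v1, s[0])

# ... and upper-bounds R(Q, u, v) for random unit vectors u, v.
for _ in range(100):
    u = rng.standard_normal(6); u /= np.linalg.norm(u)
    v = rng.standard_normal(4); v /= np.linalg.norm(v)
    assert u @ Q @ v <= s[0] + 1e-8

# Power iteration on Q^T Q also estimates lambda_1, without a full SVD.
v = rng.standard_normal(4)
for _ in range(200):
    v = Q.T @ (Q @ v)
    v /= np.linalg.norm(v)
lam1 = np.linalg.norm(Q @ v)
assert np.isclose(lam1, s[0], rtol=1e-3)
```

The last part illustrates why an iterative estimate of the largest singular value, as discussed for the approximate normalization in the main paper, can avoid an explicit SVD.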
\vspace{6pt}\noindent\textbf{Inductive step} Next, we optimize the objective~(\ref{equ:kth-cross-cov}) for $k>1$. Suppose the statement holds for $i<k$. That is, for any $i$, $\mathbf{u}_{i}$ and $\mathbf{v}_{i}$, which maximize $R(\mathbf{Q},\mathbf{u},\mathbf{v})$ while satisfying the constraints $\|\mathbf{u}_{i}\|=\|\mathbf{v}_{i}\|=1$ and $\mathbf{u}_{i}^{T}{\mathbf{u}_{i'}}=0, \mathbf{v}_{i}^{T}{\mathbf{v}_{i'}}=0, i'<i$, are the singular vectors corresponding to the $i$-th largest singular value $\lambda_{i}$. Obviously \begin{align*} \underbrace{R(\mathbf{Q},\mathbf{u}_{1},\mathbf{v}_{1})}_{\lambda_{1}}\geq\ldots\geq \underbrace{R(\mathbf{Q},\mathbf{u}_{k-1},\mathbf{v}_{k-1})}_{\lambda_{k-1}}. \end{align*} Now, let us prove the statement holds for the case $k$. The Lagrange function of the objective~(\ref{equ:kth-cross-cov}) is \begin{align}\label{equ:lagrange_kth} \mathcal{L}(&\mathbf{u},\mathbf{v},\gamma,\beta,\tau_{i},\delta_{i})=R(\mathbf{Q},\mathbf{u},\mathbf{v})-\dfrac{\gamma}{2}(\mathbf{u}^{T}\mathbf{u}-1)\\ &-\dfrac{\beta}{2} (\mathbf{v}^{T}\mathbf{v}-1)-\sum\nolimits_{i=1}^{k-1}\tau_{i}\mathbf{u}^{T}\mathbf{u}_{i}-\sum\nolimits_{i=1}^{k-1}\delta_{i}\mathbf{v}^{T}\mathbf{v}_{i}.\nonumber \end{align} We calculate the gradient of the Lagrange function $\triangledown\mathcal{L}=\Big[\frac{\partial \mathcal{L}}{\partial \mathbf{u}},\frac{\partial \mathcal{L}}{\partial \mathbf{v}},\frac{\partial \mathcal{L}}{\partial \gamma},\frac{\partial \mathcal{L}}{\partial \beta},\frac{\partial \mathcal{L}}{\partial \tau_{1}},\ldots, \frac{\partial \mathcal{L}}{\partial \tau_{k-1}},\frac{\partial \mathcal{L}}{\partial \delta_{1}},\ldots, \frac{\partial \mathcal{L}}{\partial \delta_{k-1}}\Big]$.
By setting $\triangledown\mathcal{L}$ to zero, we obtain a set of equations \begin{align}\label{equ: a set of_kth} \mathbf{Q}\mathbf{v}-\gamma \mathbf{u}-\sum\nolimits_{i=1}^{k-1}\tau_{i}\mathbf{u}_{i}=0,\\ \mathbf{Q}^{T}\mathbf{u}-\beta \mathbf{v}-\sum\nolimits_{i=1}^{k-1}\delta_{i}\mathbf{v}_{i}=0,\nonumber\\ \mathbf{u}^{T}\mathbf{u}-1=0,\nonumber\\ \mathbf{v}^{T}\mathbf{v}-1=0,\nonumber\\ \mathbf{u}^{T}\mathbf{u}_{i}=0,\; i=1,\ldots, k-1,\nonumber\\ \mathbf{v}^{T}\mathbf{v}_{i}=0,\; i=1,\ldots, k-1.\nonumber \end{align} The third through the last equations simply restate the constraints of the maximization problem~(\ref{equ:kth-cross-cov}). We left multiply the first (resp., second) equation by $\mathbf{u}^{T}$ (resp., $\mathbf{v}^{T}$), and obtain $\mathbf{u}^{T}\mathbf{Q}\mathbf{v}=\gamma$ (resp., $\mathbf{\mathbf{v}}^{T}\mathbf{Q}^{T}\mathbf{u}=\beta$), by noting that $\mathbf{u}^{T}\mathbf{u}_{i}=0$ (resp., $\mathbf{v}^{T}\mathbf{v}_{i}=0$) for $i<k$. Therefore, we have $\gamma=\beta=R(\mathbf{Q},\mathbf{u},\mathbf{v})$. Next, we show that $\tau_{j}=0$ for $j<k$. We left multiply the first equation by $\mathbf{u}_{j}^{T}$, $j<k$. Recalling that $\mathbf{u}_{j}$ is orthogonal to $\mathbf{u}_{j'}$ for $j'\neq j$ and to $\mathbf{u}$, we can derive \begin{align*} \tau_{j}=\mathbf{u}_{j}^{T}\mathbf{Q}\mathbf{v}. \end{align*} As $\mathbf{u}_{j}$ is a left singular vector of $\mathbf{Q}$, we know $\mathbf{Q}^{T}\mathbf{u}_{j}=\lambda_{j}\mathbf{v}_{j}$, i.e., $\mathbf{u}_{j}^{T}\mathbf{Q}=\lambda_{j}\mathbf{v}_{j}^{T}$. Therefore we have \begin{align*} \tau_{j}=\mathbf{u}_{j}^{T}\mathbf{Q}\mathbf{v}=\lambda_{j}\mathbf{v}_{j}^{T}\mathbf{v}=0. \end{align*} Here we make use of the fact that $\mathbf{v}_{j}$ is orthogonal to $\mathbf{v}$. In a similar manner, we left multiply the second equation by $\mathbf{v}_{j}^{T}$, $j<k$, and derive $\delta_{j}=0$.
Up to this point, we know that $\mathbf{u}$ and $\mathbf{v}$ for which the objective~(\ref{equ:kth-cross-cov}) is maximized satisfy the following pair of equations \begin{align}\label{equ: a pair of left and right SV--induction} \mathbf{Q}\mathbf{v}=\gamma \mathbf{u},\\ \mathbf{Q}^{T}\mathbf{u}=\gamma \mathbf{v}. \nonumber \end{align} Again, according to the property of SVD, we know $\mathbf{u}$ and $\mathbf{v}$ are left and right singular vectors of $\mathbf{Q}$ and $\gamma$ is the corresponding singular value. Obviously, $\gamma$ achieves its maximum when it equals the $k$-$\mathrm{th}$ largest singular value. This concludes our proof. \end{proof} As far as we know, the statement described in Proposition~\ref{proposition1} appeared earlier in~[\hyperref[reference:MCA_SVD_Climate-supp]{\textcolor{green}{S-2}}] and later in~[\hyperref[reference:book_statistical_1999-supp]{\textcolor{green}{S-27}\textcolor{black}{, Chap. 14.1.7}}], among others. However, we could not find any formal proof of this statement; hence, we provide one here. It is worth mentioning that this statement is closely related to but different from canonical correlation analysis (CCA). For detailed theory on CCA, one may refer to~[\hyperref[reference:10.1145/3136624-supp]{\textcolor{green}{S-26}}]. \section{Backpropagation of svPN via SVD}\label{suppsection:Backpropagation} Let $\mathbf{Q}= \mathbf{U}\mathrm{diag}(\lambda_{i})\mathbf{V}^{T}$ be the SVD of $\mathbf{Q}$.
The forward propagation of our normalization $\tilde{\mathbf{Q}}\stackrel{\vartriangle}{=}\mathrm{svPN}(\mathbf{Q})$ can be described in two consecutive steps as follows: \begin{align}\label{equ:svPN} \mathbf{Q}\stackrel{\;\mathrm{SVD}\;}{\longrightarrow}\mathbf{U}\mathrm{diag}(\lambda_{i})\mathbf{V}^{T}\stackrel{\mathrm{power}\;}{\longrightarrow}\mathbf{U}\mathrm{diag}(\lambda_{i}^{\alpha})\mathbf{V}^{T} \end{align} The associated backward propagation is not straightforward, as structured, nonlinear matrix operations are involved. Suppose $l$ is the network loss function. Let us denote $\mathbf{A}_{\mathrm{sym}}=\frac{1}{2}(\mathbf{A}+\mathbf{A}^{T})$, $\mathbf{\Lambda}=\mathrm{diag}(\lambda_{i})$, and $\mathbf{A}_{\mathrm{diag}}$ the matrix obtained by setting the off-diagonal entries of $\mathbf{A}$ to zero. Based on the theory of matrix backpropagation~[\hyperref[reference:Ionescu_arXiv15-supp]{\textcolor{green}{S-12}}], we can derive the gradients relative to $\mathrm{svPN}$ via SVD, which are given in the following corollary. \begin{corollary}\label{corollary:backpropagation of svPN} Suppose we have $\dfrac{\partial l}{\partial \tilde{\mathbf{Q}}}$ from the succeeding layer. The gradient involved in the first step of~(\ref{equ:svPN}) is \begin{align*} &\dfrac{\partial l}{\partial \mathbf{Q}}=\dfrac{\partial l}{\partial \mathbf{U}}\mathbf{\Lambda}^{-1}\mathbf{V}^{T}+\mathbf{U}\Big(\dfrac{\partial l}{\partial \mathbf{\Lambda}}-\mathbf{U}^{T}\dfrac{\partial l}{\partial \mathbf{U}}\mathbf{\Lambda}^{-1}\Big)_{\mathrm{diag}}\mathbf{V}^{T}\\ &+2\mathbf{U}\mathbf{\Lambda}\Big(\mathbf{K}^{T}\circ \Big( \mathbf{V}^{T}\Big( \dfrac{\partial l}{\partial \mathbf{V}}-\mathbf{V}\mathbf{\Lambda}^{-1} \Big(\dfrac{\partial l}{\partial \mathbf{U}}\Big)^{T}\mathbf{U}\mathbf{\Lambda} \Big)\Big) \Big)_{\mathrm{sym}}\mathbf{V}^{T} \end{align*} where $K_{ij}=(\lambda_{i}^2-\lambda_{j}^2)^{-1}$ if $\lambda_{i}\neq \lambda_{j}$ and $K_{ij}=0$ otherwise, and $\circ$ denotes the Hadamard product.
The partial derivatives with respect to the second step of~(\ref{equ:svPN}) are \begin{align*} \dfrac{\partial l}{\partial \mathbf{U}}&=\dfrac{\partial l}{\partial \tilde{\mathbf{Q}}}\mathbf{V}\mathbf{\Lambda}^{\alpha},\\ \dfrac{\partial l}{\partial \mathbf{V}}&=\Big(\dfrac{\partial l}{\partial \tilde{\mathbf{Q}}}\Big)^{T}\mathbf{U}\mathbf{\Lambda}^{\alpha},\\ \dfrac{\partial l}{\partial \mathbf{\Lambda}}&=\alpha\mathbf{\Lambda}^{\alpha-1}\mathbf{U}^{T}\dfrac{\partial l}{\partial \tilde{\mathbf{Q}}}\mathbf{V}. \end{align*} \end{corollary} \section{Detailed Experimental Settings for CV and NLP Tasks}\label{suppsection:detailed} \subsection{Benchmark Description} \subsubsection{Benchmarks Used in CV} \vspace{6pt}\noindent\textbf{ImageNet} Our experiments are mainly conducted on ILSVRC ImageNet 2012 image classification benchmark~[\hyperref[reference:ILSVRC15-supp]{\textcolor{green}{S-21}},\hyperref[reference:imagenet_cvpr09-supp]{\textcolor{green}{S-5}}], which contains 1K classes with 1.28M images for training and 50K images for validation. Note that as test images are not publicly available, the common practice is to adopt the validation images for testing~[\hyperref[reference:T2T_ICCV21-supp]{\textcolor{green}{S-34}},\hyperref[reference:DeiT_ICML-supp]{\textcolor{green}{S-25}},\hyperref[reference:PSViT_ICCV21-supp]{\textcolor{green}{S-35}}]. We train the transformer models from scratch on the training set and report top-1 accuracy on the validation set. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{images/IN-A_examples.pdf} \end{center} \caption{Examples of adversarial images from ImageNet-A (1st column). The {\color{blue}{blue}} text is the test class, and the {\color{red}{red}} text is the false prediction and the score produced by a DeiT-Small model. The 2nd and 3rd columns show the images of the corresponding classes from ImageNet. 
It can be seen that the images from ImageNet-A are highly adversarial, and their distribution deviates markedly from that of ImageNet. On the challenging ImageNet-A, the proposed method can substantially improve state-of-the-art vision transformers (see Sec.~\hyperref[subsection:experiment-comparion with state-of-the-art]{\textcolor{red}{4.2.3}} in the main paper). } \label{fig:ImageNet-A} \end{figure} \vspace{6pt}\noindent\textbf{ImageNet-A} ImageNet-A~[\hyperref[reference:ImageNet-A-supp]{\textcolor{green}{S-10}}] is a hard ImageNet test set of real-world adversarial images collected with adversarial filtration. It contains 7,500 natural images from 200 classes which cover most broad categories spanned by ImageNet. This dataset exhibits a heterogeneous and varied distribution shift from ImageNet. ImageNet-A is far more challenging than the original ImageNet validation set (and test set). For example, the DeiT-Small model achieves a top-1 accuracy of only 18.9\% on ImageNet-A against 79.8\% on the ImageNet validation set. Fig.~\ref{fig:ImageNet-A} (1st column) shows four examples from ImageNet-A, in which an object ({\color{blue}{blue}} text) of some class is mistakenly identified as that of another class with a high confidence score (in {\color{red}{red}}); for contrast, the 2nd and 3rd columns show the images of the corresponding classes in ImageNet. It can be seen that the images of ImageNet-A are highly adversarial and their distribution deviates markedly from that of ImageNet. Notably, on the challenging ImageNet-A, the proposed method can substantially improve state-of-the-art vision transformers (see Sec.~\hyperref[subsection:experiment-comparion with state-of-the-art]{\textcolor{red}{4.2.3}} in the main paper). \subsubsection{Benchmarks Used in NLP} \vspace{6pt}\noindent\textbf{CoLA} \quad The goal of the Corpus of Linguistic Acceptability task~[\hyperref[reference:warstadt2018neural-supp]{\textcolor{green}{S-30}}] is to judge whether a sentence is grammatical or not.
It can be formulated as a binary single-sentence classification problem. The dataset contains 10,657 English sentences which are labeled as grammatical or ungrammatical, and these sentences are split into training (8,551)/development (1,043)/test (1,063) sets. \vspace{6pt}\noindent\textbf{RTE} \quad Given a pair of text fragments, denoted by (``Text'', ``Hypothesis''), Recognizing Textual Entailment~[\hyperref[reference:bentivogli2009fifth-supp]{\textcolor{green}{S-1}}] aims to determine whether the ``Text'' entails the ``Hypothesis''. This task can be converted into a binary entailment classification task. The RTE dataset consists of 5,767 examples, among which the training set contains 2,490 examples, while the development set and test set contain 277 and 3,000 examples, respectively. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[height=3.0in]{images/nlp_sentence_pair.pdf} \caption{Sentence-pair classification task (RTE, MNLI and QNLI).} \label{subfigure:nlp_a} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[height=3.0in]{images/nlp_single_sentence.pdf} \caption{Single-sentence classification task (CoLA)} \label{subfigure:nlp_b} \end{subfigure} \caption{Diagrams of fine-tuning BERT and GPT on downstream NLP tasks, formulated as either (a) sentence-pair classification (RTE, MNLI and QNLI) or (b) single-sentence classification (CoLA). Note that for BERT \& its variants, the classification token [CLS] is always at the first position of the sequence, while it is at the end for GPT (shown faded in the illustration); besides, GPT does not use segment embedding.
} \label{fig:fine-tuning NLP} \end{figure*} \begin{table}[t] \centering \footnotesize \begin{minipage}{1\linewidth} \begin{subtable}{1\linewidth} \centering \setlength{\tabcolsep}{6pt} \renewcommand\arraystretch{1.2} \begin{tabular}{c|c|ccc} \hline \multicolumn{2}{c|}{Models} & SoT-Tiny&SoT-Small& SoT-Base \\ \hline \multicolumn{2}{c|}{Batch size} & 1024 & 1024 & 512 \\ \hline \multicolumn{2}{c|}{Optimizer} & AdamW &AdamW& AdamW\\ \hline \multicolumn{2}{c|}{Epochs}& 310 & 310 & 310 \\ \multicolumn{2}{c|}{Base LR} & 1e-3& 1e-3 & 5e-4\\ \multicolumn{2}{c|}{Final LR} & 1e-5 & 1e-5& 1e-5\\ \multicolumn{2}{c|}{Scheduler} & cosine & cosine & cosine\\ \multicolumn{2}{c|}{Weight decay} & 0.03 & 0.03 & 0.065 \\ \hline \multicolumn{2}{c|}{Label smoothing }& 0.1 & 0.1 & 0.1\\ \multicolumn{2}{c|}{Mixup prob.} & 0.8& 0.8 & 0.8\\ \multicolumn{2}{c|}{Cutmix prob.} & 1.0 & 1.0& 1.0\\ \multicolumn{2}{c|}{Erasing prob.} &0.25&0.25 & 0.25 \\ \multicolumn{2}{c|}{RandAugment} & 9/0.5& 9/0.5 & 9/0.5\\ \hline \multicolumn{2}{c|}{MGCrP dropout }&0.0&0.5&0.7 \\ \hline \end{tabular}% \end{subtable} \end{minipage} \caption{Hyper-parameters for image classification.} \label{table:hyper-parameters-image} \end{table} \vspace{6pt}\noindent\textbf{MNLI} \quad Similar to RTE, Multi-Genre Natural Language Inference~[\hyperref[reference:williams2018broad-supp]{\textcolor{green}{S-31}}] is also concerned with judgement of entailment. Given a pair of sentences, the task is to predict whether the ``Text'' entails the ``Hypothesis'' (entailment), contradicts the ``Hypothesis'' (contradiction), or neither (neutral). As such, this task can be formulated as a three-way classification problem. MNLI is a large-scale dataset, consisting of 432,702 examples, in which 392,702 examples belong to the training set, 20,000 examples belong to the development set and the remaining 20,000 examples are in the test set. 
\vspace{6pt}\noindent\textbf{QNLI} \quad The Stanford Question Answering task~[\hyperref[reference:rajpurkar2016squad-supp]{\textcolor{green}{S-20}}] is converted to a sentence-pair classification problem~[\hyperref[reference:wang2018glue-supp]{\textcolor{green}{S-28}}]. Given a question and a sentence, the model needs to determine whether the sentence contains the answer to the question. The QNLI dataset contains 105K training examples, 5.4K development examples and 5.4K test examples. \subsection{Training Strategy} \subsubsection{Training from Scratch on CV Tasks} As suggested in~[\hyperref[reference:ViT-supp]{\textcolor{green}{S-7}}], training high-performance transformer models requires ultra large-scale datasets. Hence, for training from scratch on ImageNet, which is not that large, one often relies on extensive data augmentation and regularization methods~[\hyperref[reference:DeiT_ICML-supp]{\textcolor{green}{S-25}}], for which we mainly follow~[\hyperref[reference:DeiT_ICML-supp]{\textcolor{green}{S-25}},\hyperref[reference:T2T_ICCV21-supp]{\textcolor{green}{S-34}},\hyperref[reference:PSViT_ICCV21-supp]{\textcolor{green}{S-35}}]. For data augmentation, besides standard scale, color and flip jittering~[\hyperref[reference:Simonyan15-supp]{\textcolor{green}{S-14}},\hyperref[reference:He_2016_CVPR-supp]{\textcolor{green}{S-9}}] with default settings in PyTorch, we adopt RandAugment~[\hyperref[reference:cubuk2020randaugment-supp]{\textcolor{green}{S-4}}] and random erasing~[\hyperref[reference:zhong2020random-supp]{\textcolor{green}{S-39}}]. For model regularization, we employ label smoothing~[\hyperref[reference:Szegedy_2015_CVPR-supp]{\textcolor{green}{S-23}}], mixup~[\hyperref[reference:zhang2018mixup-supp]{\textcolor{green}{S-38}}] and cutmix~[\hyperref[reference:yun2019cutmix-supp]{\textcolor{green}{S-36}}].
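To make one of these regularizers concrete, here is a minimal mixup sketch in NumPy. All names are ours, not the paper's code, and the value 0.8 is used as the beta-distribution parameter purely for illustration (whether the table's 0.8 denotes this parameter or the application probability depends on the implementation):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.8, rng=None):
    """Blend two training examples and their one-hot labels.

    The mixing weight lam is drawn from Beta(alpha, alpha); alpha=0.8
    is illustrative here, not necessarily the paper's exact setting.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

rng = np.random.default_rng(0)
x1, x2 = rng.random((3, 32, 32)), rng.random((3, 32, 32))  # toy images
y1, y2 = np.eye(10)[3], np.eye(10)[7]                      # one-hot labels
x_mix, y_mix = mixup(x1, y1, x2, y2, rng=rng)
assert np.isclose(y_mix.sum(), 1.0)  # mixed label is still a distribution
```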
We use the AdamW~[\hyperref[reference:loshchilov2018decoupled-supp]{\textcolor{green}{S-17}}] optimizer with a learning rate warm-up (5 epochs) and a cosine annealing scheduler. We adopt dropout for our MGCrP module. The hyper-parameters involved in augmentation, regularization, optimization, etc., are summarized in Tab.~\ref{table:hyper-parameters-image}. Note that the 7-layer SoT used in the ablation study shares the same hyper-parameters as SoT-Tiny. \begin{table}[t] \centering \footnotesize \begin{minipage}{1.0\linewidth} \begin{subtable}{1.0\linewidth} \centering \setlength{\tabcolsep}{1.5pt} \renewcommand\arraystretch{1.5} \begin{tabular}{c|c|c|c|c|c} \hline \multicolumn{2}{c|}{Models}&GPT& BERT &SpanBERT&RoBERTa\\ \hline \multicolumn{2}{c|}{Batch size}& \{32,64\}& \{24,64,96\} &32&\{16,32\} \\ \hline \multirow{3}{*}{Adam optimizer}& $\beta_{1}$ & 0.9&0.9& 0.9 & 0.9\\ &$\beta_{2}$ & 0.999&0.999& 0.999 & 0.98\\ &$\epsilon$ & 1e-8 & 1e-8 & 1e-6 & 1e-6 \\ \hline \multicolumn{2}{c|}{Epochs}& \{10,15\}&10&\{10,30\} &\{10,20\}\\ \multicolumn{2}{c|}{Base LR}& 6.25e-5&\{2e-5,3e-5\}&2e-5&\{1e-5,2e-5\}\\ \multicolumn{2}{c|}{Final LR} & 0& 0&0& 0\\ \multicolumn{2}{c|}{Scheduler} & linear & linear & linear & linear \\ \multicolumn{2}{c|}{Weight decay} & 0.01 & 0 & 0.01 & 0.1 \\ \hline \multicolumn{2}{c|}{MGCrP repr. size}& \{4K,6K\}&\{1K,4K,5K\} &\{1K,4K,5K\}& \{1K,4K,5K\} \\ \multicolumn{2}{c|}{MGCrP dropout} &\{0.5,0.7\}&\{0.5,0.8\}&\{0.5,0.7\}&\{0.5,0.7\} \\ \hline \end{tabular}% \end{subtable} \end{minipage} \caption{Hyper-parameters for text classification.} \label{table:hyper-parameter text} \end{table} \subsubsection{Fine-tuning on Downstream NLP Tasks} The illustration of fine-tuning BERT and GPT with our classification head can be seen in Fig.~\ref{fig:fine-tuning NLP}. The four NLP tasks are formulated as either a sentence-pair classification task (RTE, MNLI and QNLI) or a single-sentence classification task (CoLA).
For each task, we plug the task-specific inputs and outputs into the transformer models and fine-tune the whole network in an end-to-end fashion. Following previous works~[\hyperref[reference:roberta-supp]{\textcolor{green}{S-16}},\hyperref[reference:GPT-1-supp]{\textcolor{green}{S-19}},\hyperref[reference:DBLP:conf/naacl/DevlinCLT19-supp]{\textcolor{green}{S-6}},\hyperref[reference:joshi2020spanbert-supp]{\textcolor{green}{S-13}}], for each task we fine-tune the model on the training set while evaluating on the development set. In the following, we introduce the fine-tuning pipeline by taking sentence-pair classification with the BERT model as an example. As shown in Fig.~\ref{subfigure:nlp_a}, for sentence-pair classification, a pair of sentences is concatenated into a single sequence with a special token ([$\mathrm{SEP}$]) separating them, and is then prepended with a classification token ([$\mathrm{CLS}$]). The input representation of every token is built by summing the word embedding (by, e.g., WordPiece~[\hyperref[reference:WordPiece-supp]{\textcolor{green}{S-33}}]), segment embedding and position embedding. At the output, the token representations are fed into our proposed classification head, in which we combine the classification token and word tokens. Fig.~\ref{subfigure:nlp_b} shows the single-sentence classification task, which is similar to the sentence-pair task except that the input involves only one sentence. We note that [$\mathrm{CLS}$] is always the first token of the sequence for BERT, while it is the last for GPT. For GPT and BERT \& its variants, most training strategies and hyper-parameters in fine-tuning are the same as those in pre-training. We use the Adam~[\hyperref[reference:Adam-supp]{\textcolor{green}{S-15}}] algorithm for model optimization. The learning rate is linearly warmed up over a number of steps to a peak value, and then linearly decayed to zero. We mainly tune the batch size, learning rate and number of training epochs.
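For concreteness, the warm-up-then-decay schedule just described can be written as follows (the notation $\eta_{\mathrm{peak}}$, $t_w$ and $T$ is ours, introduced only for illustration):
\begin{equation*}
\eta_t=
\begin{cases}
\eta_{\mathrm{peak}}\,\frac{t}{t_w}, & t\le t_w,\\[4pt]
\eta_{\mathrm{peak}}\,\frac{T-t}{T-t_w}, & t> t_w,
\end{cases}
\end{equation*}
where $t$ indexes optimization steps, $t_w$ is the number of warm-up steps and $T$ is the total number of steps, so that $\eta_{t_w}=\eta_{\mathrm{peak}}$ and $\eta_T=0$.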
The optimal hyper-parameters are task-specific. Following~[\hyperref[reference:GPT-1-supp]{\textcolor{green}{S-19}},\hyperref[reference:DBLP:conf/naacl/DevlinCLT19-supp]{\textcolor{green}{S-6}}], we choose them from a small set of options; for example, for GPT, we select the batch size from \{32,64\}. For our classification head, we use dropout for MGCrP. For simplicity, we adopt a single head for our MGCrP and select the representation (repr.) size from a set of values, e.g., \{4K,6K\} for GPT. The detailed settings of the hyper-parameters are summarized in Tab.~\ref{table:hyper-parameter text}. Fine-tuning of BERT and GPT is implemented based on HuggingFace's codebase~[\hyperref[reference:DBLP:journals/corr/abs-1910-03771-supp]{\textcolor{green}{S-32}}], while that of RoBERTa is based on fairseq~[\hyperref[reference:ott2019fairseq-supp]{\textcolor{green}{S-18}}], an open-source sequence modeling toolkit. We implement fine-tuning of SpanBERT using the code available at the \href{https://github.com/facebookresearch/SpanBERT}{official Facebook repository}. The pretrained BERT model is downloaded from \href{https://huggingface.co/models}{HuggingFace's website}, and the pretrained RoBERTa and SpanBERT models are both from the \href{http://dl.fbaipublicfiles.com/fairseq/models}{fairseq website}. The pretrained GPT model is available at \href{https://github.com/openai/finetune-transformer-lm/tree/master/model}{the official website of OpenAI}. \begin{figure*}[t!] \centering \includegraphics[width=6.9in]{./images/image_vis.pdf} \caption{Visualizations of images on the ImageNet validation set based on SoT-Tiny using Grad-CAM~[\hyperref[reference:selvaraju2017grad-supp]{\textcolor{green}{S-22}}]. We show examples for which \textbf{ClassT}+\textbf{WordT} predicts correctly, but \textbf{ClassT} or \textbf{WordT} fails. {\color{red}{\checkmark}}: correct prediction; {\color{red}{\xmark}}: incorrect prediction.} \label{figure:image_vis} \end{figure*} \vspace{12pt} \begin{figure*}[t!]
\centering \includegraphics[width=6.0in]{./images/nlp_vis.pdf} \caption{Visualization of the influence of each word for linguistic acceptability on the given English sentence. We adopt BERT-base as the backbone and follow~[\hyperref[reference:Yun2021TransformerVis-supp]{\textcolor{green}{S-37}},\hyperref[reference:chefer2021transformer-supp]{\textcolor{green}{S-3}}] to obtain the results. {\color{red}{\checkmark}}: correct prediction; {\color{red}{\xmark}}: incorrect prediction.} \label{figure:nlp_vis} \end{figure*} \section{Visualization for CV and NLP Tasks} To further analyze the effectiveness of our proposed classification head, we make qualitative comparisons by visualizing the models for CV and NLP tasks. Specifically, SoT-Tiny and BERT-base are used as the backbone models for the CV and NLP tasks, respectively. For each model, we compare three variants as follows: \begin{itemize} \item \textbf{ClassT}: only the classification token is used for the classifier; \item \textbf{WordT}: only the word tokens are used for the classifier; \item \textbf{ClassT}+\textbf{WordT}: both the classification token and word tokens are used for the classifier, based on the sum scheme. \end{itemize} \subsection{Visualization for CV Model} To visualize the models in the CV task, we first train our SoT-Tiny variants on ImageNet, and then adopt Grad-CAM~[\hyperref[reference:selvaraju2017grad-supp]{\textcolor{green}{S-22}}] to obtain the class activation map of each input image. As such, we can visualize the most important regions (i.e., regions of interest) for the final classification according to the gradient information.
As illustrated in Fig.~\ref{figure:image_vis}, we show three kinds of scenarios, in all of which \textbf{ClassT}+\textbf{WordT} makes correct predictions: (left panel) \textbf{ClassT} makes correct predictions but \textbf{WordT} fails; (middle panel) \textbf{WordT} predicts correctly but \textbf{ClassT} does not; (right panel) neither \textbf{ClassT} nor \textbf{WordT} predicts correctly. From Fig.~\ref{figure:image_vis}, we have the following observations: (1) As the classification token interacts with all word tokens across the network, it tends to focus on the \textit{global} context of images, especially some messy backgrounds. Therefore, \textbf{ClassT} is more suitable for classifying categories associated with backgrounds and the whole context, e.g., ``Bookshop''. (2) The word tokens mainly correspond to local patches, so \textbf{WordT} performs classification primarily based on some \textit{local} discriminative regions. As such, \textbf{WordT} has a better ability to classify categories associated with local parts and subtle variations, e.g., ``Standard poodle''. (3) Our \textbf{ClassT}+\textbf{WordT} makes full use of the merits of both word tokens and the classification token, focusing on the most important regions for better classification by exploiting both local discriminative parts and global context information. \subsection{Visualization for NLP Model} Similarly, we compare the visualization results of BERT-base under three scenarios on examples from CoLA. The task is to judge whether an English sentence is grammatical or not. We use the visualization methods proposed in~[\hyperref[reference:Yun2021TransformerVis-supp]{\textcolor{green}{S-37}},\hyperref[reference:chefer2021transformer-supp]{\textcolor{green}{S-3}}] to show the influence of each word on the final prediction. As shown in Fig.~\ref{figure:nlp_vis}, \colorbox[RGB]{124,242,27}{green} denotes a stronger impact while \colorbox[RGB]{13,119,234}{blue} implies a weaker one.
All examples in Fig.~\ref{figure:nlp_vis} are ungrammatical. Overall, we can see that \textbf{ClassT} tends to make predictions from the whole sentence, such as the conjunction of two sub-sentences (e.g., ``Because.., as'') or the key global semantic word; \textbf{WordT} tends to focus on the local correctness of each sentence, ignoring the global context. This observation is similar to the visualization results of the CV model, demonstrating that the classification token and word tokens are highly complementary for both CV and NLP tasks. Finally, the proposed \textbf{ClassT}+\textbf{WordT} can highlight all important words in the sentence, including the subordinate clause, conjunction, etc., which helps boost classification performance. \makeatletter \def\@bibitem#1{\item\if@filesw \immediate\write\@auxout {\string\bibcite{#1}{S-\the\value{\@listctr}}}\fi\ignorespaces} \def\@biblabel#1{[S-{#1}]} \makeatother \phantomsection \small{ \bibliographystylesupp{ieee_fullname}
Testing the effectiveness of various SEO tactics is part of what we do at AIMBIZ, and it's something anyone can do using free tools such as the Google AdWords Keyword Tool, Twitter, HootSuite and a WordPress website. Recently, we began a test to optimize several posts for two of our top markets: Kent, Washington, where many of our clients are located, and Issaquah, where we are located and where our business is growing. Clearly there's value for us in appearing in the search results for relevant web design-related terms in these locations. With our new website and domain at AIMBIZ.com, we have the perfect opportunity to track the real-time progress of our search results starting from a clean slate. If you want to skip straight to the results, pop down to the bottom of the page. Between here and there we describe how we ran the test and how you can do the same thing for your business website. We began by researching the top keyword terms that people use regarding web development. The term "web design" ranked best. The obvious next test was to see which keywords got the most local traffic for Seattle, and "Seattle web design" won. So it makes sense that people searching for web design services in Kent, WA and Issaquah would use the same local search strategy. It also makes sense for us to test local strategies because most of our clients sell to a local market. But even if your business serves a non-geographic base, the same lessons apply to other specialized terms, such as brands and model names or specific industry terms. This long-tail or niche keyword strategy makes up more than 90% of all Internet search, and people who are searching with long-tail keywords have already narrowed their focus to what you offer, so they are ideal potential customers. I recently posted an article about using keywords in post content, so you might want to check that out for more detail.
The gist of our test was to create a post that featured a recent client website design from each of our two target markets. We chose the Good Sense Accounting firm in Issaquah and Sweet Themes Bakery in Kent, WA. Within each post, we used our seed keyword "web design" and the niche keyword for each location in specific places: in the title and title metatag, in the content, in a link, and in the alt text for the post's related image. Each post is fairly short, about 200 to 300 words, and each shares some of the useful or fun features of the respective websites. The same tactic could be used in describing company events, new products, customer testimonials and other topics that are specific to your business. When it comes to creating a blogging strategy, you need to think about frequency: how often you will have an opportunity to post a story about a topic. In our case, showcasing our clients' websites is a natural opportunity to talk about what we do and the special value we bring. You should look for the same types of opportunities when devising your own blogging strategy. Most of our clients are pretty skeptical about using Twitter as part of their online marketing program. It's little wonder, since most of the press Twitter gets is about how celebrities Tweet about what they're having for lunch or how the service is co-opted for political movements. But the real power of Twitter isn't in learning where to get a great BLT, but in sharing something of interest or value with your Twitter following. It's about sharing, not selling. So we automatically send out a Tweet with every blog post we make on AIMBIZ.com. The reason for this is to drive traffic to the post. Even though Twitter uses the no-follow anti-spam tag, so the link itself doesn't give us any value, the traffic generated by the link does have value.
A percentage of our Twitter followers will see the Tweet, link through to the blog post, read it, sometimes share it, sometimes visit other pages on our website, and all of that is valuable. So I'd like to thank our Twitter community with a big, virtual hug. I've set up this post at length so that you can understand the basis of the test, but our plan is to follow up periodically with further results. Our initial results, however, are pretty interesting. For the search "web design Kent, WA" — the post about the web design for the bakery in Kent — the initial result was almost immediate. After one Tweet and a dozen views, the post appeared on page four of Google results. A day later and with 26 views, it had moved to the middle of page two. At the time of this post on day three, the post is at the top position on page two of Google with 47 views. For the search "web design Issaquah" — the post about the accounting firm website — the initial result didn't happen for almost one day. (Depending on the indexing by Google, results can often take a while to show up, but the more frequently you post, the more frequently Google's bots will crawl your page. Another good reason to blog frequently.) After one day and one Tweet, the post showed up on page seven with 25 views, and after two days and two more Tweets, the post rose to page five and 41 views. Each post took about half an hour to write, and the initial Tweets were automatic. Setting up a schedule of follow-up Tweets using HootSuite took about 15 minutes. So after less than two hours of invested time, we're very close to achieving page one results for one of our major markets and moving up in the second test market. That's time well spent, I think.
BACKGROUND: Influenza is a highly infectious viral disease that is particularly common in the winter months. Oscillococcinum(®) is a patented homeopathic medicine that is made from a 1% solution of wild duck heart and liver extract, which is then serially diluted 200 times with water and alcohol. OBJECTIVES: To determine whether homeopathic Oscillococcinum(®) is more effective than placebo in the prevention and/or treatment of influenza and influenza-like illness in adults or children.

The objective of this study was to assess evidence for the efficacy and effectiveness of Chinese qigong exercise in rehabilitative programs among cardiac patients. Thirteen databases were searched through to November 2010, and all controlled clinical trials on Chinese qigong exercise among patients with chronic heart diseases were included. For each included study, data was extracted and validity was assessed. Study quality was evaluated and summarized using both the Jadad Scale and the criteria for levels of evidence.

Chronic insomnia affects a significant proportion of the general population worldwide, and is associated with several serious medical conditions. From the Western scientific literature, hyper-arousal (on the cognitive-emotional, behavioral, autonomic, or central nervous system level) is a final common pathway involved in its pathogenesis.

Cardioprotective effect of ethanolic extract of Terminalia chebula fruits (500 mg/kg body wt) was examined in isoproterenol (200 mg/kg body wt) induced myocardial damage in rats. In isoproterenol administered rats, the level of lipid peroxides increased significantly in the serum and heart. A significant decrease was observed in the activity of the myocardial marker enzymes with a concomitant increase in their activity in serum. Histopathological examination was carried out to confirm the myocardial necrosis. T.

Born in Arles on February 21st 1875, Madame J. C. was the oldest registered human being. She died on August 4th 1997 at the age of 122 years, the record for longevity probably for a long moment. Based on this unique case and a review of the literature, the authors describe the mechanical, physiological and clinical aspects of normal cardiac ageing. The diseases of the elderly which accelerate the process of physiological ageing are then reviewed.

The prevalence of heart diseases, such as coronary artery disease and congestive heart failure, increases with age. Optimal therapeutic interventions that antagonize aging may reduce the occurrence and mortality of adult heart diseases. We discuss here how molecular mechanisms mediating life span extension affect aging of the heart and its resistance to pathological insults. In particular, we review our recent findings obtained from transgenic mice with cardiac-specific overexpression of Sirt1, which demonstrated delayed aging and protection against oxidative stress in the heart.
'use strict';

// Builds the "Account Activated" notification email body for a user record.
// Expects `data` to provide: firstName, lastName, username, email.
function buildTemplate(data) {
	return `
	<div style="max-width:800px;position:relative;margin:20px auto;padding:15px;border:2px solid black;letter-spacing:1px;background-color:aliceblue;box-shadow: 0 0 2px 1px rgba(31, 31, 33, 0.47);">
		<div style="text-align:center;">
			<h1 style="font-size:40px; padding-bottom:5px; margin-top:15px; border-bottom:1px solid #cacaca;">Account Activated!</h1>
			<h2 style="font-size:28px">...an administrator has approved and activated your Battle-Comm account.</h2>
		</div>
		<h4 style="font-size:18px">Welcome to the Community for Table-Top Games,</h4>
		<p style="font-size:14px">Find access to a worldwide community of dedicated table-top gamers and hobbyists. Battle-Comm is a platform to connect with other players and earn Reward Points that can be applied toward new products and discounts at your friendly local gaming stores. The site is an ever-evolving application that seeks to make your experience better with new features and functionality. During these early stages, don't hesitate to drop us a line if you have recommendations for the application as we begin to move from beta to production.</p>
		<h4 style="font-size:16px;text-align:center;">Name: ${data.lastName}, ${data.firstName} | Username: ${data.username} | Email: ${data.email}</h4>
		<div style="text-align:center;">
			<b>Follow the link to log in and update your account.</b>
		</div>
		<div style="text-align:center;margin:20px 0;">
			<a style="padding: 6px 12px;font-size: 20px;line-height: 1.42857143;border-radius: 3px;background: #278ECA;color: white;text-decoration: none;" href="https://www.battle-comm.net/login">Login</a>
		</div>
		<div style="text-align:center; border-top:1px solid #cacaca; padding:20px 0 0;">
			<img src="https://www.battle-comm.net/images/BC_Web_Logo.png" alt="Battle-Comm logo">
		</div>
	</div>
	`;
}

export default buildTemplate;
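To illustrate the interpolation contract of the template, here is a condensed, self-contained sketch: the field names match the module above, but the shortened template body and the sample values are illustrative only (the real function returns the full styled markup).

```javascript
// Condensed stand-in for buildTemplate: same data fields, minimal markup.
function buildTemplate(data) {
  return `Name: ${data.lastName}, ${data.firstName} | Username: ${data.username} | Email: ${data.email}`;
}

// Every field the template references must be present on the data object,
// otherwise the interpolated slot renders as the string "undefined".
const html = buildTemplate({
  firstName: 'Ada',
  lastName: 'Lovelace',
  username: 'ada',
  email: 'ada@example.com',
});

console.log(html);
// Name: Lovelace, Ada | Username: ada | Email: ada@example.com
```

Because template literals silently stringify missing properties as `undefined`, callers should validate the user record before rendering the email.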
Defiance is a 1997 video game for Microsoft Windows, developed by American studio Logicware and published by Visceral Productions of Avalon Hill. The game is a first-person shooter with a sci-fi/futuristic setting. Gameplay Defiance is a first-person shooter with primitive 3D graphics, playing very similarly to Doom (1993). The graphics of Defiance use the 2.5D style, a way to make 2D graphics look 3D. It also includes a dynamic aim feature, where the camera bobs based on movement input. Defiance uses technology called Ncircle to produce more realistic three-dimensional sound. The game boasts MMX and 3D-acceleration support, but these are used only for more realistic explosions. Defiance successfully imitates Doom's insipid color palette with plenty of brown and gray. The player controls an elite unnamed fighter pilot who rides the LAV-6 SABER, a weaponized mech, as they traverse Calchona, a military complex. The first level of Defiance acts as a tutorial stage, introducing several of the game's core mechanics, such as the thruster, and setting up the plot. From that point on, the levels, of which there are 13, play much like Descent (1995). In the game, there is a plethora of enemies to fight. The player must also manage lift fuel, shields, hull durability, and weapons. Enemies often swarm the player through the complex, labyrinth-like levels of the game. Setting The action takes place on the planet Calchona, in a military complex. There is a war raging between the Hegemonistic Core of Planets and the Anterran Premacy Worlds, but it is being waged in space so no civilians are killed and no planets rendered uninhabitable. Unfortunately, both sides have reached a stalemate, forcing the player and the rest of the "good guys" to resort to planetary warfare. Reception Defiance received reviews from publications including GameSpot, GameStar, Adrenaline Vault, PC Player, and Computer Games Magazine.
A 1998 review from GameSpot stated that "Although this is not a terrible game, there's nothing here that would merit someone to really consider buying it when there are so many other 3D games of such a higher caliber."
York prepares for perfect 10
Written by runABC North on Fri 26 Jul 2019
Thousands of runners are getting ready to help the Asda Foundation York 10K celebrate a special milestone on Sunday 4 August. Next month will mark a decade in action for the event, which since its launch has seen more than 50,000 people enter and over £2.5 million raised for charity. Charities will be handed another boost this year thanks to the thousands of runners who will tackle the attractive route, which starts and finishes in Knavesmire Road, near York Racecourse, and takes in many of the city's landmarks, including Clifford's Tower and York Minster. Many entrants will be running in aid of the 10K's partner charities, which are the Jane Tomlinson Appeal, Macmillan Cancer Support, Candlelighters, York Mind, Martin House Children's Hospice, St Leonard's Hospice, York Against Cancer, The Island and Changing Lives. Tristan Batley-Kyle, Run For All's head of events, said: "10 years of the York 10K is certainly something worth celebrating. It has supported the work of some wonderful charities and has grown into a must-do event for many people." Organised by Run For All, the run forms part of the legacy of the late Jane Tomlinson, who raised nearly £2m for charity by undertaking a series of endurance challenges, despite being diagnosed with incurable cancer.
One ml of an $H_2O_2$ solution gives 50 ml of $O_2$ at NTP, so it is

(a) 10 vol (b) 25 vol (c) 50 vol (d) 100 vol

The volume strength of an $H_2O_2$ solution is the volume of $O_2$ liberated at NTP per unit volume of solution: here, 50 ml / 1 ml = 50 vol. Hence (c) is the correct option.
Casual Chic Salon is a young, growing company and we are looking for individuals who desire to further their career, are ready to work hard and are eager to learn. We offer competitive pay with fast-track advancement courses, and your salary or commission will change as you improve your performance and grow your client retention base. Casual Chic Salon believes that success depends solely on your will and that education is the catalyst that leads you to success. We offer unparalleled ongoing training for all of our stylists. Our employees have a keen sense of direction within the fashion industry and have the ability to adapt to new and anticipated industry breakthroughs. As a team, we learn, work and live to enhance the natural beauty of our guests, never settling until their satisfaction is achieved. How we learn, and why we teach. Educational Classes Valued at $250 – $300 per person! All of our service providers are offered excellent, in-depth training through courses within the company. *To implement the Azemi standards, we require that our professional service providers attend all of our weekly/monthly classes!
The Importance of Future British-Polish Relations
British Prime Minister Boris Johnson gestures as he delivers a speech on 'Unleashing Britain's Potential' at the Old Royal Naval College in London, Britain, 03 February 2020. The United Kingdom officially left the EU on 31 January 2020, beginning an eleven-month transition period with negotiations over a future trade deal. EPA/Jason Alden / POOL
Global Britain is a new era for British-Polish relations, and an opportunity for Warsaw to capitalize on a strategic – and sovereign – relationship.
Author: Daniel Kawczyński
Last year Poles and Brits celebrated the 250th anniversary of the first Polish diplomatic presence in the United Kingdom. Since 1769, our two nations have shared a multitude of challenges and opportunities. Looking merely at the last century, we had to collectively face and defeat the German Nazi aggression, and 50 years later we conquered communism, which presented the prospect of uniting under the then European Economic Community. In the 21st century, Poland and the UK are facing new pathways and roadblocks. London is on a course of departing from a flawed European project dominated by a Franco-German alliance, whilst Warsaw looks to regain its historical leadership in Central and Eastern Europe. These opportunities and challenges can be mapped out by examining contemporary British-Polish relations through the political, defense, and economic lenses. Britain is leaving the European Union, not Europe. Global Britain is here to seek influence globally, including in the Central and Eastern European region, within and outside the EU. Poland has successfully showcased its ability to be a leader and seek influence in this area. The Visegrad Group's influence in EU policymaking and the Three Seas Initiative's ability to make Atlanticism attractive again throughout the region are emblematic of this. Closer British-Polish ties in this region are thus crucial.
British presence in CEE and its projects such as the Three Seas Initiative can strengthen the prospects of a "Transatlantic Triangle" cooperation format emerging between Warsaw, London and Washington, in which mutual benefits can be easily identified. Firstly, closer cooperation with Washington on specific foreign policy areas (in this case the approach to CEE) could contribute to a future favorable US-UK trade deal. Secondly, Poland could use the V4 (which declared its willingness to take leadership in the currently lifeless EU-US trade negotiations) as a tool to shape a favorable common EU approach to both the US and the UK. Finally, the Transatlantic Triangle could collectively curb the domination of the Franco-German machine in Europe. Consequently, it could challenge projects which threaten the national security of its members. Additionally, there are a number of projects aiming to strengthen the North-South infrastructure and the trade link between the Baltic, Adriatic and Black Seas, which were previously neglected but now, thanks to projects such as the Three Seas Fund, may become an attractive opportunity for British investors. Establishing a new cooperation format through closer Polish-British ties may also curtail perilous Chinese trade power, especially in CEE. The success of the Three Seas, with joint British-American support, could be a key factor in convincing CEE countries to be wary of Chinese-led projects like the 17+1 Cooperation and the Chinese Belt and Road Initiative, and to primarily focus on enhancing transatlantic ties.
A London bus drives past near the Bank of England in the City of London in London, Britain 16 October 2020. EPA/FACUNDO ARRIZABALAGA
Britain is well-versed in its understanding of the Russian threat at NATO's Eastern flank. In intra-EU diplomacy, London lobbied together with Warsaw to ensure continued EU sanctions against Russia.
Now, as a sovereign diplomatic player, Britain remains persistent in pursuing a balanced policy towards Russia. This was demonstrated recently by the Prime Minister's call for a transparent investigation into the poisoning of Alexei Navalny. Global Britain's hard line on Russian interference could be supported by an increased British presence at the Eastern flank. Over 150 British soldiers are stationed on Polish soil, and lobbying strenuously for more British troops could therefore be considered in the Polish defense discourse. Equally, hybrid warfare is another area in which Poland and Britain find common interest. Hacking and foreign interference in elections, as well as misinformation and fake news, have become some of the main tools of modern warfare and must be treated with the utmost importance. The collective response to hybrid threats is widely dispersed across different international and regional organizations such as the EU, NATO, and the UN, providing groundwork to facilitate closer bilateral cooperation between the Poles and the Brits. In a coordinated approach, Poland could continue to raise awareness of cyberattacks on EU platforms, i.e., PESCO. Warsaw has shown the ability to effectively lobby for PESCO not to duplicate NATO's competencies; it thus successfully uses its membership in the EU to align the bloc with broader transatlantic security interests. The UK, in turn, could lobby for the common cause in the UN Security Council, which Poland recently departed after its two-year elected tenure. Lastly, during NATO summits, both Warsaw and London could speak in a united and coordinated voice on this topic. Last but certainly not least – trade. Poland has been named the most business-friendly country in the Central European region, and Britain has successfully recognized that. According to a 2019 report from Deloitte, three out of four companies owned or co-owned by British investors planned to increase their investment outlays in Poland.
Moreover, UK investors perceive Poland positively due to the "size of its economy and its growing integration with the global economy." UK companies invested PLN 48 billion in Poland between 1995 and 2017, growing Poland's GDP by 15 billion in 2017. BP, AVIVA, Imperial Brands, Rolls-Royce, Prudential and Primark are just a few among the many British companies that have seized on the opportunities that lie in Polish-British trade relations. From a superficial perspective, Brexit in its early stages may be perceived by some as a challenge to future trade relations, as businesses may have to slightly recalibrate their activities. However, every new change brings room for new opportunities – Global Britain will be able to trade with the whole world without suffering from Brussels' interventions, whilst Poland could tap into the Commonwealth's network to boost its global trade and invite British companies to continue to invest their capital over the Vistula river. Shaping post-Brexit trade relations should also be facilitated by regional agencies and trade bodies that highlight investment opportunities not only in Warsaw, Kraków or Łódź, but also in cities like Piotrków Trybunalski, where FDI could drastically improve the lives of hard-working Poles. Global Britain has the chance to create thousands of jobs and boost infrastructure without having to pay billions to a regional organization dominated by foreign interests, hindering the true potential of Polish-British relations. As outlined, there is tremendous space for closer cooperation in the political, defense and trade realms. The latter in particular is a policy sphere in which this potential is most evident, and it provides a model for strengthening British-Polish relations. It is imperative that Polish policymakers examine the opportunities afforded by Brexit, which have been wrongfully labelled as setbacks to London's ties with Warsaw.
Macomb is a city in the American state of Illinois, administratively part of McDonough County.

Demographics

At the 2000 census, the population was recorded as 18,558. In 2006, the United States Census Bureau estimated the population at 18,422, a decrease of 136 (-0.7%).

Geography

According to the United States Census Bureau, the city covers an area of 26.5 km², of which 25.5 km² is land and 1.0 km² is water. Macomb lies at about 219 m above sea level.

Nearby places

The figure below shows nearby places within a radius of 16 km around Macomb.
Kullarna-Häxtjärn is a nature reserve in Ånge Municipality in Västernorrland County, Sweden. The area has been protected since 2014 and covers 391 hectares. The reserve lies on Häxtjärnberget and its northern and western slopes, as well as on the eastern slope of Svartjärnberget. It consists of older, fire-influenced pine forest, with spruce at lower elevations further down. A number of small tarns and water-filled depressions are also found in the reserve.

References

Naturreservatet Kullarna-Häxtjärn, Länsstyrelsen i Västernorrlands län (the County Administrative Board of Västernorrland)
Assessment of the crime and justice statistics of the Republic of Moldova The statistics of the Republic of Moldova on crime and justice were evaluated by national and international experts, within the project "Strengthening the efficiency and access to justice in Moldova", implemented by UNDP Moldova, with the financial support of Sweden. The assessment, carried out in collaboration with the National Bureau of Statistics (NBS) of the Republic of Moldova, aimed to review the current situation in the area of crime and justice statistics, focusing on the authorities involved in data collection, on their role and institutional capacity, and on existing data, mechanisms, and systems. In total, 16 authorities involved in the collection and exchange of crime and justice data were consulted, including the NBS. The degree of compliance was determined, on the one hand, against the quality principles of the European Statistics Code of Practice (among them relevance and usefulness, timeliness, punctuality, and comparability), which served as benchmarks in carrying out the sectoral assessment. On the other hand, the UNDP evaluators determined the extent to which the recommendations and classifications of the United Nations Office on Drugs and Crime (UNODC) are implemented, and identified the current strengths and challenges of the national statistical system in the field. Based on the analyses carried out, a set of recommendations was developed to guide improvements in accordance with national needs and international standards. These can be found in the Assessment Report on Crime and Justice Statistics of the Republic of Moldova and in the Roadmap to operationalize the recommendations and to accompany the process of further alignment with relevant international statistical standards and EU standards.
Stado is a Slavic festival with pre-Christian roots, most likely connected with a fertility cult. Its remnant in folk culture is Pentecost (Zielone Świątki). Today it is celebrated in Poland by followers of Slavic Native Faith.

History and presence in folklore

Probably the first mention of Stado appears in the Postilla of Łukasz of Wielki Koźmin (c. 1405-1412), who in one of his Pentecost sermons mentions gatherings of people "which in the language of the Poles are called 'stada'" (herds), adding that on those days a rite was held whose elements included "dances performed by girls with swords, as if made in offering to demons and not to God, and by boys armed with swords and sticks, which they split against one another". Stado is also mentioned in the first volume of the Chronicles of Jan Długosz (1455), who describes it as games held at a certain time of year in honor of the deities. It was said to be a joyful festival at which the inhabitants of a whole village and its surroundings gathered. The celebrations were accompanied by singing and dancing, and their main element was athletic contests. According to the chronicler, the festival was held around the Christian Pentecost. A similar festival, though unnamed, is also frequently condemned in the sermons of fifteenth-century preachers and moralists. Cosmas of Prague likewise mentioned an Old Slavic festival held on a Tuesday or Wednesday around Pentecost, noting that the dances and games were dedicated to the spirits of the ancestors. In Rus' as well, on the seventh Thursday after Easter, people gathered under birch trees, made offerings of kasha, pierogi and scrambled eggs, and then sang and danced. In the lands of Rus', a week-long festival similar to the Stado mentioned by Długosz, known as the rusalka week, was also celebrated around Pentecost. It is therefore possible that this was a pan-Slavic festival whose celebrations lasted as long as a week and culminated in Kupala Night.
According to Karol Potkański, the festival was Christianized in the sixteenth century, and its remnant is the folk Pentecost church fair.

Present day

Since 2016, the Polish Slavic Native Faith community, gathered in the Rodzimowiercza Confederation (Konfederacja Rodzimowiercza), has organized nationwide celebrations of the festival, reconstructed in a native-faith form on the basis of archival, ethnographic and folkloristic research. In 2016, the rite at Stado was led by 11 żercy (priests), making it the largest festival in the history of Slavic Native Faith in Poland up to that time. Until 2018, the celebrations were held in cooperation with the Owidz stronghold (Grodzisko Owidz) as a public festival that, besides the rites, also included lectures, workshops and other forms of entertainment. A key element of the festival is the descent of the divine pair Łado and Lela into the bodies of two girls, in whose form they wander in a procession among the people, receiving various gifts as offerings. This custom was reconstructed by Native Faith practitioners on the basis of a ritual procession of girls carrying swords or sabres that has survived in Croatia, known as ljelje/kraljice and inscribed in 2009 on the UNESCO List of Intangible Cultural Heritage.

See also

Kupala Night

Bibliography

Krzysztof Bracha: Słowiańskie święto wiosenne "stado". Rekonstrukcja obrzędu, in: Targi, jarmarki i odpusty, ed. M. Brzostowicz, J. Wrzesiński and M. Przybył. Poznań-Ląd 2014, pp. 52-67.
If you have any questions that are beyond the scope of this documentation, please feel free to contact us at https://singularform.com/markus-theme. Thanks so much! After downloading the package, please first unzip the main package, then you will see another zip file named markus.zip. This zip file is the one to be uploaded via the WordPress admin panel. Click Add New then click Upload Theme. Click Choose File and select the markus.zip. Then click Install Now. Wait until you see the "Theme installed successfully" message. After that, click the Activate link on the screen to start using the theme. Next, we will show you how to install some required and recommended plugins for the theme. After the theme is activated, you should see a notification at the top saying that there are some required and/or recommended plugins to be installed like the below image. Click the Begin installing plugins link or go to Appearance > Install Plugins. You will see a list of plugins. Tick all of them then choose Install from the drop-down list and click Apply. Wait until the installation is finished then click Return to Required Plugins Installer to go back to the plugin list again. Now tick all of them and choose Activate from the drop-down list then click Apply. That's it! The plugins are just installed and activated. You should see a complete message in this step. We will talk about importing an XML file for the demo content in the next step which is optional. You can skip it if you do not want the demo content. Go to Tools > Import and click the Install Now link under the WordPress title to install the WordPress Importer tool. After the tool is installed, click Run Importer. Click the Choose File button then browse to the purchased package. You will see an XML file named "markusdummy.wordpress.2018-07-17.xml". Select it and click Upload file and import. Don't forget to check the Download and import file attachments checkbox before clicking the Submit button. 
Then wait until you see the message "All done. Have fun!". Make sure that your site menu is assigned to the correct location. To do that, go to Appearance > Menus and scroll down to the bottom. There you will see a checkbox for Main Menu. Tick it and save the menu. You might want to change the site title, tagline and copyright text now. To do that, go to Appearance > Customize > Site Identity and you will find the options for changing them in this section.
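If you manage many sites, the same manual steps can be scripted. The sketch below uses WP-CLI, which is our assumption and is not mentioned in the theme documentation; the file paths, the menu slug "main-menu" and the location slug "primary" are placeholders and must match your own setup.

```shell
# Sketch of a scripted install with WP-CLI (our assumption; adjust paths and
# slugs). Run from the WordPress root after copying the zip and XML there.
install_markus() {
  wp theme install ./markus.zip --activate
  # The theme's required plugins must be installed by their own slugs; the
  # importer plugin below is the one used for the demo XML import.
  wp plugin install wordpress-importer --activate
  wp import ./markusdummy.wordpress.2018-07-17.xml --authors=create
  wp menu location assign main-menu primary   # equivalent of ticking Main Menu
  wp option update blogname "My Site"         # Appearance > Customize > Site Identity
}
```

Calling `install_markus` then performs the theme install, demo import, menu assignment and title change in one go.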
Calocarides longispinis is a species of decapod crustacean first described by McArdle in 1901. It belongs to the genus Calocarides and the family Axiidae. No subspecies are listed in the Catalogue of Life.
\section{Introduction} The tropospheric zonal wind system of Jupiter has been observed for decades showing alternating prograde and retrograde jets at the boundaries between zones and belts \citep{Chapman1969,Ingersoll1979,Ingersoll2004,Limaye1982,Garcia-Melendo2001}. A similar structure, although with jets fewer in number, has also been found in Saturn's troposphere \citep{Smith1981,Sanchez-Lavega2000,Choi2009,Barbara2021}. The mechanism behind the winds as well as their vertical extent has been extensively studied. Recent Juno and Cassini gravity field measurements have demonstrated that these winds extend down to a few thousand kilometers below the cloud deck in Jupiter and Saturn and are therefore powered by the internal heat flux \citep{Kaspi2018,Kaspi2020,Guillot2018,Galanti2019}. Above the tropopause, in the stratosphere, there are no tracers to infer the wind pattern from visible light imaging. The stratospheric winds could only be derived in the 20-50\,mbar and 30\ensuremath{^\circ} S-60\ensuremath{^\circ} S ranges on the exceptional occasion of the Shoemaker-Levy 9 impacts from the evolution of the debris fields \citep{Banfield1996,Sanchez-Lavega1998}. The relative contributions of thermal versus mechanical forcing (by e.g. waves and eddies) in the stratosphere are therefore unquantified. So far, the stratospheric zonal wind pattern has only been indirectly derived from the thermal wind balance relation applied to the measured zonal temperature field. There are several studies that applied this method to Jupiter and Saturn \citep{Flasar2004,Flasar2005,Fouchet2008,Guerlet2011,Guerlet2018,Fletcher2016,Cosentino2017}. In addition, systematic infrared observations over long time scales led to the discovery of stratospheric quasi-periodic oscillations manifested in the stratospheric temperatures and winds, in particular the Quasi-Quadrennial Oscillation (QQO) in Jupiter \citep{Orton1991} and also the Saturn Equatorial Oscillation (SEO - \citealt{Orton2008}).
These oscillations have been the subject of numerous follow-up observations and modeling efforts to obtain robust constraints on their origin and evolution \citep{Cosentino2017,Li2000,Medvedev2013,Spiga2020,Bardet2021,Giles2020,Antunano2020}. However, deriving the wind field from the thermal wind balance is only an approximation which unfortunately breaks down at the equator, although a new prescription of this equation for equatorial latitudes has recently been proposed \citep{Marcus2019}. Solving the thermal wind equation also requires a boundary condition, often taken as the cloud-top wind pattern. Furthermore, the temperature field is only interpolated between the tropopause and the middle stratosphere where it can be retrieved from hydrocarbon emissions. In any case, the thermal wind equation only gives wind shear but not the absolute wind speeds. The real magnitude of the stratospheric winds has thus remained elusive. Direct wind measurements in the stratosphere are thus warranted to quantify the role of thermal and mechanical forcing, and to better constrain models of the planetary wave propagation that generate the stratospheric equatorial oscillations. With spectral resolving powers, $R=\lambda/\Delta\lambda$, exceeding 10$^6$, heterodyne spectroscopy in the millimeter wavelength range has opened the possibility to directly measure frequency Doppler shifts induced by winds in spectral lines of molecular species, as originally demonstrated at Venus and Mars \citep{Shah1991,Lellouch1991}. The Atacama Large Millimeter/submillimeter Array (ALMA) now enables almost instantaneous mapping, with high sensitivity, and sufficiently high spectral and angular resolutions to measure wind-induced Doppler shifts in most Solar System atmospheres (e.g. \citealt{Lellouch2019}). At Jupiter, the main difficulty resides in measuring Doppler shifts caused by $\sim$100\,m/s (or less) winds superimposed onto the rapid Jovian rotation (12.5\,km/s at the equator).
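To put this difficulty in perspective, a back-of-the-envelope estimate (our addition, not part of the original analysis) of the shift induced by such a wind on the HCN (5-4) line reads

```latex
% Order-of-magnitude sketch (our addition): frequency shift induced by a
% ~100 m/s line-of-sight wind on the HCN (5-4) line at nu_0 = 354.505 GHz.
\begin{equation*}
  \Delta\nu = \nu_0\,\frac{v_{\rm LOS}}{c}
            \simeq 354.505\,\mathrm{GHz}\times\frac{100\,\mathrm{m\,s^{-1}}}{3\times10^{8}\,\mathrm{m\,s^{-1}}}
            \simeq 118\,\mathrm{kHz},
\end{equation*}
```

i.e. of order a single spectral channel at the $\sim$100\,kHz resolutions achievable with ALMA, while the rotational Doppler pattern to be subtracted is two orders of magnitude larger.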
To overcome this challenge, we use the strong millimeter lines of HCN and CO, two species delivered by the impacts of comet Shoemaker-Levy 9 (SL9) in 1994 \citep{Lellouch1995}. The SL9-derived species were expected to be homogeneously distributed in latitude at the time of our observations (e.g. \citealt{Moreno2003,Cavalie2013}). \section{Observations \label{Observations}} We observed Jupiter with the ALMA interferometer on March 22$^\mathrm{nd}$, 2017, at 5:11-5:36UT with 42 12-m antennas, as part of the 2016.1.01235.S project. At this time, Jupiter's equatorial diameter subtended a 43.8\hbox{$^{\prime\prime}$}~angle, the sub-earth point latitude was -3\ensuremath{^\circ}, and the central meridian longitude (CML) ranged from 65\ensuremath{^\circ}~to 80\ensuremath{^\circ}~(System III). To map the whole planet at such frequencies, we had to use a mosaic of 39 pointings. Standard pointing, bandpass, amplitude and phase calibration observations were carried out and accounted for in the data reduction we performed under CASA 4.7.2 (additional details can be found in Section~\ref{appendixA}). The lack of short-spacings with the interferometer results in filtering out Jupiter's extended emission (i.e. most of the disk flux), such that only the limb observations are preserved. The baselines of the interferometer ranged from 15.1 to 160.7\,m, providing an elliptical synthesized beam of 1.2\hbox{$^{\prime\prime}$} (East-West) $\times$ 1\hbox{$^{\prime\prime}$} (North-South). This resulted in a latitudinal resolution of $\sim$3\ensuremath{^\circ}~at the equator, degrading to $\sim$10\ensuremath{^\circ}~close to the poles. From each spectral cube, we extracted $\sim$550 spectra located at the planet limb (at the 1\,bar level) to oversample the beam by a factor of 4 to 5. 
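As a rough consistency check (our sketch, not part of the original analysis), the quoted beam and latitudinal resolution follow from the array geometry:

```latex
% Rough diffraction estimate (our addition): synthesized beam from the longest
% baseline at 345.8 GHz (lambda ~ 0.87 mm), and the latitudinal span of a 1"
% beam at the equatorial limb (apparent equatorial radius ~ 21.9").
\begin{equation*}
  \theta_{\rm beam} \sim \frac{\lambda}{B_{\rm max}}
    = \frac{0.87\,\mathrm{mm}}{160.7\,\mathrm{m}} \simeq 1.1\hbox{$^{\prime\prime}$},
  \qquad
  \Delta\varphi_{\rm eq} \sim \frac{1\hbox{$^{\prime\prime}$}}{21.9\hbox{$^{\prime\prime}$}}\,\mathrm{rad} \simeq 2.6\ensuremath{^\circ},
\end{equation*}
```

consistent with the 1-1.2\hbox{$^{\prime\prime}$}~synthesized beam and the $\sim$3\ensuremath{^\circ}~equatorial latitudinal resolution quoted above.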
The accumulated on-source integration time of 24 minutes enabled us to detect the HCN (5-4) and CO (3-2) emissions at 354.505\,GHz and 345.796\,GHz (respectively) with signal-to-noise ratios (S/N) of $\sim$25 per beam at the limb at spectral resolutions of 122\,kHz and 488\,kHz (respectively). \section{HCN and CO vertical and horizontal distributions} The spectral lines show limited variability in terms of amplitude, but the HCN lines present some variability in terms of width. We analyzed the vertical distributions of CO and HCN as a function of latitude from the line widths. Using empirical vertical profiles of CO and HCN with a cut-off pressure $p_0$ below which the species have a constant mole fraction, and the radiative transfer model of \citet{Cavalie2019}, we found that CO is present at $p_0$ $<$ 5\,mbar at all latitudes, whereas HCN is found at the same pressure levels only at the low-to-mid latitudes (60\ensuremath{^\circ} S-50\ensuremath{^\circ} N). At higher latitudes, HCN is restricted to $p_0$ $<$ 0.1\,mbar (see \fig{Map_spectra}). This is surprising because HCN and CO share the same origin, are both long-lived, and should thus have similar horizontal and vertical distributions. The missing spectral signature of HCN at pressures higher than 0.1\,mbar and at high latitudes exhibits asymmetry in latitude between the northern and the southern hemispheres: the transition between the broad HCN lines seen at low and mid-latitudes and the thin HCN lines seen in the polar region is at 60\ensuremath{^\circ} S versus 50\ensuremath{^\circ} N. These facts point to a chemical sink for HCN related to the aurorae, whose latitudinal extent shows similar asymmetry in latitude between the north and the south. In particular, aerosols are known to be more abundant at high latitudes \citep{Zhang2013}, suggesting adsorption of HCN on aurorally-produced aerosols as a potential sink mechanism \citep{Anderson2016}.
\begin{figure*}[!h] \begin{center} \includegraphics[width=14cm,keepaspectratio]{Map_spectra_v4.png} \end{center} \caption{ALMA observations of Jupiter's stratospheric HCN and CO. (Left) Line area maps of the HCN (5-4) (top) and CO (3-2) (bottom) emission at the limb of Jupiter. (Right) Spectra extracted from the data cubes (red lines) showing typical line shapes and the cut-off pressure ($p_0$) in the species vertical profile to reproduce the line width, with the 30 best-fit spectra computed with the MCMC procedure from the parametrized line shape. Observable Doppler shifts with respect to the lines' rest frequencies are caused by the planet's rapid rotation and the local east-west winds. } \label{Map_spectra} \end{figure*} \section{Wind speed retrieval} Within a synthetic beam, the line is naturally Doppler-shifted by the rapid rotation of the planet. Any additional Doppler shift of the line is then indicative of atmospheric motions along the line-of-sight located at the altitude of the wind. ``Wind contribution functions'', as defined by \citet{Lellouch2019}, indicate that fitting the HCN line enables us to retrieve wind speeds at $\sim$1\,mbar from 60\ensuremath{^\circ} S to 50\ensuremath{^\circ} N and at 0.1\,mbar at polar latitudes (see Section~\ref{appendixB} and \fig{S1}). We determined the line-of-sight (LOS) wind speeds as a function of latitude by fitting the HCN lines with a Markov Chain Monte Carlo (MCMC) scheme \citep{Goodman2010,Foreman-Mackey2019}. We fitted all extracted limb spectra using a parametrized line shape that is fully defined by four parameters (see Section~\ref{appendixC} for a more detailed description). The only parameter of interest is the central frequency of the line; the other three parameters are related to the width and amplitude of the line and help us achieve a good rms metric for the fitting algorithm. This method is independent of any prior knowledge of the HCN and CO distributions, and atmospheric temperature.
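For illustration only, a minimal four-parameter model of this kind (our sketch, not necessarily the exact parametrization used, which is detailed in Section~\ref{appendixC}) is

```latex
% Illustrative four-parameter line model (our sketch): baseline B, amplitude A,
% width sigma, and the sole parameter of interest, the line center nu_c.
\begin{equation*}
  F(\nu) = B + A\,\exp\!\left[-\frac{(\nu-\nu_c)^2}{2\sigma^2}\right],
  \qquad
  v_{\rm LOS} = c\,\frac{\nu_0 - \nu_c}{\nu_0},
\end{equation*}
```

where $\nu_0$ is the rest frequency; the MCMC posterior on $\nu_c$ then translates directly into $v_{\rm LOS}$ and its uncertainty.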
We use several hundred iterations to fit the line center position and derive its uncertainty. The altitudes of the winds are estimated from the contribution function calculations, as described above. \fig{winds_summary} (top) shows the derived LOS wind speeds as a function of latitude that we obtain from HCN. The associated uncertainties result from the combination of the continuum subtraction on the spectra, uncertainties in the subtraction of the planet rotation associated with pointing errors, and uncertainty of the MCMC fitting procedure (see Section~\ref{appendixD}). The wind speeds in \fig{winds_summary} are measured instantaneously, as opposed to the zonal mean wind speeds at the cloud-top found in the literature (e.g. \citealt{Ingersoll2004}). The combination of lower spectral resolution and S/N of the CO observations does not allow us to retrieve wind speeds. We can only put a 3-sigma upper limit of 150\,m/s at 3\,mbar, where fitting the CO line would enable us to measure wind speeds. \begin{figure*}[!h] \begin{center} \includegraphics[width=15cm,keepaspectratio]{winds_summary.pdf} \vspace{0.5cm} \includegraphics[width=15cm,keepaspectratio]{winds_averaged.pdf} \end{center} \caption{Jupiter's stratospheric winds. (Top) Instantaneous line-of-sight wind speed measurements as a function of latitude obtained from ALMA spectral mapping observations of the HCN (5-4) line. The winds are measured at 1\,mbar from 60\ensuremath{^\circ} S to 50\ensuremath{^\circ} N and at 0.1\,mbar at polar latitudes. The western limb data is plotted in blue and the eastern limb data in red (1-$\sigma$ uncertainty envelopes in light blue and orange, respectively). (Bottom) Eastward wind speeds averaged from both limbs from 60\ensuremath{^\circ} S to 50\ensuremath{^\circ} N. Prograde winds have positive speed values.
} \label{winds_summary} \end{figure*} \section{Results} \subsection{Wind speed retrieval results at low-to-mid latitudes} From 60\ensuremath{^\circ} S to 50\ensuremath{^\circ} N, the strongest and broadest wind we detect is located at 9-11\ensuremath{^\circ} N, as shown in \fig{winds_summary}. It is a prograde jet with a peak LOS velocity of +215$\pm$25\,m/s on the planet's eastern limb and -115$\pm$25\,m/s on the planet's western limb. The average eastward wind speed is 165$\pm$40\,m/s (\fig{winds_summary} bottom), compatible with the magnitude of the near-equatorial jet found from the thermal wind balance by \citet{Flasar2004}. This jet has a full-width at half maximum (FWHM) of $\sim$7\ensuremath{^\circ}. The difference in peak velocities between the two limbs indicates that the local vortices could accelerate/decelerate winds by $\sim$50\,m/s. This situation could thus be similar to what is seen at the cloud level, where \citet{Garcia-Melendo2011} found that the equatorial zone shows variability in wind speeds of $\sim$20\,m/s on average (but up to 60\,m/s) over only one planet rotation, because of vortices and planetary waves. We tentatively find a retrograde jet at 2\ensuremath{^\circ} S with a 2-$\sigma$ confidence level only. Its speed on the eastern limb is -140$\pm$25\,m/s, but we cannot unambiguously identify it on the western limb. The presence of a prograde jet at 4-7\ensuremath{^\circ} S is even more tentative (1.5-$\sigma$). The equatorial wind structure at 1\,mbar is thus asymmetrical with respect to the equator, contrary to the cloud-top wind structure and contrary to what one would expect in the QQO altitude and latitude ranges. It may result from the latitudinal temperature gradients found between the upper stratospheric layers and the mbar region where the QQO occurs \citep{Cosentino2017}, which are also found asymmetrical at the time of our observations \citep{Giles2020}.
In the northern and southern low-to-mid latitudes, there is little evidence of other jets outside the equatorial region. \subsection{Wind speed retrieval results in the polar regions} The most unexpected and outstanding features detected in our observations are the non-zonal winds seen in the northern and southern polar regions (see \fig{winds_summary} top). We detect jets at 0.1\,mbar at 55\ensuremath{^\circ} N and 85\ensuremath{^\circ} S on the western limb, and at 70\ensuremath{^\circ} S on the eastern one. They all have counter-rotation velocities. The strongest one, seen at 70\ensuremath{^\circ} S on the eastern limb, has an FWHM of 7\ensuremath{^\circ}~and a peak LOS velocity of -350$\pm$20\,m/s. The wind seen at 85\ensuremath{^\circ} S on the western limb peaks at +200$\pm$20\,m/s. These peaks seem to be collocated with the position of the southern auroral oval for the CML of our observations, when compared with the position of the statistical emission of the aurorae \citep{Clarke2009} and with the M=30 footprints of \citet{Connerney2018}'s model of the magnetic field (i.e., the footprints of field lines that reach 30 Jupiter radii at the equator). The latter is a good marker of the position of the main ovals as observed by Juno's Ultraviolet Spectrograph (UVS - \citealt{Gladstone2017}). This comparison can be seen qualitatively in \fig{ALMA_comparison}. The wind peaks at 70\ensuremath{^\circ} S on the eastern limb and at 85\ensuremath{^\circ} S on the western limb would then result from the same jet. To confirm this finding, we implemented a model in which we assumed a constant wind within the southern oval and no wind outside the oval. We took the inner and outer oval edges as defined by \citet{Bonfond2012}.
We simulated spectra at infinite spatial and spectral resolutions, Doppler-shifted them according to the LOS auroral oval wind component after carefully accounting for the geometry of the observations, and finally convolved them to the spectral and spatial resolutions of the ALMA observations. To improve the fit, we had to extend by $\sim$2\ensuremath{^\circ}~the inner and outer edges of the southern oval. This model demonstrates that a 370\,m/s counter-rotation wind inside the auroral oval results in asymmetric components as observed at 70\ensuremath{^\circ} S and 85\ensuremath{^\circ} S (see Section~\ref{appendixE} and \fig{S3}). However, this simple model is unable to properly fit the wind speeds within the entire auroral region. The real wind pattern in the auroral region is certainly more complex than in our simple model, like the ionospheric wind field derived by \citet{Johnson2017} from H$_3^+$ emission in the northern auroral region. The lack of spatial resolution prevents us from refining it further without additional and unconstrained parametrization (e.g. variable wind speed within the oval, wind gradient at the interface between the oval and its surroundings, winds not only limited to the oval but also inside the auroral regions). It is noteworthy that we find hints of a similar counter-rotation jet in the northern auroral region with peak LOS velocities of +165$\pm$15\,m/s and an FWHM of 6\ensuremath{^\circ}~in latitude at 57\ensuremath{^\circ} N on the western limb. The northern oval was just coming into view at the time of the observations, thus severely limiting the viewing of the northern auroral region. A significant part of the main oval was expected to be close to tangential to the limb on its poleward edge (see Section~\ref{appendixF} and \fig{S5}). It is thus no surprise that we find no clear evidence of the jet on the northern edge of the oval.
Within the framework of our simplified model, assuming a 300\,m/s counter-rotation wind inside the northern oval provides nonetheless a good fit to the measured wind speeds poleward of 55\ensuremath{^\circ} N on the western limb where the northern oval was rising (see \fig{S3}). Finally, despite the northern aurora being located on the western side, mostly behind the terminator, we see a broad signal on the eastern limb at polar latitudes with an average LOS velocity of about +100\,m/s for which we lack a clear explanation. A more favorable observation geometry of the northern polar region is thus required to improve our understanding of the stratospheric circulation in this region. \begin{figure*}[!h] \begin{center} \includegraphics[width=15cm,keepaspectratio]{ALMA_comparison_v4.png} \end{center} \caption{Jupiter's UV aurora and stratospheric HCN winds. Composite image showing the LOS wind velocities in m/s derived from the ALMA observations and the statistical emission of the aurorae \citep{Clarke2009} in the configuration of the ALMA observations. The northern and southern aurora regions are best seen in dedicated zoomed-in quadrants. The M=30 footprints of the magnetic field model of \citet{Connerney2018} are a good marker of the position of the main ovals as seen by Juno-UVS \citep{Gladstone2017} and are plotted in orange. The white ellipses indicate the spatial resolution of the ALMA observations. The directions of the strongest winds in the equatorial and auroral regions are indicated with the $\odot$ and $\otimes$ red symbols. } \label{ALMA_comparison} \end{figure*} \section{Discussion \label{Discussion}} The branch of the northern auroral jet we tentatively detect lies below the electrojet discovered at p $<$ 1\,$\mu$bar from infrared observations of H$_3^+$ emission by \citet{Rego1999} and further constrained by \citet{Stallard2001} and \citet{Johnson2017}.
This electrojet has a near-to-supersonic velocity of $\sim$1-2 km/s and is in counter-rotation along the main oval \citep{Stallard2001,Stallard2003}. \citet{Achilleos2001} showed that the H$_3^+$ ions could accelerate the neutrals up to 60\% of their velocity through collisions between the ionosphere and the thermosphere in the ionization peak layer (0.07-0.3\,$\mu$bar). The upper limit set by \citet{Chaufray2011} of 1 km/s on the velocity of a corresponding H$_2$ flow confirmed a smaller neutral wind velocity, in agreement with our findings. Benefiting from ideal viewing conditions (sub-earth latitude of 0.2\ensuremath{^\circ} N), \citet{Rego1999} also detected a similar counter-rotation electrojet on the main southern oval. Models by \citet{Majeed2016} and \citet{Yates2020} predict that neutrals have higher velocities below the southern oval than below the northern one. Although we find relatively similar velocities underneath the two ovals, our detection in the northern oval remains tentative such that we cannot conclude on the relative magnitude between the two auroral jets. This particular point thus needs to be confirmed with new observations. \citet{Majeed2016} and \citet{Yates2020} also predict that the southern jets are expected to disappear around the $\mu$bar level. On the contrary, our data demonstrate that the neutrals are still flowing with a substantial counter-rotation velocity at the sub-mbar level below the southern oval (and probably also below the northern one), i.e. $\sim$900\,km below the corresponding ionospheric winds of \citet{Rego1999} and 100-500\,km below the tentative H$_2$ flow of \citet{Chaufray2011}. Despite the strong signal-to-noise limitations of our CO observations at 3\,mbar, we find that the southern auroral jets are at least twice slower in the mbar range than at sub-mbar levels, possibly disappearing between the sub-mbar and the mbar levels. 
The detection of these auroral vortices down to the sub-mbar level may have crucial implications for Jovian atmospheric chemistry. The photolysis of CH$_4$ at the $\mu$bar level triggers the production of more complex hydrocarbons. The addition of energetic magnetospheric electrons, which are more abundant in the auroral region than anywhere else on the planet \citep{Gerard2014}, further favors this complex ion-neutral chemistry \citep{Wong2003}. The presence of auroral vortices down to the sub-mbar level could confine the photochemical products within this region, by preventing the mixing of the material inside the oval with the material outside, and thus increase the production of heavy hydrocarbons and aerosols. Auroral chemistry probably increases the production of C$_2$ species, as already observed by \citet{Sinclair2018,Sinclair2019}, and the production of aerosols \citep{Zhang2013}. The counter-rotation direction of the wind in both ovals translates into a clockwise circulation on the northern oval and a counterclockwise circulation on the southern one. Such a circulation pattern, which appears to be similar to anticyclones in this respect, could induce subsidence interior to the auroral ovals \citep{Yates2020}. The photochemically produced species would then be transported downwards and could escape the auroral region at the mbar level, where the vortices could be breaking up. This increased production of aerosols coupled with the downward motion could also result in the removal of HCN by adsorption onto the aerosol particles at pressures higher than 0.1\,mbar at auroral latitudes, as shown by our data. This adsorption mechanism was proposed for Titan by \citet{Anderson2016} and needs to be quantified under Jovian auroral conditions. Another effect of the downward motions would be adiabatic heating around the vortex breakup level.
Heating at the mbar level was observed inside both ovals by \citet{Sinclair2017} and could be an indication that this is actually the level at which the vortices break up. We note that the independence of this heating with respect to solar illumination conditions \citep{Sinclair2017} seems to disqualify aerosol heating as a cause. We see a sharp HCN emission increase in our data at the edges of the oval, and it could indeed be evidence of such heating between the oval and its surrounding region. However, the HCN line is not optically thick, and we cannot lift the degeneracy between a temperature increase and an abundance increase. The detection of stratospheric auroral jets in this work demonstrates that the Jovian atmospheric circulation is complex not only in the equatorial region, owing to the QQO \citep{Cosentino2017,Giles2020,Antunano2020}, but also in its polar regions. Repeated observations with the northern aurora in the field-of-view are necessary for a better characterization of the counter-rotation stratospheric jet underneath the main oval, similar to the situation witnessed in the south. \section*{Acknowledgements} The authors thank P. Gratier for helping implement the MCMC method, and R. Johnson and T. Stallard for providing them with their infrared ionospheric auroral wind velocities. T.C. acknowledges funding from CNES and the Programme National de Plan\'etologie (PNP) of CNRS/INSU. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2016.1.01235.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. \bibliographystyle{aa}
Gold 4th as Chloe Kim wins the day at X Games News | Jan 25, 2015 Arielle Gold flies above the half-pipe at X Games in Aspen on Saturday. Gold placed fourth in the event for the second consecutive year. Aspen — Chloe Kim took a major jolt before Saturday night's women's snowboarding super-pipe competition at the Winter X Games in Aspen. The 14-year-old Californian snowboarding phenom glanced off the deck and slid on her face in her last training run before the finals competition. The result was several red marks on her face and plenty of pain. During the competition, she instead delivered the jolt, giving the women's snowboarding world the biggest shakeup it's received in a long time. That meant a gold X Games medal and the designation as the youngest X Games winner. Kim won Saturday night's competition in front of a raucous crowd, using her final run to dethrone five-time reigning champ Kelly Clark. Steamboat Springs' Arielle Gold, meanwhile, languished in fourth place. "I've got mixed emotions," Gold said. "There's definitely a little bit of frustration." Gold couldn't quite lay down the perfect run that might have vaulted her onto the podium or even into the war between Kim and Clark. She was happiest with her second run, which scored 75.66, and she improved on it later but still finished behind eventual bronze medal winner Torah Bright. "When you come out and land a run you're really happy with, you would hope it would get you where you want to be and fourth place is not my favorite place," Gold said, considering her second run and its score. "I'm happy with the way I was riding and I did the best I could. After that, everything else is in the judges' hands." She toyed earlier in the week with the idea of including a 1080 in her run, a first for Gold, but she said, at least from the vantage point atop the pipe, the opportunity didn't present itself.
"I was thinking I would try on my third run, but when my second run wasn't as clean as I'd hoped, I decided to try to make it a little bit cleaner in the third run," Gold said. "In hindsight, I wouldn't have minded giving it a try." Clark's best run was her first, which scored 90.00. She couldn't improve on that, falling on her final run. Kim did two fewer hits in her run, five compared to Clark's seven, but was judged very strongly on her final run and scored 92.00, sliding her into first place. "It was definitely close between those two," Gold said. "It's always a tough call because the run Chloe has going is really technical but Kelly always has that giant, really sick run. Tonight it was up to the judges." Now it's off to the European Open for Gold, starting with an early morning flight Sunday. "It will be good to get back into it and use this fire for riding better," she said. To reach Joel Reichenberger, call 970-871-4253, email jreichenberger@SteamboatToday.com or follow him on Twitter @JReich9
KUMD Call Letters: KUMD Frequency: 103.3 FM http://www.kumd.org Affiliations: NPR, Ampers, and Pacifica Networks: AMPERS and American Routes Private Network KUMD, Duluth Public Radio, is your independent alternative member-supported public radio station. 103.3FM, online at www.kumd.org. We play an eclectic mix of music (new indie, folk, jam, jazz, soul, blues, and more). Check out our program guide and playlist for more info. Northland Morning on KUMD brings you a special week of interviews M-F at 8am. KUMD speaks with local experts, artisans and participants in these Minnesota-centric issues and events. From: KUMD Beargrease Beargrease Dog-sledding KUMD's Caring and Sharing Holiday Series. "You can judge a society by the way it treats its most vulnerable members." We're talking with a number of non-profit organizations in the northland that help make this a better place to live for all of us, including the vulnerable. Community Conversations on 103.3FM KUMD Curtain Call is a weekly program dedicated to the theater arts in the Twin Ports area and Northern Minnesota. Host Lawrance Bernabo reviews and spotlights local productions, musicals, locally produced plays, independent and Hollywood movies in current and upcoming runs. Glensheen: The Congdon Legacy KUMD celebrates "The Congdon Legacy" with a five-episode series of five-minute segments as well as a one-hour special. Chester Congdon is known for the sprawling Glensheen estate he had built on the shores of Lake Superior in Duluth. You may have even toured the Historic Congdon Estate, but what about the man behind the mansion and the multitude of other family stories? Homegrown Week KUMD celebrates the annual Duluth Homegrown Music Festival with programming and events all 8 days. Programs include Art segments, Live in-studio performances and a special 2-hour Local Music showcase. More information at www.kumd.org In the Spirit of Medicine Dr.
Arne Vainio is an enrolled member of the Mille Lacs Band of Ojibwe and a family practice doctor on the Fond Du Lac reservation in Cloquet. His essays on life, work, medicine and spirit are published in "News From Indian Country," and you can find the link to his stories and more on our website at KUMD.org. New on KUMD's Northland Morning, "In the Spirit of Medicine" airs twice monthly. Journey to Wellness in Indian Country "Journey to Wellness in Indian Country" is a 10-minute program featuring interviews with medical and health researchers, professors and doctors and other people active in Native American Health today. KUMD partners with the University of Minnesota Duluth Medical School with their focus on Native American Community Health. Live from Studio A Interviews and live performances from local, regional, and national musicians on KUMD in Duluth, MN. "Live from Studio A" is produced at KUMD by Christine Dean and Chris Harwood with funding provided by the Minnesota Arts and Cultural Heritage Fund. Dan Israel Series: Live from Studio A This Twin Cities singer/songwriter recently released his 15th album, Social Media Anxiety Disorder. We found out more when he joined us in the stud... Sing! A Women's Chorus Mags David formed this women's chorus in 1999 with a small group of beginners learning how to sing. The group has since expanded to 30-plus members... Life Parade Life Parade is the brainchild of Cameron Mathews, who recorded the project's pop-rock debut album, Suburban Life, by himself. When the album starte... In the Spirit of Medicine: I won't miss my opportunity next time Series: In the Spirit of Medicine I was trying to get my jack under the car without kneeling and I didn't want to have my pants wet all day in the clinic. I finally got my jack unde... Bought by WTIP MN Reads: "Picnic in Venice" by Konnie Ellis Series: MN Reads The protagonist in Konnie Ellis's book Picnic in Venice "finds her way in the world through art." 
MN Reads: "Kotimaa: Homeland" by Mark Munger Our guest this morning is Mark Munger, and he's written the final chapter in his Finnish American trilogy, Kotimaa: Homeland. MN Reads: "Christmas in Minnesota" edited by Marilyn Ziebarth and Brian Horrigan "An embarassment of riches." That's what retired MNHS Press editor Marilyn Ziebarth and exhibit curator Brian Horrigan had to wade through when th... MN Reads: "A to Zåäö" by Tara Sweeney and Nate Christopherson Curiosity, collaboration, the language we all speak - artist and author Tara Sweeney talks about it all. In the Spirit of Medicine: The Red Guitar In the Spirit of Medicine features the essays of Dr. Arne Vainio, an enrolled member of the Mille Lacs Band of Ojibwe and a family practice doctor ... This Twin Cities indie folk/rock artist built a following by recording weekly YouTube videos. Del Cid has expanded her sound with the addition of a... Tent Show Radio TSR 20-03 Jontavious Willis - BCO All Aboard 01/16/20 MN Native News: 'Star Wars: The Rise Of Skywalker' Movie Review & Winter Homeless Initiative 01/16/20 The Latin Alternative 2003 01/16/20 High Country Celtic Radio 094 - Nollaig Na mBan 01/16/20 Tent Show Radio TSR 20-02 Blue Canvas Orchestra: Back to the Garden 01/10/20 High Country Celtic Radio 093 - Hogmanay Free For All 01/07/20 2019-12-27 The Big Climate Stories of 2019 01/03/20 MN90: The Locust Plagues of the 1870s 01/03/20 Staff of KUMD Chris Harwood (Admin) Christine Dean (Admin) Joel Glaser (Admin) Lisa Johnson (Admin) Maija Jenson (Admin) Reid Durbin (Admin) Justus Sanchez (Admin)
Borgu is a region of northwestern Nigeria and northern Benin that was divided between Great Britain and France by the Anglo-French Convention of 1898. Before the convention it was an independent state. Borgu is inhabited by two peoples: the Bariba and the Borgawa.

History

Information about the origin and history of precolonial Borgu was transmitted orally for centuries in the form of the legend of Kisra. The territory of Borgu stretches from the northeastern and eastern bank of the Niger River to the Alibori hills in the west, and in the south to the rainforests on the border with Yorubaland. At the height of its power, the Borgu state consisted of three principalities: Bussa, Nikki, and Illo, whose rulers traced their descent from Kisra.

The legend of Kisra

According to tradition, Kisra lived in the 7th century in Badar near Mecca, in the time of the Prophet Muhammad, who repeatedly but unsuccessfully tried to convert him to Islam before finally waging war against him. In the end, Kisra was forced to flee for his life. Migrating with his people out of Arabia and crossing the Sahara desert and the Sahel, he finally settled on the western bank of the Niger River, where his descendants became the rulers of Bussa, Nikki, and Illo. The legend of Kisra is not uniform or consistent; some variants describe only certain events, and the events may differ in their details depending on where in Borgu a version originated. For example, according to one version the three principalities were founded by three sons of Kisra: the eldest son, Woru, founded Bussa (in the west), while his younger brothers Shabi and Bio founded Nikki (in the south) and Illo (in the north). Other versions hold that Bussa was founded by Kisra himself, or that Nikki was founded by Kisra's son-in-law. Kisra is considered the forefather of Oduduwa, the founder of the state of Ife, and of the dynasty of Bussa, which was for a long time the leading city of Borgu.

Wars

Over time, Nikki and Bussa became the leading states of Borgu, taking an active part in military campaigns in Dahomey and Oyo and in the Hausa states, respectively. Bussa waged a long struggle with the state of Nupe. In the 18th century the Borgu states declined, as a result of which Nikki recognized the suzerainty of Oyo.
Bussa shrank to the outskirts of its city. Illo fell under the suzerainty of the Hausa state of Kebbi and was later absorbed into it. In 1730 Bussa became the emirate of Borgu; the ruler's title sarkin was adopted from the Hausa. With the weakening of the Oyo state at the beginning of the 19th century, Borgu reunified around Nikki, which surpassed Bussa-Borgu. It was soon forced, however, to defend itself against the Sokoto Caliphate, which subjugated Bussa. Nikki-Borgu grew into a power, extending its borders from Dahomey in the south to Sokoto in the north and, in the west, as far as the Mossi states.

The colonial era

In 1894, after France and Great Britain had conquered Dahomey and Oyo respectively, pressure on the Borgu states began. The dispute between the European powers lasted until 1898; British troops had already occupied Bussa in 1897. The Anglo-French Convention signed in 1898 completed the partition of Borgu: the eastern areas came under British rule as part of the Northern Nigeria Protectorate, while the western and central areas, including the town of Nikki, became part of the French colonial empire.
\section{Introduction} Chaotic quantum systems, such as open Quantum Dots \cite{Alhassid}, graphene flakes \cite{flakes1,flakes2,flakes3}, and nuclei exhibit common universal features characterized by fluctuating observables, such as the electronic conductance in the former two and the compound nucleus cross section in the latter \cite{weiden1,weiden2,weiden3}. Useful information about the system in this regime of the dynamics is obtained through the average of the observables and through the correlation functions, defined as the average of products of two fluctuating observables at two values of the independent variable, such as the energy or an external field. The correlation function is of paramount importance as it measures the degree of coherence present in the otherwise fully chaotic system. This measure resides in the correlation width, which specifies the shape of the correlation function. In the case of parametric energy variation the shape is Lorentzian, as shown more than half a century ago by Ericson \cite{Ericson}, while in the case of a varying external parameter, such as a magnetic field, the shape is a square Lorentzian \cite{Efetov,Vallejos, caio}. In \cite{caio}, the effect of temperature on the conductance correlation function was investigated. Deviations from these shapes have also been investigated \cite{ref2,caio}. Given their importance in the study of chaotic mesoscopic systems, nanosystems, resonators, and nuclei, a detailed investigation of the correlation functions, of their potential deviations from the two extreme shapes mentioned above, and of the resulting correlation widths is clearly called for. In this paper we calculate analytically the correlation functions for the general case of a partially open system. Three types of external magnetic fields are considered.
The energy correlation function is discussed in detail and the corresponding correlation width is extracted and found to be significantly different from the Weisskopf estimate. This last result is quite important as it affects the value of the dwell time in these systems. \section{Quantum Chaotic Scattering} Quantum Chaotic Scattering (QCS) is a widely occurring phenomenon in physics. It operates in a variety of systems, such as electronics \cite{Alhassid}, spintronics \cite{spintronica}, biomolecules \cite{bio}, disordered mesoscopic nanostructures \cite{Beenakker}, and the compound nucleus \cite{Bohr}. The emergence of the phenomenon is, just as in other cases of quantum chaos, directly related to the intrinsic chaotic dynamics associated with the quasi-bound states of a quantum system. Random Matrix Theory (RMT) supplies the formal S-matrix of QCS, as it describes the intrinsic Hamiltonian which governs the dynamics and the scattering. QCS manifests itself empirically in the universal fluctuations observed in the electronic conductance of open Quantum Dots and graphene flakes, in the transmittance of microwave and acoustic resonators, and in compound nucleus cross sections. \\ The S-matrix describing QCS is given by \begin{equation} \label{eq:SHeidelberg} S (\varepsilon, X) = \mbox{$\openone$} - 2\pi i W^\dagger (\varepsilon - H(X) + i \pi W W^\dagger)^{-1} W \;, \end{equation} where $H(X)$ is a random Hamiltonian matrix of dimension $M \times M$ that describes the resonant states in the chaotic system, which is subject to the influence of an external parameter, $X$. The number of resonances is very large ($M \rightarrow \infty$). The matrix $W$ of dimension $M \times N$ contains the channel-resonance coupling matrix elements. Using the above S-matrix, one is then able to calculate observables.
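As a concrete illustration of Eq. (1), the following minimal numerical sketch builds $S(\varepsilon)$ for a GOE Hamiltonian and checks flux conservation (unitarity of $S$), which holds exactly for Hermitian $H$ and real couplings. The matrix dimensions, the Gaussian form of $W$, and the random seed are illustrative choices, not taken from the text:

```python
import numpy as np

def heidelberg_smatrix(eps, H, W):
    """Eq. (1): S(eps) = 1 - 2*pi*i W^dag (eps - H + i*pi W W^dag)^(-1) W."""
    M, N = W.shape
    D = eps * np.eye(M) - H + 1j * np.pi * (W @ W.conj().T)
    return np.eye(N) - 2j * np.pi * (W.conj().T @ np.linalg.solve(D, W))

rng = np.random.default_rng(1)
M, N = 200, 4                                 # M resonances >> N channels
A = rng.normal(size=(M, M))
H = (A + A.T) / np.sqrt(2 * M)                # GOE member (beta = 1)
W = rng.normal(size=(M, N)) / np.sqrt(M)      # fixed channel-resonance couplings
S = heidelberg_smatrix(0.0, H, W)
unitary = np.allclose(S.conj().T @ S, np.eye(N), atol=1e-8)
```

Averaging observables built from such an $S$ over many realizations of $H$ then yields the ensemble averages discussed next.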
The ensemble average $\langle S_{cc'}(\varepsilon)S^{\star}_{cc'}(\varepsilon')\rangle$ supplies the average conductance or cross section ($\varepsilon = \varepsilon'$), while the four-point function, $\langle S_{cc'}(\varepsilon_{1})S^{\star}_{cc'}(\varepsilon_{1})S_{cc'}(\varepsilon_{2})S^{\star}_{cc'}(\varepsilon_{2})\rangle$, furnishes the correlation function, characterized by the correlation width, a quantity of great importance in the study of chaotic quantum systems.\\ In application to conductance statistics in Quantum Dots with three or more terminals it is convenient to represent the $S$-matrix as \begin{equation} S=\left(\begin{array}{ccc}r_{11} & t_{12} & t_{13} \\ t_{12}' & r_{22} & t_{23} \\ t_{13}' & t_{23}' & r_{33} \\ \end{array}\right), \end{equation} where $t_{ij}$ denotes the transmission amplitude from the channel(s) contained in terminal $i$ to the channel(s) contained in terminal $j$, and $r_{ii}$ denotes the reflection amplitude of the channels in terminal $i$. The correlation function obtained from the above is invariably of a Lorentzian shape, in the case of a variation of the energy, or a square Lorentzian, in the case of a variation of the external parameter, $X$. Deviations from these limiting cases are expected and can be derived \cite{ref2,caio}. Calculation of the ensemble averages is rather difficult, and only the Gaussian Unitary Ensemble average has been performed analytically. Most applications of the above $S$-matrix involve numerical simulations using a random-matrix generator. To obtain analytical results, an alternative method has been devised, based on the distribution of the $S$-matrix itself, which, being unitary, follows the Dyson circular ensemble. The Stub model is such an alternative \cite{brouwer,nos1,nos2}. It involves the use of the following parametrized form of the $S$-matrix.
\section{The Stub Model} We assume that the particle dynamics is ballistic and ergodic, and we model the system's statistical properties using the random matrix theory (RMT)-based stub model. Following references \cite{brouwer,nos2}, for particles with spin, the scattering matrix S can be represented as a unitary matrix with quaternionic entries and is parameterized as \begin{eqnarray} {\cal S} (\varepsilon,{\cal B})={\bar {\cal S}} +{\cal P U}[1-{\cal K}^\dagger {\cal R}(\varepsilon, {\cal X}) {\cal K} {\cal U}]^{-1}{\cal P}^\dagger. \label{SMatriz} \end{eqnarray} Here, the ${\cal U}$-matrix, of dimension $M\times M$, is the scattering-matrix counterpart of an isolated quantum system, while ${\bar {\cal S}}$ is the average of the scattering matrix ${\cal S}$ of the system, which has dimension $N\times N$. The $M$ stands for the number of resonances of the system, while $N=N_1+N_2+N_3$ is the total number of open channels. The universal regime requires $M \gg N$. The ${\cal K}$-matrix is a projection operator of order $(M-N) \times M$, while ${\cal P}$, of order $N \times M$, describes the channel-resonance couplings. Their explicit forms read $ {\cal K}_{i, j} = \delta_{i +N, j} $, $ {\cal P}_{i, j} = \textrm{diag} ( i \delta_{i, j} \sqrt{T_{1}}, i \delta_{i+N_1, j} \sqrt{T_{2}},i \delta_{i+N_1+N_2, j} \sqrt{T_{3}})$ and $ {\bar {\cal S}}_{i, j} = \textrm{diag} ( \delta_{i, j} \sqrt{1-T_{1}}, \delta_{i+N_1, j} \sqrt{1-T_{2}}, \delta_{i+N_1+N_2, j} \sqrt{1-T_{3}})$, where we assume equivalent couplings for channels in the same terminal. The ${\cal R}$-matrix (representing the external fields) is the stub-model counterpart of order $(M-N) \times (M-N)$, as described in \cite{nos2}. The parameter $X$ represents, e.g., the type of externally applied magnetic field. The quantity $T = tr(t_{12}t_{12}^{\dagger})$.
\begin{figure*}[htp] \includegraphics[width=0.8\textwidth]{prlR.pdf} \caption{\scriptsize Fig.(1a) shows the typical Lorentzian behaviour of the correlation function and the correlation length $\Gamma_{corr}$ associated with a typical transport observable between terminals $1$ and $2$, probed by additional non-linear terminals $3$, as the inset indicates. Fig.(1b) shows the deviation from the Weisskopf length in the semiclassical regime occasioned by the presence of the additional terminals. Fig.(1c) shows the ratio $D = \Gamma_{corr}/\Gamma_{W}$ as a function of $T_3$, exhibiting the attainment of the Weisskopf length only for completely open ($T_3 = 1$) or completely closed ($T_3 = 0$) extra terminals lumped into an effective one. See text for details. The continuous line is the analytical result and the dots are the results of the numerical Hamiltonian simulation using the $S$-matrix of Eq. (1). Fig.(1d) shows the deviation from the Weisskopf length, analytically and numerically.}\label{grafico1} \end{figure*} The calculation of the ensemble averages can be done in closed form using the stub $S$-matrix above. Details of such calculations can be found in the original references. The ensemble average of the conductance, $Q = tr(t_{12}t_{12}^{\dagger})$, is easily calculated \cite{Mello}, following Ref. \cite{nos2}, for a chaotic QS. We obtain \begin{eqnarray} \langle {\cal Q}\rangle &=& \frac{{\cal T}_1 {\cal T}_2}{{\cal T}}\left[1-\frac{2-\beta}{\beta}\frac{{\cal T}_1 T_2+{\cal T}_2T_1}{{\cal T}^2}\right] ,\label{condg} \end{eqnarray} where ${\cal T}_i=N_i T_i$ is the total transmission coefficient for terminal $i$ with equivalent channels and ${\cal T}={\cal T}_1+{\cal T}_2+{\cal T}_3$; the $\varepsilon$-dependence disappears in the ensemble average. In the case of the compound nucleus, the quantity $Q$ becomes the cross section and its ensemble average, similar to Eq.
(4), is known as the Hauser-Feshbach cross section \cite{HF}, with the factor $\left[1-\frac{2-\beta}{\beta}\frac{{\cal T}_1 T_2+{\cal T}_2T_1}{{\cal T}^2}\right]$ representing what is known as the elastic enhancement factor, which is known exactly at arbitrary overlap both for orthogonal and unitary symmetry \cite{verb,ref3}. Note that the number of channels, $N$, in all the terminals is taken to be large, $N \gg 1$, and thus we are in the semiclassical regime. Note further that the symmetry parameter $\beta$ takes on the value 1 in the Gaussian Orthogonal Ensemble, 2 in the Gaussian Unitary Ensemble, and 4 in the Gaussian Symplectic Ensemble. \section{The Correlation Functions} As for the correlation functions, the calculation using the stub model can be carried out as done in \cite{BHR2013}. In the following we merely write down the correlation functions for a chaotic Quantum Dot in the case of only two terminals, specified by $T_1$ and $T_2$; taking $T_1 = T_2 = T$, we find for the energy variation \begin{eqnarray} \frac{C(\delta E)}{1/8 \beta}= \frac{3 T(2-T)-2}{1+(\delta E/\Gamma_{E})^2}+\frac{4 \left[ 1 + T (T-2) \right]}{\left[ 1+(\delta E/\Gamma_{E})^2 \right]^2} \end{eqnarray} where $\beta$ is a measure of the universality class to which $H$ pertains. It should be mentioned that magneto-conductance correlation functions were calculated using the Hamiltonian approach, Eq. (1), and the study of a relevant asymptotic expansion in inverse channel number was reported in \cite{weiden99}. For the variation of the external parameter, we consider three cases.
An external perpendicular magnetic field, $B_{\perp}$, an external perpendicular magnetic field acting on the spin-orbit interaction, the so-called Rashba-Dresselhaus field, $H_{\perp}$ \cite{Rashba,Dressel}, and a parallel magnetic field, $H_{\parallel}$, whose effect has been studied recently in \cite{Harvard}, \begin{eqnarray} \frac{C(\delta X)}{1/8 \beta}= \frac {2 T(1-T)}{1+ (\delta X/\Gamma_{X})^2} + \frac{2+ T(3T-4) }{ \left[ 1+ ( \delta X/\Gamma_{X})^2 \right] ^{2}} \end{eqnarray} \begin{eqnarray} C(\delta H_{\perp})=\frac {3\,T^{2}+4-4\,T }{ \left( {{ (\delta H_{\perp}/\Gamma_{H_{\perp}})^2}}+2 \right) ^{2}}-\frac {T^{2}-2\,T }{{{(\delta H_{\perp}/\Gamma_{H_{\perp}}})^{2}+2}} \end{eqnarray} Finally, for the case of a parallel magnetic field considered recently in \cite{Harvard}, \begin{eqnarray} \frac{C(\delta H_{\parallel})}{1/8 \beta}=\frac{T(2-T)-1}{1+(\delta H_{\parallel}/\Gamma_{H_{\parallel}})^2} \end{eqnarray} This last correlation function is new and deserves some discussion. Its shape is a pure Lorentzian, whatever the value of the openness parameter (the transmission coefficient $T$), in contrast to the cases of a perpendicular magnetic field. \\ It should be mentioned that a thorough study of the energy dependence and correlation length in quantum chaotic scattering with many channels was performed in \cite{ref1}. A general relation was established there between fluctuations in scattering and the distribution of complex energies (poles of the S matrix). In particular, the correlation length was shown to be given by the spectral gap in the pole distribution, and the deviations of the gap from the (semiclassical) Weisskopf estimate [eq.(14) of the present work] were analyzed in great detail there as well.
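The deviation from a Lorentzian can be made concrete by evaluating Eq. (5) numerically. The sketch below, in units of $x = \delta E/\Gamma_E$, verifies that for $T=1$ the shape is a pure Lorentzian with half width at half maximum equal to $\Gamma_E$, while for partial openness the squared-Lorentzian admixture narrows the profile. The bisection search for the half width is an illustrative choice:

```python
def c_energy(x, T, beta=1.0):
    """Eq. (5): C(dE) in units of x = dE / Gamma_E."""
    a = 3.0 * T * (2.0 - T) - 2.0          # Lorentzian coefficient
    b = 4.0 * (1.0 + T * (T - 2.0))        # squared-Lorentzian coefficient
    return (a / (1.0 + x**2) + b / (1.0 + x**2) ** 2) / (8.0 * beta)

def hwhm(T):
    """Half width at half maximum of C(x), by bisection on [0, 5]."""
    target = 0.5 * c_energy(0.0, T)
    lo, hi = 0.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if c_energy(mid, T) > target else (lo, mid)
    return 0.5 * (lo + hi)

# T = 1: coefficients reduce to a = 1, b = 0, a pure Lorentzian of half-width 1.
# T = 0.5: the (1 + x^2)^(-2) term dominates and the half-width drops below 1.
w_open, w_half = hwhm(1.0), hwhm(0.5)
```

For $T = 0.5$ the half width comes out near $0.69\,\Gamma_E$, already anticipating the sub-Weisskopf correlation widths discussed below.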
\section{The Average Density of Maxima} The important feature that characterizes these correlation functions is that they all (except in the case of $H_{\parallel}$) deviate from a pure Lorentzian or square Lorentzian shape for an arbitrary value of the openness probability. The application to the case of the compound nucleus, Eq. (5), allows the investigation of the statistics of resonances both in the weak (isolated resonances) and strong (overlapping resonances) absorption cases, as well as in the intermediate cases. Several quantities can be obtained from the correlation functions. The average density of maxima in the fluctuating observable is one of them. In Ref. \cite{RBHL2011}, this quantity was derived and analyzed for $C(E)$ and $C(X)$. For completeness we give below the main results and extend them to other types of applied magnetic fields. \begin{eqnarray} \left< \rho_{z} \right> &=&\frac{1}{2\pi}\sqrt{\frac{T_4}{T_2}} \label{dens} \\ T_2&=&-\frac{d^2}{d(\delta z)^2}C(\delta z)\bigg|_{\delta z=0}\nonumber\\ T_4&=&\frac{d^4}{d(\delta z)^4}C(\delta z) \bigg|_{\delta z=0}.\nonumber \end{eqnarray} For the energy variation, \begin{equation} \left< \rho_{E} \right>=\frac{\sqrt{3}}{\pi \Gamma_{E}}\sqrt{\frac { 9\,{T }^{2}-18\,T+10}{ 5\,{T }^{2}-10\,T+6}} =\frac{\sqrt{3}}{\pi \Gamma_{corr}} \end{equation} For the case of an external perpendicular magnetic field, \begin{equation} \left< \rho_{X} \right>=\frac{\sqrt{3}}{\sqrt{2} \pi \Gamma_{X}}\sqrt{\frac{7\,{T }^{2}-10\,T+6 }{2\,{T }^{2}-3\,T+2}} \end{equation} For the Rashba-Dresselhaus field, \begin{equation} \left< \rho_{H_{\perp}} \right>= \frac{\sqrt{3}}{2 \pi \Gamma_{H_{\perp}}} \sqrt{\frac {7\,{T }^{2}-10\,T+6}{2\,{T }^{2}-3\,T+2}} \end{equation} Interestingly, for the case of a parallel magnetic field the result is independent of the openness parameter and is identical to the Brink-Stephen \cite{Brink} one, \begin{equation} \left< \rho_{H_{\parallel}} \right> = \frac{\sqrt{3}}{\pi \Gamma_{H_{\parallel}}}
\approx \frac{0.55}{\Gamma_{H_{\parallel}}} \end{equation} \section{The Correlation Length and Weisskopf's Estimate} The important point to emphasize here is that both the correlation function and the average density of maxima are characterized, for a fixed value of the openness probability, by a single quantity, the correlation width. In the energy variation case, this width is the inverse of the dwell time and is usually estimated using the Weisskopf expression \cite{Weisskopf}, \begin{equation} \Gamma_{E} \approx \Gamma_{W} = \frac{\overline{\Delta}}{2\pi}\sum_{c} T_{c} \end{equation} where $\overline{\Delta}$ is the average spacing between the resonances in the chaotic system, and the sum extends over all the open channels reached through the transmission coefficient $T_{c}$. Deviation from the Weisskopf estimate was calculated in \cite{Richter} using the $S$-matrix of Eq. (1). With the help of a random matrix generator, these authors calculated numerically the transmittance correlation function for microwave resonators and obtained the correlation width, $\Gamma_{corr}$, as the width at half maximum. For the ratio $D = \Gamma_{corr}/\Gamma_{W}$ as a function of $\sum_{c} T_{c}$, they found values reaching up to 1.1. In our work here, we can read off the change in the correlation width as \begin{equation} D = \frac{\Gamma_{corr}}{\Gamma_{W}} = \sqrt{\frac{ 5\,{T}^{2}-10\,T+6}{ 9\,{T }^{2}-18\,T+10}} \end{equation} For an almost closed system, $T \ll 1$, the deviation reaches the value $D = \sqrt{3/5} \approx 0.77$. Of course, in the other limit of a completely open Quantum Dot, $T \approx 1$, the ratio $D$ attains the value of unity, as expected. It is interesting to generalize our result to the case of two terminals in the presence of other terminals lumped together as an effective terminal characterized by $T_{3}$, and to study the variation of the correlation width as a function of $T_3$.
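A quick numerical cross-check confirms that inserting Eq. (5) into Eq. (9) reproduces the closed form of Eq. (10), and that the ratio $D$ of Eq. (15) interpolates between $\sqrt{3/5} \approx 0.77$ for a nearly closed system and $1$ for a fully open one. The finite-difference step size is an illustrative choice, and $\Gamma_E = 1$ is set for convenience:

```python
import numpy as np

def c_energy(x, T):
    """Eq. (5) with the overall 1/(8*beta) factor dropped (it cancels in Eq. 9)."""
    a = 3.0 * T * (2.0 - T) - 2.0
    b = 4.0 * (1.0 + T * (T - 2.0))
    return a / (1.0 + x**2) + b / (1.0 + x**2) ** 2

def density_of_maxima(T, h=0.05):
    """Eq. (9): <rho> = (1/2 pi) sqrt(T4/T2), via central differences, Gamma_E = 1."""
    f = lambda x: c_energy(x, T)
    T2 = -(f(h) - 2 * f(0.0) + f(-h)) / h**2          # T2 = -C''(0)
    T4 = (f(2*h) - 4*f(h) + 6*f(0.0) - 4*f(-h) + f(-2*h)) / h**4   # T4 = C''''(0)
    return np.sqrt(T4 / T2) / (2.0 * np.pi)

def rho_closed_form(T):
    """Eq. (10), with Gamma_E = 1."""
    return (np.sqrt(3.0) / np.pi) * np.sqrt((9*T**2 - 18*T + 10) / (5*T**2 - 10*T + 6))

def D(T):
    """Eq. (15): Gamma_corr / Gamma_W."""
    return np.sqrt((5*T**2 - 10*T + 6) / (9*T**2 - 18*T + 10))

ok = all(np.isclose(density_of_maxima(T), rho_closed_form(T), rtol=1e-2)
         for T in (0.1, 0.5, 0.9))
```

Note that $\left< \rho_{E} \right> = \sqrt{3}/(\pi \Gamma_{corr})$ with $\Gamma_{corr} = \Gamma_W D$ ties Eqs. (10) and (15) together consistently.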
Thus, we extend the stub method to the calculation of the general correlation function, including the quantum interference terms. Using the same stub ${\cal S}$-matrix described above, together with the typically large number of diagrams for the correlation function \cite{brouwer}, we calculate the average of the product of two observables, $Q(E) Q(E')$. Setting ${\cal T}_1 = {\cal T}_2 = N$ and $N_1 = N_2 = N_3 = N$ (note that in this case of more than two terminals the results we have obtained are valid for $T_1 = T_2 = 1$), we get \begin{eqnarray} \frac{{\cal C}(\delta E)}{1/\beta}&=&\frac{\left[{\cal A}_1(T_3)-8N^2{\cal T}_3^2(1-T_3)^2\right]}{N^4(2 +T_3)^6\left[1+(\delta E/\Gamma_W)^2\right]}\nonumber\\ &+&\delta_{2\beta}\frac{{\cal T}_3 {\cal A}_{2}(T_3)}{N^3(2 +T_3)^6 \left[1+(\delta E/\Gamma_W)^2\right]}\nonumber\\ &+&\frac{8 {\cal T}_3^2(1-2T_3+T_3^2)}{(2 + T_3)^6\left[1+(\delta E/\Gamma_W)^2\right]^2} \label{geral} \end{eqnarray} and, \begin{eqnarray} \frac{{\cal A}_{1}(T_3)}{N^4} &\equiv & A_{1}(T_3) =T_3^4+2 (2p +T_3)(2+T_3)T_3^{3}\nonumber\\ &+&\left[2T_3^{2}-4(2 + T_3)^{2}T_3\right.\nonumber\\ &+&\left.16(2 + T_3)^2 \right ]T_3^2\nonumber\\ &+&2(2p + T_3)\left[-4T_3^{2}(2 + T_3)\right. \nonumber\\ &+&\left.12 (2 + T_3)^2\right]T_3+8(2 +T_3)^4\nonumber\\ \frac{{\cal A}_{2}(T_3)}{N^3}&\equiv &A_{2}(T_3) = T_{3}(2 +T_3)^2\left[2(2 +T_3)+ 1\right]\nonumber\\ \end{eqnarray} Equation (\ref{geral}) shows convincingly that the correlation function is not a Lorentzian on the Weisskopf width scale; the Weisskopf width is violated and is no longer the transport correlation width. Fig. (1b) shows the modification of the Weisskopf length as a function of $T_3$. On the other hand, the amplitude of the universal fluctuations (variance), ${\textrm var}[{\cal Q}]= {\cal C}(0)=1/\beta$, is maintained regardless of the additional terminals.
Using some algebra, Eq.~(\ref{geral}) can be written in Lorentzian form as \begin{equation} \frac{C(\delta E)}{1/\beta}=\frac{1}{1+\left( \delta E/\Gamma_{corr}\right)^2}, \;\; \Gamma_{corr}\equiv \Gamma_W D\label{lorentziana} \end{equation} with \begin{eqnarray} D&\equiv& \frac{\sqrt{64T_3^{4}(1-T_3)^4+\left[A_{1}(T_3)+\delta_{2\beta}T_{3}A_{2}(T_3)\right]^2}}{A_{1}(T_3)+\delta_{2\beta}T_{3} A_{2}(T_3)}\nonumber\\&-&\frac{8T_3^{2}(1-T_3)^2}{A_{1}(T_3)+\delta_{2\beta}T_{3}A_{2}(T_3)} \label{lorentzianaC} \end{eqnarray} which is highly non-linear as a function of $T_3$. The form of Eqs.~(\ref{lorentziana}) and (\ref{lorentzianaC}) shows analytically the effect of the presence of more terminals on the correlation properties of the other two terminals: it simulates absorption of the flux in the two-terminal subsystem. The non-linear effect disappears in the limits $T_3 = 1$ (ideal) and $T_3 \rightarrow 0$ (opaque), for which $D \rightarrow 1$ and the correlation length approaches the Weisskopf length, as Fig.~(1c) shows clearly. These results were verified by numerical simulations with a random matrix generator using the $S$-matrix of Eq. (1). \begin{figure}[h!] \centering \vskip-0.4cm \includegraphics[width=\columnwidth]{flutuaQ.pdf} \caption{Typical transport observable ${\cal Q}$ as a function of $\varepsilon/\Gamma_W$ for $N=50$ open channels coupled to $5 \cdot 10^3$ resonances. For a single realization of $H$ and the same observable ${\cal Q}$, the top panel shows a single measurement in the absence of the extra terminals and the bottom panel shows the case with extra terminals and non-linear coupling $T_3=0.3$.} \label{fig:TvsEN=5} \end{figure} We perform a numerical simulation using the Hamiltonian model of Eq. (1) with a configuration of $50$ open channels and $5 \cdot 10^3$ resonances in an energy range $\Delta \varepsilon / \gamma \in [-100,100]$.
As shown in Fig.~2, the transport observable ${\cal Q}$ is ``experimentally'' obtained for the same scattering QS. For such a single QS, ${\cal Q}$ is strongly affected by the presence of the extra non-linear terminals in two ways. Firstly, the reference line of the fluctuations is suppressed (from the top to the bottom plot of Fig.~2), as can be expected from Eq.(4). Secondly, the number of maxima increases from 99 to 101 in the same energy interval, nicely confirming our analytical findings. A single experimental trace can thus confirm the violation of the Weisskopf width in the presence of the extra terminals. We therefore find excellent agreement between our analytical calculation and the numerical one. In a way, we are also confirming the results of \cite{Richter} obtained for the transmittance in a microwave cavity. \section{Conclusions} In this paper, we analytically calculate the correlation functions for chaotic systems using quantum chaotic scattering theory. We show that in the case of energy variation and in the variation of an external magnetic field, these functions deviate significantly from the expected Lorentzian and squared-Lorentzian shapes. The parameter that measures this deviation is identified as the transmission coefficient, which acts as the ``openness'' probability. We further identify the deviation of the correlation width from the Weisskopf width for an arbitrary observable of quantum transport. The stub calculation of the ensemble average of the products of four scattering matrices allows finding this deviation, of the order of $10\%$, and reveals the topological effects of other terminals on chaotic quantum transport. The consequences of our results include the increase of the dwell time of an arbitrary chaotic scattering system. The results are general and applicable to any scattering amplitudes between two terminals within the diagrammatic approach of the stub model, which is a $1/N$ expansion method.
Therefore, the deviation of the correlation width from the Weisskopf estimate can occur in spin and/or charge channels of electronic nanostructures, in the transmittance of antennas, in sub-lattice and/or sub-valley channels of graphene flakes \cite{graphene}, etc. Technological applications of the results include artificial atomic clocks by means of dwell-time modifications and cryptography based on transport observables. This work is supported in part by the Brazilian funding agencies CAPES, CNPq, FACEPE, FAPESP, the Instituto Nacional de Ci\^{e}ncia e Tecnologia de Informa\c{c}\~{a}o Qu\^{a}ntica-MCT, and the CEPID-CEPOF Project.
34 Awesome And Interesting Facts About Jared Leto Jared Joseph Leto is an American actor and singer-songwriter. After starting his career with television appearances in the early 1990s, Leto achieved recognition for his role as Jordan Catalano on the television series My So-Called Life. He made his film debut in How to Make an American Quilt and received critical praise for his performance in Prefontaine. Take a look below for 34 more awesome and interesting facts about Jared Leto. 1. Leto was born on December 26, 1971, in Bossier City, Louisiana, to Constance Leto. 2. His mother has Cajun ancestry. 3. "Leto" is the surname of his stepfather. 4. His parents divorced when he was a child, and he and his older brother, Shannon Leto, lived with their mother and their maternal grandparents, Ruby (Russell) and William Lee Metrejon. 5. His father remarried, and committed suicide when Jared was eight. 6. Leto moved frequently with his family from Louisiana to different cities around the country. 7. Leto has two younger half-brothers from his father's second marriage. 8. Constance joined the hippie movement and encouraged her sons to get involved in the arts. 9. Leto stated he "was raised around a lot of artists, musicians, photographers, painters and people that were in theater," adding that "Just having the art communal hippie experience as a child, there wasn't a clear line that was drawn. We celebrated creative experience and creative expression. We didn't try and curtail it and stunt any of that kind of growth." 10. Leto started playing music with his brother at an early age and his first musical instrument was a broken-down piano. 11. After dropping out briefly in the 10th grade, Leto decided to return and focus on his education at the private Emerson Preparatory School in Washington, D.C. 12. He was interested in large-scale visual art and enrolled at the University of the Arts in Philadelphia. 13. 
After developing an interest in filmmaking, he transferred to the School of Visual Arts in New York City. 14. While he was a student there, he wrote and starred in his own short film, Crying Joy. 15. He also attended the Corcoran School of the Arts and Design, a part of George Washington University. 16. In 1992, Leto moved to Los Angeles to pursue a career in directing, intending to take acting roles on the side. 17. He found minor roles on television shows but his first break came in 1994, after he was cast opposite Claire Danes as Jordan Catalano, her love interest, in the short-lived but well-reviewed ABC teen drama My So-Called Life. 18. In 1997, Leto starred in the biopic Prefontaine in which he played the role of Olympic hopeful Steve Prefontaine. In preparation for the role, Leto immersed himself in the runner's life, training for six weeks and meeting with members of his family and friends. 19. Building on the strong relationship between Thirty Seconds to Mars and its audience, Leto launched the social media management and digital marketing company The Hive. It is based in Studio City, Los Angeles and focuses on creative community building. 20. In 2010, Leto launched The One and Only Golden Tickets, a full service company which operates worldwide and manages exclusive services for concerts, festivals, and events. 21. In 2011, Leto launched the online platform VyRT. Created as a live video streaming service, it also features social networking and official merchandise. 22. In June 2012, VyRT was awarded Best Online Concert Experience at the O Music Awards. 23. In 2012, Leto became an investor in Surf Air, a California-based air service. 24. He is also an investor in Reddit and Robinhood Markets. 25. Leto lives a vegan lifestyle and supports animal rights. 26. In 2008, he supported California Proposition 2 regarding treatment of farm animals. 27. In the 2008 presidential election, Leto supported Senator Barack Obama of Illinois. 28.
In 2012, he chaired a Gen44 event, a campaign set up by Obama to energize voters under 40. 29. In October 2009, he raised money for the campaign against California Proposition 8, created by opponents of same-sex marriage to overturn the California Supreme Court decision that had legalized same-sex marriage. 30. In May 2012, he expressed support after hearing that Barack Obama had endorsed same-sex marriage. 31. He has been volunteering at the charity Art of Elysium, which helps children with serious medical conditions. 32. He has supported the Barbara Davis Center for Childhood Diabetes, a program specializing in type 1 diabetes research and care. 33. In 2006, he created the cover art for the album 97X Green Room: Volume 2, the proceeds from which benefited The Nature Conservancy. 34. In June 2008, he joined Habitat for Humanity to work with Thirty Seconds to Mars on a home being repaired and renovated through the Greater Los Angeles Area's "A Brush With Kindness" program.
This list provides an overview of all members of the 5th National Council of Namibia, i.e. all delegates of the National Council, the upper house of the Namibian Parliament, from 2015 to 2020. SWAPO won all 42 seats but, in the interest of democracy, voluntarily ceded two of them to NUDO and the DTA (PDM from 2017). Members External links 5th National Council of Namibia (English)
# Intensity mapping

In cosmology, intensity mapping is an observational technique for surveying the large-scale structure of the universe by using the integrated radio emission from unresolved gas clouds.

In its most common variant, 21 cm intensity mapping, the 21 cm emission line of neutral hydrogen is used to trace the gas. The hydrogen follows fluctuations in the underlying cosmic density field, with regions of higher density giving rise to a higher intensity of emission. Intensity fluctuations can therefore be used to reconstruct the power spectrum of matter fluctuations. The frequency of the emission line is redshifted by the expansion of the Universe, so by using radio receivers that cover a wide frequency band, one can detect this signal as a function of redshift, and thus cosmic time. This is similar in principle to a galaxy redshift survey, with the important distinction that galaxies do not need to be individually detected and measured, making intensity mapping a considerably faster method.[1]

## History

* Aug 1996: Madau, Meiksin & Rees[2] propose intensity mapping as a way of probing the Epoch of Reionization.
* Dec 2001: Bharadwaj & Sethi[3] propose using intensity maps of neutral hydrogen to observe the matter distribution in the post-reionisation epoch.
* Jan 2004: Battye, Davies & Weller[4] propose using 21 cm intensity maps to measure dark energy.
* Mar 2009: Cosmological HI signal observed for the first time out to redshift 1.12[5] by the Green Bank Telescope.
* Jan 2013: Construction begins on the CHIME experiment[6] in British Columbia, Canada.

## Scientific applications

Intensity mapping has been proposed as a way of measuring the cosmic matter density field in several different regimes.

### Epoch of Reionization

Between the times of recombination and reionization, the baryonic content of the Universe – mostly hydrogen – existed in a neutral phase. Detecting the 21 cm emission from this time, all the way through to the end of reionization, has been proposed as a powerful way of studying early structure formation.[7] This period of the Universe's history corresponds to redshifts of $z \approx 30$ to $z \approx 6-12$, implying a frequency range for intensity mapping experiments of 50–200 MHz.

### Large-scale structure and dark energy

At late times, after the Universe has reionized, most of the remaining neutral hydrogen is stored in dense gas clouds called damped Lyman-alpha systems, where it is protected from ionizing UV radiation. These are predominantly hosted in galaxies, so the neutral hydrogen signal is effectively a tracer of the galaxy distribution.

As with galaxy redshift surveys, intensity mapping observations can be used to measure the geometry and expansion rate of the Universe (and therefore the properties of dark energy[1]) by using the baryon acoustic oscillation feature in the matter power spectrum as a standard ruler. The growth rate of structure, useful for testing modifications to general relativity,[8] can also be measured using redshift space distortions. Both of these features are found at large scales of tens to hundreds of megaparsecs, which is why low angular resolution (unresolved) maps of neutral hydrogen are sufficient to detect them. This should be compared with the resolution of a redshift survey, which must detect individual galaxies that are typically only tens of kiloparsecs across.

Because intensity mapping surveys can be carried out much faster than conventional optical redshift surveys, it is possible to map out significantly larger volumes of the Universe. As such, intensity mapping has been proposed as a way of measuring phenomena on extremely large scales, including primordial non-Gaussianity from inflation[9] and general relativistic corrections to the matter correlation function.[10]

### Molecular and fine structure lines

In principle, any emission line can be used to make intensity maps if it can be detected. Other emission lines that have been proposed as cosmological tracers include:

* Rotational transitions in molecules, such as carbon monoxide[11]
* Fine structure transitions from species such as ionized carbon[12]
* Lyman-alpha emission from hydrogen[13]

## Experiments

The following telescopes have either hosted intensity mapping surveys, or plan to carry them out in future.

## References

1. Bull, Philip; Ferreira, Pedro G.; Patel, Prina; Santos, Mario G. (2015). "Late-time cosmology with 21cm intensity mapping experiments". The Astrophysical Journal. 803 (1): 21. arXiv:1405.1452. doi:10.1088/0004-637X/803/1/21.
2. Madau, Piero; Meiksin, Avery; Rees, Martin J. (1997). "21 Centimeter Tomography of the Intergalactic Medium at High Redshift". The Astrophysical Journal. 475 (2): 429–444. arXiv:astro-ph/9608010. doi:10.1086/303549.
3. Bharadwaj, Somnath; Sethi, Shiv K. (December 2001). "HI fluctuations at large redshifts: I – visibility correlation". Journal of Astrophysics and Astronomy. 22 (4): 293–307. arXiv:astro-ph/0203269. doi:10.1007/BF02702273.
4. Battye, Richard A.; Davies, Rod D.; Weller, Jochen (2004). "Neutral hydrogen surveys for high-redshift galaxy clusters and protoclusters". Monthly Notices of the Royal Astronomical Society. 355 (4): 1339–1347. arXiv:astro-ph/0401340. doi:10.1111/j.1365-2966.2004.08416.x.
5. Chang, Tzu-Ching; Pen, Ue-Li; Bandura, Kevin; Peterson, Jeffrey B. (22 July 2010). "An intensity map of hydrogen 21-cm emission at redshift z ≈ 0.8". Nature. 466 (7305): 463–465. doi:10.1038/nature09187. PMID 20651685.
6. "Construction begins on Canada's largest radio telescope". Phys.org. 2013-01-24. Retrieved 17 August 2014.
7. Loeb, Abraham; Zaldarriaga, Matias (May 2004). "Measuring the Small-Scale Power Spectrum of Cosmic Density Fluctuations through 21 cm Tomography Prior to the Epoch of Structure Formation". Physical Review Letters. 92 (21): 211301. arXiv:astro-ph/0312134. doi:10.1103/PhysRevLett.92.211301. PMID 15245272.
8. Hall, Alex; Bonvin, Camille; Challinor, Anthony (19 March 2013). "Testing general relativity with 21-cm intensity mapping". Physical Review D. 87 (6): 064026. arXiv:1212.0728. doi:10.1103/PhysRevD.87.064026.
9. Camera, Stefano; Santos, Mário G.; Ferreira, Pedro G.; Ferramacho, Luís (2013). "Cosmology on Ultralarge Scales with Intensity Mapping of the Neutral Hydrogen 21 cm Emission: Limits on Primordial Non-Gaussianity". Physical Review Letters. 111 (17): 171302. arXiv:1305.6928. doi:10.1103/PhysRevLett.111.171302. PMID 24206474.
10. Maartens, Roy; Zhao, Gong-Bo; Bacon, David; Koyama, Kazuya; Raccanelli, Alvise (26 February 2013). "Relativistic corrections and non-Gaussianity in radio continuum surveys". Journal of Cosmology and Astroparticle Physics. 2013 (2): 044. arXiv:1206.0732. doi:10.1088/1475-7516/2013/02/044.
11. Lidz, Adam; Furlanetto, Steven R.; Peng Oh, S.; Aguirre, James; Chang, Tzu-Ching; Doré, Olivier; Pritchard, Jonathan R. (10 November 2011). "Intensity Mapping with Carbon Monoxide Emission Lines and the Redshifted 21 cm Line". The Astrophysical Journal. 741 (2): 70. arXiv:1104.4800. doi:10.1088/0004-637X/741/2/70.
12. Gong, Yan; Cooray, Asantha; Silva, Marta; Santos, Mario G.; Bock, James; Bradford, C. Matt; Zemcov, Michael (January 2012). "Intensity Mapping of the [CII] Fine Structure Line during the Epoch of Reionization". The Astrophysical Journal. 745 (1): 49. arXiv:1107.3553. doi:10.1088/0004-637X/745/1/49.
13. Pullen, Anthony R.; Doré, Olivier; Bock, Jamie (May 2014). "Intensity Mapping across Cosmic Times with the Lyα Line". The Astrophysical Journal. 786 (2): 111. arXiv:1309.2295. doi:10.1088/0004-637X/786/2/111.
14. "Cahill Radio Astronomy Lab – CRAL". www.astro.caltech.edu. Retrieved 2017-11-06.
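As a quick illustration of the redshifted frequency ranges quoted in the article, the observed frequency of the 21 cm line scales as $1/(1+z)$. The helper below is a hypothetical sketch; the rest frequency is the standard value for the hydrogen hyperfine transition:

```python
# Hedged illustration: observed frequency of the redshifted 21 cm line.
# The helper name is my addition; the physics is just f_obs = f_rest / (1 + z).
REST_FREQ_MHZ = 1420.405751768  # 21 cm hyperfine line of neutral hydrogen

def observed_freq_mhz(z):
    """Observed frequency (MHz) of the 21 cm line emitted at redshift z."""
    return REST_FREQ_MHZ / (1.0 + z)

# Epoch of Reionization (z ~ 6-30) maps to roughly 46-203 MHz,
# consistent with the 50-200 MHz range quoted above.
print(round(observed_freq_mhz(6), 1))   # ~202.9
print(round(observed_freq_mhz(30), 1))  # ~45.8
# The z ~ 0.8 Green Bank Telescope detection corresponds to ~789 MHz.
print(round(observed_freq_mhz(0.8), 1))
```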
\section{Introduction} Artificial intelligence (AI) holds great promise for revolutionizing scientific fields as varied as biology~\cite{senior2020improved}, chemistry~\cite{huang_quantum_2020}, physics~\cite{carleo_solving_2017}, and economics \cite{jean2016poverty}. Many of AI's impressive recent successes have been in data-rich applications: AlphaFold \cite{senior2020improved}, for example, uses a library of tens of thousands of existing protein structures to build a highly successful model for protein folding. In fields that lack comparably vast data, however, a great opportunity lies in guiding the exploratory process itself to minimize the number of experiments that are required to achieve insights, i.e., active learning~\cite{settles_active_2009} (AL), and thereby accelerate the pace of scientific discovery \cite{gil2014amplify}. Such AI-guided efforts have shown great promise in the design of quantum experiments \cite{melnikov1221}, drug development \cite{murphy2011active}, wind turbine control \cite{kolter2012design}, and are of particular importance in materials research aimed at designing and optimizing functional materials that lie at the core of technological advances. High-throughput (HT) experimental synthesis and characterization of materials systems through thin-film deposition of inorganic composition spreads~\cite{dover_codeposited_2004}, so-called ``libraries'', present a promising avenue to rapidly explore a vast chemical, structural, and property space~\cite{hattrick-simpers_perspective_2016}. These methods have been well established for comprehensive synthesis of composition spaces with 2-4 components, where the resulting 10's to 1000's of materials can be evaluated via automated characterization. While this approach has been quite effective for identification of materials with desired properties, the opportunity for broader materials exploration to enable new technologies is highlighted by the limited exploration of synthesis conditions to date.
The portion of the materials search space that has been explored is vanishingly small when considering the dynamic range of thermal processing conditions, which are inherent to processing-composition-structure-property (PCSP) relationships. The breadth of relevant thermal processing conditions makes exhaustive sampling untenable. Therefore, AL is critical in reducing the number of experiments to a more tractable scale~\cite{reyes_machine_2019, sstein_progress_2019, tabor_accelerating_2018}. Recently, HT experimentation and AL techniques have been combined in a closed-loop fashion, where an AI instance iteratively proposes a sequence of experiments to explore and discover new materials. These efforts include identifying phase-change materials via Bayesian AL~\cite{kusne_on-the-fly_2020}, the discovery of NiTi-based shape memory alloys with low thermal hysteresis~\cite{xue_accelerated_2016}, the synthesis of \ce{BaTiO3}-based piezoelectrics with large electrostrain~\cite{yuan_accelerated_2018}, the selective growth of carbon nanotubes~\cite{nikolaev_discovery_2014,nikolaev_autonomy_2016}, the search for perovskite-type materials for photovoltaic applications~\cite{sun_accelerated_2019} and inorganic quantum dots~\cite{epps_artificial_2020}, maximizing hole mobility of organic solar cells~\cite{macleod_self-driving_2020}, and accelerating toughness optimization in additive manufacturing~\cite{gongora_bayesian_2020}. Despite this progress, the current state of the art exhibits considerable limitations. Chiefly, most attempts at closed-loop cycles still rely heavily on human intervention, preventing them from reaching true autonomy in materials discovery. Further, although AL guidance has recently been deployed to great effect for discovery of optical phase-change thin film materials~\cite{kusne_on-the-fly_2020}, the search space was limited to pre-synthesized compositions using a single processing condition.
The time and temperature scales relevant to non-equilibrium thermal processing of solid-state inorganic materials pose substantial problems for incorporating synthesis in an autonomous loop, although the utility of spanning synthesis and characterization in an AL framework has been clearly demonstrated by several recent autonomous workflows for chemical synthesis ~\cite{porwol_autonomous_2020, li_autonomous_2020, macleod_self-driving_2020, li_robot-accelerated_2020}. The complexity and the degrees of freedom of the PCSP space are particularly challenging to incorporate in autonomous experimentation when considering metastable materials that form far from equilibrium at different, often unpredictable processing conditions~\cite{alberi_2019_2018, saksena_metastable_2018}. Importantly, commonly deployed off-the-shelf AL models are often not sufficient for achieving highly efficient learning and are frequently outperformed by random search with twice the number of samples~\cite{li2017hyperband}, a problem that is exacerbated with increased dimensionality of the search space~\cite{ahmed2016we}. Expert human scientists navigate complex search spaces by incorporating their prior knowledge, such as physics-based models that underlie the acquired data. Incorporating such knowledge in AL often requires the development of new AI methods. Finally, exploration via AL critically relies on uncertainty quantification in the not-yet-sampled regions of parameter space, which for complex experimental workflows requires error propagation. Arguably the most immediate obstacle to accelerating experimental exploration via AL lies in the dual challenges of developing noise models for each type of experiment and integrating them into a computational framework for end-to-end uncertainty quantification. 
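To make the role of uncertainty quantification in AL concrete, the following is a minimal, self-contained sketch of uncertainty-driven acquisition with a Gaussian-process surrogate. All names, the kernel, its length scale, and the toy one-dimensional ``experiment'' are illustrative assumptions and not part of SARA's actual implementation:

```python
# Minimal sketch of uncertainty-driven active learning with a GP surrogate.
# Kernel choice, length scale, and the stand-in "experiment" are assumptions.
import numpy as np

def rbf(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Posterior mean and std of a zero-mean, unit-variance GP at Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = np.ones(len(Xs)) - np.sum(v ** 2, axis=0)
    return Ks.T @ alpha, np.sqrt(np.maximum(var, 0.0))

def experiment(x):
    """Stand-in for a costly measurement of an unknown response."""
    return np.sin(6.0 * x)

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.1, 0.9])            # two initial measurements
y = experiment(X)
for _ in range(8):                  # acquire where the model is least certain
    _, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(sigma)]
    X = np.append(X, x_next)
    y = np.append(y, experiment(x_next))
_, sigma = gp_posterior(X, y, grid)
print(len(X), sigma.max() < 0.5)    # residual uncertainty has collapsed
```

Selecting the point of maximum posterior standard deviation is the simplest acquisition rule; practical campaigns such as those cited above typically use richer acquisition functions and propagate measurement noise into the surrogate.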
In aggregate, these challenges motivate the establishment of a framework that integrates AI methods at multiple scales to perform scientifically meaningful interpretation, modeling, and uncertainty quantification of multiple streams of incoming data. Our vision of the Scientific Autonomous Reasoning Agent (SARA)~\cite{sara2020vision} is to develop a fully autonomous HT materials discovery and exploration framework by integrating robotic HT materials synthesis~\cite{li_robot-accelerated_2020} with AI instances to accelerate both materials synthesis and analysis. In particular, SARA aims to automate the representation, characterization, planning, optimization, and learning of materials knowledge in a fully integrated manner. To achieve this goal, we envision the deployment of agents, which individually specialize in specific subtasks, but closely interact with each other to accelerate the discovery efforts. These agents include, but are not limited to, synthesis and probing robotics to conduct experiments and highly optimized, physics-based AI models that evaluate currently available data with their associated uncertainties and that drive AI-guided discovery. In this work, we take strides towards realizing this vision and present a fully integrated, autonomous framework that iteratively maps out the synthesis phase boundaries of metastable compounds in a closed-loop fashion. To this end, we incorporate a system of nested~\cite{balachandran_experimental_2018} cycles harnessed by SARA's specialized AI agents to synthesize and explore thin-film libraries with lateral-gradient laser spike annealing (lg-LSA)~\cite{bell_lateral_2016}: an internal (highest frequency) autonomous loop iteratively proposes optimized property measurements of a given lg-LSA stripe using a hierarchy of optical characterization techniques, while an external autonomous loop proposes and executes the next lg-LSA synthesis via a model that aggregates knowledge obtained by inner-loop iterations.
SARA's nested synthesis, microscopy imaging, and reflectance spectroscopy loops driven by the specialized AIs with AL reflect the hierarchical nature of scientific discovery. A primary goal of studying PCSP relationships is the enumeration of all possible syntheses that yield unique materials, a knowledge base that must be built from synthesis phase diagrams over a broad range of synthesis techniques, multiple parameter spaces defined within each technique, and many experimental campaigns to map synthesis phase diagrams in those spaces. Coordination among the levels of hierarchy is critical for maximizing high-level knowledge generation from low-level experiments, which guides our development of nested AL algorithms that seamlessly incorporate task coordination and uncertainty propagation. This framework is extensible with respect to incorporation of additional levels of hierarchy and/or expansion of techniques, such as additional property measurements and on-the-fly quantum mechanical calculations~\cite{cerqueira_materials_2015,kirklin_open_2015}, that enrich the knowledge within a given level of hierarchy. Networking of capabilities and knowledge sources elevates the use of AI and AL from process optimization to accelerated scientific discovery, a grand vision of AI-assisted science. \section{Results} Our goal is to explore synthesis phase diagrams, especially the relatively unexplored ultrafast-annealing region where metastable polymorphs of metal oxides are more likely to form. Such metastable oxide materials often exhibit improved properties over thermodynamic ambient ground states, and are relevant for countless industrial applications. 
The cubic high-temperature polymorph of \ce{ZrO2}, for example, is frequently used as a thermal coating material~\cite{clarke_thermal_2005,hannink_transformation_2000,clarke_materials_2003} due to its low thermal conductivity, while the anatase phase of \ce{TiO2} has attracted interest as a photocatalytic material~\cite{satoh_metastability_2013,cui_first-principles_2016,vu_anataserutile_2012}. These are only two of the most prominent examples of materials systems where metastable phases outperform their respective equilibrium counterparts. Here we study the Bi--O system, which exhibits a rich phase diagram with dozens of experimentally observed polymorphs. In particular, we focus on the \ce{Bi2O3} composition, for which five distinct crystalline phases are known~\cite{harwig_structure_1978,cornei_new_2006}. The monoclinic $\alpha$-\ce{Bi2O3} is the thermodynamic ground state at room temperature, while four high-temperature phases have been reported: tetragonal $\beta$-\ce{Bi2O3}, bcc $\gamma$-\ce{Bi2O3}, cubic $\delta$-\ce{Bi2O3}, and orthorhombic $\epsilon$-\ce{Bi2O3}. The metastable $\delta$-phase has attracted interest as a solid oxide electrolyte in fuel cells~\cite{shuk_oxide_1996}: due to its defective fluorite-type crystal structure with a high concentration of oxygen-vacancies, $\delta$-\ce{Bi2O3} has the highest oxygen ion conductivity of any solid oxide known to date. Unfortunately, it exhibits only a narrow thermodynamic stability window between $727 - 824^\circ$C, which has so far precluded its use on an industrial scale. Substitution of yttrium or rare earth oxides can stabilize $\delta$-\ce{Bi2O3} to room temperature, but leads to a degraded ion conductivity. Hence, efforts have been aimed at finding routes to retain phase-pure $\delta$-\ce{Bi2O3} to ambient conditions~\cite{bell_room_2019}. Our samples are deposited as amorphous thin films by reactive sputtering on a silicon substrate. 
For other materials systems, composition spreads can be similarly deposited, allowing the mapping of a composition gradient $c(\mathbf{ x})$ to the location $\mathbf{ x}$ on the substrate. We process the thin film libraries using lg-LSA to form and kinetically trap metastable phases during the quench to ambient conditions. In contrast to conventional methods for annealing thin film samples, such as hot plate, furnace, and rapid thermal annealing~\cite{Borisenko1997}, lg-LSA allows a controlled and rapid thermal processing over a wide range of conditions in a spatially confined region of less than 1~mm, with quench rates of $10^4-10^7$~K/s and peak temperatures $T_p$ up to 1400~$^\circ$C (limited by melt of the silicon substrates). Scanning a laser beam with a bi-Gaussian-like power profile (see the backdrop in the left panel of Fig.~\ref{fig:overview}) over the film allows a single lg-LSA stripe to produce a spatially inhomogeneous thermal profile $T_{T_p, \tau}(x)$ (where $x$ runs across the stripe). The duration of heating is characterized by a dwell time $\tau$ defined by the ratio of the laser full-width-half-maximum divided by the scan velocity of the laser (typical dwells range from 10-10,000 $\mu$s). Hence, at a given dwell time $\tau$, a single lg-LSA experiment produces a continuous range of temperature conditions wherein phase transitions, including formation of the sought metastable phases, need to be detected with a speed and level of automation commensurate with this robotic synthesis procedure in order to fully capitalize and elevate high throughput synthesis to high throughput discovery of phase boundaries. In order to reduce both computational and experimental cost, we need to autonomously map out the processing phase space \{$\mathbf{ x}, \tau, T_p$\} with as few synthesis experiments and property measurements as possible. 
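As a small worked example of the dwell-time definition above (laser FWHM divided by scan velocity), with all numbers assumed purely for illustration:

```python
# Illustrative numbers (my assumptions) for the lg-LSA dwell-time definition:
# dwell time = laser full-width-half-maximum / scan velocity.
fwhm_um = 100.0              # assumed laser FWHM in micrometers
scan_velocity_um_s = 1.0e5   # assumed scan velocity in micrometers/second
dwell_us = fwhm_um / scan_velocity_um_s * 1.0e6
print(dwell_us)  # 1000.0 microseconds, inside the 10-10,000 us range quoted
```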
Since the lg-LSA is an irreversible method, a specific position $\mathbf{ x}$ (and potentially its associated composition $c(\mathbf{ x})$ in the presence of a composition gradient) can only be annealed once, further emphasizing the need for optimizing the selection of the processing conditions. Once an lg-LSA stripe is processed, a conclusive structural characterization across the thermal gradient is possible with grazing-incidence high-intensity X-ray diffraction (XRD) to resolve the crystal structure~\cite{sutherland_optical_2020}. However, access to synchrotron facilities capable of producing X-rays with appropriate wavelength, intensity, and $\mu$m-scale spatial resolution comprises an inherently limited resource, which motivates the development of alternative phase-boundary-detection methods. To address this issue, we developed a complementary technique based on microscopy imaging and optical spectroscopy to rapidly assess phase boundaries. We recently demonstrated that structural phase changes are directly associated with changes in the optical thin film properties of transparent films, in particular the optical thickness $nd$~\cite{sutherland_optical_2020}, where $n$ is the refractive index and $d$ is the film thickness. Essentially, the \textit{gradients} of the optical measurements across an lg-LSA stripe provide a means to map out phase boundaries without explicit crystallographic phase identification, thereby producing an unlabeled processing phase diagram without expensive XRD experiments. Herein, we describe how SARA integrates lg-LSA synthesis and optical phase boundary detection in a hierarchical autonomous workflow by employing characterization and synthesis agents, $\text{X}_{\text{AI}}$ \ (pronounced Chi AI) and $\Upsigma_{\text{AI}}$ \ (pronounced Sigma AI), respectively, as illustrated in Fig.~\ref{fig:overview}. Starting with an initial processing condition, SARA synthesizes an lg-LSA stripe on a thin film library. 
Then, SARA employs its internal characterization agent $\text{X}_{\text{AI}}$ \ to probe the stripe using a set of optical techniques: (a) microscopy imaging to rapidly inspect the anneal stripe (see top panel in ``Optical Characterization'' in Fig.~\ref{fig:overview}), and (b) more elaborate, but costly, reflectance measurements (see ``Reflectance Spectroscopy'' in Fig.~\ref{fig:overview}). In particular, $\text{X}_{\text{AI}}$ \ uses the observed features from the micrograph as prior knowledge to guide and acquire an accurate reflectance map with as few measurements as possible. The \textit{gradients} of the reflectance map are then fed into SARA's synthesis AI agent $\Upsigma_{\text{AI}}$, which incorporates the reflectance gradient information of each lg-LSA stripe into a phase boundary map as a function of the parameters \{$\mathbf{ x}, \tau, T$\}. The high-gradient regions of this map determine the boundaries between phase fields, and produce an unlabeled processing phase diagram (see ``Phase Boundary Mapping'' in Fig.~\ref{fig:overview}). $\Upsigma_{\text{AI}}$ \ is also responsible for proposing the next, most promising synthesis conditions in order to effectively explore the search space. We discuss $\text{X}_{\text{AI}}$ \ and $\Upsigma_{\text{AI}}$ \ in detail below. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Intro_04.pdf} \caption{SARA's closed-loop autonomous materials synthesis and discovery cycle. Starting from a set of initially selected processing conditions, SARA synthesizes an lg-LSA stripe on a thin film library, and subsequently sends it to its characterization AI agent, $\text{X}_{\text{AI}}$. A schematic illustration of the lg-LSA/camera setup is shown in the left panel with the bi-Gaussian power profile in the backdrop, the laser to the left, a camera to the right, and the thin-film sample mounted on a stage. 
Using a hierarchy of characterization techniques, $\text{X}_{\text{AI}}$ \ analyzes the stripe to determine intricate changes in its optical properties. In particular, $\text{X}_{\text{AI}}$ \ first acquires a microscopy image to determine the positions of likely phase boundaries, which informs the reflectance spectroscopy measurements. $\text{X}_{\text{AI}}$'s physics-informed AL model accelerates the spectroscopy acquisition, resulting in an accurate gradient model of the lg-LSA stripe. A microscopy image, the reflectance spectroscopy heat map, and the first four Legendre coefficients from the $\text{X}_{\text{AI}}$ \ representation are shown in the center panel for a representative lg-LSA stripe. Finally, the gradients are fed into SARA's synthesis AI agent, $\Upsigma_{\text{AI}}$, which generates a gradient phase boundary map, and also proposes the next experimental processing conditions to improve the phase boundary map with as few experiments as possible. A model gradient phase map showing high-gradient regions in yellow is presented in the right panel.} \label{fig:overview} \end{figure*} \subsection{$\text{X}_{\text{AI}}$: Accelerating data acquisition and characterization} \begin{figure*}[th!] \centering \includegraphics[width=0.9\textwidth]{iAI.pdf} \caption{The characterization AL loop to accelerate acquisition of the reflectance spectroscopy necessary for phase boundary detection. Panel (a) illustrates the overall workflow. A microscopy image (b) is captured to extract the stripe features, which are fed as a scaling function to the $\text{X}_{\text{AI}}$ \ kernel. The core features are the LSA prior and the RGB transition prior, which are sums of generalized Gaussian functions, as shown in panel (c). The corresponding gradient peak positions are denoted as yellow vertical lines in (b). $\text{X}_{\text{AI}}$ \ takes these functions as prior knowledge to set up a stripe-specific kernel that facilitates rapid model convergence. 
The AL loop is performed iteratively on reflectance measurements $r(x_i,\lambda)$ over positions $x_i$, which are expanded into Legendre polynomials to reduce the dimensionality (see center panel in Fig.~\ref{fig:overview}). Panel (d) shows the performance of different kernel designs, illustrating that our $\text{X}_{\text{AI}}$ \ kernel with both LSA ($\text{X}_{\text{AI}}$+LSA) and LSA+RGB ($\text{X}_{\text{AI}}$+RGB) priors outperforms other, conventional kernels. The performance of the different acquisition functions is shown in panel (e) (R: random, U: uncertainty, IU: integrated uncertainty, IGU: integrated gradient uncertainty sampling). The solid lines represent the $\text{X}_{\text{AI}}$+RGB kernel, while the dashed lines correspond to the RBF kernel.} \label{fig:iAI} \end{figure*} $\text{X}_{\text{AI}}$'s primary task is to construct an accurate reflectance spectroscopy map $r(x,\lambda)$ of an lg-LSA sample annealed at $T_p$ and $\tau$ while measuring it at as few positions $x_i$ across the stripe as possible. Since the acquisition time for a single such measurement $r(x_i, \cdot)$ is around 4.5~s, an exhaustive scan across a stripe of 1.5~mm in $10~\mu\text{m}$ intervals requires more than 11~min, forming one of the main bottlenecks of our HT experimental setup. To accelerate the reflectance data acquisition, we propose an AL scheme that takes advantage of multimodal measurements and incorporates physical structure into a Gaussian process (GP) regression model to yield highly optimized data acquisition and analysis. The overall workflow of the $\text{X}_{\text{AI}}$ \ cycle is outlined in Fig.~\ref{fig:iAI}~(a). In a first step, SARA captures a microscopy image of an lg-LSA stripe to analyze the overall condition of the anneal and to extract key features. This single RGB image of a stripe is inherently throughput-matched to the lg-LSA synthesis, producing prior knowledge for $\text{X}_{\text{AI}}$'s AL cycle to accelerate reflectance measurements. 
A representative microscopy image is shown in Fig.~\ref{fig:iAI}~(b). Such micrographs can be used to rapidly assess the conditions and the integrity of the anneal. Obvious damage to the thin film, such as delamination or scratches, and contamination, such as dust particles, residual lithography artifacts, and dirt, can be easily detected; any such defect invalidates the lg-LSA stripe and can trigger re-synthesis. Incorporating such automated quality control in the autonomous loop relieves the $\text{X}_{\text{AI}}$ \ loop of the burden of responding to invalid data, a critical aspect of autonomous workflows for robust operation.~\cite{sstein_progress_2019} SARA proceeds by constructing a stripe-specific GP kernel that incorporates the underlying physics of both the lg-LSA and optical spectroscopy processes. Notably, the bi-Gaussian power profile produces stripes of nearly perfect lateral symmetry at steady state, with their centers reaching the corresponding peak temperatures $T_p$ and the continuous variation in lateral thermal gradient mirrored on each side of the stripe. We incorporate this structure into the kernel of $\text{X}_{\text{AI}}$ \ by forcing its main component to be symmetric around the center of a stripe (see Sec. \ref{sec:iAImethods}). Additionally, SARA extracts key features of the stripe texture from the micrograph to further improve the kernel design, i.e., by identifying the stripe center, and by detecting systematic optical changes across the stripe that we associate with structural transitions~\cite{sutherland_optical_2020}. These optical transitions are identified by peaks in the gradient signal across a stripe, the locations of which are shown as vertical yellow lines in Fig.~\ref{fig:iAI}~(b). Furthermore, the two outermost detected peaks in the gradient signal give an estimate of how wide the lg-LSA stripe is, i.e., where the unannealed, amorphous film ends and the crystallization begins. 
We use slightly broadened peaks in the RGB gradient signal (purple line in Fig.~\ref{fig:iAI}~(c)) and the overall width of the lg-LSA stripe (red line in Fig.~\ref{fig:iAI}~(c)) as the RGB and LSA prior, respectively. These two functions are then used to rescale the kernel of the GP in the $\text{X}_{\text{AI}}$ \ cycle. Finally, we account for small thickness variations of the film across the stripe by adding a linear component to the kernel. To improve the efficiency of $\text{X}_{\text{AI}}$, the reflectance $r(x,\lambda)$ at any position $x$ is expanded in Legendre polynomials as a function of wavelength $\lambda$ before it is fed into the GP. Since the reflectance varies smoothly with $\lambda$, the Legendre expansion can be truncated between the \nth{10} and \nth{20} order at essentially no loss in accuracy \cite{wang2012legendre} (see Fig.~S1 in the Supplemental Materials~\cite{SupplementalMaterials}), which reduces the dimensionality from the 2046 measured photon wavelengths to a compact space of 10--20 Legendre coefficients. For our system, we use 16 coefficients throughout. Figure \ref{fig:overview} (bottom middle) shows the first four Legendre coefficients of the reflectance data and our GP model's posterior predictive mean and uncertainty for those coefficients. To demonstrate the advantage of our specialized $\text{X}_{\text{AI}}$ \ kernel with respect to a set of conventional kernels, we perform statistical benchmarks on 617 lg-LSA experiments at distinct conditions, $(T_p, \tau)_j$. For each of the stripes, we measure the reflectance at $n$ randomly selected positions $\{x_i\}_j^n$ on a grid spaced $10~\mu\text{m}$ apart, and use these measurements as inputs to a GP model with different kernels. The ground truth is exhaustively measured across the whole stripe ranging over 1.5~mm, corresponding to a total of 151 measurements. 
For a range of $n$, we repeat this test 32 times for every stripe with independent random locations and average the coefficient of determination $R^2$ for each kernel on the exhaustive data. This reduces the statistical noise in the results to a negligible value. Further, we benchmark every kernel with a range of length scales and select the best in terms of $R^2$ score (see Sec.~\ref{sec:metrics}). By construction, our benchmark disentangles the effects of AL and kernel design, and the kernel with the right inductive bias will express the data best, even if all measurements are random. The results of this benchmark are illustrated in Fig.~\ref{fig:iAI}~(d), showing the performance of the various kernels with respect to the number of random measurements $n$. The radial basis function (RBF) kernel performs poorly, barely reaching an $R^2$ score of 0.8 within 37 measurements. The Mat\'{e}rn kernel performs better, requiring $n=25$ to reach the same score. The $\text{X}_{\text{AI}}$ \ kernels perform best: depending on whether or not prior knowledge from the microscopy image is included (``$\text{X}_{\text{AI}}$+LSA'' with LSA prior only, and ``$\text{X}_{\text{AI}}$+RGB'' with LSA and gradient peak prior), we obtain an $R^2$ value of 0.8 with as few as 16 sampling points. The precise modeling of the optical measurements, its incorporation into the AL model, and the model's initialization with the RGB-image prior knowledge all contribute to the fast learning rate at the onset of autonomous experimentation, as required for efficient AL. Having designed the kernel for the $\text{X}_{\text{AI}}$ \ cycle, we turn our attention to the acquisition function, that is, the function that chooses the next measurement based on the available information. An important component of many performant acquisition functions is the reduction of uncertainty in a target variable. Here, we benchmark three different acquisition functions, two of which are non-standard. 
In particular, we study (U) uncertainty sampling, which chooses the next measurement at the point of maximum uncertainty in the Legendre coefficients; (IU) integrated uncertainty sampling, which selects the point that minimizes the integrated uncertainty over the whole sampling domain; and (IGU) integrated gradient uncertainty sampling, which is similar to IU, but reduces the overall uncertainty in the gradients of the model. Importantly, the last strategy targets our quantity of interest, since the reflectance gradients are indicative of the phase boundaries in the processing phase diagram. For this reason, we quantify the error of the model in the gradients, rather than the error to the observed data. Since we cannot directly observe the gradients, we generate ground truth data by training our GP model on the exhaustive measurements and taking the derivative of the fitted model. We then record the $R^2$ score of the derivatives of the model for each of the acquisition functions at every iteration. In Fig.~\ref{fig:iAI}~(e) we show the performance of the various sampling strategies as a function of AL iteration $i$, using either the $\text{X}_{\text{AI}}$+RGB kernel (solid lines) or the RBF kernel (dashed lines). The best performance is achieved with the stripe-specific, highly optimized $\text{X}_{\text{AI}}$+RGB kernel in conjunction with IGU sampling, reaching an $R^2$ score of 0.8 and 0.9 within 9 and 15 iterations, respectively. Note that random sampling with the best kernel design still outperforms the best sampling strategies with the worst kernel. Further, the acquisition functions do not differ markedly with the $\text{X}_{\text{AI}}$+RGB kernel, highlighting the importance of incorporating the problem structure into our AI model and AL cycle. Compared to random sampling with an RBF kernel, the best strategy accelerates the acquisition and characterization by a factor of 9.7 for an $R^2$ of 0.8, approximately one order of magnitude. 
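The Legendre-expansion step that $\text{X}_{\text{AI}}$ \ uses to compress each measured spectrum can be sketched in a few lines. The following is an illustrative Python/NumPy sketch (the authors' framework is implemented in Julia); the function names and the synthetic test spectrum are ours, not from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch of the dimensionality reduction: a smooth reflectance spectrum
# sampled at 2046 wavelengths is compressed into 16 Legendre coefficients.
def compress_spectrum(wavelengths, reflectance, degree=15):
    """Fit a degree-15 Legendre series (16 coefficients) to one spectrum."""
    # Map the wavelength grid onto [-1, 1], the natural Legendre domain.
    lo, hi = wavelengths.min(), wavelengths.max()
    x = 2.0 * (wavelengths - lo) / (hi - lo) - 1.0
    return legendre.legfit(x, reflectance, degree)

def reconstruct_spectrum(wavelengths, coeffs):
    """Evaluate the truncated Legendre series back on the wavelength grid."""
    lo, hi = wavelengths.min(), wavelengths.max()
    x = 2.0 * (wavelengths - lo) / (hi - lo) - 1.0
    return legendre.legval(x, coeffs)

# Synthetic smooth "spectrum" over 2046 wavelengths (hypothetical data):
# it compresses from 2046 values to 16 coefficients with a reconstruction
# error that is negligible for a smooth signal.
lam = np.linspace(340.0, 1026.0, 2046)
r = 0.4 + 0.1 * np.sin(2.0 * np.pi * (lam - 340.0) / 500.0)
c = compress_spectrum(lam, r)            # 16 numbers instead of 2046
r_hat = reconstruct_spectrum(lam, c)
max_err = float(np.max(np.abs(r - r_hat)))
```

Because the spectra vary smoothly with wavelength, the truncation is nearly lossless, which is what makes the compact coefficient space a suitable target for the GP model.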
\subsection{$\Upsigma_{\text{AI}}$: Accelerating exploration of phases and processing conditions} \begin{figure*}[th!] \centering \includegraphics[width=0.9\textwidth]{eAI_2.pdf} \caption{The synthesis AL loop to accelerate materials exploration. Panel (a) illustrates the overall external workflow. Starting from an initial set of conditions, an lg-LSA stripe is annealed and processed by $\text{X}_{\text{AI}}$. The gradients are then fed into the $\Upsigma_{\text{AI}}$ \ agent, which constructs a (preliminary) gradient phase map and proposes the next experimental conditions. The transformation of the $\text{X}_{\text{AI}}$ \ reflectance gradients requires a rigorous error assessment and propagation, as shown in panel (b). Due to the symmetric $\text{X}_{\text{AI}}$ \ kernel, only one side of the stripe center is sampled on a uniform temperature mesh. The errors propagated to $\Upsigma_{\text{AI}}$ \ stem from the variation in the peak temperature (peak error) and the gradient of the temperature profile (profile error). Panel (c) shows the performance of different $\Upsigma_{\text{AI}}$ \ acquisition strategies: random (R), uncertainty (U), stripe uncertainty (SU), and upper confidence bound (UCB) sampling. The solid and dashed lines correspond to GP regression with and without input uncertainty, respectively. Panel (d) shows the gradient phase map of \ce{Bi2O3}, where the peak ridges are highlighted with light lines. 
The phase regions are labeled \textit{a posteriori} with selected XRD measurements, from low to high temperatures: amorphous as-deposited ($i$), rearranged, densified amorphous ($ii$), $\delta$-\ce{Bi2O3} ($iii$), mixed-phase region of $\delta$ and $\beta$-\ce{Bi2O3} ($iv$), pure $\beta$-\ce{Bi2O3} ($v$), and, finally, melt-quenched amorphous ($vi$).} \label{fig:eAI} \end{figure*} Once an lg-LSA stripe has been processed by $\text{X}_{\text{AI}}$, its output reflectance gradient information is fed into the external synthesis AI agent, $\Upsigma_{\text{AI}}$. Its main task is threefold: (i) assemble the incoming data, (ii) propagate uncertainty from every lg-LSA experiment to predict the gradient signal and its uncertainty throughout the search space, and (iii) ultimately propose new conditions for the synthesis experiments. The overall workflow of this process, which integrates the techniques described below, is shown in Fig.~\ref{fig:eAI}~(a). The optical data of an lg-LSA anneal is processed through the nested $\text{X}_{\text{AI}}$ \ loop, the output of which is the gradient of the reflectance spectroscopy across a stripe, $g(x)= \| \partial_x r(x, \cdot) \|_2$. This spatial gradient information is then transformed onto a temperature scale based on the Gaussian-type temperature profile $T_{T_p, \tau}(x)$ shown in Fig.~\ref{fig:eAI}~(b) (blue line). Since the $\text{X}_{\text{AI}}$ \ kernel is symmetric up to the linear term, the gradient information is symmetric about the peak temperature $T_p$, so that we only need to sample $g(x)$ along one side of the stripe center (orange crosses in Fig.~\ref{fig:eAI}~(b)). In principle, a single lg-LSA stripe would produce the complete range of temperature conditions between room temperature $T_r$ and $T_p$ at a given dwell time $\tau$. Hence, the set of metastable materials and their transition conditions would be available from a single stripe if one selected a high $T_p$ (e.g., $1400^\circ$C) and $T_\text{min}=T_r$. 
In practice, the concomitant increase in temperature gradient with $T_p$ would require progressively higher spatial resolution to characterize the full range of transitions, and would result in undesirably high uncertainty in the modeled temperature. With our experimental characterization technique, the spatial resolution is limited to approx.~$10~\mu$m, and thus the design of lg-LSA synthesis conditions must account for the position-dependent temperature variation within a single spectroscopy measurement. This makes the selection of $T_p$ at a given $\tau$ a non-trivial decision based on the aggregate information that can be gained from the entire lg-LSA stripe. Properly propagating the multiple sources of uncertainty from synthesis and characterization through the model of the phase boundary map is extremely important: in standard GP regression, the inputs are assumed to be free of noise, but accounting for such errors is crucial when dealing with experimental measurements. Here, we include and propagate the uncertainties of the inputs due to two sources. First, the peak temperature reached in an lg-LSA anneal can vary within an error range of up to $\sigma_{T_p} = 25^{\circ} \text{C}$ at 1400~$^\circ$C due to fluctuations in the laser power, even after reaching steady state (peak error in Fig.~\ref{fig:eAI}~(b)). Second, the temperature profile itself gives rise to an error proportional to the spatial rate of change, $\sigma_T(x) \propto |\partial_x T_{T_p, \tau}(x)|$, as shown by the green area in Fig.~\ref{fig:eAI}~(b). Note that the error bars in Fig.~\ref{fig:eAI}~(b) are not to scale and are intended solely as a schematic illustration. As opposed to the $\text{X}_{\text{AI}}$ \ model of optical spectroscopy, the gradient map in synthesis space has no analogous physics-based model, in part because too few synthesis phase diagrams are known, and their underlying features remain an open question. 
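The two input-error sources described above (peak error and profile error) can be illustrated with a small numerical sketch. Note that the Gaussian lateral profile shape, its width, the use of the $10~\mu$m probe size as the proportionality constant in $\sigma_T(x) \propto |\partial_x T(x)|$, and the quadrature combination are all our illustrative assumptions, not the calibrated lg-LSA temperature model.

```python
import numpy as np

# Illustrative sketch of the two input-error sources, assuming (for
# illustration only) a Gaussian lateral temperature profile
#   T(x) = T_r + (T_p - T_r) * exp(-x^2 / (2 s^2)).
T_r, T_p = 25.0, 1400.0   # room and peak temperature [deg C]
s = 160.0                 # assumed lateral profile scale [um]
sigma_Tp = 25.0           # peak-temperature error at 1400 C [deg C]
dx = 10.0                 # spatial resolution of one measurement [um]

def T(x):
    return T_r + (T_p - T_r) * np.exp(-x**2 / (2.0 * s**2))

def sigma_profile(x):
    # Profile error: temperature change across one 10-um probe spot,
    # sigma_T(x) ~ |dT/dx| * dx.
    dTdx = np.abs(-(x / s**2) * (T(x) - T_r))
    return dTdx * dx

# One symmetric half of the stripe; combine the two (assumed independent)
# sources in quadrature, scaling the peak error with the local temperature
# rise (an assumption made here purely for illustration).
x = np.linspace(0.0, 750.0, 151)
sig = np.sqrt(sigma_Tp**2 * ((T(x) - T_r) / (T_p - T_r))**2
              + sigma_profile(x)**2)
# The profile error vanishes at the stripe center, where the gradient is
# zero, and dominates on the steep flanks of the profile.
```

Under these assumptions, the total input error at the stripe center reduces to the peak error alone, while on the flanks the profile term dominates, mirroring the schematic in Fig.~\ref{fig:eAI}~(b).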
The large dynamic range of dwell time motivates its logarithmic sampling, and the distinct influence of temperature and dwell time on synthesis motivates independent parameterization of these two dimensions. While we aim to learn more structured representations of synthesis phase diagrams in future refinements of the SARA framework, for the purposes of the present work, we find that a Mat\'{e}rn kernel with separate length scales for the temperature and dwell time dimensions enables rapid model convergence in the $\Upsigma_{\text{AI}}$ \ loop while remaining flexible with respect to the gradient map in synthesis phase space. In contrast to the $\text{X}_{\text{AI}}$ \ cycle, there is more opportunity to incorporate structure based on prior knowledge into the acquisition function, rather than the kernel, of $\Upsigma_{\text{AI}}$. As shown in Fig.~\ref{fig:eAI}~(c), random sampling performs only slightly worse than more sophisticated acquisition methods like uncertainty sampling or upper confidence bound (UCB) sampling in terms of $R^2_s$, a generalization of $R^2$ that takes into account the heteroscedasticity of the data due to the propagation of uncertainty (see Sec. \ref{sec:metrics}). This behavior can be understood as follows: every experiment at $\{T_p, \tau\}$ produces a range of temperatures $T<T_p$ at which new information is obtained, thereby reducing the uncertainty not only at $T_p$, but in a wide range of temperatures below it. Hence, uncertainty sampling at $T_p$ and $\tau$ alone is a poor strategy. To address this issue, we introduce stripe uncertainty (SU) sampling, which takes into account the uncertainty in the \textit{whole temperature range} between $T_\text{min}$ and $T_p$. 
{This strategy greatly improves performance, reaching $R^2_s >0.7$ within 15 iterations.} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Flow.pdf} \caption{The evolution of the actively learned gradient phase map of the \ce{Bi2O3} system at selected numbers of iterations $n$. We use the stripe uncertainty (SU) acquisition strategy, starting from a randomly selected condition $(T_p,\tau)_1$. The gradient ridges are shown as white lines, and the conditions $(T_p,\tau)_i$ at which the experiments have been performed are shown as white crosses (note that not all crosses are shown, since the plots have been cropped to a smaller range than the range of experimental conditions). For each panel, the number of sampled conditions $n$ is indicated at the top, together with the corresponding $R^2_s$ score. The final, exhaustively sampled phase map is shown in panel (f).} \label{fig:evolution} \end{figure*} The plot in Fig.~\ref{fig:eAI}~(d) shows the gradient heat map of \ce{Bi2O3} from an exhaustive sampling of all 617 lg-LSA stripes, with gradient peaks at every value of $\tau$ highlighted in white. Note that these peaks are connected and form ridges that can be interpreted as phase-field boundaries. To label these phase fields, we selectively collect and analyze XRD data of lg-LSA stripes annealed at conditions close to the field centers (see Supplemental Materials for details~\cite{SupplementalMaterials}). Notably, only a few XRD measurements suffice to label the phase map once the phase boundaries have been determined via the reflectance data. The phase field $i$ below approx.~$350^\circ$C corresponds to the as-deposited amorphous film, while a slight gradient ridge separates it from $ii$, a densified, amorphous regime. At approx.~$500^\circ$C there is a ridge that extends across the complete dwell range, corresponding to the crystallization onset of the $\delta$-phase in domain $iii$. 
The boundary separating $iii$ from $iv$ at approx.~$550^\circ$C corresponds to the onset of a two-phase region, where the $\delta$- and $\beta$-phases of \ce{Bi2O3} coexist, and above approx.~$650^\circ$C we observe phase-pure $\beta$-\ce{Bi2O3} in $v$. The phase field above approx.~$810^\circ$C corresponds to amorphous \ce{Bi2O3} that reforms after quenching from the melt (the bulk melting temperature of \ce{Bi2O3} is $817^\circ$C~\cite{madelung_bismuth_1998}) and stretches out across all values of $\tau$. A representative evolution of the actively learned gradient phase map is shown in Fig.~\ref{fig:evolution}, with six snapshots in panels (a) to (f). Panel (a) shows the preliminary gradient map at iteration $n=3$: the two gradient ridges spanning all dwells qualitatively correspond to the crystallization boundaries either from the melt ($v$--$vi$ in Fig.~\ref{fig:eAI}~(d)) or from the deposited, densified thin film ($ii$--$iii$). At $n=8$ in panel (b), we detect the onset of the two-phase region ($iii$--$iv$), and at $n=15$ (panel (c)) the phase-pure $\beta$-\ce{Bi2O3} boundary ($iv$--$v$). With only $n=25$ iterations we identify the last boundary, namely the amorphous densification onset ($i$--$ii$, see panel (d)). At this point, the overall features of the gradient phase map are already captured qualitatively, and subsequent iterations merely refine the boundary locations (panel (e)), approaching the exhaustive phase map in panel (f). Two factors are crucial for $\Upsigma_{\text{AI}}$ \ to achieve an approximately 14-fold acceleration in reaching $R^2_s = 0.7$ compared to random sampling without propagation of input uncertainties. First, incorporating materials synthesis into our SARA discovery framework allows us to check for convergence of the phase diagram on the fly. 
Even with random sampling, the possibility of quantifying the progress and monitoring convergence in the gradient mapping informs us of how well the phase space has been sampled, thereby significantly reducing the resource cost. Second, the comprehensive uncertainty propagation in conjunction with the stripe uncertainty acquisition function realizes the full potential of AI and AL and decreases the number of required samples to a fraction of the exhaustive measurements. Importantly, SARA's overall AL acceleration is the {\it product} of the acceleration factors of $\text{X}_{\text{AI}}$ \ and $\Upsigma_{\text{AI}}$, due to the cycles' nested design. \section{Discussion} In conclusion, we have developed SARA, an AI-driven autonomous closed-loop materials discovery framework that integrates robotic materials synthesis with automated microscopy imaging and reflectance spectroscopy characterization. SARA incorporates a set of nested AL loops based on specialized physics-inspired Gaussian process regression models to synthesize, characterize, and iteratively explore non-equilibrium synthesis phase maps using high-throughput lg-LSA thin-film processing. In particular, { SARA tightly integrates the physics of the experiments and quantifies} experimental uncertainties in both the inputs and the outputs of the model. We highlight { SARA's capabilities on the \ce{Bi2O3} system by showing that SARA reduces the time to map the system's phase boundaries by more than two orders of magnitude compared to random or exhaustive searches. In particular, SARA identifies the synthesis conditions that trap metastable $\delta$-\ce{Bi2O3}, a promising solid oxide electrolyte for fuel cell applications, at room temperature.} This speedup in synthesis and data acquisition is a fundamental prerequisite for paving the way towards exploratory HT efforts with additional chemical degrees of freedom, extended processing parameters, and property-optimization targets. 
The gradient phase map construction can be extended to additional degrees of freedom, e.g., to composition spreads over a continuous range of chemistries. { SARA's nested AI architecture also allows the incorporation of additional agents for multi-objective optimization efforts by including robotic measurements of target properties.} In addition to phase boundary mapping, research objectives for which SARA would enable new modalities of materials design include: discovery of a synthesis condition for a not-yet-synthesized phase, extension of the optical spectroscopy to characterize visible absorption to identify syntheses of materials for solar energy applications, and incorporation of new performance characterization such as electrical conductivity measurements. These latter examples involve mapping of synthesis phase diagrams in the context of performance metrics for a target application, the central goal of studying PCSP relationships. SARA's autonomous execution of such studies constitutes a grand vision of AI-assisted materials science. \section{Materials and Methods} \subsection{Experiments and measurements} \subsubsection{Thin film deposition} We used thermally oxidized (200~nm oxide), highly doped (\emph{p}-type, 0.01--0.02~$\Omega$~cm) Si wafers with lithographically patterned gold alignment marks as substrates for our thin film deposition. RF reactive sputtering from a \ce{Bi} target in an atmosphere of 8~mTorr Ar and 2~mTorr \ce{O2} was employed to deposit the \ce{Bi2O3} sample in a custom-built sputter system. The substrate was rotated while operating the target at an RF power of 20~W to create a 170~nm thick film with $<10\%$ thickness variation. 
\subsubsection{Lateral gradient laser spike annealing} The lg-LSA anneals were conducted using a CW \ce{CO2} laser operating at $\lambda = 10.6\ \mu$m and a maximum power of 125~W, which was configured to produce a power profile with a bi-Gaussian shape ($320~\mu$m-wide lateral FWHM and $80~\mu$m-long longitudinal FWHM). To reach steady state, each anneal was conducted on a 5~mm long stripe, with peak temperatures ranging from 400$^{\circ}$C to 1300$^{\circ}$C and processing dwell times between 250~$\mu$s and 10~ms. The stripes were located 2~mm apart to avoid thermal overlap between anneals. With this configuration, a 100~mm diameter wafer offers space for a total of up to 625 stripes with distinct anneal conditions. Note that the dwell $\tau$ is related to the scan velocity $v$ via the FWHM of the laser in the scan direction (longitudinal) through $\tau=\frac{\text{FWHM}}{v}$; $\tau$ is approximately the time scale during which the temperature is within 5~\% of the peak temperature~\cite{bell_lateral_2016}. To avoid potential location bias on the wafer arising from variations in film thickness, the anneal locations were randomized across the thin film with respect to $T_p$ and $\tau$. In total, we annealed 617 lg-LSA stripes on our \ce{Bi2O3} sample with $400 \leq T_p \leq 1300^\circ$C and $250~\mu\text{s} \leq \tau \leq 10{,}000~\mu\text{s}$. \subsubsection{Microscopy imaging} We used a Thorlabs CMOS camera (RGB channels with $1024 \times 1280$ pixels), which was aligned normal to the sample, together with coaxial illumination using white light over a spot size of approximately 1~mm in diameter. The camera magnification was set to produce a field of view (FOV) of approximately 1~mm horizontally, resulting in a spacing of 0.92~$\mu$m between pixels. 
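The dwell-time relation $\tau=\text{FWHM}/v$ stated above can be checked with a quick unit conversion. This sketch is ours; the scan velocities below are not quoted in the paper but are implied by the $80~\mu$m longitudinal FWHM and the stated dwell range of 250~$\mu$s to 10~ms.

```python
# Sanity check of tau = FWHM / v using the 80-um longitudinal laser FWHM.
FWHM_UM = 80.0  # longitudinal laser FWHM [um]

def dwell_us(scan_velocity_mm_s):
    """Dwell time in microseconds for a given scan velocity in mm/s."""
    v_um_per_us = scan_velocity_mm_s * 1e-3  # 1 mm/s == 1e-3 um/us
    return FWHM_UM / v_um_per_us

# The stated dwell range 250 us .. 10 ms implies (under this relation)
# scan velocities of 320 mm/s down to 8 mm/s, respectively.
tau_short = dwell_us(320.0)   # -> 250 us
tau_long = dwell_us(8.0)      # -> 10,000 us = 10 ms
```

The inverse relation, $v = \text{FWHM}/\tau$, is equally useful when planning a stripe for a target dwell.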
\subsubsection{Reflectance spectroscopy} A white light source ($400 <\lambda < 900$~nm) was focused down to a single 10~$\mu$m diameter spot using optical fibers to locally illuminate the sample, allowing spatially resolved reflectance measurements. We used a Flame spectrometer from Ocean Optics to collect the reflectance spectra with an optimized integration time of $\approx 4500$~ms. The reflected light was measured from $\lambda=340$~nm to $\lambda=1026$~nm at 2046 discrete values. The reflectance data was calibrated and normalized with respect to a dark reference spectrum and a spectrum from an Ag-coated mirror. For the exhaustive reflectance measurements, the optical fiber was scanned across an lg-LSA stripe over a range of 1.5~mm in 10~$\mu$m increments, leading to 151 samples per stripe. \subsubsection{X-ray diffraction} The XRD data were collected using the ID3B beamline at the Cornell High Energy Synchrotron Source (CHESS) with a 9.7~keV beam, which was focused to a spot size on the sample of $20\ \mu\text{m}\times40\ \mu\text{m}$ at a 2$^\circ$ angle of incidence. A Pilatus 300K detector was used to capture the diffracted signal. The XRD data were collected every 10~$\mu$m across a stripe with a 50~ms integration time for each frame. The 2D detector data were integrated along the $\chi$ direction using pyFAI~\cite{Ashiotis2015}. \subsection{Computational methods} In the following, bold lower case letters refer to vectors and bold upper case letters refer to matrices. Given a collection of inputs $\bold X = [\bold x_1, \hdots, \bold x_n]$ of a function $f$, we let $f(\bold X)$ be the result of the application of $f$ to each column of $\bold X$, $f(\bold X) := [f(\bold x_1), \hdots, f(\bold x_n) ]$. \subsubsection{Gaussian Processes} A Gaussian Process (GP) is a distribution over functions whose finite-dimensional marginal distributions are multivariate normal. 
That is, for any sample $f$ of a GP, and any finite selection of inputs $\bold X$, we have $ f(\bold X) \sim \mathcal{N}(\boldsymbol \mu_{\bold X}, \boldsymbol \Sigma_{\bold X})$, for some mean vector $\boldsymbol \mu_{\bold X}$ and covariance matrix $\boldsymbol \Sigma_{\bold X}$. In fact, analogous to the multivariate case, a GP is completely defined by its first and second moments: a mean {\it function} $\mu(\cdot)$ and a covariance {\it function} $\kappa(\cdot, \cdot)$, also known as a kernel. In particular, if $f \sim \mathcal{GP}(\mu, \kappa)$ then for any finite collection of inputs $\bold X$, \begin{equation} f(\bold X) \sim \mathcal{N}(\mu(\bold X), \kappa(\bold X, \bold X)), \end{equation} where $\kappa(\bold X, \bold X)$ is the matrix whose $(i,j)^\text{th}$ entry is $\kappa(\bold x_i, \bold x_j)$. Fortunately, the posterior mean $\mu_p$ and posterior covariance $\kappa_p$ of a GP conditioned on observations with normally-distributed noise have closed forms and only require linear algebraic operations: \begin{equation} \label{eq:gp_posterior} \begin{aligned} \mu_p(\bold x_*) &= \mu(\bold x_*) + \kappa(\bold x_*, \bold X) \bold \Sigma_{\bold X}^{-1} (\bold y - \mu(\bold X)),\\ \kappa_p(\bold x_*, \bold x_*') &= \kappa(\bold x_*, \bold x_*') - \kappa(\bold x_*, \bold X) \bold \Sigma_{\bold X}^{-1} \kappa(\bold X, \bold x_*'),\\ \end{aligned} \end{equation} where for homoscedastic regression, $\bold \Sigma_{\bold X} = \kappa(\bold X, \bold X) + \sigma_y^2 \bold I$ and $\sigma_y$ is the standard error of the target $\bold y$. Since a GP's behavior is chiefly determined by the kernel, its performance can be improved dramatically by incorporating important problem structure into the kernel. For more background on Gaussian processes, see \cite{rasmussen:2005:gpml}. For the present work, we developed a Gaussian process framework in Julia \cite{julia2017bezanson} with which we implemented SARA's active learning technology.
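The posterior formulas in Eq.~\eqref{eq:gp_posterior} require nothing beyond a kernel evaluation and a linear solve. The following minimal sketch (not the Julia framework described above; the squared-exponential kernel, zero prior mean, and all numbers are placeholders) illustrates them for one-dimensional homoscedastic regression:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel k(a, b) = exp(-|a - b|^2 / (2 l^2)).
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, X_star, sigma_y=0.1, length_scale=1.0):
    # Homoscedastic GP regression with zero prior mean (Eq. gp_posterior with mu = 0).
    K = rbf_kernel(X, X, length_scale) + sigma_y**2 * np.eye(len(X))
    K_star = rbf_kernel(X_star, X, length_scale)
    K_ss = rbf_kernel(X_star, X_star, length_scale)
    mu_p = K_star @ np.linalg.solve(K, y)
    cov_p = K_ss - K_star @ np.linalg.solve(K, K_star.T)
    return mu_p, cov_p

X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mu, cov = gp_posterior(X, y, np.array([0.5, 1.5]))
```

With small observation noise, the posterior mean interpolates the data and the posterior variance collapses near measured locations, which is the behavior the active-learning loops below exploit.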
\subsubsection{Active Learning} The field of active learning considers the problem of selecting data in an optimal way in order to reduce the total amount of data that is required to effectively train a model \cite{cohn1996active, settles_active_2009}. To this end, the notion of an {\it acquisition function} is important. An acquisition function $a(\bold X, \bold y)$ depends on the currently available data and outputs a suggested observation $\bold x^*$. For example, if $f | \bold X, \bold y$ denotes the posterior of $f$ after having seen the data, and var$(f| \bold X, \bold y)$ is the posterior variance (itself a function), then \begin{equation} \label{eq:us} \arg \max_{\bold x^*} \text{var}(f | \bold X, \bold y)(\bold x^*) \end{equation} defines an acquisition strategy known as {\it uncertainty sampling}. Other acquisition functions are based on upper-confidence bounds, expected improvement, and probability of improvement. Overall, an important ingredient for active learning is the quantification of uncertainty, which is a strength of Bayesian models. In the realm of Bayesian models, Gaussian processes are of particular importance because of their unique combination of flexibility, closed-form inference formulas, and uncertainty quantification. For these reasons, we chose to build SARA's computational backbone on Gaussian processes. \subsubsection{Input Noise} \label{sec:inputnoise} Due to the importance of uncertainty quantification for AL, it is critical to take all sources of uncertainty into account. In the case of SARA, it is crucial to account not only for errors in the measurements (i.e., model outputs), but also in the experimental conditions (i.e., model inputs), due to intrinsic experimental uncertainties in the temperature profile. However, the general problem of posterior inference with input noise is intractable.
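Uncertainty sampling, Eq.~\eqref{eq:us}, can be sketched over a discrete candidate grid as follows (a simplified stand-in, not SARA's implementation; the RBF kernel, noise level, and candidate grid are illustrative assumptions):

```python
import numpy as np

def rbf(A, B, l=1.0):
    return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / l**2)

def posterior_variance(X, candidates, sigma_y=0.1):
    # Posterior variance needs only the measurement locations X, not the values y.
    K = rbf(X, X) + sigma_y**2 * np.eye(len(X))
    K_star = rbf(candidates, X)
    # Prior variance is 1 for this kernel; subtract the explained part.
    return 1.0 - np.einsum('ij,ji->i', K_star, np.linalg.solve(K, K_star.T))

def uncertainty_sampling(X, candidates):
    # Eq. (us): propose the candidate with maximal posterior variance.
    return candidates[np.argmax(posterior_variance(X, candidates))]

X = np.array([0.0, 1.0])                 # locations measured so far
x_next = uncertainty_sampling(X, np.linspace(-2, 3, 51))
```

As expected, the proposal lands far from the existing measurements, where the posterior variance is largest.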
For this reason, one needs to employ approximate methods like variational approximations \cite{titsias2009variational, lazaro2011variational}, Markov-chain Monte Carlo \cite{neal1997monte}, or methods that transform the problem of homoscedastic regression with input noise to one of heteroscedastic regression \cite{le2005heteroscedastic} without input noise \cite{kersting2007most, dallaire2009learning, snelson2012variable}. A particularly efficient technique is that of McHutchon and Rasmussen \cite{mchutchon2011input}, which is based on propagating the input uncertainty using a linear approximation of the standard posterior mean. According to this model, given the regular posterior mean $\mu_p(x)$, the input-noise-corrected version can be computed by updating \begin{equation} \label{eq:nigp} \bold \Sigma_{\bold X} \leftarrow \bold \Sigma_{\bold X} + \text{diag}(\sigma_x(\bold X) \odot \partial_x \mu_p(\bold X))^2 \end{equation} in the equations~\eqref{eq:gp_posterior} for the GP posterior. Notably, we generalize the original work by making the input uncertainty $\sigma_x(\bold X)$ dependent on the input. This is possible because the non-constant uncertainties in SARA's experimental process can be estimated well by physical considerations (see Section \ref{sec:eAImethods} for details). Lastly, note that Eq.~\eqref{eq:nigp} makes the approximate posterior uncertainty dependent on the values of the data via the posterior mean, not just the locations of the measurements. \subsubsection{$\text{X}_{\text{AI}}$} \label{sec:iAImethods} The goal of the $\text{X}_{\text{AI}}$ \ is to infer the reflectance $r(x, \lambda)$ using as few measurement locations $x_i$ as possible. Each measurement of the inner loop acquires the wavelength-dependent spectroscopic reflectance of the underlying thin film, that is, a vector whose entries correspond to reflectance intensities at a given wavelength.
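Returning to the input-noise correction of Eq.~\eqref{eq:nigp}, the update amounts to inflating the diagonal of the noise covariance by the squared slope of the standard posterior mean, scaled by the (here input-dependent) input noise. A minimal sketch, with the slope obtained by finite differences and all kernels, data, and noise levels as illustrative placeholders:

```python
import numpy as np

def rbf(A, B, l=1.0):
    return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / l**2)

def nigp_covariance(X, y, sigma_y, sigma_x_fn, eps=1e-4):
    # Eq. (nigp): Sigma_X <- Sigma_X + diag(sigma_x(X) * d/dx mu_p(X))^2.
    Sigma = rbf(X, X) + sigma_y**2 * np.eye(len(X))
    alpha = np.linalg.solve(Sigma, y)
    mu = lambda Z: rbf(Z, X) @ alpha          # standard posterior mean
    slope = (mu(X + eps) - mu(X - eps)) / (2 * eps)
    return Sigma + np.diag((sigma_x_fn(X) * slope)**2)

X = np.array([0.0, 0.5, 1.0, 1.5])
y = 2.0 * X                                   # steep data -> larger correction
Sigma_corr = nigp_covariance(X, y, sigma_y=0.05,
                             sigma_x_fn=lambda X: 0.1 * np.ones_like(X))
```

The correction is largest where the posterior mean is steep, which is exactly where position errors translate into large effective output errors.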
To aid the efficiency of our model, we first reduce the dimension of the output by projecting it onto the basis of a small number (10 to 20) of Legendre polynomials. Since the signal is smooth as a function of wavelength, it admits a sparse approximation in this basis, allowing the compression of the signal with virtually no loss of information \cite{wang2012legendre} (see also Supplemental Materials). The AL cycle then works on the dimensionality-reduced form of the reflectance data. In the following, we describe the construction of the $\text{X}_{\text{AI}}$ \ kernel function, which integrates special structure of the data and is a critical part of the $\text{X}_{\text{AI}}$. In particular, the kernel incorporates 1) lateral symmetry, 2) variance scaling based on RGB data, and 3) asymptotically linear behavior. Starting with a Mat\'ern 5/2 kernel $k$ with a length scale $l$, we symmetrize it via $k_{\text{sym}}(x,y) = k(x, y) + k(x-c, c-y)$ around the stripe center $c$, which we estimate from the RGB images. We incorporate further information from the RGB images by scaling the kernel with the LSA or RGB prior function $f_{\text{rgb}}$ shown in Figure \ref{fig:iAI} (c). In particular, we use the peaks in the RGB gradient signal and slightly broaden them by a Gaussian with $\sigma=20~\mu\text{m}$, and sum them to form our RGB prior function (purple line in Fig.~\ref{fig:iAI}~(c)). Additionally, the overall width of the lg-LSA stripe gives rise to the LSA prior, which is a generalized Gaussian with a wide shape parameter of $\beta = 4$ and a scale parameter $\sigma$ defined by the stripe width (red line in Fig.~\ref{fig:iAI}~(c)). $f_{\text{rgb}}$ is then given by a weighted sum of these two prior functions. This scaling constrains the search space, since we do not expect much change in the underlying material if the experimental conditions (e.g., temperature) stay similar, and gives rise to the kernel $f_{\text{rgb}}(x) k_{\text{sym}}(x, y) f_{\text{rgb}}(y)$.
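The Legendre-basis compression described at the start of this subsection can be sketched as follows (a synthetic smooth "spectrum" stands in for the measured reflectance; the grid, degree, and function are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import legendre

# Smooth synthetic spectrum on wavelengths rescaled to [-1, 1].
lam = np.linspace(-1, 1, 400)
r = 0.3 + 0.2 * np.sin(3 * lam) + 0.1 * lam**2

# Least-squares fit of the first 15 Legendre coefficients (degree 14).
coef = legendre.legfit(lam, r, deg=14)
r_hat = legendre.legval(lam, coef)

compression = len(lam) / len(coef)    # ~27x fewer numbers per spectrum
rel_err = np.linalg.norm(r - r_hat) / np.linalg.norm(r)
```

For smooth signals the relative reconstruction error is negligible, so the AL cycle can operate on the handful of coefficients instead of the full spectrum.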
Lastly, we incorporate an asymptotically linear behavior, due to thickness variations in the wafer, with the linear kernel $k_{\text{line}}(x,y) = x \cdot y + b$, where $b$ is a constant that controls the variance of the bias term of the line. As a result, the $\text{X}_{\text{AI}}$ \ kernel for one Legendre coefficient is proportional to \begin{equation} k_{\text{$\text{X}_{\text{AI}}$}}(x,y) = f_{\text{rgb}}(x) k_{\text{sym}}(x, y) f_{\text{rgb}}(y) + k_{\text{line}}(x, y). \end{equation} For all the Legendre coefficients, we then use a GP with the kernel $a_i k_{\text{$\text{X}_{\text{AI}}$}}(x, y)$, where $\{a_i\}$ are scaling coefficients that incorporate the different variances of the Legendre coefficients, to learn the reflectance map. This can also be interpreted as computing a vector-valued GP with the matrix-valued kernel $\bold K_{\text{$\text{X}_{\text{AI}}$}}(x,y) = \text{diag}(\bold a) \ k_{\text{$\text{X}_{\text{AI}}$}}(x,y)$, where $\bold a$ is the vector of scaling coefficients. For a comprehensive review on matrix-valued kernels, see \cite{alvarez:2012:kvf}. The length scale $l$ of the Mat\'ern kernel can be optimized via maximization of the marginal likelihood \cite{rasmussen:2005:gpml}. However, to make the reported results in Fig. \ref{fig:iAI} (d) independent of this non-convex optimization procedure, we ran the benchmarks using a range of fixed length scales and reported the best performing combination for each kernel. Regarding the acquisition function, in addition to uncertainty sampling, we benchmark the $\text{X}_{\text{AI}}$ \ using integrated uncertainty sampling (IU), a policy that reduces the total variance over a set of potential measurement locations $\bold Z$. In particular, IU is defined by \begin{equation} \label{eq:ius} \arg \min_{\bold x^*} \sum_{\bold z \in \bold Z} \text{var}(f | \bold X, \bold y, \bold x^*)(\bold z), \end{equation} where $\bold X$ is the set of inputs and $\bold y$ is the set of outputs of the model. 
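The composite kernel built over the last two paragraphs can be assembled in a few lines. The sketch below is a simplified stand-in: the prior function $f_{\text{rgb}}$, stripe center, and all parameters are placeholders, and the symmetrization uses the reflected form $k(x, 2c-y)$ around the center $c$:

```python
import numpy as np

def matern52(x, y, l=1.0):
    # Matern 5/2 kernel between two 1-D location arrays.
    d = np.abs(x[:, None] - y[None, :]) / l
    return (1 + np.sqrt(5) * d + 5 * d**2 / 3) * np.exp(-np.sqrt(5) * d)

def k_xai(x, y, f_rgb, c=0.0, b=0.1, l=1.0):
    # 1) lateral symmetry around the stripe center c (reflected form),
    # 2) variance scaling by the RGB/LSA prior f_rgb,
    # 3) a linear kernel for asymptotically linear (thickness) drift.
    k_sym = matern52(x, y, l) + matern52(x, 2 * c - y, l)
    scaled = f_rgb(x)[:, None] * k_sym * f_rgb(y)[None, :]
    k_line = np.outer(x, y) + b
    return scaled + k_line

f_rgb = lambda z: np.exp(-z**2)       # placeholder prior function
x = np.linspace(-1, 1, 5)
K = k_xai(x, x, f_rgb, c=0.0)
```

Each summand is a valid (positive semi-definite) kernel, so the Gram matrix remains a valid covariance; per-coefficient scaling by $a_i$ then just multiplies this matrix.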
Note that we can calculate the quantity $\text{var}(f | \bold X, \bold y, \bold x^*)$ because the standard posterior GP variance only depends on the measurement location, not the value $y^*$. Lastly, we note that the derivative of a GP is also a GP \cite{solak2002derivative}. Plugging the derivative GP into Eq.~\eqref{eq:ius} yields integrated gradient uncertainty sampling (IGU), which achieves the best performance in the $\text{X}_{\text{AI}}$ \ acquisition benchmark (see Fig.~\ref{fig:iAI}). \subsubsection{$\Upsigma_{\text{AI}}$} \label{sec:eAImethods} The $\Upsigma_{\text{AI}}$ \ works to identify phase regions and their boundaries in the temperature--dwell-time space, and more generally, the processing-composition space. Importantly, the raw reflectance data cannot be used directly for this task for two reasons. First, the data are measured as a function of position, not temperature. Therefore, we convert the stripe-specific reflectance function $r_{T_p, \tau}(x, \lambda)$ to the temperature domain using the temperature profile $T_{T_p, \tau}$, yielding $r_{T_p, \tau}(T, \lambda)$. Second, the reflectance varies not just with the phase behavior, but also with the film thickness across the wafer. For this reason, we calculate the $L_2$-norm of the \textit{rate of change} of the spectroscopic reflectance, which is invariant to linear thickness variations of the film. In particular, for all $T < T_p$ we want to infer \begin{equation} \label{eq:d} d(T, \tau) := \sqrt{\int \left( \frac{\partial r_{T_p, \tau}(T, \lambda)}{\partial T} \right)^2 \text{d} \lambda}. \end{equation} $d$ quantifies how much the spectroscopic reflectance changes as a function of temperature and dwell time and is a strong indicator of phase changes~\cite{sutherland_optical_2020}. Estimating the phase boundaries then reduces to getting an accurate estimate of $d$ over all $(T, \tau)$ (and potentially composition $c$). This is the goal of the $\Upsigma_{\text{AI}}$ \ loop.
Crucially, experimental errors can occur in $x$, and therefore in $T$, making it imperative to quantify the uncertainty due to these input errors and propagate them to the $\Upsigma_{\text{AI}}$. Indeed, our benchmarks show that ignoring these uncertainties leads to a significant deterioration in active learning performance (see Figure \ref{fig:eAI}(c)). To this end, we now discuss the intrinsic experimental uncertainties due to the temperature profile $T_{T_p, \tau}(x)$ of the laser. In particular, we compute the variance of the true temperature around the value predicted by the temperature profile as a function of position by \begin{equation} \label{eq:temperature_uncertainty} \sigma_T^2(x) = \sigma^2_{T_p} \left( \frac{T_p}{1400}\right)^2 + \sigma_x^2 \ \left(\frac{\partial T_{T_p, \tau}(x)}{\partial x}\right)^2 \end{equation} where $\sigma_{T_p}$ is the standard error in the peak temperature and $\sigma_x$ is the standard error in the position. The first term quantifies the error at the peak temperature which is largest at high temperatures ($1400^{\circ}$C) and falls off linearly with $T$. The second term quantifies uncertainties of the temperature profile, which are not just due to limited spatial resolution, but also encompass random asymmetries in the profile of the laser. The form of this term is derived using standard error propagation techniques \cite{tellinghuisen2001statistical}. For the results reported herein, $\sigma_{T_p} = 25^{\circ} \text{C}$ and $\sigma_x = 50~\mu\text{m}$. The expression for the temperature uncertainty in Eq.~\eqref{eq:temperature_uncertainty} is then used in conjunction with Eq.~\eqref{eq:nigp} to compute a GP that comprehensively quantifies the uncertainties in the Legendre coefficients of the optical reflectance as a function of temperature.
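Eq.~\eqref{eq:temperature_uncertainty} can be evaluated directly once a temperature profile is available; the sketch below uses a placeholder Gaussian profile (the real lg-LSA profile and its width are not reproduced here) and a finite-difference slope:

```python
import numpy as np

def sigma_T(x, T_profile, T_p, sigma_Tp=25.0, sigma_x=50e-6, dx=1e-6):
    # Eq. (temperature_uncertainty): peak-temperature error scaled by
    # T_p / 1400, plus the position error propagated through dT/dx.
    dT_dx = (T_profile(x + dx) - T_profile(x - dx)) / (2 * dx)
    var = sigma_Tp**2 * (T_p / 1400.0)**2 + sigma_x**2 * dT_dx**2
    return np.sqrt(var)

# Placeholder Gaussian temperature profile (peak 1000 C, 160 um width).
T_p = 1000.0
profile = lambda x: T_p * np.exp(-x**2 / (2 * (160e-6)**2))
x = np.array([0.0, 160e-6, 320e-6])
sig = sigma_T(x, profile, T_p)
```

At the stripe center the slope vanishes and only the peak-temperature term survives; on the flanks the slope term dominates, so the temperature uncertainty is strongly position dependent, exactly what the input-dependent $\sigma_x(\bold X)$ in Eq.~\eqref{eq:nigp} is designed to capture.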
To compute $d$ in Eq.~\eqref{eq:d}, one simply sums the squared derivatives of the GPs of the Legendre coefficients of the reflectance: \begin{equation} \label{eq:dfromgp} d(T, \tau) = \sqrt{\sum_i (\partial_T \mu^{(i)}_p(T))^2}. \end{equation} Since we have access to the uncertainties in $\mu_p^{(i)}$ from the GP, we can use uncertainty propagation techniques on Eq.~\eqref{eq:dfromgp} to calculate a first-order uncertainty estimate of $d(T, \tau)$. For the outer loop, we used a two-dimensional Mat\'{e}rn 5/2 kernel with different length scales across each dimension. This allows the GP to learn independent sensitivity parameters of the experiment for the input dimensions. Note that the $\Upsigma_{\text{AI}}$ \ benchmarks in Fig. \ref{fig:eAI} (c) were carried out with fixed length scales to disentangle the effects of different acquisition functions and hyper-parameter learning. For the $\Upsigma_{\text{AI}}$, we designed an acquisition strategy that incorporates the property that a single stripe generates data throughout a range of temperatures. In particular, given experimental conditions $\bold x_s$ that give rise to a stripe ($T_p$, $\tau$, etc.), we sum the uncertainties of all relevant observations $\bold x_i$ that are in the set Stripe$(\bold x_s)$ of conditions on the stripe $\bold x_s$. Specifically, we propose {\it stripe uncertainty sampling}: \begin{equation} \label{eq:stripe_uncertainty} \arg \max_{\bold x_s} \sum_{\bold x_i \in \text{Stripe}(\bold x_s)} \text{var}(f | \bold X, \bold y)(\bold x_i). \end{equation} Notably, one can use the same principle to generalize other acquisition functions. In fact, we investigated a stripe upper-confidence bound sampling policy. However, it performed no better than the simpler stripe uncertainty sampling policy above. The synergy of the comprehensive uncertainty quantification and the stripe sampling function yields considerable benefits, as displayed in Figure \ref{fig:eAI} (c).
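Stripe uncertainty sampling, Eq.~\eqref{eq:stripe_uncertainty}, scores whole stripes rather than single points. The sketch below is a simplified stand-in (isotropic RBF instead of the anisotropic Mat\'ern 5/2, crude normalization of $(T, \log\tau)$, and illustrative stripe discretization):

```python
import numpy as np

def rbf(A, B, l=1.0):
    # Isotropic RBF on rows of A, B (stand-in for the anisotropic Matern).
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-0.5 * d2 / l**2)

def posterior_var(X, Z, sigma_y=0.1):
    K = rbf(X, X) + sigma_y**2 * np.eye(len(X))
    Kz = rbf(Z, X)
    return 1.0 - np.einsum('ij,ji->i', Kz, np.linalg.solve(K, Kz.T))

def stripe_uncertainty_sampling(X, candidate_stripes):
    # Eq. (stripe_uncertainty): score each stripe by the summed posterior
    # variance of the (T, tau) points it would measure; pick the maximum.
    scores = [posterior_var(X, s).sum() for s in candidate_stripes]
    return int(np.argmax(scores))

def stripe(T_p, tau, n=8):
    # A stripe at peak temperature T_p yields points at all T <= T_p.
    T = np.linspace(400.0, T_p, n) / 1300.0       # crude normalization
    return np.stack([T, np.full(n, np.log10(tau))], axis=1)

X = stripe(800.0, 1e-3)                           # already measured
cands = [stripe(800.0, 1e-3), stripe(1300.0, 1e-2)]
best = stripe_uncertainty_sampling(X, cands)
```

The already-measured stripe contributes almost no residual variance, so the policy selects the unexplored condition, which matches the intuition that each laser scan buys information over its full temperature range at once.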
\subsubsection{Error Metrics} \label{sec:metrics} In our benchmarks of the kernels and acquisition functions for the inner loop, we used the coefficient of determination $R^2$ to measure performance, defined by \begin{equation} R^2 = 1 - \frac{\sum_i (f(x_i) - y_i)^2} {\sum_i (\mu(\bold y) - y_i)^2}, \end{equation} where $\mu(\bold y)$ is the mean of the data $\bold y$. The advantage of using $R^2$ over other canonical measures like the mean-squared error is that it is dimensionless and easily interpretable as the proportion of the variance of the data that is explained by the model $f$. As $R^2$ weighs the deviation at every data point equally, it is not an ideal measure for heteroscedastic data, like the optical gradient data of $\Upsigma_{\text{AI}}$. For this reason, we use a generalization of $R^2$, based on the log-likelihood of the heteroscedastic normal errors, to measure performance in the $\Upsigma_{\text{AI}}$ \ benchmarks. In particular, the measure is given by \begin{equation} R^2_s = 1 - \frac{\sum_i (f(x_i) - y_i)^2 / \sigma_i^2} {\sum_i (\mu(\bold y) - y_i)^2 / \sigma_i^2}, \end{equation} where $\sigma_i$ is the standard deviation of the $i^{\text{th}}$ error. For SARA, the $\sigma_i$ are the result of the comprehensive uncertainty quantification of the experimental process. Clearly, $R^2_s$ reduces to $R^2$ if the noise variances are all equal. If they are not, $R^2_s$ is a better measure of misfit, as it weighs the residuals of more certain data points more strongly than those of more uncertain ones. Notably, similar pseudo $R^2$ scores based on log-likelihoods are used throughout statistics and applied fields \cite{hu2006pseudo, smith2013comparison, hemmert2018log}.
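Both metrics are one-liners; the toy data below (our own illustrative numbers) show how down-weighting an uncertain point changes the score:

```python
import numpy as np

def r2(y_pred, y):
    # Standard coefficient of determination.
    ss_res = np.sum((y_pred - y)**2)
    ss_tot = np.sum((np.mean(y) - y)**2)
    return 1.0 - ss_res / ss_tot

def r2_weighted(y_pred, y, sigma):
    # Heteroscedastic generalization: weigh residuals by 1 / sigma_i^2.
    w = 1.0 / sigma**2
    ss_res = np.sum(w * (y_pred - y)**2)
    ss_tot = np.sum(w * (np.mean(y) - y)**2)
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.1, 1.9, 3.0, 4.2])
sigma = np.array([0.1, 0.1, 0.1, 1.0])    # last point is very uncertain
```

With equal $\sigma_i$ the two scores coincide; with the last (largest) residual down-weighted, $R^2_s$ exceeds $R^2$, reflecting that the misfit occurs where the data are least trustworthy.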
This work is based upon research conducted at the Materials Solutions Network at CHESS (MSN-C) which is supported by the Air Force Research Laboratory under award FA8650-19-2-5220, and the National Science Foundation Expeditions under award CCF-1522054. This work was also performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the National Science Foundation (Grant NNCI-2025233). MA acknowledges support from the Swiss National Science Foundation (project P4P4P2-180669). This research was conducted with support from the Cornell University Center for Advanced Computing. \section{Author contributions} RBvD, CPG, MOT, and JMG conceived and supervised the research. SA and MA developed and implemented the SARA algorithms, and contributed equally to this work. MA and SA took the lead in writing the manuscript. DRS fabricated the \ce{Bi2O3} thin film samples and collected and analyzed the optical microscopy and reflectance data. ABC performed the lg-LSA experiments. DG and JMG processed the XRD data, and DRS and MCC helped analyze the results. All authors provided critical feedback and helped shape the research, analysis, and manuscript.
Q: Unable to open app from unidentified developer I'm trying to install VMD on my MacBook. My operating system is macOS Mojave 10.14.5. When I try to install the software, I receive an "unidentified developer" warning message. But when I go to System Preferences, I can't find any option to override it. I was wondering why this is the case and how to fix it.
§ Their current chapter size is 42. § The conversion rate from visitors to members is 42.86%. § They achieved Gold Chapter status in 2012. Our #1 goal as a chapter is to grow our membership. The more we grow as a chapter, the more potential we have to grow our business as well as helping new members grow their business. The key to our chapter's success is our loyalty to our members. We see the power in the philosophy of givers gain. Our chapter has a lot of fun with meeting stimulants; our favorite has to be the one where we pull out a business card and give the 45-second training moment for that member. Our chapter has lots of FUN by gathering for social events and participating in community activities as a group.
module Main where

import qualified SignExtService_Client as Client
import SignExt_Types
import SignExtService

import Thrift
import Thrift.Protocol.Binary
import Thrift.Server
import Thrift.Transport
import Thrift.Transport.Handle

import Control.Exception
import Data.Either
import Data.Int
import Data.List
import Data.Maybe
import System.Environment
import Data.Time
import Data.Text.Lazy
import Data.Vector
import Network
import System.Exit
import System.Random
import Text.Printf
import Control.Concurrent
import Numeric (showHex, showIntAtBase)

-- Package size.
package_size = 3

-- Buffer size.
buffer_size = 4096 * 512

-- Buffer alignment.
buffer_alignment = 4096

-- Thread sleep time.
thread_sleep_time = 1

-- Size of int in bits.
size_of_int = 32

getRight :: Either left right -> right
getRight (Right x) = x

-- Stream read function.
stream_read x to_cpu y frame_address fsz_address = do
    e <- try (Client.max_framed_stream_read x to_cpu y frame_address fsz_address) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let ret = getRight e
    if (fromIntegral ret) == 0
        then do
            threadDelay thread_sleep_time
            (stream_read x to_cpu y frame_address fsz_address)
        else return (fromIntegral ret)

-- Process data function.
process_data x fsz_address frame_address to_cpu numRx = do
    fsz_array <- Client.receive_data_int64_t x fsz_address (fromIntegral 1)
    let fsz = Data.Vector.head fsz_array
    putStrLn ("CPU: Got output frame " Data.List.++ (show numRx) Data.List.++ " - " Data.List.++ (show fsz) Data.List.++ " bytes")
    frame_array <- Client.receive_data_int64_t x frame_address (fromIntegral 1)
    let frame = Data.Vector.head frame_array
    word_array <- Client.receive_data_int64_t x frame package_size
    returnVal <- process_words word_array package_size 0 numRx
    discardVal <- Client.max_framed_stream_discard x to_cpu (fromIntegral 1)
    if (returnVal) == 0
        then return 0
        else do
            stream_read x to_cpu (fromIntegral 1) frame_address fsz_address
            process_data x fsz_address frame_address to_cpu (numRx + 1)

-- Process words function.
process_words word_array 0 i numRx = do
    if (word_array ! 0) == 0 && (word_array ! 1) == 0 && (word_array ! 2) == 0
        then return 0
        else return 1
process_words word_array package_size i numRx = do
    let word = (word_array ! i)
    putStr ("FRAME[" Data.List.++ (show numRx) Data.List.++ "] WORD[" Data.List.++ (show i) Data.List.++ "]: ")
    putStrLn $ printf "0x%08x" word
    process_words word_array (package_size - 1) (i + 1) numRx

main = do
    startTime <- getCurrentTime
    startDFETime <- getCurrentTime
    args <- getArgs
    case (Data.List.length args) of
        2 -> return ()
        otherwise -> do
            putStrLn ("Usage: dfe_ip remote_ip")
            exitWith $ ExitFailure (-1)
    let dfe_ip = (args !! 0)
    let remote_ip = (args !! 1)

    -- Make socket
    transport <- hOpen ("localhost", PortNumber 9090)
    -- Wrap in a protocol
    let protocol = BinaryProtocol transport
    -- Create a client to use the protocol encoder
    let client = (protocol, protocol)
    stopTime <- getCurrentTime
    putStrLn ("Creating a client and opening connection:\t" Data.List.++ (show (diffUTCTime stopTime startTime)))

    -- Allocate DFE IP address
    e <- try (Client.malloc_int64_t (protocol, protocol) (fromIntegral 5)) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let dfe_ip_address = getRight e
    e <- try (Client.inet_aton client (pack dfe_ip) dfe_ip_address) :: IO (Either SomeException Int32)
    case e of
        Left ex -> putStrLn $ "Caught exception inet_aton: " Data.List.++ show ex
        Right ex -> return ()
    let dfe_ip_address_aligned = getRight e

    -- Allocate remote IP address
    e <- try (Client.malloc_int64_t client (fromIntegral 5)) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let remote_ip_address = getRight e
    Client.inet_aton client (pack remote_ip) remote_ip_address

    -- Allocate netmask address
    startTime <- getCurrentTime
    e <- try (Client.malloc_int64_t client (fromIntegral 5)) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let netmask_address = getRight e
    Client.inet_aton client (pack "255.255.255.0") netmask_address

    -- Initialize maxfile
    startTime <- getCurrentTime
    e <- try (Client.signExt_init client) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let maxfile = getRight e
    stopTime <- getCurrentTime
    putStrLn ("Initializing maxfile:\t\t\t\t" Data.List.++ (show (diffUTCTime stopTime startTime)))

    -- Load DFE
    startTime <- getCurrentTime
    e <- try (Client.max_load client maxfile (pack "*")) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let engine = getRight e
    stopTime <- getCurrentTime
    putStrLn ("Loading DFE:\t\t\t\t\t" Data.List.++ (show (diffUTCTime stopTime startTime)))

    -- Set Enum
    let enumKey = Max_config_key_bool_t_struct (Just MAX_CONFIG_PRINTF_TO_STDOUT)
    e <- try (Client.max_config_set_bool client (enumKey) (1)) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()

    -- Set actions
    e <- try (Client.max_actions_init client maxfile (pack "default")) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let actions = getRight e

    -- Run actions
    e <- try (Client.max_run client engine actions) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()

    -- Free actions
    e <- try (Client.max_actions_free client actions) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()

    -- Allocate buffer address
    startTime <- getCurrentTime
    e <- try (Client.malloc_int64_t client (fromIntegral 1)) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let buffer_address = getRight e
    e <- try (Client.posix_memalign client buffer_address buffer_alignment buffer_size) :: IO (Either SomeException Int32)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let buffer_address_aligned = getRight e

    -- Read buffer
    buffer_array <- Client.receive_data_int64_t client (fromIntegral buffer_address) (fromIntegral 1)
    let buffer = Data.Vector.head buffer_array

    -- Framed stream setup
    to_cpu <- Client.max_framed_stream_setup client engine (pack "toCPU") buffer buffer_size (fromIntegral (-1))

    -- Max net connection
    let enumconn = Max_net_connection_t_struct (Just MAX_NET_CONNECTION_QSFP_TOP_10G_PORT1)

    -- IP config
    e <- try (Client.max_ip_config client engine enumconn (fromIntegral dfe_ip_address) netmask_address) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()

    -- UDP create socket
    e <- try (Client.max_udp_create_socket client engine (pack "udpTopPort1")) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let dfe_socket = getRight e

    -- Socket bind
    let port = 2000
    e <- try (Client.max_udp_bind client dfe_socket port) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()

    -- UDP connect
    e <- try (Client.max_udp_connect client dfe_socket remote_ip_address (fromIntegral 0)) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    putStrLn ("Listening on: " Data.List.++ dfe_ip Data.List.++ " port " Data.List.++ (show port))
    putStrLn ("Waiting for kernel response...")

    -- Allocate memory for frame address
    e <- try (Client.malloc_int64_t client (fromIntegral 1)) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let frame_address = getRight e

    -- Allocate memory for fsz address
    e <- try (Client.malloc_int64_t client (fromIntegral 1)) :: IO (Either SomeException Int64)
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    let fsz_address = getRight e

    -- Main loop, wait for packages
    let numMessageRx = 0
    framed_return <- stream_read client to_cpu 1 frame_address fsz_address
    val <- process_data client fsz_address frame_address to_cpu numMessageRx

    -- Close sockets
    e <- try (Client.max_udp_close client dfe_socket) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    e <- try (Client.max_framed_stream_release client to_cpu) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()

    -- Unload DFE
    startTime <- getCurrentTime
    e <- try (Client.max_unload client engine) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    stopTime <- getCurrentTime
    putStrLn ("Unloading DFE:\t\t\t\t\t" Data.List.++ (show (diffUTCTime stopTime startTime)))
    e <- try (Client.max_file_free client maxfile) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()

    -- Free allocated memory for streams on server
    startTime <- getCurrentTime
    e <- try (Client.free client (fromIntegral dfe_ip_address)) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    e <- try (Client.free client remote_ip_address) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    e <- try (Client.free client netmask_address) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    e <- try (Client.free client buffer_address) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    e <- try (Client.free client (fromIntegral buffer_address_aligned)) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()
    stopTime <- getCurrentTime
    putStrLn ("Freeing allocated memory for streams on server:\t" Data.List.++ (show (diffUTCTime stopTime startTime)))

    -- Free allocated maxfile data
    startTime <- getCurrentTime
    e <- try (Client.signExt_free client) :: IO (Either SomeException ())
    case e of
        Left ex -> putStrLn $ "Caught exception: " Data.List.++ show ex
        Right ex -> return ()

    -- Close!
    startTime <- getCurrentTime
    tClose transport
    stopTime <- getCurrentTime
    putStrLn ("Closing connection:\t\t\t\t" Data.List.++ (show (diffUTCTime stopTime startTime)))
\section{Introduction} This paper is based on the talk given by the author at the Arbeitstagung 2017 ``Physical Mathematics'' in honor of Yuri Manin's 80th birthday. It is an introduction to an ongoing joint project with Matthew Heydeman, Sarthak Parikh and Ingmar Saberi, on the construction of holographic classical and quantum codes on Bruhat--Tits trees and higher rank Bruhat--Tits buildings and on Drinfeld symmetric spaces, and associated entanglement entropy formulae. A discussion of the entanglement entropy and the relation to other holographic code constructions, such as \cite{HaPPY}, will be presented in a forthcoming joint paper in preparation. The present paper should be regarded as covering some background material on the question of constructing holographic codes on $p$-adic symmetric spaces, based on algebro-geometric properties. \smallskip In \cite{Manin1}, \cite{Manin2}, Manin gave a compelling view of the idea of ``Arithmetical Physics'', according to which physics in the usual Archimedean setting of real and complex numbers would cast non-Archimedean shadows that live over the finite primes, and arithmetic properties associated to these non-Archimedean models can be used to better understand the physics that we experience at the Archimedean ``prime at infinity''. According to this general philosophy, ${\rm Spec}({\mathbb Z})$ is the ``arithmetic coordinate'' of physics and geometry. A famous example where this principle manifests itself is given by the description of the Polyakov measure for the bosonic string in terms of the Faltings height function at algebraic points of the moduli space of curves, which leads naturally to the question of whether the Polyakov measure is in fact an adelic object and whether there is an overall arithmetic expression for the string partition function, \cite{Manin3}, \cite{Manin4}. More generally, one can ask to what extent the fundamental laws of physics are adelic.
Does physics in the Archimedean setting (partition functions, action functionals, real and complex variables) have $p$-adic manifestations? Can these be used to provide convenient ``discretized models'' of physics, powerful enough to determine their Archimedean counterpart? \smallskip Various forms of $p$-adic and adelic phenomena in physics and their relation to the usual Archimedean formulation were developed over the years. We refer the readers to \cite{CMZ}, \cite{DKKV}, \cite{VVZ}, \cite{Zab} for some references relevant to the point of view discussed in this paper. \smallskip Here we focus in particular on the holographic AdS/CFT correspondence and on the recent viewpoint relating information (entanglement entropy) of quantum states on the boundary to geometry (classical gravity) in the bulk, \cite{NRT}, and the tensor networks and holographic codes approach of \cite{HaPPY}. The existence of a $p$-adic version of the holographic AdS/CFT correspondence was already proposed in \cite{ManMar}, based on earlier results of Manin \cite{Manin5}, \cite{Manin6} relating the Green function on a compact Riemann surface with Schottky uniformization to configurations of geodesics in the bulk hyperbolic handlebody (which are higher genus generalizations of Euclidean BTZ black holes \cite{Krasnov}), and results of Drinfeld and Manin \cite{ManDri} on periods of Mumford curves uniformized by $p$-adic Schottky groups. \smallskip In \cite{HMSS} we developed a non-Archimedean version of AdS/CFT holography, based on the approach originally proposed in \cite{ManMar}, which would be compatible with the more recent viewpoint on the holographic correspondence based on the ideas of tensor networks and holographic codes and the correspondence between entanglement entropy and bulk geometry. Versions of the $p$-adic AdS/CFT correspondence were also developed in \cite{GKPSW}, and in subsequent work \cite{BHLL}, \cite{GuPa}, \cite{GHJPSST}, \cite{GHJMPSST}, \cite{GJPT} and others. 
The theme of non-Archimedean versions of holography has clearly become a very active area of current research. \smallskip In this paper, we return to the point of view of tensor networks and holographic codes discussed in \cite{HMSS} and we present some new constructions which are based on the geometry of Bruhat--Tits trees and buildings and of Drinfeld symmetric spaces. \smallskip The main difference between the approach we propose here and other constructions of holographic codes such as \cite{HaPPY}, or for instance \cite{BHLL}, \cite{BreRi}, \cite{HNQTWY}, lies in the fact that we rely on well-known techniques for the construction of classical codes associated to algebro-geometric objects \cite{TsfaVla} and on algorithms relating classical to quantum codes \cite{CRSS}. The construction of algebro-geometric codes played a crucial role in the study of asymptotic problems in coding theory, as shown by Manin in \cite{Manin7}. \smallskip We first present here a construction of holographic codes that is based on the geometry of the Bruhat--Tits trees and algebro-geometric Reed--Solomon codes associated to projective lines over a finite field, together with an application of the CRSS algorithm that associates quantum codes to classical $q$-ary codes. \smallskip We then revisit the approach to holographic codes via tessellations of the hyperbolic plane, as in \cite{HaPPY}. Instead of relating such constructions to the Bruhat--Tits trees via a non-canonical planar embedding of the tree, as in \cite{HMSS}, we use here a purely $p$-adic viewpoint, working with the Drinfeld $p$-adic upper half plane as a replacement of the real hyperbolic plane, and its (canonical) map to the Bruhat--Tits tree. Instead of tessellations of the real hyperbolic plane we use actions of $p$-adic Fuchsian groups on the Drinfeld plane and associated surface codes. We show that this approach is restricted by the strong constraints that exist on $p$-adic Fuchsian groups. 
For example, we show that a $p$-adic analog of the holographic pentagon code of \cite{HaPPY} constructed with this method can only exist when $p=2$. \smallskip We then propose an extension of this approach via holographic codes to higher rank buildings, based on algebro-geometric codes associated to higher dimensional algebraic varieties, as constructed in \cite{TsfaVla}. \medskip \section{Algebro-Geometric Codes on the Bruhat--Tits tree}\label{BTcodes} In this section we describe a construction of holographic codes on the Bruhat--Tits trees that are obtained via Reed--Solomon algebro-geometric codes on projective lines over finite fields. \subsection{Reed--Solomon codes and classical codes on the Bruhat--Tits tree} The set of algebraic points $X({\mathbb F}_q)$ of a curve $X$ over a finite field ${\mathbb F}_q$ can be used to construct algebro-geometric error-correcting codes, see \cite{TsfaVlaNo}. An algebro-geometric code associated to a curve $X$ over a finite field ${\mathbb F}_q$ consists of a choice of a set $A$ of algebraic points $A\subset X({\mathbb F}_q)$ and a divisor $D$ on $X$ with support disjoint from $A$. The linear code $C=C_X(A,D)$ is obtained by considering rational functions $f\in {\mathbb F}_q(X)$ with poles only at $D$ and evaluating them at the points of $A$. A bound on the order of pole of $f$ at $D$ determines the dimension of the linear code. \smallskip We are interested here in the simplest case of algebro-geometric codes, the Reed--Solomon codes constructed using the points of $\P^1({\mathbb F}_q)$. Given a set of points $A \subset \P^1({\mathbb F}_q)$ with $\# A =n \leq q+1$ we consider two types of Reed--Solomon codes: one constructed using the point $\infty \in \P^1({\mathbb F}_q)$ as divisor, that is, using polynomials $f\in {\mathbb F}_q[x]$, and using a set $A$ of $n\leq q$ points in ${\mathbb A}^1({\mathbb F}_q)={\mathbb F}_q$ for evaluation. 
The corresponding Reed--Solomon code $C=\{ (f(x_1),\cdots, f(x_n))\,:\, f\in {\mathbb F}_q[x], \, \deg(f)< k \}$ gives an $[n,k,n-k+1]_q$ classical code, where $n\leq q$. The other type of Reed--Solomon codes are obtained using homogeneous polynomials and a set $A$ of $n\leq q+1$ points in $\P^1({\mathbb F}_q)$. The resulting code is $\hat C=\{ (f(u_1,v_1),\ldots,f(u_n,v_n))\,:\, f\in {\mathbb F}_q[u,v], \, \text{homogeneous with } \deg(f)<k\}$, with $x_i=(u_i:v_i)\in \P^1({\mathbb F}_q)$. We also consider generalized Reed--Solomon codes of these two types, where for a vector $w=(w_1,\ldots,w_n)\in {\mathbb F}_q^n$ one defines $$ C_{w,k}=\{ (w_1 f(x_1),\cdots, w_n f(x_n))\,:\, f\in {\mathbb F}_q[x], \, \deg(f)< k \} $$ $$ \hat C_{w,k}=\{ (w_1 f(u_1,v_1),\ldots, w_n f(u_n,v_n))\,:\, f\in {\mathbb F}_q[u,v], \, \text{homogeneous}, \, \deg(f)<k\}. $$ \smallskip For ${\mathbb K}$ a finite extension of ${\mathbb Q}_p$ with residue field ${\mathbb F}_q$, with $q=p^r$, the Bruhat--Tits tree ${\mathcal T}_{\mathbb K}$ is a homogeneous tree with valence $q+1=\# \P^1({\mathbb F}_q)$ and with ends $\partial {\mathcal T}_{\mathbb K} =\P^1({\mathbb K})$. The choice of a projective coordinate on $\P^1({\mathbb K})$ fixes three points $\{0,1,\infty\} \in \P^1({\mathbb K})$, hence it fixes a unique root vertex $\nu_0\in V({\mathcal T}_{\mathbb K})$. The star of vertices surrounding $\nu_0$ can then be identified with a copy of $\P^1({\mathbb F}_q)$, which in algebro-geometric terms corresponds to the reduction modulo the maximal ideal ${\mathfrak m}$ in ${\mathcal O}_{\mathbb K}$. \smallskip The root vertex $\nu_0$ is therefore associated to the reduction curve $\P^1$. We can construct a holographic {\em classical code} on the Bruhat--Tits tree by assigning to the root vertex $\nu_0$ and its star of $q+1$ edges a Reed--Solomon code with an assigned number $k$ of logical inputs ($q$-ary bits) located at $\nu_0$ and outputs at each of the $q+1$ legs. 
This can be done by a (generalized) Reed--Solomon code $\hat C_{w,k}$ of maximal length $n=q+1$, seen as an encoding $\hat C_{w,k} : {\mathbb F}_q^k \to {\mathbb F}_q^{q+1}$, which inputs a $k$-tuple of $q$-ary bits $a=(a_0,\ldots, a_{k-1})\in {\mathbb F}_q^k$, uses the homogeneous polynomial $f_a(u,v)=\sum_{i=0}^{k-1} a_i u^i v^{k-1-i}$, and outputs a $q$-ary bit $f_a(u_j,v_j)\in {\mathbb F}_q$ at each point $x_j=(u_j:v_j)\in \P^1({\mathbb F}_q)$ identified with a leg of the vertex $\nu_0$ in the Bruhat--Tits tree. \smallskip The choice of the projective coordinate on $\P^1({\mathbb K})$, hence of the root vertex $\nu_0$ in ${\mathcal T}_{\mathbb K}$, determines a choice of a leg at each other vertex $\nu\neq \nu_0$, given by the unique direction out of $\nu$ towards the root $\nu_0$. We can identify this choice with a choice of the point $\{ \infty \}$ in each copy of $\P^1({\mathbb F}_q)$ at each vertex $\nu\neq \nu_0$ of the tree. Proceeding from the center, if we assign a Reed--Solomon code at each vertex of ${\mathcal T}_{\mathbb K}$, and by homogeneity we expect all of them to have the same number $k$ of inputs, we see that at each successive step one leg of the star of edges at $\nu$ already has a value assigned, namely the leg labelled by the point $\infty\in \P^1({\mathbb F}_q)$, which corresponds to the output coming from the matching leg in the star of the previous vertex coming from the root $\nu_0$. Thus, in projective coordinates $(u:v)$ where $(0:1)$ is the point at infinity, the Reed--Solomon code $\hat C_{w,k}$ associated to the vertex $\nu$ takes $k-1$ new inputs $a=(a_1,\ldots,a_{k-1})\in {\mathbb F}_q^{k-1}$ and one additional input $a_0$ given by the value at $\infty$ assigned by the previous code, and deposits a new $q$-ary bit $f_a(u,v)=\sum_{i=0}^{k-1} a_i u^i v^{k-1-i}$ at each of the remaining legs at the vertex $\nu$ pointing away from the root, labelled by the points $x\in {\mathbb A}^1({\mathbb F}_q)={\mathbb F}_q$. 
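As a concrete sanity check of the $[n,k,n-k+1]_q$ parameters of these evaluation codes, the following minimal Python sketch encodes messages by evaluating polynomials of degree $<k$ at points of ${\mathbb A}^1({\mathbb F}_q)$, restricted to prime $q$ so that field arithmetic is just integers mod $q$ (the function names `rs_encode` and `min_distance` are ours, not from any library):

```python
from itertools import product

def rs_encode(msg, points, q):
    # evaluate f(x) = sum_i msg[i] * x^i, with deg f < k = len(msg), at each point of F_q
    return [sum(a * pow(x, i, q) for i, a in enumerate(msg)) % q for x in points]

def min_distance(q, k, points):
    # brute-force minimum Hamming weight over all nonzero messages
    best = len(points)
    for msg in product(range(q), repeat=k):
        if any(msg):
            best = min(best, sum(1 for c in rs_encode(msg, points, q) if c != 0))
    return best

q, k = 5, 2
points = list(range(q))                         # all points of A^1(F_5)
assert min_distance(q, k, points) == q - k + 1  # d = n - k + 1 = 4, the Singleton bound
```

The distance $n-k+1$ reflects the fact that a nonzero polynomial of degree $<k$ has at most $k-1$ roots, so a nonzero codeword has at most $k-1$ zero coordinates.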
\smallskip Note how the construction considered here has one root vertex playing a special role, with an ${\mathbb F}_q^k$ logical input and a Reed--Solomon code of length $q+1$, while all the other vertices have a further logical input of ${\mathbb F}_q^{k-1}$. This asymmetry is inevitable if we want to use the algebro-geometric structure underlying the Bruhat--Tits tree to construct a classical code, since the root vertex plays the special role of the algebraic curve given by the reduction modulo ${\mathfrak m}$, while the sets of vertices in the tree at distance $m$ from the root correspond to reducing modulo powers ${\mathfrak m}^m$. Thus, the asymmetric role of the root vertex and the other vertices is built into the relation between $\P^1({\mathbb K})$ and its reduction curves. \smallskip The construction described here determines a classical code associated to the Bruhat--Tits tree, with logical inputs at the vertices and outputs at the forward pointing legs. In the limit where one considers the whole tree, the outputs consist of a $q$-ary bit deposited at each point of the boundary $\P^1({\mathbb K})$. We want to transform this classical code, built using the algebro-geometric properties of the Bruhat--Tits tree, into a quantum error correcting code that generates a holographic code for the Bruhat--Tits tree and its boundary at infinity. \smallskip \subsection{Classical Algebro-Geometric Codes for Mumford Curves} The construction above can be generalized to the case of Mumford curves. Let $\Gamma$ be a $p$-adic Schottky group and $\Omega_\Gamma=\P^1({\mathbb K})\smallsetminus \Lambda_\Gamma$ the domain of discontinuity of $\Gamma$ acting on the boundary $\P^1({\mathbb K})$, the complement of the limit set $\Lambda_\Gamma$. The quotient $X=\Omega_\Gamma/\Gamma$ is a Mumford curve of genus $g$ equal to the number of generators of the Schottky group. 
Unlike complex Riemann surfaces, which always admit a Schottky uniformization, only very special $p$-adic curves admit a Mumford curve uniformization. Indeed, these curves must have the property that their reduction mod ${\mathfrak m}$ is totally split: as a curve over ${\mathbb F}_q$ it consists of a collection of $\P^1$'s with incidence relations described by the dual graph $G$. This is the finite graph at the center of the quotient ${\mathcal T}_{\mathbb K} /\Gamma$, obtained as the quotient $G={\mathcal T}_\Gamma/\Gamma$, where ${\mathcal T}_\Gamma$ is the subtree of ${\mathcal T}_{\mathbb K}$ spanned by the geodesic axes of the hyperbolic elements $\gamma\neq 1$ of $\Gamma$, with $\partial{\mathcal T}_\Gamma =\Lambda_\Gamma$. \smallskip Using the identification between the finite graph $G$ and the dual graph of the reduction curve, we can again associate to each vertex in $G$ a copy of $\P^1({\mathbb F}_q)$ (the corresponding component in the curve), and to each of these projective lines a Reed--Solomon code as in the previous construction. Now, however, we need to impose compatibility conditions between these codes at the incidence points between different components of the curve, that is, along the edges of the finite graph $G$. Thus, we associate to the finite graph $G$ a classical code $C(G)$ constructed as follows. Start with a code $\hat C_{w,k}$ associated to each vertex $\nu$, which inputs $a=(a_0,\ldots,a_{k-1})$ and outputs $f_a(u,v)=\sum_i a_i u^i v^{k-1-i}$ at each point $x=(u:v)$ of the associated $\P^1({\mathbb F}_q)$. Consider the set ${\mathcal F}$ of functions $f=(f_1,\ldots, f_N)$, with $N=\# V(G)$, and $f_i$ a homogeneous polynomial of degree $\deg(f_i)<k$ on the $i$-th component $\P^1({\mathbb F}_q)$, with the property that if $x=(u_i:v_i)=(u_j:v_j)$ is an intersection point between the $i$-th and the $j$-th components of the reduction curve, then $f_i(u_i:v_i)=f_j(u_j:v_j)$. 
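For a toy reduction curve with two components glued at a single point, this matching condition can be checked by brute force; each glued point cuts the number of admissible tuples by a factor of $q$, so the glued code has $q^{kN-M}$ codewords. A minimal Python sketch, with prime $q$ and all variable names ours:

```python
from itertools import product

# Toy reduction curve: N = 2 components P^1(F_q), glued at M = 1 intersection point,
# with homogeneous polynomials of degree < k on each component.
q, k, N, M = 3, 2, 2, 1

def ev(coeffs, u, v):
    # f(u, v) = sum_i coeffs[i] * u^i * v^(k-1-i), computed in F_q
    return sum(a * pow(u, i, q) * pow(v, k - 1 - i, q) for i, a in enumerate(coeffs)) % q

p1, p2 = (1, 1), (1, 2)   # coordinates of the glued point on each of the two components
count = sum(1 for f1 in product(range(q), repeat=k)
              for f2 in product(range(q), repeat=k)
              if ev(f1, *p1) == ev(f2, *p2))
assert count == q ** (k * N - M)   # matching cuts q^(kN) pairs down to q^(kN - M) = 27
```

The count works because evaluation at a fixed point of $\P^1({\mathbb F}_q)$ is a surjective linear functional on the coefficient space, so each gluing constraint has kernel of codimension one.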
Thus, each edge $e\in E(G)$ imposes a relation between $f_i$ and $f_j$, which requires the value that the codes $\hat C_{w,k}$ at the vertices $\nu_i$ and $\nu_j$ deposit at the point $x$ to be the same. The resulting code $C(G)$ is an ${\mathbb F}_q$-linear code with input ${\mathbb F}_q^{kN -M}$, where $N=\# V(G)$ and $M=\# E(G)$. We have $kN-M=(k-1)N +1-b_1(G)$, hence we need to assume $k>1+(b_1(G)-1)/N$. \smallskip The free legs of the graph $G$ are all the legs that point towards the infinite trees in ${\mathcal T}_{\mathbb K}/\Gamma$ that extend from the vertices of $G$ to the boundary Mumford curve $X({\mathbb K})=\partial {\mathcal T}_{\mathbb K}/\Gamma$. At each vertex along these trees we consider Reed--Solomon codes as in the case of $\P^1({\mathbb K})$, with one input coming from the previous vertex closer to $G$ and $k-1$ new inputs, and outputs at the $q$ forward pointing legs. This determines a classical code associated to the infinite graph ${\mathcal T}_{\mathbb K}/\Gamma$, with logical inputs at the vertices and outputs at the points of the Mumford curve $X({\mathbb K})$. The finite graph $G$ and the infinite graph ${\mathcal T}_{\mathbb K}/\Gamma$ containing it are a genus $g$ generalization of the $p$-adic BTZ black hole, which corresponds to the $g=1$ case of Mumford--Tate elliptic curves. \smallskip \subsection{Reed--Solomon Codes and Quantum Algebro-Geometric Codes} There is a general procedure for passing from classical codes to quantum codes, based on the Calderbank--Rains--Shor--Sloane algorithm \cite{CRSS}, see also \cite{AsKn}. It can be applied to certain classes of algebro-geometric codes and in particular to generalized Reed--Solomon codes. \smallskip Let ${\mathcal H}={\mathbb C}^q$ be the Hilbert space of a single $q$-ary qubit and ${\mathcal H}_n=({\mathbb C}^q)^{\otimes n}$ the space of $n$ $q$-ary qubits. We label an orthonormal basis of ${\mathcal H}$ by $|a\rangle$ with $a\in {\mathbb F}_q$. 
Thus, a $q$-ary qubit is a vector $\psi =\sum_{a\in {\mathbb F}_q} \lambda_a \,|a\rangle$ with $\lambda_a\in {\mathbb C}$, and an $n$-tuple of $q$-ary qubits is given by a vector $\psi=\sum_{a=(a_1\ldots a_n) \in {\mathbb F}_q^n} \lambda_a |a\rangle$ where $|a\rangle =|a_1\rangle \otimes \cdots \otimes |a_n\rangle$. Quantum error correcting codes are subspaces ${\mathcal C}$ of ${\mathcal H}_n$ that are error correcting for a certain number of ``$q$-ary bit flip'' and ``phase flip'' errors. More precisely, an error operator $E$ is detectable by a quantum code ${\mathcal C}$ if $P_{\mathcal C} E P_{\mathcal C} =\lambda_E \, P_{\mathcal C}$, where $P_{{\mathcal C}}$ is the orthogonal projection onto the code subspace and $\lambda_E$ is a scalar. In particular, one considers error operators that affect up to a certain number of qubits in an $n$-qubit state, namely error operators of the form $E=E_1\otimes \cdots \otimes E_n$, of weight $\omega(E)=\# \{ i\,:\, E_i\neq I \}$. The minimum distance $d_Q({\mathcal C})$ of the quantum code is the largest $d$ such that all errors with $\omega(E)< d$ are detectable. \smallskip The bit and phase flip error operators are defined on a single $q$-ary qubit as $$ T_b |a\rangle = |a+b \rangle, \ \ \ R_b |a\rangle = \xi^{{\rm Tr}(\langle a,b \rangle)}|a\rangle, $$ where $\xi$ is a primitive $p$-th root of unity, and ${\rm Tr}: {\mathbb F}_q \to {\mathbb F}_p$ is the trace function, ${\rm Tr}(a)=\sum_{i=0}^{r-1} a^{p^i}$, with $\langle a,b \rangle = \sum_{i=1}^{r} a_i b_i$ and with $R^{b_i} |a_j\rangle = \xi^{{\rm Tr}(a_j b_i)}|a_j\rangle$. Let $\{ \gamma_i \}_{i=1}^r$ be a basis of ${\mathbb F}_q$ as an ${\mathbb F}_p$-vector space, so that $a=\sum_i a_i \gamma_i$ and $b=\sum_i b_i \gamma_i$. 
Then the error operators $T_b$ and $R_b$ can be written respectively as $$ T_b =T^{b_1}\otimes \cdots \otimes T^{b_r}, \ \ \ R_b= R^{b_1}\otimes \cdots \otimes R^{b_r} $$ with $T$ and $R$ given by the operators acting on ${\mathbb C}^p$ of matrix form $$ T=\begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 1 & 0 & 0 & \cdots & 0 \end{pmatrix} \ \ \ R = \begin{pmatrix} 1 & & & & \\ & \xi & & & \\ & & \xi^2 & & \\ & & & \ddots & \\ & & & & \xi^{p-1} \end{pmatrix}$$ satisfying the commutation relation $TR =\xi RT$. The operators $T_aR_b$ with $a,b\in {\mathbb F}_q$ form an orthonormal basis for $M_{q\times q}({\mathbb C})$ under the inner product $\langle A, B\rangle=q^{-1} {\rm Tr}(A^* B)$, hence these operators generate all possible quantum errors on the space ${\mathbb C}^q$ of a single $q$-ary qubit. The action of error operators on a state of $n$ $q$-ary qubits can similarly be written in terms of operators $T_a R_b$ with $$ E_{a,b}=T_a R_b = (T_{a_1} \otimes \cdots \otimes T_{a_n}) (R_{b_1}\otimes \cdots \otimes R_{b_n}), $$ for $a=(a_1,\ldots, a_n), b=(b_1,\ldots,b_n)\in {\mathbb F}_q^n$. The operators $E_{a,b}$ satisfy $E^p_{a,b}=I$ and the commutation and composition rules $$ E_{a,b} E_{a',b'} = \xi^{\langle a,b'\rangle - \langle b, a' \rangle} E_{a',b'} E_{a,b}, \ \ \ E_{a,b} E_{a',b'} = \xi^{-\langle b,a'\rangle} E_{a+a',b+b'}, $$ where $\langle a,b \rangle=\sum_i \langle a_i, b_i\rangle=\sum_{i,j} a_{i,j} b_{i,j}$, with $a_i,b_i\in {\mathbb F}_q$, written as $a_i=\sum_j a_{i,j} \gamma_j$ and $b_i=\sum_j b_{i,j} \gamma_j$, after identifying ${\mathbb F}_q$ as a vector space with ${\mathbb F}_p^r$. Thus, we can consider the group ${\mathcal G}_n = \{ \xi^i E_{a,b}, \, a,b\in {\mathbb F}_q^n, \, 0\leq i \leq p-1 \}$ of order $pq^{2n}$. 
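The commutation relation $TR=\xi RT$ for the shift and clock matrices above can be verified numerically; the following Python sketch builds both matrices for $p=5$ exactly as displayed ($1$'s on the superdiagonal and in the lower-left corner of $T$, powers of $\xi$ on the diagonal of $R$) and checks the relation entrywise:

```python
import cmath

p = 5
xi = cmath.exp(2j * cmath.pi / p)   # primitive p-th root of unity

# T: 1's on the superdiagonal and in the lower-left corner (cyclic shift)
T = [[1 if (j - i) % p == 1 else 0 for j in range(p)] for i in range(p)]
# R: diagonal matrix diag(1, xi, xi^2, ..., xi^(p-1))
R = [[xi ** i if i == j else 0 for j in range(p)] for i in range(p)]

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(p)) for j in range(p)]
            for i in range(p)]

TR, RT = matmul(T, R), matmul(R, T)
# the commutation relation T R = xi * R T, checked entry by entry
assert all(abs(TR[i][j] - xi * RT[i][j]) < 1e-9 for i in range(p) for j in range(p))
```

The relation follows since both $TR$ and $RT$ are supported on the entries where $T$ is, with $TR$ carrying the phase $\xi^{i+1}$ and $RT$ the phase $\xi^i$ in row $i$.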
A quantum stabilizer error-correcting code ${\mathcal C}$ is a subspace ${\mathcal C}\subset {\mathcal H}_n$ that is a joint eigenspace of the operators $E_{a,b}$ in an abelian subgroup ${\mathcal S}\subset {\mathcal G}_n$. \smallskip Let $\varphi\in {\rm Aut}_{{\mathbb F}_p}({\mathbb F}_p^r)$ be an automorphism. In particular, we consider $\varphi$ given by the trace as in \cite{AsKn}, so that the associated pairing is $$ \langle (a,b), (a',b') \rangle =\langle a, \varphi(b')\rangle - \langle a', \varphi(b)\rangle = {\rm Tr}(\langle a,b'\rangle_* - \langle a',b\rangle_*), $$ where, for $a, b \in {\mathbb F}_q^n$, the inner product $\langle a,b \rangle \neq \langle a,b \rangle_*$, since $\langle a,b \rangle_* = \sum_{i=1}^n a_i b_i$, while $\langle a,b \rangle = \sum_{i=1}^n \langle a_i, b_i \rangle = \sum_{i=1}^n \sum_{j=1}^r a_{i,j} b_{i,j}$. If $C\subset {\mathbb F}_q^{2n}$ is a classical self-orthogonal code with respect to this pairing, then the subgroup ${\mathcal S} \subset {\mathcal G}_n$ given by the elements $\xi^i E_{a,\varphi(b)}$ with $(a,b)\in C$ is an abelian subgroup of ${\mathcal G}_n$, because of the commutation rule above. This construction is the CRSS algorithm that associates to a self-orthogonal classical $[2n,k,d]_q$ code a stabilizer quantum $[[n,n-k,d_Q]]_q$-code, where $d_Q=\min\{ \omega(a,b)\,:\, (a,b)\in C^\perp \smallsetminus C \}$, where the weight $\omega(a,b)=\#\{ i\,:\, a_i\neq 0 \text{ or } b_i\neq 0 \}$, and $C^\perp=\{ (v,w)\in {\mathbb F}_q^{2n}\,:\, \langle (a,b),(v,w)\rangle =0, \, \forall (a,b)\in C\}$. \smallskip We can view the CRSS algorithm assigning the quantum stabilizer code ${\mathcal C}$ to the classical code $C$ as an encoding process that takes the $q^k$ input vectors $(v,w) \in {\mathbb F}_q^{2n}$ of the classical code $C$ and encodes the states $|(v,w) \rangle$ using the vectors $\psi \in ({\mathbb C}^q)^{\otimes n}$ satisfying $E_{v,\varphi(w)}\psi =\lambda \psi$ in a common eigenspace of the $E_{v,\varphi(w)}$. 
\smallskip A slightly more general version of the CRSS algorithm starts with two classical linear $q$-ary codes $C_1 \subseteq C_2$ of length $n$ and dimensions $k_1$ and $k_2$, and associates to them a quantum code ${\mathcal C}={\mathcal C}(C_1,C_2)$ with parameters $[[n, k_2-k_1, \min\{ d(C_2\smallsetminus C_1), d(C_1^\perp\smallsetminus C_2) \}]]_q$, see \cite{Gra2}, \cite{KimWal}. The procedure for the construction of the quantum code is similar to the version of the CRSS algorithm recalled above. One constructs a code $C=\gamma C_1 + \bar \gamma C_2$ in ${\mathbb F}_{q^2}^n$ with $\gamma$ a primitive element of ${\mathbb F}_{q^2}$ and $\{ \gamma, \bar\gamma \}$ a linear basis of ${\mathbb F}_{q^2}$ as an ${\mathbb F}_q$-vector space. By identifying ${\mathbb F}_{q^2}^n$ as an ${\mathbb F}_q$-vector space with ${\mathbb F}_q^{2n}$, we obtain a self-orthogonal code $C \subset {\mathbb F}^{2n}_q$, to which the CRSS algorithm discussed before can be applied. \smallskip Conditions under which Reed--Solomon codes satisfy a self-duality condition, and the corresponding quantum Reed--Solomon codes obtained via a CRSS-type algorithm, are analyzed, for instance, in \cite{Gra} and \cite{Gua}. We use here a construction of \cite{AsKn}, which shows that, if $C$ is a $q^2$-ary classical $[n,k,d]_{q^2}$-code contained in its Hermitian dual, then there exists an associated $q$-ary $[[n,n-2k,d_Q]]_q$-quantum code, with $d_Q\geq d$. Here the Hermitian dual is taken with respect to the ``Hermitian'' pairing $$ \langle v,w \rangle_H = \sum_{i=1}^n v_i w_i^q, \ \ \text{ for } v,w \in {\mathbb F}_{q^2}^n. 
$$ This is a variant of the CRSS algorithm described above, where a Hermitian-self-orthogonal code of length $n$ over the field extension ${\mathbb F}_{q^2}$ is used to construct a self-orthogonal code $\tilde C$ of length $2n$ over ${\mathbb F}_q$ to which the CRSS algorithm can be applied, obtained by expanding the code words $v\in C$ using a basis $\{ 1,\gamma \}$ of ${\mathbb F}_{q^2}$ as an ${\mathbb F}_q$-vector space, where $\gamma$ is an element in ${\mathbb F}_{q^2}\smallsetminus {\mathbb F}_q$ satisfying $\gamma^q =-\gamma +\gamma_0$ for some fixed $\gamma_0\in {\mathbb F}_q$. Using this approach, it suffices to construct generalized Reed--Solomon codes $\hat C_{w,k}$ of length $n<q+1$ over ${\mathbb F}_{q^2}$ that are Hermitian-self-orthogonal, in order to obtain associated quantum codes $\hat {\mathcal C}_{w,2n-k}$ as a code subspace of the $n$ $q$-ary qubits space ${\mathcal H}_n=({\mathbb C}^q)^{\otimes n}$. It is possible to ensure the Hermitian self-orthogonality condition for generalized Reed--Solomon codes by taking the weight vector $w=(w_1,\ldots,w_n)\in ({\mathbb F}_{q^2}^*)^n$ to satisfy $\sum_{i=1}^n w_i^{q+1} x_i^{qj+\ell}=0$ for all $0\leq j,\ell\leq k-1$, where $x=(x_1,\ldots,x_n) \in {\mathbb F}_{q^2}^n$ are the $n$ chosen points (excluding $\infty$) of $\P^1({\mathbb F}_{q^2})$. Using this method, it is proved in \cite{LXW} that the choice $w_i=1$, with $n=q^2=\# {\mathbb F}_{q^2}$ and $k=q$, produces a Reed--Solomon code $C=C_{1,q}$ that is Hermitian-self-orthogonal, and an associated $[[q^2+1, q^2-2q+1,q+1]]_q$-quantum Reed--Solomon code $\hat {\mathcal C}$. Moreover, it is also shown in \cite{LXW} that for $w_i$ satisfying $w_i^{q+1}=(\prod_{j\neq i} (x_i - x_j))^{-1}$, with $n\leq q$ and $x=(x_1,\ldots,x_n) \in {\mathbb F}_q^n$, and for $k\leq \lfloor n/2 \rfloor$, the generalized Reed--Solomon codes satisfy $C_{w,k}\subseteq C_{w,n-k}$ and $C_{w,n-k}$ is the Hermitian dual of $C_{w,k}$. 
Hence the CRSS algorithm can be applied to obtain an $[[n,n-2k,k+1]]_q$-quantum Reed--Solomon code ${\mathcal C}_{w,k}={\mathcal C}(C_{w,k},C_{w,n-k})$. \smallskip \subsection{The case of the perfect tensors} In particular, the construction described above recovers the case of perfect tensors as the special case where $n=q$ and $k=(q-1)/2$. We obtain this using the generalized Reed--Solomon codes as in Theorem~6 of \cite{LXW}, for the case $n\leq q$ and $k\leq \lfloor \frac{n}{2} \rfloor$, with a choice of the weights $w_i$ satisfying $w_i^{q+1}=(\prod_{j\neq i} (x_i - x_j))^{-1}$, with $n=q$ and $x_i \in {\mathbb F}_q$. As shown in Theorem~6 of \cite{LXW}, this produces two classical generalized Reed--Solomon codes $C_{w,\frac{q-1}{2}}\subseteq C_{w,\frac{q+1}{2}}$ that are Hermitian dual to each other. The associated quantum generalized Reed--Solomon code is then obtained via the general construction of Ashikhmin--Knill (Theorem~4 and Corollary~1 of \cite{AsKn}) that associates to a classical $[n,k,d]_{q^2}$ code contained in its Hermitian dual a quantum $[[n,n-2k,d]]_q$ code. One can see directly that, in the case of perfect tensors when $n=q$, the weights are constant and given by $w_i^{q+1} = p-1$ for all $i=1, \ldots, q$. \smallskip Thus, we can regard the construction described above with generalized Reed--Solomon codes as a generalization of the usual construction of perfect tensors, which recovers the perfect tensor case for a particular choice of (constant) weights of the classical Reed--Solomon codes. \smallskip The more general cases with non-constant weights assign different weights to different directions in the Bruhat--Tits tree. These may be useful in view of holographic models where the bulk geometry is dynamical, as in \cite{GHJMPSST}, and also described by different weights in different directions in the tree. 
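The constancy of the weights in the perfect-tensor case above follows from Wilson's theorem: when $n=q$ the differences $x_i-x_j$, $j\neq i$, run over all of ${\mathbb F}_q^*$, so their product is $(q-1)!=-1$, independently of $i$. A quick numerical check for prime $q=p$ (variable names ours):

```python
from math import prod

# For n = q = p prime, prod_{j != i}(x_i - x_j) runs over all of F_p^*,
# so it equals (p-1)! = -1 by Wilson's theorem, for every choice of i.
for p in (3, 5, 7, 11, 13):
    pts = list(range(p))                  # the evaluation points x_i, all of F_p
    for i, x in enumerate(pts):
        d = prod((x - y) % p for j, y in enumerate(pts) if j != i) % p
        assert d == p - 1                 # d = -1 in F_p, hence w_i^{q+1} = d^{-1} = p - 1
```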
\smallskip \subsection{Holographic quantum codes on Bruhat--Tits trees and Mumford curves} We use the procedure described above, passing from classical algebro-geometric codes (in particular generalized Reed--Solomon codes) to associated quantum stabilizer codes, to construct a holographic code on the Bruhat--Tits tree ${\mathcal T}_{\mathbb K}$ associated to the classical codes constructed above. \smallskip We have seen above that, in order to apply the CRSS algorithm, we pass to a quadratic extension ${\mathbb F}_{q^2}$ of ${\mathbb F}_q$ and consider Reed--Solomon codes over ${\mathbb F}_{q^2}$. In terms of the Bruhat--Tits tree, we can pass to an unramified quadratic extension ${\mathbb L}$ of the field ${\mathbb K}$, so that the Bruhat--Tits tree ${\mathcal T}_{\mathbb L}$ is obtained from the Bruhat--Tits tree ${\mathcal T}_{\mathbb K}$ simply by adding new branches at each vertex, so as to obtain a homogeneous tree of valence $q^2+1$. Since the extension is unramified, it is not necessary to insert new vertices along the edges, and we can view the tree ${\mathcal T}_{\mathbb K}$ as a subtree of ${\mathcal T}_{\mathbb L}$. We then proceed to construct a classical code associated to ${\mathcal T}_{\mathbb L}$ using Reed--Solomon codes $\hat C_{w,k}$ placed at the vertices according to the procedure described in the previous sections. Using the construction above, with $w_i=1$ and $k=q$, we associate to each vertex a quantum Reed--Solomon code $\hat{\mathcal C}$ with code parameters $[[q^2+1, q^2-2q+1,q+1]]_q$. This corresponds to considering a state space ${\mathcal H}_{q^2+1}=({\mathbb C}^q)^{\otimes q^2+1}$ associated to each vertex, which we can think of as a state with a $q$-ary qubit sitting at each of the $q^2+1$ points of $\P^1({\mathbb F}_{q^2})$, or equivalently at each of the legs surrounding that vertex in ${\mathcal T}_{\mathbb L}$. The quantum code $\hat{\mathcal C}$ detects quantum errors of weight up to $q+1=\# \P^1({\mathbb F}_q)$. 
Thus, by identifying ${\mathcal T}_{\mathbb K}\subset {\mathcal T}_{\mathbb L}$ and $\P^1({\mathbb F}_q)\subset \P^1({\mathbb F}_{q^2})$ as the set of directions along the subtree ${\mathcal T}_{\mathbb K}$, we can arrange that the code $\hat{\mathcal C}$ corrects quantum errors along the ${\mathcal T}_{\mathbb K}$ directions. One can also use a bipartition $A\cup A^c$ of the edges at each vertex of ${\mathcal T}_{\mathbb L}$, with $\# A =k$, and associate to the bipartition a pair of codes $\hat C_{w,k}$ and $\hat C_{w,n-k}$ with associated quantum Reed--Solomon codes ${\mathcal C}_{w,k}$ as above at the vertices of ${\mathcal T}_{\mathbb K}$. \smallskip One can think of the classical codes $\hat C$ associated to the vertices of the Bruhat--Tits tree in this way as performing an encoding of $q+1$ classical $q^2$-ary bits associated to the points of $\P^1({\mathbb K})$ into $q^2+1$ classical $q^2$-ary bits associated to the points of $\P^1({\mathbb L})$. Thus, the whole classical code associated to this quadratic extension can be seen as a way of encoding a state consisting of classical $q^2$-ary bits associated to the edges of ${\mathcal T}_{\mathbb K}$ into a set of classical $q^2$-ary bits associated to the edges of ${\mathcal T}_{\mathbb L}$, and the latter into a state of $q^2$-ary bits associated to the set of boundary points $\P^1({\mathbb L})$. The corresponding CRSS quantum codes $\hat{\mathcal C}$ at the vertices of the Bruhat--Tits tree ${\mathcal T}_{\mathbb L}$ encode the input given by the common eigenspace of the error operators associated to the code words of the classical code $\hat C$ into a state consisting of a $q$-ary qubit placed at each leg around the vertex. 
\smallskip In order to combine these quantum codes placed at the vertices of the Bruhat--Tits tree ${\mathcal T}_{\mathbb L}$ into a holographic code over the whole tree, with logical inputs in the bulk and physical outputs at the boundary $\P^1({\mathbb L})$, notice that at each vertex $\nu$ we have the same subspace ${\mathcal H}_\nu$ given by the common eigenspace ${\mathcal H}_\nu=\{ \psi \,:\, E_{v,\varphi(w)}\psi =\lambda \psi \}$ for all words $(v,w)$ in the classical code. We encode states $\psi_\nu \in {\mathcal H}_\nu$ as $\psi_\nu =(\psi_{\nu,x})_{x\in \P^1({\mathbb F}_{q^2})}$, where the points $x\in \P^1({\mathbb F}_{q^2})$ label the legs around the vertex $\nu$, so that we think of $\psi_{\nu,x}\in {\mathbb C}^q$ as the $q$-ary qubit deposited on the leg $x$ by the quantum code $\hat{\mathcal C}$ sitting at the vertex $\nu$. Starting at the root vertex and proceeding towards the outside of the tree, at each next step, the leg $\infty \in \P^1({\mathbb F}_{q^2})$ around the new vertex $\nu$ is the one connected to a leg $x_i \in {\mathbb F}_{q^2} \subset \P^1({\mathbb F}_{q^2})$ of the previous vertex $\nu'$, which receives an output $\psi_{\nu',x_i}$. Thus, the $q$-ary qubit $\psi_{\nu,\infty}$ is determined, as it has to match the output $\psi_{\nu',x_i}$ of the previous code, while the remaining possible inputs correspond to the choices of $\psi \in {\mathcal H}_\nu$ with that fixed $\psi_{\nu,\infty}$ component. Proceeding towards the boundary of the tree determines a holographic code on ${\mathcal T}_{\mathbb L}$ that outputs $q$-ary qubits at the points of $\P^1({\mathbb L})$. As mentioned above, the quantum code detects errors along the subtree ${\mathcal T}_{\mathbb K}$. As in the case of the classical codes, there is an asymmetry in this construction of the holographic code between the roles of the root vertex and of the remaining vertices of the Bruhat--Tits tree. 
\section{Discrete and continuous bulk spaces: Bruhat--Tits buildings and Drinfeld symmetric spaces} Unlike its Archimedean counterparts, either Euclidean AdS$_2$/CFT$_1$ with bulk $\H^2$ and boundary $\P^1({\mathbb R})$ or Euclidean AdS$_3$/CFT$_2$ with bulk $\H^3$ and boundary $\P^1({\mathbb C})$, the $p$-adic AdS/CFT correspondence has two different choices of bulk spaces (one discrete and one continuous) which share the same conformal boundary at infinity. The discrete version of the bulk space is given by the Bruhat--Tits tree ${\mathcal T}_{\mathbb K}$ of ${\rm PGL}(2,{\mathbb K})$, with ${\mathbb K}$ a finite extension of ${\mathbb Q}_p$, while the continuous form of the bulk space is given by Drinfeld's $p$-adic upper half plane $\Omega$. Both have the same boundary $\P^1({\mathbb K})$. We argue here that the full picture of the $p$-adic AdS/CFT correspondence should take into account both of these bulk spaces and the relation between them induced by the norm map. \smallskip The rank-two case can be generalized to higher rank, with the Bruhat--Tits buildings of ${\rm PGL}(n,{\mathbb K})$ generalizing the Bruhat--Tits tree and the higher dimensional Drinfeld symmetric spaces generalizing the Drinfeld upper half plane, see \cite{Kato}. \subsection*{The geometry of the Drinfeld plane} We briefly review the geometry of the Drinfeld upper half plane, see \cite{BouCar}. We denote by ${\mathbb K}$ a finite extension of ${\mathbb Q}_p$ and by ${\mathbb C}_p$ the completion of the algebraic closure of ${\mathbb K}$. Drinfeld's $p$-adic upper half plane is the space $$ \Omega = \P^1({\mathbb C}_p) \smallsetminus \P^1({\mathbb K}). $$ We also denote by ${\mathcal T}_{\mathbb K}$ the Bruhat--Tits tree of ${\rm PGL}(2,{\mathbb K})$, with boundary at infinity $\P^1({\mathbb K})$.
It is convenient to think of $\P^1({\mathbb C}_p)$ as the set of classes, up to homotheties in ${\mathbb C}_p^*$, of non-zero ${\mathbb K}$-linear maps $\varphi: {\mathbb K}^2 \to {\mathbb C}_p$, with $\P^1({\mathbb K})$ the set of classes of maps as above with ${\mathbb K}$-rank equal to one. This can be seen by identifying points $(\alpha:\beta)$ of $\P^1$ with homogeneous ideals $\langle y\alpha - x \beta \rangle$ in the polynomial ring in the variables $(x,y)$. The ${\mathbb K}$-linear map $\varphi: {\mathbb K}^2 \to {\mathbb C}_p$ given by $\varphi(x,y)=y\alpha - x \beta$ has a non-trivial kernel when $\alpha/\beta \in {\mathbb K}$ (assuming $\beta\neq 0$) and is injective if $\alpha/\beta \in {\mathbb C}_p\smallsetminus {\mathbb K}$. Thus, $\P^1({\mathbb C}_p) \smallsetminus \P^1({\mathbb K})$ can be identified with the set of homothety classes of injective ${\mathbb K}$-linear maps $\varphi: {\mathbb K}^2 \to {\mathbb C}_p$. \smallskip Given such an injective linear map, one can then compose it with the norm on ${\mathbb C}_p$. Recall that the Bruhat--Tits tree can be defined in terms of equivalence classes of norms. Namely, vertices of the Bruhat--Tits tree correspond to classes of lattices $M$ in ${\mathbb K}^2$ up to similarity, namely $M_1\sim M_2$ if $M_1=\lambda M_2$ for some $\lambda\in {\mathbb K}^*$. To a lattice $M$ one associates a norm $|\cdot |_M$, namely a real valued function on ${\mathbb K}^2$ which is positive on non-zero elements, satisfies $| a\cdot \underline{x} |_M =|a|\cdot | \underline{x} |_M$ for all $a\in {\mathbb K}$ and $\underline{x} \in {\mathbb K}^2$, with $|a|$ the $p$-adic norm on ${\mathbb K}$, and $| \underline{x}+\underline{y} |_M\leq \max\{ | \underline{x}|_M, |\underline{y} |_M \}$. The norm $|\underline{x}|_M$ is defined as follows. Let $\pi$ be a uniformizer in ${\mathcal O}_{\mathbb K}$ such that $k={\mathcal O}_{\mathbb K} /\pi{\mathcal O}_{\mathbb K}$ is the residue field $k={\mathbb F}_q$.
The fractional ideal $\{ \lambda \in {\mathbb K}\,:\, \lambda \underline{x} \in M \}$ is generated by a power $\pi^m$. The norm is then defined as $| \underline{x}|_M=q^m$ on non-zero vectors. Equivalent norms $|\cdot |_{M_1} =\gamma |\cdot |_{M_2}$ for $\gamma\in {\mathbb R}^*_+$ correspond to equivalent lattices. Two vertices in the Bruhat--Tits tree are adjacent iff the corresponding equivalence classes of lattices have representatives satisfying $\pi M \subset M' \subset M$. To see this in terms of norms, we can choose an ${\mathcal O}_{\mathbb K}$-basis $\{ e_1, e_2 \}$ for $M$ and $\{ e_1 , \pi e_2 \}$ for $M'$. Then $| x e_1 + y e_2 |_M = \max \{ |x|, |y| \}$ and $| x e_1 + y e_2 |_{M'} =\max \{ |x|, |\pi|^{-1} \cdot |y| \}$. The edge $e$ between the vertices $v=[M]$ and $v'=[M']$ is then parameterized by the classes of norms $| x e_1 + y e_2 |_t = \max \{ |x|, |\pi|^{-t} \cdot |y| \}$ for $0\leq t \leq 1$, see \cite{Kato}. This description of the Bruhat--Tits tree in terms of equivalence classes of norms on ${\mathbb K}^2$ determines a map from the Drinfeld upper half plane to the Bruhat--Tits tree, directly induced by the norm. Namely, given a point in $\Omega$, which we identify as above with an injective ${\mathbb K}$-linear map $\varphi: {\mathbb K}^2 \to {\mathbb C}_p$, we obtain a surjective map $$ \Upsilon: \Omega \to {\mathcal T}_{\mathbb K} $$ by setting $\Upsilon(\varphi)=|\cdot |_\varphi$, where $|\cdot |_\varphi$ is the norm on ${\mathbb K}^2$ defined by $|\underline{x} |_\varphi = |\varphi(\underline{x})|$, where the norm on the right-hand side is the $p$-adic norm on ${\mathbb C}_p$. The explicit form of this map is discussed in \cite{BouCar}. We identify a point $(\zeta_0:\zeta_1) \in \P^1({\mathbb C}_p)\smallsetminus \P^1({\mathbb K})$ with the map $\varphi: {\mathbb K}^2 \to {\mathbb C}_p$ that maps $x e_1 + y e_2 \mapsto x \zeta_0 + y \zeta_1 \in {\mathbb C}_p$.
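The lattice norms $|\cdot|_M$, $|\cdot|_{M'}$ and the interpolating family $|\cdot|_t$ can be made concrete. The sketch below is an illustration under the simplifying assumption ${\mathbb K}={\mathbb Q}_p$ with rational inputs, and the helper names are ours; it implements the $p$-adic absolute value, the family $|x e_1 + y e_2|_t = \max\{|x|, |\pi|^{-t}|y|\}$, and spot-checks the ultrametric inequality.

```python
# Sketch (illustration only, with K = Q_p and rational inputs; helper names
# are ours): the lattice norms |.|_M, |.|_{M'} and the interpolating family
# |x e1 + y e2|_t = max(|x|, |pi|^{-t} |y|) along the edge between [M] and [M'].
from fractions import Fraction

p = 3

def val(x):
    """p-adic valuation of a nonzero rational number."""
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def absp(x):
    """p-adic absolute value, with |0| = 0."""
    return 0.0 if x == 0 else float(p) ** (-val(x))

def norm_t(x, y, t=0.0):
    """t = 0 gives |.|_M = max(|x|,|y|); t = 1 gives |.|_{M'} = max(|x|, p|y|)."""
    return max(absp(x), (p ** t) * absp(y))

assert norm_t(1, 1) == 1.0 and norm_t(p, p, t=1.0) == 1.0
# Spot-check the ultrametric inequality for the vertex norm |.|_M.
vecs = [(1, 3), (Fraction(1, 3), 2), (9, Fraction(5, 9)), (2, 7)]
for (x1, y1) in vecs:
    for (x2, y2) in vecs:
        assert norm_t(x1 + x2, y1 + y2) <= max(norm_t(x1, y1), norm_t(x2, y2)) + 1e-12
```

The ultrametric inequality holds for every $t$ because the maximum of two ultrametric functions is again ultrametric; the check above uses $t=0$.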
In an affine patch (say with $\zeta_1\neq 0$) we can write the homothety class of $\varphi$ as $x e_1 + y e_2 \mapsto x \zeta + y$, with $\zeta \in {\mathbb C}_p\smallsetminus {\mathbb K}$. Then the preimages under the map $\Upsilon$ of two adjacent vertices $v,v'$ of ${\mathcal T}_{\mathbb K}$ and the edge $e$ connecting them are given, respectively, by $$ \Upsilon^{-1}(v) = \{ \zeta \in {\mathbb C}_p \,:\, |\zeta|\leq 1\} \smallsetminus \bigcup_{a\in {\mathcal O}_{\mathbb K}/\pi {\mathcal O}_{\mathbb K}} \{ \zeta \in {\mathbb C}_p \,:\, |\zeta - a| < 1 \} $$ $$ \Upsilon^{-1}(v') =\{ \zeta \in {\mathbb C}_p \,:\, |\zeta|\leq q^{-1} \} \smallsetminus \bigcup_{b \in \pi {\mathcal O}_{\mathbb K}/\pi^2 {\mathcal O}_{\mathbb K}} \{ \zeta \in {\mathbb C}_p \,:\, |\zeta - b| < q^{-1} \} $$ where $v=[M]$, $v'=[M']$ with $\pi M \subset M' \subset M$, and for $e_t = (1-t) v + t v'$, for $0<t<1$, along the edge $e$ $$ \Upsilon^{-1}(e_t) =\{ \zeta \in {\mathbb C}_p \,:\, |\zeta| = q^{-t} \}, $$ while $$ \Upsilon^{-1}(e) = \{ \zeta \in {\mathbb C}_p \,:\, |\zeta|\leq 1\} \smallsetminus \bigcup_{a\in ({\mathcal O}_{\mathbb K} \smallsetminus \pi {\mathcal O}_{\mathbb K})/\pi {\mathcal O}_{\mathbb K}} \{ \zeta \in {\mathbb C}_p \,:\, |\zeta - a| < 1 \} $$ $$ \smallsetminus \bigcup_{b \in \pi {\mathcal O}_{\mathbb K}/\pi^2 {\mathcal O}_{\mathbb K}} \{ \zeta \in {\mathbb C}_p \,:\, |\zeta - b| < q^{-1} \}. $$ For a detailed proof of this fact we refer to \S 2 of \cite{BouCar}. A part of the Drinfeld plane corresponding to the regions $\Upsilon^{-1}(v)$, $\Upsilon^{-1}(v')$ and $\Upsilon^{-1}(e)$ with $\partial e=\{ v,v'\}$ in the Bruhat--Tits tree can be illustrated as follows (from \cite{BouCar}): \begin{center} \includegraphics[scale=0.5]{drinfeldplane.jpg} \end{center} where the light colored region is $\Upsilon^{-1}(v)$, the striped shaded region is $\Upsilon^{-1}(v')$, and the dark shaded cylinder connecting them is $\Upsilon^{-1}(e)$.
Thus, one can visualize the Drinfeld plane as a continuum that is a ``tubular neighborhood" of the discrete Bruhat--Tits tree, with the regions $\Upsilon^{-1}(v)$ viewed as the $p$-adic analog of pair-of-pants decompositions for complex Riemann surfaces. A lift of the projection map $\Upsilon$ to the Bruhat--Tits tree realizes the tree as a skeleton of the Drinfeld plane. \subsection*{Higher rank buildings and Drinfeld symmetric spaces} An analogous description holds relating the Bruhat--Tits buildings ${\mathcal T}_{n,{\mathbb K}}$ of ${\rm PGL}_{n+1}({\mathbb K})$, with ${\mathbb K}$ a finite extension of ${\mathbb Q}_p$ and the associated Drinfeld symmetric space $$ \Omega_n = \P^n({\mathbb C}_p)\smallsetminus \cup_{H \in {\mathcal H}_{\mathbb K}} H, $$ where ${\mathcal H}_{\mathbb K}$ is the set of all ${\mathbb K}$-rational hyperplanes in $\P^n({\mathbb C}_p)$. There is again a map $\Upsilon_n : \Omega_n \to {\mathcal T}_{n,{\mathbb K}}$ where the preimages of simplices in the Bruhat--Tits building are described in terms of norm conditions, \cite{Kato}. \smallskip The Bruhat--Tits building ${\mathcal T}_{n,{\mathbb K}}$ of ${\rm PGL}_{n+1}({\mathbb K})$ is a simplicial complex with vertex set $V({\mathcal T}_{n,{\mathbb K}})={\mathcal T}_{n,{\mathbb K}}^0$ given by the similarity classes of lattices in an $(n+1)$-dimensional vector space $V$ over ${\mathbb K}$, where $M_1\sim M_2$ if $M_1=\lambda M_2$ for some $\lambda\in {\mathbb K}^*$. A set $\{ [M_0], \ldots, [M_\ell] \}$ of such classes defines an $\ell$-simplex in ${\mathcal T}_{n,{\mathbb K}}^\ell$ in the Bruhat--Tits building iff $M_0\supsetneq M_1 \supsetneq M_2 \supsetneq \cdots \supsetneq M_\ell \supsetneq \pi M_0$, with $\pi \in {\mathcal O}_{\mathbb K}$ a prime element with ${\mathbb F}_q={\mathcal O}_{\mathbb K}/\pi {\mathcal O}_{\mathbb K}$ the residue field.
Such a sequence determines a flag $\bar M_0 \supsetneq \bar M_1 \supsetneq \cdots \supsetneq \bar M_\ell \supsetneq 0$ of subspaces $\bar M_i =M_i/\pi M_0$ of the $(n+1)$-dimensional ${\mathbb F}_q$-vector space $M_0/\pi M_0$. The $\ell$-simplices in ${\mathcal T}_{n,{\mathbb K}}^\ell$ containing a given vertex $[M]$ are in one-to-one correspondence with such flags with $[M_0]=[M]$. As before, we consider norms on $V\simeq {\mathbb K}^{n+1}$ and similarity classes of norms. There is a ${\rm PGL}_{n+1}({\mathbb K})$-equivariant homeomorphism between the resulting space of equivalence classes of norms and the geometric realization of the simplicial complex ${\mathcal T}_{n,{\mathbb K}}$. \smallskip Consider then points $\zeta=(\zeta_0:\cdots : \zeta_n)\in \P^n({\mathbb C}_p)$ and the map $\varphi: V\to {\mathbb C}_p$ given by $\sum_{i=0}^n a_i e_i \mapsto \sum_{i=0}^n a_i \zeta_i$. The map $| \sum_{i=0}^n a_i e_i |_\varphi =| \sum_{i=0}^n a_i \zeta_i |$ determines an equivalence class of norms iff the point $\zeta \in \P^n({\mathbb C}_p)$ does not lie in any ${\mathbb K}$-rational hyperplane. This determines the map $\Upsilon_n: \Omega_n \to {\mathcal T}_{n,{\mathbb K}}$ that generalizes in higher rank the map from the Drinfeld plane to the Bruhat--Tits tree. As in the previous case, one can describe the preimages under this map. For example, the preimage of a vertex $v=[M]$ is given by $$ \Upsilon^{-1}(v)=\{ |\zeta_0|=\cdots =|\zeta_n|=1\} \smallsetminus \cup_H \{ \zeta \mod \pi\in H \} $$ with the union over hyperplanes and $$ \Upsilon^{-1}(e_t)=\{ |\zeta_0|=\cdots =|\zeta_{n-1}|=1, \, |\zeta_n|=q^{-t} \} $$ for $e_t$ a point along an edge $e$, with $0<t<1$, see \S 2 of \cite{Kato} for more details.
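Since simplices at a vertex $[M]$ correspond to flags of subspaces of ${\mathbb F}_q^{n+1}$, vertex degrees in the building reduce to counting subspaces, i.e. to Gaussian binomial coefficients. A short Python sketch (our own illustration; the function name is ours) recovers the $q+1$ neighbours of a vertex of the Bruhat--Tits tree in the rank-one case:

```python
# Sketch (our illustration; function name is ours): neighbours of a vertex [M]
# of the building correspond to proper nonzero subspaces of F_q^{n+1}, so
# vertex degrees reduce to Gaussian binomial coefficients.
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n."""
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

# Rank one (the Bruhat--Tits tree of PGL(2, K)): every vertex has q + 1 neighbours.
for q in (2, 3, 5, 7):
    assert gaussian_binomial(2, 1, q) == q + 1

# For PGL(3, K): a vertex has [3,1]_q + [3,2]_q = 2(q^2 + q + 1) adjacent vertices,
# one for each proper nonzero subspace of F_q^3.
q = 2
assert gaussian_binomial(3, 1, q) + gaussian_binomial(3, 2, q) == 2 * (q * q + q + 1)
```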
\section{Tensor networks on the Drinfeld plane}\label{DplaneSec} Because the $p$-adic AdS/CFT correspondence has two different choices of bulk space, in addition to considering classical and quantum codes associated to the Bruhat--Tits tree in constructing a version of tensor networks, we can also work with the Drinfeld $p$-adic upper half plane. Because this is a continuous rather than a discrete space, the type of construction we can consider there is closer to that of tensor networks on the ordinary upper half plane (the $2$-dimensional real hyperbolic plane $\H^2$) described in \cite{HaPPY}. The map $\Upsilon$ from the Drinfeld upper half plane to the Bruhat--Tits tree will then make it possible to relate the construction of tensor networks on the former to the latter. To this purpose, we start by reviewing the construction of the pentagon holographic code from \cite{HaPPY}. \smallskip \subsection{Pentagon Code on the Real Hyperbolic Plane} In \cite{HaPPY} a holographic code is constructed using a tessellation of the real hyperbolic plane $\H^2$ by pentagons, with quantum codes given by a six-leg perfect tensor placed at each tile. Unlike the codes discussed in the previous section on Bruhat--Tits trees, this code has no preferred base point in the tiling and all tiles are treated equally, and the codes are symmetric with respect to permutations of the five legs placed across the edges of the tiles, thus preserving the full symmetry group of the tiling. We discuss briefly some aspects of this pentagon code here before turning to analogous constructions on the Drinfeld $p$-adic upper half plane. \smallskip The real hyperbolic plane $\H^2$ (which we can conveniently represent as the Poincar\'e disk) has a regular periodic tessellation by right-angled pentagons.
\begin{center} \includegraphics[scale=0.35]{rightpentagons.jpg} \end{center} The corresponding symmetry group is the Fuchsian group $\Gamma \subset {\rm PSL}(2,{\mathbb R})$ of signature $(2,2,2,2,2)$ generated by the reflections about the sides of a single right-angled hyperbolic pentagon. An interesting property of this Fuchsian group, from the algebro-geometric perspective, is the fact that, if one subdivides an equilateral right-angled hyperbolic pentagon into $10$ triangles with angles $\pi/2, \pi/4, \pi/5$, then one can realize the group $\Gamma$ as a finite index subgroup of a triangle Fuchsian group $\Gamma'$ of signature $(2,4,5)$. These Fuchsian groups have the property that the quotient Riemann surface $X=\H^2/\Gamma$ is arithmetic as an algebraic curve (that is, it is defined over a number field), \cite{Cohen}. \smallskip The construction of the pentagon holographic code in \cite{HaPPY} places over each tile of this right-angled pentagon tiling a quantum code given by the six-leg perfect tensor determined by a $5$-qubit $[[5,1,3]]_2$-quantum code $$ {\mathcal C} \subset {\mathcal H}^{\otimes 5}, \ \ \ {\mathcal C}=\{ \psi\in {\mathcal H}^{\otimes 5}\, :\, S_j \psi =\psi \} $$ where $S_1 = X\otimes Z \otimes Z \otimes X \otimes I$, with $X,Y,Z$ the Pauli gates, $S_2,S_3,S_4,S_5$ the cyclic permutations of $S_1$ (only four of which are independent, since $S_5=S_1S_2S_3S_4$), and with ${\mathcal H}={\mathbb C}^2$ the one-qubit Hilbert space. This is visualized as a code over $\H^2$ that has one logical input at each tile of the pentagon tessellation and physical outputs across each edge of the tile, which are contracted with the legs of the nearby tiles, so that the resulting holographic code has one logical input at each tile and outputs at the points of the boundary $\P^1({\mathbb R})$ that correspond to infinite sequences of tiles.
\smallskip \subsection{Triangle Fuchsian Groups and Holographic Codes} In view of adapting this construction to the $p$-adic setting, it is better to first consider a modification that will allow us to work directly with the triangle Fuchsian group $\Gamma(2,4,5)$ rather than with its index $10$ subgroup $\Gamma$ of signature $(2,2,2,2,2)$ which is the symmetry group of the regular right-angled pentagon tiling. \smallskip This means replacing each pentagon in the tiling by its subdivision into a triangulation of $10$ hyperbolic triangles with a vertex at the center of the pentagon tile and the other vertices in the middle of the edges and at the original vertices of the pentagon. \begin{center} \includegraphics[scale=0.35]{pentagontriangle.jpg} \end{center} We then consider holographic codes constructed by quantum codes associated to the triangle tiles. To this purpose, we do not necessarily require the group to be $\Gamma(2,4,5)$. We can work directly with the more general case of an arbitrary triangle Fuchsian group $\Gamma(a,b,c)\subset {\rm PSL}_2({\mathbb R})$ of hyperbolic type, $a^{-1}+b^{-1}+c^{-1}<1$. \smallskip A simple way to construct a holographic code based on a tiling of the hyperbolic plane realized by a hyperbolic triangle group is to use a quantum error-correcting code described in \cite{HaPPY} that encodes a single $3$-ary qubit (qutrit) into a space of three $3$-ary qubits by $$ \begin{array}{rcl} |0\rangle & \mapsto & |000\rangle + |111\rangle + |222\rangle \\ |1\rangle & \mapsto & |012\rangle + |120\rangle + |201\rangle \\ |2\rangle & \mapsto & |021\rangle + |102\rangle + |210\rangle\, . \end{array} $$ This code can be represented as a perfect tensor $|a\rangle \mapsto T_{abcd} |bcd\rangle$ in the sense of \cite{HaPPY}.
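Perfection of this qutrit tensor is a finite check: for each of the three ways of splitting the four legs into two pairs, the reshaped $9\times 9$ matrix must be proportional to a unitary, and for this code it is in fact a permutation matrix. A short Python verification (our own sketch; the helper names are ours):

```python
# Sketch (our own check; names are ours): the three-qutrit code above defines a
# four-leg tensor T_{abcd} that is perfect, i.e. every splitting of the four
# legs into two pairs gives a 9 x 9 matrix proportional to a unitary --
# here in fact a permutation matrix.
from itertools import product

codewords = {0: {(0, 0, 0), (1, 1, 1), (2, 2, 2)},
             1: {(0, 1, 2), (1, 2, 0), (2, 0, 1)},
             2: {(0, 2, 1), (1, 0, 2), (2, 1, 0)}}

# Unnormalized tensor entries: T[a,b,c,d] = 1 iff |bcd> appears in the image of |a>.
T = {idx: 1 if idx[1:] in codewords[idx[0]] else 0
     for idx in product(range(3), repeat=4)}

def is_permutation(rows, cols):
    """Reshape T along the bipartition rows|cols and check for a permutation matrix."""
    pairs = list(product(range(3), repeat=2))
    ones = {(tuple(idx[i] for i in rows), tuple(idx[i] for i in cols))
            for idx in product(range(3), repeat=4) if T[idx] == 1}
    row_counts = [sum(1 for (r, c) in ones if r == p) for p in pairs]
    col_counts = [sum(1 for (r, c) in ones if c == p) for p in pairs]
    return row_counts == [1] * 9 and col_counts == [1] * 9

assert all(is_permutation(rows, cols)
           for rows, cols in [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))])
```

Checking the three bipartitions suffices, since the transpose of a permutation matrix is again a permutation matrix.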
By placing a copy of this code (thought of as a copy of the tensor $T_{abcd}$) at each triangle tile of the tiling specified by the Fuchsian triangle group, one obtains a holographic code with a logical input qutrit at each tile and physical output qutrits at points of the boundary $\P^1({\mathbb R})$ corresponding to limit points of infinite sequences of tiles, from a specified base point in the bulk. \smallskip A possible drawback of this simple construction is the fact that the quantum code we are using does not contain any information about the specific triangle group that determines the tessellation. This should be corrected by taking into consideration the stabilizer subgroups of edges and vertices, and incorporating them into the structure of the quantum code. \smallskip This can be done by considering quantum codes placed at the vertices, rather than at the faces, of the tessellation of a hyperbolic triangle group $\Gamma(a,b,c)$. This requires using perfect tensors of different valences, depending on the cardinality of the stabilizer group $G_v \subset \Gamma(a,b,c)$ of the vertex $v$. \smallskip A triangle Fuchsian group $\Gamma(a,b,c)$ in ${\rm PSL}_2({\mathbb R})$ is generated by elements $\gamma_1=\sigma_1 \sigma_2$, $\gamma_2=\sigma_2 \sigma_3$ and $\gamma_3=\sigma_3\sigma_1$, where the $\sigma_i$ with $\sigma_i^2=1$ are the reflections about the sides of the fundamental domain triangle in $\H^2$. The generators $\gamma_i$ satisfy the relations $\gamma_1^a=\gamma_2^b=\gamma_3^c=\gamma_1\gamma_2\gamma_3=1$, that correspond to rotations by angles $2\pi/a$, $2\pi/b$ and $2\pi/c$, respectively, with stabilizer groups ${\mathbb Z}/a{\mathbb Z}$, ${\mathbb Z}/b{\mathbb Z}$, ${\mathbb Z}/c{\mathbb Z}$ associated to the vertices of the tessellation.
Let $\ell={\rm lcm}\{a,b,c\}$ and consider the embedding ${\mathbb Z}/a{\mathbb Z}\hookrightarrow {\mathbb Z}/\ell{\mathbb Z}$ by identifying ${\mathbb Z}/\ell{\mathbb Z}$ with $\ell$-th roots of unity and mapping the generator of ${\mathbb Z}/a{\mathbb Z}$ to $\zeta^{\ell/a}$, where $\zeta$ is a primitive $\ell$-th root of unity. Similarly for the other two groups. We can then consider a construction like the quantum codes described in \cite{HMSS}. At a vertex labelled by a stabilizer ${\mathbb Z}/a{\mathbb Z}$ we consider the polynomial code $$ | \alpha \rangle \mapsto \sum_{\alpha_0,\ldots,\alpha_{a-1} \in {\mathbb Z}/\ell{\mathbb Z}} \otimes_{x\in {\mathbb Z}/a{\mathbb Z}} | f_{\underline{\alpha}}(x^{\ell/a}) \rangle $$ where $f_{\underline{\alpha}}(t)=\alpha_0 + \alpha_1 t + \cdots + \alpha_{a-1} t^{a-1}+\alpha t^a \in {\mathbb Z}/\ell{\mathbb Z}[t]$. This encodes an input in $\ell^2({\mathbb Z}/\ell{\mathbb Z})$ into an output in $\bigotimes_{x\in {\mathbb Z}/a{\mathbb Z}} \ell^2({\mathbb Z}/\ell{\mathbb Z})$, which we think of as an $\ell$-ary qubit deposited at each side of the tessellation around the vertex. We can express this as a tensor $T_{i_0\ldots i_a}$ with $a+1$ legs. By contracting legs along the matching edges of the tessellation we obtain a holographic code that inputs an $\ell$-ary qubit at each vertex of the tessellation and outputs at the points in the boundary $\P^1({\mathbb R})$ that are endpoints of geodesic lines consisting of edges of the tessellation. \smallskip \subsection{Surface Quantum Codes} There is another interesting construction of quantum stabilizer codes associated to tessellations of the hyperbolic plane, which was developed in \cite{Zemor}. These codes are constructed in general for a tiling defining a $2$-dimensional surface (possibly with boundary).
In particular, as shown in \cite{Zemor}, the construction applies to the case of hyperbolic triangle Fuchsian groups, through the associated Cayley graph and the tessellation defined by it. In particular, it applies to the triangle group $\Gamma(2,4,5)$ which we use here as a replacement for the right-angled pentagon tile of \cite{HaPPY}. The construction of surface codes in \cite{Zemor} arises as a natural generalization of Kitaev's toric code of \cite{Kitaev}. They have the advantage that they rely again on the CRSS algorithm that converts classical into quantum codes, hence they can be investigated in terms of classical coding theory techniques. \smallskip Consider a tessellation ${\mathcal R}$ of a complex Riemann surface $\Sigma$ and its dual ${\mathcal R}^*$ that has a vertex for each face of ${\mathcal R}$ with two vertices being adjacent in ${\mathcal R}^*$ if the corresponding faces in ${\mathcal R}$ share a common boundary edge. Let ${\mathcal E}=(\epsilon_{v,e})$ be the vertex-edge incidence matrix of ${\mathcal R}$ and let ${\mathcal E}^*$ be the vertex-edge incidence matrix of the dual graph ${\mathcal R}^*$. Let $V$ and $V^*$ be the ${\mathbb F}_q$-vector spaces spanned by the rows of ${\mathcal E}$ and ${\mathcal E}^*$, respectively. The rows of ${\mathcal E}$ are orthogonal to $V^*$ and the rows of ${\mathcal E}^*$ are orthogonal to $V$, with respect to the standard pairing $\langle v,v' \rangle =\sum_i v_i v'_i$. The first homology groups of ${\mathcal R}$ and ${\mathcal R}^*$ can be identified with the quotients $V^\perp/V^*$ and ${V^*}^\perp/V$. A quantum code can be associated to these data by a version of the CRSS algorithm, using the pair of matrices ${\mathcal E}$ and ${\mathcal E}^*$. The construction of the quantum code follows the same procedure illustrated above: to pairs $(v,w)$ of vectors $v\in V$, $w\in V^*$, one associates an error operator $E_{(v,w)}$.
The condition that the spaces $V$ and $V^*$ are mutually orthogonal implies that the bilinear pairing $\langle (v,w),(v',w')\rangle = \langle v,w' \rangle -\langle v',w\rangle$ vanishes, hence the group ${\mathcal S}$ formed by these $E_{(v,w)}$ and the $\xi^j$, $0\leq j \leq p-1$ is abelian. Thus, one can associate to it a quantum stabilizer code by taking a common eigenspace of the $E_{(v,w)}$. This imposes $\dim V + \dim V^*$ stabilizer conditions on $n$ $q$-ary qubits, where $n$ is the number of columns of ${\mathcal E}$ and ${\mathcal E}^*$ (number of edges of the graph ${\mathcal R}$), hence the parameters of the resulting quantum code are $[[n,k,d]]_q$, where $k=n -\dim V - \dim V^*$ and $d=\min\{ d_{V^\perp \smallsetminus V^*}, d_{{V^*}^\perp\smallsetminus V} \}$ with $d_{V^\perp \smallsetminus V^*}=\min\{ \omega(v)\,:\, v\neq 0, \, v\in V^\perp \smallsetminus V^* \}$ with $\omega(v)=\#\{ i\,:\, v_i\neq 0 \}$ and similarly for $d_{{V^*}^\perp\smallsetminus V}$. \smallskip The Kitaev toric code consists of this construction applied to a graph ${\mathcal R}$ obtained by a tessellation of a torus into squares. Generalizations to other Riemann surfaces and other tessellations were described in \cite{Zemor}. The main idea is to associate quantum surface codes to increasingly large portions of a given tessellation of the hyperbolic plane or to suitable quotients of such regions. \smallskip In our case, we can start with the right-angled pentagon tessellation ${\mathcal R}$ and its dual graph ${\mathcal R}^*$. After choosing a root vertex $v_0$ of ${\mathcal R}^*$ (the center of a chosen face in the tiling) we denote by ${\mathcal R}_N$ and ${\mathcal R}_N^*$ the finite tessellations obtained by considering only the points that are up to $N$ steps away from $v_0$ (that is, such that the hyperbolic geodesic to $v_0$ passes through at most $N$ tiles).
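The toric-code case just mentioned gives a concrete check of this parameter count. The sketch below (our own illustration, for $q=2$; all names are ours) builds the vertex-edge and face-edge incidence matrices of an $L\times L$ square tessellation of the torus, verifies that the two row spaces are mutually orthogonal over ${\mathbb F}_2$, and computes $k=n-\dim V-\dim V^*$, recovering the two logical qubits of Kitaev's code.

```python
# Sketch (our illustration, q = 2): the CRSS surface-code parameter count
# k = n - dim V - dim V* for the L x L square tessellation of the torus,
# which is Kitaev's toric code with k = 2 logical qubits.
L = 4
edges = {}
for i in range(L):
    for j in range(L):
        edges[('h', i, j)] = len(edges)  # horizontal edge (i,j)-(i,j+1)
        edges[('v', i, j)] = len(edges)  # vertical edge (i,j)-(i+1,j)
n = len(edges)  # one qubit per edge

def row(keys):
    """Incidence row over F_2, packed as a bitmask indexed by the edge set."""
    r = 0
    for key in keys:
        r ^= 1 << edges[key]
    return r

# Rows of the vertex-edge incidence matrix E and of the dual (face-edge) matrix E*.
vertex_rows = [row([('h', i, j), ('h', i, (j - 1) % L), ('v', i, j), ('v', (i - 1) % L, j)])
               for i in range(L) for j in range(L)]
face_rows = [row([('h', i, j), ('h', (i + 1) % L, j), ('v', i, j), ('v', i, (j + 1) % L)])
             for i in range(L) for j in range(L)]

# V and V* are mutually orthogonal: any vertex and face share 0 or 2 edges.
assert all(bin(vr & fr).count('1') % 2 == 0 for vr in vertex_rows for fr in face_rows)

def rank_gf2(rows):
    """Rank over F_2 by Gaussian elimination on bitmask rows."""
    pivots, rank = {}, 0
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h in pivots:
                r ^= pivots[h]
            else:
                pivots[h] = r
                rank += 1
                break
    return rank

k = n - rank_gf2(vertex_rows) - rank_gf2(face_rows)
assert (n, k) == (2 * L * L, 2)  # 32 physical qubits, 2 logical qubits
```

Each incidence matrix has rank $L^2-1$ (the rows sum to zero over ${\mathbb F}_2$, since every edge meets exactly two vertices and two faces), so $k = 2L^2 - 2(L^2-1) = 2$, independently of $L$.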
Let $V_N$ and $E_N$ be the number of vertices and edges in ${\mathcal R}_N$ and let $V_N^*$ be the number of vertices in ${\mathcal R}_N^*$. The region ${\mathcal R}_N$ has boundary, so in the construction of the dual graph ${\mathcal R}_N^*$ we assume that the dual graph has the same number of edges, $E_N^*=E_N$, where the edges of ${\mathcal R}_N^*$ include an edge cutting through each boundary edge of ${\mathcal R}_N$, and a number of vertices $V_N^*$ given by the number of faces of ${\mathcal R}_N$ plus one additional vertex for each boundary edge of ${\mathcal R}_N$. This will correctly produce, in the limit when $N\to \infty$, boundary vertices on $\P^1({\mathbb R})$ at the endpoints of all geodesics of the dual graph ${\mathcal R}^*$ of the tessellation ${\mathcal R}$, which should be the physical outputs of a holographic quantum code. Note that, starting from the central pentagon as zeroth step, at the first step one adds $10$ new pentagons, five of which share an edge with the initial one and five that share a vertex. At the second step, one adds $40$ new pentagons, where each of the $5$ pentagons of the first step that shared an edge with the central pentagon (we call these tiles of the first kind) will be adjacent to $2$ new tiles of the first kind (sharing an edge) and $1$ tile of the second kind (sharing a vertex), while each of the $5$ tiles of the second kind will be adjacent to $3$ new tiles of the first kind and $2$ new tiles of the second kind. Thus, let $F_N$ be the number of new tiles (faces) added to the tessellation at the $N$-th step, with $F_N=m_N+n_N$, where $m_N$ and $n_N$ are, respectively, the numbers of tiles of the first and second kind, namely those that share a full edge or just a vertex with a tile of the $(N-1)$-st step. We then have the recursion relation $$ m_{N+1} = 2m_N + 3 n_N , \ \ \ n_{N+1} = m_N + 2 n_N $$ with initial condition $m_1=n_1=5$.
This gives $$ (m_1,n_1)=(5,5), \ \ (m_2,n_2)=(25, 15), \ \ (m_3,n_3)=(95,55), $$ $$ (m_4,n_4)=(355,205), \ \ (m_5,n_5)=(1325,765), \ \ (m_6,n_6)=(4945,2855), \ldots $$ which corresponds to $F_1=10$, $F_2=40$, $F_3=150$, $F_4=560$, $F_5=2090$, $F_6=7800\ldots$ Similarly, let $W_N$ denote the number of vertices added to the tessellation at the $N$-th step in the construction. We count as before the numbers $m_N$ and $n_N$ of faces added at the $N$-th step, and for each face we count new vertices counterclockwise, counting the leftmost vertex (common to the next adjacent face) and not counting the rightmost vertex (which we include in the counting for the next tile). This gives a number of new vertices equal to $W_N=2 m_N + 3 n_N =m_{N+1}$, which is again computed in terms of the recursion above. We have $V_N = \sum_{k=0}^N W_k$ and $V_N^* = \sum_{k=0}^N F_k + E_{\partial,N}$, where $E_{\partial, N}$ is the number of boundary edges at the $N$-th stage in the construction. This number is also equal to $E_{\partial, N}= 2 m_N + 3 n_N=m_{N+1}$. \smallskip One can also consider closed surfaces (without boundary) and associated quantum codes by passing to Cayley graphs of quotient groups of the triangle Fuchsian group associated to the tessellation. In particular, the case that corresponds to the right-angled pentagon tile of the pentagon code of \cite{HaPPY} is $m=4$ and $\ell=5$, for which we use the presentation $$ \Gamma(2,4,5)=\langle a,b \,|\, a^2 =1, b^5=1, (ab)^4 =1 \rangle. $$ The $2$-complex used for the construction of the surface code in \cite{Zemor} is built by considering $2$-cycles of length $\ell=5$ and $2m=8$ of the form $\{ x,xb,xb^2,xb^3, xb^4,xb^5=x \}$ and $\{ x, xa, xab, xaba, x(ab)^2, x(ab)^2a, x(ab)^3, x(ab)^3a, x(ab)^4=x\}$, at every vertex $x$, where all vertices have valence $3$, with two edges $\{ x,xb \}$ and $\{x, xb^{-1} \}$ along an $\ell=5$-face and the remaining edge $\{x,xa\}$ along a $2m=8$-face.
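The recursion and the tile counts listed above are easily verified; a few lines of Python (illustrative only, with function names of our own choosing):

```python
# Sketch: the recursion m_{N+1} = 2 m_N + 3 n_N, n_{N+1} = m_N + 2 n_N for the
# right-angled pentagon tiling, checked against the values listed above.
def pentagon_counts(steps):
    m, n = 5, 5  # first step: ten new pentagons, five of each kind
    counts = [(m, n)]
    for _ in range(steps - 1):
        m, n = 2 * m + 3 * n, m + 2 * n
        counts.append((m, n))
    return counts

counts = pentagon_counts(6)
assert counts == [(5, 5), (25, 15), (95, 55), (355, 205), (1325, 765), (4945, 2855)]
assert [m + n for (m, n) in counts] == [10, 40, 150, 560, 2090, 7800]  # F_N
# New vertices (and boundary edges) at each step: W_N = 2 m_N + 3 n_N = m_{N+1}.
assert all(2 * counts[i][0] + 3 * counts[i][1] == counts[i + 1][0] for i in range(5))
```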
By constructing an explicit matrix representation of $\Gamma(2,m,\ell)$ in the matrix group ${\rm SL}_3({\mathbb Z}[\xi])$, with $\xi=2\cos(\pi/m\ell)$, and taking reduction of the matrix entries modulo a prime $p$, one obtains a finite quotient group $G$, as the image of $\Gamma(2,m,\ell)$ (as a subgroup of ${\rm SL}_3({\mathbb Z}[\xi])$) in the quotient ${\rm SL}_3({\mathbb F}_p[X]/(h(X)))$ where $h(X)$ is a function of the $2m\ell$-th normalized Chebyshev polynomial. It is shown in \cite{Zemor} that this finite quotient group $G$ has the property that any word in the generators that is the identity in $G$ without being the identity in $\Gamma(2,m,\ell)$ must be of length at least $\log p$. This condition on the finite quotient group ensures that the finite graph given by the Cayley graph of $G$ can be identified with a portion of the infinite Cayley graph of $\Gamma(2,m,\ell)$, given by the neighborhood of size $\log p$ of a vertex. Provided that $\log p$ is sufficiently large, the $2$-cycles will then correspond to the $\ell$-cycles and $2m$-cycles in this region, as the only words within that length that are equal to the identity in $G$ are those already equal to the identity in the triangle group. The quantum code associated to the Cayley graph of $G$ and its dual graph then has code parameters $[[n,k,d]]_q$ with $n=E$, dimension $k\geq \frac{E}{3}(1-2(\frac{1}{\ell}+\frac{1}{m}))$, where $E$ and $V$ are the number of edges and vertices in the Cayley graph of $G$. Thus, the dimension grows linearly in the length of the code, while as shown in \S 3.3 of \cite{Zemor}, the minimum distance is proportional to $\log p$. \smallskip The advantage of thinking in terms of triangle groups rather than pentagon codes is that there is a parallel theory of $p$-adic hyperbolic triangle groups in ${\rm PGL}_2({\mathbb K})$, for ${\mathbb K}$ a (sufficiently large) finite extension of ${\mathbb Q}_p$, see \cite{Kato2}, \cite{Kato3}.
These are much more severely constrained than the Fuchsian triangle groups in ${\rm PSL}_2({\mathbb R})$ and only exist for small values of $p$. \smallskip \subsection{Triangle Groups on the Bruhat--Tits Trees} In order to consider analogous constructions in the Drinfeld $p$-adic upper half plane $\Omega=\P^1({\mathbb C}_p)\smallsetminus \P^1({\mathbb K})$, we first need to consider possible tilings of $\Omega$. As in the case of the real hyperbolic plane $\H^2$, we can think of a tessellation of the Drinfeld plane $\Omega$ as a fundamental domain ${\mathcal F}$ for the action of a subgroup $\Gamma \subset {\rm GL}_2({\mathbb K})$ and its translates $\gamma({\mathcal F})$, $\gamma\in \Gamma$, with the property that $\Omega/\Gamma$ is compact. Using the reduction map $\Upsilon: \Omega \to {\mathcal T}_{\mathbb K}$ from the Drinfeld plane to the Bruhat--Tits tree, the property that $\Omega/\Gamma$ is compact translates into the property that ${\mathcal T}_{\mathbb K}/\Gamma$ is a finite graph. \smallskip Unlike what happens in the case of Fuchsian groups acting on the real hyperbolic plane, the existence of $p$-adic triangle groups is much more severely constrained. One is particularly interested in triangle groups of Mumford type. These are triangle groups $\Gamma\subset {\rm GL}_2({\mathbb Q}_p)$ such that $(\P^1({\mathbb C}_p)\smallsetminus \Lambda_\Gamma)/\Gamma\simeq \P^1({\mathbb C}_p)$ and the uniformization map $\pi: \P^1({\mathbb C}_p)\smallsetminus \Lambda_\Gamma\to \P^1({\mathbb C}_p)$ is ramified at three points. In particular, by the classification result of \cite{Kato2}, \cite{Kato3}, a $p$-adic triangle group of Mumford type of signature $(2,4,5)$ exists only when $p=2$. No hyperbolic triangle groups of Mumford type exist for $p>5$. The complete list of hyperbolic $p$-adic triangle groups $\Gamma(a,b,c)$ of Mumford type that can exist in the cases $p=2$, $p=3$, and $p=5$ is given in \cite{Kato3}.
\smallskip Let ${\mathcal F}$ be a fundamental domain for the action of the triangle group $\Gamma(2,4,5)$ on the Drinfeld $p$-adic upper half plane $\Omega$ with $p=2$ and let $T$ be a fundamental domain for the action of the same group $\Gamma(2,4,5)$ on the Bruhat--Tits tree ${\mathcal T}_{{\mathbb K}}$ of a (sufficiently large) finite extension ${\mathbb K}$ of ${\mathbb Q}_2$. Since the reduction map $\Upsilon: \Omega \to {\mathcal T}_{{\mathbb K}}$ is equivariant with respect to the action of ${\rm GL}_2({\mathbb K})$, we can assume that $T=\Upsilon({\mathcal F})$. More generally, we can consider any choice of one of the possible hyperbolic $p$-adic triangle groups $\Gamma(a,b,c)$ of Mumford type, with $p\in \{2,3,5\}$, acting on the Bruhat--Tits tree of a (sufficiently large) finite extension ${\mathbb K}$ of ${\mathbb Q}_p$, for one of these three possible values of $p$, and we proceed in the same way. \smallskip A good way of describing the fundamental domain of the action of a finitely generated discrete subgroup $\Gamma \subset {\rm PGL}_2({\mathbb K})$ on the Bruhat--Tits tree ${\mathcal T}_{\mathbb K}$ and the resulting quotient graph is in terms of {\em graphs of groups}, as shown in \cite{Kato2}, \cite{Kato3}. The theory of graphs of groups was developed in \cite{Bass}, \cite{Serre}. A graph of groups consists of a finite directed graph with groups $G_v$ and $G_e$ associated to the vertices and edges of the graph, with $G_{\bar e}=G_e$, together with injective group homomorphisms $\varphi_s: G_e \to G_{s(e)}$ and $\varphi_t: G_e \to G_{t(e)}$ from the group associated to an edge to the groups associated to the source and target vertices. 
The fundamental group of a graph of groups is constructed by choosing a spanning tree of the graph: it is generated by the vertex groups $G_v$ together with an element $h_e$ for each edge $e$, with relations $h_{\bar e}=h_e^{-1}$ and $$ h_e^{-1} \varphi_s(g) h_e = \varphi_t(g), \,\, \forall g\in G_e $$ and with $h_e=1$ for all $e$ in the chosen spanning tree. If one denotes by ${\mathcal G}$ the graph and by $G_\bullet$ the collection of groups associated to the vertices and edges, one writes $\pi_1({\mathcal G},G_\bullet)=\varinjlim_{\varphi, {\mathcal G}} G_\bullet$ for the resulting amalgam given by the fundamental group of the graph of groups. In the case where the graph consists of one edge and two vertices, this fundamental group is just the pushout in the category of groups, namely the amalgamated free product $G_{s(e)}\star_{G_e} G_{t(e)}$. The main idea (see \cite{Bass}, \cite{Serre}) is to associate to the action of a discrete group on a tree a quotient given not just by a graph but by the richer structure of a graph of groups, which keeps track of the information about the stabilizers of vertices and edges. In the case of a discrete subgroup $\Gamma \subset {\rm PGL}_2({\mathbb K})$, we consider the tree of groups given by the subtree ${\mathcal T}_\Gamma$ of the Bruhat--Tits tree ${\mathcal T}_{\mathbb K}$ together with the stabilizers $G_v$ and $G_e$ of vertices and edges, and we obtain a graph of groups as the quotient graph ${\mathcal T}_\Gamma/\Gamma$. It is shown in \cite{Kato3} that $p$-adic triangle groups of Mumford type are characterized by the property that the quotient graph $T={\mathcal T}_\Gamma/\Gamma$ is a tree consisting of three lines meeting at a single root vertex $v_0$. Such trees are called {\em tripods}. This tree, decorated with the stabilizer groups of vertices and edges, is a tree of groups. The ends of this tree are the three branch points, at $0$, $1$ and $\infty$, of the genus zero curve $\Omega_\Gamma/\Gamma$.
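\smallskip A classical illustration of such an amalgam is Serre's well-known example (see \cite{Serre}; the specific groups here are recalled only for illustration and are not part of the constructions of \cite{Kato2}, \cite{Kato3}): the graph of groups with a single edge, vertex groups ${\mathbb Z}/4$ and ${\mathbb Z}/6$, and edge group ${\mathbb Z}/2$ embedded as the unique subgroup of order two on each side, has fundamental group the amalgamated free product $$ {\mathbb Z}/4 \star_{{\mathbb Z}/2} {\mathbb Z}/6 \simeq {\rm SL}_2({\mathbb Z}), $$ corresponding to an action of ${\rm SL}_2({\mathbb Z})$ on a tree with quotient a single edge. \smallskip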
The group $\Gamma$ can be reconstructed from the tree of groups $(T, G_\bullet)$ as the associated fundamental group, \cite{Kato2}. Indeed the possible $p$-adic triangle groups of Mumford type are explicitly constructed using this method. For example, the tripod associated to the $p$-adic triangle group $\Gamma(2,4,5)$ with $p=2$, seen as a tree of groups, is the case $\ell=m=1$ of the following family (from \cite{Kato3}): \begin{center} \includegraphics[scale=0.35]{tripod.jpg} \end{center} with subgroups $D_2 \subset D_4 \subset S_4$ and $D_2$ intersecting $A_4\subset S_4$ trivially. In the case $\ell=m=1$ the resulting amalgam agrees with the pushout $S_4\star_{A_4} A_5$. \smallskip \subsection{Tessellations of the Drinfeld Plane} A general algorithm exists for computing fundamental domains in Bruhat--Tits trees for the action of certain quaternion groups, see \cite{FraMas}. In these cases the algorithm produces \begin{enumerate} \item a connected subtree ${\mathcal D}_\Gamma$ of the Bruhat--Tits tree which is a fundamental domain for the group action, in the sense that the edges of ${\mathcal D}_\Gamma$ form a complete set of coset representatives for $E({\mathcal T})/\Gamma$; \item the edge and vertex stabilizer groups $G_e$, $G_v$ for $e\in E({\mathcal D}_\Gamma)$ and $v\in V({\mathcal D}_\Gamma)$; \item an explicit form for the quotient map by identifications $(v,v',\gamma)$ between pairs of boundary vertices $v,v'$ of the fundamental domain ${\mathcal D}_\Gamma$, with $\gamma\in \Gamma$ such that $v'=\gamma v$. \end{enumerate} \smallskip This algorithm can be used to produce corresponding tessellations of the Drinfeld $p$-adic upper half plane. Let $\Gamma$, ${\mathcal D}_\Gamma$, $G_e$, $G_v$, and $\{ (v,v',\gamma) \}$ be given as above, through the algorithm of \cite{FraMas}. 
Using the projection map $\Upsilon: \Omega \to {\mathcal T}$ from the Drinfeld plane to the Bruhat--Tits tree, we can construct an associated tessellation of the Drinfeld plane, where the tiles are given by $\gamma T$, with $\gamma \in \Gamma$ and $$ T = \bigcup_{v\in V({\mathcal D}_\Gamma)} \Upsilon^{-1}(v) \cup \bigcup_{e\in E({\mathcal D}_\Gamma)} \Upsilon^{-1}(e). $$ The gluing rules for the tiles are prescribed by the data (2) and (3) associated to the fundamental domain on the Bruhat--Tits tree. \smallskip \subsection{Lifting Holographic Codes from the Bruhat--Tits Tree}\label{LiftSec} Another way to obtain holographic codes on the Drinfeld plane is to lift the construction of the classical and quantum codes on Bruhat--Tits trees described in \S \ref{BTcodes} via the surjection $\Upsilon: \Omega \to {\mathcal T}_{\mathbb K}$. This means that the ``tiles" to which we associate classical and quantum codes in the Drinfeld plane are given, in this case, by the regions $\Upsilon^{-1}(v)$, the preimages in $\Omega$ of vertices of the Bruhat--Tits tree, and the output of each (classical or quantum) Reed--Solomon code is stored in the connecting regions $\Upsilon^{-1}(e)$. This can be done by choosing a lift of the projection $\Upsilon$, which realizes the Bruhat--Tits tree as a skeleton of $\Omega$, and constructing the holographic code over that skeleton. The choice of a lift of the projection is non-canonical, hence this type of construction has the same kind of drawback as the construction used in \cite{HMSS} to simulate the pentagon code via a choice of a planar embedding of a tree along edges of the pentagon tiling of the real hyperbolic plane. An advantage in this case, however, is that the projection $\Upsilon$ is equivariant with respect to the ${\rm GL}_2({\mathbb Q}_p)$ symmetries, so the symmetries of the tree are kept intact, unlike the case of the planar embedding used in \cite{HMSS}.
\medskip \section{Holographic Codes on Higher Rank Bruhat--Tits Buildings}\label{BuildCodes} As above, we denote by ${\mathcal T}_{n,{\mathbb K}}$ the Bruhat--Tits building of ${\rm GL}_{n+1}({\mathbb K})$ and by $\Omega_n$ the Drinfeld symmetric space. \smallskip Consider first the case of the Bruhat--Tits building of ${\rm GL}_3({\mathbb K})$, with ${\mathbb K}$ a finite extension of ${\mathbb Q}_p$ with residue field ${\mathbb F}_q$, $q=p^r$. The set of vertices adjacent to a given vertex $v\in V({\mathcal T}_{2,{\mathbb K}})$ is a bipartite set, consisting of the set of $q^2+q+1$ ${\mathbb F}_q$-rational points of the projective plane $\P^2$ over ${\mathbb F}_q$ together with the set of $q^2+q+1$ ${\mathbb F}_q$-rational lines of the projective plane $\P^2$ over ${\mathbb F}_q$. The surface $X$ over ${\mathbb F}_q$ obtained by blowing up all the ${\mathbb F}_q$-rational points of $\P^2$ contains an exceptional divisor (a line) for each ${\mathbb F}_q$-rational point of $\P^2$ and a proper transform (also a line) for each ${\mathbb F}_q$-rational line in $\P^2$. Thus, to each vertex $w$ adjacent to the given vertex $v$ we associate a line $\ell_w$ in the blowup surface $X$. Let $u,w$ be vertices adjacent to $v$: the set $\{ u,v,w \}$ corresponds to a $2$-simplex in the $2$-dimensional simplicial complex ${\mathcal T}_{2,{\mathbb K}}$ if and only if the lines $\ell_u$ and $\ell_w$ intersect nontrivially in $X$. \smallskip In the case of ${\mathbb Q}_2$ one obtains the well-known picture below, with the $7$ points and $7$ lines of $\P^2({\mathbb F}_2)$ as vertices and with $21$ edges, \cite{Cox}.
\begin{center} \includegraphics[scale=0.35]{BTGL3.jpg} \end{center} In order to extend the construction of holographic codes to higher rank Bruhat--Tits buildings, in a way that reflects the associated geometries over finite fields that determine the local structure of the building, we need to replace the classical Reed--Solomon codes with algebro-geometric codes associated to higher-dimensional algebraic varieties. \smallskip \subsection{Codes on the Bruhat--Tits buildings of ${\rm GL}_3$ from algebro-geometric codes on surfaces} A general procedure for constructing algebro-geometric codes over higher-dimensional algebraic varieties generalizing the Reed--Solomon codes is described in \cite{TsfaVla}, see also \cite{Hansen}. Given a smooth projective variety $X$ over ${\mathbb F}_q$ with an ample line bundle ${\mathcal L}$, one obtains a linear code $C(X,{\mathcal L},{\mathcal P})$, where ${\mathcal P}$ is a set of ${\mathbb F}_q$-rational points of $X$, as the image of the germ map $$ \alpha: \Gamma(X,{\mathcal L}) \to \oplus_{x\in {\mathcal P}} {\mathcal L}_x \simeq {\mathbb F}_q^n, $$ which evaluates sections $s\in \Gamma(X,{\mathcal L})$ at points $x\in {\mathcal P}$, with the last identification given by a choice of an isomorphism ${\mathcal L}_x \simeq {\mathbb F}_q$ of the fibers at $x\in {\mathcal P}$, with $n=\# {\mathcal P}$. \smallskip For example, for $X=\P^2$, with ${\mathcal L}={\mathcal O}(m)$, with $0<m\leq q$, and ${\mathcal P}$ the set of all ${\mathbb F}_q$-rational points of $\P^2$, one obtains a code $C(\P^2, {\mathcal O}(m), \P^2({\mathbb F}_q))$ with length $n=q^2+q+1$, dimension $k=\frac{1}{2}(m+1)(m+2)$, and minimum distance bounded below by $d\geq q^2+q+1 - m(q+1)$, see \cite{Hansen}. \smallskip We focus here on the case of the Bruhat--Tits building of ${\rm GL}_3({\mathbb K})$, with ${\mathbb K}$ a finite extension of ${\mathbb Q}_p$ with residue field ${\mathbb F}_q$, $q=p^r$.
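As a quick numerical check of the parameters just quoted (a sketch written for this discussion; the function name is ours, while the formulas are the bounds from \cite{Hansen} recalled above), one can tabulate $n$, $k$ and the lower bound on $d$; for $q=2$, $m=1$ this yields a $[7,3,\geq 4]$ code on the Fano plane.

```python
# Parameters of the algebro-geometric code C(P^2, O(m), P^2(F_q)),
# following the bounds quoted above from Hansen's work:
#   length n = q^2 + q + 1, dimension k = (m+1)(m+2)/2,
#   minimum distance d >= n - m(q+1), valid for 0 < m <= q.

def projective_plane_code_params(q: int, m: int):
    """Return (n, k, lower bound on d) for C(P^2, O(m), P^2(F_q))."""
    assert 0 < m <= q, "the distance bound is stated for 0 < m <= q"
    n = q * q + q + 1
    k = (m + 1) * (m + 2) // 2
    d_lower = n - m * (q + 1)
    return n, k, d_lower

if __name__ == "__main__":
    # q = 2, m = 1: a [7, 3, >= 4] code on the Fano plane
    print(projective_plane_code_params(2, 1))  # (7, 3, 4)
    # q = 3, m = 2: a [13, 6, >= 5] code
    print(projective_plane_code_params(3, 2))  # (13, 6, 5)
```

Note that for $m=q$ the bound degenerates to $d\geq 1$, consistent with the restriction $0<m\leq q$ in the statement above.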
As we mentioned above, the link of a vertex in the Bruhat--Tits building is described in terms of the geometry of an algebraic surface $X$ obtained by blowing up all the ${\mathbb F}_q$-rational points of $\P^2$. \smallskip We use the example above of algebro-geometric codes $C(\P^2, {\mathcal L}, \P^2({\mathbb F}_q))$ associated to line bundles ${\mathcal L}$ over $\P^2$ to construct a classical holographic code on the Bruhat--Tits building of ${\rm GL}_3({\mathbb K})$. We fix a base vertex in the building and assign as logical input the datum of a divisor $D$ on $\P^2$ so that ${\mathcal L}={\mathcal L}(D)$. Consider then the surface $X$ over ${\mathbb F}_q$ obtained by blowing up all the ${\mathbb F}_q$-rational points of $\P^2$, the pullback $\pi^*{\mathcal L}$ under the projection map, and line bundles of the form $\hat{\mathcal L}=\pi^*{\mathcal L} \otimes {\mathcal O}(-\sum_i k_i E_i)$ where the $E_i$ are the exceptional divisors of the blowup. Assume that $D$ and the $k_i$ are chosen so that $\hat{\mathcal L}$ is represented by an effective divisor on $X$. We now consider the $q^2+q+1$ lines in $X$ determined by the ${\mathbb F}_q$-rational lines of $\P^2$, the $q^2+q+1$ lines that correspond to the ${\mathbb F}_q$-rational points of $\P^2$, and the set ${\mathcal P}$ consisting of the $q+1$ ${\mathbb F}_q$-rational points of each of these lines, with $\# {\mathcal P} = 2 (q+1) (q^2+q+1)$. The code $C(X,\hat{\mathcal L}, {\mathcal P})$ can be viewed as a code that, given the logical input $D$ at the base vertex $v$, deposits an output given by a vector in ${\mathbb F}_q^{q+1}$ at each adjacent vertex $w$ in the Bruhat--Tits building. These outputs are related by a consistency condition, which is determined by the edges and $2$-cells of the building. Namely, whenever $w$ and $u$ are vertices adjacent to $v$, such that $\{v,w,u \}$ is a $2$-cell in the building, the corresponding condition on $X$ is that the two lines $\ell_w$ and $\ell_u$ intersect.
The presence of a point of intersection means that the corresponding vectors in ${\mathbb F}_q^{q+1}$ must agree in one of the $q+1$ coordinates. \smallskip When one propagates the construction to nearby vertices in the Bruhat--Tits building, part of the logical input is reserved for the output ${\mathbb F}_q^{q+1}$-vector of the nearby vertices already reached by the previous steps from the chosen root vertex. As in the case of the Bruhat--Tits tree, we identify the given ${\mathbb F}_q^{q+1}$-vector (computed as output by the previous code) with assigned values at one of the lines in $X$ that corresponds to one of the lines in $\P^2$ (which we can think of as the $\P^1$ at infinity in $\P^2$). There is a consistency condition for the output at a new vertex $w$ that is adjacent to a $2$-cell where the remaining two vertices $v$ and $v'$ already have outputs $\underline{x}(v), \underline{x}(v')\in {\mathbb F}_q^{q+1}$ assigned by the previous codes: the outputs $\underline{x}(v), \underline{x}(v')$ at the two previous vertices $v,v'$ are two vectors in ${\mathbb F}_q^{q+1}$ that agree in one coordinate, hence they fix the values of the sections at two intersecting lines in $X$. The resulting output $\underline{x}(w)$ at the new vertex $w$ is then computed by the values at the $q+1$ points of the line $\ell_w$ of all sections $s$ that satisfy the constraints given by the assigned values at the points of $\ell_v$ and $\ell_{v'}$. The construction can in this way be propagated to the rest of the Bruhat--Tits building of ${\rm GL}_3({\mathbb K})$. This illustrates the general approach to constructing classical holographic codes on higher rank Bruhat--Tits buildings. \smallskip A construction of quantum holographic codes can be obtained from these classical codes using a version of the CRSS algorithm (possibly by allowing more general types of weighted versions of the classical codes, as we discussed in the case of the Reed--Solomon codes). 
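The combinatorial fact underlying these consistency conditions is that two distinct ${\mathbb F}_q$-rational lines of $\P^2$ always meet in exactly one ${\mathbb F}_q$-rational point, so output vectors attached to intersecting lines share exactly one evaluation point. The following minimal sketch (our own, not part of the constructions cited above) verifies this for $q=2$, recovering the $7$ points, $7$ lines and $21$ incidences of the Fano plane pictured earlier.

```python
from itertools import product

# P^2(F_2): points are the 7 nonzero vectors of F_2^3 (over F_2 each
# nonzero vector is its own projective class); lines are the 7 nonzero
# linear functionals, with incidence given by vanishing of the dot
# product mod 2.
points = [p for p in product((0, 1), repeat=3) if any(p)]
lines = points  # over F_2, functionals are parametrized the same way

def incident(p, l):
    return sum(a * b for a, b in zip(p, l)) % 2 == 0

# each line carries q + 1 = 3 rational points
line_pts = {l: frozenset(p for p in points if incident(p, l)) for l in lines}
assert all(len(s) == 3 for s in line_pts.values())

# 7 points, 7 lines, 21 incidences (the edges of the earlier picture)
n_flags = sum(len(s) for s in line_pts.values())

# any two distinct lines meet in exactly one point, so output vectors
# attached to intersecting lines must agree in exactly one coordinate
meets = [len(line_pts[l] & line_pts[m]) for l in lines for m in lines if l != m]
assert all(c == 1 for c in meets)

print(len(points), len(lines), n_flags)  # 7 7 21
```

The pairwise-intersection check is the $q=2$ case of the statement that two distinct lines in a projective plane meet in exactly one point, which is what forces the single-coordinate agreement condition above.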
The details of the corresponding quantum codes for higher rank buildings will be discussed in forthcoming work. \smallskip \subsection{Codes on Drinfeld symmetric spaces} Another possible approach to the construction of holographic codes for higher-rank $p$-adic symmetric spaces consists of working with Drinfeld symmetric spaces instead of Bruhat--Tits buildings. This extends the approach discussed in \S \ref{DplaneSec} on codes associated to actions of discrete groups on the Drinfeld plane. \smallskip In the higher rank setting, we consider two possible viewpoints. The first is based on the projection map $\Upsilon: \Omega_n \to {\mathcal T}_{n,{\mathbb K}}$ from the Drinfeld space $\Omega_n = \P^n({\mathbb C}_p)\smallsetminus \cup_{H\in {\mathcal H}_{\mathbb K}} H$ (the complement of the ${\mathbb K}$-rational hyperplanes in $\P^n$) to the Bruhat--Tits building of ${\rm GL}_{n+1}({\mathbb K})$. The idea here, as in \S \ref{LiftSec} above, is to lift via the projection map a construction of holographic classical and quantum codes from the Bruhat--Tits building to the space $\Omega_n$, with logical inputs associated to the regions $\Upsilon^{-1}(v)$, with $v$ the vertices of ${\mathcal T}_{n,{\mathbb K}}$, and outputs and compatibility conditions along the edges, faces, and higher-dimensional cells. Since the projection map $\Upsilon$ is equivariant with respect to the ${\rm GL}_{n+1}({\mathbb K})$ action, whatever symmetry the codes constructed on ${\mathcal T}_{n,{\mathbb K}}$ exhibit will be inherited by the resulting codes on $\Omega_n$. \smallskip The other possible approach consists of constructing a tensor network directly associated to a given action of a discrete subgroup $\Gamma$ of ${\rm GL}_{n+1}({\mathbb K})$ on the symmetric space $\Omega_n$.
Roughly, the main idea in this case is to assign logical inputs to the fundamental domains of the action, while outputs should be associated to the generators of the discrete group with compatibility conditions resulting from the relations. In this way, the codes assigned to each copy of the fundamental domain can be compatibly assembled into a global holographic code on $\Omega_n$, with logical inputs in the bulk and outputs at the boundary. The outputs should live on the points in the limit set of the group action on the rational hyperplanes $H\in {\mathcal H}_{\mathbb K}$. We will discuss these constructions of holographic codes on higher rank $p$-adic symmetric spaces in forthcoming work. \bigskip \bigskip \subsection*{Acknowledgment} The author thanks Matthew Heydeman, Sarthak Parikh, and Ingmar Saberi for many very useful discussions and an ongoing collaboration on several topics discussed in this paper, and especially Sarthak Parikh for suggesting several improvements to the paper. The author is partially supported by NSF grant DMS-1707882 and by the Perimeter Institute for Theoretical Physics.
\section{Introduction} \subsection{Statement of the problem} Consider the radiation of an elastic source $F$ outside a cavity $D$ described by the system \begin{eqnarray}\label{llame1}\rho \partial_{tt}U(x,t)=\mathcal L_{\lambda,\mu} U(x,t)+F(x,t), \quad x=(x_1,x_2,x_3)\in {\mathbb{R}}^3\backslash{\overline{D}},\; t>0\end{eqnarray} where $\rho$ denotes the density, $\mu$ and $\lambda$ the Lam\'e coefficients, $U=(u_1,u_2,u_3)^\top$ the displacement vector, $D\subset{\mathbb{R}}^3$ the region of the cavity and $\mathcal{L}_{\lambda,\mu}U$ the Lam\'e operator defined by \begin{eqnarray}\label{lame2} \mathcal{L}_{\lambda,\mu}U:=-\mu(x) \nabla\times\nabla\times U+ (\lambda(x)+2\mu(x))\nabla\nabla\cdot U+(\nabla\cdot U)\nabla\lambda(x)+((\nabla U)+(\nabla U)^T)\nabla\mu(x). \end{eqnarray} Throughout the paper, it is supposed that $\rho>0$ is a constant and $\mu,\lambda\in \mathcal C^3({\mathbb{R}}^3)$ satisfy $\mu>0$, $\lambda\geq0$. Further, the density function and the Lam\'e coefficients are supposed to be constant in $|x|>R$ for some sufficiently large $R>0$ such that $D\subset B_R:=\{x\in {\mathbb{R}}^3: |x|<R\}$. Together with the governing equation, we impose the initial conditions \begin{eqnarray}\label{IBCs} U(x,0)=0,\quad \partial_tU(x,0)=0,\quad & x\in {\mathbb{R}}^3\backslash{\overline{D}}, \end{eqnarray} and the traction-free boundary condition on $\partial D$: \begin{eqnarray}\label{stress1} \mathcal TU(x,t)=0,\quad& (x,t)\in \partial D\times {\mathbb{R}}^+, \end{eqnarray} where $\mathcal T U$ is the stress boundary condition defined by \eqref{stress} (see Section 2). In this paper we consider the inverse problem of determining the source term $F$ from knowledge of $U$ on the surface $\partial B_R=\{x\in {\mathbb{R}}^3: |x|=R\}$ with $R>0$ sufficiently large. According to \cite[Remark 4.5]{BHKY} there is an obstruction to the recovery of general time-dependent source terms $F$.
Facing this obstruction, we consider this problem for some specific types of source terms. \subsection{Motivations} We recall that the Lam\'e system \eqref{llame1}-\eqref{lame2} is frequently used for the study of linear elasticity and imaging problems. In this context our inverse problems can be seen as the recovery of an external force provided by the source term $F$. For instance, the recovery of an elastodynamic source term $F$ given by the product of a spatial function $g$ and a temporal function $f$ can be regarded as an approximation of an elastic pulse; such sources are commonly used in modeling vibration phenomena in seismology and teleseismic inversion \cite{AR,S}. This type of source has also been considered in numerous applications in biomedical imaging (see \cite{Ammari13,Ammari} and the references therein), where our inverse problem can be seen as the recovery of the information provided by the parameter under consideration. Let us also observe that the identification of time-dependent sources (see Theorems \ref{Th2:Uniqueness} and \ref{t2}) is associated with the recovery of a moving source, which can be thought of as an approximation of a pulsed signal transmitted by a moving antenna (see \cite{HKLZ} and the references therein for more details). \subsection{Known results} Inverse source problems are a class of inverse problems which have received much interest. These problems take different forms and have many applications (environment, imaging, seismology $\cdots$). For an overview of these problems we refer to \cite{I}. Among the different arguments considered for solving these problems we can mention the approach based on applications of Carleman estimates arising from the work of \cite{K1981} (see also \cite{Kh,Klibanov1992}). This approach has been applied successfully to hyperbolic equations by \cite{Ya99} in order to extend his previous work \cite{Ya95} to a wider class of source terms.
More precisely, in \cite{Ya99} the author considered the recovery of source terms of the form $f(x)G(x,t)$, where $G$ is known, while in \cite{Ya95} the analysis of the author is restricted to source terms of the form $\sigma(t)f(x)$, with $\sigma$ known. More recently, the approach of \cite{Ya99} has been extended by \cite{JLY} to hyperbolic equations with time-dependent second order coefficients and to less regular coefficients by \cite{YZ}. We mention also the work of \cite{CY,KSS} using a similar approach for inverse source problems stated for parabolic equations and the result of \cite{SU} proved by a combination of geometrical arguments and Carleman estimates. Concerning the Lam\'e system we refer to \cite{INY1998}, where a uniqueness result has been stated for the recovery of time-independent source terms by means of a suitable Carleman estimate; we mention also the work of \cite{IY2005,IY05,Isa86} dealing with related problems, as well as \cite{BY} where an inverse source problem for Biot's equations has been considered. We refer also to the recent work \cite{BHKY}, where the recovery of a time-independent source term appearing in the Lam\'e system in all space has been proved from measurements outside the support of the source under consideration, as well as the work of \cite{JLLY} dealing with this inverse source problem for fractional diffusion equations. In all the above-mentioned results the authors considered the recovery of time-independent source terms (in other words, the spatial component of the source term). For the recovery of a source depending only on the time variable we refer to \cite{FK}, where such problems have been considered for fractional diffusion equations, and for the recovery of some classes of sources depending on both space and time variables appearing in a parabolic equation on the half-space, we refer to \cite[Section 6.3]{I}.
For hyperbolic equations, we refer to \cite{DOT,RS} where the recovery of some specific time-dependent source terms has been considered. For Lam\'e systems, \cite[Theorem 4.2]{BHKY} seems to be the only result available in the mathematical literature where such a problem has been addressed for time-dependent source terms. The result of \cite[Theorem 4.2]{BHKY} is stated with source terms depending only on the time variable. To the best of our knowledge, except the result of \cite{DOT}, dealing with the recovery of discrete-in-time sources, and the result of the present paper, there is no result in the mathematical literature treating the recovery of a source term depending on both space and time variables appearing in hyperbolic equations. \subsection{Main results} In the present paper we consider three inverse problems related to the recovery of the source term $F$. In our first inverse problem we assume that the cavity $D\neq\emptyset$ is a domain with $\mathcal C^3$ boundary $\partial D$, with connected exterior ${\mathbb{R}}^3\backslash\overline{D}$, and we consider source terms of the form \begin{eqnarray}\label{source1}F(x,t)=f(t)h(x),\quad x\in{\mathbb{R}}^3\backslash\overline{D},\ t\in(0,+\infty),\end{eqnarray} with $f$ a real-valued function and $h=(h_1, h_2, h_3)^\top: {\mathbb{R}}^3\backslash\overline{D}\rightarrow {\mathbb{R}}^3$ a vector-valued function. Choose $R>0$ sufficiently large such that $B_R$ also contains the support of $h$ (i.e., supp$(h)\subset B_R$). We assume that $f\in L^2(0,+\infty)$ is compactly supported and $h\in L^2({\mathbb{R}}^3\backslash \overline{D})^3$.
Then, the problem (\ref{llame1})-(\ref{stress1}) admits a unique solution $$U\in \mathcal C^1([0,+\infty);L^2({\mathbb{R}}^3\backslash D))^3\cap \mathcal C([0,+\infty);H^1({\mathbb{R}}^3\backslash D))^3.$$ The proof of this result can be carried out by combining the elliptic regularity properties of $\mathcal{L}_{\lambda,\mu}$ (see e.g., \cite[Chapters 4 and 10]{Mclean} and \cite[Chapter 5]{Hsiao}) with \cite[Theorem 8.1, Chapter 3]{LM1} and \cite[Theorem 8.2, Chapter 3]{LM1} (see also the beginning of Sections 4.1 and 4.2 for more details). Our first inverse problem in the exterior of the cavity can be stated as follows. \textbf{Inverse Problem 1} (IP1): Assume that $f$ and $D$ are both known in advance. Determine the spatially dependent function $h$ from the radiated field $U$ measured on the surface $\partial B_R\times [0,T_1)$, $T_1\in(0,+\infty]$. Below we give an affirmative answer to the uniqueness issue for IP1 in two different cases. For source terms with low regularity we obtain \begin{thm}\label{THH2} Let $f\in \mathcal C^1([0,+\infty))$ satisfy $f(0)\neq 0$, $h\in H^1({\mathbb{R}}^3\backslash \overline{D})^3$ and $F$ takes the form \eqref{source1}. Let also $\Omega:=B_R\backslash \overline{D}$ and let $d_j$, $j=1,2$, be the Riemannian distance within $\overline{\Omega}$ induced by the metric $g_j$, where $$g_1[x](v,v)=\frac{\rho|v|^2}{\mu(x)},\quad g_2[x](v,v)=\frac{\rho|v|^2}{2\mu(x)+\lambda(x)},\quad x\in\overline{\Omega},\ v\in{\mathbb{R}}^3.$$ Then, for $$T_1>2\left(\underset{j=1,2}{\max}\ \left(\underset{x\in \overline{\Omega}}{\sup}\ d_j(x,\partial B_R)\right)\right),$$ the boundary data $\{U(x,t):\ (x,t)\in \partial B_{R}\times(0,T_1)\}$ uniquely determine $h$. \end{thm} By considering measurements for all time ($t\in(0,+\infty)$), we can remove the condition $f(0)\neq0$ in the following way. \begin{thm}\label{TH1} Let $f\in H^1_0(0,T)$, $h\in H^1({\mathbb{R}}^3\backslash \overline{D})^3$ and $F$ takes the form \eqref{source1}.
Then the boundary data $\{U(x,t):\ (x,t)\in \partial B_R\times{\mathbb{R}}^+\}$ uniquely determine $h$. \end{thm} For our last inverse problem, we consider the Lam\'e system with constant density and Lam\'e coefficients when the embedded cavity is absent ($D=\emptyset$). We assume here that $F$ takes the form \begin{eqnarray}\label{source2}F(\tilde{x},x_3,t)=g(x_3)\,f(\tilde{x},t),\quad \tilde{x}\in{\mathbb{R}}^2,\ x_3\in{\mathbb{R}},\ t\in(0,+\infty),\end{eqnarray} where the vectorial function $f=(f_1, f_2, 0)^\top$ is compactly supported on $\tilde{B}_R\times [0, T)$ and the scalar function $g$ is supported in $(-R, R)$ for some $R>0$. Here $\tilde{x}=(x_1, x_2)\in {\mathbb{R}}^2$ for $x=(x_1,x_2,x_3)\in {\mathbb{R}}^3$ and $\tilde{B}_R$ denotes the set $\tilde{B}_R:=\{\tilde{x}\in{\mathbb{R}}^2:\ |\tilde{x}|<R\}$. Then our last inverse problem can be stated as follows.\\ \textbf{Inverse Problem 2} (IP2): Assume that $g$ is known in advance. Determine the time- and space-dependent function $f$ from the radiated field $U$ measured on the surface $\Gamma\times(0,T_1)$, with $T_1>0$, $R_1>0$ sufficiently large and $\Gamma\subset\partial B_{R_1}$ an open set with positive Lebesgue measure.\\ In this paper we give a positive answer to (IP2) both in terms of uniqueness and stability. Our uniqueness result can be stated as follows. \begin{thm}\label{Th2:Uniqueness} Assume $D=\emptyset$ and $\rho,\lambda,\mu$ are all constants in ${\mathbb{R}}^3$. Assume that $F$ takes the form \eqref{source2} with $f=(f_1,f_2,0)^\top\in H^1(0,T; L^2({\mathbb{R}}^2))^3$, $g\in L^2(-R, R)$ is non-uniformly vanishing and $$f( \tilde{x},0)= 0,\quad\tilde{x}\in{\mathbb{R}}^2.$$ Let $R_1>\sqrt{2}R$, $T_1>T+\frac{2R_1\sqrt{\rho}}{\sqrt{\mu}}$ and let $\Gamma\subset\partial B_{R_1}$ be an arbitrary open set with positive Lebesgue measure. Then the source $f$ can be uniquely determined by the data $U(x, t)$ measured on $\Gamma\times (0, T_1)$.
\end{thm} By assuming that $\Gamma=\partial B_{R_1}$, we can extend this uniqueness result to a log-type stability estimate. For this purpose, we need a priori information on the regularity of, and an upper bound for, the source terms $f$ and $g$. \begin{thm}\label{t2} Let $R_1>\sqrt{2}R$, $T_1>T+\frac{2R_1\sqrt{\rho}}{\sqrt{\mu}}$, $\rho,\lambda,\mu$ be constant and assume that $D=\emptyset$, $f\in H^3({\mathbb{R}}^2\times{\mathbb{R}})^3\cap H^4(0,T;L^2({\mathbb{R}}^2))^3$ satisfies $$f(\tilde{x},0)=\partial_t f(\tilde{x},0)=\partial_t^2 f(\tilde{x},0)=\partial_t^3 f(\tilde{x},0)=0,\quad \tilde{x}\in{\mathbb{R}}^2.$$ Assume also that $g$ is non-uniformly vanishing with a constant sign \emph{($g\geq0$ or $g\leq0$)} and that there exists $M>0$ such that \begin{equation}\label{t2a} \norm{f}_{H^3({\mathbb{R}}^2\times{\mathbb{R}})^3}+\norm{f}_{H^4(0,T;L^2({\mathbb{R}}^2))^3}\leq M.\end{equation} Then, there exists $C>0$ depending on $M$, $R_1$, $\rho$, $\lambda$, $\mu$, $T_1$, $\norm{g}_{L^1({\mathbb{R}})}$ such that \begin{equation}\label{t2b} ||f||_{L^2((0,T)\times\tilde{B}_{R})}\leq C\left(\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}+\left|\ln\left(\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}\right)\right|^{-1}\right).\end{equation} \end{thm} \subsection{Comments about our results} Let us first remark that, to the best of our knowledge, Theorems \ref{THH2} and \ref{TH1} are the first results on the recovery of source terms stated for the Lam\'e system outside a cavity with variable coefficients. Indeed, it seems that all other known results have been stated on a bounded domain (e.g. \cite{INY1998}) or in the full space ${\mathbb{R}}^3$ (e.g. \cite{BHKY}). We emphasize that Theorems \ref{THH2} and \ref{TH1} are valid even if the embedded cavity is unknown.
In fact, the unique determination of the embedded cavity can be proved following Isakov's arguments \cite[Theorem 5.1]{Isakov} by applying the unique continuation results of \cite{EINT, ET}; see also the proof of Theorem \ref{THH2}. We refer also to \cite{Isakov08} for the determination of other impenetrable scatterers for the wave equation from a single measurement. The main purpose of this paper is the identification of elastic sources in an unbounded domain. Let us observe that in Theorem \ref{THH2}, we manage to restrict our measurements to a finite interval of time. However, like in \cite{INY1998,SU,Ya99,YZ}, we need to impose the additional condition $f(0)\neq0$ for a source term $F$ of the form \eqref{source1}. In contrast to Theorem \ref{THH2} and results using Carleman estimates like \cite{INY1998,SU,Ya99,YZ}, in Theorem \ref{TH1} we state our result without assuming that the source under consideration is non-vanishing at $t=0$. For a source term $F$ of the form \eqref{source1}, such an assumption is equivalent to the requirement that $f(0)\neq0$. From the practical point of view, this means that the results of \cite{INY1998,SU,Ya99,YZ}, as well as Theorem \ref{THH2}, can only be applied to the determination of a source term associated with a phenomenon which has appeared before the beginning of the measurement. This restriction excludes applications where one wants to determine a phenomenon with measurements that start before its appearance. By removing this restriction in Theorem \ref{TH1} we make our result more suitable for applications in that context. The approach of Theorems \ref{THH2} and \ref{TH1} consists of transforming our problem into the recovery of an initial condition. Then, applying unique continuation results and the global Holmgren theorem borrowed from \cite{EINT,ET}, we complete the proofs of Theorems \ref{THH2} and \ref{TH1}.
To the best of our knowledge, even for a bounded domain, Theorems \ref{Th2:Uniqueness} and \ref{t2} seem to be the first results on the unique and stable recovery of a general source term depending on both time and space variables appearing in a hyperbolic equation. Indeed, it seems that only results dealing with the recovery of source terms depending only on the time variable (see \cite{BHKY,RS}) or the space variable (see \cite{BHKY,INY1998,SU,Ya99,YZ}) are available in the mathematical literature, with the exception of \cite{DOT} where the recovery of discrete-in-time sources has been considered. Therefore the results of Theorems \ref{Th2:Uniqueness} and \ref{t2} are not only new for the Lam\'e system but are also new, in this generality, for hyperbolic equations. We mention also that the stability result of Theorem \ref{t2} requires a stability result for unique continuation already considered by \cite{Ba,CK,Ki1} for the recovery of time-dependent coefficients. Note also that, in contrast to Theorems \ref{TH1} and \ref{THH2}, thanks to the strong Huygens principle we can state Theorems \ref{Th2:Uniqueness} and \ref{t2} at finite time. In Corollary \ref{cor1} we prove that the results of Theorem \ref{TH1} can be reformulated in terms of partial recovery of the source term from measurements on a subdomain where the source term or the initial data are known. This situation may for instance occur in several applications where the source under consideration has large support and the data considered in Theorem \ref{TH1} is not accessible. What we prove in Corollary \ref{cor1} is that even in such a context one can expect recovery of partial information about the source term under consideration by measurements located on some subdomain where the source is known. Both Theorems \ref{THH2}, \ref{TH1} and Corollary \ref{cor1} remain valid if the cavity $D$ is absent or if $D$ is a rigid elastic body (i.e., $U$ vanishes on $\partial D$).
All the results of this paper can be applied to the wave equation. Actually, the proof for the wave equation is easier in several aspects, and the particular treatment of the Lam\'e system leads to some difficulties inherent to this type of system (see for instance the proofs of Theorems \ref{Th2:Uniqueness} and \ref{t2}). \subsection{Outline} This paper is organized as follows. In Section 2 we study the inverse problem (IP1). More precisely, we prove Theorems \ref{THH2} and \ref{TH1} as well as Corollary \ref{cor1}. In Section 3 we treat the inverse problem (IP2). We start with the uniqueness result stated in Theorem \ref{Th2:Uniqueness}. Then, we extend this result by proving the stability estimate stated in Theorem \ref{t2}. We also give some results related to solutions of the problem \eqref{llame1}-\eqref{stress1} in the appendix. \section{Inverse source problem with traction-free boundary condition} This section is devoted to the proofs of Theorems \ref{THH2} and \ref{TH1} for our inverse problem (IP1). More precisely, we consider the radiation of an elastic source in an inhomogeneous medium in the exterior of a cavity $D$ (see Figure \ref{fig1}): \begin{eqnarray}\label{lame} &&\rho \partial_{tt}U(x,t)=\mathcal L_{\lambda,\mu} U(x,t)+f(t)\,g(x), \quad x=(x_1,x_2,x_3)\in {\mathbb{R}}^3\backslash{\overline{D}},\; t\in(0,+\infty), \end{eqnarray} where $\mathcal{L}_{\lambda,\mu}U$ stands for the Lam\'e operator given by \eqref{lame2}. \begin{figure}[!ht] \centering \includegraphics[width=0.4\textwidth]{fig1.pdf} \caption{Radiation of a source in an inhomogeneous isotropic elastic medium in the exterior of a cavity. Suppose that the cavity $D$ is known. The inverse problem is to determine the source term from the data measured on $\partial B_R=\{x\in {\mathbb{R}}^3: |x|=R\}$.
} \label{fig1} \end{figure} Together with the governing equation (\ref{lame}), we fix the initial conditions at $t=0$: \begin{eqnarray}\label{II} U(x,0)=0,\quad \partial_tU(x,0)=0,\quad & x\in {\mathbb{R}}^3\backslash{\overline{D}}, \end{eqnarray} and the traction-free boundary condition on $\partial D$ given by \begin{eqnarray}\label{stress} \mathcal T U:=\sigma(U)\nu=0\quad& \mbox{on}\quad \partial D\times {\mathbb{R}}^+, \end{eqnarray} where $\nu=(\nu_1,\nu_2,\nu_3)$ stands for the unit normal direction pointing into the exterior of $D$ and the stress tensor $\sigma (U)$ is given by \begin{eqnarray}\label{E} \sigma(U):=\lambda\; {\rm {div}~} U\,\textbf{I}_{3}+2\mu E(U),\quad E(U):=\frac{1}{2} ((\nabla U)+(\nabla U)^T). \end{eqnarray} Note that $\textbf{I}_{3}$ denotes the 3-by-3 unit matrix and that the conormal derivative $\sigma(U)\nu$ corresponds to the \emph{stress vector} or \emph{surface traction} on $\partial D$. With this notation the Lam\'e operator (\ref{lame2}) can be written as $\mathcal{L}_{\lambda,\mu} U={\rm {div}~} \sigma(U)$. We suppose that $D\subset{\mathbb{R}}^3$ is a bounded domain with $\mathcal{C}^3$-smooth boundary $\partial D$ and with connected exterior ${\mathbb{R}}^3\backslash\overline{D}$. If the cavity $D$ is absent (i.e., $D=\emptyset$) and the background medium is homogeneous and isotropic, it was shown in \cite{BHKY} via the strong Huygens principle and the Fourier transform that the boundary data of Theorem \ref{TH1} can be used to uniquely determine $g$. According to \cite{Ka}, in the context of Theorem \ref{TH1}, the strong Huygens principle is not valid and we cannot even expect integrable local energy decay. For this reason, we use a different approach based on the Laplace transform for Theorem \ref{TH1} and on unique continuation properties for Theorem \ref{THH2}. Let us first consider the proof of Theorem \ref{THH2}, stated for sources non-vanishing at $t=0$.
\textbf{Proof of Theorem \ref{THH2}.} Assuming $U(x,t)=0$ for $|x|=R$ and $t\in (0,T_1)$, we need to prove that $h\equiv 0$. Since ${\rm Supp}(h)\subset B_R$, the wave field $U$ fulfills the homogeneous initial and boundary conditions of the Lam\'e system in the exterior of $B_R$: \begin{equation}\label{dd}\left\{\begin{array}{lll} \rho\partial_{tt}U(x,t)-\mathcal{L}_{\lambda,\mu} U(x,t)=0\quad&\mbox{in}\quad ({\mathbb{R}}^3\backslash\overline{B}_R)\times (0,T_1),\\ U(x,0)=\partial_tU(x,0)=0\quad &\mbox{in}\quad {\mathbb{R}}^3\backslash\overline{B}_R, \\ U(x,t)=0\quad &\mbox{on}\quad \partial B_R\times (0,T_1). \end{array}\right.\end{equation} Applying the elliptic regularity properties of $\mathcal{L}_{\lambda,\mu}$ (see e.g., \cite[Chapters 4 and 10]{Mclean} and \cite[Chapter 5]{Hsiao}) and the results of \cite[Theorem 8.1, Chapter 3]{LM1}, \cite[Theorem 8.2, Chapter 3]{LM1} (see also the beginning of Sections 4.1 and 4.2 for more details), one can prove the unique solvability of the initial boundary value problem \eqref{dd}. Consequently, we deduce that $U\equiv 0$ in $({\mathbb{R}}^3\backslash\overline{B}_R)\times [0,T_1)$. Let us now consider the initial boundary value problem \begin{equation}\label{eeq1}\left\{\begin{array}{ll}\rho\partial_t^2V-\mathcal{L}_{\lambda,\mu} V=0,\quad &(x,t)\in({\mathbb{R}}^3\backslash \overline{D})\times(0,+\infty),\\ V(\cdot,0)=0,\quad \partial_tV(\cdot,0)=h,\quad &\textrm{in}\ {\mathbb{R}}^3\backslash \overline{D},\\ \mathcal T V(x,t)=0,\quad& (x,t)\in\partial D\times(0,+\infty).\end{array}\right.\end{equation} Analogously to the boundary value problem (\ref{dd}), one can prove that this exterior problem admits a unique solution $V\in \mathcal C([0,T];H^2({\mathbb{R}}^3\backslash D)^3)\cap \mathcal C^1([0,T];H^1({\mathbb{R}}^3\backslash D)^3)$.
Moreover, one can easily check that the solution $U$ to (\ref{lame}) is connected with $V$ via Duhamel's principle: \begin{equation}\label{l4a}U(x,t)=\int_0^tf(t-s)V(x,s)ds,\quad t\in(0,+\infty),\;x\in {\mathbb{R}}^3\backslash\overline{D}.\end{equation} Combining \eqref{l4a} with the fact that $U\equiv 0$ in $({\mathbb{R}}^3\backslash\overline{B}_R)\times [0,T_1)$, we deduce that $$\int_0^tf(t-s)V(\cdot,s)_{|{\mathbb{R}}^3\backslash B_R}ds=0,\quad t\in[0,T_1).$$ Using the fact that $f\in\mathcal C^1([0,T])$, we can differentiate this expression with respect to $t$ in order to get $$ f(0)V(\cdot,t)_{|{\mathbb{R}}^3\backslash B_R}+\int_0^tf'(t-s)V(\cdot,s)_{|{\mathbb{R}}^3\backslash B_R}ds=0,\quad t\in[0,T_1).$$ Combining this with the fact that $f(0)\neq 0$, we obtain $$ \norm{V(\cdot,t)}_{L^2({\mathbb{R}}^3\backslash B_R)}\leq \frac{\norm{f}_{\mathcal{C}^1([0,T])}}{|f(0)|}\left(\int_0^t\norm{V(\cdot,s)}_{L^2({\mathbb{R}}^3\backslash B_R)}ds\right),\quad t\in[0,T_1).$$ Therefore, applying the Gronwall inequality, we deduce that \begin{equation}\label{l4b}V(x,t)=0,\quad t\in[0,T_1),\quad x\in {\mathbb{R}}^3\backslash B_R.\end{equation} From this, we deduce that $$V(x,t)=\mathcal T V(x,t)=0,\quad x\in \partial B_R,\ t\in[0,T_1).$$ Combining this with the fact that $T_1>2\left(\underset{j=1,2}{\max}\ \left(\underset{x\in \overline{\Omega}}{\sup}\ d_j(x,\partial B_R)\right)\right)$, we deduce that there exist $$T_2>\left(\underset{j=1,2}{\max}\ \left(\underset{x\in \overline{\Omega}}{\sup}\ d_j(x,\partial B_R)\right)\right),\quad\epsilon>0,$$ such that \begin{equation}\label{l4c}V(x,t)=\mathcal T V(x,t)=0,\quad x\in \partial B_R,\ t\in[0,2T_2+4\epsilon].\end{equation} Combining the unique continuation result of \cite[Theorem 5.5]{EINT} with the global Holmgren theorem stated in \cite[Theorem 1.2]{ET} and repeating arguments similar to \cite[Theorem 3.2]{ET} (see also the properties of the Lam\'e system recalled at
the beginning of Section 3.2 of \cite{ET} as well as \cite[Remark 3.5]{ET}), we deduce that \eqref{l4c} implies $$V(x,t)=0,\quad x\in B_R\backslash \overline{D},\ t\in (T_2,T_2+2\epsilon).$$ Combining this with \eqref{l4b}, we get $$V(x,t)=0,\quad x\in {\mathbb{R}}^3\backslash \overline{D},\ t\in (T_2,T_2+2\epsilon)$$ and, since $V$ vanishes identically on this time interval, we get $$V(x,T_2+\epsilon)=\partial_tV(x,T_2+\epsilon)=0,\quad x\in {\mathbb{R}}^3\backslash \overline{D}.$$ Therefore, $V$ restricted to $({\mathbb{R}}^3\backslash \overline{D})\times (0,T_2+\epsilon)$ solves the initial boundary value problem $$\left\{\begin{array}{ll}\rho\partial_t^2V-\mathcal{L}_{\lambda,\mu} V=0,\quad &(x,t)\in({\mathbb{R}}^3\backslash \overline{D})\times(0,T_2+\epsilon),\\ V(\cdot,T_2+\epsilon)=0,\quad \partial_tV(\cdot,T_2+\epsilon)=0,\quad &\textrm{in}\ {\mathbb{R}}^3\backslash \overline{D},\\ \mathcal T V(x,t)=0,\quad& (x,t)\in\partial D\times(0,T_2+\epsilon).\end{array}\right.$$ The uniqueness of the solution of this problem implies that $$V(x,t)=0,\quad x\in {\mathbb{R}}^3\backslash \overline{D},\ t\in(0,T_2+\epsilon),$$ from which we deduce that $h\equiv 0$.\qed Now let us consider Theorem \ref{TH1}, where we allow $f(0)=0$ but make measurements for all time. \textbf{Proof of Theorem \ref{TH1}.} Assuming $U(x,t)=0$ for $|x|=R$ and $t\in {\mathbb{R}}^+$, we need to prove that $h\equiv 0$. Repeating the arguments used at the beginning of the proof of Theorem \ref{THH2}, one can check that $$U(x,t)=0,\quad t\in[0,+\infty),\quad x\in {\mathbb{R}}^3\backslash B_R.$$ Then, repeating the arguments used in the proof of Theorem \ref{THH2} with some minor modifications, we can prove that \begin{equation}\label{TH1d}\int_0^tf(t-s)V(\cdot,s)_{|{\mathbb{R}}^3\backslash B_R}ds=0,\quad t\in[0,+\infty),\end{equation} with $V$ the solution of \eqref{eeq1}.
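For the reader's convenience, let us sketch why the Duhamel relation \eqref{l4a}, used here and in the proof of Theorem \ref{THH2}, holds; the normalization between the spatial profile of the source in \eqref{lame} and the initial data $h$ of \eqref{eeq1} is the one fixed in the statements of the theorems. Writing \eqref{l4a} in the equivalent form $U(x,t)=\int_0^tf(s)V(x,t-s)\,ds$ and using $V(\cdot,0)=0$ and $\partial_tV(\cdot,0)=h$, we compute $$\partial_tU(x,t)=\int_0^tf(s)\,\partial_tV(x,t-s)\,ds,\qquad \rho\,\partial_t^2U(x,t)=\rho\, f(t)\,h(x)+\int_0^tf(s)\,\rho\,\partial_t^2V(x,t-s)\,ds.$$ Since $\rho\partial_t^2V=\mathcal{L}_{\lambda,\mu}V$, the last integral equals $\mathcal{L}_{\lambda,\mu}U(x,t)$, so that $U$ solves a system of the form \eqref{lame} with spatial profile proportional to $h$, fulfills the homogeneous initial conditions \eqref{II} and, since $\mathcal T$ acts in the space variable only, inherits the traction-free condition \eqref{stress} from $V$.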
Since $f(t)\equiv 0$ for $t<0$, the identity (\ref{TH1d}) can be rewritten as \begin{equation}\label{THd} f(t)*V(\cdot, t)_{|{\mathbb{R}}^3\backslash B_R}=\int_0^\infty f(t-s)V(\cdot,s)_{|{\mathbb{R}}^3\backslash B_R}ds=0,\quad t\in[0,+\infty),\end{equation} where $*$ denotes the convolution in time. For $R_1>R$, we fix $\Omega_1:=B_{R_1}\backslash \overline{B_R}$. Using standard ideas for deriving energy estimates, one can prove that $t\mapsto\norm{V(\cdot,t)}_{H^1(\Omega_1)}$ grows at most polynomially in time (see Proposition \ref{p4} in the Appendix). This allows us to define the Laplace transform of $t\mapsto V(\cdot,t)_{|\Omega_1}$ with respect to the time variable as follows: \begin{eqnarray*} \hat{V}(x,\tau):=\int_0^{+\infty} V(x,t)\, e^{-\tau\,t}\,dt,\quad \tau>0,\quad x\in \Omega_1, \end{eqnarray*} and $z\mapsto\hat{V}(\cdot, z)_{|\Omega_1}$ is a holomorphic function on $\mathbb C_+:=\{z\in\mathbb C:\ \mathfrak R z>0\}$ taking values in $H^1(\Omega_1)^3$. Therefore, applying the Laplace transform to both sides of \eqref{THd}, we get \begin{equation}\label{TH1e}\hat{f}(\tau)\hat{V}(x,\tau)=0,\quad x\in\Omega_1,\ \tau>0.\end{equation} Using the fact that $f\in L^1({\mathbb{R}}_+)$ is supported in $[0,T]$ and does not vanish identically, we deduce that the function $\hat{f}$ is holomorphic in $ {\mathbb{C}}$ and not identically zero. Thus, there exists an interval $I\subset (0,+\infty)$ such that $|\hat{f}(\tau)|>0$ for $\tau\in I$.
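The passage from \eqref{THd} to \eqref{TH1e} rests on the convolution theorem for the Laplace transform which, for functions vanishing on $(-\infty,0)$, follows from Fubini's theorem: $$\int_0^{+\infty}\left(\int_0^tf(t-s)V(x,s)\,ds\right)e^{-\tau t}\,dt=\int_0^{+\infty}V(x,s)\,e^{-\tau s}\left(\int_s^{+\infty}f(t-s)\,e^{-\tau(t-s)}\,dt\right)ds=\hat{f}(\tau)\,\hat{V}(x,\tau),$$ the application of Fubini's theorem being justified by the compact support of $f$ and the at most polynomial growth in time of $t\mapsto\norm{V(\cdot,t)}_{H^1(\Omega_1)}$.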
Combining this with \eqref{TH1e}, we deduce that $$\hat{V}(x,\tau)=0,\quad x\in\Omega_1,\ \tau\in I$$ and, using the fact that $z\mapsto\hat{V}(\cdot, z)_{|\Omega_1}$ is a holomorphic function on $\mathbb C_+:=\{z\in\mathbb C:\ \mathfrak R z>0\}$, we deduce that $$\hat{V}(x,\tau)=0,\quad x\in\Omega_1,\ \tau>0.$$ Then the injectivity of the Laplace transform implies $$V(x,t)=0,\quad x\in\Omega_1,\ t>0.$$ Combining this with the arguments used at the end of the proof of Theorem \ref{THH2}, we deduce that $h\equiv0$. \qed We remark that surface data are utilized in the proof of Theorem \ref{TH1}. As a corollary, we prove that interior volume observations can also be used to extract partial information on the spatial source term. Below we consider again the problem (\ref{lame})-(\ref{stress}), with $f, g$ being given as in Theorem \ref{TH1}. \begin{cor}\label{cor1} Suppose that $f$ is given and let $\Omega$ be a $\mathcal C^3$-smooth connected open set of ${\mathbb{R}}^3$ satisfying $D\subset \Omega\subset B_R$ for some $R>0$. Let $\omega$ be an open set of ${\mathbb{R}}^3$. Then the wave fields $U$ measured on the volume $\omega\times{\mathbb{R}}^+$ and on the surface $\partial\Omega\times{\mathbb{R}}^+$ (see Figure \ref{fig2}) uniquely determine $h|_{\Omega}$. \end{cor} \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{fig2.pdf} \caption{Suppose that the data are collected on $\omega$ and on $\partial\Omega$. The inverse problem is to determine the value of $g$ on $\Omega$. } \label{fig2} \end{figure} \begin{proof} We need to prove that the condition $U(x,t)=0$ for $(x,t)\in(\omega\times{\mathbb{R}}^+)\cup (\partial\Omega\times{\mathbb{R}}^+)$ implies that $h|_{\Omega}\equiv 0$.
Repeating the arguments used in the proof of Theorem \ref{TH1}, for $V$ the solution of \eqref{eeq1}, we have $$V(x,t)=0,\quad x\in\omega\cup\partial\Omega ,\ t>0.$$ In a similar way to the proof of Theorem \ref{THH2}, combining the unique continuation result of \cite[Theorem 5.5]{EINT} with the global Holmgren theorem stated in \cite[Theorem 1.2]{ET}, we deduce that the condition $$V(x,t)=0,\quad x\in\omega ,\ t>0$$ implies that there exists $T_3>0$ such that $$V(x,T_3)=\partial_tV(x,T_3)=0,\quad x\in\Omega.$$ Combining this with the fact that $$V(x,t)=0,\quad x\in\partial\Omega ,\ t>0,$$ we deduce that the restriction of $V$ to $\Omega\times{\mathbb{R}}^+$ solves the problem \begin{equation}\label{c1b}\left\{\begin{array}{ll}\rho\partial_t^2V-\mathcal{L}_{\lambda,\mu} V=0,\quad &(x,t)\in\Omega\times(0,T_3),\\ V(\cdot,T_3)=0,\quad \partial_tV(\cdot,T_3)=0,\quad &\textrm{in}\ \Omega,\\V(x,t)=0,\quad& (x,t)\in\partial \Omega\times(0,T_3),\\ \mathcal TV(x,t)=0,\quad& (x,t)\in\partial D\times(0,T_3).\end{array}\right.\end{equation} Then, the uniqueness of the solution of this problem implies that $$V(x,t)=0,\quad x\in\Omega,\ t\in[0,T_3].$$ In particular, we obtain $h(x)=\partial_tV(x,t)|_{t=0}=0$ in $\Omega$. \end{proof} \begin{rem} Corollary \ref{cor1} shows that the volume observation data on $\omega$ and the surface measurements on $\partial\Omega$ uniquely determine the source term $h$ on $\Omega$. This gives only partial information on $h$.
However, in the special case that ${\rm Supp}(h)\subset \Omega$ (for instance, $\Omega=B_R$), one may deduce from Corollary \ref{cor1} that $h$ can be uniquely determined by the data of $U$ on $\omega\times{\mathbb{R}}^+$.\end{rem} \begin{rem} Assuming that $f(0)\neq0$, in a similar way to Theorem \ref{THH2}, we can restrict the measurements in Corollary \ref{cor1} to a finite time depending on the coefficients $\rho,\lambda,\mu$ and the domains $\Omega$ and $\omega$.\end{rem} \section{Determination of the source term $g(x_3)f(\tilde{x},t)$} In the previous section, we established the uniqueness of recovering a spatial source term in an inhomogeneous background medium with or without embedded obstacles. However, the dependence of the source term on the time and space variables was completely separated. The counterexamples constructed in \cite{BHKY} show that it is impossible to recover general source terms of the form $F(x,t)$ from the boundary observation on $\partial B_R\times(0,\infty)$. This implies that a priori information on the source term is always necessary in proving uniqueness. In this section we restrict our discussions to the inverse problem (IP2) for alternative source terms of the form $g(x_3)f(\tilde{x},t)$, where the vectorial function $f=(f_1, f_2, 0)$ is compactly supported in $\tilde{B}_R\times[0, T)$ and the scalar function $g$ is supported in $(-R, R)$ for some $R>0$. We recall that here $\tilde{x}=(x_1, x_2)\in {\mathbb{R}}^2$ for $x=(x_1,x_2,x_3)\in {\mathbb{R}}^3$, and $\tilde{B}_R:=\{\tilde{x}: |\tilde{x}|\leq R\}$. For simplicity, we assume in this section that $D=\emptyset$ and the background medium is homogeneous with constant Lam\'e coefficients $\lambda$, $\mu$ and a constant density $\rho$.
Below we shall consider the initial value problem \begin{eqnarray}\label{Lame2}\left\{\begin{split} &\rho \partial_{tt}U(x,t)=\mathcal{L}_{\lambda,\mu} U(x,t)+g(x_3)\,f(\tilde{x},t), && x\in {\mathbb{R}}^3,\; t>0,\\ &U(x,0)=\partial_t\,U(x,0)=0, && x\in {\mathbb{R}}^3. \end{split}\right. \end{eqnarray} The function $g(x_3)\,f(\tilde{x},t)$ can be used to model source terms which mainly radiate along the $x_1x_2$-plane, and $g(x_3)$ can be regarded as an approximation of the delta function $\delta(x_3)$ in the $x_3$-direction. Suppose that the function $g$ is known in advance. Our inverse problem in this section is concerned with the recovery of $f$ from $U(x, t)$ measured on $\Gamma\times (0, T_1)$ for some $T_1>0$, $R_1>\sqrt{2}R$ and $\Gamma$ an open subset of $\partial B_{R_1}$. The proofs of the uniqueness and stability results (Theorems \ref{Th2:Uniqueness} and \ref{t2}) will be presented in the subsequent two subsections. Recall that by Lemma \ref{l1} in the appendix, the initial value problem (\ref{Lame2}) admits a unique solution in $\mathcal C^2([0,+\infty);L^2({\mathbb{R}}^3)^3)\cap \mathcal C([0,+\infty);H^2({\mathbb{R}}^3)^3)$ under the assumption $f(\tilde{x},0)=0$. Below we prove the uniqueness with partial boundary data measured over a finite time. \subsection{Proof of Theorem \ref{Th2:Uniqueness}} By the strong Huygens principle, for fixed $\epsilon >0$, it holds that $U(x, t)= 0$ for all $|x|<R_1+\epsilon$ and $t>T_1+2\epsilon$ (see e.g. \cite{BHKY}).
Then, extending $U$ by zero to negative times and applying the Fourier transform in time, we obtain \begin{eqnarray}\label{Lame3} \mathcal{L}_{\lambda,\mu} \hat{U}(x,\omega)+\omega^2\rho \hat{U}(x,\omega)=-g(x_3)\,\hat{f}(\tilde{x},\omega), \quad &x\in {\mathbb{R}}^3,\; \omega\in {\mathbb{R}}, \end{eqnarray} where $$\hat{U}(x, \omega):=\int_{\mathbb{R}} U(x,t)e^{-i\omega t}dt,\qquad \omega\in{\mathbb{R}},$$ satisfies the Kupradze radiation condition as $|x|\rightarrow\infty$ (see \cite{BHKY,Ku}) for any fixed $\omega\in {\mathbb{R}}$. Here $\hat{f}(\tilde{x},\omega)$ denotes the Fourier transform of $f( \tilde{x},t)$ with respect to the time variable. Evidently, we have the boundary condition $\hat{U}(x, \omega)=0$, $x\in\Gamma$, $\omega\in{\mathbb{R}}$. Since, for all $\omega\in{\mathbb{R}}$, the support of the function $x\mapsto\hat{f}(\tilde{x},\omega)\,g(x_3)$ is contained in $B_{R_1}$, by elliptic interior regularity, we deduce that $x\mapsto\hat{U}(x, \omega)$ is analytic with respect to the spatial variable $x$ in a neighborhood of $\partial B_{R_1}$. By analyticity of both the surface $\partial B_{R_1}$ and the function $\hat{U}(\cdot, \omega)$, we get the vanishing of $\hat{U}(x, \omega)$ on the whole boundary $\partial B_{R_1}$ for any $\omega\in{\mathbb{R}}$. In view of the uniqueness of the Dirichlet boundary value problem in the unbounded domain $|x|>R_1$ (see e.g., \cite{BHSY2018}), we get \begin{eqnarray*} \hat{U}(x, \omega)= 0,\quad\quad |x|>R_1,\quad \omega\in {\mathbb{R}}. \end{eqnarray*} Consequently, we have $\mathcal T\hat{U}(x,\omega)=0$ on $\partial B_{R_1}$.
Since the source term $f=(f_1, f_2, 0)^\top$ is compactly supported on $\tilde{B}_R$, by Hodge decomposition the function $\hat{f}$ can be spatially decomposed into the form \begin{eqnarray}\label{decF} \hat{f}(\tilde{x},\omega)=\begin{pmatrix} \nabla_{\tilde{x}}\; \hat{f}_p(\tilde{x},\omega) \\ 0 \end{pmatrix} + \begin{pmatrix}\nabla_{\tilde{x}}^\perp\; \hat{f}_s(\tilde{x},\omega) \\ 0 \end{pmatrix}, \end{eqnarray} where $\hat{f}_p(\cdot,\omega)$ and $\hat{f}_s( \cdot, \omega)$ are scalar functions compactly supported on $\tilde{B}_R$ as well. Here $\nabla_{\tilde{x}}=(\partial_1, \partial_2)^\top$, $\nabla_{\tilde{x}}^\perp=(-\partial_2, \partial_1)^\top$. For $\xi=(\xi_1,\xi_2)\in {\mathbb{R}}^2$ satisfying $$|\xi|> k_s>k_p,\qquad \; k_p^2:=\frac{\omega^2\rho}{\lambda+2\mu},\quad k_s^2:=\frac{\omega^2\rho}{\mu},$$ we introduce the test functions \begin{eqnarray*} &&V_p(x,\omega)=\begin{pmatrix} -i\xi_1 \\ -i \xi_2 \\ \sqrt{|\xi|^2-k_p^2} \end{pmatrix} e^{-i\xi\cdot \tilde{x}+ \sqrt{|\xi|^2-k_p^2}\; x_3},\\ &&V_s(x,\omega)=\begin{pmatrix} i\xi_2 \\ -i \xi_1 \\ 0 \end{pmatrix} e^{-i\xi\cdot \tilde{x}+ \sqrt{|\xi|^2-k_s^2}\; x_3}. \end{eqnarray*} The numbers $k_p$ and $k_s$ denote respectively the compressional and shear wave numbers in the frequency domain. One can easily check that $\nabla_{\tilde{x}}^\perp\cdot V_p\equiv 0$, $\nabla_{\tilde{x}}\cdot V_s\equiv 0$ in ${\mathbb{R}}^3$ and, using the fact that $$\nabla_x\times V_p=0,\quad (\lambda+2\mu)\nabla_x\nabla_x\cdot V_p=-\omega^2\rho V_p,$$ $$\nabla_x\cdot V_s=0,\quad -\mu\nabla_x\times (\nabla_x\times V_s)=-\omega^2\rho V_s,$$ we deduce that $V_\alpha$ ($\alpha=p,s$) satisfies the homogeneous Lam\'e system in the frequency domain \begin{eqnarray*} \mathcal{L}_{\lambda,\mu} V_\alpha(x,\omega)+\omega^2\rho\; V_\alpha (x,\omega)=0\quad \mbox{in}\quad {\mathbb{R}}^3,\qquad \alpha=p,s, \end{eqnarray*} for any fixed $\omega\in {\mathbb{R}}$.
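For instance, for $V_p$ these identities can be checked directly from the expression $\mathcal{L}_{\lambda,\mu}U={\rm {div}~} \sigma(U)$ of the Lam\'e operator. Writing $V_p=\nabla_x\varphi_p$ with $\varphi_p(x):=e^{-i\xi\cdot \tilde{x}+ \sqrt{|\xi|^2-k_p^2}\; x_3}$, we have $E(V_p)=\nabla_x^2\varphi_p$ (the Hessian matrix of $\varphi_p$), so that $\sigma(V_p)=\lambda\Delta\varphi_p\,\textbf{I}_{3}+2\mu\nabla_x^2\varphi_p$ and $$\mathcal{L}_{\lambda,\mu}V_p={\rm {div}~}\sigma(V_p)=(\lambda+2\mu)\nabla_x\Delta\varphi_p=(\lambda+2\mu)\left(-|\xi|^2+|\xi|^2-k_p^2\right)V_p=-\omega^2\rho\,V_p,$$ by the definition of $k_p$. The computation for $V_s$ is analogous: since $\nabla_x\cdot V_s=0$, we get $\mathcal{L}_{\lambda,\mu}V_s=\mu\Delta V_s=-\mu k_s^2V_s=-\omega^2\rho\,V_s$.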
Now, multiplying both sides of the equation (\ref{Lame3}) by $V_p$ and applying Betti's formula, we obtain \begin{eqnarray*} &&\int_{B_{R_1}}\left( \mathcal{L}_{\lambda,\mu} \hat{U}(x,\omega)+\omega^2\rho \hat{U}(x,\omega)\right) \cdot V_p(x,\omega)\,dx \\ &=& \int_{\partial B_{R_1}} \mathcal T \hat{U}(x,\omega)\cdot V_p(x,\omega)- \mathcal T V_p(x,\omega)\cdot \hat{U}(x,\omega)\,ds(x)\\ &=& 0, \end{eqnarray*} where we have used the vanishing of the Cauchy data of $\hat{U}$ on $\partial B_{R_1}$. On the other hand, making use of (\ref{decF}) together with the relation $\nabla_{\tilde{x}}^\perp\cdot V_p\equiv 0$ yields \begin{eqnarray*} 0&=&\int_{B_{R_1}} V_p(x,\omega)\cdot \hat{f}(\tilde{x},\omega)\, g(x_3)\, dx\\ &=& \int_{B_{R_1}} V_p(x,\omega)\cdot\begin{pmatrix} \nabla_{\tilde{x}} \hat{f}_p(\tilde{x},\omega) \\ 0 \end{pmatrix} \;g(x_3) dx\\ &=&\left( \int_{\tilde{B}_R} \begin{pmatrix} -i\xi_1 \\ -i\xi_2 \\ 0\end{pmatrix}e^{-i\xi\cdot \tilde{x}} \cdot\begin{pmatrix} \nabla_{\tilde{x}} \hat{f}_p(\tilde{x},\omega) \\ 0 \end{pmatrix} d\tilde{x}\right)\; \left(\int_{-R}^R g(x_3) e^{\sqrt{|\xi|^2-k_p^2}\; x_3}\,dx_3\right)\\ &=&|\xi|^2\, \left(\int_{\tilde{B}_R} e^{-i\xi\cdot \tilde{x}} \hat{f}_p(\tilde{x},\omega)\,d\tilde{x}\right)\left(\int_{-R}^R g(x_3) e^{\sqrt{|\xi|^2-k_p^2}\; x_3}\,dx_3\right) \end{eqnarray*} for all $\omega\in {\mathbb{R}}$ and $\xi\in {\mathbb{R}}^2$ satisfying $|\xi|>k_s$. Since $g$ is compactly supported and lies in the space $L^1((-R,R))$, the function $$\mathbb C\ni z\mapsto \int_{-R}^R g(x_3) e^{z\; x_3}\,dx_3$$ is holomorphic in $\mathbb C$.
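Here Betti's formula is the elastic counterpart of Green's second identity: for vector fields $u,v\in H^2(B_{R_1})^3$, $$\int_{B_{R_1}}\left(\mathcal{L}_{\lambda,\mu}u\cdot v-u\cdot \mathcal{L}_{\lambda,\mu}v\right)dx=\int_{\partial B_{R_1}}\left(\mathcal T u\cdot v-\mathcal T v\cdot u\right)ds(x),$$ which follows from two integrations by parts, the bilinear form $\int_{B_{R_1}}\left(\lambda\,{\rm {div}~}u\,{\rm {div}~}v+2\mu E(u):E(v)\right)dx$ being symmetric in $u$ and $v$. Applied with $u=\hat{U}(\cdot,\omega)$ and $v=V_p(\cdot,\omega)$, and combined with $\mathcal{L}_{\lambda,\mu}V_p=-\omega^2\rho V_p$, it yields the first identity above.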
Then, using the fact that $g$ does not vanish identically, for every $\omega\in\mathbb R$, we can find a non-empty open interval $I_\omega\subset (k_s,+\infty)$ such that $$\int_{-R}^R g(x_3) e^{\sqrt{|\xi|^2-k_p^2}\; x_3}\,dx_3\neq0,\quad \xi\in{\mathbb{R}}^2,\ |\xi|\in I_\omega.$$ Hence, for every $\omega\in{\mathbb{R}}$, we have \begin{equation}\label{ana} \int_{\tilde{B}_R} e^{-i\xi\cdot \tilde{x}} \hat{f}_p(\tilde{x},\omega)\,d\tilde{x}=0\quad \mbox{for all}\quad \xi\in{\mathbb{R}}^2,\ |\xi|\in I_\omega. \end{equation} This implies that, for $\omega\in{\mathbb{R}}$ and for $\hat{f}_p(\cdot,\omega):\tilde{x}\mapsto \hat{f}_p(\tilde{x},\omega)$, the Fourier transform $\mathcal F_{\tilde{x}}[\hat{f}_p](\xi)$ of $\hat{f}_p(\cdot,\omega)$ with respect to $\tilde{x}\in{\mathbb{R}}^2$ vanishes for $\xi\in\{\eta\in{\mathbb{R}}^2: |\eta|\in I_\omega\}$. On the other hand, since, for all $\omega\in{\mathbb{R}}$, $\hat{f}_p(\cdot,\omega)$ is supported in $\tilde{B}_R$, the function $$\xi\mapsto \int_{{\mathbb{R}}^2} e^{-i\xi\cdot \tilde{x}} \hat{f}_p(\tilde{x},\omega)\,d\tilde{x}=\int_{\tilde{B}_R} e^{-i\xi\cdot \tilde{x}} \hat{f}_p(\tilde{x},\omega)\,d\tilde{x}$$ is real analytic with respect to $\xi\in{\mathbb{R}}^2$. Then, using the fact that the set $\{\xi\in{\mathbb{R}}^2:\ |\xi|\in I_\omega\}$ is an open subset of ${\mathbb{R}}^2$, it follows from \eqref{ana} that $$\int_{\tilde{B}_R} e^{-i\xi\cdot \tilde{x}} \hat{f}_p(\tilde{x},\omega)\,d\tilde{x}=0\quad \mbox{for all}\quad \xi\in{\mathbb{R}}^2.$$ Applying the inverse Fourier transform in $\tilde{x}$, we get $\hat{f}_p(\cdot,\omega)=0$ for all $\omega\in{\mathbb{R}}$. Further, applying the inverse Fourier transform in $t$ yields $f_p(\tilde{x}, t)\equiv0$ for all $\tilde{x}\in \tilde{B}_R$ and $t>0$. The fact that $f_s\equiv 0$ can be verified analogously by multiplying both sides of (\ref{Lame3}) by $V_s$. This finishes the proof of the relation $f\equiv 0$ in $\tilde{B}_R\times(0,T)$.
$\hfill\Box$ To derive a stability estimate of $f$, we need the dynamic data measured over the whole boundary $\partial B_{R_1}$. In contrast with the proof of Theorem \ref{Th2:Uniqueness}, we shall carry out the proof of Theorem \ref{t2} in the time domain without using the Fourier transform in the time variable. \subsection{Proof of Theorem \ref{t2}} As done in (\ref{decF}), we can split $f$ via the Hodge decomposition into the form \begin{eqnarray}\label{dec} f(\tilde{x},t)= \begin{pmatrix} \nabla_{\tilde{x}} f_p(\tilde{x},t) \\ 0 \end{pmatrix} + \begin{pmatrix} \nabla_{\tilde{x}}^\perp f_s(\tilde{x},t) \\ 0 \end{pmatrix}, \end{eqnarray} where $f_p( \cdot,t)$ and $f_s( \cdot,t)$ are scalar functions compactly supported on $\tilde{B}_R$. Fixing $\omega>0$ and $\xi\in\mathbb R^2$ such that \begin{equation}\label{t2c} |\xi|^2>k_p^2:=\frac{\omega^2\rho}{\lambda+2\mu},\end{equation} we introduce the time-dependent test function \begin{eqnarray*} V_p(x,t; \xi, \omega)&=&\begin{pmatrix} -i\xi_1 \\ -i \xi_2 \\ \sqrt{|\xi|^2-k_p^2} \end{pmatrix} e^{-i\xi\cdot \tilde{x}+ \sqrt{|\xi|^2-k_p^2}\; x_3}\,e^{-i\omega t}.\end{eqnarray*} In the same way, for \begin{equation}\label{tt2c}|\xi|^2>k_s^2:=\frac{\omega^2\rho}{\mu},\end{equation} we introduce the function \begin{eqnarray*} V_s(x,t; \xi, \omega)&=&\begin{pmatrix} i\xi_2 \\ -i \xi_1 \\ 0 \end{pmatrix} e^{-i\xi\cdot \tilde{x}+ \sqrt{|\xi|^2-k_s^2}\; x_3}\,e^{-i\omega t}. \end{eqnarray*} Then, in a similar way to the proof of the uniqueness result, one can check that $V_\alpha$ ($\alpha=p,s$) are solutions to the homogeneous elastodynamic equation \begin{eqnarray}\label{eq:V} \rho\frac{\partial^2}{\partial t^2} V_\alpha(x,t; \xi, \omega) -\mathcal{L}_{\lambda,\mu} V_\alpha(x,t; \xi, \omega)=0\qquad\mbox{in}\quad {\mathbb{R}}^3\times {\mathbb{R}}^+ \end{eqnarray} for any fixed $\xi\in {\mathbb{R}}^2$ and $\omega\in {\mathbb{R}}$ satisfying \eqref{t2c} or \eqref{tt2c}.
Moreover, one can easily check that \begin{eqnarray}\label{eq:VP} \nabla_{\tilde{x}}^\perp\cdot V_p(x,t)=\nabla_{\tilde{x}}\cdot V_s(x,t)= 0,\quad (x,t)\in{\mathbb{R}}^3\times{\mathbb{R}}. \end{eqnarray} Therefore, multiplying equation (\ref{Lame2}) by $V_p(x,t;\xi, \omega)$, using (\ref{eq:V}) and applying integration by parts yield \begin{eqnarray*} &&\int_0^{T_1}\int_{B_{R_1}}\left(\rho\frac{\partial ^2}{\partial t^2} U(x,t)- \mathcal{L}_{\lambda,\mu} U(x,t)\right) \cdot V_p(x,t;\xi, \omega)\,dxdt \\ &=&-\int_0^{T_1}\int_{\partial B_{R_1}} \mathcal T U(x,t)\cdot V_p(x,t;\xi, \omega)- \mathcal T V_p(x,t; \xi, \omega)\cdot U(x,t)\,ds(x)\,dt\\ &&+ \int_0^{T_1}\int_{B_{R_1}}\left(\rho\frac{\partial ^2}{\partial t^2} U(x,t)\cdot V_p(x,t;\xi, \omega)- \rho\frac{\partial ^2}{\partial t^2} V_p(x,t;\xi, \omega)\cdot U(x,t)\right) \,dxdt \\ &=&-\int_0^{T_1}\int_{\partial B_{R_1}} \mathcal T U(x,t)\cdot V_p(x,t;\xi, \omega)- \mathcal T V_p(x,t; \xi, \omega)\cdot U(x,t)\,ds(x)\,dt\\ && +\;\rho\int_{B_{R_1}} \frac{\partial U(x,T_1)}{\partial t}\cdot V_p(x,T_1;\xi, \omega)- \frac{\partial V_p(x,T_1;\xi, \omega)}{\partial t}\cdot U(x, T_1)\,dx. \end{eqnarray*} Again recalling the Huygens principle, we know that $U(x,T_1)=\partial_tU(x,T_1) =0$ for all $x\in B_{R_1}$ and $T_1>T+\frac{2R_1\sqrt{\rho}}{\sqrt{\mu}}$. Hence, the integral over $B_{R_1}$ on the right hand side of the previous identity vanishes. Following estimate \eqref{l2a} of Proposition \ref{p3} in the appendix, the traction of $U$ on the boundary $\partial B_{R_1}$ can be bounded by the trace of $U$ itself.
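Let us also note that, in the second equality of the computation above, the volume term at $t=T_1$ arises from two integrations by parts in the time variable, $$\int_0^{T_1}\int_{B_{R_1}}\rho\left(\frac{\partial^2 U}{\partial t^2}\cdot V_p-\frac{\partial^2 V_p}{\partial t^2}\cdot U\right)dx\,dt=\rho\int_{B_{R_1}}\left[\frac{\partial U}{\partial t}\cdot V_p-\frac{\partial V_p}{\partial t}\cdot U\right]_{t=0}^{t=T_1}dx,$$ the contribution at $t=0$ vanishing thanks to the homogeneous initial conditions in (\ref{Lame2}).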
Hence, the left hand side can be bounded by \begin{eqnarray}\nonumber && \left|\int_0^{T_1}\int_{B_{R_1}}\left(\rho\frac{\partial ^2}{\partial t^2} U(x,t)- \mathcal{L}_{\lambda,\mu} U(x,t)\right) \cdot V_p(x,t;\xi, \omega)\,dxdt\right| \\ \nonumber &=&\left| \int_0^{T_1}\int_{\partial B_{R_1}} \mathcal T U(x,t)\cdot V_p(x,t;\xi, \omega)- \mathcal T V_p(x,t; \xi, \omega)\cdot U(x,t)\,ds(x)\,dt \right| \\ \nonumber &\leq& || \mathcal T U||_{L^2((0,T_1)\times \partial B_{R_1})^3} ||V_p||_{L^2((0,T_1)\times \partial B_{R_1})^3}+ ||U||_{L^2((0,T_1)\times \partial B_{R_1})^3} || \mathcal T V_p||_{L^2((0,T_1)\times \partial B_{R_1})^3} \\ \nonumber &\leq& C\;\left(\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1})^3)}||V_p||_{L^2((0,T_1)\times \partial B_{R_1})^3}+ ||U||_{L^2((0,T_1)\times \partial B_{R_1})^3}\; ||V_p||_{L^2(0,T_1;H^2(B_{R_1}))} \right)\\ \nonumber &\leq& C\;\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1})^3)}||V_p||_{L^2(0,T_1;H^2(B_{R_1}))}\\ \label{UP} &\leq& C\;\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1})^3)}(1+|(\xi,\omega)|^3)e^{R\sqrt{|\xi|^2-k_p^2} } \end{eqnarray} for all $|\xi|>k_p$, where $C>0$ depends on $M$, $R_1$, $T_1$, $\rho$, $\lambda$ and $\mu$.
On the other hand, using the governing equation (\ref{Lame2}) together with the relations (\ref{dec}), (\ref{eq:VP}) and using the fact that the sign of $g$ is constant, we obtain a lower bound for the left hand side of (\ref{UP}): \begin{eqnarray*} &&\left|\int_0^{T_1}\int_{B_{R_1}}\left(\rho\frac{\partial ^2}{\partial t^2} U(x,t)- \mathcal{L}_{\lambda,\mu} U(x,t)\right) \cdot V_p(x,t;\xi, \omega)\,dxdt\right| \\ &=& \left|\int_0^{T_1}\int_{B_{R_1}} f(\tilde{x},t)g(x_3) \cdot V_p(x,t;\xi, \omega)\,dx\,dt \right|\\ &=& \left|\int_0^{T_1}\int_{B_{R_1}} \begin{pmatrix} \nabla_{\tilde{x}}f_p(\tilde{x},t)\\ 0\end{pmatrix} g(x_3)\;\cdot V_p(x,t;\xi, \omega)\,dx\,dt\right|\\ &=& |\xi|^2\,\left|\int_0^{T_1}\int_{B_{R_1}} f_p(\tilde{x},t) g(x_3)\,e^{-i\xi\cdot \tilde{x}+ \sqrt{|\xi|^2-k_p^2}\; x_3}\,e^{-i\omega t}\, dx\,dt \right|\\ &=& |\xi|^2\,\left|\left(\int_0^{T_1}\int_{\tilde{B}_R} f_p(\tilde{x},t)e^{-i\xi\cdot \tilde{x}-i\omega t} d\tilde{x}dt\right) \left(\int_{-R}^{R} g(x_3)\,e^{\sqrt{|\xi|^2-k_p^2}\; x_3}\,d x_3\right)\right|\\ &\geq& |\xi|^2\,\left|\int_0^{T_1}\int_{\tilde{B}_R} f_p(\tilde{x},t)e^{-i\xi\cdot \tilde{x}-i\omega t} d\tilde{x}dt\right| \norm{g}_{L^1({\mathbb{R}})} \; e^{-R\sqrt{|\xi|^2-k_p^2}}, \end{eqnarray*} for all $|\xi|>k_p$. Since $f_p$ is supported on $\tilde{B}_R\times(0,T_1)$, the first integral on the right hand side of the last identity is the Fourier transform of $f_p$ with respect to $(\tilde{x},t)$ at the value $(\xi,\omega)$, which we denote by $\hat{f}_p(\xi,\omega)$. Combining the previous two relations we obtain \begin{equation}\label{es} |\hat{f}_p(\xi,\omega)|\leq C\,\frac{(1+|(\xi,\omega)|^3)\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1})^3)}\,e^{2R\sqrt{|\xi|^2-k_p^2}} }{|\xi|^2\; \norm{g}_{L^1({\mathbb{R}})}} \end{equation} for all $|\xi|>k_p(\omega)$. We note that (\ref{es}) gives the estimate of $\hat{f}_p$ over the cone $\{(\xi,\omega)\in{\mathbb{R}}^3: |\xi|^2>\omega^2 \rho/(\lambda+2\mu)\}$.
In order to derive from (\ref{es}) a stability estimate of $\hat{f}_p$ on $B_r$ for a large $r>0$, we will use a stability result for analytic continuation, following the arguments presented in \cite{CK,Ki1}. Below we state a stability estimate for analytic continuation problems; see \cite[Theorem 4]{AEWZ} (see also \cite{Ma,Na}, where similar results were established). \begin{prop}\label{p2} Let $s>0$ and assume that $g:\ B_{2s}\subset {\mathbb{R}}^{3}\to \mathbb C$ is a real analytic function satisfying \[\norm{\nabla^{\beta }g}_{L^\infty(B_{2s})}\leq \frac{N\,\beta\,!}{(s \tau)^{\abs{\beta}}},\qquad \beta=(\beta_1,\beta_2,\beta_3)\in\mathbb N^{3},\] for some $N>0$ and $0<\tau\leq1$. Further let $E\subset B_{s/2}$ be a measurable set with strictly positive Lebesgue measure. Then, \[\norm{ g}_{L^\infty(B_s)}\leq CN^{(1-b)}\norm{ g}_{L^\infty(E)}^{b},\] where $b\in(0,1)$ and $C>0$ depend on $\tau$, $\abs{E}$ and $s$.\end{prop} Following \cite{Ki1}, we introduce the function \[H_r(\xi,\omega):=\hat{f}_p(r(\xi,\omega))=\int_{{\mathbb{R}}^{3}}f_p(\tilde{x},t)e^{-ir(\omega t+\xi\cdot \tilde{x})}d\tilde{x}dt\] for some $r>1$ and $|(\xi,\omega)|\leq 2s$. In a similar way to \cite{Ki1}, we fix $s=[\max(T_1,2R)]^{-1}+1$, choose $N=Ce^{3r}$, with $C$ some constant independent of $r$, and take $\tau=\frac{[\max(T_1,2R)]^{-1}}{s}=(s-1)/s$.
Then we obtain \begin{equation}\label{t2d} \norm{\partial_\omega^n\partial^\beta_\xi H_r}_{L^\infty(B_{2s})}\leq C\frac{e^{3r}\beta!n!}{([\max(T_1,2R)]^{-1})^{\abs{\beta}+n}}=\frac{N\beta!n!}{(s \tau)^{\abs{\beta}+n}},\quad n\in\mathbb N,\ \beta\in\mathbb N^{2}.\end{equation} Moreover, fixing $c:=\frac{\rho}{\lambda+2\mu}$, $d:= \frac{s}{2\sqrt{1+c^{-1}}}$ and $a_r\in\left(0,\frac{d}{\sqrt{c}}\right)$, we define $$E_r:=\left\{(\xi,\omega)\in\tilde{B}_d\times\left[-a_r,\frac{d}{\sqrt{c}}\right]:\ \max(r^{-2},\sqrt{c}|\omega|)<|\xi| \right\}.$$ It is easy to check that $E_r$ is a subset of $B_{s/2}$ in ${\mathbb{R}}^3$, and it is also a subset of the cone $\{(\xi,\omega)\in{\mathbb{R}}^3: |\xi|^2>\omega^2 \rho/(\lambda+2\mu)\}$. We remark that $|E_r|=\kappa_r(-a_r)$, where $$\kappa_r:y\mapsto \int_y^{\frac{d}{\sqrt{c}}}\int_{\max(r^{-2},\sqrt{c}|\omega|)<|\xi|<d}d\xi d\omega.$$ Note that $\kappa_r\left(-\frac{d}{\sqrt{c}}\right)=2\kappa_r(0)$ and one can check that $$\kappa_r(0)=\frac{2\pi}{3\sqrt{c}}\left(d^3-r^{-6}\right).$$ Thus, there exists $r_0>1$ depending only on $R$, $\rho$, $\lambda$, $\mu$, $T_1$, such that $$ \frac{\pi d^3}{2\sqrt{c}}<\kappa_r(0)< \frac{5\pi d^3}{6\sqrt{c}},\quad r>r_0.$$ Therefore, we have $$\kappa_r\left(-\frac{d}{\sqrt{c}}\right)=2\kappa_r(0)>\frac{\pi d^3}{\sqrt{c}}>\frac{5\pi d^3}{6\sqrt{c}}>\kappa_r(0)$$ and, from the continuity of the map $\kappa_r$, we deduce that we can choose $a_r$ in such a way that $$|E_r|=\kappa_r(-a_r)=\frac{\pi d^3}{\sqrt{c}},\quad r>r_0.$$ This implies that, with this choice of $a_r$, the volume $|E_r|$ depends only on $R$, $\rho$, $\lambda$, $\mu$ and $T_1$. Consequently, combining \eqref{t2d} with Proposition \ref{p2}, we deduce that $$|\hat{f}_p(r(\xi,\omega))|=|H_r(\xi,\omega)|\leq Ce^{3(1-b)r}\left(\norm{ H_r}_{L^\infty(E_r)}\right)^{b},\quad |(\xi,\omega)|<s,\ r>r_0,$$ where $C>0$, $b\in(0,1)$ depend only on $R$, $\rho$, $\lambda$, $\mu$ and $T_1$. 
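As a quick numerical sanity check of the volume computation (illustrative only, not part of the proof): the inner $\xi$-integral in $\kappa_r$ is the area of an annulus, and splitting the $\omega$-integral at $\omega=r^{-2}/\sqrt{c}$ gives the closed form $\kappa_r(0)=\frac{2\pi}{3\sqrt{c}}(d^3-r^{-6})$, which indeed lies strictly between $\frac{\pi d^3}{2\sqrt{c}}$ and $\frac{5\pi d^3}{6\sqrt{c}}$ once $r$ is large. The parameter values and helper names in the sketch below are arbitrary choices of ours.

```python
import math

def kappa0_numeric(c, d, r, n=100_000):
    """Midpoint-rule evaluation of
    kappa_r(0) = int_0^{d/sqrt(c)} |{xi in R^2 : max(r^-2, sqrt(c)*omega) < |xi| < d}| d(omega),
    where the inner set is an annulus of area pi*(d^2 - max(...)^2)."""
    top = d / math.sqrt(c)
    h = top / n
    total = 0.0
    for i in range(n):
        omega = (i + 0.5) * h
        m = max(r ** -2, math.sqrt(c) * omega)
        total += math.pi * max(d * d - m * m, 0.0) * h
    return total

def kappa0_closed(c, d, r):
    # closed form obtained by splitting the omega-integral at omega = r^-2/sqrt(c)
    return 2.0 * math.pi / (3.0 * math.sqrt(c)) * (d ** 3 - r ** -6)

# illustrative (arbitrary) parameters; we only need r^-2 < d
c, d, r = 2.0, 0.3, 5.0
num = kappa0_numeric(c, d, r)
closed = kappa0_closed(c, d, r)
lower = math.pi * d ** 3 / (2.0 * math.sqrt(c))
upper = 5.0 * math.pi * d ** 3 / (6.0 * math.sqrt(c))
print(num, closed, lower < num < upper)
```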
In addition, applying \eqref{es}, we get $$\norm{ H_r}_{L^\infty(E_r)}\leq Cr^4 e^{c_1 r}\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3},$$ where $C$ and $c_1$ depend only on $R$, $\rho$, $\lambda$, $\mu$ and $T_1$. Therefore, we can find $C$, $c$ depending only on $R$, $\rho$, $\lambda$, $\mu$ and $T_1$ such that $$|\hat{f}_p(\xi,\omega)|\leq Ce^{cr}\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3},\quad |(\xi,\omega)|<r,\ r>sr_0.$$ It follows that \begin{equation}\label{t2e}\int_{B_r}|\hat{f}_p(\xi,\omega)|^2d\xi d\omega\leq Ce^{cr}\norm{U}^2_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3},\quad r>sr_0,\end{equation} after possibly enlarging the constants $C$ and $c$. On the other hand, using \eqref{t2a} and the fact that $\Delta_{\tilde{x}}f_p=\nabla_{\tilde{x}}\cdot f$, we deduce that $f_p\in H^2({\mathbb{R}}^3)$ and $\norm{f_p}_{H^2({\mathbb{R}}^3)}\leq CM$, with $C$ depending only on $T$ and $R$. Thus, we find $$\begin{aligned}\int_{|(\xi,\omega)|>r}|\hat{f}_p(\xi,\omega)|^2d\xi d\omega &\leq r^{-4}\int_{|(\xi,\omega)|>r}(1+|(\xi,\omega)|^4)|\hat{f}_p(\xi,\omega)|^2d\xi d\omega\\ \ &\leq r^{-4}\norm{f_p}^2_{H^2({\mathbb{R}}^3)}\leq C^2r^{-4}M^2.\end{aligned}$$ Combining this with \eqref{t2e}, we find $$\begin{aligned}\int_{{\mathbb{R}}^3}|\hat{f}_p(\xi,\omega)|^2d\xi d\omega&=\int_{B_r}|\hat{f}_p(\xi,\omega)|^2d\xi d\omega+\int_{|(\xi,\omega)|>r}|\hat{f}_p(\xi,\omega)|^2d\xi d\omega \\ \ &\leq C\left(e^{cr}\norm{U}^2_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}+r^{-4}\right).\end{aligned}$$ By the Plancherel formula, it holds that \[ \norm{f_p}_{L^2((0,T_1)\times \tilde{B}_{R_1})}\leq C\,\left(e^{cr}\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}+r^{-2}\right),\quad r>sr_0. 
\] Now, choosing $r=c^{-1}\left|\ln\left(\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}\right)\right|$, we get for $\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}$ sufficiently small that \begin{equation}\label{t2f} \norm{f_p}_{L^2((0,T_1)\times \tilde{B}_{R_1})}\leq C\,\left(\norm{U}^2_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}+\left|\ln\left(\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}\right)\right|^{-2}\right),\end{equation} which can be obtained by applying the classical optimization argument (see for instance the end of the proof of \cite[Theorem 1]{Ki1}). This gives the estimate of $f_p$ by our measurement data taken on $\partial B_{R_1}$. Using similar arguments, we can prove \begin{equation}\label{t2g} \norm{f_s}_{L^2((0,T)\times \tilde{B}_R)}\leq C\left(\norm{U}^2_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}+\left|\ln\left(\norm{U}_{H^3(0,T_1;H^{3/2}(\partial B_{R_1}))^3}\right)\right|^{-2}\right).\end{equation} On the other hand, by interpolation and the upper bound (\ref{t2a}) we have $$\begin{aligned}\norm{f}_{L^2((0,T_1)\times \tilde{B}_R)}&\leq \norm{\nabla_{\tilde{x}}f_p}_{L^2((0,T_1)\times \tilde{B}_R)}+\norm{\nabla_{\tilde{x}}^\perp f_s}_{L^2((0,T_1)\times \tilde{B}_R)}\\ \ &\leq C(\norm{f_p}_{H^1((0,T_1)\times \tilde{B}_R)}+\norm{ f_s}_{H^1((0,T_1)\times \tilde{B}_R)})\\ \ &\leq C(\norm{f_p}_{H^2((0,T_1)\times \tilde{B}_R)}^{\frac{1}{2}}\norm{f_p}_{L^2((0,T_1)\times \tilde{B}_R)}^{\frac{1}{2}}+\norm{f_s}_{H^2((0,T_1)\times \tilde{B}_R)}^{\frac{1}{2}}\norm{f_s}_{L^2((0,T_1)\times \tilde{B}_R)}^{\frac{1}{2}})\\ \ &\leq C(\norm{f_p}_{L^2((0,T_1)\times \tilde{B}_R)}^{\frac{1}{2}}+\norm{f_s}_{L^2((0,T_1)\times \tilde{B}_R)}^{\frac{1}{2}}),\end{aligned}$$ with $C$ depending on $M$, $T_1$ and $R$. Then, combining this with \eqref{t2f}-\eqref{t2g}, we obtain \eqref{t2b}. 
\begin{rem}\label{remark1} The uniqueness and stability results presented in Theorems \ref{Th2:Uniqueness} and \ref{t2} carry over to the scalar inhomogeneous wave equation of the form \begin{eqnarray*} \frac{1}{c^2}\partial_{tt}U(x,t)=\Delta U(x,t)+f(\tilde{x},t)g(x_3), && (x,t)\in {\mathbb{R}}^3\times (0,\infty),\\ U(x,0)=U_t(x,0)=0, && x\in {\mathbb{R}}^3, \end{eqnarray*} where both $f$ and $g$ are compactly supported scalar functions and $c$ is a constant. If the wave speed $c$ and the function $g(x_3)$ are known, one can determine the source term $f(\tilde{x},t)$ from partial boundary data. In particular, $f$ is allowed to be a moving source with the orbit lying on the $Ox_1x_2$-plane. In the frequency domain, the above wave equation gives rise to an inverse problem of recovering the wave-number-dependent source term $\hat{f}(\tilde{x}, k)$ from the multi-frequency boundary observation data of the inhomogeneous Helmholtz equation \begin{eqnarray*} \Delta u(x, k)+k^2/c^2\; u(x,k)=\hat{f}(\tilde{x},k)g(x_3). \end{eqnarray*} Progress along these directions will be reported in our forthcoming publications. \end{rem} \section{Appendix} \subsection{Well-posedness result and estimation of surface traction} In this subsection, we consider the inhomogeneous Lam\'e system \begin{eqnarray}\label{eq1} \left\{\begin{array}{ll}\rho\partial_t^2U- \mathcal{L}_{\lambda,\mu} U=F(x,t),\quad &(x,t)\in{\mathbb{R}}^{3}\times(0,+\infty),\\ U(\cdot,0)= \partial_tU(\cdot,0)=0,\quad & x\in {\mathbb{R}}^3,\end{array}\right.\end{eqnarray} where the operator $\mathcal{L}_{\lambda,\mu}$ is given by (\ref{lame}). We assume that supp$(F)\subset B_R\times[0,T)$, with $B_R:=\{x\in{\mathbb{R}}^3:\ |x|<R\}$. It is well-known that the operator $\mathcal{L}_{\lambda,\mu}$ is an elliptic operator and the standard elliptic regularity holds; see e.g., \cite[Chapters 4 and 10]{Mclean} and \cite[Chapter 5]{Hsiao}. 
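In the case of constant Lam\'e coefficients, the operator can be sanity-checked numerically: applying central finite differences to $\nabla\cdot\left(\lambda({\rm div}\,U)I+2\mu E(U)\right)$ and to $\mu\Delta U+(\lambda+\mu)\nabla(\nabla\cdot U)$ for a smooth test field gives matching values. The test field, evaluation point and Lam\'e constants in the sketch below are arbitrary illustrations of ours, not taken from the paper.

```python
import math

LAM, MU = 1.3, 0.7   # arbitrary Lame constants (illustrative only)
H = 1e-4             # central finite-difference step

def U(x):
    # smooth arbitrary test displacement field
    x1, x2, x3 = x
    return (math.sin(x1) * math.cos(x2),
            math.sin(x2) * math.cos(x3),
            math.sin(x3) * math.cos(x1))

def diff(f, x, j, h=H):
    """Central difference of a scalar function f in direction j."""
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

def comp(i):
    return lambda x: U(x)[i]

def div_U(x):
    return sum(diff(comp(k), x, k) for k in range(3))

def sigma(i, j):
    """Stress component sigma_ij = lambda*(div U)*delta_ij + 2*mu*E_ij(U)."""
    def s(x):
        e_ij = 0.5 * (diff(comp(i), x, j) + diff(comp(j), x, i))
        return LAM * div_U(x) * (i == j) + 2.0 * MU * e_ij
    return s

x0 = [0.3, -0.7, 1.1]
# left-hand side: (div sigma)_i, i.e. the Lame operator applied to U
lhs = [sum(diff(sigma(i, j), x0, j) for j in range(3)) for i in range(3)]
# right-hand side: mu*Laplacian(U)_i + (lambda+mu)*grad(div U)_i
rhs = [MU * sum(diff(lambda x, i=i, j=j: diff(comp(i), x, j), x0, j) for j in range(3))
       + (LAM + MU) * diff(div_U, x0, i) for i in range(3)]
print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # small (finite-difference error only)
```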
The quadratic form corresponding to $\mathcal{L}_{\lambda,\mu}$ is given by \begin{eqnarray*} \mathcal{E}(U, V):=\lambda\, ({\rm {div}~} U)({\rm {div}~} V)+2\mu E(U):E(V), \end{eqnarray*} where the strain tensor $E$ is defined via (\ref{E}), with the notation $A:B:=\sum_{i,j=1}^3 a_{ij} b_{ij}$ for $A=(a_{ij})_{i,j=1}^3$, $B=(b_{ij})_{i,j=1}^3$. Hence, for a bounded Lipschitz domain $D\subset {\mathbb{R}}^3$ there holds the relation (see e.g., \cite[Lemma 3]{AAIY}) \begin{eqnarray}\label{Betti} -\int_{D}\mathcal{L}_{\lambda,\mu}U \cdot \overline{V} \, dx= \int_{D}\mathcal{E}(U,\overline{V}) \, dx-\int_{\partial D}\overline{V} \cdot \mathcal T U \, ds \end{eqnarray} for all $U, V\in H^2(D)^3$. By the well-known Korn inequality (see e.g. \cite[Theorem 10.2]{Mclean}, \cite[Chapter 3]{DuvautLions}), it holds that \begin{equation}\label{qad} \int_{{\mathbb{R}}^3}\mathcal{E}(U,\overline{U})\,dx+c_1\, \norm{U}^2_{L^2({\mathbb{R}}^3)^3}\geq c_2\norm{U}^2_{H^1({\mathbb{R}}^3)^3},\quad U\in H^1({\mathbb{R}}^3)^3 \end{equation} for some constants $c_1,c_2>0$. In the particular case of constant Lam\'e coefficients, we have \begin{eqnarray*} \mathcal{L}_{\lambda,\mu} U=\mu\Delta U +(\lambda+\mu) \nabla (\nabla\cdot U), \end{eqnarray*} and \begin{eqnarray*} \mathcal{E}(U, V) =2\mu\sum_{j,k=1}^3\partial_k U_j\,\, \partial_k V_j+\lambda\, ({\rm {div}~} U)({\rm {div}~} V)-\mu\,{\rm {curl}~} U\cdot{\rm {curl}~} V. \end{eqnarray*} In this case, the surface traction simplifies to \begin{eqnarray*} \mathcal T U= 2 \mu \, \partial_{\nu\,} U + \lambda({\rm {div}~} U)\,\nu\, +\mu\, \nu\times {\rm {curl}~}\,U\qquad\mbox{on}\quad\partial D. \end{eqnarray*} We refer to the monograph \cite{Ku} for comprehensive studies on the Lam\'e system. Below we state a well-posedness result for the elastodynamic system in unbounded domains, obtained by applying the standard arguments of \cite[Chapter 8]{LM1}. \begin{lem}\label{l1} Let $F\in H^1({\mathbb{R}}_+;L^2({\mathbb{R}}^3))^3$. 
Then problem \eqref{eq1} admits a unique solution $$U\in \mathcal C^1([0,+\infty);L^2({\mathbb{R}}^3))^3\cap \mathcal C([0,+\infty);H^1({\mathbb{R}}^3))^3.$$ Under the additional condition $F(\cdot,0)=0$, the unique solution satisfies $$U\in \mathcal C^2([0,+\infty);L^2({\mathbb{R}}^3))^3\cap \mathcal C([0,+\infty);H^2({\mathbb{R}}^3))^3.$$ \end{lem} \begin{proof} Without loss of generality, we assume that $\rho=1$. We define on $L^2({\mathbb{R}}^3)^3$ the sesquilinear form $a$ with domain $D(a):=H^1({\mathbb{R}}^3)^3$ given by $$a(U_1,U_2):=\int_{{\mathbb{R}}^3}\mathcal{E}(U_1,U_2)dx.$$ In view of \eqref{Betti}, by density, we find $$\left\langle -\mathcal{L}_{\lambda,\mu}U_1,U_2 \right\rangle_{H^{-1}({\mathbb{R}}^3)^3,H^1({\mathbb{R}}^3)^3}=a(U_1,U_2),\quad U_1,U_2\in H^1({\mathbb{R}}^3)^3.$$ Therefore, in view of \eqref{qad}, fixing $H=L^2({\mathbb{R}}^3)^3$, $V=H^1({\mathbb{R}}^3)^3$ and applying \cite[Theorem 8.1, Chapter 3]{LM1} and \cite[Theorem 8.2, Chapter 3]{LM1}, we deduce that \eqref{eq1} admits a unique solution $U\in \mathcal C^1([0,+\infty);L^2({\mathbb{R}}^3))^3\cap \mathcal C([0,+\infty);H^1({\mathbb{R}}^3))^3$. It remains to prove that $U\in \mathcal C^2([0,+\infty);L^2({\mathbb{R}}^3))^3\cap \mathcal C([0,+\infty);H^2({\mathbb{R}}^3))^3$. For this purpose, we consider $V=\partial_tU$ and, using the fact that $F(\cdot,0)=0$, we deduce that $V$ solves \begin{equation}\label{eq2}\left\{\begin{array}{ll}\partial_t^2V-\mathcal L_{\lambda,\mu} V=\partial_tF(x,t),\quad &(x,t)\in{\mathbb{R}}^{3}\times(0,+\infty),\\ V(\cdot,0)= \partial_tV(\cdot,0)=0,\quad &x\in {\mathbb{R}}^3.\end{array}\right.\end{equation} Using the fact that $\partial_tF\in L^2((0,+\infty)\times{\mathbb{R}}^3)^3$ and applying the above arguments, we deduce that $V\in \mathcal C^1([0,+\infty);L^2({\mathbb{R}}^3))^3$ and that $U\in \mathcal C^2([0,+\infty);L^2({\mathbb{R}}^3))^3$. 
Therefore, for any $t\in[0,+\infty)$, $U$ solves the elliptic equation \begin{equation}\label{eq3}-\mathcal L_{\lambda,\mu} U(x,t)=-\partial_t^2U(x,t)+F(x,t),\quad x\in{\mathbb{R}}^{3}.\end{equation} Since $\partial_t^2U(\cdot,t)\in L^2({\mathbb{R}}^3)^3$, from the elliptic regularity of the operator $-\mathcal L_{\lambda,\mu}$ (see e.g. \cite[Theorem 5.8.1]{Hsiao}), we deduce that $U(\cdot,t)\in H^2({\mathbb{R}}^3)^3$. Moreover, for any $t_1,t_2\in [0,+\infty)$, we have $$\begin{aligned}&\norm{U(\cdot,t_1)-U(\cdot,t_2)}_{H^2({\mathbb{R}}^3)^3}\\ &\leq C(\norm{\mathcal L_{\lambda,\mu} (U(\cdot,t_1)-U(\cdot,t_2))}_{L^2({\mathbb{R}}^3)^3}+\norm{U(\cdot,t_1)-U(\cdot,t_2)}_{L^2({\mathbb{R}}^3)^3})\\ \ &\leq C\left(\norm{\partial_t^2U(\cdot,t_1)-\partial_t^2U(\cdot,t_2)}_{L^2({\mathbb{R}}^3)^3}+\norm{U(\cdot,t_1)-U(\cdot,t_2)}_{L^2({\mathbb{R}}^3)^3}+\norm{F(\cdot,t_1)-F(\cdot,t_2)}_{L^2({\mathbb{R}}^3)^3}\right).\end{aligned}$$ Therefore, using the fact that $U\in \mathcal C^2([0,+\infty);L^2({\mathbb{R}}^3))^3$ and the fact that $F$, extended by $0$ to ${\mathbb{R}}^3\times{\mathbb{R}}$, lies in $ H^1({\mathbb{R}};L^2({\mathbb{R}}^3))^3\subset \mathcal C({\mathbb{R}};L^2({\mathbb{R}}^3))^3$, we deduce that $U\in \mathcal C([0,+\infty);H^2({\mathbb{R}}^3))^3$. \end{proof} Using this result for $F\in H^1(0,T;L^2({\mathbb{R}}^3))^3$, we know that the traces satisfy $$U_{|\partial B_R\times[0,+\infty)},\; \mathcal T U_{|\partial B_R\times[0,+\infty)}\in \mathcal C([0,+\infty);L^2(\partial B_R))^3.$$ With additional smoothness assumptions we can also estimate $\mathcal T U_{|\partial B_R\times[0,+\infty)}$ by $U_{|\partial B_R\times[0,+\infty)}$. The main result of this subsection can be stated as follows. \begin{prop}\label{p3} Let $T_2>0$ and let $F\in H^4({\mathbb{R}}_+;L^2({\mathbb{R}}^3))^3$ be such that $F(\cdot,0)=\partial_t F(\cdot,0)=\partial_t^2 F(\cdot,0)=\partial_t^3 F(\cdot,0)=0$. 
Then problem \eqref{eq1} admits a unique solution $U\in \mathcal C^4([0,+\infty);L^2({\mathbb{R}}^3))^3\cap \mathcal C^3([0,+\infty);H^2({\mathbb{R}}^3))^3$ satisfying the estimate \begin{equation}\label{l2a}\norm{ \mathcal T U}_{L^2(\partial B_R\times(0,T_2))^3}\leq C\norm{ U}_{H^3(0,T_2;H^{\frac{3}{2}}(\partial B_R))^3},\end{equation} with $C$ depending on $\lambda$, $\rho$, $\mu$, $T_2$ and $R$.\end{prop} \begin{proof} Note first that $W=\partial_t^3U$ solves \begin{equation}\label{eq4}\left\{\begin{array}{ll}\rho\partial_t^2W-\mathcal L_{\lambda,\mu} W=\partial_t^3F(x,t), & (x,t)\in{\mathbb{R}}^3\times(0,+\infty),\\ W(\cdot,0)= \partial_tW(\cdot,0)=0,\quad & x\in {\mathbb{R}}^3.\end{array}\right.\end{equation} Using the fact that $\partial_t^3 F\in H^1({\mathbb{R}}_+;L^2({\mathbb{R}}^3))^3$ with $\partial_t^3 F(\cdot,0)=0$, we can apply Lemma \ref{l1} to deduce that $W\in \mathcal C^2([0,+\infty);L^2({\mathbb{R}}^3))^3\cap \mathcal C([0,+\infty);H^2({\mathbb{R}}^3))^3$. Thus, we have $U\in \mathcal C^4([0,+\infty);L^2({\mathbb{R}}^3))^3\cap \mathcal C^3([0,+\infty);H^2({\mathbb{R}}^3))^3$. This implies that $g:=U_{|\partial B_R\times[0,+\infty)}\in\mathcal C^3([0,+\infty);H^{\frac{3}{2}}(\partial B_R))^3$. 
Hence, the restriction of $U$ to $({\mathbb{R}}^3\backslash B_R)\times (0,T_2)$ solves the initial boundary value problem \begin{equation}\label{eq5}\left\{\begin{array}{ll}\rho\partial_t^2U-\mathcal L_{\lambda,\mu} U=0,\quad &(x,t)\in({\mathbb{R}}^3\backslash B_R)\times(0,T_2),\\ U(\cdot,0)= \partial_tU(\cdot,0)=0,\quad &x\in {\mathbb{R}}^3\backslash B_R,\\ U=g,\quad&(x,t)\in \partial B_R\times(0,T_2).\end{array}\right.\end{equation} Using the fact that $g(\cdot,0)=\partial_tg(\cdot,0)=\partial_t^2g(\cdot,0)=0$, from a classical lifting result, one can find $G\in \mathcal C^3([0,+\infty);H^2({\mathbb{R}}^3\backslash B_R))^3$ such that $G_{|\partial B_R\times(0,T_2)}=g$, $G(\cdot,0)=\partial_tG(\cdot,0)=\partial_t^2G(\cdot,0)=0$ and \begin{equation}\label{l2b}\norm{G}_{H^3(0,T_2;H^2({\mathbb{R}}^3))^3}\leq C\norm{g}_{H^3(0,T_2;H^{\frac{3}{2}}(\partial B_R))^3},\end{equation} with $C$ depending only on $T_2$ and $R$. Therefore, we can split $U$ as $U=V+G$ on $({\mathbb{R}}^3\backslash B_R)\times(0,T_2)$, with $V$ the solution of $$\left\{\begin{array}{ll}\rho\partial_t^2V-\mathcal L_{\lambda,\mu} V=-(\rho\partial_t^2G-\mathcal L_{\lambda,\mu}G)=:H,\quad &(x,t)\in({\mathbb{R}}^3\backslash B_R)\times(0,T_2),\\ V(\cdot,0)= \partial_tV(\cdot,0)=0,\quad & x\in {\mathbb{R}}^3\backslash B_R,\\ V=0, &(x,t)\in \partial B_R\times (0,T_2).\end{array}\right.$$ Using the fact that $H\in H^1(0,T_2;L^2( {\mathbb{R}}^3\backslash B_R))^3$ and $H(\cdot,0)=0$, in a similar way to Lemma \ref{l1} we can prove that $V\in \mathcal C^2([0,T_2];L^2({\mathbb{R}}^3\backslash B_R))^3\cap \mathcal C([0,T_2];H^2({\mathbb{R}}^3\backslash B_R))^3$, with $$\norm{V}_{L^2(0,T_2;H^2({\mathbb{R}}^3\backslash B_R))^3}\leq C\norm{H}_{H^1(0,T_2;L^2({\mathbb{R}}^3\backslash B_R))^3}\leq C\norm{G}_{H^3(0,T_2;H^2({\mathbb{R}}^3))^3},$$ with $C$ depending on $\lambda$, $\rho$, $\mu$, $T_2$ and $R$. 
Combining this with \eqref{l2b}, we deduce that $$\norm{U}_{L^2(0,T_2;H^2({\mathbb{R}}^3\backslash B_R))^3}\leq C\norm{g}_{H^3(0,T_2;H^{\frac{3}{2}}(\partial B_R))^3}$$ and, using the continuity of the trace map, we obtain $$\norm{\mathcal T U}_{L^2((0,T_2)\times\partial B_R)^3}\leq C\norm{U}_{L^2(0,T_2;H^2({\mathbb{R}}^3\backslash B_R))^3}.$$ Combining the last two estimates, we finally obtain \eqref{l2a}.\end{proof} \subsection{Long time asymptotic behavior of the solution on a bounded domain} In this subsection we fix a bounded $\mathcal C^2$ domain $\Omega_1\subset{\mathbb{R}}^3$. We consider the sesquilinear form $a$ with domain $D(a)=H^1(\Omega_1)^3$ given by \begin{eqnarray*} a(U_1,U_2)&=&\int_{\Omega_1}\mathcal{E}(U_1,\overline{U_2}) \, dx\\ &=&\int_{\Omega_1} \left[(\lambda(x)+2\mu(x))(\nabla\cdot U_1(x))\overline{(\nabla\cdot U_2(x))}-\mu(x)\nabla\times U_1(x)\cdot\overline{\nabla\times U_2(x)}\right]dx.\end{eqnarray*} Then, for $U_1\in H^1(\Omega_1)^3$ such that $\mathcal{L}_{\lambda,\mu}U_1\in L^2(\Omega_1)^3$ and $ \mathcal T U_1=0$ on $\partial\Omega_1$, we have \begin{eqnarray}\label{sesa} a(U_1,U_2)=-\int_{\Omega_1} \mathcal{L}_{\lambda,\mu}U_1(x)\cdot \overline{U_2(x)}dx,\quad U_2\in H^1(\Omega_1)^3. \end{eqnarray} Fixing $H=L^2(\Omega_1)^3$, $V=H^1(\Omega_1)^3$ and applying \cite[Chapter 5]{Hsiao}, \cite[Theorem 8.1, Chapter 3]{LM1} and \cite[Theorem 8.2, Chapter 3]{LM1}, we deduce that, for $F\in L^2((0,+\infty)\times \Omega_1)^3$, $V_0\in V$ and $V_1\in H$, the problem \begin{eqnarray}\label{eaq1}\left\{\begin{array}{lll}\rho\partial_t^2U-\mathcal{L}_{\lambda,\mu} U=F(x,t),\quad &&(x,t)\in\Omega_1\times(0,+\infty),\\ U(\cdot,0)=V_0,\ \partial_tU(\cdot,0)=V_1,\quad && x\in \Omega_1,\\ \mathcal T U(x,t)=0,\quad && (x,t)\in \partial \Omega_1\times(0,+\infty), \end{array}\right.\end{eqnarray} admits a unique solution in $\mathcal C^1([0,+\infty);L^2(\Omega_1))^3\cap \mathcal C([0,+\infty);H^1(\Omega_1))^3$. Below we show the long time behavior of the solution of \eqref{eaq1}. 
\begin{prop}\label{p4} Let $F\in L^2((0,+\infty)\times \Omega_1)^3$ be such that supp$(F)\subset \overline{\Omega_1}\times[0,T)$ and let $V_0\in H^1(\Omega_1)^3$, $V_1\in L^2(\Omega_1)^3$. Then problem \eqref{eaq1} admits a unique solution $U\in \mathcal C^1([0,+\infty);L^2(\Omega_1))^3\cap \mathcal C([0,+\infty);H^1(\Omega_1))^3$ satisfying \begin{equation}\label{p4a}\norm{U(\cdot,t)}_{H^1(\Omega_1)^3}\leq \tilde{C}(\norm{F}_{L^2((0,T)\times\Omega_1)^3} +\norm{V_0}_{H^1(\Omega_1)^3}+\norm{V_1}_{L^2(\Omega_1)^3})(t+1),\quad t>0,\end{equation} with $\tilde{C}$ independent of $t$. \end{prop} \begin{proof} Without loss of generality we may assume that $V_0=V_1=0$; the case of non-vanishing initial conditions can be carried out in a similar way. Let us first assume that $F\in H^1_0(0,T;L^2(\Omega_1))^3$. Repeating the arguments in the proof of Lemma \ref{l1}, we can prove that the regularity of $U$ can be improved to $$U\in \mathcal C^2([0,+\infty);L^2(\Omega_1))^3\cap \mathcal C([0,+\infty);H^2(\Omega_1))^3.$$ Now let us consider the energy $$J(t):=\int_{\Omega_1} \rho|\partial_t U(x,t)|^2+\mathcal E(U(x,t),U(x,t))\;dx.$$ For simplicity, we assume that $F$ takes values in ${\mathbb{R}}^3$, so that $U$ also takes values in ${\mathbb{R}}^3$; our arguments extend without any difficulty to functions $F$ taking values in $\mathbb C^3$. 
It is clear that $J\in \mathcal C^1([0,+\infty))$ and $$J'(t)=2\int_{\Omega_1} \rho\partial_t^2 U(x,t)\cdot \partial_t U(x,t) +\mathcal E(U(x,t),\partial_tU(x,t))\;dx.$$ Using the fact that $ \mathcal T U=0$ on $[0,+\infty)\times\partial\Omega_1$, we can integrate by parts to obtain (see (\ref{sesa})) $$J'(t)=2\int_{\Omega_1} [\rho\partial_t^2 U(x,t)-\mathcal{L}_{\lambda,\mu} U(x,t)]\cdot \partial_t U(x,t)dx=2\int_{\Omega_1} F(x,t)\cdot \partial_tU(x,t)\,dx.$$ Thus, since $\rho>0$, we get \begin{eqnarray} \nonumber\int_{\Omega_1} |\partial_t U(x,t)|^2dx\leq \rho^{-1}J(t)&\leq& 2\rho^{-1}\int_0^t\abs{\int_{\Omega_1} F(x,s)\cdot \partial_tU(x,s)dx}ds\\ \nonumber &\leq& 2\rho^{-1}\int_0^T\int_{\Omega_1}|F(x,s)|\;|\partial_tU(x,s)|dxds\\ \label{eq:U} &\leq& 2\rho^{-1} T^{\frac{1}{2}}\norm{F}_{L^2((0,T)\times\Omega_1)^3}\norm{\partial_tU}_{L^\infty(0,T;L^2(\Omega_1))^3}. \end{eqnarray} Combining this with a classical estimate of $\norm{\partial_tU}_{L^\infty(0,T;L^2(\Omega_1))^3}$ (e.g. \cite[Formula (8.15), Chapter 3]{LM1}), we deduce that $$\norm{\partial_tU}_{L^\infty(0,T;L^2(\Omega_1))^3}\leq \tilde{C}\norm{F}_{L^2((0,T)\times\Omega_1)^3}$$ with $\tilde{C}$ depending only on $T$, $\Omega_1$, $\rho$, $\mathcal{L}_{\lambda,\mu}$. It then follows from (\ref{eq:U}) that $$\int_{\Omega_1} |\partial_t U(x,t)|^2dx\leq \tilde{C}\norm{F}_{L^2((0,T)\times\Omega_1)^3}^2.$$ Combining this with the fact that $$U(\cdot,t)=\int_0^t\partial_t U(\cdot,s)ds,$$ we obtain \begin{eqnarray}\nonumber \norm{U(\cdot,t)}_{L^2(\Omega_1)^3}&=&\norm{\int_0^t\partial_t U(\cdot,s)ds}_{L^2(\Omega_1)^3}\leq \int_0^t\norm{\partial_t U(\cdot,s)}_{L^2(\Omega_1)^3}ds\\ \label{eq:U1} &\leq& \tilde{C}\,\norm{F}_{L^2((0,T)\times\Omega_1)^3}t. \end{eqnarray} By density, we can extend this estimate to $F\in L^2((0,T)\times\Omega_1)^3$. 
Applying Korn's inequality gives the estimate \begin{eqnarray}\nonumber \norm{U(\cdot,t)}_{H^1(\Omega_1)^3}^2&\leq& \tilde{C}\left(\norm{U(\cdot,t)}_{L^2(\Omega_1)^3}^2+\int_{\Omega_1} \mathcal E(U(x,t),U(x,t))dx\right)\\ \label{EU} &\leq& \tilde{C}(\norm{U(\cdot,t)}_{L^2(\Omega_1)^3}^2+J(t)). \end{eqnarray} Multiplying both sides of (\ref{eaq1}) by $U$ and integrating by parts with respect to $x$ over $\Omega_1$, we can estimate $J(t)$ by \begin{eqnarray}\label{EJ} J(t)=\int_{\Omega_1} F(x,t)\,\cdot U(x,t)\,dx\leq ||F(\cdot,t)||_{L^2(\Omega_1)^3}||U(\cdot,t)||_{L^2(\Omega_1)^3}. \end{eqnarray} Now, inserting (\ref{EJ}) into (\ref{EU}) and making use of (\ref{eq:U1}), we finally obtain \begin{eqnarray*} \norm{U(\cdot,t)}_{H^1(\Omega_1)^3}^2\leq \tilde{C}\,\norm{F}_{L^2((0,T)\times\Omega_1)^3}^2(1+t^2), \end{eqnarray*} which proves (\ref{p4a}). \end{proof} \section*{Acknowledgement} The work of G. Hu is supported by the NSFC grant (No. 11671028) and the NSAF grant (No. U1530401). The work of Y. Kian is supported by the French National Research Agency ANR (project MultiOnde) grant ANR-17-CE40-0029. The authors would like to thank Yikan Liu for his remarks that allowed us to improve Theorem 3. The second author is grateful to Lauri Oksanen for his remarks about the global Holmgren theorem for the Lam\'e system, which allowed us to improve our results related to (IP1). The second author would also like to thank the Beijing Computational Science Research Center, where part of this article was written, for its kind hospitality.
# Syllabus

Topic 4.1: Integration and applications of integration (20 hours)

Integration techniques

4.1.1 integrate using the trigonometric identities $\sin^2x=\frac12(1-\cos2x)$, $\cos^2x=\frac12(1+\cos2x)$ and $1+\tan^2x=\sec^2x$

4.1.2 use substitution $u=g(x)$ to integrate expressions of the form $f(g(x))g'(x)$

4.1.3 establish and use the formula $\int\frac1x\,dx=\ln|x|+c\text{ for }x\neq0$

4.1.4 use partial fractions where necessary for integration in simple cases

Applications of integral calculus

4.1.5 calculate areas between curves determined by functions

4.1.6 determine volumes of solids of revolution about either axis

4.1.7 use technology with numerical integration

# Lessons

For other work on Integration techniques, refer to notes on SEQTA and class work.

## 1. Integration by Change of Variable (u-Substitution)

See the document below for worked solutions to the trig substitution questions in the Prezi.

## 3. Area Under and Between Curves

You should already be familiar with area under curve calculations. This extends that idea to area between curves. The Prezi starts with area under a curve and then builds on those ideas for area between curves. The video covers only area under a curve, and covers it in a different way from the Prezi. You should treat it as revision of prior work with integration.

See interactive diagrams showing area under a curve or area between curves.

## 4. Solids of Revolution

This is not something that I have prepared a Prezi for, so I will be using Khan Academy for the principal reference. See the videos and exercises at https://www.khanacademy.org/math/integral-calculus/solid-revolution-topic/disc-method/e/volumes-of-solids-of-revolution–discs-and-washers. Students should view at least the first two or three videos, then work through the exercises at the end of this section of videos at least twice, viewing other videos as they encounter the need. Next, move on to the Shell Method at https://www.khanacademy.org/math/integral-calculus/solid-revolution-topic/shell-method/v/shell-method-for-rotating-around-vertical-line, viewing at least two of the videos there then working through the exercises a couple of times. (The shell method is more obscure and not as important as the disc method. Many situations that are suitable for the shell method can be done by the disc method instead. You should be familiar with the shell method, but you should practice the disc method until you can do it about either axis in your sleep.)

Another useful resource is Eddie Woo's fine videos on this topic:

- https://youtu.be/XUoNadN2SbE
- https://youtu.be/0BQiIF8wca8
- https://youtu.be/fjsNIw1tLl4
- https://youtu.be/_1GtrOWyif8

## 5. Numerical Integration

1. Calculating a Riemann Sum in a spreadsheet
2. Examine different approaches that converge faster than a Riemann Sum

- The trapezium rule: models each slice using a trapezium. For $n$ slices,
  $\displaystyle\int_a^b f(x)\,dx\approx\sum_{i=1}^n\left(\frac{f(x_{i-1})+f(x_i)}2\times w\right)$
  where $w=\frac{b-a}n$ and $x_i=a+iw$. This simplifies to
  $\displaystyle \int_a^b f(x)\,dx\approx w \left(\frac{f(a)+f(b)}2+\sum_{i=1}^{n-1} f(x_i)\right)$
  (Can you demonstrate the simplification?)
- Simpson's Rule: this converges faster than simpler rules. It approximates the area of each slice by using a quadratic approximation of the curve. For Simpson's rule, each slice is defined by two end points and a mid point, and the area of a single slice of width $2w$ is given by
  $\displaystyle \int_{x_0}^{x_2} f(x)\,dx\approx \frac{f(x_0)+4f(x_1)+f(x_2)}{6}\times 2w$
  (where $x_1=x_0+w=x_2-w$). For $n$ slices of width $2w$ the sum becomes
  $\displaystyle \int_{a}^{b} f(x)\,dx\approx \sum_{i=0}^{n-1}\left(\frac{f(x_{2i})+4f(x_{2i+1})+f(x_{2i+2})}{6}\times 2w\right)$
  which simplifies to
  $\displaystyle \int_{a}^{b} f(x)\,dx\approx \frac{w}{3}\sum_{i=0}^{n-1}\left(f(x_{2i})+4f(x_{2i+1})+f(x_{2i+2})\right)$
  then splitting the sum
  $\displaystyle\int_{a}^{b} f(x)\,dx\approx \frac{w}{3}\left(\sum_{i=0}^{n-1}f(x_{2i})+4\sum_{i=0}^{n-1}f(x_{2i+1})+\sum_{i=0}^{n-1}f(x_{2i+2})\right)$
  tweaking the third sum gives
  $\displaystyle \int_{a}^{b} f(x)\,dx\approx \frac{w}{3}\left(\sum_{i=0}^{n-1}f(x_{2i})+4\sum_{i=0}^{n-1}f(x_{2i+1})+\sum_{i=1}^{n}f(x_{2i})\right)$
  Now the third sum and the first sum are identical except for the $f(x_0)=f(a)$ term included only in the first sum, and the $f(x_{2n})=f(b)$ term included only in the third sum. This gives us
  $\displaystyle\int_{a}^{b} f(x)\,dx\approx \frac{w}{3}\left(f(a)+f(b)+2\sum_{i=1}^{n-1}f(x_{2i})+4\sum_{i=0}^{n-1}f(x_{2i+1})\right)$
  Remember, here $w$ is only half the width of each slice, so $w=\frac{b-a}{2n}$ but $x_i=a+iw$ as previously.

You should be comfortable with the summation notation and know how to use it on your CAS calculator.

Once again, Eddie Woo's videos are highly recommended here. See Trapezoidal Rule: Basic Form, Trapezoidal Rule: Multiple Sub-Intervals, Simpson's Rule: Deriving the Basic Formula 1, Simpson's Rule: Deriving the Basic Formula 2, Simpson's Rule: Multiple Sub-Intervals, Simpson's Rule Example 1, Simpson's Rule Example 2.
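The two composite rules derived above can also be sketched in Python (a minimal illustration; the function names are mine, not from the syllabus). Simpson's rule is exact for cubics, which makes a convenient check:

```python
def trapezium(f, a, b, n):
    """Composite trapezium rule: w*((f(a)+f(b))/2 + sum of interior values),
    with n slices of width w = (b-a)/n."""
    w = (b - a) / n
    return w * ((f(a) + f(b)) / 2 + sum(f(a + i * w) for i in range(1, n)))

def simpson(f, a, b, n):
    """Composite Simpson's rule with n double-slices: w = (b-a)/(2n), x_i = a+i*w,
    (w/3)*(f(a) + f(b) + 2*sum over interior even nodes + 4*sum over midpoints)."""
    w = (b - a) / (2 * n)
    even = sum(f(a + 2 * i * w) for i in range(1, n))    # interior even nodes
    odd = sum(f(a + (2 * i + 1) * w) for i in range(n))  # midpoints
    return w / 3 * (f(a) + f(b) + 2 * even + 4 * odd)

cube = lambda x: x ** 3                  # exact integral over [0,1] is 1/4
print(simpson(cube, 0.0, 1.0, 4))        # 0.25 -- Simpson is exact for cubics
print(trapezium(cube, 0.0, 1.0, 100))    # close to 0.25 (error ~ 2.5e-5)
```

Doubling the number of trapezium slices should shrink the error by roughly a factor of four, which is a quick way to confirm the $O(n^{-2})$ convergence mentioned above.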
# [NTG-context] Alignment of Itemize

Thu May 15 08:42:33 CEST 2008

On Wed, May 14, 2008 at 10:36 PM, <cidadaum at sapo.pt> wrote:
>
> I have put this in my source file:
>
> And in the output the em rule (---) was overlapped by the roman
> numeral. By the way I would like to align the roman numeral to the
> right more or less like this (it is difficult to do in text format)
>
> I --- blablablablablablabla
> II --- blablablablablablabla
> III --- blablablablablablabla
> IV --- blablablablablablabla
> V --- blablablablablablabla
> VI --- blablablablablablabla
> VII --- blablablablablablabla
> VIII --- blablablablablablabla
>
> Thank you very much for your help

\starttext
\startitemize[R,fit][itemalign=flushright,stopper={ --- }]
\dorecurse{8}{\item blablabla}
\stopitemize
\stoptext

> Armando

Wolfgang
Our client is seeking a fibre-experienced (FTTP), telecoms-focused network engineer to join our expanding business. We are a family-owned Internet Service Provider bringing broadband to rural areas through a mixture of technologies. Having secured several multi-million-pound projects, and with a view to acquiring more, we need a Fibre Engineer on a permanent basis to help deploy and maintain new and existing networks. You will be working on major new projects as our client extends its networks in specific geographic regions. A typical engineer has a flexible working approach, an ability to solve problems under time pressure, is comfortable working at heights and has networking skills.

• Troubleshooting individual and wider network issues.
• Liaising with both colleagues and customers; keeping in regular contact with the office staff.
• Being punctual and conducting all work to high standards of neatness.
{ "redpajama_set_name": "RedPajamaC4" }
4,012
\section{Introduction} Quantum spin liquids (QSLs) have been a topic of intense research interest in condensed matter physics in recent decades \cite{ANDERSON1973153,Knolle2019,Savary2016}. The still elusive possibility of control over distributed many-body entanglement offers a key path toward fault-tolerant registers for quantum information processing. Quantum spin liquid candidates like $\alpha$-RuCl$_3$ \cite{Kasahara2018,Sandilands2015,WangYiping2020,Wulferding2020,Banerjee2016NM,Banerjee1055,Baek2017,Jansa2018,pai2021magnetostriction}, $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$ \cite{doi:10.1126/science.abc6363}, and YbMgGaO$_4$ \cite{RN3956YbMgGaO4} typically exhibit geometric frustration of some sort, but there have not yet been conclusive observations of QSLs, nor are there well accepted measurement protocols for QSL order parameters. Yb delafossites\cite{Schmidt_2021} like NaYbS$_2$ \cite{NaYbS2_2018,NaYbS2_2019}, NaYbSe$_2$ \cite{NaYbSe2PRB_CEF,NaYbSe2_2,NaYbSe2_PRX}, NaYbO$_2$ \cite{NaYbO2_1,NaYbO2_2,NaYbO2_3}, CsYbSe$_2$\cite{Xing_Jie_PRB2019Rapid,pocs2021systematic,xie2021fieldinduced,RN3955}, and KYbSe$_2$ \cite{xing2021synthesis,scheie2021witnessing} - with effective spin $S_{\text{eff}} = 1/2$ from Yb$^{3+}$ ions and antiferromagnetic coupling decorating 2D triangular lattices - are popular candidates for the realization of quantum spin liquids. Easy-plane magnetic anisotropy results in a plateau at one-third of the saturation magnetization due to strong spin fluctuations\cite{NaYbSe2_2, Xing_Jie_PRB2019Rapid, NaYbO2_1, NaYbO2_3}. There have been no reports of long-range magnetic ordering at temperatures as low as $T = 0.4$ K as a result of the geometric frustration in these materials. This family is thought to be less defect prone than QSL candidate YbMgGaO$_4$\cite{a_large_family,RN3955,xie2021fieldinduced,Xing_Jie_PRB2019Rapid}. 
Crystal electric field (CEF) modes emerge as a result of the lifting of the orbital degeneracy of transition metal ions in certain environments\cite{HansBethe, VanVleck}. CEFs can therefore be used as probes of internal fields. Lower energy CEFs are particularly important as they are part of the ground state wavefunction that defines properties such as magnetism, conductivity and superconductivity\cite{Schaack_Book_2000}. For instance, the CEF splitting and spin-orbit coupling of Yb$^{3+}$ 4f$^{13}$ orbitals causes the intrinsic magnetic anisotropy in Yb delafossites. CEF modes are also responsible for a Schottky anomaly that has been observed in some members of this family. The rare earth Yb$^{3+}$ doublets are time-reversal degenerate \cite{kramers1930theorie}. The 4f electrons - well shielded compared to 3d electrons - exhibit smaller CEF splitting \cite{Lanthanide_CEF_intro}. CEF excitations can also interact with other excitations in the system\cite{Sethi_PRL_2019_omegas,k6695,adroja2012vibron}. While CEFs have been extensively studied since the 1960s, understanding of their interactions with other excitations beyond pristine single ion environments remains rather limited. CEF coupling to electron-hole pairs has been proposed as the mechanism that mediates superconductivity in UPd$_2$Al$_3$\cite{UPD_SC1, UPD_SC2, UPD_SC3, UPD_SC4} and PrOs$_4$Sb$_{12}$\cite{PrOs4Sb12_SC}. It was reported for Pr(Os$_{1-x}$Ru$_{x}$)$_{4}$Sb$_{12}$ that, when CEF and phonon modes cross each other, the superconducting transition temperature has a minimum\cite{mini_Tc_when_CEF_phonon_cross}. 
Near-resonant interactions between CEF and phonon modes can give rise to strong CEF-phonon coupling, which has been observed in CeAuAl$_3$ \cite{k6695, CeAuAl}, Ce$_2$O$_3$\cite{Sethi_PRL_2019_omegas}, CeAl$_2$ \cite{JTCeAl2, RamanCeAl2, Thalmeier_1982}, PrNi$_5$ \cite{CEFPhononPrNi5}, NdBa$_2$Cu$_3$O$_{7-\delta}$\cite{CEF_highTC}, and Ho$_2$Ti$_2$O$_7$\cite{Gaudet_CEF_ph_Ho2Ti2O7}, among other materials. Conduction electrons may further complicate the picture and alter the relative intensity of hybrid modes\cite{CEFEatenConfigCrossOver}. Understanding the complex manifold spanned by CEFs and phonons, their interactions, and their associated optical selection rules may lead to emergent functionalities that take advantage of spin-lattice coupling. Also, this understanding may be useful for controlling and reading out possible QSL ground states for topological quantum information processing. Inelastic scattering methods like neutron and Raman scattering are workhorse tools for probing fundamental excitations such as phonons, CEF modes \cite{Klein1983}, magnons, and possible Majorana states \cite{Wulferding2020, HighfieldSahasrabudhe}. Though Raman spectroscopy typically probes excitations near the $\Gamma$ point in the Brillouin zone, it is capable of probing higher energy modes that are challenging to access with neutron scattering. Further, Raman microscopy can be used to probe sub-micron-scale spatial variation to reveal interactions between excitations. Here, we employ polarization-, spatially-, and temperature-resolved Raman spectroscopy to probe CEF-phonon interactions in CsYbSe$_2$. We identify all of the primary phonon and CEF modes. We report combination modes, a vibronic bound state, and mode repulsion between CEF and phonon modes. Our work extends explorations of CEF-phonon coupling into the emerging class of candidate QSLs. 
\begin{figure} \centering \includegraphics[width=1\columnwidth]{Cs_Figure_1.pdf} \caption{(a) Side view and (b) top view of the crystal structure of CsYbSe$_2$. (c) Vibration patterns of Raman active phonon modes in CsYbSe$_2$. (d) Primary CEF excitations and phonon excitations. } \label{fig:crystal} \end{figure} \section{Results and Discussion} \subsection{Theoretical analysis of phonon and CEF modes} Figures \ref{fig:crystal} (a) and (b) show the crystal structure of CsYbSe$_2$. According to group theory and our first-principles phonon calculations, CsYbSe$_2$ belongs to the space group $P$6$_3$/$mmc$ (No. 194) with the point group D$_{\text{6h}}$. The bulk unit cell has a hexagonal lattice with 8 atoms in total, giving rise to 24 normal phonon modes at the $\Gamma$ point: $\Gamma$(D$_{\text{6h}}$) = A$_{\text{1g}}$ + 3A$_{\text{2u}}$ + 2B$_{\text{1u}}$ + 2B$_{\text{2g}}$ + E$_{\text{1g}}$ + 2E$_{\text{2g}}$ + 3E$_{\text{1u}}$ + 2E$_{\text{2u}}$, where Raman active phonon modes correspond to non-degenerate A$_{\text{1g}}$ symmetry modes, doubly degenerate E$_{\text{2g}}$ symmetry modes, and doubly degenerate E$_{\text{1g}}$ symmetry modes. The intensity of a Raman mode is $I \propto | \mathbf{e}_s \cdot \widetilde{R} \cdot \mathbf{e}_i^T|^2$, where $\widetilde{R}$ is the Raman tensor of a phonon mode, and $\mathbf{e}_s$ and $\mathbf{e}_i$ are the electric polarization vectors of the scattered and incident light, respectively\cite{MoS2_WSe2_Shear_breathing, Liangbo_Fe3GeTe2}. In the back-scattering geometry with linear laser polarization, $\mathbf{e}_s$ and $\mathbf{e}_i$ are in plane. Based on the Raman tensors shown in the Supplemental Information, it is clear that only A$_{\text{1g}}$ and E$_{\text{2g}}$ phonon modes can appear in the Raman spectra, whereas E$_{\text{1g}}$ modes cannot be observed in our experimental configuration. This is common in hexagonal layered materials\cite{Liangbo_Fe3GeTe2}. 
Our DFT calculated frequencies for the two E$_{\text{2g}}$ modes (E$_{\text{2g}}^{\text{1}}$ and E$_{\text{2g}}^{\text{2}}$) are 37 cm$^{-1}$ and 97 cm$^{-1}$, and A$_{\text{1g}}$ at 122 cm$^{-1}$. These results are consistent with our experimental Raman observations that will be discussed below. \subsection{Temperature dependence} \begin{figure} \centering \includegraphics[width=1\columnwidth]{Cs_Figure_2.pdf} \caption{(a) Temperature dependent Raman spectra from $T = 3.3$ K to $T = 273$ K. The legends correspond to the assignment of the peaks. (b) The line traces of the temperature dependence data from (a). The white arrow in (a) and the black arrow in (b) indicate an artifact from volume Bragg gratings that leaked through the spectrometer.} \label{fig:temp} \end{figure} Temperature dependent Raman spectra acquired in an XX polarization configuration ($\textbf{e}_s$ = (1, 0, 0), $\textbf{e}_i$ = (1, 0, 0)) from $T = 3.3$ K to $T = 285$ K (1 K step from 3.3 to 30 K, 5 K step from 40 K to 285 K) are shown in Figure \ref{fig:temp} (a) and (b). The white arrow in (a) and the black arrow in (b) highlight an artifact from the volume Bragg gratings that is unrelated to CsYbSe$_2$. A total of 10 Raman modes on the Stokes side are resolved. We assign the three modes at 114 cm$^{-1}$, 183 cm$^{-1}$, and 269 cm$^{-1}$ to CEF1, CEF2, and CEF3, respectively. Since the CEF excitations in CsYbSe$_2$ are expected to be dominated by Yb$^{3+}$ 4f$^{13}$ orbitals with the electronic ground state $J = 7/2$ manifold that is split into 4 doubly degenerate Kramers pairs (Figure \ref{fig:crystal} (d)), these three modes and the ground state comprise the expected 4 pairs. The next excited state manifold $J = 5/2$ is expected to be about 10,000 cm$^{-1}$ higher\cite{Koningstein_1967}. Additionally, CEFs typically exhibit strongly temperature-dependent intensity and linewidth \cite{Schaack_Book_2000}. CEF1-CEF3 soften in energy at low temperatures and become significantly more prominent. 
In particular, CEF1 is more than 15.3 times stronger in intensity at $T = 3.3$ K than at $T = 285$ K. The energy levels for the three CEF1-CEF3 modes are consistent with previous spectroscopies of CEF modes in NaYbSe$_2$ \cite{NaYbSe2PRB_CEF} and KYbSe$_2$\cite{scheie2021witnessing}. The CEF1 and CEF2 modes are also consistent with a recent report on CEF levels probed by resonant torsion magnetometry and low field susceptibility measurements that are particularly sensitive to CEF1 and the details of the CEF Hamiltonian \cite{pocs2021systematic}. Though it is common to fit experimentally observed CEF energy levels to obtain crystal field parameters, three clearly visible CEF peaks provide too little information to constrain a fit to the six nonzero crystal field parameters for the three-fold Yb$^{3+}$ symmetry in CsYbSe$_2$. Nevertheless, to get an approximate understanding of the CEF ground state, we modeled the CEF Hamiltonian using a point charge model calculated using PyCrystalField software\cite{PyCrytalField} and the Se environment of CsYbSe$_2$. To match energy scales, we fitted the Se effective charges to the three measured low-temperature CEF levels. This yielded an effective Se charge of -1.54e for CsYbSe$_2$. The observed and calculated CEF level energies are in Table~\ref{table}. The ground states of these models, assuming a quantization axis in the c direction, are \begin{equation} |\psi_\pm \rangle = \pm0.968|\pm7/2\rangle + 0.218|\pm1/2\rangle \pm 0.128|\pm5/2\rangle \end{equation} This point charge fit is a crude approximation, and shows an easy-axis ground state. However, the coefficients for the ground state eigenkets can easily be adjusted to give an easy-plane ground state. Previous experience with these materials \cite{pocs2021systematic, NaYbSe2PRB_CEF} shows that their ground states tend to be more isotropic or easy-planar, suggesting that the above point-charge fit is not very accurate. \begin{figure*}[hbt!] 
\centering \includegraphics[width=2\columnwidth]{Cs_Figure_3.pdf} \caption{Peak positions as a function of temperature for the most prominent 9 peaks (a) E$^1_{\text{2g}}$, (b) CEF1 and E$^2_{\text{2g}}$, (c) CEF2, (d) CEF3, (e) 2E$^2_{\text{2g}}$, (f) CEF1 + CEF2, (g) 368.0 cm$^{-1}$ and (h) 2E$^1_{\text{2g}}$ + 3E$^2_{\text{2g}}$. The peak positions are extracted from Bayesian inference on the data in Figure \ref{fig:temp}. The shaded intervals are the 68\% HDI and 95\% HDI.} \label{fig:fit} \end{figure*} Based on our DFT phonon calculations discussed above, we assign the E$^1_{\text{2g}}$ mode to the 46 cm$^{-1}$ peak. Further, we assign the peak at 120 cm$^{-1}$ at $T = 293$ K to E$^2_{\text{2g}}$ (see Supplemental Information for linecuts at $T = 293$ K). At first glance, it may seem that this high-temperature mode transitions continuously to the peak at 114 cm$^{-1}$ at $T = 3.3$ K, which we assigned to CEF1 because of its strong softening and intensity increase at low temperatures. However, more careful analysis of the Raman spectra (discussed in greater detail below) shows that, at $T = 293$ K, CEF1 is a weak mode nearly resonant with E$^2_{\text{2g}}$. As the temperature decreases, the amplitude of E$^2_{\text{2g}}$ drops compared to CEF1. The A$_{\text{1g}}$ mode is assigned to a small peak at 169.0 cm$^{-1}$, since this peak shows up in the XX polarization configuration and disappears in the XY polarization configuration, a signature of A$_{\text{1g}}$ symmetry according to the group theory analysis discussed below. While this mode is resolved at $T = 293$ K, due to its low intensity and its proximity to CEF2, we were not able to resolve it at low temperatures. We assign the mode at 255.9 cm$^{-1}$ to 2E$^2_{\text{2g}}$, and the mode at 300 cm$^{-1}$ to CEF1 + CEF2. 
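As a quick numerical sanity check (a sketch, not part of the published analysis) on the point-charge ground state $|\psi_\pm\rangle$ quoted above, one can verify that the quoted coefficients are normalized and evaluate the resulting $\langle J_z \rangle$, which makes the easy-axis character of the point-charge fit explicit:

```python
import numpy as np

# Coefficients of |psi_+> in the |m_J> basis from the point-charge fit:
# |psi_+> = 0.968|+7/2> + 0.218|+1/2> + 0.128|+5/2>
coeffs = np.array([0.968, 0.218, 0.128])
m_j = np.array([7 / 2, 1 / 2, 5 / 2])

norm = np.sum(coeffs**2)           # should be ~1 if the ket is normalized
jz = np.sum(coeffs**2 * m_j)       # <psi_+| J_z |psi_+>

print(f"norm  = {norm:.4f}")       # ~1.00
print(f"<J_z> = {jz:.3f}")         # ~3.34 out of a maximal 7/2 -> easy-axis
```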
The 368.0 cm$^{-1}$ and 479.1 cm$^{-1}$ peaks could be attributed to several higher-order modes including 368.0 cm$^{-1}$: 2CEF2, CEF1 + 2E$^2_{\text{2g}}$, CEF3 + 2E$^1_{\text{2g}}$ and 479.1 cm$^{-1}$: 2E$^1_{\text{2g}}$ + 3E$^2_{\text{2g}}$, CEF1 + 2CEF2. We observed and identified 8 more modes in higher energy Raman spectra (Supplemental Information) that we will revisit in Section \ref{resonantsection}. To further verify the mode assignment and symmetry, we performed polarization-resolved Raman spectroscopy measurements in both parallel (XX) and cross (XY) polarization configurations. According to group theory analysis (more details in Supplemental Information), the polarization profile of an E$_{\text{2g}}$ Raman mode is circular under any linear polarization configuration. For an A$_{\text{1g}}$ Raman mode, the polarization profile is circular in the XX configuration while its intensity is zero in the XY configuration ($\textbf{e}_s$ = (1, 0, 0), $\textbf{e}_i$ = (0, 1, 0)). Similar polarization responses of Raman modes have been previously reported for NaYbSe$_2$\cite{NaYbSe2PRB_CEF}. Under the XX configuration, at room temperature, E$^1_{\text{2g}}$, E$^2_{\text{2g}}$, and A$_{\text{1g}}$ all have 0-fold symmetry (i.e., circular polarization profiles), and at low temperatures, the E$^1_{\text{2g}}$, CEF1, CEF2 and CEF3 modes likewise exhibit 0-fold symmetry (Supplemental Information), consistent with the group theory analysis for these modes. All of the modes above except A$_{\text{1g}}$ exhibit nearly the same intensity across the XX and XY configurations while A$_{\text{1g}}$ is suppressed in the XY configuration, in agreement with the analysis based on their Raman tensors (Supplemental Information). We note that the polarization profiles of the CEF Raman modes are very similar to those of the E$_{\text{2g}}$ phonon modes, suggesting that the CEF modes have Raman tensors similar in form to those of the E$_{\text{2g}}$ phonon modes. 
To track the temperature evolution of the observed CEF and phonon modes, we employed Bayesian inference with the Hamiltonian Monte Carlo package \texttt{PyMC3}\cite{pyMC}, a No-U-Turn (NUTS) sampler, and 4 chains with 3,000 samples for spectral modeling. Figure \ref{fig:fit} illustrates the inferred peak positions of the most prominent peaks: E$^1_{\text{2g}}$, E$^2_{\text{2g}}$, CEF1, CEF2, CEF3, 2E$^2_{\text{2g}}$, CEF1 + CEF2, 368.0 cm$^{-1}$ and 479.1 cm$^{-1}$. The traces are the median values from the Bayesian inference. The two shaded bands are 68 \% (darker) and 95 \% (lighter) highest density intervals (HDIs). The red trace is a fit of the resulting trace assuming a phonon anharmonicity relationship. The E$^1_{\text{2g}}$ mode hardens as temperature decreases. The modes that have CEF character all soften. The CEF modes are more prominent for $T < 120 - 140$ K and far weaker at higher temperatures. This can be seen in the large uncertainty of the Bayesian inference result for these CEF modes at higher temperatures. The mode CEF1 + CEF2 at 300 cm$^{-1}$ (Figure \ref{fig:fit} (f)) disappears at $T < 80$ K, suggesting that its initial state is a CEF excited state. Figure \ref{fig:fit} (b) shows the interacting E$^2_{\text{2g}}$ and CEF1 modes. CEF1 has small amplitude at a lower energy than E$^2_{\text{2g}}$ at higher temperatures and becomes 15.3 times stronger at $T = 3.3$ K, while E$^2_{\text{2g}}$ becomes small relative to CEF1. \begin{figure*}[t] \centering \includegraphics[width=2\columnwidth]{Cs_Figure_4.pdf} \caption{Hyperspectral Raman map of a 96.1 $\mu$m by 88.4 $\mu$m area of the crystal shows subtle mesoscale spatial dependence of the CEF modes and phonon modes. The data was taken with no polarization control. (a) Representative spectra at 5 positions on the sample. (b)-(i) integrated relative intensity of the prominent peaks as a function of position. 
(b) CEF1, (c) E$^2_{\text{2g}}$, (d) $\omega_2$, (e) CEF2 + E$^1_{\text{2g}}$, (f) 2E$^2_{\text{2g}}$, (g) CEF2, (h) CEF3, (i) 2E$^1_{\text{2g}}$ + 3E$^2_{\text{2g}}$. } \label{fig:spatial} \end{figure*} \subsection{Real space mode repulsion observed with hyperspectral Raman} Mesoscale interplay between CEF and phonon modes is also observed in real-space hyperspectral Raman microscopy of the same flake at $T = 3.3$ K, as shown in Figure \ref{fig:spatial}. The Raman map consists of 45 $\times$ 45 spectra across a 96 $\times$ 88 $\mu$m area. Selected spectra with distinct representative features are shown in Figure \ref{fig:spatial} (a). The spectrum at $(x, y) =(10 \;\mu\text{m}, 76 \;\mu\text{m})$ has prominent CEF1, CEF2, and CEF3 modes, consistent with the majority of the 2,025 sampled pixels. While the average of all the spectra (illustrated by the shaded background) still exhibits most of the spectral content that is present at each pixel, material heterogeneity induces substantial linewidth broadening in the shaded spectrum. For example, at $(x, y) =(6 \;\mu\text{m}, 72\;\mu\text{m})$, the intensities of the CEF1, CEF2 and CEF3 modes are only about 50 \% of their peak intensity, and smaller peaks at 127.1 cm$^{-1}$ and 145.1 cm$^{-1}$ are observed. Additionally, the peaks at 233.9 cm$^{-1}$ and 255.9 cm$^{-1}$ are more prominent. There are also smaller peaks at 383.7 cm$^{-1}$ and 510.7 cm$^{-1}$. We assign 127.1 cm$^{-1}$ to E$^2_{\text{2g}}$, which is only a small baseline next to CEF1 at most pixels, and 145.1 cm$^{-1}$ to a CEF-phonon coupling mode $\omega_2$. We further assign 255.9 cm$^{-1}$, 383.7 cm$^{-1}$ and 510.7 cm$^{-1}$ to 2E$^2_{\text{2g}}$, 3E$^2_{\text{2g}}$ and 4E$^2_{\text{2g}}$, respectively. To carefully track the spatial dependence of each mode we utilized baseline removal with asymmetric least squares (ALS) and non-negative matrix factorization (NMF). Below we discuss the ALS results. 
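ALS baseline removal is a standard routine; the following is a minimal sketch of the usual Eilers-style iteration. The smoothness parameter $\lambda$, asymmetry $p$, and the synthetic spectrum are illustrative assumptions, not the values used for the data in Figure \ref{fig:spatial}:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, niter=10):
    """Asymmetric least squares: a smooth curve hugging the lower envelope of y."""
    L = len(y)
    # Second-difference operator enforcing smoothness of the baseline
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(L - 2, L))
    w = np.ones(L)
    for _ in range(niter):
        W = sparse.diags(w)
        z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
        # Points above the baseline (peaks) get the small weight p
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Synthetic spectrum: sloped baseline plus one narrow peak of height 5
x = np.linspace(0, 1, 500)
baseline = 1.0 + 0.5 * x
y = baseline + 5.0 * np.exp(-(((x - 0.5) / 0.02) ** 2))
corrected = y - als_baseline(y)   # baseline-removed spectrum
```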
NMF results are described in the Supplementary Information. Figures \ref{fig:spatial} (b) - (i) illustrate the Raman intensity maps (red: high; blue: low) for modes (b) CEF1, (c) E$^2_{\text{2g}}$, (d) $\omega_2$, (e) CEF2 + E$^1_{\text{2g}}$, (f) 2E$^2_{\text{2g}}$, (g) CEF2, (h) CEF3 and (i) 2E$^1_{\text{2g}}$ + 3E$^2_{\text{2g}}$. The selected pixels from panel (a) are marked. Spatial anticorrelations between the CEF modes and E$^2_{\text{2g}}$ phonons are observed: where CEFs are strong, the E$^2_{\text{2g}}$ and 2E$^2_{\text{2g}}$ modes are weak. While there have been many spectroscopic reports of CEF-phonon coupling in other materials, interplay of CEF excitations and phonons in real space has not been previously reported. One possible origin of the inhomogeneity may come from Yb atoms occupying Cs sites (as reported for NaYbSe$_2$ \cite{NaYbSe2_PRX}). A rigorous description of the origin of mesoscale phonon-CEF interactions is beyond the scope of this paper, but the ability to control phonon-CEF interactions, through defect engineering, for example, could enable substantial advances in the development of quantum technologies based on Yb delafossite QSLs. \subsection{CEF-phonon bound state and combination modes}\label{resonantsection} \textbf{Bound state}. When CEF and phonon modes are nearly resonant, vibronic bound states can form. This form of \textit{magnetoelastic} coupling has been detected by Raman spectroscopy since the 1980s in CeAl$_2$\cite{Thalmeier_1982}. There, closely spaced CEF and phonon modes form two new modes with energies described by the Thalmeier-Fulde description \cite{Thalmeier_1982}: \begin{equation} \omega_{1,2} = \frac{\omega_{\text{CEF}} + \omega_{\text{ph}} }{2} \pm \sqrt{ \left(\frac{\omega_{\text{CEF}} - \omega_{\text{ph}} }{2}\right)^2 + V^2 } \end{equation} where $\omega_{\text{CEF}}$ and $\omega_{\text{ph}}$ are the energies of the closely spaced CEF and phonon modes, respectively, and $V$ is the effective coupling strength. 
With the 145.1 cm$^{-1}$ peak assigned to the $\omega_2$ mode of CEF1 and E$^2_{\text{2g}}$, we obtain an effective coupling strength of 23.6 cm$^{-1}$ at $T = 3.3$ K, which is smaller than in Ce$_2$O$_3$\cite{Sethi_PRL_2019_omegas} and larger than in CeAl$_2$ \cite{Thalmeier_1982}. This model suggests that a conjugate bound state $\omega_1$ exists at 96.2 cm$^{-1}$, but $\omega_1$ was not observed, potentially due to the 90 cm$^{-1}$ cutoff of the longpass filter that was used for the hyperspectral Raman measurements. It is worth pointing out that $\omega_2$ is observed only in a small subset of the sampled real space positions, such as $(x, y) =(6 \;\mu\text{m}, 72\;\mu\text{m})$, the purple trace in Figure \ref{fig:spatial} (a) and the purple square in the rest of the subplots of Figure \ref{fig:spatial}. The eigenvibration of the E$^2_{\text{2g}}$ mode is described by the out-of-phase vibration of the two Se atoms next to the Yb$^{3+}$ ion, which is responsible for the CEF modes. One might ask how the CEF-phonon coupling of these two modes alters the ground state wavefunction and how the lower energy excitations accommodate this change. While the nascent literature argues that CsYbSe$_2$ is less defect prone than other QSL candidates \cite{RN3955,xie2021fieldinduced,Xing_Jie_PRB2019Rapid}, defects are still responsible for many of the observed properties. For example, the sister material NaYbSe$_2$ was reported to have 4.8\% of Na sites occupied by Yb ions\cite{NaYbSe2_PRX}. A similar effect is likely present in CsYbSe$_2$ as Na and Cs both have the same valence as Yb. However, the presence of Yb atoms at Cs sites yielding a stronger phonon response and weaker CEF is counterintuitive as Yb is the cause of the CEF, so one would expect the CEF to be stronger at the defect sites. Furthermore, the E$_{\text{2g}}$ modes are mainly due to the vibrations of Cs atoms. Heavier Yb (173.04 u) atoms at Cs (132.91 u) sites should yield a weaker Raman response. 
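Plugging the measured CEF1 (114.2 cm$^{-1}$) and E$^2_{\text{2g}}$ (127.1 cm$^{-1}$) energies into the Thalmeier-Fulde expression above reproduces both the observed $\omega_2$ and the unobserved conjugate $\omega_1$; a short numerical check:

```python
import numpy as np

def coupled_modes(w_cef, w_ph, V):
    """Thalmeier-Fulde energies of the two vibronic bound states (cm^-1)."""
    mean = 0.5 * (w_cef + w_ph)
    half = 0.5 * (w_cef - w_ph)
    split = np.sqrt(half**2 + V**2)
    return mean - split, mean + split  # (omega_1, omega_2)

w1, w2 = coupled_modes(114.2, 127.1, V=23.58)
print(f"omega_1 = {w1:.1f} cm^-1")  # ~96.2, the unobserved conjugate mode
print(f"omega_2 = {w2:.1f} cm^-1")  # ~145.1, matching the measured peak
```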
\textbf{Combination modes}. Together, a total of 17 modes are observed in Figures \ref{fig:temp}, \ref{fig:spatial} and S1. Their energies and assignments are summarized in Table \ref{table}. Up to fifth-order combination modes such as CEF2 + E$^1_{\text{2g}}$ + 3E$^2_{\text{2g}}$, 2CEF2 + CEF3, and 5E$^2_{\text{2g}}$ are identified. Phonon dressed electronic excitations are extremely common in photoluminescence\cite{polaronic_CrI3} and ARPES\cite{polaron_vdW}, and combination modes and overtones are also quite common in resonant Raman spectroscopy, including CEF modes coupled to LO phonons\cite{CEF_multiLO} and CEF subtraction modes \cite{CEF_subtraction}. However, overtones and combination modes are far less common in non-resonant Raman spectroscopy. Resonant absorption with 532 nm (2.33 eV) excitation is not expected in this system, as there is no known absorption line for CsYbSe$_2$ near 2.33 eV. \begin{table} \begin{tabular}{ c c } wavenumber (cm$^{-1}$) & assignment \\ \hline 48.0 & E$^1_{\text{2g}}$ \\ \hline 96.2 &$\omega_1$ from CEF1 and E$^2_{\text{2g}}$ ($V$ = 23.58 cm$^{-1}$) \\ \hline 114.2 &CEF1 \\ \hline 127.1 &E$^2_{\text{2g}}$ \\ \hline 145.1 &$\omega_2$ from CEF1 and E$^2_{\text{2g}}$ ($V$ = 23.58 cm$^{-1}$) \\ \hline 169.0 &A$_{\text{1g}}$ (293 K) \\ \hline 184.6 &CEF2 \\ \hline 233.9 &CEF2 + E$^1_{\text{2g}}$ \\ \hline 255.9 &2E$^2_{\text{2g}}$ \\ \hline 271.4 &CEF3 \\ \hline 300.0 &CEF1 + CEF2 \\ \hline 368.0 &2CEF2, CEF1 + 2E$^2_{\text{2g}}$, CEF3 + 2E$^1_{\text{2g}}$\\ \hline 383.7 & 3E$^2_{\text{2g}}$, CEF1 + CEF3 \\ \hline 479.1 & 2E$^1_{\text{2g}}$ + 3E$^2_{\text{2g}}$, CEF1 + 2CEF2 \\ \hline 510.7 & 4E$^2_{\text{2g}}$ \\ \hline 615.4 & CEF2 + E$^1_{\text{2g}}$ + 3E$^2_{\text{2g}}$, 3CEF1 + CEF3 \\ \hline 639.6 &2CEF2 + CEF3, 5E$^2_{\text{2g}}$ \\ \end{tabular} \caption{\label{tab:table-name}Summary of assignments of the observed modes}\label{table} \end{table} \section{Conclusion} We identified all the primary CEF and phonon modes for QSL candidate 
CsYbSe$_2$ and verified their symmetries. Interestingly, with sub-micron spot size, we observed mesoscale spatial mode repulsion between CEF and E$^2_{\text{2g}}$ (and 2E$^2_{\text{2g}}$) phonon modes. Furthermore, we identified a CEF-phonon bound state $\omega_2$ from CEF1 and E$^2_{\text{2g}}$ and extracted an effective coupling strength $V$ = 23.58 cm$^{-1}$. These results for the magnetoelastic coupling in CsYbSe$_2$ can be used to estimate the coupling between phonons and potential spinons, which may enable the confirmation of the underlying QSL ground state\cite{Phonon_Kitaev, Phonon_Kitaev2}. Understanding the mechanism behind the mesoscale CEF-phonon coupling with knowledge of the complex CEF and phonon manifolds may provide a pathway to optically addressable mesoscale quantum spin devices that take advantage of the QSL ground state. \section{Methods} \subsection{Sample Details and Experimental Setup} High quality single crystal CsYbSe$_2$ (Figure \ref{fig:crystal}) was grown using a previously described flux method \cite{RN3955}. Polarization-resolved Raman spectra from $T = 3.3$ K to $T = 300$ K were taken in a Montana Instruments closed-cycle cryostat with out-of-plane excitation and a back scattering geometry (beam path $\|$ \textbf{c}). The spectra were measured with a Princeton Instruments Isoplane SCT-320 spectrograph with a Pixis 400BR Excelon camera and a 2400 line/mm grating. A 532.03 nm continuous wave laser excitation and a set of 3 Optigrate volume Bragg gratings were used to access low energy Stokes and anti-Stokes Raman modes. Achromatic half-wave plates were mounted on piezoelectric rotators for polarization control. The power at the sample was about 1.5 mW and the typical acquisition time was about 30 s per spectrum. The hyperspectral Raman microscopy was performed with an AttoCube 3-axis positioner and Semrock filters in lieu of the Bragg gratings (for improved collection efficiency). 
\subsection{Calculation Details} To obtain frequencies and vibration patterns of Raman-active phonon modes in CsYbSe$_2$, first-principles plane-wave density functional theory (DFT) calculations were performed using VASP software with projector augmented wave (PAW) pseudopotentials for electron-ion interactions\cite{KRESSE_1996_15} and the Perdew-Burke-Ernzerhof (PBE) functional for exchange-correlation interactions\cite{PBE}. The DFT+U method was used to consider the localized f electrons of Yb atoms, with the effective U parameter chosen as 6.0 eV\cite{Dudarev_1998}. Other U values including 4.0, 5.0, and 7.0 eV were tested as well, which yielded the same phonon frequencies. Both atomic coordinates and lattice constants were optimized until the residual forces were below 0.001 eV/\AA, with a cutoff energy of 400 eV and a $\Gamma$-centered k-point sampling of 18$\times$18$\times$5. The total energy was converged to 10$^{-8}$ eV. Based on the optimized unit cell, we then performed phonon calculations using a finite-difference scheme implemented in Phonopy\cite{phonopy}. The same cutoff energy and convergence criteria of energy were used. Hellmann-Feynman forces in the 3$\times$3$\times$1 supercell with a $\Gamma$-centered k-point sampling of 6$\times$6$\times$5 were computed by VASP for both positive and negative atomic displacements ($\delta$ = 0.03 \AA) and then used in Phonopy to construct the dynamical matrix. The diagonalization of the dynamical matrix provides phonon frequencies and eigenvectors (calculation results in Supplemental Information). \acknowledgments This research was sponsored by the U. S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. The first-principles phonon calculations and Raman microscopy were performed at the Center for Nanophase Materials Sciences, which is a U.S. Department of Energy Office of Science User Facility. L.L. 
acknowledges computational resources of the Compute and Data Environment for Science (CADES) at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. Postdoctoral research support was provided by the Intelligence Community Postdoctoral Research Fellowship Program at the Oak Ridge National Laboratory, administered by Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and the Office of the Director of National Intelligence. \section{Author contributions} All authors discussed the results thoroughly. Y.-Y. P., C. E. M., and B. J. L. performed most of the measurements. L. L. performed Raman tensor analysis. J. X. grew the samples. A. S. performed crystal field analysis. Y.-Y. P. and L. L. did the majority of the data analysis with input from A. A. P.; X. L., R. J., and L. L. performed the DFT phonon calculations. A. S. S., D. P., L. L., and B. J. L. initiated and oversaw the project. Y.-Y. P., L. L., and B. J. L. wrote most of the manuscript with contributions from all authors. \section{Temperature Dependence for Higher Energy Band} Figure \ref{fig:highenergy} shows temperature-dependent unpolarized Raman spectra taken in a higher energy band from $T = 3.3$ K to $T = 270$ K. The spectra were taken with Semrock dichroic and longpass filters instead of a set of volume Bragg gratings. \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Cs_Figure_S1.pdf} \caption{Raman spectra in a higher energy band as a function of temperature from $T = 3.3$ K to $T = 270$ K.} \label{fig:highenergy} \end{figure} \newpage \section{Raman group theory analysis for CsYbSe$_2$} CsYbSe$_2$ belongs to the space group $P$6$_3$/$mmc$ (No. 
194) with the point group D$_{\text{6h}}$, and its Raman active phonon modes consist of non-degenerate A$_{\text{1g}}$ symmetry modes, doubly degenerate E$_{\text{2g}}$ symmetry modes (E$_{\text{2g}}-x$ and E$_{\text{2g}}-y$), and doubly degenerate E$_{\text{1g}}$ symmetry modes (E$_{\text{1g}}-x$ and E$_{\text{1g}}-y$). Their Raman tensors $\widetilde{R}$ assume the following forms: \begin{equation*} \widetilde{R} (\text{A}_{\text{1g}}) = \begin{pmatrix} a & \cdot & \cdot\\ \cdot & a & \cdot \\ \cdot & \cdot & b \\ \end{pmatrix} \end{equation*} \begin{equation*} \widetilde{R} (\text{E}_{\text{2g}}-x) = \begin{pmatrix} c & \cdot & \cdot\\ \cdot & -c & \cdot \\ \cdot & \cdot & \cdot \\ \end{pmatrix}; \widetilde{R} (\text{E}_{\text{2g}}-y) = \begin{pmatrix} \cdot & c & \cdot\\ c & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ \end{pmatrix} \end{equation*} \begin{equation} \widetilde{R} (\text{E}_{\text{1g}}-x) = \begin{pmatrix} \cdot & \cdot & \cdot\\ \cdot & \cdot & d \\ \cdot & d & \cdot \\ \end{pmatrix}; \widetilde{R} (\text{E}_{\text{1g}}-y) = \begin{pmatrix} \cdot & \cdot & d \\ \cdot & \cdot & \cdot \\ d & \cdot & \cdot \\ \end{pmatrix} \label{eq:Raman_tensor} \end{equation} {\setlength{\parindent}{0cm}In the experimental back-scattering laser geometry (light $Z$ in and $Z$ out) with linear polarization, the electric polarization vectors of the scattered and incident light $\mathbf{e}_s$ and $\mathbf{e}_i$ are in-plane (i.e., the $X$-$Y$ plane), and they are given by $\mathbf{e}_s = (\cos\gamma, \sin\gamma, 0)$ and $\mathbf{e}_i = (\cos\theta, \sin\theta, 0)$. 
With Raman intensity $I \propto | \bold{e}_s \cdot \widetilde{R} \cdot \bold{e}_i^T|^2$, we have: } \begin{equation} I \propto \left|\begin{pmatrix} \cos\gamma, & \sin\gamma, & 0\end{pmatrix}\cdot \widetilde{R} \cdot \begin{pmatrix} \cos\theta\\ \sin\theta\\ 0 \end{pmatrix}\right|^2 \label{eq:int_general} \end{equation} {\setlength{\parindent}{0cm} It is obvious that E$_{\text{1g}}$ phonon modes have zero Raman intensity in the back-scattering geometry, and thus cannot be observed experimentally. For A$_{\text{1g}}$ and E$_{\text{2g}}$ modes, by substituting the Raman tensors $\widetilde{R}$ from Eq. \ref{eq:Raman_tensor} into Eq. \ref{eq:int_general}, we obtain} \begin{align} I(\text{A}_{\text{1g}}) &\propto \left|\begin{pmatrix} \cos\gamma, & \sin\gamma, & 0\end{pmatrix} \cdot \begin{pmatrix} a & \cdot & \cdot\\ \cdot & a & \cdot \\ \cdot & \cdot & b \\ \end{pmatrix} \cdot \begin{pmatrix} \cos\theta\\ \sin\theta\\ 0 \end{pmatrix}\right|^2 \nonumber \\ &\propto \left|a \cos\gamma \cos\theta + a \sin\gamma \sin\theta \right|^2 \nonumber \\ &\propto |a|^2 \cos^2(\gamma - \theta) \label{eq:final_a1g} \end{align} \begin{align} I(\text{E}_{\text{2g}}-x) &\propto \left|\begin{pmatrix} \cos\gamma, & \sin\gamma, & 0\end{pmatrix} \cdot \begin{pmatrix} c & \cdot & \cdot\\ \cdot & -c & \cdot \\ \cdot & \cdot & \cdot \\ \end{pmatrix} \cdot \begin{pmatrix} \cos\theta\\ \sin\theta\\ 0 \end{pmatrix}\right|^2 \nonumber \\ &\propto \left|\begin{pmatrix} c \cos\gamma, & -c \sin\gamma, & 0\end{pmatrix} \cdot \begin{pmatrix} \cos\theta\\ \sin\theta\\ 0 \end{pmatrix}\right|^2 \nonumber \\ &\propto \left|c \cos\gamma \cos\theta - c \sin\gamma \sin\theta \right|^2 \nonumber \\ &\propto |c|^2 \cos^2(\gamma + \theta) \end{align} \begin{align} I(\text{E}_{\text{2g}}-y) &\propto \left|\begin{pmatrix} \cos\gamma, & \sin\gamma, & 0\end{pmatrix} \cdot \begin{pmatrix} \cdot & c & \cdot\\ c & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ \end{pmatrix} \cdot \begin{pmatrix} \cos\theta\\ \sin\theta\\ 0 \end{pmatrix}\right|^2 \nonumber \\ &\propto \left|\begin{pmatrix} c \sin\gamma, & c \cos\gamma, & 0\end{pmatrix} \cdot \begin{pmatrix} \cos\theta\\ \sin\theta\\ 0 \end{pmatrix}\right|^2 \nonumber \\ &\propto \left|c \sin\gamma \cos\theta + c \cos\gamma \sin\theta \right|^2 \nonumber \\ &\propto |c|^2 \sin^2(\gamma + \theta) \end{align} Consequently, the Raman intensity of a doubly degenerate E$_{\text{2g}}$ mode is \begin{equation} I(\text{E}_{\text{2g}}) = I(\text{E}_{\text{2g}}-x) + I(\text{E}_{\text{2g}}-y) = |c|^2 \cos^2(\gamma + \theta) + |c|^2 \sin^2(\gamma + \theta) = |c|^2 \label{eq:final_E_2g} \end{equation} Eq. \ref{eq:final_E_2g} indicates that the polarization profile of an E$_{\text{2g}}$ phonon mode is a circle under any linear polarization configuration. For an A$_{\text{1g}}$ phonon mode, under the experimental parallel polarization configuration (i.e., XX, $\gamma = \theta$), its intensity I(A$_{\text{1g}}$) $\propto$ $|a|^2$ according to Eq. \ref{eq:final_a1g} and hence the polarization profile is also a circle; however, under the experimental cross polarization configuration (i.e., XY, $\gamma=\theta+90^{\circ}$), its intensity I(A$_{\text{1g}}$) = 0. These results are in agreement with the experimental data. Interestingly, the polarization profiles of CEF Raman modes are very similar to those of E$_{\text{2g}}$ phonon modes, suggesting that CEF modes have Raman tensors of a form similar to that of E$_{\text{2g}}$ phonon modes. \section{Polarization and Angular Dependence} Figure \ref{fig:pol} shows the polarization dependence of the peaks E$^1_{\text{2g}}$, CEF1, CEF2, and CEF3 at $T =$ 3.3 K, and of E$^1_{\text{2g}}$, E$^2_{\text{2g}}$, and A$_{\text{1g}}$ at $T =$ 293 K.
\begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Cs_Figure_S2.pdf} \caption{Polarization and angular dependence of the Raman modes at $T =$ 3.3 K and $T =$ 293 K.} \label{fig:pol} \end{figure} \newpage \section{Calculated phonon dispersion} \begin{figure}[htbp] \centering \includegraphics[width=0.5\linewidth]{dispersion.pdf} \caption{Calculated phonon dispersion.} \label{fig:dis} \end{figure} \newpage \section{Position Dependence} As described in the main text, we report subtle spatial anti-correlations between phonon and CEF modes, such as the CEF1, E$^2_{\text{2g}}$, and $\omega_2$ modes. Simple spatial plots of integrated counts over a specific peak may be affected by baseline corrections and/or large peaks nearby. The baseline can be removed by asymmetric least squares fitting, but the accuracy is not necessarily satisfactory. Meanwhile, full-blown curve fitting over thousands of spatial points can be computationally expensive. Non-negative matrix factorization (NMF) is a simple algorithm that captures the most linearly independent basis vectors out of a given hyper-dimensional data cube with very low computational cost, yet it is effective in exploratory data analysis. Here Figures \ref{fig:spatial_3K}, \ref{fig:spatial_120K}, and \ref{fig:spatial_130K} illustrate spatially resolved Raman spectra at $T = 3$ K, $T = 120$ K, and $T = 130$ K, respectively. In these figures, panel (a) illustrates a subset of raw Raman spectra, with the inset illustrating the captured NMF components. Panels (b-e) are integrated counts for selected peaks. Panels (f-h) are the weights with respect to the basis vector components for the NMF decomposition. An anticorrelation between modes similar to that described in the main text is again observed in these representations. The $(x, y)$ are the raw coordinate values. A value of $(x, y) =(2127 \;\mu\text{m}, 104 \;\mu\text{m})$ was subtracted in the main text.
\subsection{T = 3 K} \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Cs_Figure_S3K.pdf} \caption{Position Dependence at $T$ = 3.3 K.} \label{fig:spatial_3K} \end{figure} \newpage \subsection{T = 120 K} \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Cs_Figure_S120.pdf} \caption{Position Dependence at $T$ = 120 K.} \label{fig:spatial_120K} \end{figure} \newpage \subsection{T = 130 K} \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Cs_Figure_S130.pdf} \caption{Position Dependence at $T$ = 130 K.} \label{fig:spatial_130K} \end{figure} \newpage \end{document}
Q: Passing nested values to class methods in scrapy I'm new to web scraping, please pardon the possible vagueness in my terminology :| A snippet of an HTML page that I'm trying to write a spider for: <h3>2019 General Meetings</h3> <p><strong>Group 20:</strong> <br />Wednesday, June 5, 9 a.m. <br /> Bank &amp; Trust, 10000 E. Western Ave.</p> <p>Wednesday, July 11, 9 a.m. <br />Bank &amp; Trust, 10000 E. Western Ave.</p> <p><strong>Group 20:</strong> <br />Monday, July 8, 9 a.m.<br />Hubbard, 1740 W. 199th St.</p> <p>&nbsp;</p></div> The logic I'm trying to follow is: I have the <h3> which is the "top level" (or at least I consider it to be); there are other h3's on the page, so I need to make sure only this <h3> gets passed to the following parsers. For the above, I'm using response_items = response.xpath("//h3[contains(@h3, 'General Meetings')]") And I think I have it working. (But it needs more testing to make certain.) I need to pass each of the <p> to a respective parser within the class, and each should return a required piece of information about the meeting, e.g. _parser_date will return the date, _parser_address will return the address, and so on. I'm coming up short on finding the correct scrapy/xpath syntax for this. Following https://docs.scrapy.org/en/latest/topics/selectors.html I can't get this to work quite well. I'm particularly interested in having each parser "pick up" on a pattern within the <p>'s it's going to parse: if it's a date pattern, format it and return it; if it's a location pattern.. and so on. I'm trying to avoid using re(), unless you'd advise it's the right thing to do here. Any insights would be most welcome, Thank you. A: This should work: for p_node in response.xpath("//h3[contains(., 'General Meetings')]/following-sibling::p[position() < last()]"): address = p_node.xpath('./text()[last()]').get() date = p_node.xpath('./text()[last() - 1]').get() I used position() < last() to skip the last empty <p>, and I'm also parsing the data from the end.
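For the "format it and return" half of the question, a small compiled pattern plus datetime is hard to avoid, even if the spider itself sticks to XPath. A rough sketch — the helper name `parse_meeting_date`, the pattern, and the hard-coded default year are my own illustration, not part of the site or of scrapy:

```python
import re
from datetime import datetime

# Illustrative pattern for strings like "Wednesday, June 5, 9 a.m."
DATE_RE = re.compile(
    r'(?P<weekday>\w+day),\s+(?P<month>\w+)\s+(?P<day>\d{1,2}),\s+'
    r'(?P<hour>\d{1,2})(?::(?P<minute>\d{2}))?\s*(?P<ampm>[ap])\.?m\.?',
    re.IGNORECASE,
)

def parse_meeting_date(text, year=2019):
    """Return an ISO datetime string, or None if `text` is not a date line."""
    m = DATE_RE.search(text)
    if m is None:
        return None  # e.g. an address line falls through to the address parser
    hour = int(m.group('hour')) % 12
    if m.group('ampm').lower() == 'p':
        hour += 12
    minute = int(m.group('minute') or 0)
    day = datetime.strptime(f"{m.group('month')} {m.group('day')} {year}",
                            "%B %d %Y")
    return day.replace(hour=hour, minute=minute).isoformat()
```

Each text node extracted in the answer above can then be routed by trying the date parser first and falling back to the address parser when it returns None.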
Q: Add text above and below an image in a pdf using itextsharp(asp.net,C#) I have converted an image into byte[] using memorystream and then added the byte[] to a pdf using itextsharp. Now my requirement is to add a certain text above the image which gives some information about the image. This is my code: private void generatepdf(byte[] byteImage) { iTextSharp.text.Image image = iTextSharp.text.Image.GetInstance(byteImage); image.ScalePercent(0.3f * 100); using (System.IO.MemoryStream memoryStream = new System.IO.MemoryStream()) { Document document = new Document(PageSize.A4, 188f, 88f, 10f, 10f); PdfWriter writer = PdfWriter.GetInstance(document, memoryStream); document.Open(); document.Add(image); document.Close(); byte[] bytes = memoryStream.ToArray(); memoryStream.Close(); Response.Clear(); Response.ContentType = "application/pdf"; Response.AddHeader("Content-Disposition", "attachment; filename=test.pdf"); Response.ContentType = "application/pdf"; Response.Buffer = true; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.BinaryWrite(bytes); Response.End(); } } How can I add some text, convert it into bytes, and then send it through to the pdf object? Or is there a better way? Please guide me in the correct direction.
A: you can try like this private void generatepdf(byte[] byteImage) { iTextSharp.text.Image image = iTextSharp.text.Image.GetInstance(byteImage); image.ScalePercent(0.3f * 100); using (MemoryStream memoryStream = new System.IO.MemoryStream()) { Document document = new Document(PageSize.A4, 188f, 88f, 10f, 10f); PdfWriter writer = PdfWriter.GetInstance(document, memoryStream); string text1 = "before image"; Paragraph text1Title = new Paragraph(text1); string text2 = "after image"; Paragraph text2Title = new Paragraph(text2); document.Open(); document.Add(text1Title); document.Add(image); document.Add(text2Title); document.Close(); byte[] bytes = memoryStream.ToArray(); memoryStream.Close(); Response.Clear(); Response.ContentType = "application/pdf"; Response.AddHeader("Content-Disposition", "attachment; filename=test.pdf"); Response.ContentType = "application/pdf"; Response.Buffer = true; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.BinaryWrite(bytes); Response.End(); } }
Olyra latispicula is a species of grass that was described by Thomas Robert Soderstrom and Fernando Omar Zuloaga. Olyra latispicula belongs to the genus Olyra and to the grass family (Poaceae). No subspecies are listed in the Catalogue of Life.
\section{Introduction} \label{sec:intro} Main-sequence stars with masses below 0.35$M_\odot$ are believed to be fully convective and lack the tachocline region that is thought to be essential to magnetic field generation models in partially convective stars. Fully convective stars nonetheless display a variety of magnetic phenomena like stellar spots, flares, and chromospheric emission. Chromospheric spectroscopic activity indicators like Calcium II H+K at 3933 $\mbox{\normalfont\AA}$ and 3968 $\mbox{\normalfont\AA}$, respectively, and $H\alpha$ at 6562.8 $\mbox{\normalfont\AA}$ are produced as plasma traveling along magnetic loops heats the chromosphere. It is now well established that the emission is variable in time \citep[see][]{Suarez2016, Lee2010, Kruse2010}. By determining the timescales upon which the activity varies, we hope to gain insight into the physical mechanism responsible for generating magnetic fields in fully convective stars. For partially convective stars, \citet{Suarez2018} and \citet{Fuhrmeister2019} showed that activity monitoring on day--week timescales can be used to determine rotation periods. The ability to measure rotation periods using this method implies that the activity occurs predominantly in regions of roughly constant emission fixed to the stellar surface, perhaps originating from stellar spots or plage. Similarly, \citet{Shofer2019} used a sample of both inactive and active (M0-M9) dwarfs to examine periodicity in activity indicators for a sample of 154 stars with measured photometric rotation periods. Using a generalized Lomb-Scargle periodogram, they found that the rotation period is observed in at least one activity indicator for only a minority of these stars, 66 of the 154. All of the rotation periods they measured are greater than one day, so as to avoid potential sampling aliases with their observing cadence. Only one of these stars, GJ 406, has a mass that puts it below the limit at which we expect the star to be fully convective.
They noted that for the remaining stars the activity variation may be of a different origin. \citet{Kruse2010} explored variability in $H\alpha$ on timescales less than an hour using observations of M0-M9 dwarfs from the Sloan Digital Sky Survey. They showed that variability in $H\alpha$ increased towards later spectral types, but with the majority of stars having three observations or fewer separated by 15 minutes, it is difficult to determine the timescale of the variations. In a related study, \citet{Lee2010} used high-cadence observations of mid-to-late M dwarfs observed over the course of one hour and found that fluctuations in $H\alpha$ likely occur on timescales of 30 minutes to an hour, and are less likely to occur on shorter timescales. They attributed these fluctuations to stellar flares, which are known to enhance $H\alpha$ emission \citep{Kowalski2009,Kowalski2013,Fuhrmeister2018,DiMaio2020}. However, \citet{Lee2010} and \citet{Kruse2010} did not explore how variability relates to stellar rotation. In this study, we used spectroscopic and photometric observations of a sample of fully convective M dwarfs with masses ranging from 0.1--0.26$M_\odot$ to examine the timescale of the variability of $H\alpha$, and whether it is correlated with stellar rotation. Such a correlation may imply that the emission of $H\alpha$ is dominated by roughly constant emission from phenomena such as spots and plage that are fixed on the stellar surface and rotating into and out of view.
Furthermore, understanding the various timescales associated with activity is highly relevant to radial velocity surveys, because stellar activity can induce apparent radial velocity variations that mimic or mask the Keplerian signal of an orbiting exoplanet. In \S~\ref{sec:sample} we describe the stellar sample, followed by a description of the spectroscopic and photometric observations and data reduction in \S~\ref{sec:spec_obs}. We first investigated whether the variation in $H\alpha$ tracked stellar rotation, and we present those findings in \S~\ref{sec:methods}. We subsequently investigated the variation on much shorter timescales, and we present those findings in \S~\ref{sec:HC}. We discuss our conclusions in \S~\ref{sec:conclusion}. \section{Stellar Sample}\label{sec:sample} Our initial sample consisted of 10 single M dwarfs ranging from 0.1 to 0.26 solar masses. We selected these stars from a volume-complete sample of 413 such M dwarfs that reside within 15 parsecs \citep{Winters2021}. \citet{Winters2021} determined the distances to these 10 stars using parallaxes from the second data release of \textit{Gaia} \citep{GaiaDR22018} and estimated masses using the $M_{K_{S}}$ mass-luminosity relation presented in \citet{Benedict2016}. The mass estimates have a typical uncertainty of 4.7--14.0\%, which is dominated by the scatter in the mass-luminosity relation. All stars have been vetted for close companions as described in \citet{Winters2021}. These stars have rotation periods from 0.21 - 91.92 days \citep{Newton2016, Newton2018} determined from the periodic photometric modulation induced by star spots as measured by the MEarth survey \citep[][see Section \ref{sec:mearth_obs}]{Nutzman2008, Irwin2015}. We chose our sample to span a range of rotation periods in each of two mass bins, namely a more massive bin with masses 0.24--0.26 $M_\odot$, and a less massive bin with masses 0.12--0.17$M_\odot$.
Based on our findings for our initial sample of ten stars, we subsequently added three additional stars to our sample, GJ 1111, GJ 1167, and LEP 1431+7526 with masses ranging from 0.10--0.17 $M_\odot$. We present the stellar parameters for all 13 stars in Table \ref{tab:stellar_params}. \begin{longrotatetable} \begin{center} \begin{deluxetable*}{lccccccccc} \label{tab:stellar_params} \tablecaption{Stellar Parameters} \tabletypesize{\footnotesize} \tablehead{ \colhead{Star} & \colhead{RA} & \colhead{DEC} & \colhead{Mass} & \colhead{Distance}& \colhead{Rotation \tablenotemark{a}} & \colhead{Semi-amplitude \tablenotemark{b}} & \colhead{Semi-amplitude \tablenotemark{b}} & \colhead{F-Test} & \colhead{Confidence in}\\ \nocolhead{Star} & \nocolhead{RA} & \nocolhead{DEC} & \nocolhead{Mass} & \nocolhead{Distance}& \colhead{Period} & \nocolhead{} & \colhead{error} & \colhead{Statistic} & \colhead{F-Test Statistic \tablenotemark{c}}\\ \nocolhead{yup} & \colhead{hh:mm:ss.s} & \colhead{dd:mm:ss} & \colhead{M$_{\odot}$ }& \colhead{pc} & \colhead{days}& \colhead{mag} & \colhead{mag}& \nocolhead{} & \colhead{\%}} \startdata & & & & &Low-mass Sample & & &\\\hline GJ~170 & 04:30:25.2 & +39:51:00 & 0.17 & 10.89 & 0.72 & 0.0073 & 0.0015 & 1.12 & 57\\ G~97-15 & 05:04:14.8 & +11:03:23 & 0.15 & 10.19 & 0.84 & 0.0015 & 0.00019 & 0.48 & 34\\ LHS~1690 & 04:39:32.0 & +16:15:43 & 0.12 & 11.92 & 3.61 & 0.0037 & 0.0019 & 1.42 & 63\\ LHS~2686 & 13:10:12.7 & +47:45:19 & 0.15 & 12.19 & 28.54 & 0.0081 & 0.0011 & 1.70 & 68\\ LHS~1723 & 05:01:57.0 & -06:56:46 & 0.17 & 5.38 & 91.92 & 0.0056 & 0.00073 & 0.63 & 41\\\hline & & & & &High-mass Sample & & &\\\hline G~7-34 & 04:17:18.5 & +08:49:22 & 0.25 & 14.59 & 0.37 & 0.0111 & 0.00019 & 19.98 & 98\\ G~192-12B & 05:59:55.7 & +58:34:16 & 0.26 & 14.78 & 0.95 & 0.0041 & 0.00018 & 2.34 & 76\\ G~99-49 & 06:00:03.5 & +02:42:23 & 0.24 & 5.21 & 1.81 & 0.0020 & 0.00022 & 0.14 & 13\\ LTT~15516 & 18:42:45.0 & +13:54:17 & 0.25 & 10.94 & 8.06 & 0.0070 & 0.00075 & 1.81 
& 69\\ LP~816-60 & 20:52:33.0 & -16:58:29 & 0.24 & 5.16 & 82.92 & 0.0061 & 0.00069 & 0.33 & 26\\\hline & & & & &High-cadence Sample & & & &\\\hline GJ~1167 & 13:09:31.3 & +28:58:42 & 0.17 & 5.46 & 0.22 & 0.008 & 0.00019 & -- & --\\ GJ~1111 & 08:29:49.5 & +26:46:34 & 0.10 & 3.58 & 0.46 & 0.001 & 0.00011 & -- & --\\ LEP~1431+7526 & 14:31:13.4 & +75:26:42 & 0.16 & 14.52 & 0.63 & 0.002 & 0.00069 & -- & --\\ \enddata \tablenotetext{a}{Photometrically determined rotation period.} \tablenotetext{b}{Semi-amplitude of rotational photometric modulation.} \tablenotetext{c}{F-Test statistic which determines whether the model with $H\alpha$ varying in phase with stellar rotation is favored over a model where the $H\alpha$ variation is consistent with a straight line; see Equation 4.} \end{deluxetable*} \end{center} \end{longrotatetable} \section{Observations and Reductions}\label{sec:spec_obs} \subsection{Spectroscopic Observations} We obtained 10--17 high-resolution spectra of each of the ten targets in the original sample. We used the Tillinghast Reflector Echelle Spectrograph (TRES) located on the 1.5 meter telescope at the Fred Lawrence Whipple Observatory at Mount Hopkins in Arizona. TRES has a resolution of R $\approx$ 44,000 and covers the wavelength range 3900-9100 $\mbox{\normalfont\AA}$. Exposure times ranged from 150s to 3~$\times$~1200s to reach a signal to noise ratio of 3-27 at 7150 $\mbox{\normalfont\AA}$. The spectra were reduced using the standard TRES pipeline \citep{Buchhave2010}. For each star, we scheduled the observations to obtain full phase coverage of the rotational modulation. We acquired the observations from UT 2018 September 01 - December 31. After an initial analysis of the original sample, we subsequently decided to observe GJ~1111, GJ~1167, and LEP 1431+7526. 
We gathered continuous observations of a single star for time spans of 3.94--8.37 hours as follows: we obtained four nights of high-cadence observations for GJ~1167 on UT 2019 March 06, April 03, May 04, and May 05 for 4.37, 7.57, 6.03, and 5.94 hours per night, respectively, with exposure times of 500s. We obtained high-cadence observations of GJ~1111 on UT 2020 January 24 for 8.37 hours with exposure times of 600s. We obtained high-cadence observations of LEP~1431+7526 on UT 2020 February 02 for 3.94 hours with exposure times of 300s. We measured the equivalent widths (EWs) of $H\alpha$ according to the following equation, \begin{equation}\label{eq:EW} \rm EW = \sum_{i=1}^{\rm N_{pix}} {\left( 1 - \frac{F_i(\lambda)}{F_c} \right)} \delta\lambda \end{equation} \noindent where F$_i(\lambda)$ is the flux per pixel included in the $H\alpha$ feature and F$_c$ is the average flux deduced from the adjacent continuum regions. We summed the flux in each pixel, including fractional pixels, to measure the flux contained within 6560.3 $-$ 6565.3 $\mbox{\normalfont\AA}$, which we define as the feature window. We measured the average flux value in continuum regions on either side of the $H\alpha$ feature. We defined the continuum regions from 6554.1 $-$ 6559.1 $\mbox{\normalfont\AA}$ and 6566.5 $-$ 6570.5 $\mbox{\normalfont\AA}$. In order to measure F$_c$, we first determined the average flux contained in each continuum region. F$_c$ is then the average of these two values. We chose the continuum regions so as to avoid tellurics and complex molecular bands from the stellar photosphere. The uncertainty in the equivalent width is the product of the measured EW value and the fractional uncertainties in F$_c$ and F$(\lambda)$ added in quadrature. Negative EW values denote emission. We provide the times of observation, EW values, and their uncertainties in Table \ref{tab:EWs}.
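As a concrete illustration of Eq. \ref{eq:EW}, the sum can be sketched in a few lines of Python. This toy version assumes uniformly spaced pixels supplied as plain lists and skips the fractional-pixel bookkeeping described above; it is not the pipeline code used for the paper.

```python
from statistics import mean

def equivalent_width(wl, flux,
                     feature=(6560.3, 6565.3),
                     cont_blue=(6554.1, 6559.1),
                     cont_red=(6566.5, 6570.5)):
    """EW in Angstroms per Eq. (1); negative values denote emission.
    Toy version: uniform pixel spacing, whole pixels only."""
    def window(lo, hi):
        return [f for w, f in zip(wl, flux) if lo <= w <= hi]
    # F_c: average of the mean fluxes in the two continuum windows
    f_c = 0.5 * (mean(window(*cont_blue)) + mean(window(*cont_red)))
    dlam = wl[1] - wl[0]  # pixel width in Angstroms
    return sum((1.0 - f / f_c) * dlam for f in window(*feature))
```

A flat 50\% absorption across the 5 Angstrom feature window, for example, yields an EW of about +2.5 Angstroms, while a line in emission drives the sum negative.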
\begin{deluxetable}{lccl} \tabletypesize{\scriptsize} \tablecaption{Catalog of Equivalent Widths \label{tab:EWs} (Table Format)} \tablehead{ \colhead{Column} & \colhead{Format} & \colhead{Units} & \colhead{Description}} \startdata 1 & A10 & ... & Star Name \\ 2 & F5.4 & BJD & Barycentric Julian Date \\ 3 & F2.3 & $\mbox{\normalfont\AA}$ & $H\alpha$ Equivalent Width\\ 4 & F2.3 & $\mbox{\normalfont\AA}$ & Error on $H\alpha$ Equivalent Width \\ \enddata \end{deluxetable} \subsection{Photometric Observations}\label{sec:mearth_obs} In order to determine if the variation in $H\alpha$ was correlated with the photometric rotational modulation, we photometrically monitored the first 10 stars in Table 1. We used MEarth North for eight of our targets and MEarth South for the remaining two targets with declinations below 0 degrees. The MEarth-North and MEarth-South arrays each consist of eight 0.4-meter robotic telescopes located on Mt. Hopkins in Arizona and at Cerro Tololo Inter-American Observatory (CTIO), Chile, respectively \citep{Nutzman2008,Irwin2015}. We obtained data on clear nights from UT 2018 September 01 - UT 2018 December 31 to overlap with the spectroscopic observations. In addition, we used all prior observations from the MEarth Survey of these targets. The photometric modulation caused by stellar spots rotating into and out of view allows us to measure rotation periods. We measured the rotation period for each target to ensure that we recovered the same value as that presented in \citet{Newton2016,Newton2018}. We used the methods presented in \citet{Irwin2011,Newton2016,Newton2018}. We searched periods ranging from 0.01--1000 days using a periodogram analysis. We found that all periods are consistent with their previously published values. We used photometric observations from the Transiting Exoplanet Survey Satellite (\textit{TESS}) for one target, GJ~1111. This target was observed during Sector 21 from UT 2020 January 21 - UT 2020 February 18.
We used the Pre-Search Data Conditioning Simple Aperture Photometry (PDCSAP) two-minute cadence light curve shown in Figure \ref{fig:tess_lc}. \section{Spectroscopic Activity and Stellar Rotation}\label{sec:methods} For each target we fit the null hypothesis, H$_{null}$, which is that the equivalent width is constant, \begin{equation} {\rm H}_{null}(t) = C \end{equation} \noindent where C is a constant describing the zero-point offset of the EW data. We also fit an alternate hypothesis, H$_{alt}$, which is that the data are correlated with the stellar rotational modulation, \begin{equation} {\rm H}_{alt}(t) = C_{alt} + A~F(2\pi t/P + \phi) \end{equation} \noindent where C$_{alt}$ is a constant describing the zero-point offset, $A$ is the amplitude of the EW variation, $F$ is a spline fit to all the phased MEarth data for each target, $P$ is the rotation period, and $\phi$ is the phase. We kept the period fixed at the photometrically determined value, varying $A$, $C$, and $\phi$ to find the best-fitting parameters using maximum likelihood. Because H$_{null}$ and H$_{alt}$ are nested hypotheses, we used the F-test to determine whether H$_{alt}$ produced a significantly better fit to the data than H$_{null}$. The F-test is defined as: \begin{equation}\label{eq:ftest} F = \frac{(RSS_{null} - RSS_{alt})/(dof_{null} - dof_{alt})}{RSS_{alt}/dof_{alt}} \end{equation} \noindent where RSS$_{null}$ is the residual sum of squares of the fit to H$_{null}$, RSS$_{alt}$ is the residual sum of squares of the fit to H$_{alt}$, and dof$_{null}$ and dof$_{alt}$ are the degrees of freedom for H$_{null}$ and H$_{alt}$, respectively. The results from the F-test indicated that the null hypothesis (i.e., the variation in $H\alpha$ is not correlated with stellar rotation) is favored in all cases except G~7-34. For G~7-34, we observed that as the $H\alpha$ equivalent width decreases (increased emission), the stellar brightness increases.
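In code, the F statistic for these nested fits reduces to a one-liner; the sketch below uses the standard nested-model form, with the alternate-model degrees of freedom in the denominator, and large F favors the rotation-modulated model.

```python
def f_statistic(rss_null, rss_alt, dof_null, dof_alt):
    """F statistic comparing nested least-squares models."""
    numerator = (rss_null - rss_alt) / (dof_null - dof_alt)
    denominator = rss_alt / dof_alt
    return numerator / denominator
```

For example, halving the residual sum of squares at the cost of two extra parameters gives F = 5 when the null fit has 12 degrees of freedom; comparing F against the corresponding F distribution then sets the confidence level.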
We show the fit to H$_{alt}$ for G~7-34 in Figure \ref{fig:g734} (top panel). The F-test value for this star is 19.98, which is at the 98.3\% confidence level. We show the fit to H$_{alt}$ for the other stars in Figure \ref{fig:all_stars} and provide the F-test statistic and confidence levels for all targets in Table \ref{tab:stellar_params}. We tested whether outlier $H\alpha$ measurements, likely due to enhancement from stellar flares, were affecting our results for the stars that do not show $H\alpha$ varying in phase with stellar rotation. To test this, we chose two stars, LHS~1690 and GJ~170, removed 3$\sigma$ outliers from the mean from each time series, fit H$_{null}$ and H$_{alt}$, and computed the F-test statistic. We found that even with outliers removed, the F-test was insignificant, and thus we conclude the same result: the dominant variation on the other 9 stars is not originating from roughly constant emission that is modulated by spots rotating into and out of view. \begin{figure}[ht] \includegraphics[scale=.5,angle=0]{g734.pdf} \vspace{-1cm} \hspace{-0.53cm} \caption{The top panel shows the photometric MEarth data for G~7-34 phased to the stellar rotation period (blue points, black outlines) along with the spline fit to the MEarth data shown as the black solid line, and binned values plotted in red. The bottom panel shows the $H\alpha$ equivalent width measurements phased to the rotation period of the star (blue points) and the best-fitting model consisting of the spline fit to the photometry, with free parameters describing the amplitude, phase, and vertical offset (black solid line).
\label{fig:g734}} \end{figure} \begin{figure*} \includegraphics[scale=.70,angle=0]{all_other_stars_mearth_ew_9_9.pdf} \caption{The top panel in each pair shows the MEarth light curve for a given target phased to the stellar rotation period (blue points outlined in black) along with the spline fit to the MEarth data shown as the black solid line with binned values plotted in red. The bottom panels show the $H\alpha$ equivalent width measurements phased to the rotation period of the star (blue points) and the best-fitting model consisting of the spline fit to the photometry, with free parameters describing the amplitude, phase, and vertical offset (black solid line). The name of the star is stated in the top right corner of the bottom panels. G 7-34 is not shown here, since it is plotted in Figure \ref{fig:g734}. \label{fig:all_stars}} \end{figure*} \subsection{G 7-34 is a Member of AB Doradus} G 7-34 was different from the other targets in that its $H\alpha$ emission correlated with rotational phase. We investigated the galactic kinematics of G 7-34 using the BANYAN $\Sigma$ Tool \citep{Gagne2018}, which takes as inputs the coordinates, radial velocity, proper motions in right ascension, $\mu_{\alpha}$, and declination, $\mu_{\delta}$, and parallax of the star to examine whether G~7-34 is associated with any young stellar associations. We used parallax = 68.55 $\pm$ 0.08 mas, $\mu_{\alpha}$ = 133.97 $\pm$ 0.15 mas~yr$^{-1}$, and $\mu_{\delta}$ = -377.25 $\pm$ 0.10 mas~yr$^{-1}$. We obtained these values from \textit{Gaia} Data Release 2 \citep{GaiaDR22018}. We measured the radial velocity by cross-correlating each high-resolution TRES spectrum of G 7-34 with a TRES spectrum of Barnard's Star and determined the average radial velocity value. For more details on this method, please see \citet{Winters2020}. We found the average radial velocity to be 15.06 $\pm$ 0.33 km~s$^{-1}$.
We found, with 99.99\% confidence, that G~7-34 is associated with the AB Doradus Moving Group, which has an age of 149 Myr \citep{Zuckerman2004}. After we performed this analysis, we learned that its membership in the AB Doradus moving group was previously identified by \citet{Bell2015}. As this star is young, it is likely over-luminous, leading to a modest overestimation of its mass determined using the absolute K$_s$ magnitude and relations presented in \citet{Benedict2016}. We also checked the colors of G~7-34 to ensure they were consistent with the color-magnitude diagram for stars of this age: using the current literature, we were unable to compile a robust statistical sample of bona fide members of the AB Doradus Moving Group with spectral types ranging from (M0-M9)V, so we used the Pleiades Cluster, which has a comparable age of 125 Myr \citep{Stauffer1998}. We used the V$_J$ and I$_{KC}$ band magnitudes of V$_J$ = 13.84 and I$_{KC}$ = 10.75 presented in \citet{Riedel2014}. After we adjusted to the distance modulus of the Pleiades (m - M) = 5.53 \citep{Stauffer1998} and accounted for reddening E(V$_J$-I$_{KC}$) = 0.06 \citep{Stauffer1987}, we found that V = 18.67 and (V$_J$-I$_{KC}$) = 3.15. We used the color-magnitude diagram presented in Figure 2 from \citet{Stauffer1998} and these values, and found G~7-34 to be consistent with other Pleiades members. We found the rotation period of 0.37 days is consistent with the measured rotation periods of Pleiades members of a similar mass using the data presented in \citet{Redbull2016}; stars with a V-K$_s$ color similar to that of G 7-34 (V-K$_s$ = 5.65) have rotation periods spanning 0.1--1.0 days. This star is younger than the other 9 stars in this study, and it is the only star that displays $H\alpha$ variability in phase with the stellar rotation period. It is not immediately clear why the dominant source of $H\alpha$ variability on this young star is different from that of older active stars.
\section{High-Cadence Observations}\label{sec:HC} Our results indicated that for 9 out of 10 targets the dominant source of variability in $H\alpha$ is not regions of roughly constant emission that are modulated by spots rotating into and out of view. Nonetheless, the $H\alpha$ is clearly varying. With this null result, we probed further into determining the timescale of variability for $H\alpha$, as this may point to the possible mechanism responsible for the variation. We proceeded to obtain high-cadence observations of GJ~1167, GJ~1111, and LEP~1431+7526 as described in Section \ref{sec:spec_obs}. The resulting time series of $H\alpha$ measurements are shown in Figure \ref{fig:HC_obs}. \begin{figure*} \includegraphics[scale=0.90,angle=0]{high_cad.pdf} \caption{Each panel displays the time-series of $H\alpha$ equivalent width measurements obtained during high-cadence observations over the course of one observing night. The top four panels show the four nights obtained for GJ~1167, followed by GJ~1111, and the bottom panel shows LEP~1431+7526. \label{fig:HC_obs}} \end{figure*} We explored the timescales of variability using two methods: an autocorrelation function (ACF) and a Gaussian process (GP). \subsection{Autocorrelation Function} We computed the autocorrelation function for each high-cadence $H\alpha$ time series and plotted these in the middle and right columns of Figure \ref{fig:ACF}. We measured the full width at half maximum (FWHM) of the ACF as a probe for the correlation timescale of the data. We determined the point halfway between the global minimum and global maximum values of the ACF. We then mirrored the ACF so that it is symmetric about zero. The FWHM is the difference between the time lags at which the halfway point of the ACF occurred. We determined the uncertainty on each FWHM measurement using the bootstrap method. We perturbed each EW measurement using 1000 Gaussian deviates with the standard deviation set to the error bar of the measurements.
We then calculated the standard deviation of the FWHM estimates from those samples, taking that to be the uncertainty in the FWHM measurement. We present the FWHM measurements and their uncertainties in Table \ref{tab:params}. We found the FWHM values range from 30--70 minutes. \begin{figure*} \includegraphics[scale=0.80,angle=0]{acf_GP_ACF_flares.pdf} \caption{Left column displays the time-series of $H\alpha$ equivalent width measurements obtained during high-cadence spectroscopic observations. The star name and the date and time of the observation are listed in the upper right. The Gaussian process we used to model the data is shown as the black curve, and the one-sigma uncertainty in the model is shown as the light purple shaded regions. The middle column shows the autocorrelation function of the $H\alpha$ equivalent width measurements (black curve). The purple lines show the ACFs of the fake data sets consisting of randomly drawn EW values. The purple lines are narrower than the measured ACFs, indicating that we have detected a correlation in the values. The right column shows the ACFs of the fake time series of EW measurements consisting of flares injected into the time series at random times with the same decay timescale as measured by the FWHM shown in Table \ref{tab:params} (purple curves). The black curve is the measured ACF of the real data, and is the same as shown in the middle column. We show the rotation period of GJ~1167 as the red dashed line; the rotation periods of the other two targets are longer than the durations of the respective data sets. \label{fig:ACF}} \end{figure*} To test whether we detected a significant correlation, we computed the ACF for 500 randomly distributed $H\alpha$ equivalent width measurements with timestamps equal to those of the true data sets. The EW values were drawn from a normal distribution with mean and standard deviation equal to those of each nightly observation. 
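The ACF width measurement and its bootstrap uncertainty described above can be sketched as follows. This is a minimal illustration with numpy on a synthetic series; the function names and test data are ours, not part of the paper's actual pipeline.

```python
import numpy as np

def acf(x):
    """One-sided autocorrelation function, normalized so acf[0] = 1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    full = np.correlate(x, x, mode="full")
    one_sided = full[full.size // 2:]
    return one_sided / one_sided[0]

def acf_fwhm(x, dt=1.0):
    """FWHM of the mirrored ACF: twice the first lag at which the ACF
    drops below the point halfway between its global max and min."""
    ac = acf(x)
    half = 0.5 * (ac.max() + ac.min())
    below = np.where(ac < half)[0]
    one_sided_width = below[0] if below.size else ac.size - 1
    return 2.0 * one_sided_width * dt

def bootstrap_fwhm_err(x, err, n_boot=1000, dt=1.0, seed=None):
    """Bootstrap the FWHM uncertainty: perturb each point by a Gaussian
    deviate scaled to its error bar, take the std of the FWHM estimates."""
    rng = np.random.default_rng(seed)
    fwhms = [acf_fwhm(x + rng.normal(0.0, err, size=len(x)), dt)
             for _ in range(n_boot)]
    return float(np.std(fwhms))
```

With correlated input (e.g. an AR(1) series), the FWHM tracks the decay of the correlations, which is the quantity reported in Table \ref{tab:params}.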
We found that the ACF that we measured for the actual data is substantially wider than the ACFs derived from the fake data sets (see Figure \ref{fig:ACF}). We concluded that the correlation we detect using an ACF is significant. \subsubsection{Autocorrelation Function of Simulated Flare Time-series} Because the correlation timescales, as deduced from our measurement of the FWHM of the ACFs, were similar to the typical duration of flares, and because flares are known to result in elevated $H\alpha$ emission, we considered whether flares provide a plausible explanation for the correlated variability we observed. We proceeded to test this idea by creating mock data sets as follows. We used the flare template from \citet{Davenport2014}, where each flare is characterized by a time of peak flux, the full-width at half maximum timescale of the flare, and the amplitude of the flare. The decay phase of the flare in the template is parameterized as the sum of two exponentially decaying components, one describing the fast, steep decay and the other describing the slower, more gradual decay. The fast decay timescale dominates the flare duration. We injected 10 flares into 800 minutes of data with fast decay phase timescales ranging from 30--70 minutes following the distribution of FWHM values obtained with the ACF. For a given flare-injected light curve, each flare had the same decay phase timescale within the 30--70 minute range and a randomly assigned amplitude and time of peak flux. We employed boxcar integration on the mock data sets to simulate and match the integration times of our spectroscopic observations for each star. We then computed the ACF and measured the FWHM of the ACF for each mock data set. We found that the measured ACF FWHM values are within 5\% of the input decay timescale. We concluded from this test that flares can reproduce the correlated $H\alpha$ variability we observed in our high cadence time series. 
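The flare-injection step can be sketched as below. For brevity we keep only the dominant fast-decay exponential of the \citet{Davenport2014} template (the full template also has a polynomial rise phase and a second, slower exponential); the function name and parameter choices here are ours.

```python
import numpy as np

def inject_flares(t, n_flares, decay, amp_range=(0.5, 2.0), seed=None):
    """Add instant-rise, exponential-decay flares at random peak times.

    Simplified stand-in for the Davenport et al. (2014) template:
    only the dominant fast-decay exponential component is kept.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros_like(t, dtype=float)
    for _ in range(n_flares):
        t_peak = rng.uniform(t[0], t[-1])
        amp = rng.uniform(*amp_range)
        after = t >= t_peak
        y[after] += amp * np.exp(-(t[after] - t_peak) / decay)
    return y

# 800 minutes of mock data sampled every 2 minutes, 10 flares with a
# single 45-minute fast-decay timescale, as in the test described above
t = np.arange(0.0, 800.0, 2.0)
mock = inject_flares(t, n_flares=10, decay=45.0, seed=42)
```

Feeding such mock series through the same ACF machinery then recovers an FWHM close to the injected decay timescale.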
We show the ACFs of the mock data sets in the right panel of Figure \ref{fig:ACF}. We note that some of the ACFs of the real data show side lobes, notably GJ 1167, for which the ACF of the second night, UT 2019 April 3, displays a prominent side lobe at the rotation period of 316.8 minutes. We used the Python package \textit{Statsmodels} \citep{seabold2010statsmodels} to compute the ACF and the 99.7\% significance level interval for the second night of GJ 1167 observations. In Figure \ref{fig:sig_gj1167}, we plot the ACF as well as the 99.7\% significance level interval. From this, we conclude that the side lobes in the ACF are not significant. Furthermore, we see in Figure \ref{fig:ACF} that side lobes appear often in the simulated ACFs even though periodic phenomena are not present in the mock flare data sets, as the flare times are chosen randomly. Additional evidence that the side lobes observed in the ACF of the real data are not significant is that they are similar in size to, or smaller than, those observed in the mock data. These side lobes likely result from the time offsets of the individual, relatively large flares. In addition, we conducted a periodogram analysis on the four full nights of data for GJ 1167, finding no significant peaks at the rotation period or any of its harmonics. We show the results of the periodogram in Figure \ref{fig:sig_gj1167}. \begin{figure}[ht] \includegraphics[scale=.6,angle=0]{plot_acf_gj1167_periodgram.pdf} \vspace{-1cm} \hspace{-0.53cm} \caption{The top panel shows the auto-correlation function for GJ 1167 on UT 2019 April 3. The purple contour shows the 99.7\% significance level interval. The bottom panel shows the Lomb-Scargle periodogram of the combined four nights of GJ 1167. The purple line denotes the rotation period of GJ 1167 at 0.22 days. The orange line represents the peak of the periodogram, which occurs at 0.74 days. 
\label{fig:sig_gj1167}} \end{figure} \subsection{Gaussian Process} We probed the timescale upon which the data are correlated using a Gaussian process and the Matern 1/2 kernel, which is given by: \begin{equation}\label{eq:gp} k(t) = \eta^2 {\rm exp}\left( \frac{-t}{\tau} \right) + \sigma^2 \end{equation} \noindent where $\tau$ is the timescale of the correlations, $\eta^2$ is the variance of the correlations, and $\sigma^2$ is a white noise term. We used maximum likelihood estimation to determine the hyper-parameters $\tau$, $\eta$, and $\sigma$. We used {\sc scikit-learn} \citep{scikit-learn} to implement the GP and for the parameter estimation. In Figure \ref{fig:ACF} we show the GP fit for each high-cadence observation. We chose the Matern 1/2 kernel because it describes covariances observed in fast-rise, exponential-decay processes. Given the results of the injected flare ACF analysis above, we posited that the correlation timescale we observed in $H\alpha$ is related to stellar flares, which show an exponential decay in their decay phase and can enhance chromospheric activity indicators such as $H\alpha$ \citep{Kowalski2009,Kowalski2013,Fuhrmeister2018,DiMaio2020}. In this way, $\tau$ probes the e-folding timescale of the flare decay. We used bootstrap methods to determine the uncertainties in $\tau$. We found that the timescale of variability for the high-cadence data sets ranges from 20--45 minutes. We show the best fit $\tau$ values and their bootstrap uncertainties in Table \ref{tab:params}. We took these values of $\tau$ to be the true timescale of variability on the star instead of those values measured by the FWHM of the ACF. We also determined the timescale, $\tau$, of 100 mock flare data sets for each night of observations for GJ~1167, GJ~1111, and LEP~1431+7526 using this GP analysis. 
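The maximum-likelihood estimate of $\tau$ can be illustrated directly from the kernel in Equation \ref{eq:gp}. The sketch below builds the covariance matrix and grid-searches the negative log marginal likelihood with plain numpy; the paper's analysis uses scikit-learn instead, and the helper names and grid are ours.

```python
import numpy as np

def neg_log_likelihood(t, y, tau, eta, sigma):
    """GP negative log marginal likelihood for a Matern-1/2
    (exponential) kernel: K_ij = eta^2 exp(-|t_i - t_j|/tau) + sigma^2 I."""
    dt = np.abs(t[:, None] - t[None, :])
    K = eta**2 * np.exp(-dt / tau) + sigma**2 * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
            + 0.5 * len(t) * np.log(2.0 * np.pi))

def fit_tau(t, y, tau_grid, eta, sigma):
    """Pick the tau on the grid that maximizes the marginal likelihood."""
    nlls = [neg_log_likelihood(t, y, tau, eta, sigma) for tau in tau_grid]
    return tau_grid[int(np.argmin(nlls))]
```

On data simulated with a known correlation timescale, the likelihood peaks near the input $\tau$, which is what the full hyper-parameter optimization exploits.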
We find that if we tune the duration of the injected flare data sets so that the average output ACF FWHM value is consistent with the FWHM value we measured for the true data sets, the GP analysis returns timescales that are consistent with the GP values estimated for the true data sets. We provide the median and standard deviation of $\tau$ over the 100 mock flare data sets for each night in Table \ref{tab:params}. \begin{center} \begin{deluxetable*}{llccc} \label{tab:params} \tablehead{ \colhead{Star} & \colhead{Date} & \colhead{GP $\tau$} & \colhead{ACF FWHM} & \colhead{GP $\tau$ Injected} \\ \nocolhead{} & \colhead{UT} & \colhead{minutes} & \colhead{minutes} & \colhead{minutes}} \startdata GJ~1167 & 2019 Mar 06 & 19.4$\pm$ 2.8 & 34.5 $\pm$ 4.2 & 22.5 $\pm$ 9.5 \\ GJ~1167 & 2019 Apr 03 & 17.7$\pm$ 2.5 & 34.5 $\pm$ 4.6 & 22.5 $\pm$ 9.5 \\ GJ~1167 & 2019 May 04 & 20.0$\pm$ 2.7 & 40.3 $\pm$ 3.8 & 25.4 $\pm$ 7.6 \\ GJ~1167 & 2019 May 05 & 51.3$\pm$ 8.0 & 69.1 $\pm$ 4.8 & 46.6 $\pm$ 8.0\\ GJ~1111 & 2020 Jan 24 & 33.0$\pm$ 2.5 & 57.6 $\pm$ 5.4 & 36.2 $\pm$ 14.2\\ LEP~1431+7526 & 2020 Feb 02 & 56.3 $\pm$ 8.2 & 60.1 $\pm$ 8.7 & 45.4 $\pm$ 9.0 \\ \enddata \caption{GP $\tau$ denotes the correlation timescale of the high cadence data sets as determined by the Gaussian process regression. ACF FWHM denotes the measured full-width at half maximum of the auto-correlation function. GP $\tau$ Injected denotes the median correlation timescale of the fake time series of EW measurements consisting of randomly injected flares with the same decay timescale as is measured by the ACF FWHM.} \end{deluxetable*} \end{center} \subsection{ARIMA Modeling of GJ 1167} As a separate path to explore the correlations in the $H\alpha$ time series, we used the AutoRegressive Integrated Moving Average (ARIMA) model on the full four-night data set of GJ 1167. ARIMA is useful for modeling stationary time series and is often employed as a forecasting tool \citep{Feigelson2018}. 
ARIMA can be applied to data only if they are evenly spaced, so we first binned the data for GJ 1167 into bins of 8.6 minutes. We then used the Python package \textit{Statsmodels} \citep{seabold2010statsmodels} to implement ARIMA. We find that an ARIMA(1,0,0) model is favored to describe the GJ 1167 time series, with an Akaike information criterion equal to 202. ARIMA(1,0,0) describes a damped random walk, which is sometimes termed an Ornstein-Uhlenbeck process \citep{Uhlenbeck1930}. This describes a time series that has a tendency to return to its mean value, where the strength of the restoring attraction is proportional to the size of the deviation from the mean. This is consistent with our physical picture, namely that the $H\alpha$ EW returns to its quiescent value after a flare. The ARIMA framework, however, does not quantify the return timescale, but we derive this timescale using the auto-correlation and Gaussian process methods described earlier. \subsection{Simultaneous TRES and \textit{TESS} Observations}\label{sec:inject} For one high-cadence target, GJ~1111, we gathered the spectroscopic time series described above at the same time the star was also observed by \textit{TESS}. The TRES observations cover 8.37 hours of the 11.04-hour rotation period. \begin{figure*} \includegraphics[scale=0.70,angle=0]{gj1111_tess_lc_detrended.pdf} \caption{The top panel shows the \textit{TESS} PDCSAP light curve of GJ~1111 during the \textit{TESS} orbit where simultaneous spectroscopic observations were obtained. The red points indicate the times during which the simultaneous TRES observations were gathered. The teal curve is the GP model we used to model the light curve. The bottom panel shows the residuals, with flares found by our flare finding algorithm shown with an orange tick mark.\label{fig:tess_lc}} \end{figure*} We used the \textit{TESS} data to probe whether the variation of $H\alpha$ observed in Figure \ref{fig:HC_obs} can be attributed to stellar flares. 
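The ARIMA(1,0,0) interpretation discussed above — a damped random walk whose autoregressive coefficient maps onto an e-folding return timescale — can be illustrated with a small numpy sketch. The paper's actual fit uses \textit{Statsmodels}; the least-squares estimator below and its names are ours.

```python
import numpy as np

def fit_ar1(y, dt=1.0):
    """Least-squares AR(1) coefficient phi of a mean-subtracted series,
    plus the e-folding timescale of the equivalent Ornstein-Uhlenbeck
    process, tau = -dt / ln(phi)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
    return phi, -dt / np.log(phi)

# simulate a damped random walk with phi = 0.9 and recover it;
# dt = 8.6 minutes matches the binning used in the text
rng = np.random.default_rng(0)
y = np.zeros(2000)
for i in range(1, len(y)):
    y[i] = 0.9 * y[i - 1] + rng.normal()
phi_hat, tau_hat = fit_ar1(y, dt=8.6)
```

This makes explicit why the ARIMA order alone does not give a timescale: the conversion requires the fitted coefficient and the bin width together.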
To detect stellar flares, we first removed the photometric modulation due to stellar spots rotating into and out of view. We proceeded as described in \citet{Medina2020}: we first modeled the light curve using a Gaussian process taking the form of a combination of two simple harmonic oscillators from the python package {\sc exoplanet} \citep{exoplanet:foremanmackey17}. We then searched for flares, which are defined as three consecutive 3$\sigma$ positive deviations. Our algorithm is able to detect flares (30\% completeness; see \citealt{Medina2020}) with energies greater than 4$\times$10$^{30}$ ergs in the \textit{TESS} bandpass. We detected a total of 24 flares in the full 27 day, two-orbit \textit{TESS} light curve. The detected flares range from 4.3$\times$10$^{30}$ to 8.2$\times$10$^{31}$ ergs. Our flare finding algorithm did not detect any flares during the overlapping timestamps. We show the detected flares for the \textit{TESS} orbit that overlapped with the spectroscopic observations in Figure \ref{fig:tess_lc}. \begin{figure*} \includegraphics[scale=0.70,angle=0]{Gj1111_tres_tess_LC.pdf} \caption{The top panel shows the \textit{TESS} PDCSAP light curve (black points), with red points showing the \textit{TESS} light curve binned to match the times and durations of exposures of each TRES spectrum. The middle panel shows the residuals of the \textit{TESS} light curve of GJ~1111 after the subtraction of the Gaussian process model we used to detrend the data. The bottom panel shows the time-series of the $H\alpha$ EW measurements. \label{fig:sim}} \end{figure*} In Figure \ref{fig:sim}, we show the \textit{TESS} PDCSAP light curve, the detrended light curve, and the $H\alpha$ equivalent width measurements during the time of simultaneous observations. 
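The "three consecutive $3\sigma$ positive deviations" criterion can be sketched as follows. This is a simplified illustration using a single global scatter estimate; real pipelines such as that of \citet{Medina2020} estimate the noise more carefully and iterate, and the function name here is ours.

```python
import numpy as np

def find_flares(residuals, n_consecutive=3, n_sigma=3.0):
    """Return (start, end) index pairs of candidate flares: runs of at
    least n_consecutive points lying more than n_sigma standard
    deviations above zero in the detrended residuals."""
    residuals = np.asarray(residuals, dtype=float)
    threshold = n_sigma * np.std(residuals)
    flares, run = [], []
    for i, r in enumerate(residuals):
        if r > threshold:
            run.append(i)
        else:
            if len(run) >= n_consecutive:
                flares.append((run[0], run[-1]))
            run = []
    if len(run) >= n_consecutive:  # run reaching the end of the series
        flares.append((run[0], run[-1]))
    return flares
```

The consecutive-point requirement is what makes the detector robust against single-point noise excursions, at the cost of missing flares shorter than three cadences.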
Although our algorithm did not detect a flare during the simultaneous observations, we did observe a positive excursion in both the detrended and PDCSAP \textit{TESS} light curves that aligns with the peak of the $H\alpha$ equivalent width time series (Figure \ref{fig:sim}). We believe this feature observed in both the TRES and \textit{TESS} data sets may be a low-energy flare that is below the sensitivity of our algorithm. To test whether the $H\alpha$ time series and the \textit{TESS} photometric time series are correlated, we first binned the \textit{TESS} data to match the exposure times of each TRES spectrum. We then calculated the cross-correlation of the $H\alpha$ EWs and the binned \textit{TESS} data. We tested the correlation of the $H\alpha$ time series with the binned PDCSAP light curve and with the detrended light curve by calculating the Spearman's Rank and Pearson Correlation coefficients, respectively, as well as their associated P-values. The P-value indicates the probability of an uncorrelated data set producing the same coefficient value. We found a Pearson Correlation coefficient = 0.700, p-value = 2.04$\times$10$^{-6}$ for the detrended light curve and the $H\alpha$ time series, and a Spearman's Rank Correlation coefficient = 0.676, p-value = 5.93 $\times$10$^{-6}$ for the correlation between the PDCSAP light curve and the $H\alpha$ time series. We examined the correlation between the detrended and undetrended PDCSAP light curves and the $H\alpha$ time series with the potential flare excluded (keeping 0 $<$ t $<$ 400 minutes) to understand the effect of the flare on the correlation. We found a Pearson Correlation coefficient = -0.01, p-value = 0.91 and a Spearman's Rank Correlation coefficient = 0.78, p-value = 1.59$\times$10$^{-6}$, respectively, for the detrended and undetrended light curves. 
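The two coefficients used above differ in what they measure: Pearson tests for a linear relation, while Spearman's rank coefficient tests for any monotonic relation. A minimal numpy sketch (Spearman computed as the Pearson correlation of the ranks; ties are not handled, which suffices for continuous measurements):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

def spearman(x, y):
    """Spearman's rank coefficient: Pearson correlation of the ranks
    (no tie correction)."""
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return pearson(rank(x), rank(y))
```

A monotonic but nonlinear relation (e.g. $y = x^3$) gives a Spearman coefficient of exactly 1 while the Pearson coefficient falls below 1, which is why reporting both is informative.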
For completeness, we also computed the two correlation coefficients for the region including only the flare, at times greater than 400 minutes. We find values of -0.82, p-value = 7.34$\times$10$^{-4}$, and -0.63, p-value = 6.71$\times$10$^{-3}$ for the Pearson and Spearman's Rank Correlations, respectively. Finding correlations for the $H\alpha$ time series both with the potential flare excluded and for the flare-only portion of the light curve suggests that $H\alpha$ variability may include contributions from fixed magnetic phenomena rotating into and out of view as well as flares. \section{Discussion and Conclusion}\label{sec:conclusion} We used measurements of $H\alpha$ equivalent widths as a function of time to examine the timescales upon which this chromospheric activity indicator varies on fully convective M-dwarfs with masses between 0.1 and 0.3 solar masses. We studied 13 active stars with rotation periods spanning from 0.21 to 92 days. We first examined whether the dominant source of variation in $H\alpha$ is due to roughly constant emission from localized magnetic phenomena such as stellar spots or plages, and thus varies in phase with the stellar rotation period, and, if so, whether the correlation shows a mass dependence. Only one star, G~7-34, showed variations that are correlated with the stellar rotation period; as the stellar brightness increases, the star shows increased $H\alpha$ emission. We found, through a kinematic analysis, that this star is a member of the AB Doradus moving group and thus younger than the other nine stars. G~7-34 is the most active star in our sample, showing the greatest amount of $H\alpha$ in emission, and it also has the largest photometric amplitude. We posit that an asymmetric distribution of potentially larger stellar spots could emit continuously, which could account for the variation of $H\alpha$ being in phase with the stellar rotation. 
The remaining nine stars showed $H\alpha$ variability, but the variation is inconsistent with the rotational phase of the star. These stars may have more numerous, small spots that display more sporadic chromospheric emission at all rotational phases, with flares erupting randomly from these spots and fading away. This stands in contrast to findings for more massive inactive M-dwarfs, which show that chromospheric activity indicators like Ca II H+K and $H\alpha$ trace the stellar rotation periods \citep[e.g.][]{Suarez2016,Fuhrmeister2019,Shofer2019}. These previous findings indicated that the dominant source of activity variability originates from relatively constant emission from localized manifestations of the magnetic field through active regions in the chromosphere. We obtained six nights of high-cadence observations for three stars. We found that each star shows correlated variability on timescales of 20-45 minutes, which is much shorter than the rotation period. \citet{Hawley2014} showed that the flare decay timescale distribution for the active mid M dwarf GJ~1243 ranges from 10--100 minutes. The variability timescale that we observed is consistent with these timescales. From simultaneous spectroscopic and photometric monitoring (with \textit{TESS}) of GJ 1111, we found that as the stellar brightness increased, the amount of $H\alpha$ in emission also increased. This correlation may be due to low-energy flaring events, which is in agreement with the results of \citet{Lee2010} and \citet{Bell2012} for stars with masses below the convective boundary. In those studies, they noted that $H\alpha$ is variable, but did not determine whether the variability is correlated in time. 
The enhanced $H\alpha$ emission caused by low-energy flares may be masking the underlying localized $H\alpha$ activity observed in inactive stars with $H\alpha$ in absorption \citep{Suarez2016,Fuhrmeister2019}, where the flare rate for these stars can be reduced by 6 orders of magnitude or more \citep{Medina2020}. The question of how fully convective M-dwarfs generate and sustain their magnetic fields is of intrinsic interest and bears upon the habitability of their orbiting planets. These stars provide the only near-term opportunity to characterize the atmospheres of terrestrial exoplanets. As such, it is essential that we understand the past and present stellar radiation environment, including the timescales upon which magnetic phenomena are occurring. Knowing these timescales is useful both for our understanding of the manifestations of the magnetic dynamo and for radial velocity surveys. The major inhibitor to detecting planetary signals in radial velocity surveys is disentangling them from the underlying stellar variability \citep{Robertson2014a, Robertson2014b, Robertson2015}. In this paper, we explored the timescale of variability for the chromospheric activity indicator $H\alpha$. Our results suggest that stellar flares present a feasible mechanism to describe $H\alpha$ variability on active mid-to-late M dwarfs, as opposed to roughly constant emission from active regions, such as stellar spots or plages, rotating into and out of view, as has been observed on inactive early M-dwarfs. Whether this relationship extends to inactive mid-to-late M dwarfs has not yet been studied extensively. In a future study, we intend to examine the link between low-energy flaring behavior and $H\alpha$ variability further by obtaining simultaneous spectroscopic and \textit{TESS} observations for a larger sample of stars. \acknowledgments{We would like to thank the referee for a thoughtful review that improved the manuscript. 
We thank David Latham, Gilbert Esquerdo, Perry Berlind, and Michael Calkins for scheduling and collection of the TRES data. This work is made possible by a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. The MEarth Team gratefully acknowledges funding from the David and Lucile Packard Fellowship for Science and Engineering (awarded to D.C.) and support from the National Science Foundation under grants AST-0807690, AST-1109468, AST-1004488 (Alan T. Waterman Award), and AST-1616624. This material is based upon work supported by the National Aeronautics and Space Administration under grant 80NSSC19K0635 in support of the \textit{TESS} Cycle 2 Guest Investigator program, and grant 80NSSC19K1726 issued through the XRP program. This paper includes data collected by the \textit{TESS} mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). This work has made use of data from the European Space Agency mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. 
This work has used data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center at the California Institute of Technology, funded by NASA and NSF.} \facilities{\textit{TESS}, MEarth, FLWO:1.5m (TRES)} \software{{\sc celerite} \citep{exoplanet:foremanmackey17}, {\sc exoplanet} \citep{exoplanet:exoplanet}, {\sc PYMC3} \citep{exoplanet:pymc3}, {\sc python}} This research made use of {\sc exoplanet} \citep{exoplanet:exoplanet} and its dependencies \citep{exoplanet:astropy13, exoplanet:astropy18, exoplanet:exoplanet, exoplanet:foremanmackey17, exoplanet:foremanmackey18, exoplanet:kipping13, exoplanet:luger18, exoplanet:pymc3, exoplanet:theano}. \clearpage \bibliographystyle{aasjournal}
Ole Gunderson Kinney (June 1, 1858 – December 26, 1922) was an American merchant and Republican politician. He served two terms in the Wisconsin State Assembly, representing Dunn County, and was elected to the Wisconsin State Senate in 1922, but died before taking office.

Biography

Born in Dane County, Wisconsin, Kinney's family moved to Crawford County in 1860 and then to Dunn County in 1863. He was educated in the public schools. He was a merchant and grain trader. Kinney was president of the Community Savings Bank in Superior, Wisconsin. Kinney served as the Colfax Town clerk and also served as chairman of the Colfax Town Board. From 1903 to 1907, Kinney served in the Wisconsin State Assembly as a Republican. In 1922, Kinney was elected to the Wisconsin State Senate from the 11th State Senate district. Kinney had a stroke on October 31, 1922, and died in Superior, Wisconsin, before he took the oath of office.

References

External links

1858 births
1922 deaths
People from Dane County, Wisconsin
People from Dunn County, Wisconsin
Politicians from Superior, Wisconsin
Businesspeople from Wisconsin
Mayors of places in Wisconsin
Republican Party Wisconsin state senators
Burials in Wisconsin
People from Colfax, Wisconsin
Elected officials who died without taking their seats
Republican Party members of the Wisconsin State Assembly
package com.echsylon.kraken.request; import com.echsylon.kraken.internal.CallCounter; /** * This class can build a request that tries to cancel a previously placed * withdrawal of funds. * <p> * For further technical details see Kraken API documentation at: * https://www.kraken.com/help/api */ @SuppressWarnings("WeakerAccess") public class WithdrawCancellationRequestBuilder extends RequestBuilder<Boolean, WithdrawCancellationRequestBuilder> { /** * Creates a new request builder. * * @param callCounter The request call counter. May be null. * @param baseUrl The base url of the request. * @param key The user API key. * @param secret The corresponding secret. */ public WithdrawCancellationRequestBuilder(final CallCounter callCounter, final String baseUrl, final String key, final byte[] secret) { super(1, callCounter, key, secret, baseUrl, "POST", "/0/private/WithdrawCancel", Boolean.class); } /** * Sets the one time password to use when performing the request. * * @param oneTimePassword The password. * @return This request builder instance allowing method call chaining. */ public WithdrawCancellationRequestBuilder useOneTimePassword(final String oneTimePassword) { data.put("otp", oneTimePassword); return this; } /** * Sets the reference id request property. * * @param referenceId The reference id of the withdrawal to cancel. * @return This request builder instance allowing method call chaining. */ public WithdrawCancellationRequestBuilder useReferenceId(final String referenceId) { data.put("refid", referenceId); return this; } }
\section{Introduction} Let $\mathbb{K}$ be a field, and let $R=\mathbb{K}[X_1, \ldots , X_n]$ be the positively $\mathbb{Z}$-graded polynomial ring with $\deg{X_i}=d_i\geq 1$ for every $i=1,\ldots ,n$. Consider a finitely generated graded module $M=\oplus_k M_k$ over $R$. The graded components $M_k$ of $M$ are finite-dimensional $\mathbb{K}$-vector spaces, and, since $R$ is positively graded, $M_k=0$ for $k \ll 0$. The formal Laurent series \[ H_M(t):=\sum_{k \in \mathbb{Z}} (\dim_{\mathbb{K}} M_k)t^k \in \LS, \] is called the Hilbert series of $M$. Obviously every coefficient of this series is nonnegative. Moreover, it is well-known that $H_M(t)$ can be written as a rational function with denominator $(1-t^{d_1})\dotsm(1-t^{d_n})$. In fact, in the standard graded case (i.e. $d_1 = \dotsb = d_n = 1$) these two properties characterize the Hilbert series of finitely generated $R$-modules among the formal Laurent series in $\LS$, cf. \cite[Cor. 2.3]{U}. In the non-standard graded case, the situation is more involved. A characterization of Hilbert series was obtained by the second and third author in \cite{MU}: \begin{theorem}[Moyano-Uliczka]\label{thm:jan} Let $P(t) \in \LS$ be a formal Laurent series which is rational with denominator $(1-t^{d_1})\dotsm(1-t^{d_n})$. Then $P$ can be realized as the Hilbert series of some finitely generated $R$-module if and only if it can be written in the form \begin{equation} \label{2eq1} P(t) = \sum_{I \subseteq \{1, \ldots , n\}} \frac{Q_I(t)}{\prod_{i \in I} \left(1-t^{d_i} \right)} \end{equation} with nonnegative $Q_I \in \mathbb{Z}[t,t^{-1}]$. \end{theorem} However, it remained an open question in \cite[Remark 2.3]{MU} whether the condition of the theorem is satisfied by \emph{every} rational function with the given denominator and nonnegative coefficients. In this paper we answer this question in the negative. 
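Expanding such rational functions into power series is easy to do mechanically, which is convenient for experimenting with nonnegativity of coefficients. The following sketch (plain Python; the function name and interface are ours, not from the paper) computes the first coefficients of $q(t)/\prod_i (1-t^{d_i})$ for a polynomial numerator $q$:

```python
def series_coeffs(num, degs, n_terms):
    """First n_terms coefficients of num(t) / prod_i (1 - t^{d_i}).

    `num` lists the numerator coefficients [c_0, c_1, ...]; dividing by
    (1 - t^d) amounts to the running sum c[k] += c[k - d].
    """
    c = (list(num) + [0] * n_terms)[:n_terms]
    for d in degs:
        for k in range(d, n_terms):
            c[k] += c[k - d]
    return c
```

For instance, the coefficients of $1/((1-t^2)(1-t^3))$ count the representations $n = 2a + 3b$ with $a, b \geq 0$.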
In Section \ref{section:examples} we provide examples of rational functions that do not admit a decomposition \eqref{2eq1} and are thus not realizable as Hilbert series. On the other hand, we show the following in Corollary \ref{cor:rat} and Theorem \ref{thm:2var}: \begin{theorem} Assume that the degrees $d_1, \dotsc, d_n$ are pairwise either coprime or equal. Then the following holds: \begin{enumerate} \item If $n=2$, then every rational function $P(t) \in \LS$ with the given denominator and nonnegative coefficients admits a decomposition as in \eqref{2eq1}. \item In general, the same still holds \emph{up to multiplication with a scalar}. \end{enumerate} \end{theorem} We will also provide an example showing that the conclusion does not hold without the assumption that the degrees are pairwise coprime or equal. \section{Proofs of the main results} As a general reference for further details about Hilbert series, the reader is referred to \cite{BH}. Furthermore, we are going to use some well-known facts about quasipolynomials and power series expansions of rational functions. For details about these topics, we refer the reader to Chapter 4 of \cite{stanley}. The following notation will be useful. For $\delta \in \mathbb{N}$ and $0 \leq j \leq \delta -1$ set \[ e_{\delta,j}(h) \defa \begin{cases} 1 &\text{ if } h \equiv j \mod \delta, \\ 0 & \text{ otherwise}. \end{cases}\] Obviously, the functions $e_{\delta,0},\dotsc,e_{\delta,\delta-1}$ form a basis of the space of $\delta$-periodic functions $\mathbb{N}\rightarrow\mathbb{Q}$. We prepare three lemmata before we present the proofs of our main results. \begin{lemma}\label{lem:periodic} Let $c_1, \dotsc, c_r: \mathbb{N} \rightarrow \mathbb{Q}$ be periodic functions of pairwise coprime periods $\delta_1, \dotsc, \delta_r$, such that their sum takes nonnegative values. 
Then there exist nonnegative periodic functions $\tilde{c}_1, \dotsc, \tilde{c}_r: \mathbb{N} \rightarrow \mathbb{Q}$ of periods $\delta_1, \dotsc, \delta_r$ such that $\sum_i c_i = \sum_i \tilde{c}_i$. Moreover, if the sum of the $c_i$ takes nonnegative integral values, then the $\tilde{c}_i$ can be chosen to be integral valued. \end{lemma} \begin{proof} Let us define the coefficients $\mu(i,j)$ by requiring \[ c_i = \sum_{j=0}^{\delta_i-1} \mu(i,j) e_{\delta_i,j} \] For each $i>1$, let $m_i$ be the minimum of $\mu({i,0}), \dotsc, \mu({i,\delta_i-1})$ and choose a $k_i$ such that $m_i = \mu(i, k_i)$. Set $\tilde{\mu}({i,j}) \defa \mu({i,j}) - m_i$ for $1 < i \leq r$, $\tilde{\mu}({1,j}) \defa \mu({1,j}) + \sum_i m_i$, and define $\tilde{c}_i \defa \sum_{j=0}^{\delta_i-1} \tilde{\mu}(i,j) e_{\delta_i,j}$. Using the relation \[ \sum_{j=0}^{\delta-1} e_{\delta,j} = \sum_{j=0}^{\delta'-1} e_{\delta',j} \] for all $\delta,\delta' \in \mathbb{N}$, one sees easily that \begin{equation} \sum_{i=1}^r c_i = \sum_{i=1}^r \sum_{j=0}^{\delta_i - 1} \mu({i,j}) e_{\delta_i,j} = \sum_{i=1}^r \sum_{j=0}^{\delta_i - 1} \tilde{\mu}({i,j}) e_{\delta_i,j} = \sum_{i=1}^r \tilde{c}_i. \end{equation} By construction, it holds that $\tilde{\mu}({i,j}) \geq 0$ for $i > 1$ and all $j$. We claim that also $\tilde{\mu}({1,j}) \geq 0$ for all $j$. To prove this, assume to the contrary that there exists an index $j_0$ such that $\tilde{\mu}({1,j_0}) < 0$. Note that by construction $\tilde{\mu}(i, k_i) = 0$ for $1 < i \leq r$. By the Chinese remainder theorem, there exists an $0 \leq h < \delta_1 \delta_2 \dotsm \delta_r$ such that $h \equiv j_0 \mod \delta_1$ and $h \equiv k_i \mod \delta_i$ for $i > 1$. Then \begin{align*} \sum_{i=1}^r c_i(h) &= \sum_{i=1}^r \tilde{c}_i(h) \\ &= \tilde{\mu}(1,j_0) + \tilde{\mu}(2, k_2) + \tilde{\mu}(3, k_3) + \dotsc + \tilde{\mu}(r,k_r) \\ &= \tilde{\mu}(1,j_0) < 0, \end{align*} contradicting the assumption. 
Now we turn to the case that $\sum_{i=1}^r c_i(h) \in \mathbb{Z}$ for all $h \in \mathbb{N}$. By the same argument as above, for $0 \leq j \leq \delta_1-1$ there exists an $h \in \mathbb{N}$ such that $ \sum_{i=1}^r c_i(h) = \tilde{\mu}(1, j) $, hence $\tilde{\mu}(1,j) \in \mathbb{Z}$ for all $j$. Further, for each $1 < i \leq r$ and each $0 \leq j \leq \delta_i - 1$, there exists an $h \in \mathbb{N}$ such that $h \equiv j \mod \delta_i$ and $h \equiv k_\ell \mod \delta_\ell$ for each $1 < \ell \leq r$, $\ell \neq i$. Thus $\sum_{i=1}^r c_i(h) = \tilde{\mu}(1, j_0) +\tilde{\mu}(i,j)$ for some $j_0$. It follows that $\tilde{\mu}(i,j) \in \mathbb{Z}$. We conclude that $\tilde{c}_i(h) \in \mathbb{Z}$ for all $1\leq i \leq r$ and all $h \in \mathbb{N}$. \end{proof} \begin{lemma}\label{lem:construct} Let $c:\mathbb{N} \rightarrow \mathbb{Q}$ be a nonnegative periodic function of period $\delta \in \mathbb{N}$ and let $\beta \in \mathbb{N}$. Then there exists a polynomial $q \in \mathbb{Q}[t]$ with nonnegative coefficients, such that the coefficient function of the series expansion of \[ \frac{q(t)}{(1-t^\delta)^{\beta}} \] is a quasipolynomial of degree $\beta - 1$ whose leading coefficient equals $c$. \end{lemma} \begin{proof} Write $c = \sum_i c_i e_{\delta,i}$ with $c_i \in \mathbb{Q}$ nonnegative. Recall that the coefficient function of \[ \frac{t^i}{(1-t^\delta)^{\beta}} = \sum_{h \geq 0} \binom{h + \beta - 1}{\beta -1} t^{\delta h+i} \] is a quasipolynomial of degree $\beta -1$ with leading coefficient function $(1 / ((\beta-1)!\,\delta^{\beta-1}))\, e_{\delta,i}$. So the polynomial $q(t) \defa (\beta-1)!\,\delta^{\beta-1} \sum_{i=0}^{\delta-1} c_i t^i$ satisfies the claim. \end{proof} \begin{lemma}\label{lem:shift} Let $p_1, p_2$ be two quasipolynomials of the same period and the same degree. Assume moreover that the leading coefficient function of $p_1$ is nonnegative and greater than or equal to the leading coefficient function of $p_2$. 
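The proof of Lemma \ref{lem:periodic} is constructive, and the construction is easy to run on concrete coefficient data. The following sketch (plain Python; names ours) represents each $c_i$ by its list of coefficients $\mu(i,0),\dotsc,\mu(i,\delta_i-1)$ and performs the redistribution; it assumes pairwise coprime periods and a nonnegative sum, as in the lemma:

```python
def redistribute(coeffs):
    """Shift each summand (i > 1) up by minus its minimum so it becomes
    nonnegative, absorbing the total constant shift into the first
    summand; the sum of the periodic functions is unchanged, because
    adding a constant to a delta-periodic function and subtracting the
    same constant from another changes neither the total."""
    new = [list(c) for c in coeffs]
    shift = 0
    for i in range(1, len(new)):
        m = min(new[i])
        new[i] = [v - m for v in new[i]]
        shift += m
    new[0] = [v + shift for v in new[0]]
    return new

def value(coeffs, periods, h):
    """Evaluate sum_i c_i(h), where c_i has period periods[i]."""
    return sum(c[h % p] for c, p in zip(coeffs, periods))
```

For example, with periods $(2,3)$ and coefficients $(3,1)$ and $(-1,2,1)$, the sum is nonnegative, and the redistribution returns the nonnegative summands $(2,0)$ and $(0,3,2)$ with the same sum.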
Then there exists a $k \in \mathbb{N}$ such that $p_1(h) - p_2(h-k) \geq 0$ for all $h \geq k$. \end{lemma} \begin{proof} Let $\delta \in \mathbb{N}$ be the common period of $p_1$ and $p_2$. We only consider values of $k$ that are multiples of $\delta$, so we set $k = \tilde{k} \delta$. Let \[ p_1(h) = \sum_{i=0}^\ell a_i(h) h^i \qquad \text{ and } \qquad p_2(h) = \sum_{i=0}^\ell b_i(h) h^i. \] Let $\tilde{h} \defa h - \tilde{k}\delta$. We compute % \begin{align*} p_1(h) - p_2(h-\tilde{k}\delta) &= p_1(\tilde{h} + \tilde{k}\delta) - p_2(\tilde{h}) \\ &= \sum_{i=0}^{\ell} \left[ a_i(\tilde{h}+\tilde{k}\delta) (\tilde{h} + \tilde{k}\delta)^i - b_i(\tilde{h}) \tilde{h}^i \right] \\ &= \sum_{i=0}^{\ell} \left[ a_i(\tilde{h}) (\tilde{h} + \tilde{k}\delta)^i - b_i(\tilde{h}) \tilde{h}^i \right] \\ &= (a_\ell(\tilde{h}) - b_\ell(\tilde{h}))\tilde{h}^\ell + \sum_{i=0}^{\ell-1} \left( \sum_{j=i}^\ell \binom{j}{i} \tilde{k}^{j-i} \delta^{j-i} a_j(\tilde{h}) - b_i(\tilde{h}) \right) \tilde{h}^i \end{align*} By assumption we have that $a_\ell(\tilde{h}) - b_\ell(\tilde{h}) \geq 0$. Further, we see that all other coefficient functions of $p_1(\tilde{h} + \tilde{k}\delta) - p_2(\tilde{h})$ are non-constant polynomials in $\tilde{k}$ with leading coefficient $\binom{\ell}{i} \delta^{\ell - i} a_\ell(\tilde{h}) > 0$. Therefore all coefficient functions of $p_1(\tilde{h} + \tilde{k}\delta) - p_2(\tilde{h})$ are nonnegative for $\tilde{k} \gg 0$. It follows that for a sufficiently large $\tilde{k}$, it holds that $p_1(\tilde{h} + \tilde{k}\delta) - p_2(\tilde{h}) \geq 0$ for all $\tilde{h} \geq 0$. Consequently, for this value of $\tilde{k}$, it holds that $p_1(h) - p_2(h-\tilde{k}\delta) \geq 0$ for all $h \geq \tilde{k} \delta$. \end{proof} Now we are ready to present and prove our main theorem. It shows that a decomposition as in Theorem \ref{thm:jan} is always possible if one allows \emph{rational} coefficients. \begin{theorem}\label{thm:rat} Let $d_1, \dotsc, d_n \in \mathbb{N}$ be pairwise coprime or equal. 
Let $P(t) \in \LS$ be a nonnegative formal Laurent series which is rational with denominator $(1-t^{d_1})\dotsm(1-t^{d_n})$. Then it can be written in the form \begin{equation} P(t) = \sum_{I \subseteq \{1, \ldots , n\}} \frac{Q_I(t)}{\prod_{i \in I} \left(1-t^{d_i} \right)} \end{equation} with nonnegative $Q_I \in \mathbb{Q}[t,t^{-1}]$. \end{theorem} Let us introduce some more notation to simplify the presentation of the proof. Let $\delta_1, \dotsc, \delta_r \in \mathbb{N}$ denote the different values of the $d_i$, and let $\alpha_i := |\set{j\with d_j = \delta_i}|$ be the multiplicity of $\delta_i$. Then $P(t)$ is a rational function with denominator $\prod_i (1-t^{\delta_i})^{\alpha_i}$. It is known that the coefficients of $P(t)$ are given by a quasipolynomial, which we denote by $Q(P)$ (cf.~\cite[Prop.~4.4.1]{stanley}). We write $c_i(P)$ for the $i$-th coefficient of $Q(P)$, which is a periodic function. \begin{proof} Let $q \defa Q(P)$. % We proceed by induction on $\beta \defa \deg q + 1$. If $\beta = 0$, i.e. $q = 0$, then $P(t)$ is a polynomial with nonnegative coefficients and there is nothing to show. In the sequel, we assume that $q \neq 0$. Because $q(h)$ is nonnegative for all $h \gg 0$, its leading coefficient $c(h)$ is a nonnegative periodic function. % We are going to show that there exists a rational function $g(t) \in \mathbb{Q}(t)$ with the same denominator as $P(t)$, such that $\deg Q(g) = \deg q$ and both quasipolynomials have the same leading coefficient. For this we decompose $P(t)$ into partial fractions and write it as follows: \begin{equation}\label{eq:dec} P(t) = \frac{p_{0}(t)}{(1-t)^n} + \sum_{i=1}^r \frac{p_{i}(t)}{(1-t^{\delta_i})^{\alpha_i}} \end{equation} where $p_0, p_{i} \in \mathbb{Q}[t,t^{-1}]$ are Laurent polynomials. There are two cases to distinguish: \begin{enumerate} \item If $\beta > \max\set{\alpha_i \with 1 \leq i \leq r}$, then $c(h)$ is determined by the first summand in \eqref{eq:dec}. In particular, $c(h)$ is a constant function. 
In this case, choose numbers $0 \leq \beta_i \leq \alpha_i$ for $1 \leq i \leq r$ such that $\beta = \beta_1 + \dotsb + \beta_r$. Then the coefficient function of the series expansion of $1/\prod_i (1-t^{\delta_i})^{\beta_i}$ is a quasipolynomial of degree $\beta - 1$, and its leading coefficient function is constant. Thus there exists a nonnegative $\lambda \in \mathbb{Q}$ such that the leading coefficient of $Q(g)$ for \[ g(t) \defa \frac{\lambda}{\prod_{i=1}^r (1-t^{\delta_i})^{\beta_i}} \] equals $c(h)$. \item If $\beta \leq \max\set{\alpha_i \with 1 \leq i \leq r}$, then $c(h)$ is a sum of periodic functions of the periods $\delta_i$ for those $i$ where $\beta \leq \alpha_i$. By Lemma \ref{lem:periodic}, we can write $c(h)$ as a sum of nonnegative functions $\tilde{c}_1, \dotsc, \tilde{c}_r: \mathbb{N} \rightarrow \mathbb{Q}$ of periods $\delta_1, \dotsc, \delta_r$, where $\tilde{c}_i = 0$ if $\beta > \alpha_i$. By Lemma \ref{lem:construct}, there are polynomials $q_1, \dotsc, q_r \in \mathbb{Q}[t]$ with nonnegative coefficients, such that the leading coefficient of $Q(g)$ for \[ g(t) \defa \sum_{i=1}^r \frac{q_i(t)}{(1-t^{\delta_i})^{\beta}} \] is $c(h)$. \end{enumerate} By Lemma \ref{lem:shift}, there exists a $k \in \mathbb{N}$ such that the series expansion of $P(t) - t^k g(t)$ has nonnegative coefficients. But the coefficient function of $P(t) - t^k g(t)$ is a quasipolynomial of degree $\leq \beta -2$, so the claim follows by induction. \end{proof} \begin{corollary}\label{cor:rat} Let $d_1, \dotsc, d_n \in \mathbb{N}$ be pairwise coprime or equal numbers. Let $P \in \LS$ be a formal Laurent series with nonnegative coefficients, which is rational with denominator $(1-t^{d_1})\dotsm(1-t^{d_n})$. Then there exist a $\lambda \in \mathbb{N}$ and a finitely generated $R$-module $M$, such that $\lambda P$ is the Hilbert series of $M$. \end{corollary} \begin{proof} This follows from Theorem \ref{thm:rat} and Theorem \ref{thm:jan}. 
\end{proof} \begin{theorem}\label{thm:2var} Assume that in the situation of Theorem \ref{thm:rat} we have $n = 2$. Then the numerator polynomials $Q_I(t)$ can be chosen to have nonnegative \emph{integral} coefficients. In particular, $P(t)$ can be realized as a Hilbert series of a finitely generated graded $R$-module. \end{theorem} \begin{proof} Without loss of generality, we may assume that $d_1 \neq d_2$; otherwise the claim reduces to the standard graded case, which is known \cite[Thm 2.1]{U}. By our premise, $P$ is either a polynomial, or $Q(P)$ has degree at most $1$. If $P$ is a polynomial, there is nothing to prove. % So consider the case that $\deg Q(P) = 0$. By a partial fraction decomposition of $P$ we see that it can be written in the form \begin{equation}\label{eq:gruppe} P(t) = \frac{p_1(t)}{1-t^{d_1}} + \frac{p_2(t)}{1-t^{d_2}}. \end{equation} From this we read off that $c_0(P)$ is the sum of two periodic functions of period $d_1$ resp. $d_2$. By Lemma \ref{lem:periodic}, we can choose these functions to be nonnegative and integer valued. In other words, there exist two polynomials $\tilde{p}_1, \tilde{p}_2 \in \mathbb{Z}[t]$ with nonnegative coefficients such that \[ c_0(P) = c_0\left( \frac{\tilde{p}_1(t)}{1-t^{d_1}} + \frac{\tilde{p}_2(t)}{1-t^{d_2}} \right), \] so by subtracting (a suitable shift of) this rational function from $P(t)$ we reduce to the case of a polynomial. Next, assume that $\deg Q(P) = 1$. Let us write \begin{equation}\label{eq:pq} P(t) = \frac{p(t)}{(1-t^{d_1})(1-t^{d_2})} \end{equation} with $p(t) \in \mathbb{Q}[t,t^{-1}]$. First, we show that the coefficients of $p(t)$ are integers. For this, let $p(t) = \sum_i a_i t^i$ and write $P(t) = \sum_{h\geq 0} f_h t^h$. It follows from \eqref{eq:pq} that $a_i = f_i - f_{i-d_1} - f_{i-d_2} + f_{i - d_1 - d_2} \in \mathbb{Z}$. 
% % It is not difficult to see that \[ c_1\left(\frac{t^i}{(1-t^{d_1})(1-t^{d_2})}\right) = \frac{1}{d_1 d_2} \] for all $i$, and in particular this coefficient function is constant. As the coefficients of $p(t)$ are integers, it follows that $c_1(P)$ is an integral multiple of $1/d_1d_2$. Hence there exists a $\lambda \in \mathbb{N}$ such that \[ P'(t) := P(t) - \frac{\lambda t^k}{(1-t^{d_1})(1-t^{d_2})} \] satisfies $\deg Q(P') \leq 0$. Moreover, it holds by Lemma \ref{lem:shift} that the coefficients of the series expansion of $P'$ are nonnegative for $k \gg 0$. Thus we have reduced the claim to the previous case. \end{proof} \section{Counterexamples} \label{section:examples} The decomposition is not always possible with integral coefficients. We describe a general construction of counterexamples. For this we consider pairwise coprime numbers $\delta_1, \dotsc, \delta_r \in \mathbb{N}$ and exponents $\alpha_1, \dotsc, \alpha_r \in \mathbb{N}$. Consider two rational functions $P_1, P_2$ of the form \[ \frac{1}{\prod_i (1-t^{\delta_i})^{\beta_i}} \] with $0 \leq \beta_i \leq \alpha_i$. Assume $P_1$ and $P_2$ have the following properties: \begin{itemize} \item[(i)] $\deg Q(P_1) = \deg Q(P_2)$. Let us call this number $d$. \item[(ii)] $d+1 > \max \set{\alpha_1,\dotsc,\alpha_r}$. This ensures that the leading coefficients $c_{d}(P_1)$ and $c_{d}(P_2)$ are constant. \item[(iii)] $c_d(P_1) > c_d(P_2)$, and the former should not be a multiple of the latter. \end{itemize} Under these assumptions, there exists a $\lambda \in \mathbb{N}$ such that $\tilde{P} := P_1 - \lambda P_2$ satisfies $0 < c_d(\tilde{P}) < c_d(P_2)$. This series may have negative coefficients. But by Lemma \ref{lem:shift} we may instead consider $P := P_1 - \lambda t^k P_2$ for a sufficiently large $k \in \mathbb{N}$, and this series has nonnegative coefficients. 
Now assume additionally that $c_d(P_2)$ is the minimal leading coefficient of all series of the given type and dimension. Then it is immediate that $P$ cannot be written as a nonnegative integral linear combination of such series. We give two explicit examples of this behaviour. \subsection{Example 1} Consider the rational function \begin{align*} P(t) &\defa \frac {1}{(1-t^2)(1-t^5)}-\frac {t^4}{(1-t^3)(1-t^5)} \\ &= \frac{1}{2} \left(1 + t^2 +\frac{t^6}{1-t^2} +\frac{t^2}{1-t^3}+ \frac{1+t^6}{1-t^5} + \frac{t^{12}}{(1-t^3)(1-t^5)} \right) \end{align*} One can read off from the first line that the leading coefficient of $Q(P)$ is $1/10 - 1/15 = 1/30$, and thus smaller than $1/15$. So by the argument given above, $P(t)$ cannot be written as a nonnegative integral linear combination of such series. On the other hand, the second line gives a rational decomposition. This shows in particular that the coefficients of the series expansion of $P$ are nonnegative. \subsection{Example 2} The same phenomenon occurs in the case that there are only two different degrees, say $2$ and $3$, but $\alpha_1, \alpha_2 > 1$. As an explicit example consider the following rational function: \begin{align*} P &\defa \frac{1}{(1-t^2)^2 (1-t^3)} - \frac{t^2}{(1-t^2) (1-t^3)^2} \\ &= \frac{1}{2}\left( \frac{1}{1-t^3}+ \frac{1}{(1-t^2)^2} + \frac{t^3}{(1-t^3)^2} + \frac{t^4}{(1-t^2)(1-t^3)^2} \right) \end{align*} \subsection{Example 3} The condition that the degrees $\delta_1, \dotsc, \delta_r$ are pairwise coprime is essential, as the following example shows. Consider the rational function \begin{align*} P(t) &\defa \frac{1 + t - t^6 - t^{10} - t^{11} - t^{15} + t^{20} + t^{21}}{(1-t^6)(1-t^{10})(1-t^{15})} \\ &= \frac{1+t+t^7+t^{13}+t^{19}+t^{20}}{1-t^{30}} \end{align*} One can read off from the second line that $P(t)$ cannot be written as a sum with nonnegative coefficients and the required denominators: the coefficient of $t^0$ is $1$, but the terms $t^6$, $t^{10}$ and $t^{15}$ all have coefficient zero.
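As a remark outside the paper, the equality of the two expressions for $P(t)$ in Example 3 can be verified exactly by cross-multiplication. The following sketch does this with integer polynomial arithmetic (the helper functions are illustrative, not part of the paper):

```python
# Sanity check: verify the identity of Example 3 exactly, by cross-multiplying
# the two expressions for P(t), i.e. checking that
#   N1(t) * (1 - t^30) == N2(t) * (1 - t^6)(1 - t^10)(1 - t^15)
# holds as an identity of integer polynomials.

def poly(*terms):
    """Build a sparse polynomial {exponent: coefficient} from (coeff, exp) pairs."""
    out = {}
    for c, e in terms:
        out[e] = out.get(e, 0) + c
    return out

def poly_mul(p, q):
    """Multiply two sparse polynomials, dropping zero coefficients."""
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, 0) + a * b
    return {e: c for e, c in out.items() if c != 0}

N1 = poly((1, 0), (1, 1), (-1, 6), (-1, 10), (-1, 11), (-1, 15), (1, 20), (1, 21))
N2 = poly((1, 0), (1, 1), (1, 7), (1, 13), (1, 19), (1, 20))
D1 = poly_mul(poly_mul(poly((1, 0), (-1, 6)), poly((1, 0), (-1, 10))),
              poly((1, 0), (-1, 15)))
D2 = poly((1, 0), (-1, 30))

lhs = poly_mul(N1, D2)
rhs = poly_mul(N2, D1)
print(lhs == rhs)  # True: both expressions define the same series
```

Since the check is an exact polynomial identity, it confirms the two lines of Example 3 agree in every coefficient, not just up to some truncation order.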
The Malawi national football team is the team that represents the country in official competitions. It is governed by the Football Association of Malawi, which belongs to the Confederation of African Football (CAF). The team has a modest international record: it has reached only two final tournaments of the Africa Cup of Nations and has never advanced past the first round.

Statistics
World Cup appearances: none
Africa Cup of Nations appearances: 2
First Africa Cup of Nations: 1984
Best Africa Cup of Nations result: first round (1984 and 2010)
Olympic appearances: none

International competitions

World Cup record
1930 to 1974 - Did not enter
1978 to 1990 - Did not qualify
1994 - Withdrew
1998 to 2018 - Did not qualify

Africa Cup of Nations record
1957 to 1974 - Did not enter
1976 - 1978 - Did not qualify
1980 - Did not enter
1982 - Did not qualify
1984 - First round
1986 - Did not qualify
1988 - Did not enter
1990 to 2008 - Did not qualify
2010 - First round
2012 to 2017 - Did not qualify
Events in Rome · 21 July 2021

La Luna Sul Colosseo Returns With An Emphasis On The Subterranean
by Sonja Anderson

Go on a magical guided tour of the Colosseum at night.

Starting July 17, La Luna sul Colosseo (The Moon Over the Colosseum) returns, allowing visitors to see parts of the Archaeological Park in a new light, literally. This year's edition of the annual event will feature the Colosseum's ancient lower level and the House of the Vestals for the first time. Underneath the Colosseum lies the recently restored Hypogeum, a network of tunnels that housed stage sets, gladiators, and wild animals. The Hypogeum was unveiled at the end of June by the director of the Colosseum and its archaeological park, Alfonsina Russo, after a two-year renovation effort.

Starting at 20:10 every Saturday, La Luna sul Colosseo will guide groups of 20 or fewer through the Colosseum. Tour groups will learn about the land and building materials, explore the Hypogeum on wooden walkways, ascend the Colosseum levels by the glow of the sunset, pass the Wayside Shrine of the Cross, and finish with an expansive view of the arena.

Visit the Casa delle Vestali at the Roman Forum

On July 14, 21, and 28, the park is offering evening tours of the Roman Forum, including – for the first time – the Casa delle Vestali by twilight. Beginning at 19:20 each Wednesday, a small, guided group will enter at the Arch of Titus and make their way to the House of the Vestals. Like the Hypogeum, the House of the Vestals has also undergone a restoration, and now allows visitors to explore its rooms, like the chamber in which the priestesses made the mola salsa, a sacred focaccia used in ritual ceremonies.

The Moon Over the Colosseum tours will take place every Saturday until October 30, and the House of the Vestals tours every Wednesday through July 28. Visit parcocolosseo.it for more details. Booking is mandatory. 
Event details: La Luna Sul Colosseo, every Saturday until October 30. The visit lasts 1 hour 15 minutes; tickets €22–25. Booking via parcocolosseo.it or coopculture.it.
Chechen is a beautiful dark brown, but with a character all its own. Chechen has undertones in reds and rich browns and golds with darker brown grain variations. When finished the surface is glass-like, a quality which will be retained for many years (with the proper care, of course).
Life Changing Stories Now. Your Story. Our Bridge.

The birth of People Bridge started in 2014, when my younger brother suddenly passed away and my personal life began to unravel. Out of that brokenness and pain came People Bridge, which focuses on impacting and helping people: missionaries around the US, and people from Kenya, Canada, and beyond. We help people in our community put their God-given talents to use.

Over the years we have served nonprofit organizations such as Oasis of Hope Mathare and Work for Life.

We Share Stories Encouraging People to Live. Share the stories you want the world to know, because there are still great things in this life, and YOUR STORY has the ability to affect those who read it.

Uncertainties exist that we can't foresee. Every day seems to bring new hurdles, and the more support offered, the better.

How we help: we genuinely see and listen to people, offer a helping hand, and encourage them to fulfill their God-given talents.

Ways to give: become a supporting donor (monthly or yearly) or a one-time donor, and help us continue to strive for our aspirations.

Sign up to hear from us about updates, encouragement and events.
Copyright © 2023 People Bridge - All Rights Reserved.
On the transversal dependence of weak K.A.M. solutions for symplectic twist maps (Archive ouverte HAL)
https://hal-univ-avignon.archives-ouvertes.fr/hal-01871436
Preprints, Working Papers

Marie-Claude Arnaud

Abstract

For a symplectic twist map, we prove that there is a choice of weak K.A.M. solutions that depend in a continuous way on the cohomology class. We thus obtain a continuous function $u(\theta, c)$ in two variables: the angle $\theta$ and the cohomology class $c$. As a result, we prove that the Aubry-Mather sets are contained in pseudographs that are vertically ordered by their rotation numbers. Then we characterize the $C^0$ integrable twist maps in terms of regularity of $u$, which allows us to see $u$ as a generating function. We also obtain some results for the Lipschitz integrable twist maps. With an example, we show that our choice is not the so-called discounted one (see \cite{DFIZ2}), which is sometimes discontinuous. We also provide examples of `strange' continuous foliations that cannot be straightened by a symplectic homeomorphism.

Dates and versions: hal-01871436, version 1 (10-09-2018). Identifiers: HAL Id hal-01871436, version 1; also on arXiv.

Cite: Marie-Claude Arnaud, Maxime Zavidovique. On the transversal dependence of weak K.A.M. solutions for symplectic twist maps. 2018. ⟨hal-01871436⟩
Journal of Response to Writing, Vol. 1, Iss. 1 (2015)

Commenting across the disciplines: Partnering with writing centers to train faculty to respond effectively to student writing

Andrea Scott, Pitzer College

Keywords: writing centers, tutors, faculty development, commenting, first-year writing, conferencing

Faculty and writing center tutors bring expertise to writing as practice and process. Yet at many institutions, the two groups work in relative isolation, missing opportunities to learn from each other. In this article, I describe a faculty development initiative in a multidisciplinary writing program that brings together new faculty and experienced undergraduate tutors to workshop instructors' comments on first-year writing. The purpose of these workshops is to assist faculty in crafting inquiry-driven written responses that pave the way for collaborative faculty-student conferences. By bringing together scholarly conversations on tutor expertise and the role of faculty comments in student learning, I argue for the value of extending partnerships between writing centers and programs. Such accounts are important to the field for challenging what Grutsch McKinney (2013) calls the "writing center grand narrative," which limits the scope of writing center work by imagining centers primarily as "comfortable, iconoclastic places where all students go to get one-to-one tutoring on their writing" to the exclusion of lived realities (p. 3). In this case, I describe a writing center where tutors bring their expertise outside the center and into the faculty office, consulting in small groups with faculty with the aim of enriching the quality of instructor feedback in first-year seminars.

Scott, Andrea (2015) "Commenting across the disciplines: Partnering with writing centers to train faculty to respond effectively to student writing," Journal of Response to Writing: Vol. 1: Iss. 1, Article 5. 
Available at: https://scholarsarchive.byu.edu/journalrw/vol1/iss1/5
\section{Introduction} Lynden-Bell (1969) suggested that active galaxies are fuelled by accretion of matter onto supermassive black holes and that massive dark compact objects could therefore be present in the nuclei of many quiescent galaxies as remnants of QSOs. Now there is solid observational evidence for the presence of supermassive black holes in our own and in many nearby galaxies. A correlation between the mass of a supermassive black hole and the luminosity of the bulge of the host galaxy was already apparent in the first small sample of detections. The larger samples that have since become available have confirmed the existence of such a correlation (Kormendy 1993; Kormendy \& Richstone 1995; Magorrian et al. 1998; Ford et al. 1998; van der Marel 1998; Ho 1998; Mc Leod 1998). The growth history of these supermassive black holes is, however, rather uncertain. It is widely believed that they built up during the QSO epoch, around $z\sim 2-3$. As pointed out by Phinney (1997) and discussed in more detail by Haehnelt, Natarajan \& Rees (1998, HNR98), the total integrated mass density in remnant black holes substantially exceeds the integrated mass density accreted onto supermassive black holes by optically bright QSOs unless either the efficiency for producing radiation is rather small or the masses of black holes in nearby galaxies are overestimated (see also Richstone et al. 1998 for a review). This leaves some room for accretion other than that inferred from optically bright QSOs (see also Fabian \& Iwasawa 1999 and Salucci et al. 1999). A striking feature of the activity of optically bright QSOs is the fast evolution of their space density. A rapid rise between $z\sim5$ and $z\sim3$ is followed by a strong decline after $z\sim 2$ (e.g. Shaver et al. 1996). 
This rapid rise can be plausibly explained by the evolution of the dark matter (DM) distribution in hierarchical cosmogonies (Haehnelt \& Rees 1993, HR93): deeper potential wells form at later times and the first potential wells which are deep enough to retain their gas in spite of the input of energy and momentum due to star formation offer the best conditions for efficient formation of supermassive black holes. As discussed by HR93 and Cavaliere, Perri \& Vittorini (1997), the decline in the activity of galactic nuclei is probably due to a transition from a high-redshift phase, in which supermassive black holes form in initially gas-rich nuclei, to a low-redshift phase, where QSOs are refuelled by mergers of increasingly gas-poor galaxies. The late merging of gas-poor galaxies containing supermassive black holes will obviously affect the mass distribution of remnant black holes. For instance, it will introduce scatter in any relation between properties of galaxies and masses of remnant black holes. Here we use Monte Carlo realizations of the merging histories of DM haloes, together with a simple scheme for galaxy formation, to model the merging of galaxies in a hierarchical cosmogony. We investigate different schemes for the accretion of matter onto supermassive black holes and we study the effect on the mass distribution of remnant black holes in low-redshift galaxies. The structure of the paper is as follows. In Section 2 we describe the Monte Carlo simulations. In Section 3 we compare the resulting relations between masses of black holes and luminosities of bulges to those observed in nearby galaxies. Section 4 contains our conclusions. We assume a flat cosmology with $\Omega_M=0.3$, $\Omega_\Lambda=0.7$ and a Hubble constant of $75{\rm\,km\,s^{-1}\,Mpc^{-1}}$. 
\section{Monte Carlo simulations of the merging and accretion history of supermassive black holes} \subsection{Merging histories of dark matter haloes} The standard paradigm for the formation of structure in the Universe is the gravitational instability of density fluctuations in a primordial Gaussian random field. In this picture, the number density of collapsed DM haloes is accurately described by the Press-Schechter formula \begin{eqnarray} &N(M,z){\rm\,d}M=\\ \nonumber &-\left({\overline{\rho}\over M}\right) ({1\over 2\pi})^{1\over 2}\left({\omega\over\sigma}\right) \left({1\over\sigma}{{\rm d}\sigma\over{\rm d}M}\right) {\rm exp}\left(-{\omega^2\over 2\sigma^2}\right){\rm\,d}M\\ \nonumber \end{eqnarray} (Press \& Schechter 1974). Here $\overline{\rho}(z)$ is the cosmological density at redshift $z$, $\sigma_0(M)^2$ is the variance of the linearly extrapolated density field on the scale $M$, $\omega$ is defined as $\omega(z)\equiv\delta_{\rm c} D(0)/D(z)$, where $\delta_{\rm c}$ is the over-density threshold at which density fluctuations collapse, and $D(z)$ is the linear growth factor of density fluctuations. The variance of the density field is related to the power spectrum of the density fluctuations. Here we assume a standard cold dark matter power spectrum as given by Bond \& Efstathiou (1984), with a normalisation that reproduces the present-day space density of clusters of galaxies (Eke, Cole \& Frenk 1996). The dependence of the linear growth factor on redshift is given by Heath (1977). Techniques for generating Monte Carlo realizations of the merging histories of DM haloes based on extensions of the Press-Schechter formula have been described for instance by Cole \& Kaiser (1988), Cole (1991), Lacey \& Cole (1993), Kauffmann \& White (1993) and Somerville \& Kolatt (1997). 
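As a hedged illustration (not part of the paper), the Press-Schechter abundance of equation (1) can be evaluated once $\sigma_0(M)$ and $D(z)$ are specified. The sketch below substitutes an assumed toy power-law $\sigma_0(M)$ and a toy growth factor for the Bond \& Efstathiou spectrum and the Heath (1977) growth factor actually used in this work; units are arbitrary.

```python
import math

# Illustrative evaluation of the Press-Schechter mass function of equation (1).
# The CDM sigma_0(M) and the exact growth factor are replaced by toy choices.

DELTA_C = 1.686   # linear collapse threshold delta_c
RHO_BAR = 1.0     # comoving background density (arbitrary units)

def sigma0(M):
    """Toy rms linear fluctuation on mass scale M (an assumption, not the CDM spectrum)."""
    return M ** (-1.0 / 3.0)

def dsigma0_dM(M):
    """Derivative of the toy sigma_0(M); negative, since sigma_0 decreases with M."""
    return -(1.0 / 3.0) * M ** (-4.0 / 3.0)

def growth(z):
    """Toy linear growth factor, normalised to growth(0) = 1."""
    return 1.0 / (1.0 + z)

def n_ps(M, z):
    """Comoving number density of haloes per unit mass, N(M, z) of equation (1)."""
    omega = DELTA_C * growth(0.0) / growth(z)
    s = sigma0(M)
    return (-(RHO_BAR / M)
            * math.sqrt(1.0 / (2.0 * math.pi))
            * (omega / s)
            * (dsigma0_dM(M) / s)
            * math.exp(-omega ** 2 / (2.0 * s ** 2)))
```

With these toy choices the exponential term makes massive haloes ($\sigma_0(M) \ll \delta_c$) far rarer at higher redshift, which is the behaviour behind the rise of QSO activity discussed in the introduction.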
The basic equation is the conditional probability that a halo of mass $M$ at redshift $z$ has a progenitor of mass $M-{\Delta M}$ at redshift $z+\Delta z$, \begin{eqnarray} &P(M\rightarrow M-\Delta M,z\rightarrow z+\Delta z)=\\ \nonumber &\frac{1}{\sqrt{2\pi}}\; {\omega(z+\Delta z)-\omega(z)\over[\sigma_0^2(M-\Delta M)-\sigma_0^2(M)]^ {3\over 2}}\; \exp{\biggl \{ -{[(\omega(z+\Delta z)-\omega(z)]^2\over 2[\sigma_0^2(M-\Delta M)-\sigma_0^2(M)]} \biggr \}} \\\nonumber \end{eqnarray} (Lacey \& Cole 1993). We follow the description of Somerville \& Kolatt (1997) to construct Monte Carlo realizations of the merging histories of DM haloes from equation (2). The probability distribution (2) is used to assign a mass $M$ to a progenitor at redshift $\Delta z$ of a DM halo of mass $M_0$ at $z=0$. The procedure is iterated and further progenitors are drawn from the distribution (2), but the merging history is followed only for haloes with circular velocities $v_{\rm c}=({\rm G}M/r_{\rm vir})^{1/2}$ above a certain threshold $v_l$. Haloes with $v_{\rm c} <v_{\rm l}$ are treated as accreted mass. Progenitors with a mass larger than the not yet allocated mass are discarded. We construct progenitors of progenitors until all haloes have $v_{\rm c} <v_{\rm l}$. The procedure is repeated for a representative sample of haloes at $z=0$. We adopt a step of $\Delta z=0.01$ and a resolution of $v_{\rm l}=70{\rm\,km\,s}^{-1}$. \subsection{The galaxy formation scheme} The modelling of galaxies within hierarchal cosmogonies has reached a considerable level of complexity. Here we concentrate on the effects of mergers on the growth of supermassive black holes and on the distribution of remnant black holes. Therefore we adopt a rather simple scheme for galaxy formation similar to that proposed by White and Rees (1978), which should nevertheless catch the essential features of the hierarchical merging of galaxies (White 1996). 
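One elementary step of this tree construction can be sketched as follows. Written in the variables $S = \sigma_0^2(M)$ and $\omega$, the conditional probability (2) is a first-passage (inverse-Gaussian-type) distribution for the jump $\Delta S = \sigma_0^2(M-\Delta M) - \sigma_0^2(M)$, which can be sampled exactly as $\Delta S = (\Delta\omega/|x|)^2$ with $x$ a standard Gaussian variate, a standard change of variables in Lacey-Cole-type tree codes. The power-law $S(M)$ below is an assumption for illustration, not the spectrum used in the paper.

```python
import random

# One progenitor draw from the conditional probability (2), in the variables
# S = sigma_0(M)^2 and omega.  The jump dS = S(M - dM) - S(M) is sampled as
# dS = (domega / |x|)^2 with x a standard Gaussian variate.

def S(M):
    """Toy variance S = sigma_0(M)^2 = M^(-2/3); decreasing in M (assumption)."""
    return M ** (-2.0 / 3.0)

def S_inv(s):
    """Inverse of S: the mass whose variance equals s."""
    return s ** (-3.0 / 2.0)

def draw_progenitor(M, domega, rng=random):
    """Draw one progenitor mass of a halo of mass M after a step domega in omega."""
    x = rng.gauss(0.0, 1.0)
    dS = (domega / abs(x)) ** 2     # jump in variance, distributed as in eq. (2)
    return S_inv(S(M) + dS)         # larger variance corresponds to a smaller mass

random.seed(1)
masses = [draw_progenitor(1.0, 0.05) for _ in range(10000)]
```

By construction every progenitor is lighter than its descendant; the empirical jump distribution reproduces $P(\Delta S \leq s) = \mathrm{erfc}(\Delta\omega/\sqrt{2s})$, which can be used to validate the sampler.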
Important conditions for the formation of a galaxy in a collapsed DM halo are the ability of the gas to cool and the ability of the halo to retain its gas in spite of the input of energy and momentum due to star formation and supernova explosions. The ability of the gas to cool depends on the temperature and the density of the gas. For cooling by Bremsstrahlung these dependencies conspire to give an upper limit for the mass of the gas which is able to cool efficiently (Silk 1977; Rees \& Ostriker 1977). Feedback from star formation is more important in haloes with shallower potential wells and therefore smaller circular velocities (see e.g. Kauffmann, White \& Guiderdoni 1993, KWG93). Here we do not treat the cooling and feedback explicitly. Instead, we introduce an effective efficiency for star formation, which depends on the halo circular velocity in such a way that the mass of the galaxy associated with a DM halo of mass $M_{\rm halo}$ at redshift $z$ scales as \begin{equation} M_\star(M_{\rm halo})= \epsilon_{\star}M_{\rm halo} v_{\rm c}^{2} \exp{[-(v_{\rm c}/v_{\rm max})^4]} \end{equation} independently of redshift; $\epsilon_\star$ and $v_{\rm max}$ are chosen in such a way that the luminosity function of our simulated galaxies reproduces the observed luminosity function of galaxies at redshift zero (Fig.1). The cut-off in virial velocities mimics the inability of the gas to cool and form stars in the very deep potential wells which form at late times. When DM haloes merge, it will take some time before the corresponding galaxies sink to the centre of the merged halo due to dynamical friction. To model this, we follow KWG93 and assume that initially a single galaxy forms at the centre of each halo. When haloes merge, the central galaxy of the largest progenitor halo becomes the central galaxy of the new halo. 
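For concreteness, the stellar-mass assignment of equation (3) can be sketched with the parameter values quoted in the caption of Figure 1, treated here as assumptions:

```python
import math

# Sketch of the effective star-formation efficiency of equation (3):
# M_star = 0.014 * M_halo * (v_c / 200 km/s)^2 * exp[-(v_c / 300 km/s)^4],
# with the normalisation and cut-off taken from the caption of Figure 1.

EPS_STAR = 0.014   # normalisation at v_c = 200 km/s (assumed value)
V_MAX = 300.0      # cut-off circular velocity in km/s (assumed value)

def m_star(m_halo, v_c):
    """Stellar mass attached to a halo of mass m_halo with circular velocity v_c [km/s]."""
    return EPS_STAR * m_halo * (v_c / 200.0) ** 2 * math.exp(-(v_c / V_MAX) ** 4)
```

The efficiency per unit halo mass rises as $v_{\rm c}^2$ and is exponentially suppressed above $v_{\rm max}$; with these numbers it peaks near $v_{\rm c} = 300/2^{1/4} \approx 252{\rm\,km\,s}^{-1}$, mimicking the inability of the gas to cool in the deepest potential wells.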
The ``satellite'' galaxies are assumed to merge with the central galaxy on the dynamical friction time-scale \begin{equation} t_{\rm df}={1.17r_{\rm vir}^2v_{\rm c}\over{\rm ln}\left({M/M_{\rm sat}}\right) {\rm G}M_{\rm sat}} \end{equation} (Binney \& Tremaine 1987). Here $M$, $r_{\rm vir}$, $v_{\rm c}$ are the mass, the virial radius and the circular velocity of the new halo and $M_{\rm sat}$ is the total mass of the satellite including its DM halo. The stellar mass of the merged galaxy is assumed to be the maximum of the sum of the stellar masses of the merging galaxies and the mass given by equation (3). If the latter is larger, we interpret the difference $\Delta M_{\star}$ as the amount of gas that has formed stars in the disks of the galaxies since the last merger. We follow KWG93 again and introduce a critical mass ratio $f_{\rm e}$ above which a merger produces an elliptical galaxy. For mass ratios below the threshold the smaller galaxy is assumed to be tidally disrupted and its mass is added to the disk of the larger one. The value of $f_{\rm e}$ is determined by requiring the Monte Carlo simulations to reproduce the observed bulge luminosity function, once the other free parameters have been fixed. A mass-to-light ratio of $M_\star/L_B=5(L_B/5.8\times 10^{10}L_\odot)^{0.25}$ is used (Salucci et al. 1999). Figure 1 compares the resulting luminosity functions to those observed. \begin{figure} \centerline{\psfig{figure=fig1.ps,width=0.48\textwidth,angle=0}} \caption{Observed and simulated B-band luminosity function of galaxies and bulges (data from Salucci et al. 1998 and Marzke et al. 1998). A relation of stellar mass to circular velocity and mass of the DM halo of the form $M_\star(M,z)= 0.014 M_{\rm halo} \times (v_{\rm c}/200\, {\rm km\, s}^{-1}) ^{2} \exp{[-(v_{\rm c}/300\, {\rm km\, s}^{-1})^4]}$ is assumed. 
Major mergers result in the formation of a bulge for mass ratios above $f_{\rm e} = 0.2$.} \end{figure} \begin{figure} \centerline{\psfig{figure=fig2.ps,width=0.48\textwidth,angle=0}} \caption{The redshift dependence of the comoving mass density in supermassive black holes for: (i) $M_{\rm acc} = 6\times 10^{-3}\Delta M_{\star}$, ({\it dashed line}); (ii) $M_{\rm acc }= 1.4\times 10^{-3} (1+z)^{2} \Delta M_{\star}$ ({\it solid line}); (iii) $M_{\rm acc} = 10^{-6} (1+z)^{2} M_{\rm halo} \exp{[-(v_{\rm c}/300{\rm\,km\,s}^{-1})^4]}$, ({\it dotted line}). The points show the observed redshift dependence as determined by Choski \& Turner (1992).} \end{figure} \subsection{The growth of supermassive black holes by accretion and merging} In a hierarchical cosmogony supermassive black holes grow by merging and by accretion of gas. Here we assume that a major galactic merger leads to an immediate coalescence of the supermassive black holes contained in the merging galaxies. This should be a reasonable assumption for violent mergers, though in a smooth stellar background it may take longer for the holes to coalesce (Begelman, Blandford \& Rees 1980). Some fraction of the gas which is able to cool in DM haloes is assumed to be accreted onto the merged supermassive black hole. The fraction of cool gas ending up in the hole should have a complicated dependence on halo and galactic properties (redshift, past accretion history, past star formation history, past merging history, angular momentum). We shall not attempt to model these processes here in detail. We shall rather investigate three very simple models for the efficiency of accretion. 
\begin{itemize} \item[(i)]{During a major merger a fixed fraction of the gas that has formed stars since the last merger in the disks of the colliding galaxies is accreted onto the supermassive black hole formed by the coalescence of the supermassive black holes in the merging galaxies ($M_{\rm acc} = \epsilon_{\rm acc} \Delta M_{\star}$, $\epsilon_{\rm acc}= 6\times 10^{-3}$).} \item[(ii)]{As in model (i) but with an accretion efficiency dependent on redshift to approximate the redshift evolution of the accretion onto black holes inferred from the counts of optically bright QSOs ($M_{\rm acc} = \epsilon_{\rm acc} (z) \Delta M_{\star}$, $\epsilon_{\rm acc}(z) = 1.4\times 10^{-3} (1+z)^{2}$).} \item[(iii)]{During a major merger a fraction of the total baryonic mass of the merged DM halo is accreted onto the supermassive black hole with an accretion efficiency dependent on redshift as in model (ii) and with the same cut-off for high circular velocity as in equation (3) to mimic the inability of the gas to cool in deep potential wells which form at late times ($M_{\rm acc} = \epsilon_{\rm acc} (z) M_{\rm halo} \exp{[-(v_{\rm c}/300\, {\rm km\, s}^{-1})^4]}$, $\epsilon_{\rm acc}(z) = 10^{-6} (1+z)^{2}$).} \end{itemize} Figure 2 shows the fraction of the total mass accreted before redshift $z$ in the Monte Carlo simulations and the observed fraction inferred from the blue light emitted by optically bright QSOs (Soltan 1982; Choski \& Turner 1992). Models (ii) and (iii) approximate the observed redshift evolution by construction, whereas in model (i) most of the mass is accreted at lower redshifts than those indicated by the epoch of optically bright QSOs. Model (iii) is similar to that recently proposed by Haiman \& Loeb (1998). \begin{figure*} \centerline{\psfig{figure=fig3.ps,width=\textwidth,angle=0}} \caption{(a) Observed black hole masses and bulge luminosities for the samples of nearby galaxies compiled by Ho (1998, {\it circles}) and Magorrian et al. 
(1998, {\it squares}). The solid line shows the linear least-squares fit to the data, ${\rm log}(M_{BH}/M_\odot)=1.28{\rm log}(L_B/L_\odot)-4.46$, while the dashed lines show the $\pm 1\sigma$ deviation ($\sigma=0.74$). The dashed lines are the same in all panels for comparison. (b) Monte Carlo simulations of black hole masses and bulge luminosities with $M_{\rm acc} = 6\times 10^{-3} \Delta M_{\star}$ (model i). (c) Same as (b) with $M_{\rm acc} = 1.4\times 10^{-3} (1+z)^{2} \Delta M_{\star}$ (model ii). (d) Same as (b) with $M_{\rm acc} = 10^{-6} (1+z)^{2} M_{\rm halo} \exp{[-(v_{\rm c}/300{\rm\,km\,s}^{-1})^4]}$ (model iii). The linear least-squares fit to the results of model (iii) has a slope of 0.6, shown by the two dotted lines. } \end{figure*} The normalisation $\epsilon_{\rm acc}$ is determined by requiring the Monte Carlo simulations to reproduce the comoving mass density of supermassive black holes at the present time. As discussed in the introduction and by HNR98, this value is still uncertain, and here we have used a value of $10^6M_\odot{\rm Mpc}^{-3}$. This value is about six times larger than that estimated by Choski \& Turner (1992) using optical counts of unobscured QSOs. \section{The mass distribution of supermassive black holes in low-redshift galaxies} Figure 3a shows the observed black hole masses and bulge luminosities for a sample of 39 galaxies as compiled by Ho (1998) and Magorrian (1998), together with a linear least-squares fit and the $\pm1\sigma$ deviation. Figures 3b, 3c and 3d show the results of the Monte Carlo simulations. Models (i) and (ii) both reproduce the observed roughly linear relation between black hole mass and bulge luminosity. This is mainly due to our assumption that the accreted mass is proportional to the mass that has formed stars, which is strictly true in model (i) and is still a good approximation for model (ii). In model (i) the relation is very tight, tighter than observed. 
In model (ii), instead, the different redshift evolution of the mass forming stars and of the mass accreted onto supermassive black holes acts as a substantial source of scatter, which is now comparable to that in the observed relation. Model (iii) produces a shallower relation of black hole mass to bulge luminosity; the predicted remnant black hole masses in large bulges are here considerably smaller than observed; this is due to the non-linear relation between bulge mass and halo mass: as the assumed star formation efficiency depends on the depth of the potential well, the bulge mass rises faster than the halo mass with increasing bulge luminosity. The scatter in model (iii) is again comparable to the observed scatter and to that in model (ii). We have checked the sensitivity of our results to some of our assumptions. The slope of the relation of black hole mass to bulge luminosity depends only weakly on the cosmology, on the details of the galaxy formation scheme and on the quasar epoch once the parameters have been fixed to reproduce the present-day bulge luminosity function and the integrated mass density in black holes. The scatter in this relation, however, grows when the difference between the redshift evolution of star formation and black hole accretion increases. \section{Discussion and conclusions} We have used Monte Carlo realizations of the extended Press-Schechter formalism together with simple schemes for galaxy formation and mass accretion onto supermassive black holes to investigate how the merging and accretion history of supermassive black holes within a hierarchical cosmogony affects the mass distribution of remnant black holes in low-redshift galaxies. The observed roughly linear relation between black hole masses and bulge masses can be reproduced if the mass accreted during a major merger is a fixed fraction of the mass of the gas that has formed stars in the merging galaxies since the last major merger. 
This relation is preserved if this fraction is independent of mass but varies with redshift, but such a variation introduces significant scatter into the relation. The redshift variation necessary to match the evolution of the global mass accretion rate inferred from the integrated light emission of optically bright QSOs (assuming that their radiation efficiency does not depend on redshift) results in a scatter that is comparable to that in the observed relation. The linear relation between bulge and black hole mass and the modest scatter in this relation at the bright end of the bulge luminosity function seem to suggest that for most of the integrated mass density in supermassive black holes a common mechanism determines the efficiency for mass accretion of supermassive black holes and the efficiency for star formation. The scatter in the observed relation must be at least partially due to observational errors and there seems not much room for additional sources of intrinsic scatter at the bright end of the luminosity function (except in the somewhat artificial model (i)). At the faint end the scatter in the observed relation seems to increase. This might indicate that here other parameters, e.g. the spin of the DM halo, influence black hole and star formation in a different way. If the accreted mass scaled with the mass of the halo rather than with the stellar mass, the black hole mass would depend on the bulge mass less steeply than observed and black holes in luminous bulges would be less massive than the observations suggest. \section{Acknowledgements} We thank Guinevere Kauffmann and Simon White for comments on the manuscript. This work was supported by the EC TMR network for ``galaxy formation and evolution''. Andrea Cattaneo acknowledges the support of an Isaac Newton Studentship.
The Armenian Review – an American socio-political journal published since 1948, devoted to topics related to Armenia and Armenians. Description The Armenian Review is an English-language periodical published since 1948 in Watertown, in Middlesex County, Massachusetts. The first issue appeared in the winter of 1948. Initially it was published four times a year. Currently it is published twice a year: in May and November. The journal is devoted to topics related to Armenia and Armenians, publishing research articles, interviews, book reviews and transcriptions of Armenian literature. Editorial board From 2008 to 2018 the editor-in-chief was Asbed Kotchikian. The next editor-in-chief was Khatchig Mouradian, a lecturer in Middle Eastern, South Asian and African Studies (MESAAS) at Columbia University and a doctor of history from the Strassler Center for Holocaust and Genocide Studies at Clark University. The editorial board is made up of scholars and academics: Richard G. Hovannisian, University of California, Los Angeles Stephan H. Astourian, University of California, Berkeley Levon Chorbajian, University of Massachusetts, Lowell S. Peter Cowe, University of California, Los Angeles G. M. Goshgarian, Université de Bourgogne, Dijon Ara Khanjian, Ventura College, Ventura Dickran Kouymjian, California State University, Fresno Marc Nichanian, New York Susan Pattie, University College London, London Ronald Grigor Suny, University of Michigan, Ann Arbor Khachig Tololyan, Wesleyan University, Middletown References Armenians English-language journals Journals established in 1948 Journals published in the United States American academic journals
The Diamond Jubilee…and why Britain's next King should be a Watermelon June 6, 2012 by Justin Huggler Queen Elizabeth II (Photo credit: Wikipedia) Take down the bunting, put away the flags, Britain's brief holiday from the grim realities of the 21st century into its royal fantasy is done for another year. For the second year running, the country has come to a standstill to celebrate its strange anachronism of a monarchy, and everyone is congratulating themselves on how marvellously we do these things. But I should like to make a humble submission: that when Queen Elizabeth II eventually passes on, we should replace her with a watermelon. For, though no one can deny that Elizabeth has made a generally excellent job of being Queen, I can't help feeling that the peculiar requirements of the role could be equally well fulfilled by a watermelon. The weird pageant of the last few days has been held up as evidence of the advantages of a monarchy over an elected head of state — and, it has to be said, the Queen did exhibit superiority over France's President Francois Hollande in one area: she had the good sense to keep out of the rain, and didn't stand dripping like a drowned rat, as M Hollande did at his investiture. It's always pointed out at these events, too, that the fact the Queen is apolitical means the entire nation can rally around her in a way the US, say, cannot unite around Barack Obama. This argument, of course, always infuriates Britain's small republican minority, who point out that they can't unite around a monarch they don't want. The strongest argument in favour of the Queen, though, has always been the elected presidents she prevents us from being forced to endure. President Tony Blair, for instance — though I suspect these days we'd be more likely to end up with President Boris Johnson. Even if you make a presidency apolitical and purely ceremonial, and strictly exclude politicians from it, you could still end up with President Simon Cowell. 
Or President Cheryl Cole. Or President Posh Spice. Quite. This is a forceful argument. Even the prospect of the strange Prince Charles becoming King pales into insignificance beside the image of President Russell Brand. The problem here, though, is that a watermelon could just as easily block the path of all these presidential hopefuls. Essentially, what we are saying we want from a head of state is someone prepared to take part in all the ceremonies and pageants, but refrain from interfering with the government, and generally keep their opinions to themselves, much as Elizabeth II has done. Prince Charles is regarded as a loose cannon because from time to time he expresses an opinion about architecture. This, we feel, is not his role. Kings and Queens, as far as the British are concerned, should be seen and not heard. What better to fill this role, then, than a watermelon? The melon, which I should like to humbly suggest we call George, would sit quietly on its throne, consent to be carried upon a royal litter, ride past adoring crowds in a gilded carriage or an armoured Bentley, and never presume to interfere with the workings of elected government. And he would do all of this a lot more cheaply than the current set-up. Sure, we'd still have to fork out for the big state occasions, and keep the palaces standing, but other than that his only running costs would be a decent refrigerator. Yes, we'd have to change the watermelon from time to time, but we have to do that with the people. The Royal Family is a valuable tourist attraction that brings a lot of money into the British economy, but I can't help feeling that our reputation for endearing eccentricity would only be enhanced if we chose a watermelon as our next King, and tourists would still flock to Buckingham Palace eager to catch a glimpse of his noble green skin. 
I suspect that the experiment would prove so popular that other countries would quickly start to emulate it, and France would soon have a watermelon of her own, named Louis. In short then, I humbly submit that this is the perfect compromise for these times of austerity, that by choosing our next monarch from the melon patch we can retain all the advantages of a hereditary monarchy without ever risking that it falls into the wrong hands, and save a fortune, as we advance into the glorious new age of King George the Watermelon. Sushmita Clays says Do add a christmas speech paragraph!!! With a voice for the melon…!! That would be hilarious!!!!! A bit seedy maybe??!!
Amparo Muñoz, born in June 1954 in Vélez-Málaga and died in February 2011 in Málaga, was a Spanish actress, elected Miss Spain 1973 and then Miss Universe 1974. Biography She became an instant celebrity in Spain, alongside Nino Bravo, Pedro Carrasco, Rocío Dúrcal, Rocío Jurado, Camilo Sesto, La Pandilla and other Spanish celebrities of the 1970s, after her Miss Universe victory, with a successful career in show business. In 1979 she starred in the comedy Maman a cent ans, by Carlos Saura. This was followed by performances in Todo un hombre in 1982, Un paradis sous les étoiles in 1999 and El Tahur in 2003. In the 1990s she returned to mainstream Spanish cinema with the film Familia, by Fernando León de Aranoa. In February 2011, Amparo died of cancer. Filmography Television 1968 : Hora once 1982 : Sonata de estío : Niña Chole 1976 : Las aventuras del Hada Rebeca 1983 : Las pícaras by Alonso de Castillo Solórzano 1983 : Sonatas : Niña Chole 1987 : Vida privada : Concha Pujol 1989 : Brigada Central : Marisa 1993 : La intrusa : Gracia 2006 : El cas de la núvia dividida Cinema 1973 : El diablo en persona 1974 : Tocata y fuga de Lolita by Antonio Drove : Lolita Villar 1974 : Vida conyugal sana by Roberto Bodegas : advertising model, as Amparo Muñoz, Miss Spain 1975 : Sensualidad : Ana 1975 : Clara es el precio : Clara Valverde 1976 : Mauricio, mon amour : Doctora Verónica Anglada 1976 : Volvoreta by José Antonio Nieves Conde : Volvoreta 1976 : La otra alcoba : Diana. 
1977 : Acto de posesión 1977 : Del amor y de la muerte 1979 : El tahúr 1979 : Maman a cent ans (Mamá cumple cien años) by Carlos Saura : Natalia 1979 : El anillo matrimonial : Alba 1979 : Presto agitato 1980 : Mírame con ojos pornográficos : Magdalena 1980 : Dedicatoria : Clara 1981 : Trágala, perro : Sor Patrocinio 1981 : Las siete cucas : Cresencia 1981 : Como México no hay dos 1981 : La mujer del ministro : Teresa 1981 : El gran triunfo 1982 : El gran mogollón 1982 : Hablamos esta noche 1982 : Si las mujeres mandaran (o mandasen) : Agustina 1983 : Sexo vs. sexo 1983 : Se me sale cuando me río 1983 : Todo un hombre : Laura Monteros 1984 : El balcón abierto : La mujer 1985 : La reina del mate : Cristina 1986 : Delirios de amor 1986 : Lulú de noche by Emilio Martínez Lázaro : Nina 1987 : En penumbra : Helena 1987 : Los invitados : La catalana 1987 : Las dos orillas 1988 : Al acecho 1989 : La luna negra : Lilit 1996 : Familia by Fernando León de Aranoa : Carmen 1996 : Fotos : Rosa 1996 : Licántropo : El asesino de la luna llena : Dra. Mina Westenra 1997 : Elles : María 1999 : Tierra de cañones : La Cantero 2000 : Un paraíso bajo las estrellas : Olivia External links Spanish actresses Miss Spain winners Miss Universe 1974 contestants Miss Universe winners People born in Vélez-Málaga Born in June 1954 Died in Málaga Died in February 2011 Died at age 56 Deaths from cancer in Spain
Le Tour de Chant d'Édith Piaf a l'Olympia - No. 3 is an album from Édith Piaf recorded live at L'Olympia in Paris in 1958. The album was released on the Columbia label (FS 1075). The Olympia was renovated and reopened by Bruno Coquatrix in February 1954 as a music venue. Piaf gave several series of recitals at the venue from 1955 to 1962. Three of Piaf's early recitals at Olympia were released by Columbia as part of its "Le Tour de Chant d'Édith Piaf a l'Olympia" (Edith Piaf's Singing Tour at the Olympia) series. Track listing Side A "Comme moi" (Senlis - Delécluse, Marguerite Monnot) "Salle d'attente" (M. Rivgauche, Marguerite Monnot) "Les prisons du roy" (M. Rivgauche, Irving Gordon) "La Foule" (M. Rivgauche, A. Cabral) Side B "Les grognards" (P. Delanoë, H. Giraud) Mon manège à moi (Tu me fais tourner la tête) (J. Constantin - N. Glazberg) "Bravo pour le clown" (Henri Contet - Louiguy) "Hymne à l'amour" (Edith Piaf, Marguerite Monnot) References Édith Piaf albums 1958 albums
HARRISBURG, PA — Tröegs Brewery took home three medals at the 2010 Great American Beer Festival Competition (GABF), the largest national beer competition recognizing the most outstanding beers produced in the United States. The winners in the 79 beer-style categories were announced September 18 at the 29th Annual Great American Beer Festival in Denver, Colorado. Tröegs Brewery won a gold medal in the Barleywine category for The Flying Mouflan, a silver medal in the Bock category for Troegenator and a silver medal in the Amber Ale category for Hopback Amber. The three medals represent the most medals won by a Pennsylvania brewery at this year's competition. This year's GABF competition attracted 151 beer judges from 10 countries, with 3,523 beers from 516 breweries competing for gold, silver and bronze medals. The average number of competition beers entered in each category was 44. More than 49,000 people attended the three-day beer festival, with 455 breweries pouring more than 2,200 beers. Visit www.greatamericanbeerfestival.com for a complete list of winners.
Q: Angular2 @Input and lifecycle hook I have the following component: export class AddressListComponent implements AfterViewInit{ @Input() districts: District[] = []; constructor() { } ngAfterViewInit() { console.log(this.districts); } } So it console logs districts as an empty array, but I pass it a non-empty array as input and it displays well in this html: <a *ngFor = "let district of districts; let i = index" class="list-group-item clearfix" > WORKS </a> So my question is: in which lifecycle hook am I able to console log the data from the districts array? Because I need to change it before displaying it in the html A: when in the lifecycle hook am I able to console log data from districts array All inputs are available in the ngOnInit lifecycle hook the first time the component is initialized. For subsequent updates use ngOnChanges. Or you can use ngOnChanges only, since it's also called when the component is initialized. You can see it from the order of operations mentioned in the Everything you need to know about change detection in Angular: 1) updates input properties on a child component/directive instance ... 3) calls OnChanges lifecycle hook on a child component if bindings changed 4) calls OnInit and ngDoCheck on a child component (OnInit is called only during first check) A: It should work fine. I think your parent component is populating the variable districts after AddressListComponent is loaded. Check whether your child component is loaded before you set the variable districts, or use ngOnChanges instead of ngAfterViewInit (ngOnChanges hooks into every change, and not only the init of the component)
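The hook ordering described in the answer above can be sketched without the Angular runtime. The snippet below is a minimal, hand-rolled simulation of that order (input is written first, then ngOnChanges fires, then the init hooks on the first pass only); SimpleChange and the toy detectChanges function are simplified stand-ins for Angular's real classes, not the actual framework API.

```typescript
// Illustrative stand-in for Angular's SimpleChange (not the real class).
interface SimpleChange { previousValue: unknown; currentValue: unknown; firstChange: boolean; }

class AddressList {
  districts: string[] = [];
  log: string[] = [];

  ngOnChanges(changes: Record<string, SimpleChange>): void {
    // Fires on every input update, including the first one.
    this.log.push(`ngOnChanges: ${JSON.stringify(changes["districts"].currentValue)}`);
  }
  ngOnInit(): void {
    // By this point the input has already been written.
    this.log.push(`ngOnInit: ${JSON.stringify(this.districts)}`);
  }
  ngAfterViewInit(): void {
    this.log.push(`ngAfterViewInit: ${JSON.stringify(this.districts)}`);
  }
}

// Toy "change detector": mirrors the order quoted in the answer.
function detectChanges(cmp: AddressList, districts: string[], first: boolean): void {
  const change: SimpleChange = { previousValue: cmp.districts, currentValue: districts, firstChange: first };
  cmp.districts = districts;                            // 1) update input properties
  cmp.ngOnChanges({ districts: change });               // 2) ngOnChanges sees the new value
  if (first) { cmp.ngOnInit(); cmp.ngAfterViewInit(); } // 3) init hooks, first pass only
}

const cmp = new AddressList();
detectChanges(cmp, ["North", "South"], true);
detectChanges(cmp, ["North", "South", "East"], false);
console.log(cmp.log.join("\n"));
```

Running this shows ngOnInit and ngAfterViewInit already see the bound input, while only ngOnChanges fires again on the second update — which is why the question's ngAfterViewInit logs an empty array only when the parent sets the input late.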
Q: How to get node value / innerHTML with XPath? I have an XPath to select the class I want: //div[@class='myclass']. But it returns the whole div (including the <div class='myclass'> tag), whereas I would like to return only the contents of this tag without the tag itself. How can I do it? A: New answer to an old, frequently asked question: For this XML <div class="myclass">content</div> you can use XPath to select just content in one of two ways: * *Text Node Selection This XPath, //div[@class='myclass']/text() will select the text node children of the targeted div element, content, as requested. *String Value of an Element This XPath, string(//div[@class='myclass']) will return the string-value of the targeted div element, content, again as requested. Further information: Here's a note explaining the string-values of elements: The string-value of an element node is the concatenation of the string-values of all text node descendants of the element node in document order. A: node() = innerXml text() = innerText both are arrays, so text()[1] is the first child text node... A: You can try //div[@class='myclass']/child::* child::* selects all element children of the context node see details A: With xpath, the thing you will get returned is the last thing in the path that is not a condition. What does that mean? Well, conditions are the stuff between []'s (but you already knew that) and yours reads like pathElement[that has a 'class' attribute with value 'my class']. The pathElement comes directly before the [. All the stuff outside of []'s then is the path, so in //a/b/c[@blah='bleh']/d a, b, c and d are all path elements, blah is an attribute and bleh a literal value. If this path matches it will return you a d, the last non-condition thing. Your particular path returns a (series of) div, being the last thing in your xpath's path. This return value thus includes the top-level node(s), div in your case, and underneath it (them) all its (their) children. 
Nodes can be elements or text (or comments, processing instructions, ...). Underneath a node there can be multiple text nodes, hence the array pOcHa talks about. x/text() returns all text that is a direct child of x, x/node() returns all child nodes, including text.
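The distinction between selecting direct text nodes and taking an element's full string-value can be checked with a short Python sketch. The stdlib ElementTree only supports a small XPath subset (no text() or string() functions), so the snippet below uses .text and itertext() as the equivalents; for the exact XPath expressions in the answers above, a fuller XPath engine such as lxml would be needed.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("<root><div class='myclass'>content <b>bold</b> tail</div></root>")

# Attribute predicates are part of ElementTree's supported XPath subset:
div = doc.find(".//div[@class='myclass']")

# Direct text before the first child element (analogous to the first text() node):
direct_text = div.text
print(direct_text)                   # 'content '

# Concatenation of all descendant text nodes (analogous to string(...) in XPath):
full_value = "".join(div.itertext())
print(full_value)                    # 'content bold tail'
```

Note how the text inside the nested b element is missing from the direct text but present in the string-value — exactly the behaviour the first answer describes for text() versus string().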
\section{Introduction} \label{sect:1} Let $X$ be an $n$-dimensional Fano manifold and $PSH(X,-K_X)$ the set of (possibly singular) hermitian metrics $\phi$ on the anti-canonical bundle $-K_X$ with positive curvature current \[ \omega_{\phi}:=\frac{\sqrt{-1}}{2 \pi} \partial \bar{\partial} \phi. \] We regard $\phi$ as a psh weight, i.e., a psh function on $K_X \backslash \{\text{0-section}\}$ satisfying the log-homogeneity property (e.g., see \cite{BB10} for more detail). Let ${\mathcal H} (X, -K_X)$ be the subset of $PSH(X,-K_X)$ consisting of all smooth psh weights. We say that a metric $\phi \in {\mathcal H}(X,-K_X)$ is a K\"ahler-Ricci soliton if it satisfies the equation \[ {\rm Ric}(\omega_{\phi}) - \omega_{\phi} = L_{V_{KS}} \omega_{\phi} \] for some holomorphic vector field $V_{KS}$ (\textbf{K\"ahler-Ricci soliton vector field}), where $L_{V_{KS}}$ denotes the Lie derivative with respect to $V_{KS}$. K\"ahler-Ricci solitons arise from geometric analysis, such as the K\"ahler-Ricci flow, and have been studied extensively in recent years. For instance, Chen-Wang \cite{CW14} solved the Hamilton-Tian conjecture, which says that K\"ahler-Ricci flow on a Fano manifold always converges to a singular K\"ahler-Ricci soliton defined on a singular normal Fano variety in the sense of Cheeger-Gromov. In this paper, we study the existence problem of K\"ahler-Ricci solitons. As shown in \cite{TZ02}, a necessary condition for the existence of a K\"ahler-Ricci soliton is the vanishing of \textbf{the modified Futaki invariant}: let ${\rm Aut}_0(X)$ be the identity component of the automorphism group of $X$. Since ${\rm Aut}_0(X)$ is a linear algebraic group \cite{Fuj78}, we obtain a semidirect decomposition \[ {\rm Aut}_0(X) = {\rm Aut}_r (X) \ltimes R_u, \] where ${\rm Aut}_r (X)$ is a reductive algebraic subgroup (uniquely determined up to conjugacy of ${\rm Aut}_0(X)$), which is the complexification of a maximal compact subgroup $K$, and $R_u$ is the unipotent radical. 
We often identify a holomorphic vector field $V$ such that ${\rm Im}(V) \in {\mathfrak k}:={\rm Lie}(K)$ with its imaginary part $\xi_V:={\rm Im}(V) \in {\mathfrak k}$. Let $T_c$ be a real torus defined as the identity component of the center of $K$ and put ${\mathfrak t}_c:={\rm Lie}(T_c)$. Then Tian-Zhu (cf. \cite{TZ00}, \cite{TZ02}) showed that all K\"ahler-Ricci solitons are contained in the space of $K$-invariant smooth psh weights ${\mathcal H}(X,-K_X)^K$ and $\xi_{V_{KS}}$ is contained in ${\mathfrak t}_c$, which is uniquely determined as the minimizer of the following proper strictly convex function on ${\mathfrak k}$: \[ {\mathcal F}(V):=\int_X e^{m_{\phi} (\xi_V)} MA(\phi), \] where $\phi \in {\mathcal H}(X,-K_X)^K$, $MA(\phi):=\frac{\omega_{\phi}^n}{c_1(X)^n}$ and $m_{\phi}$ is the moment map with respect to $\phi$. The function ${\mathcal F}$ is a holomorphic invariant, i.e., independent of a choice of $\phi \in {\mathcal H}(X,-K_X)^K$ and its derivative at $V$: \[ {\rm Fut}_V (W):= - \int_X m_{\phi}(\xi_W) e^{m_{\phi} (\xi_V)} MA(\phi) \] is called the modified Futaki invariant. Thereafter, Berman-Nystr\"om \cite{BN14} generalized the modified Futaki invariant for singular normal Fano varieties with a torus action, and introduced the notion of algebro-geometric stability, called K-polystability. They also showed that any Fano manifolds admitting a K\"ahler-Ricci soliton are K-polystable. The converse is also true at least in the case of K\"ahler-Einstein metrics (cf. \cite{CDS15}, \cite{Tia15}), however, it is still open in general. K-polystability is not the only notion of stability for the existence problem of K\"ahler-Ricci solitons. We study several functionals on the space of hermitian metrics and their asymptotics near the boundary. Then the notion of analytical K-polystability is defined as the strong properness (or coercivity) of the modified Ding functional. 
On the other hand, we also consider its analogue in the space ${\mathcal H}_k$ of hermitian inner products on $H^0(X,-kK_X)$ and study their asymptotics as $k$ tends to infinity. Critical points of the quantized functionals are called balanced metrics. The existence of a balanced metric is closely related to the stability of the projective embedding of $X$ (cf. \cite{Don02}, \cite{Don09}). Berman-Nystr\"om \cite{BN14} showed that there exist a certain kind of balanced metrics, called \textbf{quantized K\"ahler-Ricci solitons} and this sequence of metrics converges to a K\"ahler-Ricci soliton as $k$ tends to infinity under some strong assumptions. The main purpose of this paper is to perturb the sequence of quantized K\"ahler-Ricci solitons and show their existence and convergence under weaker assumptions. First, we construct a sequence $\{V_k\}$ of holomorphic vector fields which approximates $V_{KS}$ and fits to the quantized settings. More concretely, this sequence is given as the following: \begin{theorem} \label{theo:1.1} Let $X$ be a Fano manifold and $K$ be a maximal compact subgroup of ${\rm Aut}_r (X)$. Then for sufficiently large $k$, there exists a holomorphic vector field $V_k$ such that its imaginary part is contained in ${\mathfrak t}_c$ and the corresponding quantized modified Futaki invariant at level $k$ vanishes on ${\mathfrak k}^{\mathbb C}$. The vector field $V_k$ is characterized as the unique minimizer of the quantization of the function ${\mathcal F}|_{{\mathfrak t}_c}$ at level $k$ and converges to $V_{KS}$ as $k \rightarrow \infty$ in the usual topology of the finite dimensional vector space ${\mathfrak t}_c$. 
\end{theorem} Second, we introduce the quantized K\"ahler-Ricci solitons attached to this sequence and show that: \begin{theorem} \label{theo:1.2} Assume that $(X, V_{KS})$ is strongly analytically K-polystable (i.e., the corresponding modified Ding functional is coercive modulo ${\rm Aut}_0(X, V_{KS})$), then there exists a quantized K\"ahler-Ricci soliton attached to $V_k$ if $k$ is sufficiently large, which is unique modulo the action of ${\rm Aut}_0(X, V_{KS})$ and as $k \rightarrow \infty$, the corresponding Bergman metrics on $X$ converge weakly, modulo automorphisms, to a K\"ahler-Ricci soliton on $(X,V_{KS})$. \end{theorem} As a corollary, we have the following: \begin{corollary} \label{coro:1.1} Assume that $X$ is strongly analytically K-polystable (i.e., the Ding functional is coercive modulo ${\rm Aut}_0 (X)$), then there exists a quantized K\"ahler-Einstein metric attached to $V_k \rightarrow 0$ if $k$ is sufficiently large, which is unique modulo the action of ${\rm Aut}_0 (X)$ and as $k \rightarrow \infty$, the corresponding Bergman metrics on $X$ converge weakly, modulo automorphisms, to a K\"ahler-Einstein metric on $X$. \end{corollary} The crucial point is that in our results, we need not assume the vanishing of all the higher order (modified) Futaki invariants, which is, in the case of $V_{KS} \equiv 0$, an obstruction to the asymptotic Chow semi-stability (cf. \cite{Fut04}). Thus we can apply our results to even asymptotically Chow unstable Fano manifolds like \cite{OSY12}. Now we describe the content of this paper. In the next section, we review the basics of several functionals on the space of smooth psh weights ${\mathcal H}(X, -K_X)$. The standard reference for this section is \cite{BN14}. However, in order to prove the convergence of quantized metrics in Theorem \ref{theo:1.2}, we need to extend functionals for (possibly singular) psh weights, which requires a lot of knowledge about pluripotential theory (e.g. 
\cite{BBGZ13}, \cite{BE10}, \cite{BN14}). The reader should consult these references as needed. In Section \ref{sect:3}, we introduce the quantization ${\rm Fut}_{V, k}$ of the modified Futaki invariant and show that the functional ${\mathcal F}_k$ restricted to ${\mathfrak t}_c$ is proper and strictly convex, and hence has a unique minimizer $V_k$. This observation and the quantization formula (cf. Lemma \ref{lemm:3.2}) yield Theorem \ref{theo:1.1}. Then we review the basic properties of the quantized functionals on ${\mathcal H}_k$ studied in \cite{BN14} and prove Theorem \ref{theo:1.2}. The heart of the proof of Theorem \ref{theo:1.2} consists mainly of two ideas:\\ (1) While Berman-Nystr\"om considered the torus $T_{KS}$ generated by the K\"ahler-Ricci soliton vector field $V_{KS}$, we consider the identity component of the center $T_c (\supset T_{KS})$ and the space of $T_c$-invariant hermitian metrics ${\mathcal H}(X, -K_X)^{T_c}$. This setting seems natural since all the $\xi_{V_k}$ lie in its Lie algebra ${\mathfrak t}_c$ by Theorem \ref{theo:1.1}.\\ (2) The condition ${\rm Fut}_{V_{KS}} \equiv 0$ (resp. ${\rm Fut}_{V_k, k} \equiv 0$) leads to the ${\rm Aut}_0 (X, V_{KS})$-invariance of the functional ${\mathcal D}_{g_{V_{KS}}}$ (resp. ${\mathcal D}_{g_{V_k}} ^{(k)}$). Hence the problem can be reduced to estimating the difference ${\mathcal D}_{g_{V_{KS}}}^{(k)} - {\mathcal D}_{g_{V_k}} ^{(k)}$, which is linear along geodesics. On the other hand, the standard exhaustion function $J^{(k)}$ has at least linear growth along geodesics. Therefore the absolute value of ${\mathcal D}_{g_{V_{KS}}}^{(k)} - {\mathcal D}_{g_{V_k}} ^{(k)}$ is bounded above by an affine function $\epsilon_k J^{(k)} + \epsilon_k'$ of $J^{(k)}$ with some positive numbers $\epsilon_k \rightarrow 0$ and $\epsilon_k' \rightarrow 0$. This leads to the coercivity of ${\mathcal D}_{g_{V_k}} ^{(k)}$ and therefore to the existence of the quantized K\"ahler-Ricci soliton attached to $V_k$.
Finally, we mention a result for extremal K\"ahler metrics proved by Mabuchi \cite{Mab09}, which says that any polarized manifold admitting an extremal K\"ahler metric is asymptotically Chow stable relative to an algebraic torus. It seems that such a stability for K\"ahler-Ricci solitons has never been discussed, but it is known that relative Chow stability leads to the existence of ``polybalanced metrics'' (cf. \cite{Mab11}). In this sense, Theorem \ref{theo:1.2} can indirectly be seen as an analogue of Mabuchi's result. \begin{acknowledgements} The author would like to express his gratitude to Professor Ryoichi Kobayashi for his advice on this article, and to the referee for useful suggestions that helped him to improve the original manuscript. The author is supported by Grant-in-Aid for JSPS Fellows Number 25-3077. \end{acknowledgements} \section{Functionals on ${\mathcal H}(X,-K_X)^T$ and analytic K-polystability} \label{sect:2} Let $X$ and $V_{KS}$ be as in Section \ref{sect:1}. Then we see that $V_{KS}$ generates a torus action on $(X, -K_X)$: \begin{lemma}[\cite{BN14}, Lemma 2.13] \label{lemm:2.1} There is a real torus $T_{KS} \subset {\rm Aut}_r (X)$ which acts on $(X, -K_X)$ such that the imaginary part of $V_{KS}$ can be identified with an element $\xi_{V_{KS}} \in {\mathfrak t}_{KS}:={\rm Lie}(T_{KS})$ and ${\mathcal H}(X,-K_X)^{V_{KS}} = {\mathcal H}(X,-K_X)^{T_{KS}}$. \end{lemma} Let $T \subset {\rm Aut}_r (X)$ be an $m$-dimensional real torus acting on $(X, -K_X)$. For $\phi \in {\mathcal H}(X,-K_X)^T$, we denote the moment map with respect to $\phi$ by \[ m_{\phi} \colon X \rightarrow m_{\phi}(X)=:P \subset {\mathfrak t}^* \simeq {\mathbb R}^m, \] where we identify ${\mathfrak t}^* \simeq {\mathbb R}^m$ using an inner product on ${\mathfrak t}$.
The image $P$ is a compact convex polytope of dimension $m$, characterized as the support of the Duistermaat-Heckman measure \[ \nu^T:=(m_{\phi})_{\ast} MA(\phi), \] which is independent of the choice of $\phi \in {\mathcal H}(X,-K_X)^T$ (cf. Proposition \ref{prop:3.1}). Now we recall several functionals on ${\mathcal H}(X,-K_X)^T$ which play a central role in the study of K\"ahler-Ricci solitons. Let $\phi_0 \in {\mathcal H}(X,-K_X)^T$ be a reference metric and $g$ a positive continuous function on the moment polytope $P$. We normalize $g$ so that $g \nu^T$ is a probability measure on $P$. Following \cite[Section 2.4 and 2.6]{BN14}, we define the $g$-Monge-Amp\`ere energy by the formula \[ d {{\mathcal E}_g}|_{\phi} (\dot{\phi})= \int_X \dot{\phi} MA_g (\phi), \;\; {\mathcal E}_g (\phi_0) = 0, \] where $MA_g (\phi) := g(m_{\phi}) MA(\phi)$ denotes the $g$-Monge-Amp\`ere measure. Then the functional ${\mathcal E}_g$ satisfies the scaling property: \[ {\mathcal E}_g (\phi + c) = {\mathcal E}_g (\phi) + c \] for all $\phi \in {\mathcal H}(X, -K_X)^T$ and $c \in {\mathbb R}$, which follows since $\int_X MA_g (\phi) = \int_P g \, d\nu^T = 1$ by our normalization of $g$. Let $\mu_{\phi}$ be a measure on $X$ given by the local expression \[ \mu_{\phi} = e^{-\phi_U} (\sqrt{-1})^n dz_1 \wedge d\bar{z}_1 \wedge \cdots \wedge dz_n \wedge d\bar{z}_n, \] where $(U; z_1, \ldots, z_n)$ denotes a holomorphic local coordinate system and $\phi_U:= \log \left| \frac{\partial}{\partial z_1} \wedge \cdots \wedge \frac{\partial}{\partial z_n} \right|_{\phi}^2$. We define functionals $J_g$ and ${\mathcal D}_g$ by \[ J_g (\phi) = - {\mathcal E}_g(\phi) + {\mathcal L}_{\mu_0} (\phi), \;\; {\mathcal L}_{\mu_0} (\phi):= \int_X (\phi-\phi_0)d\mu_0, \] \[ {\mathcal D}_g (\phi) = - {\mathcal E}_g (\phi) + {\mathcal L} (\phi), \;\; {\mathcal L} (\phi):=-\log \int_X d\mu_{\phi}, \] where $\mu_0:=MA(\phi_0)$ is a fixed probability measure on $X$. One can easily see that $J_g$ and ${\mathcal D}_g$ are invariant under scaling of metrics, i.e., they are functionals on ${\mathcal H}(X, -K_X)^T/ {\mathbb R}$.
When $g \equiv 1$, we simply write these functionals as ${\mathcal E}$, $J$ and ${\mathcal D}$ respectively. \begin{remark} \label{rema:2.1} We can extend these functionals to the space ${\mathcal E}^1(X,-K_X)^T$ of all $T$-invariant (possibly singular) psh weights with finite energy (cf. \cite[Section 2 and Section 3]{BN14}). Moreover, the functional $J$ defines an exhaustion function on ${\mathcal E}^1(X,-K_X)/ {\mathbb R}$ in the sense that each level set of $J$ is compact in the $L^1$-topology (cf. \cite[Lemma 3.3]{BBGZ13}). \end{remark} Next we set $T=T_{KS}$ and $g=g_{V_{KS}}:= \exp (\langle \xi_{V_{KS}}, \cdot \; \rangle)$. Then the corresponding functional ${\mathcal D}_g={\mathcal D}_{g_{V_{KS}}}$ is called the modified Ding functional. By \cite[Lemma 3.4]{BN14}, we have \[ \frac{d}{dt} {\mathcal D}_{g_{V_{KS}}}(\exp(tW)^* \phi) = {\rm Fut}_{V_{KS}} (W). \] Moreover, critical points of ${\mathcal D}_{g_{V_{KS}}}$ are K\"ahler-Ricci solitons with respect to $V_{KS}$. \begin{definition}[\cite{BN14}, Section 3.6] \label{defi:2.1} We say that a pair $(X, V_{KS})$ is strongly analytically K-polystable if the modified Ding functional ${\mathcal D}_{g_{V_{KS}}}$ is coercive modulo ${\rm Aut}_0 (X,V_{KS})$, i.e., \[ {\mathcal D}_{g_{V_{KS}}} (\phi) \geq \delta \underset{F \in {\rm Aut}_0 (X, V_{KS})}{\rm inf} J (F^*\phi) -C, \;\; \phi \in {\mathcal H}(X,-K_X)^{T_{KS}} \] for some positive constants $\delta$ and $C$, where ${\rm Aut}_0 (X, V_{KS})$ denotes the subgroup of ${\rm Aut}_0 (X)$ consisting of elements which commute with the action generated by $V_{KS}$. \end{definition} \begin{theorem}[\cite{BN14}, Theorem 3.11] \label{theo:2.1} If a pair $(X, V_{KS})$ is strongly analytically K-polystable, then $(X, V_{KS})$ admits a K\"ahler-Ricci soliton (as a unique minimizer of the modified Ding functional up to the action of ${\rm Aut}_0 (X, V_{KS})$).
\end{theorem} \section{The quantized setting} \label{sect:3} \subsection{The quantization of the ${\mathcal F}$-functional, the modified Futaki invariant and the Duistermaat-Heckman measure} \label{sect:3.1} Let $X$ and $T$ be as in Section \ref{sect:2}. \begin{lemma} \label{lemm:3.1} Assume that $m:={\rm dim}\, T$ is at least $1$. Then the polytope $P$ contains the origin in its interior ${\rm int}(P)$. \end{lemma} \begin{proof} Since the lift to $-K_X$ is canonical, we have an equation \[ -\Delta_{\partial} m_{\phi} (\xi_V) + m_{\phi} (\xi_V) + V(\kappa_{\phi}) = 0 \] for all $\xi_V \in {\mathfrak t}$, where $\kappa_{\phi}$ is the function defined by \[ {\rm Ric}(\omega_{\phi}) - \omega_{\phi} = \frac{\sqrt{-1}}{2 \pi} \partial \bar{\partial} \kappa_{\phi}. \] Integrating by parts, we find that \begin{equation} \int_X m_{\phi} (\xi_V) e^{\kappa_{\phi}} MA(\phi) = 0. \label{eq:3.1} \end{equation} Since $m \geq 1$, $m_{\phi}$ is not a constant. Thus the equation \eqref{eq:3.1} implies that for any $\xi_V$, the inequality $m_{\phi}(\xi_V)>0$ holds on some nonempty open subset of $X$. Now suppose that $0 \not\in {\rm int}(P)$. Then we can choose an element $\xi \in {\mathbb R}^m$ so that $H_{\xi} \cap {\rm int}(P) = \emptyset$, where $H_{\xi}$ is the hyperplane orthogonal to $\xi$ which contains the origin. Then either $m_{\phi} (\xi)$ or $m_{\phi} (-\xi)$ is non-positive on $X$. This is a contradiction. \end{proof} We define the functions ${\mathcal F}_k$ and ${\rm Fut}_{V,k}$ as \begin{eqnarray} {\mathcal F}_k (W):= k {\rm Trace} (e^{W/k})|_{H^0(X,-kK_X)}, \\ {\rm Fut}_{V,k} (W) := - \left. \frac{d}{dt} \right|_{t=0} {\mathcal F}_k (V+tW).
\end{eqnarray} We set \[ N_k:= {\rm dim} H^0(X,-kK_X), \] then these functions give the quantization of ${\mathcal F}$ and ${\rm Fut}_V$: \begin{lemma}[\cite{BN14}, Proposition 4.7 or \cite{Tak14}, Proposition 2.8] \label{lemm:3.2} Let $V$ be a holomorphic vector field generating a torus action and $W$ a holomorphic vector field generating a ${\mathbb C}^*$-action and commuting with $V$. Then we have identities \[ {\mathcal F}(V) = \lim_{k \rightarrow \infty} \frac{{\mathcal F}_k (V)}{k N_k}, \label{eq:3.2} \] \[ {\rm Fut}_V (W) = \lim_{k \rightarrow \infty} \frac{1}{kN_k} {\rm Fut}_{V,k}(W). \label{eq:3.3} \] \end{lemma} If we apply the equivariant Riemann-Roch formula to ${\rm Fut}_{V, k} (W)$, we have an expansion \[ {\rm Fut}_{V,k} (W)={\rm Fut}_V ^{(0)} (W) k^{n+1} + {\rm Fut}_V ^{(1)} (W) k^n + \cdots, \] where ${\rm Fut}_V ^{(i)} (W)$ is the $i$-th order modified Futaki invariant introduced in \cite[Section 4.4]{BN14}. \begin{lemma} \label{lemm:3.3} The function ${\mathcal F}_k |_{{\mathfrak t}_c}$ is a proper strictly convex function if $k$ is sufficiently large. \end{lemma} \begin{proof} We use the following proposition: \begin{proposition}[\cite{BN14}, Proposition 4.1] \label{prop:3.1} Let $P_k:=\{ \lambda_i^{(k)} \} \subset {\mathbb Z}^m$ be the set of all weights for the action of the complexified torus $T^{\mathbb C}$ on $H^0(X,-kK_X)$, i.e., there is a decomposition \[ H^0(X,-kK_X) = \bigoplus_{\lambda_i^{(k)} \in P_k} E_{\lambda_i^{(k)}}. \] Then the spectral measure: \[ \nu_k:=\frac{1}{N_k} \sum_{i=1}^{N_k} \delta_{\lambda_i^{(k)}/k} \] supported on $P_k / k$ converges to the Duistermaat-Heckman measure $\nu^T$ weakly as $k \rightarrow \infty$, where $\delta_{\lambda_i^{(k)}/k}$ denotes the Dirac measure at $\lambda_i^{(k)}/k$. In particular, $\nu^T$ does not depend on a choice of $\phi \in {\mathcal H}(X, -K_X)^T$. 
\end{proposition} For any $\xi_V, \xi_W (\neq 0) \in {\mathfrak t}_c$, the functional ${\mathcal F}_k$ along the line $\xi_V + t \xi_W$ ($t \in {\mathbb R}$) can be written in the form \[ {\mathcal F}_k (V+ tW)= k \sum_{i=1}^{N_k} \exp(v_i^{(k)}/k + t w_i^{(k)}/k), \] where $v_i^{(k)}:=\langle \xi_V, \lambda_i^{(k)} \rangle$ and $w_i^{(k)}:=\langle \xi_W, \lambda_i^{(k)} \rangle$ are joint eigenvalues of the commuting action generated by ${\rm Re}(V)$ and ${\rm Re}(W)$. Differentiating twice gives \[ \frac{d^2}{dt^2} {\mathcal F}_k (V+ tW)= \frac{1}{k} \sum_{i=1}^{N_k} (w_i^{(k)})^2 \exp(v_i^{(k)}/k + t w_i^{(k)}/k) \geq 0, \] with equality only if all the $w_i^{(k)}$ vanish. Since the limit measure $\nu^T$ in Proposition \ref{prop:3.1} is supported on the full-dimensional polytope $P$, the weights $\lambda_i^{(k)}/k$ cannot all lie on a single hyperplane through the origin for $k$ sufficiently large, so some $w_i^{(k)} \neq 0$. Hence the functional ${\mathcal F}_k (V+ tW)$ of $t$ is strictly convex for any $\xi_V, \xi_W \in {\mathfrak t}_c$ if $k$ is sufficiently large, and therefore ${\mathcal F}_k$ is strictly convex. In order to prove the properness, let $\{ \xi_{W_j} \} \subset {\mathfrak t}_c \simeq {\mathbb R}^m$ be any sequence such that $|\xi_{W_j}| \rightarrow \infty$ as $j \rightarrow \infty$. For $\epsilon>0$, let $P_{\epsilon} \subset P$ be the compact convex polytope whose faces are parallel to those of $P$ at distance $\epsilon$. By Lemma \ref{lemm:3.1}, we can choose $\epsilon>0$ so that ${\rm int}(P_{\epsilon})$ contains the origin. Then Proposition \ref{prop:3.1} implies that there exists $k_0$ such that for all $k \geq k_0$ and $\xi_W \in {\mathbb R}^m$, there exists an eigenvalue $\lambda_i^{(k)}$ satisfying \[ \lambda_i^{(k)}/k \in P \setminus P_{\epsilon}, \] \[ \cos ({\rm angle}(\xi_W, \lambda_i^{(k)})) \geq 1- \epsilon. \] For each $\xi_{W_j}$, we choose an eigenvalue $\lambda_{j,i(j)}^{(k)}$ satisfying the above conditions. Then we obtain \[ w_{j,i(j)}^{(k)} := \langle \lambda_{j,i(j)}^{(k)}, \xi_{W_j} \rangle \geq k|\xi_{W_j}| \cdot {\rm inf}_{\xi \in \partial P_{\epsilon}} |\xi| \cdot (1- \epsilon) \rightarrow \infty \] as $j \rightarrow \infty$. Hence we have \[ {\mathcal F}_k (W_j) = k \sum_{i=1}^{N_k} \exp (w_{j,i}^{(k)}/k) \geq k \exp (w_{j,i(j)}^{(k)}/k) \rightarrow \infty \] as $j \rightarrow \infty$. This completes the proof of Lemma \ref{lemm:3.3}.
\end{proof} Let $V_k$ be the unique minimizer of ${\mathcal F}_k |_{{\mathfrak t}_c}$. \begin{proof}[Proof of Theorem \ref{theo:1.1}] By Lemmas \ref{lemm:3.2} and \ref{lemm:3.3}, we find that the unique minimizer $V_k$ converges to the unique minimizer of ${\mathcal F}$, i.e., the K\"ahler-Ricci soliton vector field, as $k \rightarrow \infty$. Since $V_k \in {\mathfrak t}_c^{\mathbb C}$ and ${\mathcal F}_k$ is adjoint invariant (as is ${\rm Trace}$ in the defining equation of ${\mathcal F}_k$), we have \[ {\rm Fut}_{V_k, k} ({\rm Ad}_{F} W)= - \left. \frac{d}{dt} \right|_{t=0} {\mathcal F}_k (V_k + t {\rm Ad}_F W) = - \left. \frac{d}{dt} \right|_{t=0} {\mathcal F}_k (V_k + t W) = {\rm Fut}_{V_k, k} (W) \] for any $F \in {\rm Aut}_r (X)$ and $W \in {\mathfrak k}^{\mathbb C}$. In particular, setting $F_s := \exp(s V)$ ($V \in {\mathfrak k}^{\mathbb C}$) and differentiating at $s=0$ yields \[ {\rm Fut}_{V_k, k}([V,W])=0. \] Moreover, the decomposition ${\mathfrak k}^{\mathbb C}={\mathfrak t}_c^{\mathbb C} \oplus [{\mathfrak k}^{\mathbb C},{\mathfrak k}^{\mathbb C}]$ yields that ${\rm Fut}_{V_k, k}$ vanishes on the entire space ${\mathfrak k}^{\mathbb C}$. This completes the proof. \end{proof} \subsection{The $g$-Bergman measure and quantized functionals} \label{sect:3.2} Let $X$ be a Fano manifold, $\mu$ a $T$-invariant measure, and ${\mathcal H}_k$ the space of hermitian inner products on $H^0(X, -kK_X)$.
We define two operators: \[ Hilb_{k, \mu, g} \colon {\mathcal H}(X,-K_X)^T \rightarrow {\mathcal H}_k^T, \] \[ FS_k \colon {\mathcal H}_k^T \rightarrow {\mathcal H}(X,-K_X)^T \] by the formulas: \begin{equation} ||s_i^{(k)}||_{Hilb_{k, \mu, g} (\phi)}^2 := g(\lambda_i^{(k)}/k)^{-1} \int_X |s_i^{(k)}|^2 e^{-k \phi} d \mu \;\;\; \text{for $s_i^{(k)} \in E_{\lambda_i^{(k)}}$}, \label{eq:3.4} \end{equation} \begin{equation} FS_k (H):= \frac{1}{k} \log \left( \frac{1}{N_k} \sum_{i=1}^{N_k} |s_i|^2 \right), \label{eq:3.5} \end{equation} where we denote an $H$-orthonormal basis by $\{s_i\}$ and extend $Hilb_{k, \mu, g} (\phi)$ to a hermitian inner product on the space $H^0(X, -kK_X)$ by requiring that the different subspaces $E_{\lambda_i^{(k)}}$ are orthogonal to each other. We note that the map $FS_k$ is independent of the choice of $H$-orthonormal basis $\{s_i\}$. Actually, $FS_k (H)$ is just the pull-back of the Fubini-Study metric with respect to $H$. In the case when $\mu=\mu_{\phi}$, we drop the explicit dependence on $\mu$ from the notation and simply write \[ Hilb_{k, g} (\phi) := Hilb_{k, \mu_\phi, g} (\phi). \] Let $H_0:=Hilb_{k, \mu_0} (\phi_0) \in {\mathcal H}_k^{T_c}$ ($\phi_0 \in {\mathcal H}(X,-K_X)^{T_c}$) be a reference metric. We normalize $g$ so that $g \nu_k$ is a probability measure on $P$. We define the quantization of the functional ${\mathcal E}_g$ as \begin{equation} {\mathcal E}_g^{(k)} (H) = \sum_{\lambda_i^{(k)} \in P_k} g(\lambda_i^{(k)}/k) {\mathcal E}_{E_{\lambda_i^{(k)}}}^{(k)} (H), \;\; {\mathcal E}_{E_{\lambda_i^{(k)}}}^{(k)} (H) = - \frac{1}{kN_k} \log \det H|_{E_{\lambda_i^{(k)}}}, \label{eq:3.6} \end{equation} where we compute the determinant in reference to the metric $H_0$.
We have an isomorphism \[ {\mathcal H}_k \simeq GL(N_k, {\mathbb C})/U(N_k) \] with respect to $H_0$, which implies that ${\mathcal H}_k$ is a Riemannian symmetric space and therefore geodesics are given as the composition of the exponential map and the projection $GL(N_k,{\mathbb C}) \rightarrow GL(N_k,{\mathbb C})/U(N_k)$. Let $s_i^{(k)} \in E_{\lambda_i^{(k)}}$ be an $H_0$-orthonormal and $H_t$-orthogonal basis. Then any geodesic can be represented by $H_t(s_i^{(k)}, s_i^{(k)}) = e^{-\mu_i^{(k)}t} H_0 (s_i^{(k)}, s_i^{(k)})$ for some $\mu_i^{(k)} \in {\mathbb R}$. Thus we have \[ \frac{d}{dt} {\mathcal E}_g^{(k)} (H_t) = \frac{1}{kN_k} \sum_{i=1}^{N_k} g(\lambda_i^{(k)}/k) \mu_i^{(k)}. \] Hence the functional ${\mathcal E}_g^{(k)}$ is linear along geodesics. We define the quantizations of the functionals $J_g$ and ${\mathcal D}_g$ as follows: \begin{equation} J_g^{(k)} (H) := - {\mathcal E}_g^{(k)}(H) + {\mathcal L}_{\mu_0}(FS_k(H)), \label{eq:3.7} \end{equation} \begin{equation} {\mathcal D}_g^{(k)} (H) := - {\mathcal E}_g^{(k)}(H) + {\mathcal L}(FS_k(H)). \label{eq:3.8} \end{equation} These functionals are invariant under scaling of metrics and descend to functionals on the space ${\mathcal H}_k^{T_c}/ {\mathbb R}$. When $g \equiv 1$, we simply write these functionals as ${\mathcal E}^{(k)}$, $J^{(k)}$ and ${\mathcal D}^{(k)}$ respectively. Now we will explain that the quantized functionals ${\mathcal E}_g^{(k)}$, $J_g^{(k)}$ and ${\mathcal D}_g^{(k)}$ are indeed quantizations of the corresponding functionals on ${\mathcal H}(X, -K_X)^T$.
We begin by recalling the $g$-Bergman function: \begin{equation} \rho_{k, \mu_0, g} (\phi):= \sum_{\lambda_i^{(k)} \in P_k} g(\lambda_i^{(k)}/k) \rho_{k, \mu_0, E_{\lambda_i^{(k)}}} (\phi) \label{eq:3.9} \end{equation} and the $g$-Bergman measure \begin{equation} \beta_{k, \mu_0, g} (\phi):= \frac{1}{N_k} \rho_{k, \mu_0, g} (\phi) \cdot \mu_0, \label{eq:3.10} \end{equation} where $\rho_{k, \mu_0, E_{\lambda_i^{(k)}}} (\phi)$ is the ordinary Bergman function of the subspace $E_{\lambda_i^{(k)}}$. We use the following convergence of measures: \begin{proposition}[\cite{BN14}, Proposition 4.4] \label{prop:3.2} Assume that $g$ is smooth. Then for any $\phi \in {\mathcal H}(X,-K_X)^T$, we have the uniform convergence \[ \beta_{k, \mu_0, g} (\phi) \rightarrow MA_g (\phi). \] \end{proposition} Now we are ready to prove the quantization formula (cf. \cite[Proposition 4.5]{BN14}). \begin{proposition} \label{prop:3.3} The following pointwise convergences hold as $k \rightarrow \infty$: \[ {\mathcal E}_g^{(k)} (Hilb_{k, \mu_0} (\phi)) \rightarrow {\mathcal E}_g (\phi), \] \[ J_g^{(k)} (Hilb_{k, \mu_0} (\phi)) \rightarrow J_g (\phi), \] \[ {\mathcal D}_g^{(k)} (Hilb_{k, \mu_0} (\phi)) \rightarrow {\mathcal D}_g (\phi). \] \end{proposition} \begin{proof} A direct computation yields that \begin{eqnarray*} {\mathcal E}_g^{(k)} (Hilb_{k, \mu_0} (\phi)) &=& \int_0^{1} \left( \frac{d}{dt}{\mathcal E}_g^{(k)} (Hilb_{k, \mu_0} (t \phi + (1-t)\phi_0)) \right) dt \\ &\hbox{}& (\text{because ${\mathcal E}_g^{(k)} (Hilb_{k, \mu_0} (\phi_0)) = 0$})\\ &=& \int_0^{1} \int_X (\phi- \phi_0) \beta_{k, \mu_0, g} (t \phi + (1-t) \phi_0) \, dt \\ &\hbox{}& (\text{by \cite[Proposition 4.5]{BN14}}). \end{eqnarray*} Combining this with Proposition \ref{prop:3.2}, we have \[ {\mathcal E}_g^{(k)} (Hilb_{k, \mu_0} (\phi)) \rightarrow \int_0^{1} \int_X (\phi- \phi_0) MA_g (t \phi + (1-t) \phi_0) \, dt = {\mathcal E}_g (\phi).
\] Moreover, by definition, \begin{equation} FS_k \circ Hilb_{k, \mu_0, g} (\phi) - \phi = \frac{1}{k} \log \left( \frac{1}{N_k} \rho_{k, \mu_0, g} (\phi) \right). \label{eq:3.11} \end{equation} Thus, using Proposition \ref{prop:3.2} again, we have the uniform convergence \[ FS_k \circ Hilb_{k, \mu_0, g} (\phi) \rightarrow \phi. \] The last two assertions follow from the defining equations \eqref{eq:3.7}, \eqref{eq:3.8} and the pointwise convergence ${\mathcal E}_g^{(k)} \circ Hilb_{k, \mu_0} \rightarrow {\mathcal E}_g$. \end{proof} \subsection{Modification of quantized K\"ahler-Ricci solitons} \label{sect:3.3} We adopt the same notation as in Section \ref{sect:3.2}. We set $T=T_{KS}$. \begin{definition}[\cite{BN14}, Section 4] \label{defi:3.1} We say that a metric $H_k \in {\mathcal H}_k^{T_{KS}}$ is a quantized K\"ahler-Ricci soliton if it satisfies the equation \[ Hilb_{k, g_{V_{KS}}} \circ FS_k (H_k) = H_k. \] \end{definition} Berman-Nystr\"om showed the following: \begin{theorem}[\cite{BN14}, Theorem 1.7] \label{theo:3.1} Assume that $(X, V_{KS})$ is strongly analytically K-polystable (i.e., the corresponding modified Ding functional is coercive modulo ${\rm Aut}_0(X, V_{KS})$) and that all the higher order modified Futaki invariants of $(X, V_{KS})$ vanish. Then there exists a quantized K\"ahler-Ricci soliton if $k$ is sufficiently large, which is unique modulo the action of ${\rm Aut}_0(X, V_{KS})$ and as $k \rightarrow \infty$, the corresponding Bergman metrics on $X$ converge weakly, modulo automorphisms, to a K\"ahler-Ricci soliton on $(X,V_{KS})$. \end{theorem} We want to weaken the assumption in the above theorem. For this, we set $T=T_c$ and introduce a slight modification of the notion of quantized K\"ahler-Ricci solitons: \begin{definition} \label{defi:3.2} Let $\{V_k\}$ be the sequence of holomorphic vector fields constructed in Section \ref{sect:3.1}.
We say that a metric $H_k \in {\mathcal H}_k^{T_c}$ is a quantized K\"ahler-Ricci soliton attached to $V_k$ if it satisfies the equation \[ Hilb_{k, g_{V_k}} \circ FS_k (H_k) = H_k. \] \end{definition} Then the quantized K\"ahler-Ricci solitons attached to $V_k$ are characterized as critical points of the quantization ${\mathcal D}_{g_{V_k}}^{(k)}$ of the modified Ding functional. Moreover, by \cite[Proposition 4.7]{BN14}, we have \[ \frac{d}{dt} {\mathcal D}_{g_{V_k}}^{(k)} (\exp(tW)^*H_0)=\frac{{\rm Fut}_{V_k, k} (W)}{k N_k}. \] \begin{proof}[Proof of Theorem \ref{theo:1.2}] This proof is mostly based on the original proof given by Berman-Nystr\"om; the reader should refer to \cite[Theorem 1.7]{BN14}. The coercivity of ${\mathcal D}_{g_{V_{KS}}}$ implies that the inequality \[ {\mathcal D}_{g_{V_{KS}}} (FS_k(H)) \geq \delta J(FS_k(F^*H))-C \] holds for some $F \in {\rm Aut}_0 (X,V_{KS})$, where we note that the two operations $FS_k$ and $F^*$ commute. Then the LHS can be written as \begin{eqnarray*} {\mathcal D}_{g_{V_{KS}}} (FS_k(H)) &=& {\mathcal D}_{g_{V_{KS}}} (FS_k(F^*H)) \;\;\; (\text{because ${\rm Fut}_{V_{KS}} \equiv 0$}) \\ &=& J_{g_{V_{KS}}} (FS_k (F^*H)) +({\mathcal L} - {\mathcal L}_{\mu_0})(FS_k (F^*H)). \end{eqnarray*} On the other hand, since $g_{V_{KS}}$ is bounded, we obtain \[ \delta J(FS_k(F^*H))-C \geq \delta' J_{g_{V_{KS}}}(FS_k(F^*H))-C \] for sufficiently small $\delta'>0$ depending only on $g_{V_{KS}}$. Thus we obtain \begin{equation} J_{g_{V_{KS}}}(FS_k(F^*H))(1-\delta') + ({\mathcal L} - {\mathcal L}_{\mu_0})(FS_k (F^*H)) \geq -C. \label{eq:3.12} \end{equation} Now we use the following lemma, which compares the two functionals $J_{g_{V_{KS}}} \circ FS_k$ and $J_{g_{V_{KS}}}^{(k)}$: \begin{lemma}[\cite{BN14}, Lemma 4.10] \label{lemm:3.4} There exists a sequence $\delta_k \rightarrow 0$ of positive numbers such that \[ J_{g_{V_{KS}}}(FS_k(H)) \leq (1+ \delta_k) J_{g_{V_{KS}}}^{(k)}(H) + \delta_k.
\] \end{lemma} Hence if we take $k$ sufficiently large so that $(1+\delta_k)(1-\delta') \leq 1-\frac{\delta'}{2}$ and $\delta_k(1-\delta') \leq C$ hold, we have \begin{equation} J_{g_{V_{KS}}}(FS_k(F^*H))(1-\delta') \leq J_{g_{V_{KS}}}^{(k)} (F^*H) \left( 1-\frac{\delta'}{2} \right) + C. \label{eq:3.13} \end{equation} Thus we obtain \begin{eqnarray*} {\mathcal D}_{g_{V_{KS}}}^{(k)} (F^*H) &=& J_{g_{V_{KS}}}^{(k)} (F^*H) + ({\mathcal L} - {\mathcal L}_{\mu_0})(FS_k (F^*H)) \\ &\geq& \frac{\delta'}{2} J_{g_{V_{KS}}}^{(k)} (F^*H) -2C \;\;\;(\text{by \eqref{eq:3.12} and \eqref{eq:3.13}}) \\ &\geq& \frac{\delta''}{2} J^{(k)} (F^*H) -2C \;\;\;(\text{for some $\delta''>0$, since $g_{V_{KS}}$ is bounded}). \end{eqnarray*} Now we consider the difference of the two modified Ding functionals: \[ {\mathcal D}_{g_{V_{KS}}}^{(k)}-{\mathcal D}_{g_{V_k}}^{(k)}=-{\mathcal E}_{g_{V_{KS}}}^{(k)}+{\mathcal E}_{g_{V_k}}^{(k)}, \] which is linear along geodesics, as explained above. On the other hand, the functional $J^{(k)}$ is an exhaustion function on ${\mathcal H}_k^{T_c}/ {\mathbb R}$ and has at least linear growth along geodesics (cf. \cite[Proposition 3]{Don09} or \cite[Lemma 7.6]{BBGZ13}). The following lemma is inspired by these observations: \begin{lemma} \label{lemm:3.5} The inequality \begin{equation} -\epsilon_k J^{(k)} - \epsilon_k' \leq {\mathcal D}_{g_{V_{KS}}}^{(k)}-{\mathcal D}_{g_{V_k}}^{(k)} \leq \epsilon_k J^{(k)} + \epsilon_k' \label{eq:3.14} \end{equation} holds for some sequences of positive numbers $\epsilon_k \rightarrow 0$ and $\epsilon_k' \rightarrow 0$. \end{lemma} \begin{proof} We set $\epsilon_k:= \sup_P |g_{V_{KS}} - g_{V_k}|+2^{-k}$ and define the functional ${\mathcal E}_{\epsilon_k + g_{V_{KS}} - g_{V_k}}^{(k)}:= \epsilon_k {\mathcal E}^{(k)} + {\mathcal E}_{g_{V_{KS}}}^{(k)} - {\mathcal E}_{g_{V_k}}^{(k)}$. Then we have $\epsilon_k \rightarrow 0$ since $V_k \rightarrow V_{KS}$ and $g_{V_k} \rightarrow g_{V_{KS}}$ uniformly on $P$.
By the scaling invariance of \eqref{eq:3.14}, we may assume that $H$ is normalized by ${\mathcal E}_{\epsilon_k + g_{V_{KS}} - g_{V_k}}^{(k)} (H)=0$. Now we consider a (non-trivial) geodesic $H_t$ starting at $H_0$ with eigenvalues $(\mu_i^{(k)})$. Then our normalization condition implies that $\mu_{max}:=\underset{i}{\max}(\mu_i^{(k)})$ is positive. Thus, computing in a similar way to \cite[Proposition 3]{Don09}, we have \[ (\epsilon_k J^{(k)}-{\mathcal D}_{g_{V_{KS}}}^{(k)}+{\mathcal D}_{g_{V_k}}^{(k)})(H_t) = \epsilon_k {\mathcal L}_{\mu_0} (FS_k(H_t)) \geq \epsilon_k \mu_{max} t + \mathrm{const} \rightarrow \infty \] as $t \rightarrow \infty$. Hence the functional $\epsilon_k J^{(k)}-{\mathcal D}_{g_{V_{KS}}}^{(k)}+{\mathcal D}_{g_{V_k}}^{(k)}$ is coercive. To get the second assertion $\epsilon_k' \rightarrow 0$, we use the $g$-analogue of the calculation techniques developed in \cite[Section 7]{BBGZ13}. In what follows all $O(1)$ and $o(1)$ are meant to hold uniformly with respect to $H \in {\mathcal H}_k^{T_c}$ as $k \rightarrow \infty$. We use the normalization \[ {\mathcal L}_{\mu_0} (FS_k( H)) = 0 \] so that \begin{equation} \sup_X (FS_k (H) - \phi_0) \leq O(1), \label{eq:3.15} \end{equation} and a reference point $\tilde{H}_0:= Hilb_{k, \mu_0, \epsilon_k + g_{V_{KS}} - g_{V_k}} (\phi_0)$. Let $H_t$ be a geodesic joining $\tilde{H}_0$ to $H:=H_1 \in {\mathcal H}_k^{T_c}$ and put \[ v(H):= \left. \frac{\partial}{\partial t} \right|_{t=0} FS_k (H_t). \] Then we have the formula \begin{equation} {\mathcal E}_{\epsilon_k + g_{V_{KS}} - g_{V_k}}^{(k)} (H)= \int_X v(H) \beta_{k, \mu_0, \epsilon_k + g_{V_{KS}} - g_{V_k}} (\phi_0) + o(1), \label{eq:3.16} \end{equation} where the error term $o(1)$ in the RHS comes from the change of base points from $H_0$ to $\tilde{H}_0$.
Then $v(H)$ is estimated as \begin{eqnarray*} v(H) &\leq& FS_k (H) - FS_k (\tilde{H}_0) \;\;\; (\text{by the convexity of $FS_k (H_t)$}) \\ &=& FS_k (H) - FS_k (Hilb_{k, \mu_0, \epsilon_k + g_{V_{KS}} - g_{V_k}}(\phi_0)) \\ &=& FS_k (H) - \phi_0 - \frac{1}{k} \log \left( \frac{1}{N_k} \rho_{k, \mu_0, \epsilon_k + g_{V_{KS}} - g_{V_k}} (\phi_0) \right) \\ &\leq& FS_k (H) - \phi_0 -\frac{1}{k} \log \left( \frac{1}{N_k} 2^{-k} \rho_{k, \mu_0} (\phi_0) \right) \\ &\hbox{}& (\text{by the definition of $\epsilon_k$}) \\ &\leq& O(1) \;\;\; (\text{by \eqref{eq:3.15} and Proposition \ref{prop:3.2}}). \end{eqnarray*} On the other hand, by the uniform convergence $g_{V_k} \rightarrow g_{V_{KS}}$ and Proposition \ref{prop:3.2}, the positive measure $\beta_{k, \mu_0, \epsilon_k + g_{V_{KS}} - g_{V_k}} (\phi_0)$ converges uniformly to $0$ as $k \rightarrow \infty$. Hence we obtain \[ {\mathcal E}_{\epsilon_k + g_{V_{KS}} - g_{V_k}}^{(k)} (H) \leq \sup_X v(H) \int_X \beta_{k, \mu_0, \epsilon_k + g_{V_{KS}} - g_{V_k}} (\phi_0) + o(1) \leq \epsilon_k' \] for some positive numbers $\epsilon_k' \rightarrow 0$. Therefore \[ (\epsilon_k J^{(k)}-{\mathcal D}_{g_{V_{KS}}}^{(k)}+{\mathcal D}_{g_{V_k}}^{(k)}) (H) = - {\mathcal E}_{\epsilon_k + g_{V_{KS}} - g_{V_k}}^{(k)} (H) \geq - \epsilon_k'. \] The other inequality can be proved in a similar way. \end{proof} By Lemma \ref{lemm:3.5}, we have \begin{eqnarray*} {\mathcal D}_{g_{V_k}}^{(k)}(H) &=& {\mathcal D}_{g_{V_k}}^{(k)}(F^*H) \;\;\;(\text{because ${\rm Fut}_{V_k, k} \equiv 0$}) \\ &\geq& {\mathcal D}_{g_{V_{KS}}}^{(k)} (F^*H) -\epsilon_k J^{(k)} (F^*H)- \epsilon_k' \\ &\geq& \left( \frac{\delta''}{2} - \epsilon_k \right) J^{(k)} (F^*H) -2C-\epsilon_k'. \end{eqnarray*} Thus we have \begin{equation} {\mathcal D}_{g_{V_k}}^{(k)}(H) \geq \frac{\delta''}{3} \underset{F \in {\rm Aut}_0(X,V_{KS})}{\rm inf} J^{(k)} (F^*H) -3C \label{eq:3.17} \end{equation} for sufficiently large $k$.
Since $J^{(k)}$ is an exhaustion function on ${\mathcal H}_k^{T_c}/ {\mathbb R}$, we find that there exists a unique quantized K\"ahler-Ricci soliton $H_k$ at level $k$ up to the action of ${\rm Aut}_0(X,V_{KS})$ if $k$ is sufficiently large. We normalize $H_k$ so that the corresponding metric $\phi_k:=FS_k (H_k)$ minimizes $J$ on the corresponding ${\rm Aut}_0(X,V_{KS})$-orbit. Then the minimizing property of $H_k$ implies ${\mathcal D}_{g_{V_k}}^{(k)} (H_k) \leq {\mathcal D}_{g_{V_k}}^{(k)} (Hilb_{k, \mu_0} (\phi))$ for all $\phi \in {\mathcal H}(X,-K_X)^{T_c}$. Thus letting $k \rightarrow \infty$, we obtain \begin{equation} {\mathcal D}_{g_{V_k}}^{(k)} (H_k) \leq {\mathcal D}_{g_{V_{KS}}} (\phi) + \gamma_k \label{eq:3.18} \end{equation} for all $\phi \in {\mathcal H}(X,-K_X)^{T_c}$, where $\gamma_k= \gamma_k(\phi) \rightarrow 0$ is a sequence of constants depending on $\phi$. On the other hand, we have \begin{eqnarray*} {\mathcal D}_{g_{V_{KS}}} (\phi_k) &\leq& {\mathcal D}_{g_{V_{KS}}}^{(k)}(H_k) + \delta_k J^{(k)} (H_k) + \delta_k \;\;\; (\text{by Lemma \ref{lemm:3.4} and since $g_{V_{KS}}$ is bounded}) \\ &\leq& {\mathcal D}_{g_{V_k}}^{(k)}(H_k) + \delta_k' J^{(k)} (H_k) + \delta_k' \;\;\; (\text{by Lemma \ref{lemm:3.5}}), \end{eqnarray*} where $J^{(k)} (H_k)$ is bounded from above by \eqref{eq:3.17} and \eqref{eq:3.18}. Thus we have \[ \liminf_{k \rightarrow \infty} {\mathcal D}_{g_{V_{KS}}} (\phi_k) \leq {\mathcal D}_{g_{V_{KS}}} (\phi) \] for all $\phi \in {\mathcal H}(X,-K_X)^{T_c}$. Since the set ${\mathcal H}(X,-K_X)^{T_c}$ contains a K\"ahler-Ricci soliton with respect to $V_{KS}$, i.e., a minimizer of ${\mathcal D}_{g_{V_{KS}}}$ on ${\mathcal E}^1 (X, -K_X)^{T_{KS}}$, we have \[ \liminf_{k \rightarrow \infty} {\mathcal D}_{g_{V_{KS}}} (\phi_k) \leq \underset{\phi \in {\mathcal E}^1 (X, -K_X)^{T_{KS}}}{\inf} {\mathcal D}_{g_{V_{KS}}} (\phi). \] This yields that $\{\phi_k \}$ is a minimizing sequence of the functional ${\mathcal D}_{g_{V_{KS}}}$.
Since $J^{(k)}(H_k)$ is bounded, $J(\phi_k)$ is also bounded by Lemma \ref{lemm:3.4}. Thus $\{ \phi_k \}$ is contained in a compact sublevel set of $J$, and there exists a subsequence which converges to some metric $\phi_{\infty} \in {\mathcal E}^1 (X, -K_X)^{T_{KS}}$. Since ${\mathcal D}_{g_{V_{KS}}}$ is lower semi-continuous (cf. \cite[Lemma 6.4]{BBGZ13}, \cite[Proposition 2.15]{BN14}), we obtain \[ {\mathcal D}_{g_{V_{KS}}}(\phi_{\infty}) \leq \liminf_{k \rightarrow \infty} {\mathcal D}_{g_{V_{KS}}} (\phi_k) \leq \underset{\phi \in {\mathcal E}^1 (X, -K_X)^{T_{KS}}}{\inf} {\mathcal D}_{g_{V_{KS}}} (\phi), \] hence $\phi_{\infty}$ is a K\"ahler-Ricci soliton with respect to $V_{KS}$, which is smooth by the regularity theorem \cite[Theorem 1.3]{BN14}. The metric $\phi_{\infty}$ may depend on the choice of convergent subsequence. However, by our normalization of $\phi_k$, we know that $\phi_{\infty}$ minimizes the $J$-functional on the space of K\"ahler-Ricci solitons with respect to $V_{KS}$, which can be identified with the space ${\rm Aut}_0 (X, V_{KS}) \phi_{\infty}/K$, where $K$ is the stabilizer of $\phi_{\infty}$ (cf. \cite[Theorem 3.6]{BN14}). Since $J$ is strictly convex on ${\rm Aut}_0 (X, V_{KS}) \phi_{\infty}/K$ with respect to the natural Riemannian structure (where geodesics are one parameter subgroups), such a minimizer is uniquely determined. Therefore, the metric $\phi_{\infty}$ is, in fact, independent of the choice of subsequence, which yields that $\phi_k$ converges weakly to the K\"ahler-Ricci soliton $\phi_{\infty}$. This completes the proof. \end{proof}
The profane is ordinary reality, which is defined only in relation to the sacred: any reality that is not consecrated, any person who is not initiated, the uninformed. Presentation Etymology The word "profane" comes directly from the Latin profanum: from pro, "in front of", and fanum, "consecrated place". That which is not sacred The notion of the profane is defined in opposition to that of the sacred: it is everything devoid of religious or sacred character, everything belonging to the temporal domain as opposed to the spiritual one. It is defined within a human group founded on an initiation or a revelation: for example in religion, where the sacred designates everything relating to the divine, the rest of the world thereby being determined as profane. The act of introducing one or more elements of the profane order into a consecrated enclosure (whether real or symbolic) is called profanation, which is a sacrilege for the believer in that sacred. Any uninitiated individual One also calls profane an individual who does not belong to an initiatory group (for example Freemasonry), or who does not know its founding revelation (for example a non-believer, from a believer's point of view), that is, who is not initiated. Perspectives By extension, the term more generally designates a person who is not informed about a fact or a practice, a novice. Giorgio Agamben gives a more political conception of the concept of the profane: "to profane is to restore to common use what has been separated into the sphere of the sacred".
Q: How to convert a MySQL table to JSON format for SimpleDB Below are sample records that I would normally insert into MySQL. I can then do the regular types of queries using SQL. Note that I will have datetime in 5 minute intervals.

datetime           account_id  country  zip    count
2012-04-27 03:40   1234        69       91845  234
2012-04-27 03:45   3432        43       91813  212

I will be using SimpleDB with the Python boto API. Given that it's a key-value data store where the values can be stored as dictionary/JSON type objects, what is the proper structure to store the data so I can query? E.g. select sum(count) group by country.

A: SimpleDB only really supports count(*) aggregation, not sum. You would either 1) need to do some Hadoop processing to aggregate the results and return a result, or 2) store and increment your aggregates in a separate document. (I generally either add the logic close to my repository, like in an update method, or, for documents that require faster update/get routines, add a message to Amazon SQS and then recalculate those aggregates in a background service.)
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,538
#import <UIKit/UIKit.h> #import "UAAnalytics+Internal.h" #import "UAEvent+Internal.h" #import "UAirship.h" #import "UAAnalyticsDBManager.h" #import "UALocationEvent.h" #import "UAUser.h" #import "UAConfig.h" #import "UAHTTPConnectionOperation.h" #import "UADelayOperation.h" #import "UAInboxUtils.h" #import "NSJSONSerialization+UAAdditions.h" #import "UAPush+Internal.h" #import "UAUtils.h" #import "UAEventAppBackground.h" #import "UAEventPushReceived.h" #import "UAEventAppBackground.h" #import "UAEventAppForeground.h" #import "UAPreferenceDataStore.h" #import "UALocationService.h" #import "UARegionEvent+Internal.h" typedef void (^UAAnalyticsUploadCompletionBlock)(void); #define kUALocationPermissionSystemLocationDisabled @"SYSTEM_LOCATION_DISABLED"; #define kUALocationPermissionNotAllowed @"NOT_ALLOWED"; #define kUALocationPermissionAlwaysAllowed @"ALWAYS_ALLOWED"; #define kUALocationPermissionForegroundAllowed @"FOREGROUND_ALLOWED"; #define kUALocationPermissionUnprompted @"UNPROMPTED"; @implementation UAAnalytics #pragma mark - #pragma mark Object Lifecycle - (void)dealloc { [[NSNotificationCenter defaultCenter] removeObserver:self]; [self stopSends]; } - (instancetype)initWithConfig:(UAConfig *)airshipConfig dataStore:(UAPreferenceDataStore *)dataStore { self = [super init]; if (self) { self.analyticDBManager = [[UAAnalyticsDBManager alloc] init]; // Set server to default if not specified in options self.config = airshipConfig; self.dataStore = dataStore; // Default analytics value if (![self.dataStore objectForKey:kUAAnalyticsEnabled]) { [self.dataStore setBool:YES forKey:kUAAnalyticsEnabled]; } [self resetEventsDatabaseStatus]; [self restoreSavedUploadEventSettings]; // Save defaults to store lastSendTime if this was an initial condition [self saveUploadEventSettings]; self.sendQueue = [[NSOperationQueue alloc] init]; self.sendQueue.maxConcurrentOperationCount = 1; // Register for background notifications [[NSNotificationCenter defaultCenter] addObserver:self 
selector:@selector(enterBackground) name:UIApplicationDidEnterBackgroundNotification object:nil]; // Register for foreground notifications [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(enterForeground) name:UIApplicationWillEnterForegroundNotification object:nil]; // App inactive/active for incoming calls, notification center, and taskbar [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(didBecomeActive) name:UIApplicationDidBecomeActiveNotification object:nil]; [self startSession]; // Set the initial delay if ([UIApplication sharedApplication].applicationState == UIApplicationStateBackground) { self.earliestInitialSendTime = [NSDate dateWithTimeIntervalSinceNow:kInitialBackgroundBatchWaitSeconds]; } else { self.earliestInitialSendTime = [NSDate dateWithTimeIntervalSinceNow:kInitialForegroundBatchWaitSeconds]; } // Schedule a send [self sendWithDelay:self.timeToWaitBeforeSendingNextBatch]; } return self; } + (instancetype)analyticsWithConfig:(UAConfig *)config dataStore:(UAPreferenceDataStore *)dataStore { return [[UAAnalytics alloc] initWithConfig:config dataStore:dataStore]; } #pragma mark - #pragma mark Application State - (void)enterForeground { UA_LTRACE(@"Enter Foreground."); // do not send the foreground event yet, as we are not actually in the foreground // (we are merely in the process of foregrounding) // set this flag so that the event will be sent as soon as the app is active. 
self.isEnteringForeground = YES; } - (void)enterBackground { UA_LTRACE(@"Enter Background."); // add app_background event [self addEvent:[UAEventAppBackground event]]; // Send immediately so we can end our background tasks as soon as possible [self sendWithDelay:0]; self.notificationUserInfo = nil; [self clearSession]; } - (void)didBecomeActive { UA_LTRACE(@"Application did become active."); // If this is the first 'inactive->active' transition in this session, // send if (self.isEnteringForeground) { self.isEnteringForeground = NO; // Update the initial send delay self.earliestInitialSendTime = [NSDate dateWithTimeIntervalSinceNow:kInitialForegroundBatchWaitSeconds]; // Start a new session [self startSession]; //add app_foreground event [self addEvent:[UAEventAppForeground event]]; } } #pragma mark - #pragma mark Preferences - (NSDate *)lastSendTime { return [self.dataStore objectForKey:@"X-UA-Last-Send-Time"] ?: [NSDate distantPast]; } - (void)setLastSendTime:(NSDate *)lastSendTime { if (lastSendTime) { [self.dataStore setObject:lastSendTime forKey:@"X-UA-Last-Send-Time"]; } } - (void)restoreSavedUploadEventSettings { // If the key is missing the int will end up being 0 and the values will clamp to their lower end. 
self.maxTotalDBSize = (NSUInteger)[self.dataStore integerForKey:kMaxTotalDBSizeUserDefaultsKey]; self.maxBatchSize = (NSUInteger)[self.dataStore integerForKey:kMaxBatchSizeUserDefaultsKey]; self.maxWait = (NSUInteger)[self.dataStore integerForKey:kMaxWaitUserDefaultsKey]; self.minBatchInterval = (NSUInteger)[self.dataStore integerForKey:kMinBatchIntervalUserDefaultsKey]; } - (void)saveUploadEventSettings { [self.dataStore setInteger:(NSInteger)self.maxTotalDBSize forKey:kMaxTotalDBSizeUserDefaultsKey]; [self.dataStore setInteger:(NSInteger)self.maxBatchSize forKey:kMaxBatchSizeUserDefaultsKey]; [self.dataStore setInteger:(NSInteger)self.maxWait forKey:kMaxWaitUserDefaultsKey]; [self.dataStore setInteger:(NSInteger)self.minBatchInterval forKey:kMinBatchIntervalUserDefaultsKey]; } #pragma mark - #pragma mark Analytics - (void)handleNotification:(NSDictionary*)userInfo inApplicationState:(UIApplicationState)applicationState { switch (applicationState) { case UIApplicationStateActive: [self addEvent:[UAEventPushReceived eventWithNotification:userInfo]]; break; case UIApplicationStateInactive: self.notificationUserInfo = userInfo; break; case UIApplicationStateBackground: break; } } - (void)addEvent:(UAEvent *)event { if (!event.isValid) { UA_LWARN(@"Dropping invalid event %@.", event); return; } if (!self.isEnabled) { UA_LTRACE(@"Analytics disabled, ignoring event: %@", event.eventType); return; } UA_LDEBUG(@"Adding %@ event %@.", event.eventType, event.eventID); [self.analyticDBManager addEvent:event withSessionID:self.sessionID]; UA_LTRACE(@"Added: %@.", event); self.databaseSize += event.estimatedSize; if (self.oldestEventTime == 0) { self.oldestEventTime = [event.time doubleValue]; } switch (event.priority) { case UAEventPriorityHigh: [self sendWithDelay:kHighPriorityBatchWaitSeconds]; break; case UAEventPriorityNormal: [self sendWithDelay:self.timeToWaitBeforeSendingNextBatch]; break; case UAEventPriorityLow: if ([[UIApplication sharedApplication] 
applicationState] == UIApplicationStateBackground) { NSTimeInterval timeSinceLastSend = [[NSDate date] timeIntervalSinceDate:self.lastSendTime]; if (timeSinceLastSend >= kMinBackgroundLowPriorityEventSendIntervalSeconds) { [self sendWithDelay:0]; } else { UA_LTRACE("Skipping low priority event send."); } } else { [self sendWithDelay:self.timeToWaitBeforeSendingNextBatch]; } break; } } // We send headers on all response codes, so let's set those values before checking for != 200 // NOTE: NSURLHTTPResponse converts header names to title case, so use the X-Ua-Header-Name format - (void)updateAnalyticsParametersWithHeaderValues:(NSHTTPURLResponse *)response { if (![response allHeaderFields]) { return; } id maxTotalValue = [[response allHeaderFields] objectForKey:@"X-UA-Max-Total"]; if (maxTotalValue) { self.maxTotalDBSize = (NSUInteger)[maxTotalValue integerValue] * 1024; //value returned in KB; } id maxBatchValue = [[response allHeaderFields] objectForKey:@"X-UA-Max-Batch"]; if (maxBatchValue) { self.maxBatchSize = (NSUInteger)[maxBatchValue integerValue] * 1024; //value return in KB } id maxWaitValue = [[response allHeaderFields] objectForKey:@"X-UA-Max-Wait"]; if (maxWaitValue) { self.maxWait = (NSUInteger)[maxWaitValue integerValue]; } id minBatchValue = [[response allHeaderFields] objectForKey:@"X-UA-Min-Batch-Interval"]; if (minBatchValue) { self.minBatchInterval = (NSUInteger)[minBatchValue integerValue]; } [self saveUploadEventSettings]; } #pragma mark - #pragma mark Custom Property Setters - (void)setMaxTotalDBSize:(NSUInteger)maxTotalDBSize { if (maxTotalDBSize < kMinTotalDBSizeBytes) { _maxTotalDBSize = kMinTotalDBSizeBytes; }else if (maxTotalDBSize > kMaxTotalDBSizeBytes) { _maxTotalDBSize = kMaxTotalDBSizeBytes; } else { _maxTotalDBSize = maxTotalDBSize; } } - (void)setMaxBatchSize:(NSUInteger)maxBatchSize { if (maxBatchSize < kMinBatchSizeBytes) { _maxBatchSize = kMinBatchSizeBytes; }else if (maxBatchSize > kMaxBatchSizeBytes) { _maxBatchSize = 
kMaxBatchSizeBytes; } else { _maxBatchSize = maxBatchSize; } } - (void)setMaxWait:(NSUInteger)maxWait { if (maxWait < kMinWaitSeconds) { _maxWait = kMinWaitSeconds; }else if (maxWait > kMaxWaitSeconds) { _maxWait = kMaxWaitSeconds; } else { _maxWait = maxWait; } } - (void)setMinBatchInterval:(NSUInteger)minBatchInterval { if (minBatchInterval < kMinBatchIntervalSeconds) { _minBatchInterval = kMinBatchIntervalSeconds; }else if (minBatchInterval > kMaxBatchIntervalSeconds) { _minBatchInterval = kMaxBatchIntervalSeconds; } else { _minBatchInterval = minBatchInterval; } } #pragma mark - #pragma mark Send Logic - (void)resetEventsDatabaseStatus { self.databaseSize = [self.analyticDBManager sizeInBytes]; NSArray *events = [self.analyticDBManager getEvents:1]; if ([events count] > 0) { NSDictionary *event = [events objectAtIndex:0]; self.oldestEventTime = [[event objectForKey:@"time"] doubleValue]; } else { self.oldestEventTime = 0; } UA_LTRACE(@"Database size: %ld", (long)self.databaseSize); UA_LTRACE(@"Oldest Event: %f", self.oldestEventTime); } - (BOOL)hasEventsToSend { return self.databaseSize > 0 && [self.analyticDBManager eventCount] > 0; } - (UAHTTPRequest*)analyticsRequest { NSString *urlString = [NSString stringWithFormat:@"%@%@", self.config.analyticsURL, @"/warp9/"]; UAHTTPRequest *request = [UAHTTPRequest requestWithURLString:urlString]; request.compressBody = YES;//enable GZIP request.HTTPMethod = @"POST"; // Required Items [request addRequestHeader:@"X-UA-Device-Family" value:[UIDevice currentDevice].systemName]; [request addRequestHeader:@"X-UA-Sent-At" value:[NSString stringWithFormat:@"%f",[[NSDate date] timeIntervalSince1970]]]; [request addRequestHeader:@"X-UA-Package-Name" value:[[[NSBundle mainBundle] infoDictionary] objectForKey:(id)kCFBundleIdentifierKey]]; [request addRequestHeader:@"X-UA-Package-Version" value:[[[NSBundle mainBundle] infoDictionary] objectForKey:(id)kCFBundleVersionKey] ?: @""]; [request addRequestHeader:@"X-UA-ID" value:[UAUtils 
deviceID]]; [request addRequestHeader:@"X-UA-User-ID" value:[UAirship inboxUser].username]; [request addRequestHeader:@"X-UA-App-Key" value:[UAirship shared].config.appKey]; [request addRequestHeader:@"X-UA-Channel-Opted-In" value:[[UAirship push] userPushNotificationsAllowed] ? @"true" : @"false"]; [request addRequestHeader:@"X-UA-Channel-Background-Enabled" value:[[UAirship push] backgroundPushNotificationsAllowed] ? @"true" : @"false"]; // Optional Items [request addRequestHeader:@"X-UA-Lib-Version" value:[UAirshipVersion get]]; [request addRequestHeader:@"X-UA-Device-Model" value:[UAUtils deviceModelName]]; [request addRequestHeader:@"X-UA-OS-Version" value:[[UIDevice currentDevice] systemVersion]]; [request addRequestHeader:@"Content-Type" value: @"application/json"]; [request addRequestHeader:@"X-UA-Timezone" value:[[NSTimeZone defaultTimeZone] name]]; [request addRequestHeader:@"X-UA-Locale-Language" value:[[NSLocale autoupdatingCurrentLocale] objectForKey:NSLocaleLanguageCode]]; [request addRequestHeader:@"X-UA-Locale-Country" value:[[NSLocale autoupdatingCurrentLocale] objectForKey: NSLocaleCountryCode]]; [request addRequestHeader:@"X-UA-Locale-Variant" value:[[NSLocale autoupdatingCurrentLocale] objectForKey: NSLocaleVariantCode]]; [request addRequestHeader:@"X-UA-Push-Address" value:[UAirship push].deviceToken]; [request addRequestHeader:@"X-UA-Channel-ID" value:[UAirship push].channelID]; [request addRequestHeader:@"X-UA-Location-Permission" value:[self locationPermission]]; [request addRequestHeader:@"X-UA-Location-Service-Enabled" value:[UALocationService airshipLocationServiceEnabled] ? 
@"true" : @"false"]; return request; } -(void)invalidateTimer { if (!self.sendTimer.isValid) { return; } if (self.sendTimer.userInfo) { UIBackgroundTaskIdentifier backgroundTask = [self.sendTimer.userInfo unsignedIntegerValue]; [[UIApplication sharedApplication] endBackgroundTask:backgroundTask]; } [self.sendTimer invalidate]; self.sendTimer = nil; } - (void)stopSends { [self.sendQueue cancelAllOperations]; [self invalidateTimer]; } - (void)pruneEvents { // Delete older events until the database size is met while (self.databaseSize > self.maxTotalDBSize) { UA_LTRACE(@"Database exceeds max size of %ld bytes... Deleting oldest session.", (long)self.maxTotalDBSize); [self.analyticDBManager deleteOldestSession]; [self resetEventsDatabaseStatus]; } } - (BOOL)isEventValid:(NSMutableDictionary *)event { return [[event objectForKey:@"event_size"] respondsToSelector:NSSelectorFromString(@"intValue")] && [[event objectForKey:@"data"] isKindOfClass:[NSData class]] && [[event objectForKey:@"session_id"] isKindOfClass:[NSString class]] && [[event objectForKey:@"type"] isKindOfClass:[NSString class]] && [[event objectForKey:@"time"] isKindOfClass:[NSString class]] && [[event objectForKey:@"event_id"] isKindOfClass:[NSString class]]; } // Clean up event data for sending. // Enforce max batch limits // Loop through events and discard DB-only items, format the JSON data field // as a dictionary - (NSArray *)prepareEventsForUpload { [self pruneEvents]; if (![self hasEventsToSend]) { return nil; } NSUInteger avgEventSize = self.databaseSize / [self.analyticDBManager eventCount]; int actualSize = 0; NSUInteger batchEventCount = 0; NSArray *events = [self.analyticDBManager getEvents:self.maxBatchSize/avgEventSize]; NSArray *topLevelKeys = @[@"type", @"time", @"event_id", @"data"]; for (NSMutableDictionary *event in events) { if (![self isEventValid:event]) { UA_LERR("Detected invalid event due to possible database corruption. 
Recreating database"); [self.analyticDBManager resetDB]; return nil; } actualSize += [[event objectForKey:@"event_size"] intValue]; if (actualSize <= self.maxBatchSize) { batchEventCount++; } else { UA_LTRACE(@"Met batch limit."); break; } // The event data returned by the DB is a binary plist. Deserialize now. NSMutableDictionary *eventData = nil; NSData *serializedEventData = (NSData *)[event objectForKey:@"data"]; if (serializedEventData) { NSError *err; eventData = (NSMutableDictionary *)[NSPropertyListSerialization propertyListWithData:serializedEventData options:NSPropertyListMutableContainersAndLeaves format:NULL /* an out param */ error:&err]; if (err) { UA_LTRACE(@"Deserialization Error: %@", [err localizedDescription]); } } // Always include a data entry, even if it is empty if (!eventData) { eventData = [[NSMutableDictionary alloc] init]; } [eventData setValue:[event objectForKey:@"session_id"] forKey:@"session_id"]; [event setValue:eventData forKey:@"data"]; // Remove unused DB values for (NSString *key in [event allKeys]) { if (![topLevelKeys containsObject:key]) { [event removeObjectForKey:key]; } } } if (batchEventCount < [events count]) { events = [events subarrayWithRange:NSMakeRange(0, batchEventCount)]; } return events; } - (NSTimeInterval)timeToWaitBeforeSendingNextBatch { NSTimeInterval delay = 0; NSTimeInterval timeSinceLastSend = [[NSDate date] timeIntervalSinceDate:self.lastSendTime]; if (timeSinceLastSend < self.minBatchInterval) { delay = self.minBatchInterval - timeSinceLastSend; } // Now worry about initial delay NSTimeInterval initialDelayRemaining = [self.earliestInitialSendTime timeIntervalSinceNow]; return MAX(delay, initialDelayRemaining); } - (NSOperation *)uploadOperationWithEvents:(NSArray *)events { UAHTTPRequest *analyticsRequest = [self analyticsRequest]; [analyticsRequest appendBodyData:[NSJSONSerialization dataWithJSONObject:events options:0 error:nil]]; UA_LTRACE(@"Sending to server: %@", self.config.analyticsURL); 
UA_LTRACE(@"Sending analytics headers: %@", [analyticsRequest.headers descriptionWithLocale:nil indent:1]); UA_LTRACE(@"Sending analytics body: %@", [NSJSONSerialization stringWithObject:events options:NSJSONWritingPrettyPrinted]); UAHTTPConnectionSuccessBlock successBlock = ^(UAHTTPRequest *request){ UA_LDEBUG(@"Analytics data sent successfully. Status: %ld", (long)[request.response statusCode]); UA_LTRACE(@"responseData=%@, length=%lu", request.responseString, (unsigned long)[request.responseData length]); // Update analytics settings with new header values [self updateAnalyticsParametersWithHeaderValues:request.response]; if ([request.response statusCode] == 200) { [self.analyticDBManager deleteEvents:events]; [self resetEventsDatabaseStatus]; } else { UA_LTRACE(@"Analytics upload request failed: %ld", (long)[request.response statusCode]); } }; UAHTTPConnectionFailureBlock failureBlock = ^(UAHTTPRequest *request){ UA_LTRACE(@"Analytics upload request failed."); }; UAHTTPConnectionOperation *operation = [UAHTTPConnectionOperation operationWithRequest:analyticsRequest onSuccess:successBlock onFailure:failureBlock]; return operation; } // Adds event upload operation to the sendQueue. 
- (NSOperation *)queryOperationWithCompletionBlock:(UAAnalyticsUploadCompletionBlock)completionBlock { __block NSOperation *operation = [NSBlockOperation blockOperationWithBlock:^{ NSArray *events = [self prepareEventsForUpload]; // Check for empty events if (!events.count) { UA_LTRACE(@"No events to upload, skipping sendOperation."); return; } UA_LTRACE(@"Analytics upload in progress."); self.lastSendTime = [NSDate date]; NSOperation *networkOperation = [self uploadOperationWithEvents:events]; // Set the priority higher than the _outer_ send operation (normal priority) networkOperation.queuePriority = NSOperationQueuePriorityHigh; // Transfer to the completion block to the network operation networkOperation.completionBlock = completionBlock; operation.completionBlock = nil; [self.sendQueue addOperation:networkOperation]; }]; operation.completionBlock = completionBlock; return operation; } - (void)sendWithDelay:(NSTimeInterval)delay { UA_LTRACE(@"Attempting to send update."); if (![UAirship push].channelID) { UA_LTRACE("No channel ID, skipping send."); return; } if (!self.config.analyticsEnabled) { UA_LTRACE("Analytics disabled."); return; } if (![self hasEventsToSend]) { UA_LTRACE(@"No analytics events to upload."); return; } if (self.sendTimer.isValid && [self.sendTimer.fireDate compare:[NSDate dateWithTimeIntervalSinceNow:delay]] == NSOrderedAscending) { UA_LTRACE("Upload already scheduled with delay less than %f seconds.", delay); return; } __block UIBackgroundTaskIdentifier backgroundTask = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{ UA_LTRACE(@"Analytics background task expired."); [self stopSends]; if (backgroundTask != UIBackgroundTaskInvalid) { [[UIApplication sharedApplication] endBackgroundTask:backgroundTask]; backgroundTask = UIBackgroundTaskInvalid; } }]; // Invalidate the timer after creating a new background task [self invalidateTimer]; if (backgroundTask == UIBackgroundTaskInvalid) { UA_LTRACE("Background task 
unavailable, skipping analytics"); return; } if (delay) { UA_LTRACE(@"Analytics data scheduled to send in %f seconds.", delay); self.sendTimer = [NSTimer timerWithTimeInterval:delay target:self selector:@selector(sendTimerFired:) userInfo:@(backgroundTask) repeats:NO]; [[NSRunLoop mainRunLoop] addTimer:self.sendTimer forMode:NSDefaultRunLoopMode]; } else { [self enqueueSendOperationWithBackgroundTaskIdentifier:backgroundTask]; } } // Enqueues another send operation with the timer's background task - (void)sendTimerFired:(NSTimer *)timer { UIBackgroundTaskIdentifier backgroundTask = [timer.userInfo unsignedIntegerValue]; [self enqueueSendOperationWithBackgroundTaskIdentifier:backgroundTask]; [timer invalidate]; if (self.sendTimer == timer) { self.sendTimer = nil; } } - (void)enqueueSendOperationWithBackgroundTaskIdentifier:(UIBackgroundTaskIdentifier)backgroundTask { NSOperation *sendOperation = [self queryOperationWithCompletionBlock:^{ UA_LTRACE(@"Analytics data send completed with background task: %lu", (unsigned long)backgroundTask); if (backgroundTask != UIBackgroundTaskInvalid) { [[UIApplication sharedApplication] endBackgroundTask:backgroundTask]; } }]; [self.sendQueue addOperation:sendOperation]; } - (void)launchedFromNotification:(NSDictionary *)notification { self.notificationUserInfo = notification; [self startSession]; } - (void)clearSession { self.sessionID = @""; self.conversionSendID = nil; self.conversionRichPushID = nil; } - (void)startSession { [self clearSession]; self.sessionID = [NSUUID UUID].UUIDString; if (self.notificationUserInfo) { // If the server did not send a push ID (likely because the payload did not have room) // generate an ID for the server to use self.conversionSendID = [self.notificationUserInfo objectForKey:@"_"] ?: [NSUUID UUID].UUIDString; NSString *richPushID = [UAInboxUtils inboxMessageIDFromNotification:self.notificationUserInfo]; if (richPushID) { self.conversionRichPushID = richPushID; } } } - (BOOL)isEnabled { return 
[self.dataStore boolForKey:kUAAnalyticsEnabled] && self.config.analyticsEnabled; } - (void)setEnabled:(BOOL)enabled { // If we are disabling the runtime flag clear all events if ([self.dataStore boolForKey:kUAAnalyticsEnabled] && !enabled) { UA_LINFO(@"Deleting all analytics events."); [self stopSends]; [self.analyticDBManager resetDB]; } [self.dataStore setBool:enabled forKey:kUAAnalyticsEnabled]; } - (NSString *)locationPermission { if (![CLLocationManager locationServicesEnabled]) { return kUALocationPermissionSystemLocationDisabled; } else { switch ([CLLocationManager authorizationStatus]) { case kCLAuthorizationStatusDenied: case kCLAuthorizationStatusRestricted: return kUALocationPermissionNotAllowed; case kCLAuthorizationStatusAuthorizedAlways: return kUALocationPermissionAlwaysAllowed; case kCLAuthorizationStatusAuthorizedWhenInUse: return kUALocationPermissionForegroundAllowed; case kCLAuthorizationStatusNotDetermined: return kUALocationPermissionUnprompted; } } } @end
\section{} Graphene has become a promising material for future electronics. The special electronic and magnetic properties of graphene are related to its low-dimensional carbon structures and have aroused enormous interest.\cite{geim:nmat,geim:science,castro:rmp} Since pure graphene is diamagnetic, introducing magnetic properties into graphene-related systems has attracted theoretical attention.\cite{mcclure:pr,brey:prl,uchoa:prl} By doping magnetic transition-metal elements on perfect graphene or at graphene vacancies one can introduce a magnetic moment into the graphene system.\cite{krasheninnikov:prl,santos:prb,sevincli:prb} Adsorbing small covalent molecules on perfect graphene can also change the total magnetic moment through charge transfer between the adsorbates and graphene.\cite{leenaerts:apl,leenaerts:prb,wehling:nanolett} It is notable that these covalent molecules stay at least 3 \AA\ away from the graphene layer, so the binding energies are small. Typically, the transferred charges are less than 0.1 e$^-$. Thus, the induced change of the magnetic moment is correspondingly small.\cite{leenaerts:apl,leenaerts:prb} It is highly desirable to find an adsorbate that adsorbs on graphene firmly and induces a large charge transfer or a remarkable magnetic moment. With density functional theory calculations, we found that a large local magnetic moment can be produced spontaneously by adsorbing the diamagnetic beryllium dimer on perfect diamagnetic graphene. Here we report this interesting finding. All the calculations were performed with the plane-wave-based VASP code\cite{kresse:prb48,kresse:prb50,kresse:cms,kresse:prb54} using the projector-augmented wave method\cite{blochl:prb,kresse:prb59} and the Perdew-Burke-Ernzerhof generalized gradient approximation\cite{perdew:prl}. The supercell is composed of $4\times4$ unit cells with a distance of 15 \AA\ between adjacent graphene layers. 
For the integration over the Brillouin zone, we combined $5\times5\times1$ Monkhorst-Pack grids\cite{monkhorst:prb} with the generalized Methfessel-Paxton smearing technique\cite{methfessel:prb} to optimize the adsorption geometries. Charge transfers were calculated with the Bader charge analysis.\cite{bader:aqc,tang:jpm} For accurate charge and energy calculations, a finer $11\times11\times1$ Monkhorst-Pack grid with the tetrahedron method with Bl\"{o}chl corrections was used. An energy cutoff of 600 eV was used throughout the calculations. The beryllium dimer is the simplest metal dimer apart from the lithium dimer. It has eight electrons in total, and the electronic configuration of its ground state is $(1\sigma)^2(1\sigma^*)^2(2\sigma)^2(2\sigma^*)^2$. The ground state of Be$_2$ is a singlet since its electronic configuration is closed-shell. According to old-fashioned textbooks, the Be$_2$ molecule cannot exist because the fully occupied anti-bonding $2\sigma^*$ orbital energetically counteracts the fully occupied bonding $2\sigma$ orbital. Since the 1980s, theoretical studies have gradually indicated that a stable Be$_2$ with a short bond length exists once excited determinants are mixed in.\cite{roeggen:ijqc60,roeggen:ijqc101} Recent experimental work determined the potential energy curve of Be$_2$ and found that it looks like a normal covalent molecule of weak bond strength, with a bond distance of 2.45 \AA.\cite{merritt:science} Our calculated bond length of the beryllium dimer is 2.43 \AA, in nice agreement with the experimental value. Fig. \ref{F1}(a) shows the band structure of pure graphene. For comparison, the relative energy levels of the isolated Be 2s, 2p atomic orbitals and the Be$_2$ $2\sigma,\ 2\sigma^*,\ 1\pi$ molecular orbitals are also presented in Fig. \ref{F1}(b). The fully occupied $2\sigma^*$ orbital is 0.33 eV lower than the Dirac point of graphene, while the empty $1\pi$ orbital is 1.30 eV higher than the Dirac point. 
Such an energy level ordering should prevent charge transfer from Be$_2$ to graphene or vice versa. In fact, in the physisorption of diamagnetic covalent molecules such as H$_2$O, NH$_3$ and CO, the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) of the adsorption system are separated by the Fermi level, so the total charge transfer is small.\cite{leenaerts:prb} \begin{figure}[htbp] \includegraphics[scale=0.3]{F1}% \caption{\label{F1}(a) The band structure of graphene. (b) The energy levels of Be 2s, 2p atomic orbitals and Be$_2\ 2\sigma,\ 2\sigma^*,\ 1\pi$ molecular orbitals.} \end{figure} Now we study the beryllium dimer adsorbed on graphene. Our calculations revealed that although a single beryllium atom can hardly adsorb on graphene, the beryllium dimer does adsorb on graphene. We have investigated several adsorption configurations. The most stable adsorption geometry is shown in Fig. \ref{F2}. The Be-Be bond length on graphene is shortened by 0.32 \AA\ compared with that in the gas phase. The perpendicular distance to the graphene layer is 1.49 \AA\ (Fig. \ref{F2}), which is significantly shorter than those for other molecules doped on graphene.\cite{leenaerts:apl,leenaerts:prb,wehling:nanolett,zanella:prb} The binding energy (BE) of the dimer is unexpectedly as high as 1.04 eV ($BE=E_{Be_{2}}+E_{graphene}-E_{Be_{2}-graphene}$). Apart from the large binding energy, it is also remarkable that when the singlet diamagnetic beryllium dimer adsorbs on diamagnetic graphene, the Be$_2$-graphene adsorption system acquires a magnetic moment of 1 $\mu_B$ per supercell. The emergence of a magnetic moment usually indicates spin-selective charge transfer, which is contrary to our previous expectation based on the energy spectra of the isolated beryllium dimer and graphene. \begin{figure}[htbp] \includegraphics[scale=0.4]{F2}% \caption{\label{F2}The stable adsorption of the beryllium dimer on the ($4\times4$) cell of graphene. 
The dimer is perpendicular to the graphene and adsorbs on the hollow site. The Be$_a$ atom is below the Be$_b$ atom. Labeling of atoms: C, gray spheres; Be, yellow spheres.} \end{figure} The band structure of the Be$_2$-graphene system is illustrated in Fig. \ref{F3}. The upshift of the Fermi level relative to the Dirac point results from the charge transfer from the Be$_2$ molecule to graphene. One can also find the splitting of the $2\sigma$ and $2\sigma^*$ spin orbitals of Be$_2$ (see Fig. \ref{F3}). The $2\sigma$ spin orbitals are still lower than the Dirac point, as in the gas phase, but the $2\sigma^*$ spin orbitals are pushed higher than the Dirac point. Fig. \ref{F3}(a) shows that the spin-up component of the $2\sigma^*$ orbital is below the Fermi level, while the spin-down component of the $2\sigma^*$ orbital in Fig. \ref{F3}(b) is higher than the Fermi level. In particular, the spin-down component of the $2\sigma^*$ orbital is much higher than the Dirac point. Thus it is the spin-down electron in the $2\sigma^*$ orbital that is transferred to the conduction band of graphene spontaneously. In Fig. \ref{F4}(a) we plot the total density of states (DOS) of the two beryllium atoms and the total DOS of the six carbon atoms around the hollow site. It is not difficult to verify that the splitting of the $2\sigma$ spin orbitals is about 0.35 eV, that the spin-up component of the $2\sigma^*$ orbital is lower than the Fermi level by 0.27 eV, and that the spin-down component of the $2\sigma^*$ orbital is higher than the Fermi level by 1.77 eV. Using Bader charge analysis we found that three valence electrons stay on Be$_2$ and that the total magnetic moment of 1 $\mu_B$ localizes on Be$_2$ in the Be$_2$-graphene system, which means that one spin-down electron is transferred to graphene entirely. \begin{figure}[htbp] \includegraphics[scale=0.3]{F3} \caption{\label{F3}The band structure of the Be$_2$-graphene system. (a) The spin-up component. (b) The spin-down component. 
The $2\sigma$ and $2\sigma^*$ orbitals of Be$_2$ are marked in green and yellow, respectively.} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.3]{F4} \caption{\label{F4}Comparison of local DOS and COOP. (a) Local DOS of the two beryllium atoms and the six carbon atoms around the adsorption site. (b) The COOP in red is that of the two beryllium atoms. The black one is the total COOP between Be$_a$ (see the caption of Fig. \ref{F2}) and the six carbon atoms around the adsorption site.} \end{figure} It is instructive to compare the adsorption of paramagnetic NO$_2$ and diamagnetic Be$_2$ on graphene, since the mechanisms of charge transfer differ for the two adsorbates. For paramagnetic NO$_2$ on graphene, it was argued that since the empty spin component of the partially occupied molecular orbital (POMO) of adsorbed NO$_2$ lies below the Dirac point in the adsorbed configuration, the charge transfer can reach one electron in the dilute limit.\cite{leenaerts:apl,wehling:nanolett,schedin:nmat} In an actual finite-size calculation, the empty POMO is crossed by the Fermi level, so only a fractional charge is transferred to the POMO from graphene and the total magnetic moment decreases.\cite{leenaerts:apl,leenaerts:prb} In the Be$_2$-graphene system, the Fermi level lies between the two distinctly split $2\sigma^*$ spin orbitals of Be$_2$ in the adsorbed configuration. Hence, one electron is transferred to graphene from the spin orbital above the Fermi level, and a magnetic moment of 1 $\mu_B$ is produced spontaneously.
A large binding energy of dopant adsorbates on graphene usually involves strong interaction between the adsorbate atoms and the carbon atoms of graphene, as in the adsorption of transition-metal elements.\cite{santos:prb} To check whether this is the case in the Be$_2$-graphene system, we also plot in Fig. \ref{F4}(b) the total crystal orbital overlap population (COOP) of adsorbed Be$_2$ and that between Be$_2$ and the six carbon atoms around the adsorption site. COOP reflects the nature and strength of the bonding or anti-bonding interaction between two bonded atoms more clearly than the DOS.\cite{hughbanks:jacs} The COOP plots are characterized by the $2\sigma$ and $2\sigma^*$ orbitals of Be$_2$. The $2\sigma^*$ spin orbitals are split by 2.04 eV, which coincides with the DOS analysis. From Fig. \ref{F4}(b) we can infer that the interaction of the Be$_2$ $2\sigma^*$ orbitals with the graphene carbons is anti-bonding and negligible. The adsorption of the beryllium dimer on graphene is thus essentially a kind of physisorption. But why is the calculated binding energy much larger than that of other small covalent molecules doped on graphene, whose binding energies are on the order of meV? This is because the electron transferred from Be$_2$ to graphene comes from the anti-bonding $2\sigma^*$ orbital. The removal of one electron from the anti-bonding orbital stabilizes Be$_2$ and yields the large binding energy. The shortening of the Be-Be bond length by 0.32 \AA\ provides strong support for this point of view. So far, the study presented here has considered only the ferromagnetic configuration of the Be$_2$-graphene system. In order to check the magnetic state of the adsorption structure, we doubled the geometry in one direction and set the initial magnetic moments of beryllium to be antiferromagnetic.
The calculated energies of both configurations are nearly equal, which means the supercell size in our calculations is large enough to neglect the magnetic coupling between Be$_2$ molecules in neighboring supercells. In addition, the spin-unpolarized configuration was also considered and found to be energetically unfavorable. In summary, we found that the beryllium dimer forms favorably on graphene. The physisorption of the diamagnetic beryllium dimer on pure graphene induces a charge transfer of one electron and generates a magnetic moment of 1 $\mu_B$. The magnitudes of the charge transfer and of the magnetic moment generated by doping the beryllium dimer are larger than those obtained by doping the previously reported molecules on graphene. Our study demonstrates that, even without transition-metal adatoms or defects in graphene, a large magnetic moment can be generated spontaneously from two diamagnetic materials. This opens up the possibility of introducing stable and observable magnetic properties, such as magnetic domains and magnetic order, into graphene directly by molecular doping. This work is supported by the National Basic Research Program of China (973 Program, 2007CB613301, 2007CB613305). The National Natural Science Foundation of China (No. 20973090) is gratefully acknowledged.
Q: SQL query to return data by "month, year"

I have data in the db stored in the following format.

id  period_start_date  period_end_date
1   1/1/17 0:00        1/31/2017 23:59
1   2/1/17 0:00        2/28/2017 23:59
1   3/1/17 0:00        3/31/2017 23:59
1   4/1/17 0:00        4/30/2017 23:59

and I want to get this data in the UI as a dropdown as

January 2017
February 2017
March 2017
April 2017

Can anyone please let me know if there is a way to fetch this data in the query in the above format.

A: Use to_char to get the month and extract for the year. You can't use one call to to_char() for month and year, because the month name is blank-padded. And your month is on the left, so you need to trim it.

SELECT trim(to_char(period_start_date, 'Month')) || ' ' || extract(year FROM period_start_date)
FROM ( VALUES ('1/1/17 0:00'::date) ) AS t(period_start_date);

UPDATE (to select from a table)

SELECT trim(to_char(period_start_date, 'Month')) || ' ' || extract(year FROM period_start_date)
FROM table;
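The same "Month YYYY" labels can also be produced in application code rather than in SQL. As a side illustration (not part of the original answer, and assuming the default English/C locale), Python's strftime has no padding problem because %B yields the bare month name:

```python
from datetime import date

def month_year_label(d: date) -> str:
    # %B gives the full month name without the blank padding
    # that to_char(..., 'Month') adds in Postgres.
    return d.strftime("%B %Y")

labels = [month_year_label(date(2017, m, 1)) for m in range(1, 5)]
print(labels)  # ['January 2017', 'February 2017', 'March 2017', 'April 2017']
```

Note that %B is locale-dependent, so this sketch assumes the process locale produces English month names.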
<?php namespace SaleBoss\Services\Leads\Presenter; interface ThrottleInterface { public function doCheck(); }
package io.bootique.resource; import io.bootique.BootiqueException; import java.io.File; import java.io.IOException; import java.net.URI; import java.net.URL; import java.util.Objects; /** * A value object representing a resource URL. Supports 3 common resource * representations: * <p> * <ul> * <li>resource as a URL using protocols recognized by Java (http:, https:, * jar:, file:, etc).</li> * <li>resource as URL with "classpath:" protocol that allows to identify * resources on classpath in a portable manner. E.g. the same URL would identify * the resource regardless of whether it is packaged in a jar or resides in a * source folder in an IDE.</li> * <li>resource as absolute or relative file path.</li> * </ul> * * @since 0.15 */ public class ResourceFactory { protected static final String CLASSPATH_URL_PREFIX = "classpath:"; protected String resourceId; /** * Creates a ResourceFactory passing it a String resource identifier. It can * be one of * <ul> * <li>resource as a URL using protocols recognized by Java (http:, https:, * jar:, file:, etc).</li> * <li>resource as URL with "classpath:" protocol that allows to identify * resources on classpath in a portable manner. E.g. the same URL would * identify the resource regardless of whether it is packaged in a jar or * resides in a source folder in an IDE.</li> * <li>resource as absolute or relative file path.</li> * </ul> * * @param resourceId a String identifier of the resource. */ public ResourceFactory(String resourceId) { this.resourceId = Objects.requireNonNull(resourceId); } /** * Returns a URL to access resource contents. * * @return a URL to access resource contents. */ public URL getUrl() { return resolveUrl(this.resourceId); } /** * Returns resource ID string used to initialize this ResourceFactory. * * @return resource ID string used to initialize this ResourceFactory. 
* @since 0.21 */ public String getResourceId() { return resourceId; } protected URL resolveUrl(String resourceId) { // resourceId can be either a file path or a URL or a classpath: URL if (resourceId.startsWith(CLASSPATH_URL_PREFIX)) { String path = resourceId.substring(CLASSPATH_URL_PREFIX.length()); // classpath URLs must not start with a slash. This does not work // with ClassLoader. if (path.length() > 0 && path.charAt(0) == '/') { throw new RuntimeException(CLASSPATH_URL_PREFIX + " URLs must not start with a slash: " + resourceId); } URL cpUrl = ResourceFactory.class.getClassLoader().getResource(path); if (cpUrl == null) { throw new IllegalArgumentException("Classpath URL not found: " + resourceId); } return cpUrl; } URI uri; try { uri = URI.create(resourceId); } catch (IllegalArgumentException e) { throw new BootiqueException(1, "Invalid config resource url: " + resourceId, e); } try { return uri.isAbsolute() ? uri.toURL() : getCanonicalFile(resourceId).toURI().toURL(); } catch (IOException e) { throw new BootiqueException(1, "Invalid config resource url: " + resourceId, e); } } /** * Converts resource ID to a canonical file. * * @return canonical file produced from resource id. * @throws IOException */ // using canonical file avoids downstream bugs like this: // https://github.com/bootique/bootique-jetty/issues/29 protected File getCanonicalFile(String resourceId) throws IOException { return new File(resourceId).getCanonicalFile(); } @Override public String toString() { return "ResourceFactory:" + resourceId; } }
Classical Music Performances Georgia Online

If you're the kind of person who finds yourself crooning to a Hindustani song, you can learn Indian classical music online.

iPalpiti soloists will perform four different concerts at the Encinitas Library, 540 Cornish Drive, with Davide de Ascaniis (Italy, violin) and Irakli Tsadaia.

The Classical period was an era of classical music between roughly 1730 and 1820, falling between the Baroque and the Romantic periods. Classical music has a lighter, clearer texture than Baroque music and is less complex. It is mainly homophonic, using a clear melody line over a subordinate chordal accompaniment, but counterpoint was by no means forgotten, especially later.

Take a trip through centuries of classical music with Bay of Plenty Symphonia's Miscellany.

World War I Armistice Series: "World War I and Classical Music," 6:30 p.m., Loveland Public Library. RSVP is required.

"I wondered if these old elephants might like to listen to some slow classical music."

An up-and-coming baritone singer alleges he was drugged and violently raped in 2010 by two of opera/classical music's shining stars following a performance.

"The Family Series is a great fit with our mission, offering an outdoor family concert, a children's musical and a beloved classical music performance," Brown added.

Events across the area will also commemorate Veterans Day, among them the 42nd annual Greek Festival.

The Levitt AMP program will provide $25,000 to up to 15 of the finalists, allowing them to offer a 10-week series of concerts featuring multiple styles of music, selected through an online voting process.

Talvik forms connections with her audience in the intimate settings of her performances by telling stories.

The audience got to its feet when he was done and tried mightily to get him to play an encore.

The music of Japan includes a wide array of performers in distinct styles, both traditional and modern. The word for "music" in Japanese is 音楽 (ongaku), combining the kanji 音 on (sound) with the kanji 楽 gaku (enjoy). Japan is the largest physical music market in the world, worth US$2 billion in sales in physical formats in 2014, and the second-largest overall music market.

At the University of Georgia, the performances are free or discounted for students.

A former Bethlehem College student is studying music and science at the University of Waikato; last year she won the University of Waikato Aria Competition.

There's techno, classical tunes, video game music, and modern and classic anime themes.

As the dynamic duo who animate the Black Violin brand, the gentlemen purposefully shatter stereotypes with an avant-garde repertoire.

Before the Dec. 11 performance, there will be a lecture by musicologist David Malvinni. A classical guitarist and author, Malvinni created CAMA's music education program.

Tickets are available online and will go on sale at the box office one hour before the performance.
The short-circuit test is performed to determine the copper losses of a transformer or electric motor.

Transformer
In the short-circuit test, the secondary side of the transformer is short-circuited. The primary side is connected to an adjustable supply, which is increased until the rated current flows through the windings. The meters then read:
ammeter = rated current IN
voltmeter = short-circuit voltage UK
wattmeter = copper losses Pcu
Because of the low applied voltage, the iron losses in the short-circuit test are many times smaller than the copper losses and are therefore neglected. The losses are thus determined by the (copper) resistance of the windings, so that Pkort = Pcu.

Calculation
From the copper losses and the rated current, the short-circuit resistance RK can be determined. This resistance is the sum of the ohmic resistance of the primary winding (Rp) and that of the secondary winding referred to the primary (R's):

RK = Pcu / IN²

From the short-circuit voltage and the rated current, the short-circuit impedance ZK and the short-circuit reactance XK (the total leakage reactance of the primary and secondary sides) can be derived:

ZK = UK / IN
XK = √(ZK² − RK²)

Electric motor
The copper losses of an electric motor can also be determined, by blocking the motor shaft and then slowly increasing the supply until the motor draws its rated current from the mains.
Note: unlike with a transformer, this test must not take too long on an electric motor, because the fan provides no air cooling while the shaft is blocked, and the copper windings may burn out.

See also
No-load test, for determining the no-load losses

Electricity
Electric motor
Transformer
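A worked numerical example of the short-circuit test calculation (the meter readings below are invented for illustration; all quantities are referred to the primary side):

```python
import math

def short_circuit_parameters(u_k, i_n, p_cu):
    """Derive the short-circuit resistance, impedance and reactance
    from the short-circuit test readings:
    u_k  = short-circuit voltage U_K (V)
    i_n  = rated current I_N (A)
    p_cu = measured copper losses P_cu (W)
    """
    r_k = p_cu / i_n**2               # R_K = P_cu / I_N^2
    z_k = u_k / i_n                   # Z_K = U_K / I_N
    x_k = math.sqrt(z_k**2 - r_k**2)  # X_K = sqrt(Z_K^2 - R_K^2)
    return r_k, z_k, x_k

# Hypothetical readings: U_K = 50 V, I_N = 5 A, P_cu = 200 W
r_k, z_k, x_k = short_circuit_parameters(50.0, 5.0, 200.0)
print(r_k, z_k, x_k)  # 8.0 10.0 6.0
```

With these readings the winding resistance is 8 Ω, the short-circuit impedance 10 Ω, and the leakage reactance 6 Ω.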
Attar railway station is a small railway station in Khandwa district, Madhya Pradesh. Its code is ATR. It serves Attar village. The station consists of a single platform, which is not well sheltered and lacks many facilities, including water and sanitation. Recently, gauge conversion started on this line; after conversion, it will connect Indore to South India.

References

Ratlam railway division
Railway stations in Khandwa district
Babken Stepanyan (Armenian: Բաբկեն Ստեփանյան; pseudonym Babken-Maria; born 10 May 1953, Yerevan) is an Armenian painter, architect, ceramist and restorer, the founder and populariser of the movement known as "visual-mathematical conceptualism". He works mainly in his own technique of structural collage. He has lived in Latvia since 1983, has been a member of the Artists' Union of Latvia since 2009, and is an honorary member of the Pushkin Society of Latvia.

Biography
He was born on 10 May 1953 in Yerevan (Armenia), into the family of the civil engineer Vazgen Stepanyan and the accountant Maria Khachaturova. From childhood he showed wide-ranging abilities: he attended a sculpture circle and an art studio, and took up modelling and aircraft modelling. While still a schoolboy, Stepanyan realised that he was drawn to mathematical art. At that time he became acquainted with reproductions of paintings by Maurits Cornelis Escher, which strongly influenced his choice of direction; Stepanyan himself calls Escher his spiritual father. His love of the exact sciences prevailed, however, and in 1970 he graduated from the physics and mathematics school attached to Yerevan State University. Continuing to make art, in 1973 he graduated with distinction from the ceramics department of the M. Saryan Art College in Yerevan. He received his higher education at the architecture faculty of the Yerevan Polytechnic Institute, graduating in 1978; his diploma project, "Bus Station", was supervised by the professor of architecture Grigor Gurzadyan. In Armenia he took part in exhibitions from 1973 and worked in industrial architecture (1978–1981). He is a folk master of the Art Fund of the Armenian SSR. He has lived in Riga since 1983. From 1983 to 1990 he worked at the "Daiļrade" production association (artistic working of amber). From 1989 to 1994 he belonged to the group "Free Art". From 1990 to 1994 he worked as an artist at the Latvian Philharmonic. Since 1993 he has been a member of the Artistic Council of ANKOL. In 2002–2003 he took part in the restoration of the Latvian National Theatre. Since 2005 he has been engaged in the reconstruction and restoration of works by the well-known Latvian sculptor Marta Liepiņa-Skulme. In 2006 one of the artist's works, "The House That Witte Built", was bought to decorate the mansion of the American actor Richard Gere. He is the father of four children: Maria Stepanyan, Vazgen Stepanyan, Martin Stepanyan and Armin-Bagrat Stepanyan.

Structural collage
Over the years no more than ten people worldwide have worked professionally in the technique of structural collage, among them Jiří Kolář, Vyacheslav Koleichuk and Lucas Samaras. Structural collage is a tricky craft and is not grasped by viewers at once: many get the impression that the work was made on a computer, but this is not so. When creating a work in this technique, the image is cut by hand into elements of a defined shape and then assembled, with technical precision, into a single composition. Each detail stands in strict interrelation with its neighbours and obeys a single algorithm conceived in the author's head. The finished work creates an impression of vibration, as if the objects were dissolving into motion. In essence, structural collage is a branch of optical art, founded by the French painter, graphic artist and sculptor Victor Vasarely. Optical art is a comparatively recent phenomenon, and each artist is the founder of his own sub-direction; Babken Stepanyan stands out for his explorations of conveying emotion through the visualisation of colour and form.

Museums, exhibitions, collections
Works in museums and collections
State Museum of Folk Art of Armenia, Yerevan, Armenia.
Sergei Parajanov Museum, Yerevan, Armenia.
Historical-Memorial Museum Complex, Donsk, Russia.
Museum of Modern Art, Tartu, Estonia.
Collection of ANKOL (Association of National Cultural Societies of Latvia), Riga, Latvia.
"ANTEX Gallery", Riga, Latvia.
Baltic International Academy gallery "BiArt", Riga, Latvia.

Solo exhibitions
1986 — "Exhibition of Graphics"; editorial office of the journal Garun, Yerevan, Armenia.
1989 — "Dedicated to the 120th Anniversary of the Composer Komitas"; Wagner Concert Hall, Riga, Latvia.
1991 — "Exhibition of Visual-Mathematical Art"; House of Knowledge, Riga, Latvia.
1993 — "Exhibition of the Avers Gallery Collection"; Kino Gallery, Riga, Latvia.
1994 — "Second Exhibition of Visual-Mathematical Art"; ANKOL, Riga, Latvia.
1997 — "Exhibition of Graphics and Collage"; Ethnographic Museum, Valka, Latvia.
1997 — "Exhibition of Graphics and Collage"; House of Culture, Valga, Estonia.
1997 — "Exhibition of Graphics and Collage"; House of Chamber Music, Tõrva, Estonia.
1998 — "Collection of the Avers Gallery"; "Avers Gallery", Riga, Latvia.
2005 — "Signs of the Zodiac"; "ANTEX Gallery", Riga, Latvia.
2013 — "Jubilee Exhibition of One Work"; "HappyArt" Museum, Riga, Latvia.
2013 — "Autumn"; Baltic International Academy gallery "BiArt", Riga, Latvia.
2017 — "The Visual Fictions of Babken Stepanyan"; Baltic International Academy gallery "BiArt", Riga, Latvia.

Group exhibitions
1989 — "Zīme, simbols, stāvoklis"; "Riga" Kino Gallery, Riga, Latvia.
1991 — "Free Art"; Cultural Centre, Arensberg, Germany.
1991 — "Art Myth 2"; Manege, Moscow, Russia.
1992 — "Jet"; fair, Zurich, Switzerland.
1996 — "Days of Art"; House of Congresses, Riga, Latvia.
1997 — "Dedicated to Ballet"; "Rīgas vīni" gallery, Riga, Latvia.
1998 — "10 Years of the Group 'Free Art'"; "Nellija" gallery, Riga, Latvia.
1998–2004 — "ANTEX Gallery", Riga, Latvia.
2001 — "1700 Years of the Christianisation of Armenia"; House of the Blackheads, Riga, Latvia.
2001 — "Riga 800"; House of Culture of the Jewish Society, Riga, Latvia.
2005 — "In Memory of the 90th Anniversary of the Armenian Genocide"; House of the Blackheads, Riga, Latvia.
2005 — "Dedicated to the 125th Anniversary of M. Saryan"; ANKOL gallery, Riga, Latvia.
2006 — "Dedicated to Riga"; Riga City Council, Riga.
2009 — "Wine and Winemaking"; J. Alunāns Museum, Jelgava.
2011–2014 — "INTRIGA" gallery, Riga.
2001–2021 — Baltic International Academy gallery "BiArt", Riga.

Competition exhibitions
1996 — 4th Annual Riga Exhibition "GEO — ĢEO"; Centre of Contemporary Art in Pedvāle, Latvia.
2009 — "Self-Portrait"; gallery of the Artists' Union of Latvia, Riga.
2009 — "Autumn 2009" (jointly with Marta Skulme); "Museum of Happy Art", Riga.

Awards
Honorary silver medal of the Association of National Cultural Societies of Latvia (ANKOL), 2010.
Award of the "BiArt" gallery in the category "Structural Collage", 2016.
Award of the "Intriga" gallery for organising and taking part in an exhibition of Latvia's Armenian artists, 2013.

Articles
Links
Website with a biography and works by B. Stepanyan
YouTube link to B. Stepanyan's 1993 solo exhibition
Notes
Kausar Munir has penned the song for the film, which is directed by Srijit Mukherji. Begum Jaan is the Hindi remake of the Bengali film Rajkahini. One of the legendary voices of Indian music, Asha Bhosle is again ready to enthrall her fans with a soulful track in Vidya Balan's upcoming film Begum Jaan. Composed by Anu Malik, the song is titled Prem Me Tohre and has a minimalist approach: while mostly heavy strings are used in the background, Asha's voice is given full prominence. It is not exactly a ghazal, but it has elements of one. Malik has intelligently used his resources to give the song a modern outlook while keeping its old-world charm. Begum Jaan is scheduled to hit the screens on April 14, 2017.